This is exactly what I was thinking a day earlier, because I have the exact same problem. Regarding the approach shown in the question, my 2c on this: I believe the only performance difference will stem from using a 128-bit column for the key against, say, a 32- or 64-bit serial.

So while looking around I found this very old PG forum thread, and I like the suffix approach more than the prefix, so the new IDs look like 101 for node 1 (I was having a larger number, like 20M, in mind). But even if I don't see it as a DB performance issue, there is still a logical limit and the assumption that records in the first source will not exceed a million. This gives you a possible 100 nodes (you can always use a 001, 002 suffix for 1,000 nodes).

- Very simple solution, fewer gaps in the PK index.
- Can be adjusted to add any number of nodes.

(By far the simplest and most common technique for adding a primary key in Postgres is using the SERIAL or BIGSERIAL data types when creating a new table.)

This requires transformation when bringing data to the central nodes. I am not much in favour of using some function to generate such PKs, but in our case, since we do have other transformation rules, I think we can handle this.

You can create a sequence and make it global, like below:

```sql
CREATE SEQUENCE seq;
```

Create a view on this sequence:

```sql
CREATE VIEW seq_view AS SELECT nextval('seq') AS a;
```

Now we will be using a foreign data wrapper to access the sequence:

```sql
CREATE EXTENSION postgres_fdw;

CREATE SERVER global_seq FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host '.XX', port '5432', dbname 'mydatabase');

CREATE USER MAPPING FOR PUBLIC SERVER global_seq
    OPTIONS (user 'User', ...);

CREATE FOREIGN TABLE seqtable (a bigint) SERVER global_seq
    OPTIONS (...);
```

Now you can create a function like below to access the sequence for each insert:

```sql
CREATE OR REPLACE FUNCTION public.func() ...
```

Now attach the function to the table:

```sql
CREATE TABLE mytab (...
```

You can follow the same steps with other tables and use the common sequence with them.
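The statements above are truncated in the original post. A complete end-to-end sketch of the same idea might look like the following; the host name, credentials, function body, and the columns of `mytab` are assumptions for illustration, not details taken from the thread:

```sql
-- On the central node: the shared sequence, exposed through a view
CREATE SEQUENCE seq;
CREATE VIEW seq_view AS SELECT nextval('seq') AS a;

-- On each remote node: reach that view through postgres_fdw
CREATE EXTENSION postgres_fdw;

CREATE SERVER global_seq FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'central.example.com',   -- hypothetical central host
             port '5432', dbname 'mydatabase');

CREATE USER MAPPING FOR PUBLIC SERVER global_seq
    OPTIONS (user 'someuser', password 'secret');   -- hypothetical credentials

-- Each SELECT from this foreign table runs the remote view,
-- which in turn calls nextval() on the central sequence
CREATE FOREIGN TABLE seqtable (a bigint) SERVER global_seq
    OPTIONS (table_name 'seq_view');

-- Helper that pulls the next global value over the wrapper
CREATE OR REPLACE FUNCTION public.func() RETURNS bigint AS $$
    SELECT a FROM seqtable;
$$ LANGUAGE sql;

-- Use it as the default for the primary key (columns assumed)
CREATE TABLE mytab (
    id      bigint PRIMARY KEY DEFAULT public.func(),
    payload text
);
```

Note the trade-off in this design: every insert makes a round trip to the central node, so the key is globally unique but inserts depend on that node being reachable.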
This is an old thread, and I think the same question and problem still exist, even more so now that people have all kinds of distributed systems. As for being non-sequential: this is how a live database will end up (to some degree) anyway; as rows get deleted, the auto-assigned sequential numbers get deleted along with them and don't get re-used, leaving gaps.

I have 3 solutions for you:

**Direct reference of sequence and using concat**

One possible solution is to reference the sequence in the insert statement directly and prepend your node-id.

**Using MINVALUE and MAXVALUE**

Another way is to use the MINVALUE and MAXVALUE of a SEQUENCE to define a numeric space:

```sql
CREATE SEQUENCE node100_seq ...
```

**Use a trigger for these solutions**

For both solutions you can use a trigger to set the primary key:

```sql
CREATE OR REPLACE FUNCTION insert_trigger() ...
    NEW.id := '100' || nextval('test_seq')::TEXT;
    ...
    FOR EACH ROW EXECUTE PROCEDURE insert_trigger();
```

If you now insert a value into the table:

```sql
INSERT INTO test (name) VALUES ('test text');
```

you get your primary key '1001'.

Another possible solution, if you don't need your node-id in the primary-key field, is the uuid-ossp extension, which provides the type uuid and the functions to generate UUIDs. A similar question including an answer you can find here:
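Filling in the elided parts of the trigger example, a minimal runnable sketch might look like this; only the '100' prefix, the `test_seq` sequence, and the final INSERT come from the answer, while the table definition and trigger name are assumptions:

```sql
-- Sequence whose values the trigger prefixes with this node's id ('100')
CREATE SEQUENCE test_seq;

-- Hypothetical table; the answer only shows the INSERT against it
CREATE TABLE test (
    id   bigint PRIMARY KEY,
    name text
);

CREATE OR REPLACE FUNCTION insert_trigger() RETURNS trigger AS $$
BEGIN
    -- Prepend the node id, then cast the concatenated text back to bigint
    NEW.id := ('100' || nextval('test_seq')::TEXT)::bigint;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_pk BEFORE INSERT ON test
    FOR EACH ROW EXECUTE PROCEDURE insert_trigger();

-- First insert gets id 1001 ('100' || '1'), the next 1002, and so on:
INSERT INTO test (name) VALUES ('test text');

-- The MINVALUE/MAXVALUE variant instead gives each node its own numeric
-- range, e.g. (bounds assumed) node 100 owning 1001..1999:
-- CREATE SEQUENCE node100_seq MINVALUE 1001 MAXVALUE 1999;
```

The trigger variant keeps the node id visible in the key's digits, while the MINVALUE/MAXVALUE variant needs no trigger at all; with the latter, the sequence simply errors out when a node exhausts its allotted range, which is arguably a safer failure mode than silently colliding.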