Performance issues while passing UDTT[] to postgres Function

user2811 Published in April 24, 2018, 6:27 am

I have created a function in Postgres that takes a UDTT[] as an input parameter, with the goal of eventually inserting that data into a table.

Example UDTT:

create type udtt_mytype as 
(
  id uuid,
  payload int
);

And then an example function is something akin to:

CREATE OR REPLACE FUNCTION dbo.p_dothething(p_import udtt_mytype[])
RETURNS void
LANGUAGE plpgsql
AS $function$
BEGIN
  INSERT INTO mytab SELECT * FROM unnest(p_import);
  RETURN;
END;
$function$;
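For reference, the function can be exercised directly from SQL with an array-of-composite literal. This is just a hypothetical smoke test; it assumes a table `mytab` whose columns match `udtt_mytype`, and the UUIDs are placeholders:

```sql
-- Assumes mytab(id uuid, payload int) exists
SELECT dbo.p_dothething(ARRAY[
    ROW('00000000-0000-0000-0000-000000000001'::uuid, 1)::udtt_mytype,
    ROW('00000000-0000-0000-0000-000000000002'::uuid, 2)::udtt_mytype
]);
```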

My C# backend presently looks like:

public class udtt_mytype
{
    [PgName("id")]
    public Guid id { get; set; }
    [PgName("payload")]
    public int payload { get; set; }
}
var payload = CreateAndFillUdttMyType();

var conn = new NpgsqlConnection();
conn.Open();
var transaction = conn.BeginTransaction();
conn.MapComposite<udtt_mytype>("udtt_mytype");

var command = new NpgsqlCommand("dbo.p_dothething", conn);
command.CommandType = CommandType.StoredProcedure;
command.Parameters.Add(new NpgsqlParameter
{
    ParameterName = "p_import",
    Value = payload,
    NpgsqlDbType = NpgsqlTypes.NpgsqlDbType.Array |
        NpgsqlTypes.NpgsqlDbType.Composite
});

var result = command.ExecuteScalar();
transaction.Commit();
conn.Close();

While the above works, it is pretty non-performant compared to a similar UDTT -> SQL stored procedure. Prior to our Npgsql implementation, this took under 1 second, but now I seem to be seeing about 6 seconds per 6,000 rows (and the common usages for this involve much larger row counts than that).

Using some timestamping and returning from the SP, I see that the processing of the data inside the function isn't the issue at all; it appears to be entirely transfer time of the payload. In this particular case it's a simple array of udtt_mytype's: with a single object, execution is instantaneous, but with 6,000 it's up in the 6-7 second range. And this performance persists even if I pass the data off to an empty function (removing the cost of the unnest/insert).

In reality, udtt_mytype has 12 columns of various types, but we are still talking about a relatively 'shallow' object.

I have attempted to compare this to Npgsql's documentation on bulk copy (found here: http://www.npgsql.org/doc/copy.html), but that implementation seemed to be even slower than this, which seems contradictory.
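For comparison, the binary COPY path from that documentation would look roughly like the sketch below. It bulk-loads the rows straight into the target table, bypassing the composite-array parameter entirely; the column list, types, and the `conn`/`payload` variables are assumptions carried over from the snippets above:

```csharp
// Sketch only: binary COPY into mytab, assuming mytab(id uuid, payload int)
// and an open NpgsqlConnection `conn` plus the `payload` collection from above.
using (var importer = conn.BeginBinaryImport(
    "COPY mytab (id, payload) FROM STDIN (FORMAT BINARY)"))
{
    foreach (var row in payload)
    {
        importer.StartRow();
        importer.Write(row.id, NpgsqlTypes.NpgsqlDbType.Uuid);
        importer.Write(row.payload, NpgsqlTypes.NpgsqlDbType.Integer);
    }
    // On Npgsql 4.0+ the import must be completed explicitly;
    // on earlier versions it commits when the importer is disposed.
    importer.Complete();
}
```

In principle this avoids the per-row composite encoding of the array parameter, so if it benchmarks slower than the UDTT[] call, that would point at something other than the wire format (network round-trips, server config, or measurement artifacts).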

Is Postgres typically this much slower than MSSQL, or is there something that may be limiting the transfer rate of the data that I'm not aware of? Obviously no one can speak for my network connectivity/hardware setup, but for anyone who has converted between the two, was a performance difference seen on this same scale?
