
I have a query that runs perfectly for a small number of records. However, if I run a query against a large number of records, it returns no output. I suspect it is because I am not using async/await properly.

Here is the code for my class, with the exception of the actual connection string:

sql.js

class SQL {
 
    get connectionString() { return 'postgres://user:pass@server:port/db'; }

    async queryFieldValue(query) {
        const pgs = require('pg');
        const R = require('rambda');
        const client = new pgs.Client(this.connectionString);
        await client.connect();
        await client.query(query).then(res => {
            const result = R.head(R.values(R.head(res.rows)));
            console.log("The Result is: " + result);
        }).finally(() => client.end());
    }
}
export default new SQL();

Any help is appreciated =)

1 Answer

Well, your usage of async/await is incorrect, but I don't think that's why small queries return results and large ones don't. When working with Promises, stick to either async/await or chained resolution methods (.then/.finally); try not to mix the two.

const pgs = require('pg');
const R = require('rambda');

class SQL {
  get connectionString() { return 'postgres://user:pass@server:port/db'; }

  async queryFieldValue(query) {
    // Create the client here rather than in a getter: a getter that returns
    // `new pgs.Client(...)` hands back a different instance on every access,
    // so connect(), query() and end() would each hit a separate client.
    const client = new pgs.Client(this.connectionString);
    try {
      await client.connect();
      const { rows } = await client.query(query);

      const result = R.head(R.values(R.head(rows)));
      console.log("The Result is: " + result);
    } catch (e) {
      console.log('Some error: ', e);
    } finally {
      await client.end();
    }
  }
}

export default new SQL();

Preferences on code style aside, the above is a cleaner usage of async/await without blending in chained resolvers.

As for the actual problem you're having: based on your code, you're only logging the first column value from the first row returned, so maybe just slap a LIMIT on there? I imagine you're trying to do something a little more involved with the resulting rows than just logging that one value; additional information would help. I also think you might be swallowing an error by using that .finally with no .catch, but that's a guess.
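To illustrate that last guess, here is a minimal sketch of how a rejection travels through a .then/.finally chain with no .catch, versus an explicit try/catch/finally. The `failingQuery` function is a hypothetical stand-in for a `client.query` call that fails:

```javascript
// Hypothetical stand-in for a client.query call that rejects.
const failingQuery = () => Promise.reject(new Error('query failed'));

// Pattern from the question: .then/.finally with no .catch. The rejection
// skips the .then, runs the .finally cleanup, then rejects the returned
// promise; if no caller handles it, Node reports an unhandled rejection
// and the expected console.log never runs.
async function queryWithoutCatch() {
  await failingQuery()
    .then(res => console.log('The Result is: ' + res))
    .finally(() => { /* client.end() would go here */ });
}

// Pattern from the answer: try/catch/finally surfaces the error explicitly,
// and the finally block runs on both the success and the failure path.
async function queryWithCatch() {
  try {
    await failingQuery();
    return 'ok';
  } catch (e) {
    return 'Some error: ' + e.message;
  } finally {
    /* client.end() would go here */
  }
}

queryWithoutCatch().catch(e => console.log('caller saw: ' + e.message));
queryWithCatch().then(msg => console.log(msg));
```

So the error isn't lost inside the function itself; it rejects the promise that queryFieldValue returns, and it only vanishes if the caller never awaits or catches that promise.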


2 Comments

Hi Eddie, thank you for responding and imparting your knowledge. I implemented your code; it runs without errors and is certainly much cleaner (thank you). However, the issue seems to still exist. I'm keeping the query very simple, but the amount of time it takes the database to return info from this table is around 30 seconds. This is the actual query: SELECT instrument_class_code FROM solr.metadata WHERE datastream = 'acxaerich1M1.b1' LIMIT 1; -- your thoughts? =)
Oh, this is rough. Hrmmm, if your PG db has a query timeout set to limit this, that could do it, but 30 seconds would be pretty short for a default timeout. If you log the output of rows before passing it off to rambda, what do you get?
