
I have some AWS Lambda functions that run about twenty thousand times a day, so I would like to enable logging/alerting to monitor all the errors and exceptions.

The CloudWatch logs are too noisy, and it is difficult to spot the errors. I am now planning to write the logs to an AWS S3 bucket instead, but this will have an impact on performance.

What's the best way you suggest to log and alert the errors?

  • I use Elasticsearch and the native AWS plugin to stream CloudWatch Logs into Elasticsearch; you can apply specific format filters as well. – Commented Apr 4, 2018 at 16:31

2 Answers


An alternative would be to leave everything as it is (from the application's perspective) and look at Amazon CloudWatch Logs metric filters.

You use metric filters to search for and match terms, phrases, or values in your log events. When a metric filter finds one of the terms, phrases, or values in your log events, you can increment the value of a CloudWatch metric.
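A metric filter is easiest to exploit when every failure emits one predictable token. A minimal handler sketch (the `process` stub and the JSON field names are hypothetical, not from the question):

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def process(event):
    # Stand-in for real business logic (hypothetical)
    if "id" not in event:
        raise KeyError("id")
    return {"id": event["id"]}

def handler(event, context):
    try:
        return {"statusCode": 200, "body": json.dumps(process(event))}
    except Exception:
        # Emit exactly one structured line containing the literal token
        # ERROR per failure, so a metric filter on "ERROR" matches it
        # and nothing else in the (noisy) log stream.
        logger.error(json.dumps({"level": "ERROR", "input": repr(event)}),
                     exc_info=True)
        raise
```

Because the marker token appears once per failure, the metric incremented by the filter counts failures directly rather than log volume.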

Once you have defined your filter, you can create a CloudWatch alarm on the metric and get notified as soon as your defined threshold is reached :-)
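Wiring the filter and alarm together could look roughly like this boto3 sketch. The log group name, metric namespace, and SNS topic ARN are placeholders, and the calls need AWS credentials to actually run, so treat it as an infrastructure config sketch:

```python
import boto3

# Placeholder names/ARNs, not from the original post
LOG_GROUP = "/aws/lambda/my-function"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:lambda-errors"

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count every log event containing the literal token ERROR
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="lambda-error-count",
    filterPattern="ERROR",
    metricTransformations=[{
        "metricName": "LambdaErrors",
        "metricNamespace": "MyApp",
        "metricValue": "1",
    }],
)

# Notify the SNS topic as soon as at least one error is counted
# within a five-minute window
cloudwatch.put_metric_alarm(
    AlarmName="lambda-error-alarm",
    Namespace="MyApp",
    MetricName="LambdaErrors",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC_ARN],
)
```

`TreatMissingData="notBreaching"` keeps the alarm quiet during periods with no log events at all, which is usually what you want for an error counter.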

Edit

I didn't check the link from @Renato Gama, sorry. Just follow the instructions behind the link and your problem should be solved easily...




If you have not tried this already, I suggest creating CloudWatch alarms based on custom metric filters; have a look here: https://www.opsgenie.com/blog/2014/08/how-to-use-cloudwatch-to-generate-alerts-from-logs

(Of course you don't have to use the OpsGenie service suggested in the linked post; you can implement anything that will help you debug the problems.)
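For example, the alarm's SNS notification can be handled by a second, tiny Lambda that forwards a summary wherever you like (chat webhook, pager, email). The event shape below is the standard SNS-to-Lambda payload; the function name and summary format are illustrative:

```python
import json

def alarm_notifier(event, context):
    """Hypothetical Lambda subscribed to the alarm's SNS topic."""
    summaries = []
    for record in event["Records"]:
        # SNS delivers the CloudWatch alarm as a JSON string in the body
        message = json.loads(record["Sns"]["Message"])
        summaries.append(
            f'{message["AlarmName"]} is {message["NewStateValue"]}: '
            f'{message["NewStateReason"]}'
        )
    # Forward summaries to your notification channel of choice here
    return summaries
```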

