When your tools are brought into chat, something interesting happens. Queries become documentation. Deployments become teaching moments. Cost checks become shared context. The Glue + Supabase MCP doesn't just save time switching tabs. It makes your team's database knowledge accessible to everyone, right where decisions are being made. Learn how → https://lnkd.in/eeaSkg8D
How Glue + Supabase MCP enhances team collaboration
🚀 Top 5 common ways to improve API performance every backend developer should know!

📄 𝟏. 𝐏𝐚𝐠𝐢𝐧𝐚𝐭𝐢𝐨𝐧
- When your API returns large datasets (e.g., a list of users, products, or orders), sending everything at once is slow and memory-heavy. Pagination splits results into smaller chunks (pages).
Example: • 𝙶𝙴𝚃 /𝚞𝚜𝚎𝚛𝚜?𝚙𝚊𝚐𝚎=𝟸&𝚕𝚒𝚖𝚒𝚝=𝟸𝟶

🛢️ 𝟐. 𝐂𝐚𝐜𝐡𝐢𝐧𝐠
- Frequently accessed data can be stored in a cache to speed up retrieval. Clients check the cache before querying the database, and in-memory stores like Redis offer much faster access than disk-based lookups.

🗜️ 𝟑. 𝐏𝐚𝐲𝐥𝐨𝐚𝐝 𝐂𝐨𝐦𝐩𝐫𝐞𝐬𝐬𝐢𝐨𝐧
- Before sending a response, the server can compress the data using algorithms like GZIP or Brotli. The client decompresses it automatically.
Benefits:
• Greatly reduces data transfer size (especially for JSON).
• Improves speed over slow networks.
• No change needed on the client side — browsers and HTTP clients handle it.

🔗 𝟒. 𝐂𝐨𝐧𝐧𝐞𝐜𝐭𝐢𝐨𝐧 𝐏𝐨𝐨𝐥
- Each API request often needs to connect to a database or another service. Creating a new connection each time is expensive — it takes time and system resources.
- Connection pooling reuses existing open connections instead of creating new ones for every request.
Benefits:
• Reduces connection overhead.
• Reduces latency and improves throughput.
• Prevents "too many connections" errors under load.

⚡ 𝟓. 𝐀𝐬𝐲𝐧𝐜𝐡𝐫𝐨𝐧𝐨𝐮𝐬 𝐋𝐨𝐠𝐠𝐢𝐧𝐠
- Normally, when your API handles a request, it might write logs (e.g., to a file or database). If this logging happens synchronously, the API waits for the log operation to finish before sending the response — which slows down performance.
- Asynchronous logging runs the logging in the background, so the request can finish faster.
- This approach sends logs to a lock-free buffer and returns immediately, rather than touching the disk on every call. Logs are periodically flushed to disk, significantly reducing I/O overhead.

👉 Over to you: 𝑾𝒉𝒂𝒕 𝒐𝒕𝒉𝒆𝒓 𝒘𝒂𝒚𝒔 𝒅𝒐 𝒚𝒐𝒖 𝒖𝒔𝒆 𝒕𝒐 𝒊𝒎𝒑𝒓𝒐𝒗𝒆 𝑨𝑷𝑰 𝒑𝒆𝒓𝒇𝒐𝒓𝒎𝒂𝒏𝒄𝒆?

(Image credit: ByteByteGo)
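To make the first three ideas concrete, here is a minimal sketch of an Express + TypeScript handler that combines query-parameter pagination, a small in-memory cache, and response compression via the compression middleware. The users array, cache TTL, and route names are illustrative placeholders, not taken from the post above.

```typescript
// Minimal sketch: pagination via query params, a tiny in-memory cache, and
// response compression. The users array, TTL and route are illustrative.
import express from "express";
import compression from "compression";

const app = express();
app.use(compression()); // negotiates compressed responses with the client

// Stand-in "database": 500 fake users.
const users = Array.from({ length: 500 }, (_, i) => ({ id: i + 1, name: `user-${i + 1}` }));

const cache = new Map<string, { body: unknown; expires: number }>();
const CACHE_TTL_MS = 30_000;

app.get("/users", (req, res) => {
  const page = Math.max(1, Number(req.query.page) || 1);
  const limit = Math.min(100, Number(req.query.limit) || 20);

  const key = `users:${page}:${limit}`;
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) {
    return res.json(hit.body); // cache hit: no "database" work at all
  }

  const start = (page - 1) * limit;
  const body = { page, limit, data: users.slice(start, start + limit) };
  cache.set(key, { body, expires: Date.now() + CACHE_TTL_MS });
  res.json(body);
});

app.listen(3000);
```

Calling GET /users?page=2&limit=20 returns the second page of 20 users; a repeat call within 30 seconds is served straight from the cache.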
🚨 𝗔𝘁𝘁𝗲𝗻𝘁𝗶𝗼𝗻: 𝗞𝗗𝗕-𝗫 𝗥𝗲𝗹𝗲𝗮𝘀𝗲 𝗔𝗻𝗻𝗼𝘂𝗻𝗰𝗲𝗺𝗲𝗻𝘁 🚨

We are thrilled to announce the newest release of KDB-X, and 𝗦𝗣𝗢𝗜𝗟𝗘𝗥: it's a good one 👀

🧩 𝗠𝗼𝗱𝘂𝗹𝗲 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁
We're excited to introduce the Modules Framework — a new way to organize, manage, and share code across your KDB-X environment and the wider community. Now available in Public Preview:
• 𝗼𝗯𝗷𝗲𝗰𝘁 𝘀𝘁𝗼𝗿𝗮𝗴𝗲: Integrate qSQL with kdb+ databases stored on S3-compatible object storage.
• 𝗸𝘂𝗿𝗹: Enables synchronous and asynchronous q calls for seamless interaction with web and cloud REST APIs.
• 𝗥𝗘𝗦𝗧: Enables q APIs and functions to be accessed via native HTTP endpoints, simplifying the process of building RESTful APIs in KDB-X.
• 𝗔𝗜: Fast semantic and time-series similarity search across unstructured and structured data.

📊 𝗗𝗮𝘀𝗵𝗯𝗼𝗮𝗿𝗱𝘀 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻
KDB-X now integrates seamlessly with KX Dashboards, enabling users to explore, analyze, and visualize data in real time.

𝗧𝗿𝘆 𝗶𝘁 𝗼𝘂𝘁 𝘁𝗼𝗱𝗮𝘆 𝗮𝘁 https://lnkd.in/eaJs63vK

Let us know your thoughts! 💭 Complete our short feedback survey (https://lnkd.in/e5NYXjTS) or email us directly at preview@kx.com
"without persistence, you can't simulate any operation that mutates data. And those operations are the ones with the highest probability of generating misalignment between API designers and stakeholders." https://lnkd.in/dPQSCcBg
Most RAG systems fail at this simple question: "What's the most common GitHub issue AND what are people saying about it?"

Vanilla RAG follows a simple pattern: query -> retrieve -> generate. It's effective for straightforward question-answering, but struggles when tasks get complex.

Let's say you ask: "What's the most common GitHub issue from last month, and what are people saying about it in our internal chat?" Traditional RAG would try to match your entire query to one knowledge source. It might find something relevant, but probably not exactly what you need.

Agentic RAG works differently:
1. 𝗣𝗹𝗮𝗻𝗻𝗶𝗻𝗴: The agent breaks your query into subtasks (select a tool to query last month's GitHub issues, build a query to fetch the most common one, search internal chat for mentions).
2. 𝗧𝗼𝗼𝗹 𝗨𝘀𝗲: It routes the first part to your GitHub database, gets results, then routes the second part to your chat system using context from the first search.
3. 𝗥𝗲𝗳𝗹𝗲𝗰𝘁𝗶𝗼𝗻: The agent validates the retrieved information and can re-query if something doesn't look right.

This is really promising for complex queries that need multiple data sources or multi-step reasoning.

𝗧𝗵𝗲 𝘁𝗿𝗮𝗱𝗲𝗼𝗳𝗳𝘀: Agentic RAG typically requires multiple LLM calls instead of one, which adds latency and cost. It is also much more complex to develop, deploy, and maintain.

Here's my recommendation: for many use cases, a simple RAG pipeline is sufficient. But if you are dealing with complex queries, response quality matters a lot, and your users can afford to wait a few extra seconds, an Agentic RAG workflow is probably the better fit. The architecture can be simple (a single router agent) or complex (multiple specialized agents coordinating). You can have one agent that retrieves from your internal docs, another that searches the web, and a coordinator that decides which to use.

For more information, my colleagues wrote a very nice blog post about the different Agentic workflows: https://lnkd.in/eS2mFxUF
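As a rough illustration of the plan -> tool use -> reflection loop, here is a hedged TypeScript sketch. callLLM, searchGitHubIssues, and searchChat are placeholder stubs standing in for your model provider and retrievers; they are not real APIs from any framework.

```typescript
// Hedged sketch of the plan -> tool use -> reflection loop described above.
// callLLM, searchGitHubIssues and searchChat are placeholder stubs, not real APIs.
type Doc = { source: "github" | "chat"; text: string };

async function callLLM(prompt: string): Promise<string> {
  return `stubbed answer for: ${prompt.slice(0, 60)}`; // swap in your model provider here
}
async function searchGitHubIssues(query: string): Promise<Doc[]> {
  return [{ source: "github", text: `stub issue matching "${query}"` }]; // swap in GitHub search
}
async function searchChat(query: string): Promise<Doc[]> {
  return [{ source: "chat", text: `stub message matching "${query}"` }]; // swap in chat search
}

async function agenticRag(question: string): Promise<string> {
  // 1. Planning: ask the model to split the question into tool-specific subtasks.
  const plan = await callLLM(`Split into a GitHub query and a chat query:\n${question}`);
  const [githubQuery = question, chatQuery = question] = plan.split("\n");

  // 2. Tool use: run the first retrieval, then feed its result into the second.
  const issues = await searchGitHubIssues(githubQuery);
  const topIssue = issues[0]?.text ?? "";
  const chatter = await searchChat(`${chatQuery} ${topIssue}`);

  // 3. Reflection: ask whether the evidence is enough; re-query once if it is not.
  const verdict = await callLLM(
    `Reply YES if this answers "${question}", otherwise reply with a better chat query:\n` +
      [topIssue, ...chatter.map(d => d.text)].join("\n")
  );
  const extra = verdict.trim().startsWith("YES") ? [] : await searchChat(verdict);

  // Final generation over everything that was retrieved.
  const context = [topIssue, ...chatter.map(d => d.text), ...extra.map(d => d.text)].join("\n");
  return callLLM(`Answer "${question}" using only this context:\n${context}`);
}

agenticRag("What's the most common GitHub issue and what are people saying about it?").then(console.log);
```

In a real system the reflection step is also where you would cap the number of re-queries, so a confused agent cannot loop forever.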
Understanding JSON Schema

Tired of unpredictable data causing havoc in your APIs and integrations? 😫 Imagine a world where every piece of JSON data flowing through your systems is perfectly structured and validated. That's the power of JSON Schema.

It's not just a nice-to-have; it's crucial for validating the structure and types of your JSON documents. JSON Schema ensures data consistency and reliability across all your different systems, which makes it a game-changer for designing robust APIs, streamlining data exchange, and significantly reducing debugging time.

Think of it as a blueprint or a contract for your data. It defines what your JSON should look like, preventing unexpected types, missing fields, or incorrect formats from ever making it into your applications. This means fewer errors, stronger integrations, and more predictable software behavior.

How are you currently ensuring data integrity in your projects? Have you leveraged JSON Schema, or are you exploring other solutions? Share your experiences below! 👇

#JSONSchema #APIDesign #DataValidation #SoftwareDevelopment #TechTips #DataConsistency #WebDevelopment
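For a concrete feel, here is a small hedged example using the Ajv validator, a widely used JSON Schema library; the user schema and payload are made up for illustration.

```typescript
// Hedged sketch using Ajv to enforce a JSON Schema "contract" on incoming data.
// The user schema and payload below are made up for illustration.
import Ajv from "ajv";

const userSchema = {
  type: "object",
  properties: {
    id: { type: "integer" },
    name: { type: "string", minLength: 1 },
    email: { type: "string" },
  },
  required: ["id", "name"],
  additionalProperties: false, // reject unexpected fields outright
};

const ajv = new Ajv();
const validate = ajv.compile(userSchema);

const payload: unknown = JSON.parse('{"id": 1, "name": "Ada"}');

if (validate(payload)) {
  console.log("payload matches the contract:", payload);
} else {
  // Each error names the failing field and the rule it broke.
  console.log("payload rejected:", validate.errors);
}
```

If the payload is missing name or sneaks in an extra field, validate.errors spells out exactly which part of the contract was broken.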
Just wrote this on r/dataengineering: Introducing Open Transformation Specification (OTS) – a portable, executable standard for data transformations https://lnkd.in/dCdse2Gt

Hi everyone, I've spent the last few weeks talking with a friend about the lack of a standard for data transformations. Our conversation started with the Fivetran + dbt merger (and the earlier acquisition of SQLMesh): what alternative tool is out there? And what would make me confident in such a tool?

Since dbt became popular, we can roughly define a transformation as:
- a SELECT statement
- a schema definition (optional, but nice to have)
- some logic for materialization (table, view, incremental)
- data quality tests
- and other elements (semantics, unit tests, etc.)

If we had a standard we could move a transformation from one tool to another, but also have multiple tools work together (interoperability).

Honestly, I initially wanted to start building a tool, but I forced myself to sit down and first write a standard for data transformations. Quickly, I realized the specification also needed to include tests and UDFs (this is my pet peeve with transformation tools: UDFs are part of my transformations).

It's just an initial draft, and I'm sure it's missing a lot. But it's open, and I'd love to get your feedback to make it better. I am also building my own open source tool, but that is another story.
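To make the shape of such a transformation concrete, here is a purely illustrative TypeScript type. This is not the actual OTS format, just the elements listed above written down as fields of a hypothetical spec.

```typescript
// Illustrative only: NOT the actual OTS format. This just writes down the
// elements listed in the post as a TypeScript type, to make the shape concrete.
type Materialization = "table" | "view" | "incremental";

interface TransformationSpec {
  name: string;
  select: string;                                   // the SELECT statement
  schema?: { column: string; type: string }[];      // optional schema definition
  materialization: Materialization;                 // how the result is materialized
  tests?: { name: string; sql: string }[];          // data quality tests
  udfs?: { name: string; language: string; body: string }[]; // user-defined functions
}

// A hypothetical instance:
const dailyOrders: TransformationSpec = {
  name: "daily_orders",
  select: "SELECT order_date, count(*) AS orders FROM raw.orders GROUP BY order_date",
  materialization: "incremental",
  tests: [
    { name: "order_date_not_null", sql: "SELECT 1 FROM daily_orders WHERE order_date IS NULL" },
  ],
};
```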
Following up on my last post about the Open Transformation Specification (OTS), I'm humbled by the interest and questions I've received! A few of you even asked about a community. I'm continuing to refine and detail the OTS standard. To ensure its robustness, I'd greatly value the input of wiser heads. Is there a community to discuss ideas about OTS? Not yet, but we can build it. For now I have created a Discord server (as usual, link in the comments) https://lnkd.in/d2i2CtDr
A To-Do List API is more than just CRUD. It’s a chance to implement real-world features like user authentication and data persistence while honing your backend skills. Test your skills and build a RESTful API to allow users to manage their to-do list. 🗒️
🚀 Built a clean and minimal REST API to sharpen my backend fundamentals!

Kept it lightweight with a simple in-memory array as the data store — perfect for getting hands-on with real CRUD workflows before bringing in a full database.

Tech I leveraged:
✨ Express.js for routing
✨ body-parser for handling request bodies
✨ nodemon for auto-reload
✨ Postman for API testing

Endpoints implemented:
✔ GET – fetch users
✔ POST – add users
✔ PUT – update users
✔ DELETE – remove users

This mini-build helped me double down on:
👉 How routes actually work under the hood
👉 The difference between req.body, req.query, and req.params
👉 Real PUT/DELETE behavior during testing
👉 Why middleware order matters

Keeping it simple, scalable, and future-ready. Database integration coming soon. 💪🔥
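Here is a hedged sketch of the kind of in-memory CRUD API described above, mainly to show the req.body / req.query / req.params distinction; the routes and field names are my own placeholders, not the author's actual code.

```typescript
// Hedged sketch of an in-memory CRUD API with Express and body-parser.
// Routes and fields are illustrative placeholders.
import express from "express";
import bodyParser from "body-parser";

type User = { id: number; name: string };

const app = express();
app.use(bodyParser.json()); // parses JSON request bodies into req.body

let users: User[] = [{ id: 1, name: "Ada" }];
let nextId = 2;

// req.query: optional filters in the URL, e.g. GET /users?name=Ada
app.get("/users", (req, res) => {
  const { name } = req.query;
  res.json(name ? users.filter(u => u.name === name) : users);
});

// req.body: the JSON payload of the POST
app.post("/users", (req, res) => {
  const user: User = { id: nextId++, name: req.body.name };
  users.push(user);
  res.status(201).json(user);
});

// req.params: path segments, e.g. PUT /users/1
app.put("/users/:id", (req, res) => {
  const user = users.find(u => u.id === Number(req.params.id));
  if (!user) return res.status(404).json({ error: "not found" });
  user.name = req.body.name;
  res.json(user);
});

app.delete("/users/:id", (req, res) => {
  users = users.filter(u => u.id !== Number(req.params.id));
  res.status(204).end();
});

app.listen(3000);
```

Note that bodyParser.json() must be registered before the routes, which is exactly why middleware order matters.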
16 API Terms You Must Know

→ Resource: The fundamental concept in REST, representing a piece of data or a service.
→ Request: A call made to a server to access a resource.
→ Response: The data sent back from the server to the client.
→ Response Code: Indicates the status of an HTTP request, like 404 Not Found.
→ Payload: Data sent within a request or response.
→ Pagination: The process of dividing response data into discrete pages.
→ Method: The HTTP actions such as GET, POST, PUT, DELETE.
→ Query Parameters: Data appended to the URL to refine searches.
→ Authentication: The verification of a user's identity.
→ Rate Limiting: Restricting the number of requests a user can make.
→ API Integration: Connecting various services using APIs.
→ API Gateway: A service that provides a single entry point for APIs.
→ API Lifecycle: The phases of API development and retirement.
→ CRUD: An acronym for create, read, update, delete.
→ Cache: Temporary storage to speed up data retrieval.
→ Client: The device or program that requests data from a server.

What API term surprised you the most?

#backenddevelopment #softwaredevelopment #api
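A tiny hedged example that ties several of these terms to a concrete HTTP exchange (method, query parameters, payload, authentication, response codes); the base URL and token are placeholders.

```typescript
// A concrete request touching several of the terms above.
// The base URL and token are hypothetical placeholders.
const API = "https://api.example.com";

async function demo() {
  // Method GET + query parameters for pagination.
  const list = await fetch(`${API}/orders?page=2&limit=20`);
  console.log("response code:", list.status); // e.g. 200 OK, 404 Not Found, 429 (rate limiting)

  // Method POST + JSON payload in the request body, with an auth header.
  const created = await fetch(`${API}/orders`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: "Bearer <token>" }, // authentication
    body: JSON.stringify({ sku: "ABC-123", qty: 1 }), // payload
  });
  console.log("created:", created.status); // 201 if the resource was created
}

demo();
```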