🗂️ Day 3 (Building in public: OpsFlow) - Authorization: The Part Most Developers Get Wrong

Built the task management core today. Projects, tasks, members. Standard CRUD stuff. Except it's not.

The real challenge: not "can users create tasks?" but "can THIS user modify THIS specific task?"

Here's where most apps fail:

❌ Authorization in middleware:
`app.use('/tasks', verifyToken, taskRoutes)`
Problem: every task route needs different rules. Updating needs different rules than deleting.

✅ Authorization in the service layer:

```typescript
async deleteTask(taskId: string, userId: string) {
  const task = await Task.findById(taskId);
  if (!task) {
    // Don't reveal whether the task exists - treat it like a denied request
    throw new AuthorizationError();
  }

  const project = await Project.findById(task.project);

  // Resource-level check, not route-level
  if (!project.hasAccess(userId)) {
    throw new AuthorizationError();
  }

  // Action-level check
  if (task.createdBy !== userId && !project.isOwner(userId)) {
    throw new AuthorizationError("Only creator or owner can delete");
  }

  // All checks passed - actually delete the task
  await task.deleteOne();
}
```

Why this scales:
1. Business rules stay in one place
2. Can be tested independently
3. Flexible per-resource rules
4. Clear error messages

The pattern I followed (a minimal policy sketch follows below):
• Project owners can do everything
• Members can view + create tasks
• Task creators can edit their tasks
• Owners override all permissions

Also added:
• Pagination (page, limit, total) for list endpoints
• Filtering (status, priority, assignee)
• MongoDB indexes on query fields
• Proper population of related data

What I learned: security isn't a feature you add. It's a constraint you design around from day one.

Middleware is for "who are you?" The service layer is for "what can you do?"

For juniors: start with "deny by default" - explicitly allow what's needed, not the reverse.

How do you handle resource-level authorization in your stack?

#SoftwareArchitecture #Authorization #NodeJS #APIDesign #Security
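One way to make the deny-by-default permission matrix above concrete is a small policy helper. This is a hedged sketch, not OpsFlow's actual code: the `can` function, the `Action` names, and the `PolicyContext` flags are hypothetical, and in practice the flags would be derived from project/task lookups like the ones in `deleteTask`.

```typescript
// Hypothetical policy helper - deny by default, allow explicitly.
type Action = "task:view" | "task:create" | "task:edit" | "task:delete";

interface PolicyContext {
  isOwner: boolean;   // user owns the project
  isMember: boolean;  // user is a project member
  isCreator: boolean; // user created this specific task
}

function can(action: Action, ctx: PolicyContext): boolean {
  // Owners override all permissions
  if (ctx.isOwner) return true;

  switch (action) {
    case "task:view":
    case "task:create":
      return ctx.isMember;  // members can view + create tasks
    case "task:edit":
    case "task:delete":
      return ctx.isCreator; // creators can edit/delete their own tasks
    default:
      return false;         // everything else: deny by default
  }
}

// Example usage inside a service method:
// if (!can("task:delete", { isOwner, isMember, isCreator })) {
//   throw new AuthorizationError();
// }
```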
More Relevant Posts
A 6-Minute API Response Almost Killed Our CRM

Last week, I debugged the worst performance issue I've seen in production.

A developer built a notifications feature that worked perfectly in development with 50 test records. The code passed testing. The PR got approved. It shipped to production. Then users started complaining. The app was taking 6 minutes to load after login.

What I found: The backend was fetching every notification from the database without any filters. All tenants. All users. 2.3 million records total. The entire dataset was being sent to the frontend as a 47MB JSON payload, then filtered client-side in Angular.

The database was returning millions of rows. The server was serializing all of it. The network was transferring 47MB. The browser was parsing it all. Then JavaScript was filtering it down to the 30 notifications the user actually needed.

The fix: Backend filtering by user and tenant. Scroll-based pagination returning 30 records per request. Proper database indexing. Redis caching layer.

Result: 6 minutes became 380 milliseconds. 47MB became 12KB.

The real problem: This never should have reached production. The code review failed. Nobody asked what happens with thousands of users. Nobody checked the SQL query being generated. Nobody tested with realistic data volumes. Nobody measured the response size.

What I learned: Code review is not a formality. It's the last defense against production disasters.

Every developer must review their own code first. Ask what happens at scale. Test with production-sized datasets. Measure actual performance metrics.

Tech leads must do real reviews. Read the code. Run it locally. Check the database queries. Question the architecture decisions.

A 15-minute code review would have prevented a 6-hour production outage and hundreds of angry support tickets.

Backend filtering is mandatory. Pagination is essential. Performance testing is not optional. Code review matters.

The difference between working code and good code is asking the hard questions before deployment.

#SoftwareEngineering #CodeReview #ASPNETCore #Angular #WebDevelopment #PerformanceOptimization #BestPractices #CleanCode #TechLead #SoftwareDevelopment #Programming #FullStackDeveloper #API #DatabaseOptimization #ProductionIssues #TechLeadership
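For illustration, here is a minimal sketch of the kind of fix described above: server-side filtering by tenant and user plus cursor-based pagination with a capped page size. The original system is ASP.NET Core; this sketch uses TypeScript with Express and node-postgres purely to show the shape of the query, and the table, column, and header names are assumptions.

```typescript
// Hypothetical handler: filter and paginate in the database, not the browser.
import express from "express";
import { Pool } from "pg";

const app = express();
const pool = new Pool(); // connection settings from environment variables

app.get("/notifications", async (req, res) => {
  const limit = Math.min(Number(req.query.limit) || 30, 100); // cap page size
  const cursor = req.query.before ? new Date(String(req.query.before)) : new Date();

  // Tenant/user identifiers would normally come from the auth context;
  // headers are used here only to keep the sketch short.
  const { rows } = await pool.query(
    `SELECT id, title, created_at
       FROM notifications
      WHERE tenant_id = $1 AND user_id = $2 AND created_at < $3
      ORDER BY created_at DESC
      LIMIT $4`,
    [req.header("x-tenant-id"), req.header("x-user-id"), cursor, limit]
  );

  // Return a small page plus a cursor for scroll-based pagination
  res.json({ items: rows, nextCursor: rows.at(-1)?.created_at ?? null });
});
```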
As a Java-based API developer, I often see teams struggling with one recurring question: should we build our next API with REST or GraphQL? Both sound promising, but the real answer depends on what you value more: simplicity or flexibility.

Modern applications demand speed, scalability, and clean data handling. Yet when APIs become complex, teams either over-fetch data or end up making endless endpoint calls that slow everything down.

Option 1: REST API
The traditional choice: simple, reliable, and well-documented. Perfect when your data requirements are predictable and resources can be modeled clearly. It keeps things organised and is ideal for applications where stability and caching matter most.

Option 2: GraphQL API
A modern approach: flexible, efficient, and client-driven. It allows fetching exactly what's needed, reducing payloads and network requests. It shines when front-end teams have diverse data needs or when multiple clients consume the same backend differently.

So, when should you choose what?
Choose REST when you want consistency, simplicity, and faster onboarding for large teams.
Choose GraphQL when you need agility, client-specific data responses, and minimal over-fetching.

In short: REST gives you structure. GraphQL gives you control. The right choice isn't about trends; it's about aligning the API design with the problem you're solving.

Comment your choice and why; let's make this a deep conversation. Got a scenario where you're unsure what to pick? Let me know in the comments.
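As a rough illustration of the over-fetching trade-off, compare the same screen loaded via REST and via GraphQL. The endpoints, field names, and schema below are invented for the example, not taken from any real project.

```typescript
// REST: two round trips, and each response carries every field the resource exposes.
async function loadOrderSummaryRest(orderId: string) {
  const order = await fetch(`/api/orders/${orderId}`).then(r => r.json());
  const customer = await fetch(`/api/customers/${order.customerId}`).then(r => r.json());
  return { total: order.total, customerName: customer.name };
}

// GraphQL: one request that names exactly the fields the screen needs.
async function loadOrderSummaryGraphQL(orderId: string) {
  const query = `
    query ($id: ID!) {
      order(id: $id) {
        total
        customer { name }
      }
    }`;
  const res = await fetch("/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { id: orderId } }),
  });
  const { data } = await res.json();
  return { total: data.order.total, customerName: data.order.customer.name };
}
```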
Add measurable performance goals to your development lifecycle. This article shows how to log execution time, memory consumption and database queries from Artisan. https://lnkd.in/eyU5bXG7 #laravel #metricstracking #developer #benchmarking #codewolfy
I'm happy to share another project I recently built for developers: 𝗦𝘆𝗻𝘁𝗮𝘅𝗶𝗹𝗶𝘁𝗬 𝗡𝗲𝘅𝘁 𝗔𝗽𝗽 𝗖𝗟𝗜 — 𝗧𝗵𝗲 𝗨𝗹𝘁𝗶𝗺𝗮𝘁𝗲 𝗡𝗲𝘅𝘁.𝗷𝘀 𝗦𝘁𝗮𝗿𝘁𝗲𝗿 𝗳𝗼𝗿 𝗠𝗼𝗱𝗲𝗿𝗻 𝗦𝗮𝗮𝗦 & 𝗔𝗜 𝗣𝗹𝗮𝘁𝗳𝗼𝗿𝗺𝘀.

The 𝗦𝘆𝗻𝘁𝗮𝘅𝗶𝗹𝗶𝘁𝗬 𝗡𝗲𝘅𝘁 𝗔𝗽𝗽 𝗖𝗟𝗜 empowers developers to bootstrap scalable, production-ready Next.js 14+ applications in seconds using:

𝙣𝙥𝙭 𝙘𝙧𝙚𝙖𝙩𝙚-𝙨𝙮𝙣𝙩𝙖𝙭𝙞𝙡𝙞𝙩𝙮-𝙣𝙚𝙭𝙩-𝙖𝙥𝙥@𝙡𝙖𝙩𝙚𝙨𝙩

This CLI generates a feature-rich, enterprise-grade architecture with authentication, theming, database ORM, API utilities, and state management, all preconfigured and ready for immediate development. It's designed to eliminate repetitive boilerplate setup so teams can focus on innovation, not initialization.

𝗞𝗲𝘆 𝗙𝗲𝗮𝘁𝘂𝗿𝗲𝘀:
• Authentication: Built-in Clerk integration with OAuth, email, and social logins.
• Theming System: Dark/light theme persistence powered by Skiper UI and HeroUI.
• State Management: Lightweight global state management using Zustand with TypeScript support.
• Database Ready: Integrated Prisma ORM setup for PostgreSQL, easily customizable for other databases.
• API Framework: Unified API response handlers and data provider abstractions for UI integration.
• Middleware + Proxy: Route protection, server proxy handling, and secure server-side communication.
• Folder Architecture: Clean, modular structure designed for scalability and maintainability.
• Environment System: .env preconfiguration for backend services, API keys, and AI integrations.

𝗢𝘃𝗲𝗿𝗰𝗼𝗺𝗶𝗻𝗴 𝗖𝗼𝗺𝗽𝗹𝗲𝘅𝗶𝘁𝗶𝗲𝘀:
• Authentication and user management
• UI theming and component libraries
• ORM and database schema setup
• State management wiring
• Secure middleware and API routing

𝗚𝗶𝘁𝗛𝘂𝗯: https://lnkd.in/d9Fchc-T
𝗻𝗽𝗺: https://lnkd.in/dBxHPdde
𝗻𝗽𝘅: npx create-syntaxility-next-app@latest my-app
A To-Do List API is more than just CRUD. It’s a chance to implement real-world features like user authentication and data persistence while honing your backend skills. Test your skills and build a RESTful API to allow users to manage their to-do list. 🗒️
HNG Tech 🚀 Just Built A RESTful API with External API Integration!

I recently completed a backend challenge that pushed me to level up my Node.js skills. Here's what I built:

✨ The Project: A clean REST API endpoint that serves my profile information combined with real-time cat facts from an external API. Simple concept, but packed with learning!

🛠️ What I Built:
- GET /me endpoint returning JSON data
- Live integration with Cat Facts API
- Dynamic timestamps (ISO 8601 format)
- Graceful error handling with fallback strategies
- Full CORS support and request logging

💡 Key Challenges & Solutions:

1️⃣ External API Reliability
Problem: What if the cat facts API goes down?
Solution: Implemented timeout handling (5s) and fallback messages. The app never breaks - it just returns a default fact.

2️⃣ Data Freshness
Challenge: Ensure both timestamp and cat fact update on EVERY request.
Solution: No caching anywhere. Each request triggers a fresh API call and timestamp generation.

3️⃣ Code Organization
Why it matters: Messy code = maintenance nightmare.
My approach: MVC architecture with separate layers for routes, controllers, and services. Each file has one job.

🏗️ Tech Stack:
- Node.js & Express.js
- Axios for HTTP requests
- Deployed on Microsoft Azure App Service
- GitHub for version control

📊 What This Taught Me:
✅ External API integration isn't just about making requests - it's about handling failures gracefully
✅ Error handling is as important as the happy path
✅ Modular code = easier testing & scaling
✅ Environment variables are your friend in production
✅ Deployment is a skill in itself (learned about Azure's App Service, environment configuration, and CI/CD)

🔗 Try it yourself: https://lnkd.in/dVtjREi4
💻 Source code: https://lnkd.in/dqU9xaEg

The best part? This project forced me to think like a backend engineer - not just "does it work?" but "what happens when things go wrong?"

What's your biggest lesson from a recent project? Drop it in the comments! 👇

#BackendDevelopment #NodeJS #API #JavaScript #WebDevelopment #SoftwareEngineering #Azure #TechLearning #CodingJourney #LearningInPublic
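A hedged sketch of what the timeout-plus-fallback pattern described above can look like with Express and Axios. This is not the author's source code; the catfact.ninja URL, the profile fields, and the fallback message are assumptions for illustration.

```typescript
import express from "express";
import axios from "axios";

const app = express();
const FALLBACK_FACT = "Cats sleep for roughly 12 to 16 hours a day."; // placeholder default

async function fetchCatFact(): Promise<string> {
  try {
    // 5-second timeout so a slow upstream can't hang the request
    const { data } = await axios.get("https://catfact.ninja/fact", { timeout: 5000 });
    return data.fact;
  } catch {
    // Graceful degradation: the endpoint still responds if the upstream is down
    return FALLBACK_FACT;
  }
}

app.get("/me", async (_req, res) => {
  res.json({
    name: "Jane Doe",                    // placeholder profile data
    stack: ["Node.js", "Express"],
    timestamp: new Date().toISOString(), // fresh ISO 8601 timestamp per request
    fact: await fetchCatFact(),          // fresh fact per request, no caching
  });
});

app.listen(3000);
```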
🧱 𝐁𝐮𝐢𝐥𝐭 𝐚 𝐅𝐮𝐥𝐥-𝐒𝐭𝐚𝐜𝐤 𝐓𝐫𝐚𝐯𝐞𝐥 & 𝐀𝐜𝐜𝐨𝐦𝐦𝐨𝐝𝐚𝐭𝐢𝐨𝐧 𝐏𝐥𝐚𝐭𝐟𝐨𝐫𝐦 — “𝐁𝐧𝐛𝐥𝐢𝐬𝐬”

Recently completed a full-stack project built using MongoDB, Express, Node.js, and EJS: a travel and accommodation platform where users can upload and explore stays.

I didn't just follow tutorials. I first understood how each part works (routing, sessions, middleware, and data flow) and then implemented everything step by step.

Here's what I built and handled myself:
🧭 Search functionality with live suggestion dropdowns
🗂️ Filtering listings based on categories
🗺️ Map integration using Leaflet.js for location visualization
📸 Image upload & storage with Cloudinary + Multer
🔐 Authentication and session management using Passport.js
🌐 Deployment on Render

Through this project, I got hands-on with:
RESTful routing
MVC structure
Express middleware and error handling
Working with environment variables
Connecting backend logic to UI templates
Handling real-world issues like validation, flash messages, and async errors

You can check the deployed version here 👇
🔗 https://lnkd.in/gcDXXd8a
And the codebase here 👇
💻 https://lnkd.in/gWyyzJ2h
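As an example of what a live-suggestions search endpoint like the one mentioned above might look like on the backend, here is a hedged TypeScript/Mongoose sketch. The Listing model, field names, and route path are assumptions, not the Bnbliss code.

```typescript
import express from "express";
import mongoose from "mongoose";

// Hypothetical model - the real schema would have more fields
const Listing = mongoose.model(
  "Listing",
  new mongoose.Schema({ title: String, location: String })
);

const router = express.Router();

router.get("/search/suggest", async (req, res) => {
  const q = String(req.query.q ?? "").trim();
  if (!q) return res.json([]);

  // Escape regex metacharacters, then do a case-insensitive match
  const safe = q.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const matches = await Listing.find({
    $or: [
      { title: { $regex: safe, $options: "i" } },
      { location: { $regex: safe, $options: "i" } },
    ],
  })
    .limit(8)                   // keep the dropdown small
    .select("title location");  // only the fields the dropdown needs

  res.json(matches);
});

export default router;
```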
⚙️ Day 18 — AddScoped vs AddTransient vs AddSingleton (DI Lifetimes in .NET)

Dependency Injection is one of the most powerful features in .NET, but understanding service lifetimes is the real key to writing clean and efficient backend code. Here's the simplest explanation 👇

🔁 1️⃣ AddTransient — New Instance Every Time
The service is created every time it's requested.
Use it for:
✔ Lightweight, stateless services
✔ Operations where each call should be independent

builder.Services.AddTransient<IEmailService, EmailService>();

Every request → new EmailService()

🔄 2️⃣ AddScoped — One per HTTP Request
Created once per request (per API call). If multiple components use it during the same request, they all get the same instance.
Use it for:
✔ Database operations (EF Core DbContext)
✔ Services tied to a single API request

builder.Services.AddScoped<IUserService, UserService>();

🔒 3️⃣ AddSingleton — One Instance for the Entire App
Created only once, when the application starts, and reused everywhere.
Use it for:
✔ Shared data
✔ Caching
✔ Configuration services
✔ Services that don't depend on user-specific data

builder.Services.AddSingleton<ILogService, LogService>();

App starts → one LogService()
All users, all requests share the same instance

⚖️ How to Choose?
-> Transient: Short operations, stateless logic
-> Scoped: API request-based logic, database work
-> Singleton: Global state, caching, configuration

Understanding these three lifetimes helps you avoid memory leaks, race conditions, and unexpected behavior in your API, and makes your architecture more predictable and professional.

#dotnet #csharp #dependencyinjection #aspnetcore #backenddevelopment
𝐌𝐨𝐬𝐭 𝐀𝐏𝐈 𝐩𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐧𝐜𝐞 𝐩𝐫𝐨𝐛𝐥𝐞𝐦𝐬 𝐚𝐫𝐞𝐧’𝐭 𝐢𝐧 𝐲𝐨𝐮𝐫 𝐂# 𝐜𝐨𝐝𝐞... 𝐁𝐮𝐭 𝐭𝐡𝐞𝐲’𝐫𝐞 𝐡𝐢𝐝𝐢𝐧𝐠 𝐢𝐧 𝐲𝐨𝐮𝐫 𝐝𝐚𝐭𝐚𝐛𝐚𝐬𝐞 𝐪𝐮𝐞𝐫𝐢𝐞𝐬.

After refactoring to explicit loading, batching reads with Include(), and caching static lookups in memory, the average request time dropped to 0.21 seconds. No hardware upgrade. Just clean code + awareness.

The client's reaction: “𝐈𝐭 𝐟𝐞𝐞𝐥𝐬 𝐥𝐢𝐤𝐞 𝐚 𝐧𝐞𝐰 𝐬𝐲𝐬𝐭𝐞𝐦.”

That's when I realized performance isn't a tool; it's a discipline.

A few weeks ago, I worked on a .NET Core project where API response times averaged 1.2 seconds per call. The team wanted to scale the server, but scaling slow code just gives you expensive slow code.

I opened Application Insights and found the real culprits:
🔹 EF Core's lazy loading
🔹 Repeated joins on large tables
🔹 Missing .AsNoTracking() for read-only data

If you're working with .NET APIs:
✅ Always profile before optimizing
✅ Use .AsNoTracking() where writes aren't needed
✅ Cache where data doesn't change frequently
✅ Remember: Fast code is elegant code

I love fine-tuning .NET systems to perform like Ferraris, not forklifts. If you're building or maintaining enterprise-grade apps and struggling with performance, let's connect.

#DotNet #CSharp #BackendDevelopment #PerformanceTuning #EntityFramework #SoftwareEngineering #API #DOTNETAPI #Optimization #SoftwareDeveloperExpert #SAASSoftwareDeveloper #DotNetCore #AngularDeveloper #AzureDeveloper #AutomationEngineer #AIChatbotDeveloper #MuhammadAtharSaleem #SpargusSolutions
🚀 Project: Expense Tracker Application 💰
🔗 GitHub Repository: https://lnkd.in/d9aAzR74

I'm excited to share my latest project: a full-stack Expense Tracker Application that helps users manage and visualize their personal finances efficiently.

🧩 Tech Stack Highlights

Back-end:
⚙️ Spring Boot – core framework for building a production-ready backend
💾 Spring Data JPA – for efficient CRUD operations
🔐 Spring Security + JWT – secure authentication using stateless JWTs stored in HttpOnly cookies
🗄️ Liquibase – database schema versioning and migration management

Front-end:
⚛️ React – component-based UI for a smooth user experience
🎨 Material UI (MUI) – professional, responsive design
📊 Chart.js – interactive visualizations for expense insights

Infrastructure & Deployment:
🐳 Dockerized both backend and frontend, with images pushed to Docker Hub
☁️ Deployed on AWS EKS (Elastic Kubernetes Service) for container orchestration
🧠 AWS RDS (PostgreSQL) as the managed database

🌟 Key Features
Secure user authentication and authorization
Full CRUD operations for managing expenses
Advanced filtering & sorting using RSQL
Interactive dashboard with line, pie, and bar charts summarizing expenses

🎥 I've also created a walk-through video explaining the project and included screenshots showcasing the UI.

This project gave me end-to-end exposure, from back-end API design and secure authentication to front-end development and cloud deployment on AWS.

💬 I'd love to hear your thoughts or feedback!

#SpringBoot #React #AWS #Kubernetes #Docker #FullStackDevelopment #DevOps #Java #PostgreSQL #SoftwareEngineering #CloudComputing #ProjectShowcase