Implementing a One-Dimensional Array Using Only Stacks in C++

Today, as an exercise for my DSA knowledge, I implemented a one-dimensional array using only stacks. The idea comes from a classic theoretical question: how can you simulate array indexing using nothing but stack operations (push, pop, top)?

By using two stacks — one for storage and one as an auxiliary structure — I recreated array-like random-access behavior behind a clean C++ interface. It was a fun way to revisit fundamentals and rethink how basic structures can be built from simpler ones.

You can check out the full code and explanation here:
🔗 GitHub: https://lnkd.in/dDfGEbtw

Always enjoyable building things from first principles!
When I started exploring modern data engineering tools, I kept running into outdated examples: deprecated Flink configs, old Iceberg syntax, broken Compose files. Versions change quickly, but tutorials don’t.

So I built something I wish I had sooner: a collection of Claude Code Skills for data engineers, using Anthropic’s new Skills feature.

Includes:
• Apache Iceberg - snapshots, time travel, partition evolution
• Apache Paimon - streaming lakehouse with LSM compaction
• Lance - columnar format for ML & vector search
• Apache Fluss - sub-second streaming storage
• Docker Compose V2 - clean syntax, healthchecks, resource limits

Some example prompts:
@iceberg/iceberg.md help me design a partition strategy
@paimon/paimon.md create a CDC pipeline from MySQL

Available here: https://lnkd.in/eX7mb4vy
**Congratulations! AI can now generate the 10,000 SQL files your database immediately throws in the garbage. This is called “progress.”**

https://lnkd.in/eUyFp5dG

We spent 50 years teaching databases to generate code from metadata. Then GitHub convinced everyone that handcrafting 847 identical SQL files is “engineering.” Then we trained AI on this stupid pattern. Now AI generates MORE handcrafted SQL.

**Timeline of collective amnesia:**

**1976:** Databases invent information_schema. Problem solved forever.

**2006:** “Infrastructure as Code” works for stateless servers. Someone applies it to databases that ALREADY GENERATE THEIR OWN CODE. Nobody stops them.

**2016:** dbt launches. 1,000 handcrafted SQL files! Your database extracts metadata and THROWS YOUR CODE AWAY. You celebrate this as “modern data stack.”

**2020:** AI trained on GitHub. Sees handcrafted SQL, learns to generate MORE handcrafted SQL. Doesn’t see the metadata systems running every database for 40 years.

**2025:** You’re in a PR review changing 50 files. Your database regenerates all DDL from metadata in 0.3 seconds. The irony is lost on you.

**What actually happens when you run CREATE TABLE:**

1. Parse your artisanal SQL
2. Extract metadata
3. Store metadata
4. **THROW YOUR CODE IN THE TRASH**
5. Generate new DDL from metadata when asked

Your code was a disposable interface. The database kept metadata, discarded your code like a used napkin. But you’re storing that napkin in GitHub and training AI to generate more napkins.

**The numbers:**
- Your way: 3 weeks to handcraft 50 SQL files
- Database way since 1976: 30 seconds to generate from metadata

Every production database—Oracle, Postgres, MySQL, Snowflake—generates code from metadata. Has for decades. Powers millions of apps.

**AI never learned this. Because it’s not in GitHub.** GitHub showed AI the workaround, not the solution running the entire digital economy since 1976.
Click below to watch an industry forget what it knew, then train AI on the amnesia. #DataEngineering #GitHubBrokeOurBrains #MetadataIsSource
💻 Excited to share my first C project — a File Handling Based Database System!

I’ve developed this project completely in C using multiple source files — main.c, insert.c, delete.c, find.c, print.c, sort.c, save.c, and sync.c — along with a Makefile for easy compilation. It performs operations like insert, delete, find, sort, print, save, and sync, and manages records efficiently using file handling techniques.

🧠 Through this project, I learned about:
• Data storage and file management in C
• Modular programming using multiple .c and .h files
• Automation of compilation using a Makefile

✨ The best part — you don’t need to download any files to run it! Just open my GitHub repository link, go to the “Codespaces” tab, and click “Create codespace on main” — it will open in an online environment, ready to build and run with just:

make
make run

🔗 GitHub Repository: https://lnkd.in/gDWBeGpe

I’ll soon be sharing more projects based on Data Structures — including Singly Linked Lists, Doubly Linked Lists, and more!

#CProgramming #FileHandling #Database #GitHub #Codespaces #DataStructures #LearningByBuilding #EngineeringStudents #CodingJourney
Thrilled to announce the v1.0.0 release of itchcpp, my new open-source, high-performance NASDAQ ITCH 5.0 parser!

A few months ago, I built a parser for this data feed in Python. While it worked, the performance limitations were clear. For anyone working with market data, speed isn't just a feature; it's a fundamental requirement. This inspired me to go back to the drawing board and re-engineer the solution in C++20 to achieve the performance that professional financial applications demand.

itchcpp is a modern C++20 library designed from the ground up for maximum speed, minimal memory overhead, and type safety. It's built for latency-sensitive financial applications, market data analysis, and quantitative research where every nanosecond counts.

Key Highlights:
🚀 Blazing-Fast Performance: Achieves multi-gigabyte-per-second parsing speeds on modern hardware, making it ideal for processing massive datasets.
🧠 Zero-Allocation Core: The critical parsing loop performs zero dynamic memory allocations, eliminating jitter and ensuring predictable, low-latency performance.
🛡️ Modern & Type-Safe C++20: Uses std::variant to represent all ITCH messages, providing compile-time safety and preventing entire classes of bugs. No more unsafe unions or void* pointers.
🔧 Flexible & Powerful API:
• Use the callback-based parser for memory-efficient streaming of huge files.
• Parse an entire file into a std::vector for convenience.
• Filter messages by type at the source to process only the data you need.
✅ Cross-Platform & Zero Dependencies: Fully compatible with Linux, macOS, and Windows with no external runtime dependencies, making integration a breeze.

🚀 What's on the Horizon?
The journey doesn't stop at v1.0.0. The next major milestone is to build a high-performance Limit Order Book (LOB) directly on top of the parser, transforming itchcpp into a more comprehensive market analysis toolkit. To make integration seamless, I'll also be working on making the library available through popular package managers like vcpkg and Conan.

This project was a fantastic journey into low-level performance optimization, modern C++ architecture, and the intricacies of financial data protocols. The goal was to create a tool that is not only exceptionally fast but also robust, correct, and a pleasure to use for developers.

I've included comprehensive documentation, usage examples, and benchmarks in the repository. Whether you're a quantitative analyst, an HFT developer, or a C++ enthusiast interested in high-performance computing, I'd love for you to check it out!

GitHub Repo: https://lnkd.in/e3rnCCP2

Your feedback, stars ⭐, and contributions are more than welcome!

#FinTech #CPP #Cpp20 #HighFrequencyTrading #HFT #QuantitativeFinance #MarketData #NASDAQ #OpenSource #Performance #LowLatency #SoftwareDevelopment #GitHub
Headline: 📢 Say goodbye to tedious DB operations! Introducing db_package – your new best friend for [Mention Key Feature, e.g., simplified SQL, easy connection management, faster querying].

Body: Tired of boilerplate code when dealing with databases? We built db_package to make [Mention Goal, e.g., connecting, querying, migrating] painless. It supports [Mention supported DBs, e.g., MySQL, PostgreSQL, SQLite] and comes packed with features like [Feature 1] and [Feature 2].

Call to Action: Give it a star and start using it today! Let me know what you think in the comments! 👇

Hashtags: #Database #OpenSource #DeveloperTools #Programming #db_package

Link: https://lnkd.in/g4ZYpeVv
Everyone can be a software developer, but few become software engineers. Data Structures and Algorithms are not just for learning; they let us pick the right tool for the problem and write optimized, clean code. Some people learn Data Structures and Algorithms only to clear interviews, not realizing how much easier they would make their life as an engineer, because development and engineering are totally different things. This type of content should be shared across the platform so that even a beginner who is new to LinkedIn can find these resources and learn to be their best.
🚀 BCA Student @ Jamia Millia Islamia | Aspiring Full-Stack Developer (MERN) | DSA Enthusiast, solving problems daily 💻
🚀 Day 25 of my #100Daysofcode challenge - Implemented Linked List from Scratch in C++

Today I went back to the DSA fundamentals and implemented a Linked List completely from scratch using C++ ⚙️

🔗 GitHub Repository: https://lnkd.in/eQxXHT9a

Here’s what I built 👇

🧱 What’s Inside the Code:
Node class → represents an element with data and next pointer
List class → handles all the operations like
🔹 pushFront() & pushBack() — add elements to head/tail
🔹 insertNode() — insert at a specific position
🔹 popFront() & popBack() — delete nodes efficiently
🔹 search() — find element by value
🔹 printLinkedList() — visualize list elements
🔹 createLink_ListfromArray() — build linked list directly from an array

💡 Concepts Strengthened:
✅ Dynamic memory allocation (new, delete)
✅ Pointer manipulation
✅ Traversal, insertion, and deletion logic
✅ Object-oriented design (encapsulation and class-based structuring)

📚 Why I did this:
Revisiting data structures helps reinforce problem-solving foundations. Understanding how a Linked List actually works internally gives me better control while tackling more advanced topics like stacks, queues, and graphs.

✨ Next Steps:
Planning to extend this into Doubly Linked List and Circular Linked List implementations.
Looking for the fastest way to learn ADBC (Apache Arrow Database Connectivity)? Check out the ADBC Quickstarts repo for simple runnable examples in C++, Go, Python, R, and Rust: https://lnkd.in/ey3KwSS9
Announcing DBMD for VSCode! 🚀

Embed DuckDB and SQLite queries directly in your markdown files—execute and preview results in real time in VSCode.

How it works:
1. Install the DBMD extension in VSCode.
2. Create a markdown document.
3. Add frontmatter and embed SQL in the document.
4. Preview the document — see the combined formatted output of markdown + SQL in VSCode's native markdown preview.

Key features:
• SQLite & DuckDB support
• VSCode native preview integration
• Cross-platform
• Theme-aware rendering

Perfect for embedding DuckDB and SQLite queries in markdown documentation, analysis notebooks, and quick database exploration.

🔗 VS Code Marketplace: https://lnkd.in/gfWTrAfN
🔗 GitHub: https://lnkd.in/gimDSCfg

#DuckDB #sqlite #vscode #markdown