🚀 From Code to Clicks: Integrating Page Scripting with AL-Go CI/CD

Ever wished your Business Central CI/CD pipeline could actually think like a user? Not just compile AL code… but open a Sales Invoice page, post it, and verify it works — automatically?

That’s now possible. With Microsoft’s new Page Scripting Tool and the @microsoft/bc-replay package, Business Central pipelines can finally replay real UI interactions — and I’ve documented the full setup.

💡 In my latest blog, I show how I:
- Bootstrapped an AL-Go repo and added bc-replay
- Recorded a real Sales Invoice posting flow
- Integrated bc-replay as a post-CI workflow
- Watched GitHub Actions simulate a Business Central user step-by-step

👉 Read it here: https://lnkd.in/dfFqh7zE
🔗 Includes full YAML workflow, setup guide, and screenshots.

If you’re building or maintaining Business Central extensions, this changes how you validate releases forever.

#BusinessCentral #ALGo #DevOps #bcReplay #MicrosoftDynamics365 #Automation #CI_CD #GitHubActions #ALDevelopment
How to automate Business Central CI/CD with Page Scripting
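For context, a post-CI replay job along these lines can be chained onto an AL-Go pipeline. This is a minimal sketch, not the blog's actual workflow: the triggering workflow name, the sandbox URL variable, and the bc-replay flags are assumptions to verify against the @microsoft/bc-replay README before use.

```yaml
name: UI Replay Tests
on:
  workflow_run:
    workflows: ["CI/CD"]        # assumed name of the AL-Go CI/CD workflow
    types: [completed]

jobs:
  replay:
    # Only replay the recordings when the CI/CD build succeeded
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install bc-replay
        run: npm install @microsoft/bc-replay
      - name: Replay recorded Sales Invoice flow
        # Flag names follow my reading of the bc-replay README and may differ
        # in your version; BC_SANDBOX_URL is a placeholder repository variable.
        run: >
          npx replay -Tests "recordings/*.yml"
          -StartAddress "${{ vars.BC_SANDBOX_URL }}"
          -Authentication UserPassword
```

The `workflow_run` trigger keeps UI replay out of the main build, so a flaky browser session never blocks compilation or unit tests.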
Atlassian Forge CLI just got smarter with AI-assisted error debugging.

I've been testing the new Forge assistant command (still experimental), and it's changing how I troubleshoot Forge app development.

What it does: When you hit errors during Forge CLI operations, the assistant sends error details to Rovo or Gemini for instant analysis. You get context-aware explanations and suggested fixes right in your terminal.

The use case: Forge development involves complex APIs, manifest configurations, and deployment workflows. Errors can be cryptic. Instead of switching to Stack Overflow or the docs, you get immediate help understanding what went wrong and how to fix it.

How teams are using it:
➡️ Junior developers get faster onboarding without constant interruptions to senior team members
➡️ Complex DDL operation failures get decoded without manual TiDB doc searches
➡️ Deployment failures get diagnosed with specific environment context

How to enable it:
For Rovo: forge assistant on rovo
For Gemini: forge assistant on gemini
To disable: forge assistant off

My experience: I recently hit a DDL operation error while setting up Forge SQL migrations. The error was vague about why my CREATE TABLE statement failed. The assistant immediately flagged that I was using AUTO_INCREMENT, which Forge SQL discourages due to hotspot issues on large datasets. It explained why AUTO_RANDOM is preferred and provided the correct syntax for my migrationRunner enqueue operation. Saved me from digging through TiDB compatibility docs and multiple failed deployments.

Which AI to choose:
➡️ Use Rovo if you're already in the Atlassian ecosystem and want responses tuned to Atlassian products
➡️ Use Gemini if you prefer Google's model or need broader technical context

Still experimental, but worth enabling on your next Forge project.
Full docs: https://lnkd.in/eqggQG6t #Atlassian #Forge #ForgeCLI #AtlassianDeveloper #DeveloperTools #AI #Rovo #Gemini #ForgeSQL #DevOps #SoftwareDevelopment #CloudDevelopment #TechTips #DeveloperProductivity
Last month we shipped updates that automate test triage, streamline merge queue builds, and add NuGet support for .NET teams (and a whole lot more): https://lnkd.in/gNbKPRcJ

TL;DR:
- Test Engine workflows automatically identify problematic tests and create issues in Linear.
- Buildkite now handles GitHub merge queues natively, creating and canceling builds automatically based on merge group status.
- Package Registries supports NuGet for .NET teams, offers flexible storage options, and integrates more easily with third-party systems.

More big updates are on the way 💚
Solving the SpecKit Update Problem: Preserving Customizations

I'm excited to release an open-source Claude Code skill that addresses a critical pain point for teams using GitHub's SpecKit framework.

The Challenge: SpecKit is an excellent tool for specification-driven development, but keeping templates up to date has been problematic. The standard update approach (specify init --force) overwrites all customizations, forcing teams to choose between:
- Staying on outdated templates
- Manually merging changes (time-consuming and error-prone)
- Losing valuable customizations

The Solution: I've developed a Claude Code skill that makes SpecKit updates safe, automated, and reversible:
🔍 Smart Detection - Uses normalized file hashing to distinguish customizations from official templates
🔀 Conflict Resolution - Integrates with VSCode's 3-way merge editor for guided conflict resolution
📦 Automatic Backups - Creates timestamped backups with retention management
⚡ Fail-Fast Safety - Automatically rolls back on any error
📊 Version Tracking - Maintains a manifest of installed versions and file states

Technical Implementation:
- 6 PowerShell 7+ modules with clear separation of concerns
- 7 specialized helper functions for workflow orchestration
- Comprehensive test suite (132 passing tests with Pester 5.x)
- CI/CD pipeline with GitHub Actions
- Full specification documentation and implementation plan

Key Features:
✅ Dry-run mode to preview changes
✅ Selective updates preserving customizations
✅ Custom commands never overwritten
✅ Integration with the GitHub Releases API
✅ Context-aware UI (VSCode or terminal)

Use Case: Ideal for development teams using:
- Specification-driven development practices
- Claude Code for AI-assisted development
- SpecKit for project structure and workflow management

Get Started:
Repository: https://lnkd.in/gBmR_tfT
Installation is straightforward: clone to your Claude Code skills directory and restart VSCode. Contributions and feedback are welcome!
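For reference, the GitHub Actions side of a Pester-tested PowerShell project like this one is fairly compact. A minimal sketch, assuming a `tests` directory and Pester 5.x (neither is confirmed by the repo, so adjust paths to its actual layout):

```yaml
name: Test
on: [push, pull_request]

jobs:
  pester:
    runs-on: windows-latest   # hosted runners ship PowerShell 7+ as pwsh
    steps:
      - uses: actions/checkout@v4
      - name: Run Pester 5.x suite
        shell: pwsh
        run: |
          # -Force skips the untrusted-repository prompt on hosted runners
          Install-Module Pester -MinimumVersion 5.0 -Force -Scope CurrentUser
          # -CI sets a nonzero exit code on failures and emits test results
          Invoke-Pester -Path ./tests -CI
```

The `-CI` switch is what makes the job fail the pull request when any of the tests fail, rather than just logging the failures.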
If your team uses SpecKit, I'd love to hear about your update workflow challenges. #SoftwareDevelopment #DevOps #OpenSource #ClaudeCode #SpecificationDriven #PowerShell #DeveloperTools
Deploying used to drain me.

Finish a feature → merge → build the Docker image → push → migrate the DB → deploy → pray nothing breaks. It felt like every small update required a mini-ritual.

I knew CI/CD could automate all of this…
… I just didn’t trust myself to set it up properly.

Most of the pipelines I built were:
• Copy-paste
• AI-generated
• Trial-and-error
• Anything but confidence-inspiring

Then GitHub Actions changed everything. Here’s what finally made sense:
• Scripts → orchestration
• Repeated steps → reusable actions
• Complex YAML → clear logic
• Limitations → custom JavaScript actions

I used to deploy by habit. Now I deploy by design.
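That manual ritual maps almost one-to-one onto workflow steps. A minimal sketch of the same sequence (the registry, script paths, and image naming here are placeholders for illustration, not anyone's actual pipeline):

```yaml
name: Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      packages: write   # allow pushing to GitHub Container Registry
    steps:
      - uses: actions/checkout@v4
      - name: Build and push Docker image
        run: |
          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker build -t ghcr.io/${{ github.repository }}:${{ github.sha }} .
          docker push ghcr.io/${{ github.repository }}:${{ github.sha }}
      - name: Migrate the DB        # placeholder: swap in your migration tool
        run: ./scripts/migrate.sh
      - name: Deploy                # placeholder: your deploy mechanism here
        run: ./scripts/deploy.sh ${{ github.sha }}
```

Each step that fails stops the run, so "pray nothing breaks" becomes "the pipeline halts before a half-finished deploy."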
MCP and agentic workflows are hot topics, but what do they actually mean for your day-to-day? We’re starting a blog series where we dive into how Glue connects to your team’s favorite tools through MCP. First up: GitHub.

Here’s some of what this integration can enable ⬇️
🌟 Search across all your repos
🌟 Find functions, classes, or design patterns
🌟 Query documentation

All with natural language and without leaving the team chat. Get the details: https://lnkd.in/gJhy9D4i
𝗦𝘂𝗽𝗲𝗿𝗰𝗵𝗮𝗿𝗴𝗲 𝗬𝗼𝘂𝗿 𝗪𝗼𝗿𝗸𝗳𝗹𝗼𝘄 𝘄𝗶𝘁𝗵 𝗚𝗶𝘁𝗛𝘂𝗯 𝗔𝗰𝘁𝗶𝗼𝗻𝘀 𝗔𝘂𝘁𝗼-𝗙𝗶𝘅𝗲𝗿! 🚀🔧✨

Agents that read failing GitHub workflow runs and propose fixes streamline the process of identifying and resolving issues. By automatically suggesting fixes for common problems (lint errors, misconfigured steps, broken workflow files), these tools save developers time and effort.

𝗪𝗵𝘆 𝗗𝗼𝗲𝘀 𝗜𝘁 𝗠𝗮𝘁𝘁𝗲𝗿? 🤔
- The links below show real-world examples of auto-fixing applied to workflows, from markdownlint rules to reusable GitHub Actions patterns.
- They illustrate how automation can streamline development processes, improving both productivity and code quality.
- Together they make the case for building auto-remediation into workflow management for smoother, more efficient development cycles.

𝗜𝗺𝗽𝗼𝗿𝘁𝗮𝗻𝘁 𝗟𝗶𝗻𝗸𝘀 🔗:
Option to auto-fix rules? · Issue #80 · DavidAnson/markdownlint | https://lnkd.in/dYafm5JJ
Finding and Auto-fixing Lint Errors with GitHub Actions | https://lnkd.in/dd-cAwp5
Warning or auto repair for flat bottom when simplifying model · Issue | https://lnkd.in/dXxQN8ee
My Reusable GitHub Actions Workflows | https://lnkd.in/dAxp5MYY
wearerequired/lint-action | https://lnkd.in/dWf6-EA2

#GitHubActions #Automation #DevOps #Productivity #CodingTools
Building a Bulletproof Test Automation Framework: My Go-To Stack

For an Automation Tester, the goal isn't just to write scripts; it's to build a robust, scalable, and maintainable automation framework. The tools you choose are the foundation of that structure. A well-architected framework saves time, provides reliable feedback, and accelerates release cycles. Here is a powerful, modern stack for building a comprehensive test automation solution:

- Core Automation Library: Playwright. Why? While Selenium is the classic, Playwright is the future. Developed by Microsoft, it offers native auto-waits, which eliminate the vast majority of flaky tests caused by timing issues. Its ability to handle multiple tabs, contexts, and even emulate network conditions within a single test is incredibly powerful. The unified API for Chromium, Firefox, and WebKit makes true cross-browser testing seamless and efficient.

- Programming Language & Test Runner: TypeScript + Jest. Why? Pairing Playwright with TypeScript brings the benefits of static typing to your test code, catching errors during development rather than at runtime. This makes your framework more robust and easier to refactor. Jest, as a test runner, provides a fantastic developer experience with parallel test execution, a built-in assertion library, and powerful mocking capabilities. The combination is fast, reliable, and highly scalable.

- CI/CD Integration: GitHub Actions. Why? Your automation suite provides maximum value when it runs automatically as part of the development pipeline. GitHub Actions allows you to trigger your test suite on every pull request or merge to the main branch. You can configure it to run tests in parallel across different operating systems, generate reports, and post results directly to Slack or other communication tools. This creates a tight feedback loop, enabling developers to catch regressions instantly.

This stack provides a solid foundation for end-to-end, API, and component testing.
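The CI/CD piece of this stack can be sketched in a few lines. A minimal pull-request workflow, assuming the Playwright Test runner drives the suite (swap the test command if Jest is your runner, as the stack above suggests):

```yaml
name: E2E Tests
on: [pull_request]

jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Installs Chromium, Firefox, and WebKit plus their OS dependencies
      - run: npx playwright install --with-deps
      - run: npx playwright test
      - uses: actions/upload-artifact@v4
        if: always()    # keep the HTML report even when tests fail
        with:
          name: playwright-report
          path: playwright-report/
```

Adding a `strategy.matrix` over operating systems or browsers is the usual next step for the parallel cross-platform runs described above.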
What does your ideal automation stack look like in 2024? #TestAutomation #AutomationTester #Playwright #TypeScript #Jest #GitHubActions #CI/CD #SoftwareDevelopment #QAAutomation #E2ETesting
Tired of digging through endless console logs to troubleshoot your Jenkins pipelines? 🤔 There's a better way! Follow me for more helpful content on LinkedIn.

ELK STACK TO THE RESCUE
Integrating the ELK (Elasticsearch, Logstash, Kibana) stack into your Jenkins pipeline can revolutionize your workflow. Imagine having all your logs centralized, searchable, and visualized in real time! 🤯 No more SSHing into different servers or manually parsing logs. With ELK, you get a powerful, centralized logging system that collects and stores all your data in one place, making it easy to analyze your logs in real time. 🕵️‍♀️

BENEFITS UNLOCKED
✨ Centralized Logging: Aggregate logs from all your Jenkins nodes into a single, searchable interface.
✨ Real-Time Insights: Visualize your build data with interactive dashboards in Kibana.
✨ Faster Troubleshooting: Quickly identify and resolve pipeline issues by searching and filtering logs.
✨ Improved Visibility: Gain a comprehensive overview of your CI/CD pipeline's health and performance.

Ready to level up your Jenkins game? Give the ELK stack a try and say goodbye to logging headaches for good! 💻👍

#jenkins #devops #cicd #elkstack #elasticsearch #logstash #kibana #automation #logging #monitoring #continuousintegration #continuousdelivery #softwaredevelopment #developer #followme #techtips
🚀 Introducing Yam++ (Yet Another Modern Task Runner)

I'm excited to share Yam++, a modern task runner I've been building that brings concurrent execution, cross-platform support, and a clean DSL to your development workflow.

What makes Yam++ different?
✅ Concurrent by default - Parallel task execution with automatic dependency resolution
✅ True cross-platform - Native bash/PowerShell/cmd support (no adapters needed)
✅ Smart caching - Skip unchanged tasks automatically
✅ Interactive tasks - Built-in prompts and user input
✅ Plugin architecture - Extensible with TypeScript plugins
✅ Zero config - Just create a Yamfile and go

Quick example:

build watches "src/**/*.ts" {
  echo "Building..."
  npm run compile
}

test needs build {
  npm test
}

deploy(env) {
  echo "Deploying to $env..."
  ./deploy.sh --env=$env
}

Then simply run:

npm install -g @yampp/yampp
yampp build test

Why I built this: Task runners should be simple, fast, and work everywhere. Yam++ combines the simplicity of Make with modern features like lifecycle hooks, execution profiles, and intelligent file watching.

Perfect for:
- Monorepo builds
- CI/CD pipelines
- Local development workflows
- Cross-platform projects

The project is open source (MIT license) and ready for testing (test it before going to prod). Would love to hear your feedback!

📦 Try it: npm install -g @yampp/yampp
🔗 GitHub: https://lnkd.in/drsgDyss
📚 npm: https://lnkd.in/dH2UthKr

#OpenSource #DevTools #TaskRunner #TypeScript #Automation #SoftwareDevelopment
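For the CI/CD use case listed in the post, dropping Yam++ into a workflow could look like the sketch below. The task names come from the post's own Yamfile example; everything else is an assumption to verify against the project's docs:

```yaml
name: CI
on: [push]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install Yam++
        run: npm install -g @yampp/yampp
      - name: Run build and test tasks
        # `test needs build` in the Yamfile means yampp resolves the
        # dependency order itself, mirroring `yampp build test` from the post.
        run: yampp build test
```

The same Yamfile then drives both local development and CI, which is the cross-platform pitch in a nutshell.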
🚀 Releasing: Nice Specs – an open-source VS Code extension bringing documentation-first architecture to your codebase.

No servers, no third-party risk: local stores handle keyword and contextual search. It uses the code agents you've already installed in VS Code and already paid for, so there's no extra cost of hosting an LLM.

With coding agents becoming the de facto tool for writing code, it's important to control hallucination and to make it easier for reviewers to get through large PRs. It's also essential to document code components so coding agents have better context for future iterations. Nice Specs creates documentation from the source, keeps it in the repo, and makes it incrementally maintainable.

If you're working with large codebases, onboarding new contributors, or working with code agents, this may help.

🔗 Check it out here: https://lnkd.in/gnBEmEFt

⭐ What Nice Specs Does
It introduces a new doc-focused agent (@nicespecs) that generates folder-level architecture docs, component relationships, and root-level system overviews using your actual code as the source of truth.

A few highlights:
Documentation Guardrails: It only responds to documentation-related prompts, keeping it focused and predictable.
Smart Workspace Traversal: Understands your project structure, skips noise, and organizes the codebase into "components."
Automatic Markdown Specs: Produces clean nicespecs.*.md files inside each folder, co-located with the code they describe.
Parent/Child Architecture Summaries: Builds a living component tree across the repo.
Incremental Updates: Only regenerates docs for changed components, saving tokens, time, and attention.
Resilience & Transparency: Resume mid-run, track progress, estimate token cost before executing.

All of this is built with readability in mind, for both humans and AI agents. Open to critique, feedback, and collaboration.

This is just an initial phase in a transition toward documentation-based, LLM-driven software development and maintenance. There is much more that can be built on top of it, including use cases in code review and automated testing of components.