
This report serves as the definitive April 2026 companion piece to Kelly Edinger’s mid-2024 session, "Build an HR Copilot Agent." The Microsoft Copilot Studio platform has undergone profound architectural transformations since that recording. As a senior consultant updating a colleague, my goal is to bridge the gap between what you watched and how we execute these builds today.
The original session demonstrated manual topic orchestration and separate application lifecycle management. Today, the platform favors generative orchestration, embedded solution management, and native tool calling. We rely on large language models (LLMs) to act as central planners rather than building rigid decision trees. This guide translates Kelly's exact demonstrations into current 2026 best practices.
What's Changed Since This Session
The landscape of Microsoft Copilot Studio has matured significantly over the last two years. The terminology, underlying architectural patterns, and licensing models have evolved to support enterprise-scale generative AI. The following table provides a factual summary of these critical shifts.
| Technology | Status in 2026 | What Replaced It (if applicable) |
|---|---|---|
| "Bots" or "Copilots" | Deprecated terminology | "Agents" |
| "Actions" or "Plugins" | Rebranded and expanded | "Tools" |
| Trigger Phrases (Classic) | Maintained for legacy support only | Generative Orchestration |
| External Power Apps ALM | Integrated natively inside the studio | Copilot Studio Solution Explorer |
| Manual Dataverse Uploads | Enhanced with semantic vectors | Dataverse Semantic Indexing |
| Per-Message Billing | Replaced in September 2025 | Copilot Credits / P3 Plans |
How to Build This Today
The original session showcased six distinct demonstrations covering the end-to-end creation of an HR agent. This section details exactly how you would build those same scenarios using the Microsoft Copilot Studio platform as it exists in April 2026. Because the platform now heavily leverages generative AI to simplify development, you will notice many previously manual steps are now entirely automated.
Scenario 1: Application Lifecycle Management (ALM) and Environments
The session showed Kelly creating a Power Platform solution with Dataverse for application lifecycle management (ALM) to separate development from production environments. She had to leave the conversational interface and navigate to the external Power Apps maker portal. Today, this external navigation is no longer required.
In April 2026, Copilot Studio embeds the solution explorer directly within the authoring environment. Agents are automatically provisioned within a default solution, but enterprise deployments require custom solutions. These custom solutions act as transport containers for your agents across different environments.
To build this ALM structure today, you must first configure a custom solution natively within Copilot Studio. The steps are straightforward and designed for immediate access. Open your Copilot Studio environment and look to the sidebar navigation.
Select the three dots (…) on the left menu, and then select Solutions. This action opens the native solution manager without forcing you into a new browser tab.
Quick Win: Do not rely on the default solution for enterprise agents. Immediately create a custom solution to ensure your agent components, environment variables, and connection references are properly grouped for future export.
Once the solution explorer is open, select New solution from the top menu bar. The system prompts you to define your solution requirements, including a display name, publisher, and version number. After the solution is created, it should open automatically in the explorer list.
You must then instruct the environment to use this specific container by default for all new components. Select Set preferred solution from the top menu. Choose your newly created custom solution from the dropdown list.
All newly created agents will now automatically reside in this managed container. This simple setup future-proofs your HR agent for seamless multi-environment deployments. When the HR agent is ready to move from testing to production, the deployment process is equally streamlined.
You no longer need to manually export and import ZIP files unless you choose to. Simply open your custom solution and select Pipelines from the side list. You can configure continuous integration and continuous delivery (CI/CD) pipelines directly from this menu.
These native pipelines allow you to execute single-click deployments to target environments. Furthermore, if you require source control, native Git integration is now readily available. For advanced developers, Microsoft also released a Visual Studio Code extension to manage these agent solutions directly within the IDE.
Scenario 2: Knowledge Attachment (SharePoint vs Dataverse)
The session showed Kelly comparing knowledge attachment options: uploading HR policy files (three short documents) directly to Dataverse versus linking to a SharePoint document library. She discussed trade-offs such as licensing and automatic updates. Here is how you would evaluate and build the same comparison in April 2026.
Today, there are two distinct methodologies for unstructured knowledge attachment. We refer to them as Option 1 (Dataverse File Upload) and Option 2 (SharePoint Connector). The fundamental differences between these two options revolve around storage consumption, search latency, and semantic capabilities.
Option 1 copies your HR policy files from SharePoint directly into Microsoft Dataverse. Once copied, the system processes these files into semantic indexes and creates vector embeddings. This enables high-quality semantic search across your documents.
Option 1 provides full-document search and uniquely supports reading text within images, such as scanned PDF files. However, this option consumes your Dataverse storage capacity, which carries distinct cost implications. Furthermore, Option 1 operates on a batch cycle for updates.
When an HR manager updates a policy in SharePoint, that change is not immediately reflected in the agent. The system synchronizes changes from the source files every four to six hours, based on ingestion completion. If real-time accuracy is critical for your HR policies, this delay poses a significant risk.
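The staleness risk can be reasoned about concretely. The sketch below illustrates the window in which an agent may answer from an outdated Dataverse copy; the six-hour constant reflects the worst case of the batch cycle described above, and the function name is my own:

```python
from datetime import datetime, timedelta

# Assumed worst case of the four-to-six-hour Dataverse ingestion cycle
# described above; illustrative, not a documented API value.
MAX_SYNC_LAG = timedelta(hours=6)

def answer_may_be_stale(policy_modified: datetime, queried_at: datetime) -> bool:
    """True if a SharePoint edit may not yet be reflected in the
    Dataverse copy the agent answers from."""
    return queried_at - policy_modified < MAX_SYNC_LAG

# A policy edited two hours before the user's question may still be
# served from the previously ingested copy.
edited = datetime(2026, 4, 1, 9, 0)
print(answer_may_be_stale(edited, datetime(2026, 4, 1, 11, 0)))  # True
print(answer_may_be_stale(edited, datetime(2026, 4, 1, 16, 0)))  # False
```

If your HR policies change often and this window is unacceptable, that alone can decide the comparison.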
Option 2 utilizes the native SharePoint Connector to avoid these delays. This approach leaves the files resting natively in SharePoint and does not consume any Dataverse storage. When the agent receives a query, it directly leverages the SharePoint search infrastructure to find the answer.
Warning: Option 2 (SharePoint Connector) does not currently support searching within SharePoint lists. It is strictly limited to document libraries, wikis, and site content.
Option 2 guarantees real-time content freshness, reflecting the latest available document updates immediately. It also supports advanced query filters based on metadata, such as filtering by author, modified date, or title. Recently, Microsoft upgraded the tenant graph grounding architecture for this connector.
This upgrade dramatically improves how agents retrieve and rank information across content-heavy SharePoint environments. It provides more precise, context-aware responses without the need for manual Dataverse ingestion. For most modern HR agents, we recommend Option 2 due to its real-time synchronization and lack of storage overhead.
To implement Option 2, you must ensure your users have the correct permissions. The system performs a live authorization check against the user's Entra ID connection (e.g., Sites.Read.All and Files.Read.All) at the source system. Files with sensitivity labels of "Confidential" or "Highly Confidential" are automatically blocked from agent answers.
Scenario 3: Designing Conversational Topics
The session showed Kelly designing conversational topics in Copilot Studio. She started with basic Q&A on HR policies and transitioned into interactive time-off requests where the agent explicitly asked for details. Here is how you would build that same conversational flow in April 2026.
The entire paradigm of conversation design has fundamentally shifted from classic orchestration to generative orchestration. Traditional topic-driven designs required makers to anticipate every user intent and build rigid, manual branching logic. The agent previously relied on handcrafted "trigger phrases" to route users correctly.
Today, generative orchestration introduces a large language model (LLM) acting as a dynamic, central planning layer. This planner interprets the user's intent, breaks down complex requests, and automatically selects the correct topics and tools. You no longer need to build overlapping logic or exhaustive trigger phrase lists.
To build an interactive time-off request today, you create a topic but rely entirely on the LLM for slot filling. The system no longer requires manual question nodes to explicitly ask the user for missing data piece by piece. Instead, you simply define the required inputs for your time-off action (e.g., Start Date, End Date, Reason).
When the user types, "I need to take next Friday off," the generative orchestrator recognizes the underlying intent. The planner automatically maps "next Friday" to the required Start Date and End Date slots using natural language understanding. It then recognizes that the "Reason" slot is still empty.
The LLM dynamically generates a natural, conversational question to ask the user for the missing reason. It does this without the developer writing a single line of script or building a new question node. This slot-filling behavior drastically reduces the size of your topic inventory.
Quick Win: Ensure Generative Orchestration is explicitly enabled in your agent's settings. Provide clear, comprehensive descriptions for every topic and tool, as the LLM relies entirely on these textual descriptions to determine routing.
Furthermore, generative orchestration easily handles multi-intent utterances that would break classic agents. A user can say, "Check the HR policy on bereavement and then submit a request for tomorrow." The LLM processes both requests simultaneously, creating a dynamic execution plan. It retrieves the knowledge first, then seamlessly initiates the time-off tool sequence.
If you prefer to work with code, Copilot Studio now features a native YAML code editor for topic design. This allows developers to view and edit topics in a highly readable markup language. You can easily copy and paste complex YAML configurations between different agents, vastly speeding up development time.
Scenario 4: Customizing Topics and Preventing Hallucinations
The session showed Kelly customizing out-of-the-box topics, adding new custom topics, and refining instructions to control agent behavior. Her goal was to prevent hallucinations and keep the agent focused on HR. Here is how you would secure and instruct that agent in April 2026.
Today, securing the agent requires robust, multi-layered generative AI guardrails. This configuration begins globally within the Generative AI tab of the agent settings. Here, you set the absolute boundaries for knowledge retrieval.
You must clearly define the permitted website URLs and uploaded documents. You also define the desired moderation level for the entire agent. These global settings act as your broadest knowledge baseline, overriding the generic, open-ended behavior of the base LLM.
To refine specific topic flows, you must utilize the Generative answers node within the authoring canvas. Adding custom instructions to this specific node yields the highest return on investment for preventing hallucinations. Unfortunately, this feature is frequently overlooked by novice makers.
To configure this, open your topic and select the three dots (…) on the Generative answers node. Open the Properties pane, and locate the Custom instructions field. The system demands precise, assertive instruction engineering here.
For an HR agent, your instructions must explicitly state the agent's persona and limitations. You must command the agent to maintain a professional, supportive tone and strictly avoid fabricating facts. Instruct the LLM to proactively ask clarifying questions when information is missing rather than guessing.
Warning: User prompts undergo strict safety filtering, but maker-configured system instructions do not. You carry total responsibility for ensuring your custom instructions do not introduce security vulnerabilities or encourage ungrounded, hallucinatory responses.
To further prevent context errors, you should employ dynamic prompts. Copilot Studio allows makers to use Power Fx formulas directly within the custom instructions field. This capability enables the instructions to adapt automatically based on variables established earlier in the conversation.
For example, if the agent detects the user is a manager, the Power Fx formula can dynamically inject instructions to reference leadership guidelines. This dynamically narrows the context window and minimizes hallucination risks. You can also implement Data Loss Prevention (DLP) enforcement across your agents using PowerShell commands to block unauthorized updates.
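The document describes doing this with Power Fx inline in the custom instructions field; the same conditional-composition idea is sketched below in Python, with the role variable and instruction text invented for illustration:

```python
def build_instructions(user_role: str) -> str:
    """Compose custom-instruction text from a conversation variable,
    mirroring what an inline Power Fx formula would do."""
    base = (
        "You are a professional, supportive HR assistant. "
        "Never fabricate policy details; ask a clarifying question "
        "when information is missing."
    )
    if user_role == "manager":
        # Dynamically narrow the context for managers, as described above.
        return base + " Reference the leadership guidelines when answering."
    return base

print(build_instructions("manager"))
```

Because the instruction text is assembled from known variables rather than free user input, it narrows the context window without widening the attack surface.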
Scenario 5: Implementing Tools for SharePoint Lists
The session showed Kelly implementing 'actions' to post time-off request data from conversations directly into a SharePoint list. This enabled subsequent Power Automate approval flows. Here is how you would build that same connection in April 2026.
In current terminology, 'actions' have been formally rebranded and expanded as 'Tools'. Tools serve as the primary mechanism for an agent to execute tasks and interact with enterprise systems. Natively calling a tool is significantly faster than designing a custom Power Automate flow for simple data entry.
To build this SharePoint connection today, open your agent and navigate to the Tools page from the left-hand navigation pane. Select Add a tool at the top of the screen. Choose Connector from the list of available tool types.
The system presents a search box; enter "SharePoint" and select the standard connector. The interface reveals all available operations for SharePoint. Locate and select the Create Item tool. This tool allows the agent to push new rows into a designated list directly from the chat.
Once added, you must configure the tool parameters. You will provide the specific SharePoint Site Address and the List Name. The interface dynamically exposes the columns of that list (e.g., Title, Start Date, End Date, Reason).
Here is where generative orchestration shines. Because you defined these exact slots conceptually in your instructions, the LLM automatically maps the extracted conversational entities directly into the SharePoint Tool inputs. You do not need to manually draw lines connecting variables.
You simply save the tool and publish the agent. When a user requests time off, the LLM planner gathers the required data through natural conversation. It then securely invokes the SharePoint Create Item tool using the user's Entra ID context. Finally, it confirms the successful database entry back to the user in natural language.
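The automatic variable mapping can be pictured as building the Create Item payload from the filled slots. The orchestrator does this for you; the column names below are illustrative list columns, not a fixed schema:

```python
def to_create_item_payload(slots: dict) -> dict:
    """Map conversational slots onto SharePoint 'Create Item' inputs.
    Shown only to make the data flow concrete -- the generative
    orchestrator performs this mapping automatically."""
    return {
        "Title": f"Time-off request: {slots['reason']}",
        "StartDate": slots["start_date"],
        "EndDate": slots["end_date"],
        "Reason": slots["reason"],
    }

payload = to_create_item_payload(
    {"start_date": "2026-04-10", "end_date": "2026-04-10", "reason": "Family event"}
)
print(payload["Title"])  # Time-off request: Family event
```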
If you require advanced security for tools, Copilot Studio now supports Model Context Protocol (MCP) servers. MCP servers enable secure, enterprise-grade tooling driven by natural language, perfect for complex HR logic apps. You can also configure tool triggers to operate strictly with end-user credentials via a simplified OAuth EasyAuth experience.
Scenario 6: Full Agent Setup and Jumpstart Learnings
The final segment showed Kelly setting up the full agent with descriptions and sharing Jumpstart program learnings on pitfalls like dynamic content issues. Here is how you would execute full setup and avoid those pitfalls in April 2026.
Today, complete agent assembly is vastly accelerated. You rarely start with a blank canvas. Microsoft recently released the Employee Self-Service Agent template, providing a massive head start specifically for HR scenarios.
This template includes prebuilt connectors, starter workflows for leave management, and accelerator packs. These accelerator packs integrate seamlessly with HRIS systems like Workday and SAP SuccessFactors, dramatically reducing your implementation time.
If you choose to build from scratch, you start on the Copilot Studio Home page. The interface fully supports natural language creation. You simply type, "Create an HR agent to answer policy questions and manage time-off requests."
The AI automatically provisions the base framework. It generates a suggested name, description, and initial instructions based on your prompt. It even suggests relevant triggers and knowledge sources.
Quick Win: Always manually refine the AI-generated instructions. Clear, concise, and boundary-defining system prompts remain the most critical component of a reliable, compliant HR agent.
The Jumpstart program learnings regarding dynamic content remain incredibly relevant. Historically, developers struggled to parse unstructured or dynamic JSON payloads returned by custom APIs. Makers frequently resorted to brittle string-scraping techniques to extract data for the conversational interface.
Today, the recommended architecture avoids string scraping entirely. Copilot Studio now supports native schema-based JSON parsing. When your agent retrieves complex backend data, the parsed output instantly becomes typed dynamic content.
You can simply reference @item()?['LINE'] or similar structured tags directly within your conversational response configuration. This ensures the agent perfectly formats complex data, such as a summary of available leave balances, without hallucinating formatting errors. By treating AI as an operational actor with scoped identity and structured data inputs, you ensure enterprise-grade reliability.
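The schema-based approach can be illustrated generically: parse the backend payload once into typed fields, then format responses from those fields instead of scraping strings. The payload shape and class names below are invented for illustration:

```python
import json
from dataclasses import dataclass

@dataclass
class LeaveBalance:
    leave_type: str
    hours_remaining: float

def parse_balances(raw: str) -> list[LeaveBalance]:
    """Parse a backend JSON payload into typed records -- the opposite
    of brittle string scraping."""
    return [
        LeaveBalance(item["type"], float(item["hours"]))
        for item in json.loads(raw)["balances"]
    ]

raw = '{"balances": [{"type": "Vacation", "hours": 64}, {"type": "Sick", "hours": 24}]}'
summary = ", ".join(
    f"{b.leave_type}: {b.hours_remaining:g}h" for b in parse_balances(raw)
)
print(summary)  # Vacation: 64h, Sick: 24h
```

Once the data is typed, formatting errors become impossible to hallucinate: the agent can only render fields that actually exist in the schema.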
Licensing Quick Reference
The licensing model for Microsoft Copilot Studio underwent significant adjustments to support generative AI processing at scale. The legacy pay-per-message model was retired in September 2025, replaced by Copilot Credits as the universal currency. The table below outlines the necessary licenses for deploying the HR agent based on your target audience.
| License Type | Target Audience / Use Case | Cost Structure (April 2026) |
|---|---|---|
| Microsoft 365 Copilot (Enterprise) | Internal Employees (B2E) | $30 / user / month. Includes unlimited agent usage within fair limits. |
| Microsoft 365 Copilot (Business) | Internal SMB Employees (<300 seats) | $18 - $21 / user / month (promotional pricing until June 2026). |
| Copilot Studio Standalone (Pay-As-You-Go) | External Users / Custom Channels | Unlimited usage, billed via Azure meter based on Copilot Credits consumed. |
| Copilot Credit P3 Plan (Pre-Purchase) | Enterprise Scale (Predictable Billing) | Tiered annual commit. E.g., Tier 1 offers 3,000 CCCUs for $2,850. |
When rolling out an internal HR agent, most organizations leverage their existing Microsoft 365 Copilot licenses. Agents built specifically for Teams, SharePoint, and Microsoft 365 are included at no additional charge for these licensed users.
However, if you intend to deploy the HR agent to an external company website or allow unauthenticated users to access it, the standard M365 license is insufficient. You must utilize the standalone Copilot Studio subscription. This standalone model consumes Copilot Credits via your configured Azure meter for every interaction.
To manage costs proactively, administrators should utilize the Agent Usage Estimator tool. This tool allows you to forecast your Copilot Credit volume based on anticipated traffic, tool usage, and orchestration complexity before launching the agent company-wide. For massive enterprise deployments, the new Copilot Credit P3 Plans offer significant tiered discounts for upfront capacity commitments.
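The arithmetic behind such a forecast is simple multiplication. A minimal sketch; the per-interaction credit consumption and the pay-as-you-go meter rate below are placeholder assumptions, not published figures, so verify both against the current price sheet before relying on the numbers:

```python
# Placeholder assumptions -- verify against the current Microsoft price sheet.
CREDITS_PER_INTERACTION = 2.0   # varies with tool calls and orchestration depth
PAYG_RATE_PER_CREDIT = 0.01     # assumed pay-as-you-go meter rate, USD

def forecast(interactions_per_month: int) -> tuple[float, float]:
    """Return (credits consumed, metered USD cost) for a monthly traffic estimate."""
    credits = interactions_per_month * CREDITS_PER_INTERACTION
    return credits, credits * PAYG_RATE_PER_CREDIT

credits, cost = forecast(10_000)
print(credits, cost)  # 20000.0 200.0
```

Running this kind of estimate across your expected traffic tiers makes it easy to see where a pre-purchased commit becomes cheaper than the metered rate.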