Lumina AI Screen Share: Trusted, Responsible AI

A person reviewing documents on a laptop, with digital overlays showing checked items and signatures.

How Lumina AI Screen Share Was Built to Earn Trust

By Harish Desai, SVP, Data and Applications & Kris Kimmerle, VP, AI Risk and Governance

(6-minute read)

OpenAI published a case study on our collaboration to build Lumina AI Screen Share, RealPage’s real-time, voice-enabled AI assistant designed for property management workflows. Among early adopters, users resolve 95% of issues on their own while engaging with the Lumina AI Screen Share agent. Average resolution time is under five minutes. 90% report stronger workflow confidence across their site teams.

Those results matter. We want to talk about the decisions we made to earn the right to ship it.

 

What Is AI Screen Sharing and How Lumina AI Screen Share Works

Picture a site team member walking through a lease renewal in OneSite for the first time. Instead of toggling between help articles and the application, they have a voice in their ear that sees their screen, understands where they are in the workflow, and talks them through the next step. That is Lumina AI Screen Share.

It is an AI agent built on OpenAI’s Agents SDK with Realtime Voice models, communicating through WebRTC. It sees what the user sees on their screen, retrieves relevant guidance from our product knowledge base, and coaches the user through the workflow step by step. In real time, with voice.

The agent watches, listens, and guides. When someone is navigating a payment setup or figuring out how to process a move-in, it identifies where the user is in the workflow and provides contextual next steps grounded in documented procedures.

That is what “agentic AI” means in practice. A specific architecture with specific capabilities and, critically, specific constraints.
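The loop described above (observed screen state in, documented guidance out, no direct action) can be sketched in a few lines. Everything here is illustrative: the `ScreenContext` fields, the dict-backed knowledge base, and `next_guidance()` are assumptions for the sketch, not the production Agents SDK integration.

```python
from dataclasses import dataclass

# Illustrative sketch of the guide-don't-execute loop. All names and the
# dict-backed "knowledge base" are assumptions, not the real system.

@dataclass
class ScreenContext:
    page: str                  # which workflow screen is rendered
    visible_fields: list[str]  # only what the browser actually shows

def next_guidance(ctx: ScreenContext, knowledge: dict[str, str]) -> str:
    """Map observed UI state to a documented next step; never act directly."""
    step = knowledge.get(ctx.page)
    if step is None:
        # Unknown screen: hand off rather than guess.
        return "I'm not sure about this screen; let me connect you with support."
    return step

kb = {"lease_renewal_step_2": "Confirm the new lease term, then select Review."}
ctx = ScreenContext("lease_renewal_step_2", ["term", "rate"])
print(next_guidance(ctx, kb))
```

The key property the sketch preserves: the function returns guidance text only; it never mutates application state.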

What Lumina AI Screen Share Agent Does Not Do

Every important design decision in Lumina AI Screen Share is grounded in what the agent is not allowed to do. The agent operates entirely at the UI layer, observing what is rendered in the browser and providing guidance based on that context. Because of this architectural choice:

  • It inherits existing role-based access controls (RBAC)
  • If a user cannot see certain records, the agent cannot see them either
  • No parallel access control system is required

The existing permissions model applies automatically based on where the agent sits in the architecture.
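Why UI-layer observation inherits RBAC automatically can be shown with a toy example. The roles, records, and function names below are all hypothetical; the point is only that the agent's input is the already-filtered render, never the database.

```python
# Hypothetical illustration of RBAC inheritance at the UI layer: the agent
# reads only what render_for() puts on screen, so the application's
# existing permission check is the only permission check needed.

RECORDS = {"resident_ssn": "123-45-6789", "lease_term": "12 months"}
ROLE_CAN_SEE = {"leasing_agent": {"lease_term"}}

def render_for(role: str) -> dict[str, str]:
    """The UI's existing RBAC decision: which records get rendered."""
    allowed = ROLE_CAN_SEE.get(role, set())
    return {k: v for k, v in RECORDS.items() if k in allowed}

def agent_context(role: str) -> dict[str, str]:
    # The agent observes the rendered screen, never the database,
    # so records hidden from the user can never reach it.
    return render_for(role)

print(sorted(agent_context("leasing_agent")))  # -> ['lease_term']
```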

The agent provides guidance only; it does not execute actions within OneSite. Users remain in control of the workflow and verify each step before proceeding. This is a deliberate human-in-the-loop design, and it is staying.

Knowledge retrieval uses relevance-threshold filtering to ensure accuracy over guesswork. When the agent retrieves content, results are scored and used only if they meet a defined confidence threshold. If nothing qualifies, the agent does not guess. Instead, it:

  • Asks clarifying questions
  • Retries with additional context
  • Routes the user to human support

The fallback is always a human, never a hallucination.
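The threshold-and-fallback behavior can be sketched as follows. The threshold value, scores, and the `GUIDE`/`ASK`/`HANDOFF` labels are illustrative assumptions; the article specifies only the behavior that below-threshold results are discarded and the chain ends at a human.

```python
# Sketch of relevance-threshold filtering with a human fallback.
# THRESHOLD and all scores are illustrative, not the real values.

THRESHOLD = 0.75

def respond(scored_results: list[tuple[str, float]], already_retried: bool) -> str:
    """Ground the answer in docs above the threshold, or fall back to a human."""
    passing = [doc for doc, score in scored_results if score >= THRESHOLD]
    if passing:
        return f"GUIDE: {passing[0]}"
    if not already_retried:
        # Below threshold: ask a clarifying question and retry with more context.
        return "ASK: Which screen are you on right now?"
    # Still nothing confident after retrying: the fallback is a human.
    return "HANDOFF: routing you to human support"

print(respond([("Select Renew Lease, then click Review.", 0.91)], False))
print(respond([("weak match", 0.40)], True))
```

Note that no branch fabricates an answer: every path either quotes a document that cleared the threshold or moves the user toward a person.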

The agent is also domain-constrained to OneSite workflows. It does not answer general knowledge questions or operate outside the product context. This narrow scope is one of the most effective ways to reduce hallucination risk.
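A domain constraint of this kind reduces to a gate in front of the agent. The real system's scope classifier is not public, so the keyword check below is a deliberately crude stand-in for "is this a OneSite workflow question?"

```python
# Hypothetical scope gate; topic keywords are illustrative stand-ins
# for whatever classifier the production system actually uses.

ONESITE_TOPICS = {"lease", "renewal", "move-in", "payment", "ledger", "onesite"}

def in_scope(question: str) -> bool:
    words = set(question.lower().replace("?", "").split())
    return bool(words & ONESITE_TOPICS)

print(in_scope("How do I process a move-in?"))     # -> True
print(in_scope("Who won the World Cup in 2022?"))  # -> False
```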

Every one of these constraints is load-bearing. Together, they are what make the product reliable, and what allow it to earn user trust.

Discover how real-time AI guidance helps your property teams resolve issues faster and work with confidence.

Watch an In-Depth Look at the Lumina AI Platform

Why AI Governance Expectations Are Rising

The conversations we have with customers about AI governance have changed over the past year. Operators used to ask whether we had a responsible AI policy. Now they want to see the documentation behind it. They want to understand success criteria, performance benchmarks, testing protocols, and risk classification. They want to know who provides oversight and whether governance has a seat at the table when product decisions get made.

A timeline from 2024 to 2026, depicting vendor readiness stages in AI, from policy to design decisions.

Figure 2. Customer expectations for AI governance have risen from policy-level questions to architecture and design-level scrutiny. Most vendors have not kept pace.

Here are two questions we wish more operators would ask every AI vendor in their stack:

  • Where are the handoff points between AI and humans?
  • Which interactions does the AI handle end-to-end, and which ones trigger a human review?

The answers tell you a lot about how seriously a company has thought about the risks of what it is deploying.

We welcome those conversations. When a customer asks us to describe our controls, we walk them through a clear, evidence-backed answer. We built this product knowing those questions were coming.

How RealPage Governs AI: Architecture, Guardrails, and Evaluation

We leverage OpenAI’s foundation models rather than building our own. RealPage does not train or fine-tune these models. Our governance responsibility covers everything around them, and that is where governance lives or dies.

Diagram illustrating system layers: Continuous Evaluation, Guardrails, and Foundation Models with key details.

Figure 3. RealPage governance model. Foundation models sit at the center. Guardrails and continuous evaluation wrap around them. Every layer produces a record.

We run a continuous evaluation platform that monitors our AI agents across multiple dimensions, including conversation quality, task completion, hallucination detection, fair housing signals, and handoff behavior. Performance is tracked against defined success thresholds. When something fails, we categorize the failure pattern, analyze it, and feed those findings back into prompt refinement and system improvements. This is not a quarterly review. It runs continuously.
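The evaluation check at the core of that loop can be sketched simply. The dimension names below come from the article; the threshold values and scores are illustrative assumptions.

```python
# Sketch of the continuous-evaluation check: score each conversation on
# the listed dimensions, flag anything below its floor, and feed the
# failures back into prompt refinement. Floors and scores are made up.

THRESHOLDS = {
    "conversation_quality": 0.80,
    "task_completion": 0.90,
    "hallucination_free": 0.99,
    "fair_housing_clean": 1.00,
    "handoff_correct": 0.95,
}

def evaluate(scores: dict[str, float]) -> list[str]:
    """Return failing dimensions to categorize and feed back into prompts."""
    return [dim for dim, floor in THRESHOLDS.items()
            if scores.get(dim, 0.0) < floor]

failures = evaluate({
    "conversation_quality": 0.86,
    "task_completion": 0.93,
    "hallucination_free": 0.97,  # below its floor
    "fair_housing_clean": 1.00,
    "handoff_correct": 0.96,
})
print(failures)  # -> ['hallucination_free']
```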

We deploy guardrails using a defense-in-depth approach. Prompt-level safeguards were part of the first GA release. Independent guardrail layers that evaluate requests and responses outside of the application workflow are in active development. Every layer is designed to produce evidence. If a guardrail fires, there is a record.
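The defense-in-depth idea, with each firing producing a record, looks roughly like this. The layer names and the checks themselves are illustrative stand-ins, not RealPage's actual guardrails.

```python
import json
from datetime import datetime, timezone

# Defense-in-depth sketch: each layer checks the text independently and
# every firing produces an evidence record. Checks here are toy stand-ins.

def pii_layer(text: str) -> bool:
    return "ssn" in text.lower()

def scope_layer(text: str) -> bool:
    return "stock tips" in text.lower()

LAYERS = [("pii", pii_layer), ("scope", scope_layer)]

def run_guardrails(text: str) -> list[dict]:
    """If a guardrail fires, there is a record."""
    records = []
    for name, check in LAYERS:
        if check(text):
            records.append({
                "layer": name,
                "fired_at": datetime.now(timezone.utc).isoformat(),
                "excerpt": text[:40],
            })
    return records

print(json.dumps([r["layer"] for r in run_guardrails("any stock tips?")]))
```

Because the layers run independently of the application workflow, a single layer failing open does not silence the others, and every firing leaves an auditable trail.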

How AI Guidance Improves Property Management Outcomes

There is a direct line from operator confidence to resident outcomes. When a site team member can resolve an issue in under five minutes instead of submitting a support ticket and waiting, the resident gets a faster answer. When 95% of issues are resolved without escalation, site teams spend more time on the work that requires human judgment and less time navigating software.

Flowchart showing impact chain from AI guidance to resident outcomes, highlighting governance importance.

Figure 4. The impact chain from AI guidance to resident outcomes. Without governance, the chain breaks between guidance and resolution.

Lumina AI Screen Share is one product in a broader collaboration with OpenAI. We are working together to embed agentic AI across the RealPage platform, and every product in that pipeline will go through the same governance rigor described here. The same architectural discipline. The same commitment to constraints that earn trust. The same expectation that we can show our work when customers and regulators ask.

That is what responsible AI looks like at RealPage. A product that earns trust by design because it was constrained by design. An agent that coaches rather than replaces. A governance program that produces evidence alongside policy documents.

Every proptech company will tell you they take AI governance seriously. Fewer can walk you through the architecture decisions that prove it. Fewer still will tell you what their AI cannot do, and why that is the point.

We can. Ask us.

Ready to Bring Trusted AI to Your Property Operations?

Lumina AI Screen Share is just one example of how RealPage is embedding responsible, high-impact AI across property management.

Talk to an AI expert at RealPage