Articles
February 26, 2025

Now Assist FAQs

Built on the Now Platform®, Now Assist combines generative AI capabilities with ServiceNow's powerful workflow automation platform. Now Assist lets you use domain-specific models to improve your organisation's productivity and efficiency, deliver better self-service, recommend actions and provide answers, and make searches more effective.

Data Handling:

Does data leave my instance when it is processed by the Now LLM?

Yes, data leaves your instance temporarily during processing by the Now LLM. It is securely transmitted to ServiceNow regional data centres or Azure Public Cloud infrastructure within the same region. Once processed, the data is deleted from the processing environment.

What data leaves my instance when being processed by the Now LLM?

Only the necessary data required to generate a response, such as the input prompt and any related fields (e.g., incident short description, description, priority), leaves the instance. Data is transmitted securely, and sensitive data can be redacted using the Sensitive Data Handler plugin.

What is the Sensitive Data Handler plugin?

The Sensitive Data Handler plugin is a ServiceNow tool that allows customers to redact sensitive information, such as personally identifiable information (PII), from data before it leaves the instance for processing. It provides an added layer of security for customers handling sensitive data.

How is customer data securely transported to the LLM?
  • All data in transit is encrypted using TLS 1.2 protocols to ensure it cannot be intercepted.
  • Data is only transmitted to processing centres within the same geographic region as the customer’s instance, complying with local regulations.
  • Data is processed in-memory only and is not cached or stored.
  • Customers can configure the Sensitive Data Handler plugin to redact personally identifiable information (PII) before data leaves their instance.
Is the Sensitive Data Handler plugin set up by default?

No, the Sensitive Data Handler plugin is not set up by default. Customers must actively configure it after purchasing Now Assist to ensure sensitive data is redacted.

What happens if I don’t configure the Sensitive Data Handler?

Without configuring the Sensitive Data Handler, sensitive data such as PII may be included in the data sent for processing. Customers are strongly encouraged to configure the plugin for additional security.

Where is my data processed when using Now Assist?

Data is processed in ServiceNow regional data centres located in:

  • United States
  • Canada
  • Germany
  • Japan

For burst workloads, Azure Public Cloud infrastructure is used, but data remains within the customer’s geographic region (e.g., Europe, Asia-Pacific).

Does data ever leave my region?

No, all data remains within the customer’s region for processing, even during burst workloads on Azure Public Cloud.

What happens to data after it is processed?
  • Data is processed in-memory and deleted immediately after generating a response.
  • Temporary data is not stored or cached on any servers.
Can I opt out of data sharing for AI model improvements?
  • Yes, customers can opt out of data sharing through the Now Assist Admin Console. Once opted out, no additional data is collected for model improvement.
  • ServiceNow waits 30 days after the installation of Advanced AI products before starting data collection to give customers enough time to configure their opt-out settings.
  • Data already collected before opting out will not be removed or retroactively excluded from ServiceNow’s training and development processes.  
Is customer data shared across regions or with other customers?

No, customer data is not shared across regions or co-mingled with data from other customers.

How does ServiceNow ensure compliance with regional regulations?
  • ServiceNow processes data within the same region as the customer instance, ensuring compliance with data residency requirements (e.g., GDPR for Europe).
  • All data handling adheres to stringent compliance frameworks and ServiceNow’s Data Security Addendum.
What security measures are in place to protect my data?
  • Encrypted transmission of data using TLS 1.2.
  • Isolation of data processing environments to prevent unauthorised access.
  • Strict access controls for ServiceNow personnel handling data.
  • Regular audits, compliance checks, and adherence to privacy-by-design principles.
Can I still use Now Assist if I opt out of data sharing?

Yes, opting out only stops your data from being collected for model improvements. All Now Assist functionality remains available.

What is the business value of contributing data for AI model improvement?
  1. Contributing data helps ServiceNow improve prediction accuracy, user experience, and reduce AI errors like hallucinations.
  2. Customers benefit from more accurate, tailored AI capabilities that align with their specific needs.
Is there a risk of copyrighted or sensitive data appearing in LLM outputs?

Outputs may reference input data, including copyrighted or sensitive information, depending on the data provided. Users are encouraged to review and edit AI-generated responses to ensure appropriate use.

How can I verify where my data is processed?

Customers can check their data processing location in the Now Assist Admin Console under the Language and Region tab.

How does ServiceNow ensure transparency in AI processing and decision-making?

ServiceNow ensures transparency by providing detailed documentation about its AI models, known as Model Cards. These documents explain the purpose of the AI model, what kind of data was used to train it, its intended use cases, and any known risks or limitations. They also include information about how the model makes decisions and what safeguards are in place to prevent errors or bias. This allows customers to clearly understand how the AI operates and to evaluate whether it aligns with their requirements.

How does ServiceNow mitigate risks of bias in AI outputs?

ServiceNow employs techniques like instruction fine-tuning and diverse training datasets to reduce bias. Additionally, customers can review and edit AI-generated outputs before using them.

Does ServiceNow comply with regulatory requirements like GDPR, HIPAA, and PCI DSS?

Yes, ServiceNow complies with regulatory frameworks applicable to specific industries, including GDPR, HIPAA, and PCI DSS. The platform adheres to rigorous security protocols and provides data processing addenda to meet industry-specific requirements.

Now Assist Guardian:

What is Now Assist Guardian?

Now Assist Guardian is a built-in security and compliance layer for Now Assist, designed to detect and manage offensive content, security threats, and sensitive topics in generative AI interactions.

It helps prevent AI misuse, such as prompt injection, biased outputs, or harmful content.

How does Now Assist Guardian protect against offensive or harmful content?

It monitors both user inputs and AI-generated outputs for harmful content, including hate speech, bias, misinformation, and security threats.

Customers can choose to either log detected content or block it entirely before it reaches end users.  

What is prompt injection, and how does Now Assist Guardian prevent it?

Prompt injection is a type of security attack where users manipulate AI by overriding its instructions (e.g., tricking it into revealing confidential data).

Now Assist Guardian detects these attacks using pre-trained models, blocking suspicious requests and logging the attempt for admins.  

Can I configure Now Assist Guardian to suit my organisation’s needs?

Yes, administrators can enable or disable specific guardrails, including:

  • Offensive content detection (e.g., hate speech, fraud, unethical behaviour).
  • Prompt injection prevention (e.g., role-playing, hidden commands).
  • Sensitive topic filtering (e.g., workplace issues, HR disputes).

You can also modify or add detection filters based on company policies.

What happens if offensive content or a security threat is detected?

If monitoring is enabled, the event is logged in the Now Assist Admin Console for review.

If blocking is enabled, the AI-generated response is replaced with a generic error message, ensuring harmful content isn’t shown.

Can Now Assist Guardian prevent misinformation or bias?

Yes, it detects and flags misinformation, biased responses, and unethical recommendations.

AI-generated knowledge articles, chat responses, and resolution notes are monitored for fairness and accuracy.

How does Now Assist Guardian handle sensitive workplace topics?

It filters conversations related to sensitive topics (e.g., harassment, discrimination, terminations) and redirects users to an appropriate HR or support process instead of AI-generated responses.

This ensures compliance with workplace policies and prevents AI from making inappropriate recommendations.

Can I see reports of detected incidents?

Yes, logs and reports are available in the Generative AI Metrics table or as downloadable .csv files.

Logs track offensive content detections, security threats, and sensitive topic filtering events.

Is Now Assist Guardian enabled by default?

No, guardrails are disabled by default, except for prompt injection logging.

Administrators must activate specific guardrails based on company risk policies.

Does Now Assist Guardian support multiple languages?

Currently, it only supports English, but some multilingual detection may work. Official support is English-only.

What industries benefit the most from Now Assist Guardian?

Highly regulated industries such as finance, healthcare, legal, and government can use Now Assist Guardian to reduce AI risk exposure and maintain compliance.

Can Now Assist Guardian be used with third-party LLMs?

Yes, this ensures that even third-party LLM responses comply with ServiceNow’s Responsible AI guidelines, helping prevent harmful, biased, or non-compliant outputs.

When a user submits a prompt in a ServiceNow workflow, the request is first sent to the third-party LLM. The LLM generates a response based on the input. Before the response is sent back to the user, Now Assist Guardian intercepts and reviews it applying the same security guardrails as on the native NOW LLM. If a violation is detected, Guardian can log the issue for admin review, or block the response, replacing it with a standard error message.

NOW LLM has built-in Guardian filtering, meaning security checks happen before and after the AI generates a response. Third-party LLMs rely on an extra Guardian call after the response is received, ensuring ServiceNow-level security before showing results to users.

However, this extra Guardian call introduces an extra processing step which might cause a delay in the response and potentially incur higher API usage costs for third-party LLM providers.

Is Now Assist Guardian available in all ServiceNow environments?

No, it is only available in commercial environments.

It is not available for self-hosted deployments or regulated markets.

Bring Your Own Model:

What is BYOM in ServiceNow?

BYOM (Bring Your Own Model) allows me to connect my own external LLM (e.g., OpenAI, Azure OpenAI) to ServiceNow using the Generative AI Controller.

This gives me greater control over my AI models, data handling, and compliance while still benefiting from ServiceNow’s automation and workflow capabilities.

What external LLMs can I connect to ServiceNow?

ServiceNow natively supports the following models:

  • OpenAI (GPT-3, GPT-3.5 Turbo, GPT-4)
  • Azure OpenAI
  • Google Cloud AI (Vertex AI & MakerSuite)
  • Aleph Alpha
  • IBM WatsonX

If my preferred model isn’t listed, I can use the generic LLM connector, but I’ll need to manually configure the API connection.

What happens to my data when I use a third-party LLM?

Once data leaves my instance and is processed by an external LLM, ServiceNow is no longer responsible for its security.

If compliance is a concern, I can:

  • Use the Sensitive Data Handler plugin to redact sensitive information before sending prompts.
  • Ensure my LLM provider meets industry regulations (e.g., GDPR, HIPAA).
  • Enable Now Assist Guardian filtering for extra security before responses are shown to users.
How does the extra Guardian call work for third-party LLMs?

If I use a third-party LLM, Now Assist Guardian applies an additional security check before the AI-generated response is displayed.

This helps detect and filter:

  • Offensive content (e.g., hate speech, bias, misinformation).
  • Prompt injection attacks (e.g., attempts to manipulate AI to reveal sensitive data).
  • Sensitive workplace topics (e.g., HR disputes, workplace safety).

While this improves security, it may increase response times and API usage costs.

What are the main limitations of BYOM?
  • Setup Complexity – I’ll need to manually configure my LLM API, which may require technical expertise.
  • Limited Integration – Some ServiceNow-native AI features, like Virtual Agent flows, may not work seamlessly with third-party LLMs.
  • Data Handling Risks – Once data is processed by an external LLM, I am responsible for its security and compliance.
  • Higher Costs – Using a third-party LLM means I must pay licensing fees and API call costs directly to the provider.
  • Performance Dependency – My AI’s response times depend on my LLM provider’s infrastructure and latency.
  • Support Limitations – ServiceNow only provides support for setting up the Generative AI Controller, not the performance of my LLM.
Can I use BYOM without sending any data outside my instance?

No, all LLM processing happens externally, so some data must be sent to the third-party provider.

If I want to keep all data within my ServiceNow instance, I should use Now LLM instead of a third-party LLM.

Will all ServiceNow AI features work with my third-party LLM?

Not necessarily, some features are optimised for Now LLM, and may not work as expected with external models.

Features that could be impacted include:

  • Pre-built Virtual Agent conversations.
  • ITSM, HRSD, or CSM AI-driven workflows.
  • Knowledge Base generation.

I may need to adapt my workflows and prompts to ensure compatibility.

Will using BYOM cost me extra?

ServiceNow doesn’t charge extra for BYOM, but I will need to pay for my third-party LLM separately.

Costs may include:

  • LLM licensing fees from my provider.
  • API usage charges based on the number of calls made.
  • Potential extra costs if Now Assist Guardian filtering increases my API requests.
Is BYOM available in all ServiceNow environments?

No, BYOM is only available in commercial environments i.e. for standard deployments of ServiceNow where data is hosted in ServiceNow-managed data centres, customers operate under standard licensing and security policies, and ServiceNow provides ongoing updates, patches, and maintenance.

It is not supported in regulated markets or self-hosted deployments.

ServiceNow cannot guarantee full visibility into how third-party LLMs handle customer data once it leaves the platform. For these reasons, customers operating in highly regulated sectors cannot use BYOM and must rely on ServiceNow’s Now LLM, which is built to meet stricter compliance needs.

Since BYOM relies on ServiceNow’s cloud-based Generative AI Controller, customers running self-hosted instances cannot access this functionality.

How can I ensure my BYOM setup is compliant with industry regulations?

Since ServiceNow is not responsible for my third-party LLM’s compliance, I should:

  • Review my LLM provider’s data handling policies.
  • Enable Guardian filtering for extra security screening.
  • Use ServiceNow’s Sensitive Data Handler plugin to redact sensitive information before processing.
How can I monitor the performance of my BYOM implementation?

I can track performance using the Now Assist Admin Console, which provides:

  • API response times for my LLM.
  • Security logs for blocked content or prompt injection attempts.
  • Reports on flagged offensive content or filtered topics.
  • Logs can be exported as .csv files, or reviewed in the Generative AI Metrics table.
What should I consider before implementing BYOM?

Before I proceed, I should evaluate:

  • Technical Setup – Do I have the expertise to configure an API connection?
  • Performance – Can my LLM handle real-time responses without excessive delays?
  • Compliance – Does my provider meet data residency and security requirements?
  • Cost – Have I factored in LLM licensing and API usage expenses?
  • Security – Have I enabled Now Assist Guardian to filter responses?

You may also be interested in

Trusted by CIOs & Digital Transformation leaders across the world

Speak to us today.

We are an industry-recognized ServiceNow Elite Partner built to guide you through every stage of your journey with the ServiceNow platform.

Whether you are new to ServiceNow or an existing ServiceNow customer, Crossfuze has you covered.

Valuable resources & insights from our experts, straight into your inbox.

SUBSCRIBE TO OUR NEWSLETTER TODAY

Whatever stage of the journey you're at with ServiceNow, Crossfuze has you covered.