Skip to content

Preamble Launches ATP Platform with Limited, Free Trials

Submitted by Preamble

Preamble, a Pittsburgh-based technology startup pioneering SaaS-based safety and security for generative AI platforms, has launched Preamble ATP, its comprehensive and customizable risk mitigation solutions platform. Preamble ATP is a powerful yet simple solution that requires no coding and is scalable – from businesses experimenting with AI – to enterprises desiring complex API integrations. Limited free trials are available at Preamble.com.

Preamble is the first end-to-end AI safety SaaS provider that curates unique metrics to implement policies around LLMs, with easy-to-use tools to create, evaluate, and deploy these metrics, with ongoing support. Preamble lets subscribers use natural language to set up custom guardrails or choose from existing AI safety, security, and privacy policies in its Marketplace to start using GPT-4 or Mistral in minutes directly on the platform. 

“The Preamble platform is ideal for getting started with AI and controlling risk, whether you’ve never tried ChatGPT or your company already uses genAI systems. We offer beginner to advanced options that allow users to customize how they interact with LLMs and protect their input and output according to their needs. Using our Chat feature, you can try different guardrails using OpenAI or Mistral in a protected environment. Our Dashboard allows you to monitor your organization’s input and output in real-time for threats or misuse,” said Jeremy McHugh, Preamble's CEO and co-founder. 

For small businesses, monthly pricing is competitive with the cost of current premium LLM subscriptions, plus Preamble’s powerful safety and security tools. The product is launching with full access to OpenAI and Mistral AI and plans to add additional LLM options, such as Anthropic, Google, and others. It will also introduce new features and continuous updates in the coming weeks and months.

"While the backend of our platform is likely the most sophisticated architecture available that's devoted to responsible AI integration, we created Preamble as a simple solution. While it is particularly significant for industries where generative AI poses threats to compromising sensitive data (such as financial and healthcare), our solution is vital for any organization concerned with upholding privacy, security, regulatory, or compliance requirements in an environment of rapidly advancing technology," said McHugh.

A veteran-led business, Preamble’s team includes AI safety and security pioneers and experts from leading technology entities such as UC Berkeley, MIT, Stanford, Penn State University, The United States Air Force, Snapchat, and Facebook. The team began building the platform's architecture in 2021, anticipating future AI risks and threats. With a mission to develop safe, inclusive AI systems that respect diverse values and principles, Preamble is committed to shaping the future of AI safety.

Use Cases

Unlike products from numerous new "AI safety" companies, Preamble is the only AI comprehensive solution that customizes and dynamically supports all of these use cases through a simple user interface, ongoing support, and a robust Policy Library and Marketplace:

  • Safety: Blocks harmful (e.g., illegal, dangerous, etc.) requests and unwanted responses
  • Security: Prevent prompt-based attacks
  • Privacy: PII (Personal Identifiable Information)
  • Compliance: AI Acceptable Use Policies
  • General: Limit AI capabilities

Platform, Policy Library, and Marketplace

Compatible with any generative AI system, Preamble's ATP translates users' existing policies, procedures, compliance needs, and principles into a protective layer as their data interacts with AI technologies. Users can build their own Policy Library to adhere to emerging guidelines from sources such as The Biden Administration's White House's AI Executive Order and NIST's Artificial Intelligence Risk Management Framework (AI RMF 1.0).

Preamble's Policy Marketplace is an extension of its platform that is in continued development. It serves as a hub where users – from independent developers to enterprise clients – can contribute and access a wide array of specialized AI policies through a simple user interface. Preamble designed the Marketplace to democratize safe and responsible AI deployment advancements by centralizing resources, enabling accessibility, fostering community engagement, and driving innovation.

"LLMs work in many languages, opening up more avenues for threats than ever before. We're providing a single solution for all generative AI platforms that combines the capabilities you'd expect to find across the entire range of traditional cybersecurity products with customizable solutions that no other company provides. We offer multiple ways to make it easy for users to determine how they want to control their data's interaction with AI technologies. Our platform is easy to set up and lets end-users dynamically control both AI inputs and outputs," added Jeremy McHugh, CEO and co-founder of Preamble.

For a limited time, Preamble is offering its platform free of charge for 14 days to users who want to explore its interface and functionalities, browse the Policy Marketplace, and upload policies to its Marketplace. Visit www.preamble.com.

About Preamble

Preamble democratizes safety and security guardrails for generative AI systems. Its comprehensive AI Trust platform, Guardrails Toolkit, and Marketplace allow organizations, domain experts, and stakeholders to curate shared values and deploy generative AI guardrails that integrate ethics, maintain security, comply with policies, and mitigate risk. Beyond applying values to AI, Preamble provides tools to improve risk-based guardrails continuously. With a mission to develop safe, inclusive AI systems that respect diverse values and principles, Preamble is committed to shaping the future of AI safety. Headquartered in Pittsburgh, Pennsylvania, Preamble is a veteran-led business.

Strategic Alliances

  • AI Vulnerability Database (AVID) - an open-source knowledge base of failure modes for Artificial Intelligence (AI) models, datasets, and systems.
  • NVIDIA Inception - a program that nurtures startups revolutionizing industries with technological advancements.
  • MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) Alliances
  • Penn State University’s Nittany AI Alliance
  • CEO Circle – Bunker Labs and JP Morgan Chase Commercial Banking