Pushing the boundaries of research and learning with AI you can trust.
Artificial intelligence (AI) is transforming research, teaching and learning. Clarivate makes sure you can safely and responsibly navigate this new landscape, driving research excellence and student learning outcomes.
Clarivate AI-based solutions provide users with intelligence grounded in trustworthy sources and embedded in academic workflows, thus reducing the risks of misinformation, bias, and IP abuse.
Discover a new, conversational way to understand topics, gain insight, locate must-read papers, and connect the dots between articles in the world’s most trusted citation index.
A new way to navigate millions of full-text academic works within ProQuest and easily find the high-quality, trusted sources that accelerate your research and learning.
Nurture students' learning skills and critical thinking with Alethea. This AI-based coach guides students to the core of their course readings, helping them distill key takeaways and prepare for effective class discussion.
Transform your library discovery experience, providing an ideal starting point for users seeking to find and explore learning and research materials. Answers are grounded in the Ex Libris Central Discovery Index, one of the world's most extensive scholarly indexes.
The Academic AI platform serves as a technology backbone, enabling accelerated and consistent deployment of AI capabilities across our portfolio of solutions.
The Clarivate Academia AI Advisory Council is being formed to ensure that generative AI is developed in collaboration with the academic community. The council will help foster the responsible design and application of GenAI in academic settings, including best practices, recommendations and guardrails.
At Clarivate, we’ve been using AI and machine learning for years, guided by our AI principles.
We do not train public LLMs. We use commercially pre-trained large language models as part of our information retrieval and augmentation framework. Currently, this includes a Retrieval Augmented Generation (RAG) architecture, among other advanced techniques. While we use pre-trained LLMs to support the creation of narrative content, the facts in that content come from our trusted academic sources. We test this setup rigorously to ensure academic integrity and alignment with the academic ecosystem. Testing includes validation of responses by academic subject matter experts who evaluate the outputs for accuracy and relevance. Additionally, we conduct extensive user testing that involves real-world research and learning scenarios to further refine accuracy and performance.
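To illustrate the general pattern, the sketch below shows how a RAG-style flow grounds generated narrative in retrieved sources: documents are pulled from a curated corpus, and the model is instructed to answer only from them. The corpus, retriever and prompt wording are illustrative assumptions, not Clarivate's actual implementation.

```python
# A minimal sketch of a Retrieval Augmented Generation (RAG) flow. The corpus,
# keyword-overlap retriever and prompt format are hypothetical stand-ins for a
# production index and a commercially pre-trained LLM.

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    title: str
    text: str


def retrieve(query: str, corpus: list[Document], k: int = 3) -> list[Document]:
    """Rank documents by simple keyword overlap with the query (stand-in for a real index)."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(d.text.lower().split())), d) for d in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:k] if score > 0]


def build_prompt(query: str, sources: list[Document]) -> str:
    """Constrain the LLM to answer only from the retrieved, trusted sources."""
    context = "\n\n".join(f"[{d.doc_id}] {d.title}\n{d.text}" for d in sources)
    return (
        "Answer the question using only the sources below. "
        "Cite source IDs; say 'not found' if the sources do not cover it.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


if __name__ == "__main__":
    corpus = [
        Document("SRC:1", "Citation analysis", "Citation counts are one measure of scholarly impact."),
        Document("SRC:2", "Peer review", "Peer review evaluates research before publication."),
    ]
    query = "How is scholarly impact measured?"
    prompt = build_prompt(query, retrieve(query, corpus))
    print(prompt)  # This grounded prompt would then be sent to a pre-trained LLM.
```

In this pattern the LLM supplies the narrative wording, while every factual claim it can draw on comes from the retrieved, curated sources included in the prompt.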
We are committed to the highest standards of user privacy and security. We do not share or pass any publisher content, library-owned materials, or user data to large language models (LLMs) for any purpose.
We strongly believe that we have a critical responsibility to the academic community to mitigate AI-induced inaccuracies. We continuously test our solutions and the results they produce, including through dedicated beta programs and close collaboration with customers and subject matter experts. Our data science expertise helps ensure system accuracy, fairness, robustness and interpretability. Pairing this with our trustworthy, curated content, we significantly reduce the risk of ‘hallucinations’ and misinformation.
The ranking and prioritization of sources by our AI-based discovery tools varies according to the specific characteristics of each query, the user persona, and the context in which the query is made.
The approach is similar to how ranking has traditionally been done in our discovery solutions: understanding the query, the user and the context enables us to present the most relevant and valuable sources first, ensuring that the information provided matches the user's needs as closely as possible.
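As a rough illustration of how such signals might combine, the sketch below blends a base relevance score with persona-dependent adjustments. The fields, personas and weights are hypothetical assumptions, not Clarivate's actual ranking model.

```python
# A hypothetical sketch of context-aware ranking: a base relevance score is
# adjusted by signals such as user persona. Fields and weights are illustrative.

from dataclasses import dataclass


@dataclass
class Source:
    title: str
    relevance: float       # base relevance to the query (0..1)
    peer_reviewed: bool
    recency_years: int     # years since publication


def rank(sources: list[Source], persona: str) -> list[Source]:
    """Order sources by a blended score that reflects who is asking and in what context."""
    def score(s: Source) -> float:
        value = s.relevance
        if persona == "researcher" and s.peer_reviewed:
            value += 0.2   # researchers tend to favor peer-reviewed work
        if persona == "student" and s.recency_years <= 5:
            value += 0.1   # students tend to favor current course-relevant material
        return value

    return sorted(sources, key=score, reverse=True)


if __name__ == "__main__":
    results = rank(
        [
            Source("Review article", 0.7, True, 2),
            Source("Preprint", 0.8, False, 1),
        ],
        persona="researcher",
    )
    print([s.title for s in results])
```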