Research & Development
Advancing the State of the Art in Truly Open Source AI
Training Sparse MoE Text Embedding Models
Under Review
Introduces the first general-purpose mixture-of-experts text embedding model, which achieves state-of-the-art performance on the MIRACL benchmark. The model is truly open source, meaning the training data, weights, and code are all available and permissively licensed.
CoRNStack: High-Quality Contrastive Data for Better Code Ranking
ICLR 2025
An open dataset for training state-of-the-art code embedding models. Work done in collaboration with the University of Illinois at Urbana-Champaign.
Tracking the Perspectives of Interacting Language Models
EMNLP 2024
Develops and studies metrics for understanding information diffusion in communication networks of LLMs. Work done in collaboration with Johns Hopkins University.
Nomic Embed: Training a Reproducible Long Context Text Embedder
TMLR 2024
The first truly open (i.e., open data, weights, and code) text embedding model that outperforms OpenAI Ada. Work done in collaboration with Cornell University.
Nomic Embed Vision: Expanding the Latent Space
arXiv 2024
The first multimodal embedding model to achieve high performance on text-text, text-image, and image-image tasks with a single unified latent space.
The Landscape of Biomedical Research
Cell Patterns Cover 2024
The first systematic study of the entirety of PubMed from an information cartography perspective. Work done in collaboration with the University of Tübingen.
Embedding-Based Inference on Generative Models
arXiv 2024
An extension of data kernel methods to black-box settings. Work done in collaboration with Johns Hopkins University.
GPT4All: An Ecosystem of Open Source Compressed Language Models
EMNLP 2023
How the first open source LLM to surpass GPT-3.5's performance grew from a model into a movement. Work done in collaboration with the GPT4All community.
Comparing Foundation Models using Data Kernels
arXiv 2023
A method for statistically rigorous comparison of embedding spaces without labeled data. Work done in collaboration with Johns Hopkins University.
Mapping Wikipedia with BERT and UMAP
IEEE VIS 2022
The first systematic study of the entirety of English Wikipedia from an information cartography perspective. Work done in collaboration with New York University.