metadata
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: >-
What challenge related to prompt injection has seen little progress since
September 2022?
sentences:
- >-
Except... you can run generated code to see if it’s correct. And with
patterns like ChatGPT Code Interpreter the LLM can execute the code
itself, process the error message, then rewrite it and keep trying until
it works!
So hallucination is a much lesser problem for code generation than for
anything else. If only we had the equivalent of Code Interpreter for
fact-checking natural language!
How should we feel about this as software engineers?
On the one hand, this feels like a threat: who needs a programmer if
ChatGPT can write code for you?
- >-
Prompt injection is a natural consequence of this gullibility. I’ve seen
precious little progress on tackling that problem in 2024, and we’ve
been talking about it since September 2022.
I’m beginning to see the most popular idea of “agents” as dependent on
AGI itself. A model that’s robust against gullibility is a very tall
order indeed.
Evals really matter
Anthropic’s Amanda Askell (responsible for much of the work behind
Claude’s Character):
- >-
Industry’s Tardy Response to the AI Prompt Injection Vulnerability on
RedMonk Conversations
Posted 31st December 2023 at 11:59 pm · Follow me on Mastodon, Bluesky,
Twitter or subscribe to my newsletter
More recent articles
Qwen 3 offers a case study in how to effectively release a model - 29th
April 2025
Watching o3 guess a photo's location is surreal, dystopian and wildly
entertaining - 26th April 2025
Exploring Promptfoo via Dave Guarino's SNAP evals - 24th April 2025
This is Stuff we figured out about AI in 2023 by Simon Willison, posted
on 31st December 2023.
Part of series LLMs annual review
Stuff we figured out about AI in 2023 - Dec. 31, 2023, 11:59 p.m.
Things we learned about LLMs in 2024 - Dec. 31, 2024, 6:07 p.m.
- source_sentence: Which company released the QwQ model under an Apache 2.0 license?
sentences:
- >-
I also gave a bunch of talks and podcast appearances. I’ve started
habitually turning my talks into annotated presentations—here are my
best from 2023:
Prompt injection explained, with video, slides, and a transcript
Catching up on the weird world of LLMs
Making Large Language Models work for you
Open questions for AI engineering
Embeddings: What they are and why they matter
Financial sustainability for open source projects at GitHub Universe
And in podcasts:
What AI can do for you on the Theory of Change
Working in public on Path to Citus Con
LLMs break the internet on the Changelog
Talking Large Language Models on Rooftop Ruby
Thoughts on the OpenAI board situation on Newsroom Robots
- >-
OpenAI are not the only game in town here. Google released their first
entrant in the category, gemini-2.0-flash-thinking-exp, on December
19th.
Alibaba’s Qwen team released their QwQ model on November 28th—under an
Apache 2.0 license, and that one I could run on my own machine. They
followed that up with a vision reasoning model called QvQ on December
24th, which I also ran locally.
DeepSeek made their DeepSeek-R1-Lite-Preview model available to try out
through their chat interface on November 20th.
To understand more about inference scaling I recommend Is AI progress
slowing down? by Arvind Narayanan and Sayash Kapoor.
- >-
“Agents” still haven’t really happened yet
I find the term “agents” extremely frustrating. It lacks a single, clear
and widely understood meaning... but the people who use the term never
seem to acknowledge that.
If you tell me that you are building “agents”, you’ve conveyed almost no
information to me at all. Without reading your mind I have no way of
telling which of the dozens of possible definitions you are talking
about.
- source_sentence: >-
How has Apple’s MLX library impacted the performance of running machine
learning models on Apple Silicon?
sentences:
- >-
These abilities are just a few weeks old at this point, and I don’t
think their impact has been fully felt yet. If you haven’t tried them
out yet you really should.
Both Gemini and OpenAI offer API access to these features as well.
OpenAI started with a WebSocket API that was quite challenging to use,
but in December they announced a new WebRTC API which is much easier to
get started with. Building a web app that a user can talk to via voice
is easy now!
Prompt driven app generation is a commodity already
This was possible with GPT-4 in 2023, but the value it provides became
evident in 2024.
- >-
On paper, a 64GB Mac should be a great machine for running models due to
the way the CPU and GPU can share the same memory. In practice, many
models are released as model weights and libraries that reward NVIDIA’s
CUDA over other platforms.
The llama.cpp ecosystem helped a lot here, but the real breakthrough has
been Apple’s MLX library, “an array framework for Apple Silicon”. It’s
fantastic.
Apple’s mlx-lm Python library supports running a wide range of
MLX-compatible models on my Mac, with excellent performance.
mlx-community on Hugging Face offers more than 1,000 models that have
been converted to the necessary format.
- >-
On the one hand, we keep on finding new things that LLMs can do that we
didn’t expect—and that the people who trained the models didn’t expect
either. That’s usually really fun!
But on the other hand, the things you sometimes have to do to get the
models to behave are often incredibly dumb.
Does ChatGPT get lazy in December, because its hidden system prompt
includes the current date and its training data shows that people
provide less useful answers coming up to the holidays?
The honest answer is “maybe”! No-one is entirely sure, but if you give
it a different date its answers may skew slightly longer.
- source_sentence: >-
What are some ways to run local, private large language models (LLMs)
mentioned in the context?
sentences:
- >-
We don’t yet know how to build GPT-4
Frustratingly, despite the enormous leaps ahead we’ve had this year, we
are yet to see an alternative model that’s better than GPT-4.
OpenAI released GPT-4 in March, though it later turned out we had a
sneak peek of it in February when Microsoft used it as part of the new
Bing.
This may well change in the next few weeks: Google’s Gemini Ultra has
big claims, but isn’t yet available for us to try out.
The team behind Mistral are working to beat GPT-4 as well, and their
track record is already extremely strong considering their first public
model only came out in September, and they’ve released two significant
improvements since then.
- >-
I’m still trying to figure out the best patterns for doing this for my
own work. Everyone knows that evals are important, but there remains a
lack of great guidance for how to best implement them—I’m tracking this
under my evals tag. My SVG pelican riding a bicycle benchmark is a pale
imitation of what a real eval suite should look like.
Apple Intelligence is bad, Apple’s MLX library is excellent
As a Mac user I’ve been feeling a lot better about my choice of platform
this year.
Last year it felt like my lack of a Linux/Windows machine with an
NVIDIA GPU was a huge disadvantage in terms of trying out new models.
- >-
I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly
great model) on my iPhone. You can install several different apps to get
your own, local, completely private LLM. My own LLM project provides a
CLI tool for running an array of different models via plugins.
You can even run them entirely in your browser using WebAssembly and the
latest Chrome!
Hobbyists can build their own fine-tuned models
I said earlier that building an LLM was still out of reach of hobbyists.
That may be true for training from scratch, but fine-tuning one of those
models is another matter entirely.
- source_sentence: >-
What is the most important factor in determining the quality of a trained
model according to the context?
sentences:
- >-
Intuitively, one would expect that systems this powerful would take
millions of lines of complex code. Instead, it turns out a few hundred
lines of Python is genuinely enough to train a basic version!
What matters most is the training data. You need a lot of data to make
these things work, and the quantity and quality of the training data
appears to be the most important factor in how good the resulting model
is.
If you can gather the right data, and afford to pay for the GPUs to
train it, you can build an LLM.
- >-
Now add a walrus: Prompt engineering in DALL-E 3
32.8k
41.2k
Web LLM runs the vicuna-7b Large Language Model entirely in your
browser, and it’s very impressive
32.5k
38.2k
ChatGPT can’t access the internet, even though it really looks like it
can
30.5k
34.2k
Stanford Alpaca, and the acceleration of on-device large language model
development
29.7k
35.7k
Run Llama 2 on your own Mac using LLM and Homebrew
27.9k
33.6k
Midjourney 5.1
26.7k
33.4k
Think of language models like ChatGPT as a “calculator for words”
25k
31.8k
Multi-modal prompt injection image attacks against GPT-4V
23.7k
27.4k
- >-
I think people who complain that LLM improvement has slowed are often
missing the enormous advances in these multi-modal models. Being able to
run prompts against images (and audio and video) is a fascinating new
way to apply these models.
Voice and live camera mode are science fiction come to life
The audio and live video modes that have started to emerge deserve a
special mention.
The ability to talk to ChatGPT first arrived in September 2023, but it
was mostly an illusion: OpenAI used their excellent Whisper
speech-to-text model and a new text-to-speech model (creatively named
tts-1) to enable conversations with the ChatGPT mobile apps, but the
actual model just saw text.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.9166666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9166666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9166666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 1
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9692441461309548
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9583333333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9583333333333334
name: Cosine Map@100
SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-l
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
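The final Normalize() module means every embedding this model produces has unit length, so cosine similarity between two embeddings reduces to a plain dot product. A minimal NumPy sketch of that property, using small made-up vectors rather than real 1024-dimensional model outputs:

```python
import numpy as np

# Toy stand-ins for two pooled sentence embeddings (the real model emits
# 1024-dimensional vectors; 4 dimensions keep the sketch readable).
a = np.array([3.0, 1.0, 2.0, 0.5])
b = np.array([1.0, 2.0, 0.0, 1.5])

# The Normalize() module rescales each embedding to unit length.
a_norm = a / np.linalg.norm(a)
b_norm = b / np.linalg.norm(b)

# Cosine similarity of the raw vectors...
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# ...equals the dot product of the normalized vectors.
assert np.isclose(np.dot(a_norm, b_norm), cosine)
```

This is why dot-product and cosine retrieval are interchangeable for this model: the normalization step makes them identical.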
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```shell
pip install -U sentence-transformers
```

Then you can load this model and run inference:

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("manmah/legal-ft-717cb2ad-5d19-4d52-ad34-5656c2895fa9")
# Run inference
sentences = [
    'What is the most important factor in determining the quality of a trained model according to the context?',
    'Intuitively, one would expect that systems this powerful would take millions of lines of complex code. Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version!\nWhat matters most is the training data. You need a lot of data to make these things work, and the quantity and quality of the training data appears to be the most important factor in how good the resulting model is.\nIf you can gather the right data, and afford to pay for the GPUs to train it, you can build an LLM.',
    'I think people who complain that LLM improvement has slowed are often missing the enormous advances in these multi-modal models. Being able to run prompts against images (and audio and video) is a fascinating new way to apply these models.\nVoice and live camera mode are science fiction come to life\nThe audio and live video modes that have started to emerge deserve a special mention.\nThe ability to talk to ChatGPT first arrived in September 2023, but it was mostly an illusion: OpenAI used their excellent Whisper speech-to-text model and a new text-to-speech model (creatively named tts-1) to enable conversations with the ChatGPT mobile apps, but the actual model just saw text.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
| Metric | Value |
|---|---|
| cosine_accuracy@1 | 0.9167 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9167 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9167 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9692 |
| cosine_mrr@10 | 0.9583 |
| cosine_map@100 | 0.9583 |
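The cutoff metrics above are tightly coupled because each query in this evaluation has exactly one relevant passage: once that passage appears in the top k for every query, recall@k is 1.0 and precision@k is 1/k. A small sketch of that arithmetic, which reproduces the precision values in the table:

```python
# With a single relevant passage per query, precision@k is fully
# determined by recall@k: precision@k = recall@k * relevant_per_query / k.
def precision_at_k(recall_at_k: float, k: int, relevant_per_query: int = 1) -> float:
    """Precision@k implied by recall@k when each query has one relevant doc."""
    return recall_at_k * relevant_per_query / k

print(precision_at_k(1.0, 3))   # ~0.3333, matches cosine_precision@3
print(precision_at_k(1.0, 5))   # 0.2, matches cosine_precision@5
print(precision_at_k(1.0, 10))  # 0.1, matches cosine_precision@10
```

So the falling precision@k values here signal nothing negative: they are a mechanical consequence of having one relevant passage per query.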
Training Details
Training Dataset
Unnamed Dataset
- Size: 156 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 156 samples:

| | sentence_0 | sentence_1 |
|:---|:---|:---|
| type | string | string |
| details | min: 12 tokens<br>mean: 20.92 tokens<br>max: 35 tokens | min: 43 tokens<br>mean: 135.28 tokens<br>max: 214 tokens |
- Samples:

| sentence_0 | sentence_1 |
|:---|:---|
| What are the two main categories of AI agents described in the context? | The two main categories I see are people who think AI agents are obviously things that go and act on your behalf—the travel agent model—and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem. The term “autonomy” is often thrown into the mix too, again without including a clear definition.<br>(I also collected 211 definitions on Twitter a few months ago—here they are in Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)<br>Whatever the term may mean, agents still have that feeling of perpetually “coming soon”. |
| How is the term "autonomy" treated in discussions about AI agents according to the context? | The two main categories I see are people who think AI agents are obviously things that go and act on your behalf—the travel agent model—and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem. The term “autonomy” is often thrown into the mix too, again without including a clear definition.<br>(I also collected 211 definitions on Twitter a few months ago—here they are in Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)<br>Whatever the term may mean, agents still have that feeling of perpetually “coming soon”. |
| What colors and patterns are described on the two butterflies positioned in the feeder? | Against this photo of butterflies at the California Academy of Sciences:<br>A shallow dish, likely a hummingbird or butterfly feeder, is red. Pieces of orange slices of fruit are visible inside the dish.<br>Two butterflies are positioned in the feeder, one is a dark brown/black butterfly with white/cream-colored markings. The other is a large, brown butterfly with patterns of lighter brown, beige, and black markings, including prominent eye spots. The larger brown butterfly appears to be feeding on the fruit. |

- Loss: MatryoshkaLoss with these parameters:

  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
  ```
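MatryoshkaLoss trains the model so that the leading dimensions of each embedding carry most of the signal, which lets you truncate a full embedding to any of the trained dimensions and re-normalize it for faster, smaller-index retrieval. A sketch of that truncation step, using a random vector as a stand-in for a real model output:

```python
import numpy as np

# Stand-in for a full-width model embedding (the real model outputs
# 1024 dimensions; a seeded random vector keeps the sketch reproducible).
full = np.random.default_rng(0).normal(size=1024)

def truncate_embedding(emb: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and rescale to unit length."""
    head = emb[:dim]
    return head / np.linalg.norm(head)

# The matryoshka_dims configured above.
for dim in (768, 512, 256, 128, 64):
    small = truncate_embedding(full, dim)
    print(dim, small.shape, round(float(np.linalg.norm(small)), 6))
```

Because the model ends with a Normalize() module, re-normalizing after truncation keeps cosine similarity meaningful at every configured dimension.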
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin
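These non-default values map directly onto the Sentence Transformers training arguments. A hedged sketch of how they would be declared (the output_dir path is a placeholder, not taken from this card):

```python
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    MultiDatasetBatchSamplers,
)

# Only the non-default hyperparameters from this card are set here;
# everything else keeps the library defaults listed below.
args = SentenceTransformerTrainingArguments(
    output_dir="models/finetuned-arctic-embed-l",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```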
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
| Epoch | Step | cosine_ndcg@10 |
|---|---|---|
| 1.0 | 16 | 0.9554 |
| 2.0 | 32 | 0.9484 |
| 3.0 | 48 | 0.9692 |
| 3.125 | 50 | 0.9692 |
| 4.0 | 64 | 0.9692 |
| 5.0 | 80 | 0.9692 |
| 6.0 | 96 | 0.9692 |
| 6.25 | 100 | 0.9692 |
| 7.0 | 112 | 0.9692 |
| 8.0 | 128 | 0.9692 |
| 9.0 | 144 | 0.9692 |
| 9.375 | 150 | 0.9692 |
| 10.0 | 160 | 0.9692 |
Framework Versions
- Python: 3.13.2
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}