
AL-GR: A Large-scale Generative Recommendation Dataset

Dataset Summary

AL-GR is a large-scale dataset designed for generative recommendation tasks using Large Language Models (LLMs). The core idea is to transform user historical behavior sequences into natural language prompts, enabling an LLM to learn and predict a user's subsequent actions in an e-commerce scenario.

The dataset contains over 400 million behavior sequences. Each sample includes three fields: system, user, and answer. The system field defines the LLM's role and task, the user field provides the sequence of historical user behaviors, and the answer field contains the next sequence of actions the model is expected to predict.

This format can be used directly for instruction fine-tuning, training an LLM to perform sequential recommendation.

Supported Tasks and Leaderboards

  • generative-recommendation: This dataset primarily supports the generative recommendation task, where the model needs to generate multiple subsequent behavior codes at once based on the given history.

Dataset Structure

Data Instances

A typical data instance is as follows. Note that the answer field contains multiple subsequent behavior codes, concatenated as a single string.

{
  "system": "You are a recommendation system. Based on the user's historical behavior, predict the user's next action in an e-commerce scenario. I will provide a sequence of semantic codes for continuous behaviors, arranged in the order of user clicks.",
  "user": "The current user's historical behavior is as follows: C1220C8322C20452C6084C10195C20067C3256C14673C21112C705",
  "answer": "C9988C7766C5544"
}

Data Fields

  • system (string): A system-level instruction for the LLM, describing its role and task.
  • user (string): The user's specific request, containing a time-ordered sequence of historical behavior codes.
  • answer (string): The user's subsequent sequence of behavior codes that the model needs to predict. It is a single string concatenated from multiple semantic IDs (e.g., C9988, C7766, C5544); see the parsing sketch below.
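
Since every code follows the same C<digits> pattern, the concatenated answer string can be split back into individual IDs with a regular expression. Below is a minimal sketch; split_codes is an illustrative helper, not part of the dataset:

import re

# Illustrative helper: recover individual semantic IDs from a
# concatenated answer string such as "C9988C7766C5544".
def split_codes(answer):
    # Each code is a 'C' followed by digits, per the examples above.
    return re.findall(r"C\d+", answer)

print(split_codes("C9988C7766C5544"))  # ['C9988', 'C7766', 'C5544']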

Data Splits

The dataset comprises over 400 million behavior sequences in total and is divided chronologically into three training splits, making it suitable for training and evaluating time-aware models. A loading sketch follows the table below.

Split   Description                 Number of Samples
s1      Early training data         [Number of s1 samples]
s2      Mid-period training data    [Number of s2 samples]
s3      Recent training data        [Number of s3 samples]
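
Because the splits are chronological, a natural protocol is to train on the earlier splits and hold out the most recent one for evaluation. The sketch below assumes each split is stored as a CSV under train_data/ (extrapolated from the tiny-subset path used in the Usage section); verify the actual file layout in the repository first:

from datasets import load_dataset

# Assumed file layout: check the repository's file listing before use.
train_ds = load_dataset(
    "AL-GR/AL-GR",
    data_files={"train": ["train_data/s1.csv", "train_data/s2.csv"]},
    split="train",
)
eval_ds = load_dataset("AL-GR/AL-GR", data_files="train_data/s3.csv", split="train")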

Dataset Creation

Source Data

This dataset originates from a large-scale, anonymized, real-world industrial e-commerce dataset, ensuring the authenticity and complexity of the data.

Data Curation & Annotations

The codes in the behavior sequences (e.g., C1220) are not simple item IDs but semantic IDs. They are obtained by discretizing rich multi-modal features (such as images, text descriptions, etc.). This method ensures that each ID encapsulates deep semantic information about the items, providing a solid foundation for the LLM's comprehension and generation capabilities.
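
The card does not specify the exact discretization method, so the sketch below is purely illustrative: it assigns each item a semantic ID by nearest-centroid quantization of its embedding. All names, shapes, and the codebook itself are assumptions:

import numpy as np

rng = np.random.default_rng(0)
item_embeddings = rng.normal(size=(100, 64))  # hypothetical multi-modal item features
codebook = rng.normal(size=(256, 64))         # hypothetical learned centroids

# Assign each item the index of its nearest centroid, yielding IDs like "C42".
dists = np.linalg.norm(item_embeddings[:, None, :] - codebook[None, :, :], axis=-1)
semantic_ids = [f"C{i}" for i in dists.argmin(axis=1)]
print(semantic_ids[:5])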

The dataset construction process is as follows (a sketch of the core steps appears after the list):

  1. Extract user behavior sessions from the source data.
  2. Split each session chronologically into a historical part (for the user field) and a future part to be predicted (for the answer field).
  3. Combine these with a predefined instruction template (the system field) to create samples suitable for instruction fine-tuning.
  4. Finally, partition all data chronologically into three splits: s1, s2, and s3.
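
A minimal sketch of steps 2 and 3, assuming a session is a time-ordered list of semantic IDs (the field names match the card; build_sample and the split point are illustrative):

SYSTEM_PROMPT = (
    "You are a recommendation system. Based on the user's historical behavior, "
    "predict the user's next action in an e-commerce scenario. I will provide a "
    "sequence of semantic codes for continuous behaviors, arranged in the order "
    "of user clicks."
)

def build_sample(session, history_len):
    # Step 2: split the session chronologically into history and future.
    history, future = session[:history_len], session[history_len:]
    # Step 3: combine with the instruction template.
    return {
        "system": SYSTEM_PROMPT,
        "user": "The current user's historical behavior is as follows: " + "".join(history),
        "answer": "".join(future),
    }

print(build_sample(["C1220", "C8322", "C20452", "C9988", "C7766"], history_len=3))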

Usage

You can easily load this dataset using the datasets library from Hugging Face:

from datasets import load_dataset

# Login using e.g. `huggingface-cli login` to access this dataset
# For the full AL-GR dataset, use:
# dataset = load_dataset("AL-GR/AL-GR")
# For a tiny demo subset, use:
dataset = load_dataset("AL-GR/AL-GR-Tiny", data_files="train_data/s1_tiny.csv", split="train")

# Inspect a sample
print(dataset[0])
# Output:
# {
#   'system': 'You are a recommendation system...',
#   'user': "The current user's historical behavior is as follows: C1220...",
#   'answer': 'C9988C7766C5544'
# }

Prompting

For inference or training, you would typically combine the system and user fields to form the model's input. Here is an example following the Llama-2-chat format:

# Load the tiny demo subset as in the Usage section above
from datasets import load_dataset

# Login using e.g. `huggingface-cli login` to access this dataset
dataset = load_dataset("AL-GR/AL-GR-Tiny", data_files="train_data/s1_tiny.csv", split="train")

sample = dataset[0]  # Access the first sample from the loaded split

# Prompt for inference (triple-quoted f-strings handle the multi-line template)
prompt = f"""<s>[INST] <<SYS>>
{sample['system']}
<</SYS>>

{sample['user']} [/INST]"""

# Full sequence for training
full_prompt = f"""<s>[INST] <<SYS>>
{sample['system']}
<</SYS>>

{sample['user']} [/INST] {sample['answer']} </s>"""

# The `prompt` or `full_prompt` can then be fed into a model for inference or training.
print("Inference Prompt Example:")
print(prompt)
print("\nTraining Prompt Example:")
print(full_prompt)
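
Alternatively, if you use the transformers library, the tokenizer can assemble the prompt from its own chat template instead of hard-coding the Llama-2 format. The checkpoint name below is only an example (and is gated on the Hub):

from transformers import AutoTokenizer

# Any chat model with a chat template works; this checkpoint is illustrative.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
messages = [
    {"role": "system", "content": sample["system"]},
    {"role": "user", "content": sample["user"]},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)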

Citation

If you use this dataset in your research, please cite:

License

This dataset is licensed under the Apache License 2.0.
