Dataset Preview

The full dataset viewer is not available; only a preview of the rows is shown. Dataset generation failed with a cast error.

Error code: DatasetGenerationCastError
Message: An error occurred while generating the dataset.

All the data files must have the same columns, but at some point there are 7 new columns ({'Code', 'RQ', 'Original', 'Q', 'Type', 'Industry', 'Respondent_ID'}) and 10 missing columns ({'I3_Finance', 'Research_Implication', 'I1_Tech', 'Sub_Code', 'I4_Telecom', 'Code_Category', 'Cross_Interview_Pattern', 'I2_Energy', 'I6_Education', 'I5_FMCG'}).

This happened while the csv dataset builder was generating data using:

hf://datasets/itseffi/epfl-enterprise-osai-adoption-research-data/EPFL_Survey_Qualitative_Coding.csv (at revision 0a98ccfadf68b00256cd06f602fff6d317a457b7)

Please either edit the data files to have matching columns, or separate them into different configurations (see the docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Preview rows (all 10 columns are strings):

Code_Category | Sub_Code | I1_Tech | I2_Energy | I3_Finance | I4_Telecom | I5_FMCG | I6_Education | Cross_Interview_Pattern | Research_Implication
--- | --- | --- | --- | --- | --- | --- | --- | --- | ---
Decision_Factors | Cost_Performance_Tradeoff | Primary Driver | Secondary | Not Primary | Balanced | Monitored | Secondary | Industry-specific cost prioritization | RQ2: Decision factors vary by industry risk tolerance
Decision_Factors | Compliance_Security | Secondary | Primary | Primary | Secondary | Primary | Secondary | Regulated industries prioritize compliance | RQ2: Risk tolerance drives decision factors
Decision_Factors | Technical_Suitability | Primary | Secondary | Secondary | Primary | Primary | Primary | Technical fit matters across industries | RQ2: Technical requirements are universal
Integration_Challenges | Expertise_Gaps | Present | Present | Present | Present | Present | Present | Universal skill gap barrier | RQ2: Organizational readiness is critical
Integration_Challenges | Documentation_Gaps | Present | Present | Present | Present | Present | Present | Universal documentation need | RQ2: Knowledge transfer barriers exist
Integration_Challenges | Legacy_Systems | Present | Present | Present | Present | Present | Present | Universal technical debt | RQ2: Infrastructure challenges are common
Motivations | Control_Customization | Primary | Secondary | Not Present | Primary | Secondary | Primary | Control needs vary by industry | RQ2: Motivations align with industry needs
Motivations | Risk_Aversion | Secondary | Primary | Primary | Secondary | Primary | Secondary | Risk tolerance drives adoption | RQ2: Risk profile determines adoption path
Governance_Gates | Licensing_Documentation | Present | Present | Present | Present | Present | Present | Universal governance requirement | RQ3: Gates are universal prerequisites
Governance_Gates | Compliance_Certification | Present | Present | Present | Present | Present | Present | Universal compliance need | RQ3: Compliance gates are mandatory
Governance_Gates | Provenance_Traceability | Present | Present | Present | Present | Present | Present | Universal transparency need | RQ3: Transparency is universal requirement
Execution_Levers | Cost_Optimization | Primary | Conditional | Not Active | Primary | Primary | Secondary | Gate-dependent activation | RQ3: Levers activate after gates
Execution_Levers | Performance_Latency | Primary | Secondary | Not Active | Primary | Secondary | Secondary | Performance focus varies | RQ3: Performance levers are conditional
Execution_Levers | Operational_Efficiency | Secondary | Secondary | Not Active | Secondary | Primary | Secondary | Operational focus varies | RQ3: Operational levers depend on gates
Gate_Lever_Sequence | Sequential_Activation | Present | Present | Present | Present | Present | Present | Universal sequential model | RQ3: Gates must precede levers
Gate_Lever_Sequence | Conditional_Levers | Present | Present | Present | Present | Present | Present | Universal conditional activation | RQ3: Levers depend on gate satisfaction
Gate_Lever_Sequence | Governance_First | Present | Present | Present | Present | Present | Present | Universal governance priority | RQ3: Governance-first adoption model

End of preview.

EPFL Enterprise Open-Source AI Adoption Research Dataset

Dataset Summary

This dataset contains mixed-methods research data from 100 organizations regarding their strategic adoption of open-source AI through the Hugging Face ecosystem. The research was conducted at EPFL (École Polytechnique Fédérale de Lausanne) and supports the development of the Gate-Lever framework for enterprise open-source AI adoption.

Dataset Structure

This dataset is organized into 4 configurations to handle different data schemas:

Configuration 1: Survey Data (survey_data)

  • EPFL_Enterprise_OSAI_Adoption_Survey_Data.csv: Main survey responses from 100 organizations

Configuration 2: Interview Data (interview_data)

  • EPFL_Enterprise_OSAI_Adoption_Interview_Data.csv: Interview transcripts and analysis from 6 organizations

Configuration 3: Coding Data (coding_data)

  • EPFL_Coding_Matrix.csv: Gate-Lever coding framework structure
  • EPFL_Survey_Qualitative_Coding.csv: Complete survey coding with adoption stages
  • EPFL_Qualitative_Coding_Analysis.csv: Coding frequency analysis
  • EPFL_Code_Frequencies_with_RQ.csv: Coding frequencies mapped to research questions

Configuration 4: Supporting Data (supporting_data)

  • EPFL_Quantitative_Summary_Statistics.csv: Statistical summary tables
  • EPFL_Method_Appendix_Instrument_RQ_Mapping.csv: Survey questions mapped to research questions

Research Framework

This dataset supports the Gate-Lever Framework for strategic open-source AI adoption:

  • Governance Gates: Compliance, data privacy, security, documentation, licensing clarity
  • Execution Levers: Performance, cost efficiency, customization, support, time-to-value
  • Adoption Stages: Pre-adoption, Adopting, Adopted
  • Organizational Clusters: Performance-Driven Adopters, Governance-Locked Organizations, Balanced Transition Organizations
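
The coding preview above (Gate_Lever_Sequence: Sequential_Activation, Conditional_Levers, Governance_First) describes a governance-first ordering: execution levers only become active once the governance gates are satisfied. Below is a minimal illustrative sketch of that ordering; the gate and lever names mirror the bullets above, while the example organization and the activation logic are hypothetical and not taken from the dataset.

# Illustrative sketch of the governance-first Gate-Lever sequencing.
# Gate/lever names follow the framework bullets above; the Organization
# record and the threshold logic are hypothetical.

GOVERNANCE_GATES = ["compliance", "data_privacy", "security", "documentation", "licensing_clarity"]
EXECUTION_LEVERS = ["performance", "cost_efficiency", "customization", "support", "time_to_value"]

def active_levers(gate_status: dict, lever_priority: dict) -> list:
    """Return the levers an organization can act on.

    Following the governance-first model, levers only activate once every
    gate is satisfied; otherwise the organization remains governance-locked.
    """
    if not all(gate_status.get(g, False) for g in GOVERNANCE_GATES):
        return []  # governance-locked: no levers active yet
    return [lv for lv in EXECUTION_LEVERS if lever_priority.get(lv) in ("Primary", "Secondary")]

# Hypothetical example organization
gates = {g: True for g in GOVERNANCE_GATES}
levers = {"performance": "Primary", "cost_efficiency": "Secondary"}
print(active_levers(gates, levers))  # ['performance', 'cost_efficiency']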

Key Variables

Survey Data Variables

  • Respondent_ID: Anonymous respondent identifier
  • Org_Size: Organization size category
  • Industry: Industry sector (12 categories)
  • HF_Familiarity: Hugging Face familiarity (1-5 scale)
  • AI_Solution_Type: Current AI solution type (Proprietary/Both/Open-source)
  • Decision_Factors: Key decision factors (open text)
  • Integration_Difficulty: Integration challenge rating (1-5 scale)
  • Adoption_Intention: Plan to increase Hugging Face use (Yes/No/Not sure)
  • AI_Stage_Overall: AI maturity stage (Early/Intermediate/Advanced)
  • OpenSource_Stage: Open-Source AI Adoption stage (Pre-adoption/Adopting/Adopted)
  • Gate_Index: Governance/compliance factors index (0-1)
  • Lever_Index: Performance/optimization factors index (0-1)
  • Adoption_Intention_Binary: Binary adoption intention (0/1)
  • Cluster: Organizational profile cluster (0-2)
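
One way to get a feel for these variables is to load the survey configuration into pandas and summarize the Gate/Lever indices by cluster. This is a sketch only; it assumes the survey configuration exposes a single "train" split and that the column names match the list above.

from datasets import load_dataset

# Sketch: explore the survey variables; assumes a "train" split and the
# column names listed above.
survey = load_dataset("itseffi/epfl-enterprise-osai-adoption-research-data", "survey_data", split="train")
df = survey.to_pandas()

# Mean Gate/Lever indices and adoption intention per organizational cluster
print(df.groupby("Cluster")[["Gate_Index", "Lever_Index", "Adoption_Intention_Binary"]].mean())

# Distribution of open-source adoption stages
print(df["OpenSource_Stage"].value_counts())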

Interview Data Variables

  • ID: Interview identifier (I1-I6)
  • Industry: Participant industry sector
  • Role: Participant role/title
  • Stage: Adoption stage classification
  • Theme 1-6: Coded themes from thematic analysis
  • Framework_Link: Connection to Gate-Lever framework
  • Main_Takeaway: Key insights from each interview

Methodology

Data Collection

  • Survey: Online survey with structured and open-text questions
  • Interviews: Semi-structured interviews (30-45 minutes each)
  • Coding: Systematic thematic analysis using Gate-Lever framework
  • Validation: Inter-coder reliability testing (85% agreement)
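
The 85% figure above is simple inter-coder agreement; the exact procedure is not documented in this card. A minimal sketch of how percent agreement (and, for comparison, Cohen's kappa) could be computed over two coders' label assignments is shown below; the example labels are hypothetical.

from collections import Counter

# Hypothetical code assignments from two independent coders for the same excerpts.
coder_a = ["Governance_Gates", "Execution_Levers", "Motivations", "Governance_Gates"]
coder_b = ["Governance_Gates", "Execution_Levers", "Decision_Factors", "Governance_Gates"]

# Simple percent agreement
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"Percent agreement: {agreement:.0%}")  # 75% for this toy example

# Cohen's kappa corrects percent agreement for chance agreement
def cohens_kappa(x, y):
    n = len(x)
    po = sum(a == b for a, b in zip(x, y)) / n
    counts_x, counts_y = Counter(x), Counter(y)
    pe = sum(counts_x[c] * counts_y.get(c, 0) for c in counts_x) / (n * n)
    return (po - pe) / (1 - pe)

print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")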

Statistical Analysis

  • Descriptive Statistics: Sample characteristics and variable distributions
  • Correlation Analysis: Gate-Lever relationships and adoption predictors
  • Cluster Analysis: K-means clustering for organizational segmentation
  • Regression Analysis: Logistic and linear regression for adoption prediction
  • ANOVA: Integration difficulty differences across adoption stages
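
As a rough sketch of how the clustering and regression steps might be reproduced (assuming scikit-learn is available, a "train" split, and the column names listed under Key Variables; the authors' exact preprocessing is not documented here):

from datasets import load_dataset
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Sketch only: load the survey configuration and fit the two analyses
df = load_dataset("itseffi/epfl-enterprise-osai-adoption-research-data", "survey_data", split="train").to_pandas()

X = df[["Gate_Index", "Lever_Index"]].fillna(0)
y = df["Adoption_Intention_Binary"].fillna(0).astype(int)

# K-means with three clusters, matching the three organizational profiles
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
df["kmeans_cluster"] = kmeans.labels_

# Logistic regression predicting binary adoption intention from the two indices
logit = LogisticRegression().fit(X, y)
print("Coefficients (Gate_Index, Lever_Index):", logit.coef_)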

Usage

Loading the Dataset

from datasets import load_dataset

# Load specific configurations
survey_data = load_dataset("itseffi/epfl-enterprise-osai-adoption-research-data", "survey_data")
interview_data = load_dataset("itseffi/epfl-enterprise-osai-adoption-research-data", "interview_data")
coding_data = load_dataset("itseffi/epfl-enterprise-osai-adoption-research-data", "coding_data")
supporting_data = load_dataset("itseffi/epfl-enterprise-osai-adoption-research-data", "supporting_data")

# Or load all configurations
dataset = load_dataset("itseffi/epfl-enterprise-osai-adoption-research-data")
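
Each configuration loads as a DatasetDict; the split name below is assumed to be "train". A loaded configuration can then be converted to pandas for analysis:

# Inspect the available splits and convert to pandas (split name assumed to be "train")
print(survey_data)
survey_df = survey_data["train"].to_pandas()
print(survey_df.head())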

Suitable For

  • Open-source AI adoption frameworks
  • Strategic AI adoption research
  • Enterprise technology adoption studies
  • Mixed-methods research validation
  • Technology adoption theory development

Limitations

  • Sample limited to European organizations
  • Self-reported data may introduce bias
  • Focus on Hugging Face ecosystem specifically

License

This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

Contact

For questions about this dataset, please use the Hugging Face repository discussions.
