Jurisprudencia de la República Argentina - Sistema Argentino de Información Jurídica

This dataset is updated daily with information from SAIJ, using the SandboxAI library.

Format

The dataset format is as follows:

{
  "numero-sumario": "Identification number of the summary",
  "materia": "Area of law the case belongs to",
  "timestamp": "Date and time the record was created",
  "timestamp-m": "Date and time the record was last modified",
  "sumario": "Summary of the case",
  "caratula": "Title of the case",
  "descriptores": {
    "descriptor": [
      {
        "elegido": {
          "termino": "Term chosen to describe the case"
        },
        "preferido": {
          "termino": "Preferred term to describe the case"
        },
        "sinonimos": {
          "termino": ["List of synonyms"]
        }
      }
    ],
    "suggest": {
      "termino": ["List of suggested terms"]
    }
  },
  "fecha": "Date of the case",
  "instancia": "Judicial instance",
  "jurisdiccion": {
    "codigo": "Jurisdiction code",
    "descripcion": "Jurisdiction description",
    "capital": "Capital of the jurisdiction",
    "id-pais": "Country ID"
  },
  "numero-interno": "Internal case number",
  "provincia": "Province where the case is heard",
  "tipo-tribunal": "Type of court",
  "referencias-normativas": {
    "referencia-normativa": {
      "cr": "Cross reference",
      "id": "ID of the normative reference",
      "ref": "Normative reference"
    }
  },
  "fecha-alta": "Date the record was added",
  "fecha-mod": "Date the record was last modified",
  "fuente": "Source of the record",
  "uid-alta": "UID of creation",
  "uid-mod": "UID of modification",
  "texto": "Full text of the case",
  "id-infojus": "Infojus ID",
  "titulo": "Title of the summary",
  "guid": "GUID of the record"
}
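
As a rough illustration of how the nested fields can be traversed once a record is parsed, here is a minimal sketch. The record below is a made-up example that only mirrors the schema above; its values are not real SAIJ data. Note that in the raw JSON the "referencia-normativa" field appears sometimes as a single object and sometimes as a list, so the sketch normalizes it before iterating.

# Minimal sketch: traversing one record that follows the schema above.
# The `record` dict is a hypothetical example, not real SAIJ data.
import json

record = {
    "numero-sumario": "A0012345",
    "caratula": "Pérez c/ Estado Nacional s/ amparo",
    "descriptores": {
        "descriptor": [
            {
                "elegido": {"termino": "AMPARO"},
                "preferido": {"termino": "ACCION DE AMPARO"},
                "sinonimos": {"termino": ["RECURSO DE AMPARO"]},
            }
        ],
        "suggest": {"termino": ["DERECHOS CONSTITUCIONALES"]},
    },
    "referencias-normativas": {
        "referencia-normativa": {"cr": "LNS0002289", "id": "123", "ref": "Ley 16.986"}
    },
}

# Collect the chosen descriptor term for each descriptor entry
chosen_terms = [d["elegido"]["termino"] for d in record["descriptores"]["descriptor"]]

# "referencia-normativa" can be a single object or a list; normalize to a list
refs = record["referencias-normativas"]["referencia-normativa"]
if isinstance(refs, dict):
    refs = [refs]
ref_labels = [r["ref"] for r in refs]

print(json.dumps({"terms": chosen_terms, "refs": ref_labels}, ensure_ascii=False))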

Usage

You can use this dataset without downloading it in full, fetching filtered data with a single query. You can do it like this:

# In this example, we filter entries by date
import requests

API_TOKEN = "your_api_token"  # your Hugging Face API token
headers = {"Authorization": f"Bearer {API_TOKEN}"}

date = "2024-03-01"
API_URL = (
    "https://datasets-server.huggingface.co/filter"
    "?dataset=marianbasti/jurisprudencia-Argentina-SAIJ"
    "&config=default&split=train"
    f"&where=timestamp='{date}T00:00:00'"
)

def query():
    # Query the datasets-server filter endpoint and return the parsed JSON
    response = requests.get(API_URL, headers=headers)
    response.raise_for_status()
    return response.json()

data = query()
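
The filter endpoint of the datasets server typically returns the matching records under a "rows" key, each wrapped in a "row" object; a hedged sketch of reading them back is shown below (adjust the keys if the response you receive is shaped differently).

# Sketch: iterate over the filtered rows returned by the /filter endpoint.
# Assumes the usual "rows" / "row" response layout of datasets-server.
for item in data.get("rows", []):
    row = item.get("row", {})
    print(row.get("numero-sumario"), row.get("caratula"))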