hyperboria/nexus/cognitron
the-superpirate fff80cd4e7 - feat(nexus): Bump versions
- fix(nexus): Preparing configs to be published
- feat(nexus): Various fixes for opening left sources
- fix(nexus): Fine-tune versions
1 internal commit(s)

GitOrigin-RevId: 6c834cd3f4f5f18109a159a73503700dac63b0bb
2021-04-23 18:32:56 +03:00
configs             feat(nexus): Bump versions        2021-04-23 18:32:56 +03:00
installer           fix: Fix importing documentation  2021-01-29 12:08:40 +03:00
schema              No description                    2021-03-29 18:01:30 +03:00
web                 feat(nexus): Bump versions        2021-04-23 18:32:56 +03:00
.gitignore          feat(nexus): Bump versions        2021-04-23 18:32:56 +03:00
BUILD.bazel         fix: Various fixes for release    2021-01-29 11:26:51 +03:00
README.md           feat(nexus): Bump versions        2021-04-23 18:32:56 +03:00
__init__.py         fix: Various fixes for release    2021-01-29 11:26:51 +03:00
docker-compose.yml  feat(nexus): Bump versions        2021-04-23 18:32:56 +03:00

README.md

Nexus Cognitron

Prerequisites

Follow the root guide to install Docker, IPFS, and (optionally) Bazel.
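
A quick way to check that the prerequisites are available on your machine is to print their versions; Bazel is only needed if you plan to run the installer in step 3.

docker --version
docker-compose --version
ipfs --version
bazel --version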

Guide

1. Download data dumps

# Fetch the collection from IPFS into ./data and pin it so your node keeps a local copy
export COLLECTION=bafykbzacebzohi352bddfunaub5rgqv5b324nejk5v6fltjh45be5ykw5jsjg
ipfs get $COLLECTION -o data && ipfs pin add $COLLECTION
export DATA_PATH=$(realpath ./data)
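
As a quick sanity check (assuming the collection keeps the layout expected by step 3 below), the dump should contain the scitech index:

ls $DATA_PATH/index/scitech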

2. Launch Nexus Cognitron

Create a docker-compose.yml file to set up Nexus Cognitron (the docker-compose.yml shipped in this directory can serve as a starting point), then launch it:

docker-compose pull && docker-compose up

Then go to http://localhost:3000.
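
If the compose file reads the dump location from the DATA_PATH variable, run these commands in the same shell where you exported DATA_PATH in step 1. Once the containers are up, you can verify that the web interface responds (a minimal check, assuming the default port 3000 from above):

curl -sf http://localhost:3000 > /dev/null && echo "Nexus Cognitron is up"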

3. (Optional) Deploy data dumps into your database

The traversing script contains a work function that you can reimplement to iterate over the whole dataset in parallel and insert it into your own database, or do whatever else you want with the documents.

By default, the script just prints the documents.

bazel run -c opt installer -- iterate \
  --data-filepath $DATA_PATH/index/scitech \
  --schema-filepath schema/scitech.yaml
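
If you only want to capture the default printed output rather than reimplementing work, one option is to redirect stdout to a file (a sketch; the exact print format is whatever the installer emits):

# Same command as above, with the printed documents redirected to a file
bazel run -c opt installer -- iterate \
  --data-filepath $DATA_PATH/index/scitech \
  --schema-filepath schema/scitech.yaml > documents.txt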