0.9.1

@nesitor released this 28 Nov 11:28

This new release of the aleph.im Core Channel Node application focuses heavily on performance optimization (especially content-type filtering and message count queries) and on bug fixes related to IPFS operations, API consistency, and message handling. It also introduces a new consumed credits endpoint for better transparency.

New Features:

  • Consumed Credits Endpoint: Added a new endpoint to view consumed credits. (PR #882)
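
For illustration, querying the new endpoint from a node would look roughly like the snippet below. The exact route and parameters are not given in these notes, so the path, port, and address here are placeholder assumptions; check the API documentation for the real route.

  # Hypothetical request -- the /api/v0/credits path and the address are
  # assumptions, not confirmed by this release; adjust to the documented API.
  curl -s "http://localhost:4024/api/v0/credits/0xYourAddress" | jq .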

Performance Improvements:

  • Message Count Query Caching: Implemented caching for message count queries to improve performance. (PR #880)
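
As background on the technique: counting rows in a large messages table is expensive, and caching the result for a short time-to-live amortizes that cost across API requests. Below is a minimal Python sketch of the idea; it is illustrative only, not the actual pyaleph implementation, and the class name, key scheme, and TTL value are made up.

  import time
  from typing import Callable, Dict, Tuple

  class TTLCountCache:
      """Cache expensive count queries for a fixed time-to-live."""

      def __init__(self, ttl_seconds: float = 60.0) -> None:
          self.ttl = ttl_seconds
          self._entries: Dict[str, Tuple[float, int]] = {}

      def get_count(self, key: str, compute: Callable[[], int]) -> int:
          now = time.monotonic()
          entry = self._entries.get(key)
          if entry is not None and now - entry[0] < self.ttl:
              return entry[1]  # cached count is still fresh, skip the query
          value = compute()  # e.g. run SELECT COUNT(*) against the database
          self._entries[key] = (now, value)
          return value

  # Usage: cache.get_count("messages:total", lambda: run_count_query())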

Bug Fixes:

  • Peer Redial Logic: Changed the peer redial logic to only attempt redialing peers seen less than a day ago (see the sketch after this list). (PR #871)
  • Related Content Fetching: Fixed an issue to ensure related content is fetched correctly during the fetch pipeline. (PR #873)
  • ipfs/add_file Endpoint: Fixed a bug where the ipfs/add_file endpoint failed due to a missing name field. (PR #883)
  • IPFS File Stat Timeout: Fixed a timeout error caused by long IPFS file stat operations. (PR #885)
  • ECDSA Token Verification: Fixed an issue with ECDSA token verification. (PR #884)
  • IPFS Client for Stat Operation: Fixed an issue by ensuring IPFS files are stat-ed on the correct client to avoid timeouts. (PR #886)
  • Amend Message Checks: Fixed issues with amend message checks. (PR #888)
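
To make the peer redial change (PR #871) concrete: the new behavior amounts to filtering the candidate peer list by a last-seen cutoff before dialing. A hedged Python sketch follows; the peer record shape and function name are invented for illustration and differ from the real pyaleph data model.

  from datetime import datetime, timedelta, timezone
  from typing import Iterable, List, Optional

  REDIAL_CUTOFF = timedelta(days=1)  # only redial peers seen within the last day

  def peers_to_redial(peers: Iterable[dict], now: Optional[datetime] = None) -> List[dict]:
      """Keep only peers whose last_seen timestamp is within the cutoff."""
      now = now or datetime.now(timezone.utc)
      return [peer for peer in peers if now - peer["last_seen"] < REDIAL_CUTOFF]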

Internal/Maintenance:

  • CI Linter on PRs: Configured the CI pipeline to run the linter on Pull Requests (a minimal workflow example follows this list). (PR #874)
  • Dockerfile Warnings: Fixed warnings in the Dockerfile. (PR #872)
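
For reference, "run the linter on pull requests" typically means a workflow triggered on pull_request events, as sketched below for GitHub Actions. The file name, linter choice (ruff), and Python version are assumptions, not the repository's actual configuration.

  # .github/workflows/lint.yml -- illustrative only; pyaleph's actual
  # workflow and linter may differ.
  name: lint
  on:
    pull_request:
  jobs:
    lint:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - uses: actions/setup-python@v5
          with:
            python-version: "3.11"
        - run: pip install ruff
        - run: ruff check .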

What's Changed

  • perf: Optimize content_type filtering with computed column and indexes by @aliel in #868
  • Fix PR #868: exclude content_type from API responses by @aliel in #869
  • fix: only attempt to re-dial to peers seen less than a day ago by @odesenfans in #871
  • ci: run linter on PRs by @odesenfans in #874
  • internal: fix warnings in Dockerfile by @odesenfans in #872
  • Fix: Related Content should be fetched during the fetch pipeline by @1yam in #873
  • fix: ipfs/add_file endpoint failed because of missing name field by @odesenfans in #883
  • perf: cache message count queries by @odesenfans in #880
  • fix: timeout error caused by long IPFS file stat operation by @odesenfans in #885
  • Fix: ecdsa token verify by @amalcaraz in #884
  • Consumed credits endpoint by @amalcaraz in #882
  • fix: stat IPFS files on correct client to avoid timeout by @aliel in #886
  • fix: amend messages checks by @amalcaraz in #888

Full Changelog: 0.9.0...0.9.1

Upgrade guide

Make sure that your node is running v0.5.1 or later. If that is not the case already, follow the upgrade guide here.

From v0.5.1, simply bump the version of these services:

  • In the docker-compose.yml file, the pyaleph and pyaleph-api services must use the alephim/pyaleph-node:0.9.1 image (see the excerpt after the ipfs example below).
  • In the docker-compose.yml file, the p2p-service service must use the alephim/p2p-service:0.1.4 image.
  • In the same folder as the docker-compose.yml file there should be a proper 001-update-ipfs-config.sh configuration script; it can be downloaded with wget https://raw.githubusercontent.com/aleph-im/pyaleph/0.9.1/deployment/scripts/001-update-ipfs-config.sh.
  • In the docker-compose.yml file, the ipfs service must use the ipfs/kubo:v0.37.0 image, and its command section should be ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc"].
  • In the docker-compose.yml file, the ipfs service must include a volume pointing to the local 001-update-ipfs-config.sh configuration script, like - "./001-update-ipfs-config.sh:/container-init.d/001-update-ipfs-config.sh:ro".
  • In the docker-compose.yml file, the kubo.json file and its volume in the ipfs service are not necessary anymore and can be removed.
  • In the docker-compose.yml file, the ipfs service must have a new environment variable called IPFS_TELEMETRY set to off, disabling IPFS telemetry, like - IPFS_TELEMETRY=off.
  • In the docker-compose.yml file, ONLY FOR CCNs WITH HIGH CPU LOADS, the ipfs service must be given CPU and memory limits: on the CPU side about half of the total cores, and on the memory side about a quarter of the total memory (see the commented values below). The service should then look like this (new lines annotated with comments):
  ipfs:
    restart: always
    image: ipfs/kubo:v0.37.0
    ports:
      - "4001:4001"
      - "4001:4001/udp"
      - "127.0.0.1:5001:5001"
    volumes:
      - "pyaleph-ipfs:/data/ipfs"
      - "./001-update-ipfs-config.sh:/container-init.d/001-update-ipfs-config.sh:ro"
    environment:
      - IPFS_PROFILE=server
      - IPFS_TELEMETRY=off
      - GOMAXPROCS=4  # 50% of total CPU cores
      - GOMEMLIMIT=23500MiB # 25% of total RAM minus 500MB
    networks:
      - pyaleph
    command: ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc"]
    cpus: 4.0  # 50% of total CPU cores
    mem_limit: 24g # 25% of total RAM
    memswap_limit: 24g # Same amount as above
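
For the image bumps in the first two bullets above, the corresponding excerpt of docker-compose.yml should end up like this (only the image lines change; every other setting stays as in your existing file):

  pyaleph:
    image: alephim/pyaleph-node:0.9.1
    # ... other settings unchanged

  pyaleph-api:
    image: alephim/pyaleph-node:0.9.1
    # ... other settings unchanged

  p2p-service:
    image: alephim/p2p-service:0.1.4
    # ... other settings unchanged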

Upgrade troubleshooting

Occasionally, when migrating IPFS to a newer version, you may encounter the log error: "Error: ipfs repo needs migration, please run migration tool." If this occurs, the following steps should be taken to resolve the issue.

Troubleshooting IPFS Repository Migration

  • Halt all containers using the command docker-compose down.
  • Edit the docker-compose.yml file and comment out the line that mounts the configuration script 001-update-ipfs-config.sh in the ipfs service, by putting a # at the beginning of the line.
  • Start only the IPFS container with the command docker-compose up ipfs. Monitor the container logs until you see the message: "Success: fs-repo migrated to version XX using embedded migrations.", where XX corresponds to the IPFS repository version.
  • Wait for the "Daemon is ready" message to appear in the container logs, then press Ctrl+C to exit the container.
  • Stop the IPFS container again with docker-compose down, and uncomment the line for the configuration script 001-update-ipfs-config.sh that you previously commented out.
  • Restart all containers normally using the command docker-compose up -d.
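
Put together, the recovery sequence looks like this as shell commands. The sed lines are just one convenient way to toggle the mount; editing docker-compose.yml by hand works equally well, and the exact pattern depends on your file's indentation.

  docker-compose down
  # Comment out the config-script mount (or edit docker-compose.yml by hand)
  sed -i 's|^\(\s*\)- "./001-update-ipfs-config.sh|\1# - "./001-update-ipfs-config.sh|' docker-compose.yml
  docker-compose up ipfs   # wait for the migration success and "Daemon is ready" messages, then press Ctrl+C
  docker-compose down
  # Restore the mount you commented out
  sed -i 's|^\(\s*\)# - "./001-update-ipfs-config.sh|\1- "./001-update-ipfs-config.sh|' docker-compose.yml
  docker-compose up -d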

Then, restart your node: docker-compose pull && docker-compose down && docker-compose up -d.