Releases: aleph-im/pyaleph

0.9.1

28 Nov 11:28

This new release of the aleph.im Core Channel Node application focuses heavily on performance optimization (especially for filtering and query counts) and bug fixes related to IPFS operations, API consistency, and message handling. It also introduces a new consumed credits endpoint for better transparency.

New Features:

  • Consumed Credits Endpoint: Added a new endpoint to view consumed credits. (PR #882)

Performance Improvements:

  • Message Count Query Caching: Implemented caching for message count queries to improve performance. (PR #880)
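Counting messages on every API call is expensive at CCN scale, and the usual remedy is to memoize the count for a short TTL. A minimal sketch of that idea, assuming an illustrative class and TTL rather than pyaleph's actual implementation from PR #880:

```python
import time

class CountCache:
    """Tiny TTL cache sketch for expensive COUNT(*) queries.
    Class name, key scheme and TTL are illustrative, not pyaleph's code."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._entries: dict[str, tuple[float, int]] = {}

    def get_count(self, key: str, compute) -> int:
        now = time.monotonic()
        hit = self._entries.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]  # serve the cached value, skip the slow query
        value = compute()  # run the expensive count once per TTL window
        self._entries[key] = (now, value)
        return value
```

The trade-off is that counts can lag behind reality by up to one TTL, which is acceptable for pagination metadata.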

Bug Fixes:

  • Peer Redial Logic: Changed the peer redial logic to only attempt redialing peers seen less than a day ago. (PR #871)
  • Related Content Fetching: Fixed an issue to ensure related content is fetched correctly during the fetch pipeline. (PR #873)
  • ipfs/add_file Endpoint: Fixed a bug where the ipfs/add_file endpoint failed due to a missing name field. (PR #883)
  • IPFS File Stat Timeout: Fixed a timeout error caused by long IPFS file stat operations. (PR #885)
  • ECDSA Token Verification: Fixed an issue with ECDSA token verification. (PR #884)
  • IPFS Client for Stat Operation: Fixed an issue by ensuring IPFS files are stat-ed on the correct client to avoid timeouts. (PR #886)
  • Amend Message Checks: Fixed issues with amend message checks. (PR #888)
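The redial change from PR #871 amounts to filtering the peer table by a last-seen cutoff. A minimal sketch, assuming an illustrative data shape (a mapping of peer id to last-seen datetime) rather than pyaleph's internal structures:

```python
from datetime import datetime, timedelta, timezone

# Redial window per PR #871: only peers seen less than a day ago are retried.
REDIAL_WINDOW = timedelta(days=1)

def peers_to_redial(last_seen_by_peer, now=None):
    """Return the peer ids worth redialing.
    `last_seen_by_peer` maps peer id -> last-seen datetime (illustrative)."""
    now = now or datetime.now(timezone.utc)
    return [
        peer_id
        for peer_id, last_seen in last_seen_by_peer.items()
        if now - last_seen < REDIAL_WINDOW
    ]
```

Skipping long-dead peers avoids wasting dial attempts and connection slots on nodes that are likely gone for good.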

Internal/Maintenance:

  • CI Linter on PRs: Configured the CI pipeline to run the linter on Pull Requests. (PR #874)
  • Dockerfile Warnings: Fixed warnings in the Dockerfile. (PR #872)

What's Changed

  • perf: Optimize content_type filtering with computed column and indexes by @aliel in #868
  • Fix PR #868: exclude content_type from API responses by @aliel in #869
  • fix: only attempt to re-dial to peers seen less than a day ago by @odesenfans in #871
  • ci: run linter on PRs by @odesenfans in #874
  • internal: fix warnings in Dockerfile by @odesenfans in #872
  • Fix: Related Content should be fetched during the fetch pipeline by @1yam in #873
  • fix: ipfs/add_file endpoint failed because of missing name field by @odesenfans in #883
  • perf: cache message count queries by @odesenfans in #880
  • fix: timeout error caused by long IPFS file stat operation by @odesenfans in #885
  • Fix: ecdsa token verify by @amalcaraz in #884
  • Consumed credits endpoint by @amalcaraz in #882
  • fix: stat IPFS files on correct client to avoid timeout by @aliel in #886
  • fix: amend messages checks by @amalcaraz in #888

Full Changelog: 0.9.0...0.9.1

Upgrade guide

Make sure that your node is running v0.5.1 or later. If that is not the case already, follow the upgrade guide here.

From v0.5.1, simply bump the version of these services:

  • In the docker-compose.yml file, the pyaleph and pyaleph-api services must use alephim/pyaleph-node:0.9.1.
  • In the docker-compose.yml file, the p2p-service service must use alephim/p2p-service:0.1.4.
  • In the same folder as the docker-compose.yml file there should be a proper 001-update-ipfs-config.sh configuration file; it can be downloaded with wget https://raw.githubusercontent.com/aleph-im/pyaleph/0.9.1/deployment/scripts/001-update-ipfs-config.sh.
  • In the docker-compose.yml file, the ipfs service must use the ipfs/kubo:v0.37.0 image, and its command section should be ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc"].
  • In the docker-compose.yml file, the ipfs service must include a volume pointing to the local 001-update-ipfs-config.sh configuration file, like - "./001-update-ipfs-config.sh:/container-init.d/001-update-ipfs-config.sh:ro".
  • In the docker-compose.yml file, the kubo.json file and its volume are no longer necessary in the ipfs service, so they can be removed.
  • In the docker-compose.yml file, the ipfs service must set a new environment variable called IPFS_TELEMETRY to off, disabling IPFS telemetry signals, like - IPFS_TELEMETRY=off.
  • In the docker-compose.yml file, ONLY FOR CCNs WITH HIGH CPU LOADS, the ipfs service must have CPU and memory limits: half of the total CPU cores on the CPU side, and around 20% of the total memory on the memory side. With these limits the service should look like this (new lines annotated with comments):
  ipfs:
    restart: always
    image: ipfs/kubo:v0.37.0
    ports:
      - "4001:4001"
      - "4001:4001/udp"
      - "127.0.0.1:5001:5001"
    volumes:
      - "pyaleph-ipfs:/data/ipfs"
      - "./001-update-ipfs-config.sh:/container-init.d/001-update-ipfs-config.sh:ro"
    environment:
      - IPFS_PROFILE=server
      - IPFS_TELEMETRY=off
      - GOMAXPROCS=4  # 50% of the total CPU cores
      - GOMEMLIMIT=23500MiB # 25% of the total RAM minus 500 MiB
    networks:
      - pyaleph
    command: ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc"]
    cpus: 4.0  # 50% of the total CPU cores
    mem_limit: 24g # 25% of the total RAM
    memswap_limit: 24g # Same amount as mem_limit
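The sizing arithmetic from the comments above can be sketched as a small helper. The function name and rounding are ours, and it follows the 25%-of-RAM figure used in the inline comments:

```python
def ipfs_resource_limits(total_cores: int, total_ram_gib: int) -> dict:
    """Sketch of the sizing rule above: GOMAXPROCS/cpus at 50% of the cores,
    mem_limit at 25% of RAM, GOMEMLIMIT 500 MiB below the memory limit.
    Helper name and rounding are illustrative, not part of pyaleph."""
    gomaxprocs = max(1, total_cores // 2)
    mem_limit_g = max(1, round(total_ram_gib * 0.25))
    gomemlimit_mib = mem_limit_g * 1000 - 500  # mirrors 23500MiB for a 24g limit
    return {
        "GOMAXPROCS": gomaxprocs,
        "cpus": float(gomaxprocs),
        "mem_limit": f"{mem_limit_g}g",
        "GOMEMLIMIT": f"{gomemlimit_mib}MiB",
    }
```

For an 8-core, 96 GiB host this yields GOMAXPROCS=4, cpus: 4.0, mem_limit: 24g and GOMEMLIMIT=23500MiB, matching the example values above.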

Upgrade troubleshooting

Occasionally, when migrating IPFS to a newer version, you may encounter the log error: "Error: ipfs repo needs migration, please run migration tool." If this occurs, the following steps should be taken to resolve the issue.

Troubleshooting IPFS Repository Migration

  • Stop all containers with docker-compose down.
  • Edit the docker-compose.yml file and comment out the line that mounts the 001-update-ipfs-config.sh configuration script in the ipfs service by putting a # at the start of the line.
  • Start only the IPFS container with docker-compose up ipfs. Monitor the container logs until you see the message "Success: fs-repo migrated to version XX using embedded migrations.", where XX is the IPFS repository version.
  • Wait for the "Daemon is ready" message to appear in the container logs, then press Ctrl+C to stop the container.
  • Stop the IPFS container again with docker-compose down, and uncomment the 001-update-ipfs-config.sh line you previously commented out.
  • Restart all containers normally with docker-compose up -d.

Then, restart your node: docker-compose pull && docker-compose down && docker-compose up -d.

0.9.0

13 Oct 13:21
c63d6b3

This new release of the aleph.im Core Channel Node application introduces significant new functionality for credit-based cost and lifecycle management and enhances IPFS configuration flexibility. It also includes fixes for data type handling and migration stability.

New Features:

  • Comprehensive Credit-Based Cost Calculation and Lifecycle Management: Implemented a major feature for comprehensive credit-based cost calculation and full lifecycle management of messages. (PR #836)
  • Separate IPFS Pinning Service Configuration: Added an ipfs.pinning configuration section to allow for a separate pinning service, while maintaining backward compatibility. (PR #859)
  • Python 3.13 Support: Added support for Python 3.13. (PR #864)

Bug Fixes:

  • Migration File Number: Fixed an issue with the migration file numbering. (PR #860)
  • IPFS Byte Size Casting: Corrected the method for casting IPFS byte size. (PR #858)
  • add_file Regression: Fixed a regression bug related to the add_file functionality. (PR #862)

Internal/Maintenance:

  • Local Testing Fix: Fixed an issue related to local testing. (PR #863)

What's Changed

  • Fix migration file number by @aliel in #860
  • Cast IPFS byte size method by @nesitor in #858
  • Add ipfs.pinning configuration section for separate pinning service with backward compatibility by @aliel in #859
  • feat: Implement comprehensive credit-based cost calculation and lifecycle management by @amalcaraz in #836
  • Fix add_file regression by @aliel in #862
  • chore: support Python 3.13 by @odesenfans in #864
  • internal: fix local testing by @odesenfans in #863
  • feat: Add node_id execution_id and price in the historical credit exp… by @amalcaraz in #867

Full Changelog: 0.8.2...0.9.0

Upgrade guide

Make sure that your node is running v0.5.1 or later. If that is not the case already, follow the upgrade guide here.

From v0.5.1, simply bump the version of these services:

  • In the docker-compose.yml file, the pyaleph and pyaleph-api services must use alephim/pyaleph-node:0.9.0.
  • In the docker-compose.yml file, the p2p-service service must use alephim/p2p-service:0.1.4.
  • In the same folder as the docker-compose.yml file there should be a proper 001-update-ipfs-config.sh configuration file; it can be downloaded with wget https://raw.githubusercontent.com/aleph-im/pyaleph/0.9.0/deployment/scripts/001-update-ipfs-config.sh.
  • In the docker-compose.yml file, the ipfs service must use the ipfs/kubo:v0.37.0 image, and its command section should be ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc"].
  • In the docker-compose.yml file, the ipfs service must include a volume pointing to the local 001-update-ipfs-config.sh configuration file, like - "./001-update-ipfs-config.sh:/container-init.d/001-update-ipfs-config.sh:ro".
  • In the docker-compose.yml file, the kubo.json file and its volume are no longer necessary in the ipfs service, so they can be removed.
  • In the docker-compose.yml file, the ipfs service must set a new environment variable called IPFS_TELEMETRY to off, disabling IPFS telemetry signals, like - IPFS_TELEMETRY=off.
  • In the docker-compose.yml file, ONLY FOR CCNs WITH HIGH CPU LOADS, the ipfs service must have CPU and memory limits: half of the total CPU cores on the CPU side, and around 20% of the total memory on the memory side. With these limits the service should look like this (new lines annotated with comments):
  ipfs:
    restart: always
    image: ipfs/kubo:v0.37.0
    ports:
      - "4001:4001"
      - "4001:4001/udp"
      - "127.0.0.1:5001:5001"
    volumes:
      - "pyaleph-ipfs:/data/ipfs"
      - "./001-update-ipfs-config.sh:/container-init.d/001-update-ipfs-config.sh:ro"
    environment:
      - IPFS_PROFILE=server
      - IPFS_TELEMETRY=off
      - GOMAXPROCS=4  # 50% of the total CPU cores
      - GOMEMLIMIT=23500MiB # 25% of the total RAM minus 500 MiB
    networks:
      - pyaleph
    command: ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc"]
    cpus: 4.0  # 50% of the total CPU cores
    mem_limit: 24g # 25% of the total RAM
    memswap_limit: 24g # Same amount as mem_limit

Upgrade troubleshooting

Occasionally, when migrating IPFS to a newer version, you may encounter the log error: "Error: ipfs repo needs migration, please run migration tool." If this occurs, the following steps should be taken to resolve the issue.

Troubleshooting IPFS Repository Migration

  • Stop all containers with docker-compose down.
  • Edit the docker-compose.yml file and comment out the line that mounts the 001-update-ipfs-config.sh configuration script in the ipfs service by putting a # at the start of the line.
  • Start only the IPFS container with docker-compose up ipfs. Monitor the container logs until you see the message "Success: fs-repo migrated to version XX using embedded migrations.", where XX is the IPFS repository version.
  • Wait for the "Daemon is ready" message to appear in the container logs, then press Ctrl+C to stop the container.
  • Stop the IPFS container again with docker-compose down, and uncomment the 001-update-ipfs-config.sh line you previously commented out.
  • Restart all containers normally with docker-compose up -d.

⚠️ Warning: After updating pyaleph, the service may take up to 10 minutes to start due to a one-time migration. Do not restart the VM; the service will start automatically once the migration completes.

Then, restart your node: docker-compose pull && docker-compose down && docker-compose up -d.

0.8.2

23 Sep 14:46

This new release of the aleph.im Core Channel Node application focuses primarily on bug fixes and performance improvements, ensuring more reliable connections, correct data handling, and faster database operations.

New Features & Improvements

  • Database Performance: Added an index to the time field in the pending_messages table to improve database query performance. (PR #850)
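The effect of such an index can be demonstrated with the stdlib's SQLite standing in for the node's Postgres database; the table shape below is simplified and illustrative, not pyaleph's actual schema:

```python
import sqlite3

# In-memory SQLite stands in for the CCN's Postgres; the table is simplified,
# only the indexed `time` column matters for the illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pending_messages (id INTEGER PRIMARY KEY, time REAL)")
conn.execute("CREATE INDEX ix_pending_messages_time ON pending_messages (time)")

# EXPLAIN QUERY PLAN lets you check how the planner serves time-ordered reads:
# with the index in place, fetching the next pending message no longer has to
# sort or scan the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM pending_messages ORDER BY time LIMIT 1"
).fetchall()
```

The same idea applies to the real migration: any query that filters or orders pending_messages by time can use the index instead of a full scan.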

Bug Fixes

  • Balance Pre-check: Solved a formatting issue that was affecting the balance pre-check functionality. (PR #855)
  • RabbitMQ Connection: Fixed connection timeouts that occurred during long operations with RabbitMQ. (PR #856)
  • IPFS Configuration: Corrected the method used to update the IPFS configuration, ensuring it follows the recommended practices. (PR #846)
  • IPFS Size Handling: Resolved an issue related to how IPFS size was being handled. (PR #857)

What's Changed

  • Add an index to the pending_messages table on the time field by @aliel in #850
  • Solve format issue on balance pre-check by @nesitor in #855
  • Fix RabbitMQ connection timeouts during long operations. by @aliel in #856
  • Fix IPFS conf: use the recommended method to update IPFS configuration by @aliel in #846
  • Solve IPFS size handling by @nesitor in #857

Full Changelog: 0.8.1...0.8.2

Upgrade guide

Make sure that your node is running v0.5.1 or later. If that is not the case already, follow the upgrade guide here.

From v0.5.1, simply bump the version of these services:

  • In the docker-compose.yml file, the pyaleph and pyaleph-api services must use alephim/pyaleph-node:0.8.2.
  • In the docker-compose.yml file, the p2p-service service must use alephim/p2p-service:0.1.4.
  • In the same folder as the docker-compose.yml file there should be a proper 001-update-ipfs-config.sh configuration file; it can be downloaded with wget https://raw.githubusercontent.com/aleph-im/pyaleph/0.8.2/deployment/scripts/001-update-ipfs-config.sh.
  • In the docker-compose.yml file, the ipfs service must use the ipfs/kubo:v0.37.0 image, and its command section should be ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc"].
  • In the docker-compose.yml file, the ipfs service must include a volume pointing to the local 001-update-ipfs-config.sh configuration file, like - "./001-update-ipfs-config.sh:/container-init.d/001-update-ipfs-config.sh:ro".
  • In the docker-compose.yml file, the kubo.json file and its volume are no longer necessary in the ipfs service, so they can be removed.
  • In the docker-compose.yml file, the ipfs service must set a new environment variable called IPFS_TELEMETRY to off, disabling IPFS telemetry signals, like - IPFS_TELEMETRY=off.
  • In the docker-compose.yml file, ONLY FOR CCNs WITH HIGH CPU LOADS, the ipfs service must have CPU and memory limits: half of the total CPU cores on the CPU side, and around 20% of the total memory on the memory side. With these limits the service should look like this (new lines annotated with comments):
  ipfs:
    restart: always
    image: ipfs/kubo:v0.37.0
    ports:
      - "4001:4001"
      - "4001:4001/udp"
      - "127.0.0.1:5001:5001"
    volumes:
      - "pyaleph-ipfs:/data/ipfs"
      - "./001-update-ipfs-config.sh:/container-init.d/001-update-ipfs-config.sh:ro"
    environment:
      - IPFS_PROFILE=server
      - IPFS_TELEMETRY=off
      - GOMAXPROCS=4  # 50% of the total CPU cores
      - GOMEMLIMIT=23500MiB # 25% of the total RAM minus 500 MiB
    networks:
      - pyaleph
    command: ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc"]
    cpus: 4.0  # 50% of the total CPU cores
    mem_limit: 24g # 25% of the total RAM
    memswap_limit: 24g # Same amount as mem_limit

Upgrade troubleshooting

Occasionally, when migrating IPFS to a newer version, you may encounter the log error: "Error: ipfs repo needs migration, please run migration tool." If this occurs, the following steps should be taken to resolve the issue.

Troubleshooting IPFS Repository Migration

  • Stop all containers with docker-compose down.
  • Edit the docker-compose.yml file and comment out the line that mounts the 001-update-ipfs-config.sh configuration script in the ipfs service by putting a # at the start of the line.
  • Start only the IPFS container with docker-compose up ipfs. Monitor the container logs until you see the message "Success: fs-repo migrated to version XX using embedded migrations.", where XX is the IPFS repository version.
  • Wait for the "Daemon is ready" message to appear in the container logs, then press Ctrl+C to stop the container.
  • Stop the IPFS container again with docker-compose down, and uncomment the 001-update-ipfs-config.sh line you previously commented out.
  • Restart all containers normally with docker-compose up -d.

⚠️ Warning: After updating pyaleph, the service may take up to 10 minutes to start due to a one-time migration. Do not restart the VM; the service will start automatically once the migration completes.

Then, restart your node: docker-compose pull && docker-compose down && docker-compose up -d.

0.8.1

02 Sep 14:29

This new release of the aleph.im Core Channel Node application significantly enhances security and functionality while ensuring the codebase remains up-to-date.

✨ Features

  • Historical Pricing: Implemented a new system for historical pricing to allow for the recalculation of message costs.
  • Version Display: The Git version is now displayed on the status HTML page for easier tracking.

🐛 Fixes

  • API Fix: Resolved an issue that was causing the /api/v0/balances endpoint to fail.
  • Security Fix: Implemented proper permission validation to prevent unauthorized message processing.

🔧 Maintenance & Dependencies

  • Documentation: The metrics.rst documentation file has been updated.
  • Dependency Updates: Several dependencies were updated.

What's Changed

  • Fix: /api/v0/balances broken by @aliel in #843
  • Chore(deps): Bump pytz from 2025.1 to 2025.2 by @dependabot[bot] in #764
  • Chore(deps): Bump requests from 2.32.3 to 2.32.4 by @dependabot[bot] in #804
  • Chore(deps): Bump urllib3 from 2.3 to 2.5.0 by @dependabot[bot] in #811
  • Chore(deps): Bump aiohttp from 3.11.14 to 3.12.15 by @dependabot[bot] in #828
  • Chore(deps): Bump sentry-sdk from 2.23.1 to 2.35.1 by @dependabot[bot] in #842
  • Update metrics.rst by @aliel in #839
  • Chore(deps): Bump requests from 2.32.3 to 2.32.5 by @dependabot[bot] in #840
  • Chore(deps): Bump types-aiofiles from 24.1.0.20241221 to 24.1.0.20250822 by @dependabot[bot] in #841
  • feat: Implement historical pricing for message cost recalculation by @amalcaraz in #813
  • Added git version in status html page by @amalcaraz in #844
  • Fix: Prevent unauthorized message processing due to missing permission validation by @amalcaraz in #845

Full Changelog: 0.8.0...0.8.1

Upgrade guide

Make sure that your node is running v0.5.1 or later. If that is not the case already, follow the upgrade guide here.

From v0.5.1, simply bump the version of these services:

  • In the docker-compose.yml file, the pyaleph and pyaleph-api services must use alephim/pyaleph-node:0.8.1.
  • In the docker-compose.yml file, the p2p-service service must use alephim/p2p-service:0.1.4.
  • In the same folder as the docker-compose.yml file there should be a proper kubo.json configuration file; it can be downloaded with wget https://raw.githubusercontent.com/aleph-im/pyaleph/0.8.1/deployment/samples/docker-compose/kubo.json.
  • In the docker-compose.yml file, the ipfs service must use the ipfs/kubo:v0.35.0 image, and its command section should be ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc", "--config-file", "/etc/kubo.json"].
  • In the docker-compose.yml file, the ipfs service must include a volume pointing to the local kubo.json configuration file, like - "./kubo.json:/etc/kubo.json:ro".
  • In the docker-compose.yml file, ONLY FOR CCNs WITH HIGH CPU LOADS, the ipfs service must have CPU and memory limits: half of the total CPU cores on the CPU side, and around 20% of the total memory on the memory side. With these limits the service should look like this (new lines annotated with comments):
  ipfs:
    restart: always
    image: ipfs/kubo:v0.35.0
    ports:
      - "4001:4001"
      - "4001:4001/udp"
      - "127.0.0.1:5001:5001"
    volumes:
      - "pyaleph-ipfs:/data/ipfs"
      - "./kubo.json:/etc/kubo.json:ro"
    environment:
      - IPFS_PROFILE=server
      - GOMAXPROCS=4  # 50% of the total CPU cores
      - GOMEMLIMIT=23500MiB # 25% of the total RAM minus 500 MiB
    networks:
      - pyaleph
    command: ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc", "--config-file", "/etc/kubo.json"]
    cpus: 4.0  # 50% of the total CPU cores
    mem_limit: 24g # 25% of the total RAM
    memswap_limit: 24g # Same amount as mem_limit

⚠️ Warning: After updating pyaleph, the service may take up to 10 minutes to start due to a one-time migration. Do not restart the VM; the service will start automatically once the migration completes.

Then, restart your node: docker-compose pull && docker-compose down && docker-compose up -d.

0.8.0

26 Jun 15:53

This new release of the aleph.im Core Channel Node application significantly enhances the pyaleph library with new capabilities for automated balance management, improved file pin handling, the integration of the Unichain network, message status filtering, and Kubo configuration. It also addresses a bug related to balance tracking for specific accounts and migrates the test codebase to the new Pydantic interface.

✨ New Features

  • Balance Pre-check: A balance pre-check has been added for file pins to ensure sufficient funds before proceeding (#799).
  • Automated Balance Management: The system now includes automated balance checking and management of the message lifecycle (#798).
  • Message Status Filtering: The messages API now supports filtering messages by their status (#806).
  • Unichain Network Implementation: Support for the Unichain network has been implemented (#802).
  • Kubo Configuration: A new implementation for Kubo configuration has been added (#805).
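Status filtering works by adding a query parameter to the messages endpoint. A small sketch of building such a request; the status parameter name is an assumption here, so check your node's API documentation for the exact spelling:

```python
from urllib.parse import urlencode

def messages_url(node_url: str, **filters) -> str:
    """Build a query URL for the messages API.
    The `status` filter corresponds to #806; its exact name is assumed."""
    query = urlencode({k: v for k, v in filters.items() if v is not None})
    return f"{node_url.rstrip('/')}/api/v0/messages.json?{query}"

# e.g. messages_url("https://api2.aleph.im", status="processed")
```

Dropping None-valued filters keeps the helper usable for optional parameters such as page size or channel.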

🐞 Bug Fixes

  • A fix was implemented to correctly handle accounts without a balance, allowing for proper balance tracking after the cutoff (#801).
  • An issue where file content could not be found has been resolved (#812).

🛠️ Technical Improvements

  • Pydantic Migration: The test codebase has been migrated to the new Pydantic interface (#807).
  • Docker Image: The Docker Image version has been updated (#815).

Full Changelog: 0.7.4...0.8.0

Upgrade guide

Make sure that your node is running v0.5.1 or later. If that is not the case already, follow the upgrade guide here.

From v0.5.1, simply bump the version of these services:

  • In the docker-compose.yml file, the pyaleph and pyaleph-api services must use alephim/pyaleph-node:0.8.0.
  • In the docker-compose.yml file, the p2p-service service must use alephim/p2p-service:0.1.4.
  • In the same folder as the docker-compose.yml file there should be a proper kubo.json configuration file; it can be downloaded with wget https://raw.githubusercontent.com/aleph-im/pyaleph/0.8.0/deployment/samples/docker-compose/kubo.json.
  • In the docker-compose.yml file, the ipfs service must use the ipfs/kubo:v0.35.0 image, and its command section should be ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc", "--config-file", "/etc/kubo.json"].
  • In the docker-compose.yml file, the ipfs service must include a volume pointing to the local kubo.json configuration file, like - "./kubo.json:/etc/kubo.json:ro".
  • In the docker-compose.yml file, ONLY FOR CCNs WITH HIGH CPU LOADS, the ipfs service must have CPU and memory limits: half of the total CPU cores on the CPU side, and around 20% of the total memory on the memory side. With these limits the service should look like this (new lines annotated with comments):
  ipfs:
    restart: always
    image: ipfs/kubo:v0.35.0
    ports:
      - "4001:4001"
      - "4001:4001/udp"
      - "127.0.0.1:5001:5001"
    volumes:
      - "pyaleph-ipfs:/data/ipfs"
      - "./kubo.json:/etc/kubo.json:ro"
    environment:
      - IPFS_PROFILE=server
      - GOMAXPROCS=4  # 50% of the total CPU cores
      - GOMEMLIMIT=23500MiB # 25% of the total RAM minus 500 MiB
    networks:
      - pyaleph
    command: ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc", "--config-file", "/etc/kubo.json"]
    cpus: 4.0  # 50% of the total CPU cores
    mem_limit: 24g # 25% of the total RAM
    memswap_limit: 24g # Same amount as mem_limit

⚠️ Warning: After updating pyaleph, the service may take up to 10 minutes to start due to a one-time migration. Do not restart the VM; the service will start automatically once the migration completes.

Then, restart your node: docker-compose pull && docker-compose down && docker-compose up -d.

0.8.0-rc2

13 Jun 12:08
440c96a

Pre-release

This new release of the aleph.im Core Channel Node application significantly enhances the pyaleph library with new capabilities for automated balance management, improved file pin handling, the integration of the Unichain network, message status filtering, and Kubo configuration. It also addresses a bug related to balance tracking for specific accounts and migrates some tests to the new Pydantic interface.

New Features:

  • Automated Balance Checking and Message Lifecycle Management: Introduced automated balance checks and comprehensive management of message lifecycles. (PR #798)
  • Balance Pre-check on File Pins: Added a balance pre-check mechanism for file pins. (PR #799)
  • Unichain Network Implementation: Implemented support for the Unichain network. (PR #802)
  • Message Status Filtering: Added message status filtering to the messages API. (PR #806)
  • Kubo Configuration Implementation: Implemented configuration capabilities for Kubo. (PR #805)

Bug Fixes:

  • Accounts Without Balance: Fixed an issue to correctly catch and track accounts that do not have a balance after a cutoff period. (PR #801)

Improvements:

  • Pydantic Migration: Migrated some tests to the new Pydantic interface. (PR #807)


Full Changelog: 0.7.4...0.8.0-rc2

Upgrade guide

Make sure that your node is running v0.5.1 or later. If that is not the case already, follow the upgrade guide here.

From v0.5.1, simply bump the version of these services:

  • In the docker-compose.yml file, the pyaleph and pyaleph-api services must use alephim/pyaleph-node:0.8.0-rc2.
  • In the docker-compose.yml file, the p2p-service service must use alephim/p2p-service:0.1.4.
  • In the same folder as the docker-compose.yml file there should be a proper kubo.json configuration file; it can be downloaded with wget https://raw.githubusercontent.com/aleph-im/pyaleph/refs/heads/main/deployment/samples/docker-compose/kubo.json.
  • In the docker-compose.yml file, the ipfs service must use the ipfs/kubo:v0.35.0 image, and its command section should be ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc", "--config-file", "/etc/kubo.json"].
  • In the docker-compose.yml file, the ipfs service must include a volume pointing to the local kubo.json configuration file, like - "./kubo.json:/etc/kubo.json:ro".
  • In the docker-compose.yml file, ONLY FOR CCNs WITH HIGH CPU LOADS, the ipfs service must have CPU and memory limits: half of the total CPU cores on the CPU side, and around 20% of the total memory on the memory side. With these limits the service should look like this (new lines annotated with comments):
  ipfs:
    restart: always
    image: ipfs/kubo:v0.35.0
    ports:
      - "4001:4001"
      - "4001:4001/udp"
      - "127.0.0.1:5001:5001"
    volumes:
      - "pyaleph-ipfs:/data/ipfs"
      - "./kubo.json:/etc/kubo.json:ro"
    environment:
      - IPFS_PROFILE=server
      - GOMAXPROCS=4  # 50% of the total CPU cores
      - GOMEMLIMIT=23500MiB # 25% of the total RAM minus 500 MiB
    networks:
      - pyaleph
    command: ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc", "--config-file", "/etc/kubo.json"]
    cpus: 4.0  # 50% of the total CPU cores
    mem_limit: 24g # 25% of the total RAM
    memswap_limit: 24g # Same amount as mem_limit

⚠️ Warning: After updating pyaleph, the service may take up to 10 minutes to start due to a one-time migration. Do not restart the VM; the service will start automatically once the migration completes.

Then, restart your node: docker-compose pull && docker-compose down && docker-compose up -d.

0.8.0-rc1

06 Jun 12:26
e706cbd

Pre-release

This new release of the aleph.im Core Channel Node application significantly enhances the pyaleph library with new capabilities for automated balance management, improved file pin handling, and the integration of the Unichain network. It also addresses a bug related to balance tracking for specific accounts.

New Features:

  • Automated Balance Checking and Message Lifecycle Management: Introduced automated balance checks and comprehensive management of message lifecycles. (PR #798)
  • Balance Pre-check on IPFS Pins: Added a balance pre-check mechanism for IPFS file pins. (PR #799)
  • Unichain Network Implementation: Implemented support for the Unichain network. (PR #802)

Bug Fixes:

  • Accounts Without Balance: Fixed an issue to correctly catch and track accounts that do not have a balance after a cutoff period. (PR #801)

What's Changed

  • Added balance pre-check on file pins by @nesitor in #799
  • Feat: Add automated balance checking and message lifecycle management by @amalcaraz in #798
  • fix: catch accounts without balance to track balance after cutoff by @amalcaraz in #801
  • Implement Unichain network by @nesitor in #802

Full Changelog: 0.7.4...0.8.0-rc1

Upgrade guide

Make sure that your node is running v0.5.1 or later. If that is not the case already, follow the upgrade guide here.

From v0.5.1, simply bump the version of these services:

  • In the docker-compose.yml file, the pyaleph and pyaleph-api services must use alephim/pyaleph-node:0.8.0-rc1.
  • In the docker-compose.yml file, the p2p-service service must use alephim/p2p-service:0.1.4.
  • In the docker-compose.yml file, the ipfs service must use the ipfs/kubo:v0.34.1 image, and its command section should be ["daemon", "--enable-pubsub-experiment", "--migrate"].
  • In the docker-compose.yml file, ONLY FOR CCNs WITH HIGH CPU LOADS, the ipfs service must have CPU and memory limits: half of the total CPU cores on the CPU side, and around 20% of the total memory on the memory side. With these limits the service should look like this (new lines annotated with comments):
  ipfs:
    restart: always
    image: ipfs/kubo:v0.34.1
    ports:
      - "4001:4001"
      - "4001:4001/udp"
      - "127.0.0.1:5001:5001"
    volumes:
      - "pyaleph-ipfs:/data/ipfs"
    environment:
      - IPFS_PROFILE=server
      - GOMAXPROCS=4  # 50% of the total CPU cores
      - GOMEMLIMIT=23500MiB # 25% of the total RAM minus 500 MiB
    networks:
      - pyaleph
    command: ["daemon", "--enable-pubsub-experiment", "--migrate"]
    cpus: 4.0  # 50% of the total CPU cores
    mem_limit: 24g # 25% of the total RAM
    memswap_limit: 24g # Same amount as mem_limit

⚠️ Warning: After updating pyaleph, the service may take up to 10 minutes to start due to a one-time migration. Do not restart the VM; the service will start automatically once the migration completes.

Then, restart your node: docker-compose pull && docker-compose down && docker-compose up -d.

0.7.4

15 May 16:19

This new release of the aleph.im Core Channel Node application includes garbage collection bug fixes and improvements.

Bug Fixes:

  • Pydantic Migrations Error Handling: Improved error handling during Pydantic migrations. (PR #786)
  • Garbage Collector Issue: Resolved an issue with the garbage collector. (PR #787)
  • Garbage Collection Failure Proofing: Made the garbage collection process more resilient to failures. (PR #792)

Improvements:

  • Garbage Collection Session Handling: Improved the way garbage collection sessions are handled. (PR #793)

What's Changed

  • Fix: pydantic migrations error handling by @1yam in #786
  • Solve garbage collector issue by @nesitor in #787
  • Garbage collection failure proof by @nesitor in #792
  • Improve garbage collection session handling by @nesitor in #793

Full Changelog: 0.7.3...0.7.4

Upgrade guide

Make sure that your node is running v0.5.1 or later. If that is not the case already, follow the upgrade guide here.

From v0.5.1, simply bump the version of these services:

  • In the docker-compose.yml file, the pyaleph and pyaleph-api services must use the image alephim/pyaleph-node:0.7.4.
  • In the docker-compose.yml file, the p2p-service service must use the image alephim/p2p-service:0.1.4.
  • In the docker-compose.yml file, the ipfs service must use the image ipfs/kubo:v0.34.1, and its command section should be ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc"].
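The bumps above amount to changing the image (and, for ipfs, the command) lines of the existing services. A minimal docker-compose.yml sketch of just those lines is shown below; all other keys (ports, volumes, environment, networks) are your node's existing configuration and must be kept unchanged:

```yaml
# Illustrative excerpt only — merge these lines into your existing
# service definitions rather than replacing them wholesale.
services:
  pyaleph:
    image: alephim/pyaleph-node:0.7.4
  pyaleph-api:
    image: alephim/pyaleph-node:0.7.4
  p2p-service:
    image: alephim/p2p-service:0.1.4
  ipfs:
    image: ipfs/kubo:v0.34.1
    command: ["daemon", "--enable-pubsub-experiment", "--migrate", "--enable-gc"]
```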

⚠️ Warning: After updating pyaleph, the service may take up to 10 minutes to start due to a one-time migration. Do not restart the VM; the service will start automatically once the migration completes.

Then, restart your node: docker-compose pull && docker-compose down && docker-compose up -d.

0.7.3

01 May 12:06

This new release of the aleph.im Core Channel Node application includes message processing bug fixes and dependency upgrades.

New Features:

  • Charge for Store and Program Messages: Implemented a change where store and program messages are no longer free. (PR #757)

Bug Fixes:

  • Inconsistent Pagination: Fixed an issue causing inconsistent pagination in API responses. (PR #777)
  • Item Hashes List Endpoint: Resolved a problem with the item hashes list endpoint. (PR #779)
  • Pending POST Messages Increase: Fixed a bug that caused an incorrect increase in pending POST messages. (PR #782)
  • Account files: Fixed an issue causing an error on the account files endpoint. (PR #784)

Improvements:

  • Pydantic V1 Migration: Completed the migration from Pydantic V1 to V2 for the remaining parsing methods. (PR #781)

What's Changed

Full Changelog: 0.7.2...0.7.3

Upgrade guide

Make sure that your node is running v0.5.1 or later. If that is not the case already, follow the upgrade guide here.

From v0.5.1, simply bump the version of these services:

  • In the docker-compose.yml file, the pyaleph and pyaleph-api services must use the image alephim/pyaleph-node:0.7.3.
  • In the docker-compose.yml file, the p2p-service service must use the image alephim/p2p-service:0.1.4.

⚠️ Warning: After updating pyaleph, the service may take up to 10 minutes to start due to a one-time migration. Do not restart the VM; the service will start automatically once the migration completes.

Then, restart your node: docker-compose pull && docker-compose down && docker-compose up -d.

0.7.2

25 Apr 07:50

This new release of the aleph.im Core Channel Node application includes message processing bug fixes and dependency upgrades.

New Features:

  • Charge for Store and Program Messages: Implemented a change where store and program messages are no longer free. (PR #757)

Bug Fixes:

  • Inconsistent Pagination: Fixed an issue causing inconsistent pagination in API responses. (PR #777)
  • Item Hashes List Endpoint: Resolved a problem with the item hashes list endpoint. (PR #779)
  • Pending POST Messages Increase: Fixed a bug that caused an incorrect increase in pending POST messages. (PR #782)

Improvements:

  • Pydantic V1 Migration: Completed the migration from Pydantic V1 to V2 for the remaining parsing methods. (PR #781)

What's Changed

Full Changelog: 0.7.1...0.7.2

Upgrade guide

Make sure that your node is running v0.5.1 or later. If that is not the case already, follow the upgrade guide here.

From v0.5.1, simply bump the version of these services:

  • In the docker-compose.yml file, the pyaleph and pyaleph-api services must use the image alephim/pyaleph-node:0.7.2.
  • In the docker-compose.yml file, the p2p-service service must use the image alephim/p2p-service:0.1.4.

⚠️ Warning: After updating pyaleph, the service may take up to 10 minutes to start due to a one-time migration. Do not restart the VM; the service will start automatically once the migration completes.

Then, restart your node: docker-compose pull && docker-compose down && docker-compose up -d.