chore: add mandatory hottier for pstats dataset #1414

Open — wants to merge 1 commit into base: main

Conversation

@nikhilsinhaparseable (Contributor) commented Aug 23, 2025

Summary by CodeRabbit

  • New Features

    • Automatically initializes a hot tier for the dataset statistics stream when detected, with a default 10 GiB allocation.
  • Bug Fixes

    • Improved sync resilience: initialization issues for the dataset statistics hot tier no longer block syncing; errors are logged and the process continues.

@nikhilsinhaparseable nikhilsinhaparseable marked this pull request as ready for review August 23, 2025 16:11
@coderabbitai bot (Contributor) commented Aug 23, 2025

Walkthrough

Adds lazy initialization of a hot tier for the dataset stats stream before regular hot-tier syncing. Implements a private helper to check storage for the stream and create a default-sized hot tier if missing. Errors during this pre-step are traced and do not halt the subsequent per-stream sync.
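
A minimal sketch of how this pre-step might hook into the sync path, based on the walkthrough and the diff quoted in the review below. The sync_hot_tier signature and the elided per-stream loop are assumptions, not the actual implementation:

impl HotTierManager {
    pub async fn sync_hot_tier(&self) -> Result<(), HotTierError> {
        // New pre-step: best-effort creation of the dataset-stats hot tier.
        // Failures are only traced so they never block the per-stream sync.
        if let Err(e) = self.create_pstats_hot_tier().await {
            tracing::trace!("Skipping pstats hot tier creation because of error: {e}");
        }

        // Existing per-stream sync continues unchanged from here.
        // for stream in PARSEABLE.streams.list() { /* sync this stream's hot tier */ }
        Ok(())
    }
}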

Changes

Cohort / File(s): Hot tier pre-sync initialization — src/hottier.rs
Summary:
  • Import storage::field_stats::DATASET_STATS_STREAM_NAME.
  • sync_hot_tier invokes a new create_pstats_hot_tier() prior to per-stream sync; logs errors and continues.
  • New helper checks for the dataset stats stream via PARSEABLE.check_or_load_stream(...); if the stream is present and no hot tier exists, it creates and persists a StreamHotTier with default sizing and metadata.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant H as HotTierManager
  participant S as Storage/PARSEABLE
  participant HT as HotTier Store

  rect rgba(230,240,255,0.5)
  note over H: Pre-sync step (new)
  H->>H: create_pstats_hot_tier()
  H->>S: check_or_load_stream(DATASET_STATS_STREAM_NAME)
  alt Stream exists
    H->>HT: get_hot_tier(DATASET_STATS_STREAM_NAME)
    alt Hot tier missing
      H->>HT: put_hot_tier(DATASET_STATS_STREAM_NAME, default StreamHotTier)
      note right of HT: version=CURRENT_HOT_TIER_VERSION<br/>size=MIN_STREAM_HOT_TIER_SIZE_BYTES
    else Hot tier present
      note right of HT: No-op
    end
  else Stream absent
    note over H,S: No-op
  end
  opt Error
    H->>H: trace! error and continue
  end
  end

  rect rgba(235,255,235,0.5)
  note over H: Existing per-stream sync continues
  H->>H: sync per stream (unchanged flow)
  end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

I nibble bytes and hop through tiers,
Preparing stats without the fears.
A gentle check, a cozy slot,
If none exists—I make the spot.
Then off I dash to sync the rest,
With twitchy nose, I do my best. 🐇💾


@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
src/hottier.rs (2)

255-259: Surface pre-step failures at a higher log level (debug/warn) for operational visibility.

Swallowing errors with trace! can hide misconfigurations (e.g., permissions on the hot-tier directory). Consider logging at debug! (or warn!) and including structured fields.

Apply this localized change:

-        if let Err(e) = self.create_pstats_hot_tier().await {
-            tracing::trace!("Skipping pstats hot tier creation because of error: {e}");
-        }
+        if let Err(e) = self.create_pstats_hot_tier().await {
+            tracing::debug!(error = %e, "Skipping dataset-stats hot tier creation pre-step");
+        }

Optionally, emit an info! when the hot tier is created (see suggestion below in create_pstats_hot_tier).


716-739: Make the helper more explicit and slightly more robust; standardize naming.

The logic is sound and idempotent. Two small improvements:

  • Naming: “pstats” vs “dataset stats” is inconsistent. Prefer a clear name like ensure_dataset_stats_hot_tier for discoverability.
  • Robustness: ensure the per-stream directory exists before put_hot_tier to avoid relying on LocalFileSystem::put creating parents. Also, log on successful creation to aid ops.

Apply the following focused adjustments:

-    /// Creates hot tier for pstats internal stream if the stream exists in storage
-    async fn create_pstats_hot_tier(&self) -> Result<(), HotTierError> {
+    /// Ensures a hot tier exists for the dataset-stats stream if the stream exists in storage.
+    async fn create_pstats_hot_tier(&self) -> Result<(), HotTierError> {
         // Check if pstats hot tier already exists
         if !self.check_stream_hot_tier_exists(DATASET_STATS_STREAM_NAME) {
             // Check if pstats stream exists in storage by attempting to load it
             if PARSEABLE
                 .check_or_load_stream(DATASET_STATS_STREAM_NAME)
                 .await
             {
+                // Ensure the directory exists for the metadata file
+                let dir = self.hot_tier_path.join(DATASET_STATS_STREAM_NAME);
+                if !dir.exists() {
+                    tokio::fs::create_dir_all(&dir).await?;
+                }
                 let mut stream_hot_tier = StreamHotTier {
                     version: Some(CURRENT_HOT_TIER_VERSION.to_string()),
                     size: MIN_STREAM_HOT_TIER_SIZE_BYTES,
                     used_size: 0,
                     available_size: MIN_STREAM_HOT_TIER_SIZE_BYTES,
                     oldest_date_time_entry: None,
                 };
                 self.put_hot_tier(DATASET_STATS_STREAM_NAME, &mut stream_hot_tier)
                     .await?;
+                tracing::info!(
+                    stream = DATASET_STATS_STREAM_NAME,
+                    size_bytes = MIN_STREAM_HOT_TIER_SIZE_BYTES,
+                    "Created dataset-stats hot tier metadata"
+                );
             }
         }
 
         Ok(())
     }

Optional follow-up: factor this into a generic ensure_hot_tier_for_stream(stream_name, size_bytes) and reuse in put_internal_stream_hot_tier to de-duplicate logic.
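
A possible shape for that follow-up, reusing the same fields and calls as the diff above; the method name, the u64 size parameter, and the error type are assumptions, so treat this as a sketch rather than the final API:

impl HotTierManager {
    /// Hypothetical generic helper: ensure a hot tier exists for `stream_name`,
    /// creating one with `size_bytes` capacity if the stream is present in storage.
    async fn ensure_hot_tier_for_stream(
        &self,
        stream_name: &str,
        size_bytes: u64,
    ) -> Result<(), HotTierError> {
        // Nothing to do if a hot tier already exists for this stream.
        if self.check_stream_hot_tier_exists(stream_name) {
            return Ok(());
        }
        // Only create the hot tier when the stream itself exists in storage.
        if PARSEABLE.check_or_load_stream(stream_name).await {
            let mut stream_hot_tier = StreamHotTier {
                version: Some(CURRENT_HOT_TIER_VERSION.to_string()),
                size: size_bytes,
                used_size: 0,
                available_size: size_bytes,
                oldest_date_time_entry: None,
            };
            self.put_hot_tier(stream_name, &mut stream_hot_tier).await?;
        }
        Ok(())
    }
}

Both create_pstats_hot_tier and put_internal_stream_hot_tier could then delegate to it, e.g. self.ensure_hot_tier_for_stream(DATASET_STATS_STREAM_NAME, MIN_STREAM_HOT_TIER_SIZE_BYTES).await.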

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 0e35b07 and 9c9774a.

📒 Files selected for processing (1)
  • src/hottier.rs (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (10). You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms).
  • GitHub Check: Quest Smoke and Load Tests for Distributed deployments
  • GitHub Check: Quest Smoke and Load Tests for Standalone deployments
  • GitHub Check: Build Default x86_64-apple-darwin
  • GitHub Check: Build Kafka aarch64-apple-darwin
  • GitHub Check: Build Default x86_64-pc-windows-msvc
  • GitHub Check: Build Default aarch64-unknown-linux-gnu
  • GitHub Check: coverage
  • GitHub Check: Build Default aarch64-apple-darwin
  • GitHub Check: Build Default x86_64-unknown-linux-gnu
  • GitHub Check: Build Kafka x86_64-unknown-linux-gnu
🔇 Additional comments (2)
src/hottier.rs (2)

30-30: Import of DATASET_STATS_STREAM_NAME looks right.

Pulling the dataset-stats stream name from storage::field_stats is appropriate for colocating ownership with storage-layer concerns. No issues spotted.
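
For reference, the added import would look roughly like this; the exact use path depends on the crate's module layout, so the prefix is an assumption:

use crate::storage::field_stats::DATASET_STATS_STREAM_NAME;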


716-739: No issues found with check_or_load_stream behavior

The check_or_load_stream(&self, stream_name: &str) -> bool helper:

  • Returns true if the stream is already in PARSEABLE.streams (in-memory) or,
  • In Mode::Query or Mode::Prism, attempts to load from storage via create_stream_and_schema_from_storage (which calls streams.get_or_create) and returns true on success.

The create_stream_and_schema_from_storage implementation:

  • Verifies existence via storage.list_streams(),
  • Inserts the stream into self.streams using get_or_create before returning Ok(true).

Finally, PARSEABLE.streams.list() simply collects the in-memory keys, so the list includes the stream name. Thus a true result from check_or_load_stream guarantees the stream appears in PARSEABLE.streams.list() for downstream hot-tier synchronization.
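
Paraphrasing that described behavior as a sketch; the type and field names (Parseable, self.streams.contains, self.options.mode) are assumptions inferred from the description above, not the actual code:

impl Parseable {
    // Sketch of the behavior described above, not the real implementation.
    pub async fn check_or_load_stream(&self, stream_name: &str) -> bool {
        // Fast path: the stream is already loaded in memory.
        if self.streams.contains(stream_name) {
            return true;
        }
        // In Query/Prism mode, try to hydrate the stream from storage; on
        // success it is inserted into `self.streams` via get_or_create.
        if matches!(self.options.mode, Mode::Query | Mode::Prism) {
            return self
                .create_stream_and_schema_from_storage(stream_name)
                .await
                .unwrap_or(false);
        }
        false
    }
}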
