
Conversation

@shurick2 shurick2 (Contributor) commented Mar 9, 2025

  • Add NETBOX_PAGE_SIZE for controlling response size
  • Calculate batch size based on the maximum URL size

    for offset in range(0, len(value), batch_size)
]
max_len = max_url_size - 200  # minus base URL; TODO: calculate using the real base URL
batches = split_by_len(len(field) + 2, max_len, value)  # +1 for the equals sign, +1 for '&'
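
For context, a minimal self-contained sketch of the batching idea matching the comprehension above; the body of split_by_len and the sample values are assumptions, not the PR's actual code:

def split_by_len(field_len, max_len, values):
    # Worst-case length of one "field=value&" pair bounds how many values fit per URL;
    # field_len is assumed to already include the '=' and '&' overhead.
    per_value = field_len + max(len(v) for v in values)
    batch_size = max(1, max_len // per_value)
    return [
        values[offset:offset + batch_size]
        for offset in range(0, len(values), batch_size)
    ]

max_url_size = 2048            # assumed limit; real servers and proxies vary
max_len = max_url_size - 200   # headroom for the base URL and other query params
batches = split_by_len(len("name__ic") + 2, max_len, ["sw1-lab", "sw2-lab", "sw3-lab"])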
Collaborator commented:
  1. We shouldn't try to put as many values as possible in one URL, as NetBox is pretty slow and can time out.
  2. Sometimes it is faster to make 2 requests using a pool than one single request: NetBox often does N+1 queries on the server side (see the sketch below).
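
To illustrate point 2, a minimal sketch of splitting one large query into two pooled requests; the URLs, filters, and the fetch helper are hypothetical, and the real client in this repo is structured differently:

import concurrent.futures

import requests

def fetch(url):
    # One GET against the NetBox REST API; "results" is the standard list field.
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()["results"]

urls = [
    "https://netbox.example.com/api/dcim/interfaces/?device_id=1&device_id=2",
    "https://netbox.example.com/api/dcim/interfaces/?device_id=3&device_id=4",
]
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    results = [row for chunk in pool.map(fetch, urls) for row in chunk]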

@shurick2 shurick2 (Author) commented Mar 20, 2025

I came to a better understanding of how this works and will close this request in favor of annetutil/annet#290.

  1. We shouldn't try to put as many values as possible in one URL, as NetBox is pretty slow and can time out.

It depends on the installation, but in general I think we shouldn't force all users to wait just because a timeout may occur on some installations. We need to provide configurable parameters, so slow installations can adjust the query settings; see the sketch below.
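
For instance, a sketch of wiring the page size to the environment variable this PR proposes; the default of 100 and the params shape are assumptions:

import os

# NETBOX_PAGE_SIZE is the knob proposed in this PR; the default of 100 is an assumption.
PAGE_SIZE = int(os.environ.get("NETBOX_PAGE_SIZE", "100"))
params = {"limit": PAGE_SIZE, "offset": 0}  # NetBox paginates with limit/offset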

  2. Sometimes it is faster to make 2 requests using a pool than one single request: NetBox often does N+1 queries on the server side.

That is true for all cases except the dcim/devices/ query. That query uses a pager and cannot be run in parallel. My tests show that it is faster to set a larger page_size: roughly 15% faster on 70 devices with 50 ports each.

I also found that _collect_by_pages() ignores page.next, which is a problem: because we ignore the limit and page it carries, we can skip some data. A sketch of a fix follows.
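
A sketch of what a fixed _collect_by_pages() could look like, following page.next until it is exhausted; the function name and signature here are illustrative, not the adapter's real API:

import requests

def collect_by_pages(url, page_size):
    # Follow the "next" link NetBox returns so no page is skipped,
    # instead of recomputing limit/offset on the client side.
    results = []
    next_url = f"{url}?limit={page_size}"
    while next_url:
        payload = requests.get(next_url, timeout=30).json()
        results.extend(payload["results"])
        next_url = payload.get("next")  # None on the last page
    return results

devices = collect_by_pages("https://netbox.example.com/api/dcim/devices/", page_size=500)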

@Tishka17 (Collaborator) commented:

Do we really need this change? I agree that the current implementation is quite stupid, but I am not sure that a "clever" implementation would definitely work better.

@shurick2 shurick2 closed this Mar 20, 2025