storage sync period #185

Open · wants to merge 2 commits into base: master
src/gxs/rsgenexchange.cc (1 addition, 1 deletion)
@@ -3132,7 +3132,7 @@ void RsGenExchange::processRecvdMessages()

// (cyril) Normally we should discard posts that are older than the sync request. But that causes a problem because
// RsGxsNetService requests posts to sync by chunks of 20. So if the 20 are discarded, they will be re-synced next time, and the sync process
- // will indefinitly loop on the same 20 posts. Since the posts are there already, keeping them is the least problematique way to fix this problem.
+ // will indefinitely loop on the same 20 posts. Since the posts are there already, keeping them is the least problematique way to fix this problem.
//
// uint32_t max_sync_age = ( mNetService != NULL)?( mNetService->getSyncAge(msg->metaData->mGroupId)):RS_GXS_DEFAULT_MSG_REQ_PERIOD;
//
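The reasoning in the comment above can be made concrete with a small, self-contained simulation. This is an illustrative sketch only, not RetroShare code; it just shows why discarding a full chunk of out-of-range posts makes the sync revisit the same 20 posts forever, while keeping them lets the loop terminate.

    // Illustrative sketch only (not RetroShare code): if all 20 posts of a chunk are
    // older than the sync window and get discarded, they are never stored locally,
    // so every subsequent sync round requests the exact same 20 posts again.
    #include <iostream>
    #include <set>

    int main()
    {
        const int CHUNK = 20;
        std::set<int> remote, local;
        for(int id = 0; id < CHUNK; ++id) remote.insert(id);   // 20 posts, all "too old"

        for(int round = 0; round < 3; ++round)
        {
            int requested = 0;
            for(int id : remote)
                if(local.count(id) == 0 && requested < CHUNK)
                {
                    ++requested;
                    const bool discard_old_posts = true;        // the behaviour being removed
                    if(!discard_old_posts)
                        local.insert(id);                       // keeping the post ends the loop
                }
            std::cout << "round " << round << ": requested " << requested
                      << ", stored " << local.size() << "\n";   // always "requested 20, stored 0"
        }
        return 0;
    }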
src/gxs/rsgxsnetservice.cc (10 additions, 8 deletions)
@@ -719,7 +719,7 @@ std::error_condition RsGxsNetService::checkUpdatesFromPeers(
if(keep_delay > 0 && req_delay > 0 && keep_delay < req_delay)
req_delay = keep_delay ;

- // The last post will be set to TS 0 if the req delay is 0, which means "Indefinitly"
+ // The last post will be set to TS 0 if the req delay is 0, which means "Indefinitely"

if(req_delay > 0)
msg->createdSinceTS = std::max(0,(int)time(NULL) - req_delay);
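For reference, a worked sketch of the window computation above. This is not a drop-in; delays are in seconds, with 0 meaning "indefinitely", and the helper name is invented for illustration.

    #include <algorithm>
    #include <cstdint>
    #include <ctime>
    #include <iostream>

    // Mirrors the logic above: never request further back than we keep, and a
    // request delay of 0 leaves the timestamp at 0, i.e. sync the full history.
    static uint32_t effectiveCreatedSinceTS(int keep_delay, int req_delay)
    {
        if(keep_delay > 0 && req_delay > 0 && keep_delay < req_delay)
            req_delay = keep_delay;

        if(req_delay > 0)
            return std::max(0, (int)time(NULL) - req_delay);

        return 0;
    }

    int main()
    {
        std::cout << effectiveCreatedSinceTS(30*86400, 365*86400) << "\n"; // ~ now - 30 days
        std::cout << effectiveCreatedSinceTS(30*86400, 0)         << "\n"; // 0: indefinitely
        return 0;
    }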
@@ -1644,13 +1644,15 @@ bool RsGxsNetService::loadList(std::list<RsItem *> &load)

void RsGxsNetService::locked_checkDelay(uint32_t& time_in_secs)
{
- if(time_in_secs < 1 * 86400) { time_in_secs = 0 ; return ; }
- if(time_in_secs <= 10 * 86400) { time_in_secs = 5 * 86400; return ; }
- if(time_in_secs <= 20 * 86400) { time_in_secs = 15 * 86400; return ; }
- if(time_in_secs <= 60 * 86400) { time_in_secs = 30 * 86400; return ; }
- if(time_in_secs <= 120 * 86400) { time_in_secs = 90 * 86400; return ; }
- if(time_in_secs <= 250 * 86400) { time_in_secs = 180 * 86400; return ; }
- time_in_secs = 365 * 86400;
+ if(time_in_secs < 1 * 86400) { time_in_secs = 0 ; return ; }
+ if(time_in_secs <= 10 * 86400) { time_in_secs = 5 * 86400; return ; }
+ if(time_in_secs <= 20 * 86400) { time_in_secs = 15 * 86400; return ; }
+ if(time_in_secs <= 60 * 86400) { time_in_secs = 30 * 86400; return ; }
+ if(time_in_secs <= 120 * 86400) { time_in_secs = 90 * 86400; return ; }
+ if(time_in_secs <= 250 * 86400) { time_in_secs = 180 * 86400; return ; }
+ if(time_in_secs <= 400 * 86400) { time_in_secs = 365 * 86400; return ; }
+ if(time_in_secs <= 1200 * 86400) { time_in_secs = 1095 * 86400; return ; }
+ time_in_secs = 1825 * 86400;
}

#include <algorithm>
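For clarity, here is a standalone sketch of the new bucketing. The thresholds and bucket values are copied from the new locked_checkDelay() above; the test harness around them is illustrative only. Configured delays now snap to buckets up to roughly five years instead of capping at one year.

    #include <cstdint>
    #include <iostream>

    // Same bucketing as the new RsGxsNetService::locked_checkDelay() above.
    static void checkDelay(uint32_t& time_in_secs)
    {
        if(time_in_secs <     1 * 86400) { time_in_secs =    0;         return; }
        if(time_in_secs <=   10 * 86400) { time_in_secs =    5 * 86400; return; }
        if(time_in_secs <=   20 * 86400) { time_in_secs =   15 * 86400; return; }
        if(time_in_secs <=   60 * 86400) { time_in_secs =   30 * 86400; return; }
        if(time_in_secs <=  120 * 86400) { time_in_secs =   90 * 86400; return; }
        if(time_in_secs <=  250 * 86400) { time_in_secs =  180 * 86400; return; }
        if(time_in_secs <=  400 * 86400) { time_in_secs =  365 * 86400; return; }   // new: ~1 year
        if(time_in_secs <= 1200 * 86400) { time_in_secs = 1095 * 86400; return; }   // new: ~3 years
        time_in_secs = 1825 * 86400;                                                 // new cap: ~5 years
    }

    int main()
    {
        for(uint32_t days : {0u, 7u, 200u, 365u, 730u, 2000u})
        {
            uint32_t secs = days * 86400;
            checkDelay(secs);
            std::cout << days << " days -> " << secs / 86400 << " days\n";
        }
        // 0 -> 0 (indefinite), 7 -> 5, 200 -> 180, 365 -> 365, 730 -> 1095, 2000 -> 1825
        return 0;
    }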
src/retroshare/rshistory.h (1 addition, 1 deletion)
@@ -153,7 +153,7 @@ class RsHistory
/*!
* @brief Sets the maximum number of messages to save
* @param[in] chat_type Type of chat for that number limit
- * @param[in] count Max umber of messages, 0 meaning indefinitly
+ * @param[in] count Max umber of messages, 0 meaning indefinitely
*/
virtual void setSaveCount(uint32_t chat_type, uint32_t count) = 0;
};
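A brief usage sketch of the interface documented above, assuming RetroShare's exported rsHistory pointer; the chat-type constant names below are assumptions for illustration, so check rshistory.h for the exact identifiers.

    #include <retroshare/rshistory.h>

    // Assumed identifiers: rsHistory (exported RsHistory instance) and the
    // RS_HISTORY_TYPE_* chat-type constants; verify against rshistory.h.
    void configureChatHistory()
    {
        rsHistory->setSaveCount(RS_HISTORY_TYPE_LOBBY, 500);   // keep at most 500 lobby messages
        rsHistory->setSaveCount(RS_HISTORY_TYPE_PRIVATE, 0);   // 0 => keep private chat indefinitely
    }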
src/tests/network_simulator/README.txt (1 addition, 1 deletion)
@@ -37,7 +37,7 @@ Unsolved questions:
* should we send ACKs everywhere even upward? No, if we severely limit the depth of random walk.
* better distribute routing events, so that the matrix gets filled better?
* find a strategy to avoid storing too many items
- * how to handle cases where a ACK cannot be sent back? The previous peer is going to try indefinitly?
+ * how to handle cases where a ACK cannot be sent back? The previous peer is going to try indefinitely?
=> the ACK will be automatically collected by another route!
* how to make sure ACKed messages are not stored any longer than necessary?
* send signed ACKs, so that the receiver cannot be spoofed.