The test sets OWNCLOUD_MAX_PARALLEL to 1 to disable parallelism.
But since the actual maximum parallelism is twice that value, that does not
work.
So change the way we compute hardMaximumActiveJob: use the value of
OWNCLOUD_MAX_PARALLEL as this maximum directly and base the maximum number
of transfer jobs on it, instead of the other way around.
A result of this change is that, when a bandwidth limit is set, we keep the
default of 6 non-transfer jobs in parallel. I believe that's fine since
the short jobs do not really use bandwidth, so we can still run the same
number of small jobs.
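Roughly the new shape of the computation, as a sketch (the real code may
differ in details and in how the bandwidth limit factors in):

    static int hardMaximumActiveJob()
    {
        static int max = qgetenv("OWNCLOUD_MAX_PARALLEL").toInt();
        return max > 0 ? max : 6; // default when the variable is unset or invalid
    }

    static int maximumActiveTransferJob()
    {
        // Derived from the hard maximum instead of the other way around; with a
        // bandwidth limit this could be reduced further while the small,
        // non-transfer jobs keep their default of 6.
        return qMax(1, hardMaximumActiveJob() / 2);
    }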
This reverts commit e1f5a49c21.
Retrying uploads with insufficient storage errors frequently leads to
high server traffic. See #5537 for links and a sketch of a correct
solution.
If we call
setConfiguration(QNetworkConfiguration());
this sets an invalid configuration on the QNAM.
But later, when we really go online because interfaces are discovered,
QNetworkAccessManagerPrivate::_q_onlineStateChanged is called (with isOnline=true).
And this will set the state to disconnected because customNetworkConfiguration is
true, and the networkConfiguration state is disabled.
The workaround was there to fix another bug on Windows in which the default
network configuration was not behaving properly.
The issue on Linux is hard to reproduce and only happens under certain conditions,
but it was reproduced on smashbox when two owncloudcmd instances run at the same time.
Issues: #4720, #3600
We were removing the whole journal db when the user wanted to keep all files,
but that would also remove the selective sync lists.
We should only remove the metadata table.
Issue #5484
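A minimal sketch of the intended cleanup, assuming the journal is opened as a
regular SQLite database through Qt SQL and the file records live in the
'metadata' table:

    QSqlQuery query(journalDb);   // journalDb: the opened sync journal
    query.exec(QStringLiteral("DELETE FROM metadata"));
    // The selective sync tables in the same database are left untouched.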
- Put all tests in the bin directory so that DLLs can be loaded
- Add missing exports
- Skip tests that use code depending on zlib
- The "GMT" timezone is named differently, use the int constructor instead
5 tests are still failing; it's not really worth fixing them at the moment
since no developer currently uses Windows as their main platform.
It was possible for _firstJob to be marked as finished when
aborted before its parent PropagateDirectory was marked as finished,
allowing a posted scheduleNextJob call to schedule the child job
in between.
Previously, we'd try migrating from legacy settings if reading
the settings failed with an error. Now, we try again after a
couple of seconds and eventually give up.
We can do that because the only changes that were in master but not in 2.3 were
the translation and documentation changes, and the support for the 'M' permission,
which we want in 2.3.
The sync engine relies on the 'M' permission to ask for confirmation
(as requested in issue #5340).
But we only want to ask for confirmation for the root of the mount point, not
for every subfolder within it.
So we change the discovery phase so that it does not keep the 'M' for
children within the external storage.
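A rough sketch of that discovery-phase change; the helper and member names
here are assumptions, not the actual code:

    // Only the root of the external storage keeps the mount-point flag.
    if (isBelowExternalStorageRoot(path))
        item->remotePermissions.remove(QLatin1Char('M'));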
Added two checkboxes on the advanced page of the account wizard to change these
options. Also added a checkbox in the general settings to ask for confirmation
for external storages.
Theme options allow hiding the checkboxes in the wizard.
As described in issue #5340
We now delete subjobs as their propagation is complete. This allows us
to also release the item by making sure that nothing else is holding a
reference to it.
Remove the stored SyncFileItemVector from SyncEngine and SyncResult
and instead gather the needed info progressively as each itemCompleted
signal is emitted.
This frees holes on the heap as propagation proceeds, allowing subsequent
memory allocations to be satisfied without requesting more virtual memory
from the OS and preventing memory usage from growing steadily.
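A sketch of the progressive bookkeeping; the signal signature and the counters
below are illustrative, not the exact implementation:

    connect(engine, &SyncEngine::itemCompleted, this, [this](const SyncFileItemPtr &item) {
        // Aggregate what SyncResult needs right away...
        if (item->_status == SyncFileItem::Success)
            ++_successCount;
        else
            _errors.append(item->_errorString);
        // ...so nothing keeps the full SyncFileItemVector alive and the item
        // can be released as soon as its propagation is done.
    });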
This was to catch duplicate emissions for PropagateDirectory, but we
don't emit this signal from there anymore.
This fixes a warning about PropagatorJob not being a registered metatype.
This reverts commit fe42c1a818.
Previously, if you paused/unpaused a folder for a disconnected account,
it would prepare to sync and thus display the 'Waiting...' text. With
this change, folders that can't possibly sync don't show text like
this.
When account connectivity changes, all unpaused folders will be
scheduled anyway.
Stale chunks might be left behind because a file was removed or just won't be
uploaded, for whatever reason.
We just start the DeleteJob and don't care whether it succeeds or not.
Relates to https://github.com/owncloud/core/issues/26981
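Roughly what the fire-and-forget cleanup amounts to (the constructor arguments
are illustrative):

    // No connection to the finished signal: success and failure are both
    // acceptable outcomes.
    auto *job = new DeleteJob(account, chunkUploadUrl, this);
    job->start();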
One of the tests covers the case where the file is modified on the server
during the upload, so it exercises the precondition-failed error.
The FakeGetReply logic was modified because resizing a 150 MB QByteArray
in 16 kB increments just did not scale when downloading a big file.
Relates to https://github.com/owncloud/core/issues/26981
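The gist of the FakeGetReply change, sketched (the variable name is
illustrative): allocate the fake payload once instead of growing it in 16 kB
steps, which is quadratic for a 150 MB file.

    // One allocation for the whole fake payload instead of thousands of
    // incremental resizes.
    QByteArray payload(fileSize, 'W');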
We do not track the success or error of the DeleteJob because it does not
matter. If it fails, it might be because the chunks were already removed.
If not, the chunks will be stale, but the server has to do some cleanup
from time to time anyway because we do not always remove the chunks.
The current logic tried to avoid a DB lookup just to fetch whether
the file is shared or not since that info is already in the
SyncFileItem. The implementation, however, needed to decrease the
sync count for the item itself (and its parents) before emitting the new status,
thus emitting the OK status for parents before the last child that
ended the propagation for that folder.
Change the implementation to achieve what we want: allow decSyncCount
to use a pre-fetched sharing state while
still doing the emission for all involved files. This ensures that
the leaf file also gets its status emitted before its parents.
Issue #4797
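A sketch of the intended emission order; every name below is illustrative
rather than the actual implementation:

    void decSyncCount(const QString &path, SharedFlag sharedFlag)
    {
        // Leaf first, using the sharing state already carried by the item...
        emit fileStatusChanged(path, resolveStatus(path, sharedFlag));
        // ...then the parents, whose counters may now reach zero.
        for (QString dir = parentDir(path); !dir.isEmpty(); dir = parentDir(dir)) {
            if (--_syncCount[dir] == 0)
                emit fileStatusChanged(dir, resolveStatus(dir, lookupShared(dir)));
        }
    }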
We currently push the SYNC status for all files that will be propagated,
and then the OK status when those files are propagated.
On top of this, we send those statuses to all connected clients, even
if the socket is kept open by an application that only needed to show
a file open dialog. On macOS we're also using an NSConnection which
means that we have to wait for the RPC call to return from the
extension, which makes bulk status changes possibly heavy.
Reduce the time spent needlessly sending status pushes by limiting
them to files requested through that socket since it connected.
To limit the data to store, only remember the parent directory of
files requested, and store those in a bloom filter.
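A self-contained sketch of the idea (this is not the client's exact class):
a tiny bloom filter keyed on the parent directory, where a false positive only
causes an extra, harmless status push.

    #include <QBitArray>
    #include <QByteArray>
    #include <QHash>

    class DirectoryBloomFilter
    {
    public:
        void insert(const QByteArray &parentDir)
        {
            _bits.setBit(qHash(parentDir, 0) % _bits.size());
            _bits.setBit(qHash(parentDir, 1) % _bits.size());
        }
        bool mayContain(const QByteArray &parentDir) const
        {
            return _bits.testBit(qHash(parentDir, 0) % _bits.size())
                && _bits.testBit(qHash(parentDir, 1) % _bits.size());
        }
    private:
        QBitArray _bits{1024};   // fixed size, tuned for a modest number of dirs
    };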
Note that this adds a requirement to shell extensions: they should
make sure that the status cache only contains entries that have been
requested through the socket API. In other words, the status cache
must be empty when each socket client connects to the socket API.
Otherwise the cached icon type will be shown to the user, and the
SocketAPI won't push a new status for that file if it didn't receive
a RETRIEVE_FILE_STATUS.
- Use the looked-up method index also for the invocation
- Do the method name concatenation on QByteArray directly, since we'll
convert anyway
- Use staticMetaObject instead of metaObject()
Shrinks owncloud binary by 24 KB and libowncloudsync by 14 KB.
I don't know whether it has any influence on memory usage or runtime speed, though.
Was worth a try.
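The pattern, sketched with an assumed command slot signature (the argument
types and the command name are illustrative):

    // Signature built on QByteArray, index looked up once on staticMetaObject,
    // invocation done through that index instead of by name.
    const QByteArray signature = "command_" + commandName + "(QString,SocketListener*)";
    const int methodIndex = staticMetaObject.indexOfMethod(signature.constData());
    if (methodIndex != -1) {
        staticMetaObject.method(methodIndex)
            .invoke(this, Q_ARG(QString, argument), Q_ARG(SocketListener*, listener));
    }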
Previously this wasn't happening for errors that were not
NormalErrors because they don't end up in the blacklist.
This revises the resetting logic to be independent of the
error blacklist and make use of UploadInfo::errorCount
instead.
412 errors should reset chunked uploads because they might be
indicative of a checksum error.
Additionally, server bugs might require that additional
errors cause an upload reset. To allow that, a new capability
is added that can be used to advise the client about this.
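The resulting check, roughly; the capability accessor name here is an
assumption derived from this description:

    bool shouldResetChunkedUpload(int httpCode, const Capabilities &caps)
    {
        if (httpCode == 412)
            return true;   // possibly a checksum mismatch, start over
        // Let buggy servers list additional codes that should trigger a reset.
        return caps.httpErrorCodesThatResetFailingChunkedUploads().contains(httpCode);
    }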
This displayName() seemed to be based on Account::user(), which used
to call _credentials->user(). But then we repurposed user() to be
davUser() and this usage wasn't updated to point back to the username
used for the credentials.
Some custom servers use persistent cookies with the auth token, so we should
clear all cookies when disconnecting.
Account::clearCookieJar is only called from the HTTPCredentials; this function
is not used for Shibboleth.
There is probably no reason to keep the HTTP cookies anyway.
Issue #5370
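Roughly the intended effect when the credentials are invalidated (the member
name is an assumption):

    // Installing a fresh jar drops every cookie, including persistent
    // auth-token cookies set by the server.
    _networkAccessManager->setCookieJar(new QNetworkCookieJar(_networkAccessManager));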
This re-enables the UI, uses the Qt API for importing, and
stores the certificate/key in the system keychain.
People who had set up client certs need to set up the account again. This is OK
since it was an undocumented feature anyway.
Fixes #5408 #5407.
The problem was that cleanup of the credentials page set the
credentials of the account back to dummy, thereby overriding
things like shib usernames.
This has been broken since a932eac832.
Otherwise the model might become inconsistent, and this can
lead to a crash in the worst case.
(For example, if there was a "fetching" label and we hid it because it
was a 404, we would not call begin/endRemoveRows, so the
view could still call the model with an index of row 0 that used to be
for the label but now corresponds to the first element of _subs. And
because _subs is empty, this could lead to crashes.)
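The pattern being enforced, sketched: any row that disappears, even a
placeholder label, must be announced so attached views drop their indexes
first. The flag and index names are illustrative.

    beginRemoveRows(parentIndex, 0, 0);   // the "fetching" label occupied row 0
    _fetchingLabelVisible = false;        // illustrative flag backing that row
    endRemoveRows();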
This makes sure that the network job gets deleted if the parent job gets
deleted, and avoids crashes like:
Crash: EXCEPTION_ACCESS_VIOLATION_READ at 0xffffffff8b008a04
File "qiodevice.cpp", line 1617, in QIODevice::errorString
File "propagatedownload.cpp", line 264, in OCC::GETFileJob::slotReadyRead
File "moc_propagatedownload.cpp", line 85, in OCC::GETFileJob::qt_static_metacall
File "qobject.cpp", line 3716, in QMetaObject::activate
File "moc_qiodevice.cpp", line 154, in QIODevice::readyRead
File "qnetworkreplyhttpimpl.cpp", line 1045, in QNetworkReplyHttpImplPrivate::replyDownloadData
(#5329)
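The kind of fix implied here, sketched: give the network job a QObject parent
so a queued readyRead cannot reach it after the owning job is gone. The
constructor arguments below are illustrative.

    _job = new GETFileJob(account, url, &_tmpFile, headers);
    _job->setParent(this);   // deleted together with this propagator job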
The csync log level was only set up on startup, and for log files.
Fix the issue by making Logger::isNoop rely on being explicitly activated
for the log window instead of relying on the presence of a connected
signal, and move the csync log level logic into Logger.
The compiler seems to use signed enums, and we need to reserve an extra
bit for the sign to avoid the value 2 overflowing and being interpreted
as -2 when read, and thus not comparing equal to the full enum
value.
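A self-contained illustration of the overflow; the enum and struct are
examples, not the actual types:

    enum Status { NoStatus = 0, Ok = 1, Error = 2 };

    struct Entry {
        // With a signed underlying type, a 2-bit field holds -2..1, so
        // storing Error (2) reads back as -2 and never compares equal to
        // Error again.
        Status bad  : 2;
        Status good : 3;   // one extra bit for the sign keeps Error intact
    };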