Previously, if you paused/unpaused a folder for a disconnected account,
it would prepare to sync and thus display the 'Waiting...' text. With
this change, folders that can't possibly sync no longer show such text.
When account connectivity changes, all unpaused folders will be
scheduled anyway.
Stale chunks might be there because a file was removed or simply would
not be uploaded, for whatever reason.
We just start the DeleteJob, but we don't care whether it succeeds or not.
Relates to https://github.com/owncloud/core/issues/26981
One of the tests covers the case where the file is modified on the server
during the upload, thus testing the precondition-failed error.
The FakeGetReply logic was modified because resizing a 150 MB QByteArray
in 16 kB increments just did not scale when downloading a big file.
Relates to https://github.com/owncloud/core/issues/26981
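A minimal sketch of the scaling point, with illustrative names and numbers
(this is not the actual FakeGetReply code): growing a QByteArray by a small
fixed step can reallocate and copy the whole payload over and over, while
sizing it once for a known total stays linear.

    #include <QByteArray>
    #include <cstring>

    // Illustrative only: repeatedly growing the buffer 16 kB at a time may copy
    // everything accumulated so far on each step.
    void growIncrementally(QByteArray &payload, int chunk = 16 * 1024)
    {
        payload.resize(payload.size() + chunk);
        std::memset(payload.data() + payload.size() - chunk, 'A', chunk);
    }

    // When the final size is known up front (a ~150 MB fake download), one
    // allocation filled with dummy data does the job.
    QByteArray makePayload(int totalSize)
    {
        return QByteArray(totalSize, 'A');
    }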
We do not track the success or error of the DeleteJob because it does not
matter. If it fails, it might be because the chunks were already removed.
If not, the chunks will be stale, but the server must do some cleanup
from time to time anyway because we do not always remove the chunks.
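A minimal sketch of that fire-and-forget pattern; the helper name is
hypothetical and it assumes DeleteJob follows the client's usual network-job
shape (constructed from an account and a remote path, started asynchronously).

    #include <QObject>
    #include <QString>
    #include "accountfwd.h"    // assumed, for OCC::AccountPtr
    #include "networkjobs.h"   // assumed location of OCC::DeleteJob

    // Hedged sketch: nothing is connected to the job's result on purpose.
    // If the delete fails, the chunks simply stay stale until the server's
    // own periodic cleanup removes them.
    void removeStaleChunks(OCC::AccountPtr account, const QString &chunkPath, QObject *parent)
    {
        auto *job = new OCC::DeleteJob(account, chunkPath, parent);
        job->start();   // fire and forget
    }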
The current logic tried to avoid a DB lookup just to fetch whether
the file is shared or not since that info is already in the
SyncFileItem. The implementation would however need to decrease the
sync count for itself (and parents) before emitting the new status,
thus emitting the OK status for parents before that last child that
ended the propagation for that folder.
Change the implementation to achieve what we want: give decSyncCount
the possibility to use a pre-fetched sharing state while still doing
the emission for all involved files. This ensures that the leaf file
also gets its status emitted before its parents.
Issue #4797
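A minimal sketch of the intended ordering only; every name below is
hypothetical and heavily simplified compared to the real SyncFileStatusTracker.

    #include <QHash>
    #include <QString>
    #include <functional>

    struct SyncCountSketch {
        QHash<QString, int> syncCount;                         // pending items per folder,
                                                               // incremented when propagation starts
        std::function<void(const QString &, bool)> emitStatus; // (path, isShared)

        static QString parentPath(const QString &path)
        {
            int slash = path.lastIndexOf(QLatin1Char('/'));
            return slash < 0 ? QString() : path.left(slash);
        }

        void decSyncCount(const QString &leafPath, bool leafIsShared)
        {
            // The leaf gets its status first, using the pre-fetched sharing state
            // from the SyncFileItem instead of a DB lookup.
            emitStatus(leafPath, leafIsShared);

            // Parents are decremented afterwards, so their OK status can only be
            // emitted once the last child has reported.
            for (QString p = parentPath(leafPath); !p.isEmpty(); p = parentPath(p)) {
                if (--syncCount[p] == 0)
                    emitStatus(p, false);
            }
        }
    };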
We currently push the SYNC status for all files that will be propagated,
and then the OK status when those files are propagated.
On top of this, we send those statuses to all clients connected, even
if the socket is kept open by an application that only needed to show
a file open dialog. On macOS we're also using an NSConnection which
means that we have to wait for the RPC call to return from the
extension, which makes bulk status changes possibly heavy.
Reduce the time spent needlessly sending status pushes by limiting
them to files requested through that socket since it connected.
To limit the data to store, only remember the parent directory of
files requested, and store those in a bloom filter.
Note that this adds a requirement to shell extensions: they should
make sure that the status cache only contains entries that have been
requested through the socket API. In other words, the status cache
must be empty when each socket client connects to the socket API.
Otherwise the cached icon type will be shown to the user, and the
SocketAPI won't push a new status for that file if it didn't receive
a RETRIEVE_FILE_STATUS.
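A minimal sketch of the per-listener filter idea; the class, hash choices and
size are illustrative, not the actual SocketApi code. Only the parent
directory of each requested file is remembered, and a false positive merely
means one unnecessary status push.

    #include <QHash>      // qHash(QString)
    #include <QString>
    #include <bitset>

    class MonitoredDirectories {
    public:
        void remember(const QString &filePath)
        {
            const QString dir = parentOf(filePath);
            _bits.set(hash1(dir));
            _bits.set(hash2(dir));
        }
        bool mightContain(const QString &filePath) const
        {
            const QString dir = parentOf(filePath);
            return _bits.test(hash1(dir)) && _bits.test(hash2(dir));
        }
    private:
        static QString parentOf(const QString &path) { return path.left(path.lastIndexOf(QLatin1Char('/'))); }
        static size_t hash1(const QString &s) { return qHash(s) % NumBits; }
        static size_t hash2(const QString &s) { return qHash(s, 0x9747b28c) % NumBits; }
        static constexpr size_t NumBits = 4096;
        std::bitset<NumBits> _bits;
    };

    // Usage idea: on RETRIEVE_FILE_STATUS, call remember(path) on that listener's
    // filter; before broadcasting a status change, skip listeners whose filter
    // says mightContain(path) is false.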
- Use the looked-up method index also for the invocation
- Do the method name concatenation directly on QByteArray since we'll
convert anyway (see the sketch below)
- Use staticMetaObject instead of metaObject()
Shrinks owncloud binary by 24 KB and libowncloudsync by 14 KB.
I don't know if it has influence on memory usage or runtime speed though.
Was worth a try.
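A minimal sketch of the pattern those bullets describe; the dispatcher class
and command naming scheme are illustrative, not the real SocketApi.

    #include <QDebug>
    #include <QMetaMethod>
    #include <QObject>
    #include <QString>

    class Dispatcher : public QObject {
        Q_OBJECT
    public slots:
        void command_HELLO(const QString &argument) { qDebug() << "HELLO" << argument; }
    public:
        void dispatch(const QByteArray &command, const QString &argument)
        {
            // Concatenate on QByteArray directly; the meta-object wants a char* anyway.
            const QByteArray signature = "command_" + command + "(QString)";
            // staticMetaObject is known at compile time, no virtual metaObject() call.
            const int index = staticMetaObject.indexOfSlot(
                QMetaObject::normalizedSignature(signature.constData()).constData());
            if (index < 0)
                return; // unknown command
            // Reuse the looked-up index for the invocation instead of a by-name lookup.
            staticMetaObject.method(index).invoke(this, Q_ARG(QString, argument));
        }
    };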
Previously this wasn't happening for errors that were not
NormalErrors, because they don't end up in the blacklist.
This revises the resetting logic to be independent of the
error blacklist and makes use of UploadInfo::errorCount
instead.
412 errors should reset chunked uploads because they might be
indicative of a checksum error.
Additionally, server bugs might require that additional
errors cause an upload reset. To allow that, a new capability
is added that can be used to advise the client about this.
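A minimal sketch of the resulting decision; the way the advertised codes reach
this check is an assumption about the surrounding code.

    #include <QList>

    // Hedged sketch: 412 always resets the chunked upload (the stored chunks may
    // not match what the checksum validation expects); any extra codes advertised
    // through the new capability get the same treatment.
    static bool shouldResetChunkedUpload(int httpStatus, const QList<int> &codesFromCapability)
    {
        if (httpStatus == 412)
            return true;
        return codesFromCapability.contains(httpStatus);
    }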
This displayName() seemed to be based on Account::user() which used
to call _credentials->user(). But then we repurposed user() to be
davUser() and this usage wasn't updated to point back to the username
used for the credentials.
Some custom servers use persistent cookies with the auth token, so we should
clear all the cookies when disconnecting.
Account::clearCookieJar is only called from the HTTPCredentials. This function
is not used for Shibboleth.
There is probably no reason to keep the HTTP cookies anyway.
Issue #5370
This re-enables the UI, uses Qt API for importing, and
stores the certificate/key in the system keychain.
People who had set up client certs need to set up the account again. This is OK
since it was an undocumented feature anyway.
Fixes #5408 #5407.
The problem was that cleanup of the credentials page set the
credentials of the account back to dummy, thereby overriding
things like shib usernames.
This has probably been broken since a932eac832.
Otherwise it might happen that the model is inconsistent, and this can
lead to a crash in the worst case.
(For example, if there was a "fetching" label and we hid it because it
was a 404. In this case, we would not call begin/endRemoveRows, so the
view could still call the model with an index of row 0, which used to be
for the label but now corresponds to the first element of _subs. And
because _subs is empty, this could lead to crashes.)
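A small, simplified illustration of the rule (not the real FolderStatusModel):
dropping the placeholder row must go through begin/endRemoveRows so attached
views never keep indexing a row that no longer exists.

    #include <QAbstractListModel>
    #include <QStringList>
    #include <QVariant>

    class ListWithPlaceholder : public QAbstractListModel {
    public:
        int rowCount(const QModelIndex & = QModelIndex()) const override
        {
            return _items.size() + (_showPlaceholder ? 1 : 0);
        }
        QVariant data(const QModelIndex &index, int role) const override
        {
            if (role != Qt::DisplayRole)
                return QVariant();
            if (_showPlaceholder && index.row() == 0)
                return QStringLiteral("Fetching...");
            return _items.value(index.row() - (_showPlaceholder ? 1 : 0));
        }
        void hidePlaceholder()                      // e.g. the request came back 404
        {
            if (!_showPlaceholder)
                return;
            beginRemoveRows(QModelIndex(), 0, 0);   // tell views row 0 is going away
            _showPlaceholder = false;
            endRemoveRows();                        // row 0 now maps to _items[0]
        }
    private:
        bool _showPlaceholder = true;
        QStringList _items;
    };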
This makes sure that the network job gets deleted if the parent job gets
deleted, and avoids crashes like:
Crash: EXCEPTION_ACCESS_VIOLATION_READ at 0xffffffff8b008a04
File "qiodevice.cpp", line 1617, in QIODevice::errorString
File "propagatedownload.cpp", line 264, in OCC::GETFileJob::slotReadyRead
File "moc_propagatedownload.cpp", line 85, in OCC::GETFileJob::qt_static_metacall
File "qobject.cpp", line 3716, in QMetaObject::activate
File "moc_qiodevice.cpp", line 154, in QIODevice::readyRead
File "qnetworkreplyhttpimpl.cpp", line 1045, in QNetworkReplyHttpImplPrivate::replyDownloadData
(#5329)
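A minimal sketch of the ownership idea, with hypothetical names (the real
change is in the download propagator): a QObject parent makes Qt delete the
network job together with its owner, so queued readyRead deliveries can no
longer reach a job whose owner is already gone.

    #include <QObject>

    void adoptNetworkJob(QObject *owner, QObject *networkJob)
    {
        // Same effect as passing `owner` as the parent argument of the job's constructor.
        networkJob->setParent(owner);
    }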
The csync log level was only set up on startup, and only for log files.
Fix the issue by making Logger::isNoop rely on being explicitly activated
for the log window instead of relying on the presence of a connected
signal, and move the csync log level logic into Logger.
The compiler seems to use signed enums, and we need to reserve an extra
bit for the sign to avoid the value 2 overflowing and being interpreted
as -2 when read, and thus not comparing correctly to the full enum
value.
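A small, self-contained illustration of that overflow (not the actual ownCloud
code); the enum and field names are made up.

    #include <cassert>

    enum Tag { TagNone = 0, TagA = 1, TagB = 2, TagC = 3 };

    struct Packed {
        Tag twoBits : 2;    // if the compiler treats this as signed, it holds -2..1,
                            // so storing the value 2 wraps around to -2
        Tag threeBits : 3;  // one extra bit reserved for the sign keeps 0..3 intact
    };

    int main()
    {
        Packed p;
        p.twoBits = TagB;   // may compare unequal to TagB when read back
        p.threeBits = TagB; // always reads back as 2
        assert(static_cast<int>(p.threeBits) == TagB);
        return 0;
    }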
The rules to select the webdav url are now (see the sketch below):
- If the server reports that the new chunking algorithm is working,
always use remote.php/dav/files/<username>.
This capability can be overridden with an environment variable.
- Otherwise, use the dav path provided by the theme, which defaults to
remote.php/webdav
This means that with the newer server, the branding can no longer override
the webdav URL. If there is still a use case for the branding to do so, we
need to find another way to override it. But it is now more complicated to
configure, as it might need to include the username and use different
endpoints depending on the operation (chunking or not).
Issue #4007
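A minimal sketch of those rules; it assumes the chunking capability is already
available as a boolean, and the environment variable name is illustrative, not
the real override.

    #include <QString>
    #include <QtGlobal>

    QString webDavPath(bool serverSupportsNewChunking, const QString &davUser, const QString &themeDavPath)
    {
        // Illustrative knob standing in for the real environment variable override.
        const bool forceOldWebdav = qEnvironmentVariableIsSet("EXAMPLE_FORCE_OLD_WEBDAV");

        if (serverSupportsNewChunking && !forceOldWebdav)
            return QStringLiteral("remote.php/dav/files/") + davUser + QLatin1Char('/');

        // Older servers: the theme-provided path, defaulting to remote.php/webdav.
        return themeDavPath;
    }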
As the URL might be printed in the logs.
Also don't change the scheme from http to owncloud.
This was required before when we were using neon through csync, but now that
we use QNAM for everything we don't need it. The credentials from the account
are used.
We are going to change the webdav path depending on the capabilities.
But the SyncEngine and csync might have been created before the capabilities
are retrieved.
The main reason why we gave the path to the sync engine was to pass it to csync.
But csync no longer needs this URL: everything is done by the discovery classes
in libsync, which use the network jobs that get the URLs from the account.
So csync does not need the remote URI.
shortenFilename in folderstatusmodel.cpp was useless because the string is the
_file of a SyncFileItem, which is the relative file name; that name never
starts with owncloud://.
All the csync tests create the folder because csync used to check whether the
folder exists. But we don't need to do that anymore.
Some listeners detect whether a sync is finished by checking
for isUpdatingEstimates and completedFiles >= totalFiles. But
if a sync didn't transfer any files, we never sent a signal
with these values. Now we do.
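A minimal sketch of the check such a listener performs, with assumed
accessors; without a final emission carrying completedFiles == totalFiles,
even for a sync that transferred nothing, this never becomes true.

    #include <QtGlobal>

    static bool syncLooksFinished(bool isUpdatingEstimates, qint64 completedFiles, qint64 totalFiles)
    {
        return isUpdatingEstimates && completedFiles >= totalFiles;
    }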
The "S" in the permission is only for the "Shared with me" files.
It is only used to show the shared status in the overlay icons.
But we also wish to show the shared status for files that are shared
"by" the users. We can find that out using the 'share-types' webdav
property. If it is set, then we are sharing the object.
We fake an 'S' in the permissions since, for our purpose, they mean the same.
Issue #4788
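A minimal sketch of that mapping, not the actual discovery code; the helper
name and the boolean for the share-types property are assumptions.

    #include <QString>

    // For the overlay icons, "shared with me" and "shared by me" mean the same,
    // so an 'S' is faked when the share-types property says we shared the item.
    QString effectivePermissions(QString remotePerm, bool hasShareTypes)
    {
        if (hasShareTypes && !remotePerm.contains(QLatin1Char('S')))
            remotePerm.append(QLatin1Char('S'));
        return remotePerm;
    }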
The ownCloud 9.1 server has a data-fingerprint property that the admin must
change in case of backup restoration. When this changes, the client understands
that a backup was restored, and will generate conflict files and re-upload
new files.
The heuristics-based system checks that there are at least two files whose mtime
was set back in the past and no files whose mtime went forward. In that case we
ask the user before creating the conflicts.
This commit disables the heuristics for newer servers that have the data-fingerprint,
and changes the heuristics threshold to two hours because we want to avoid false
positives due to clock errors, and losing 2 hours due to a backup restoration is
probably not so bad.
We only ask the user in the heuristics-based approach, so in practice this means
that the "backup detected" dialog will no longer appear with newer servers.
Relates to issues #5260, #5109
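A minimal sketch of the heuristic with assumed inputs (it only applies when
the server has no data-fingerprint): at least two files must have gone back in
time by more than the two-hour slack, and none forward, before the user is
asked about a possible backup restoration.

    #include <QVector>
    #include <QtGlobal>

    bool looksLikeBackupRestoration(const QVector<qint64> &mtimeDeltasSeconds)
    {
        const qint64 slack = 2 * 3600;                // two hours, to tolerate clock errors
        int movedBack = 0;
        for (qint64 delta : mtimeDeltasSeconds) {     // delta = newMtime - oldMtime
            if (delta < -slack)
                ++movedBack;
            else if (delta > 0)
                return false;                         // any forward move rules it out
        }
        return movedBack >= 2;
    }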
Otherwise progress listeners think it's still the last-completed
item when the next sync starts. This led to spurious entries in
the "Recent Changes" list.
Update from commit 05ce8a23cdc12e825532dc6de06c267fb8d48b4f from
https://github.com/dragotin/QProgressIndicator
Which itself is forked from commit e5ba0fd09bfd43b067ee3646d70b294c7efcb558 from
upstream, with additional license header.
It was relicensed to MIT according to
14bb9d10e2
Relates to issues #5180 and #5184
Issue #5224
Two problems:
- In the discovery phase, we need to check the selective sync entries of
the source path in case of renames.
- When the rename is done, we need to actually update the black list in the
database.
Some tests (such as FolderManTest) can pollute the config file with invalid
accounts.
(That's because most of the code, even in libsync, always instantiates a ConfigFile.)
The crash reporter shows a lot of crashes in sqlite3_clear_bindings
which seems to indicate that _stmt is null. We should guard against
a null value in order to avoid crashing.
This should only happen if the prepare call fails. We don't usually
check the return value of the prepare call, but if _stmt is null, the
exec call should return false, not true. We check the result of the
exec call, so this should then abort the sync with an error, rather
than crashing.
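A minimal sketch of the guard, with the statement handle passed in directly
instead of the real SqlQuery member:

    #include <sqlite3.h>

    // If prepare() failed, the handle is null: report failure instead of handing
    // the null pointer to sqlite3, so the caller aborts the sync with an error.
    bool execStatement(sqlite3_stmt *stmt)
    {
        if (!stmt)
            return false;
        return sqlite3_step(stmt) == SQLITE_DONE;
    }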
- Use a white icon if the context menu is visible.
- Enable `QIcon::setIsMask` if compiled on Qt >= 5.6 to allow automatic
macOS color handling.
- No changes if the colored icons are used.
One local folder can now be configured as a sync target for multiple
accounts as long as their URL and user differ.
Also, this patch accepts that the sync folder is behind a symlink.
Also, this patch fixes a bug where the user input was previously taken
canonically, which did not work with the symlink handling.
Instead of using the regular selective-sync UI (where it's unclear what
the "Cancel" button would even mean in this context), provide a
different set of buttons that allow the user to quickly synchronize
all pending big folders, none of them, or perform manual changes
as usual.
Tray: Workaround collection
* QDBus workaround for Qt 5.5.0 only, there were reports of the tray
working fine with 5.5.1. #5164
* OWNCLOUD_FORCE_QDBUS_TRAY_WORKAROUND to force the workaround on or off
* OWNCLOUD_TRAY_UPDATE_WHILE_VISIBLE to enable or disable updating of
the menu while it's visible - disabled by default due to problems on OSX and Xubuntu.
* Track the visibility of the tray menu with aboutToShow/aboutToHide
only on OSX - the aboutToHide signal doesn't trigger reliably on linux
* Refactor such that setupContextMenu is different from updateContextMenu
* Don't use on-demand updating of the tray menu when the qdbus workaround
is active; instead do occasional (30s) updates of the tray menu.
We ignored the csync log, but we also need to silence Qt debug output.
We need to ignore it at the very beginning because there might be
qDebug calls also in account creation.
Issue #5196
Two bugs:
- Changed files are not considered as moves; they are re-downloaded,
but the old file was not removed from the database. The change in
owncloudpropagator.cpp takes care of removing the old entries.
- The next sync would then remove the file on the server in the old folder.
This was not a problem until we started reusing the sync engine, so
that the _renamedFolders map is not cleared. Before, we were deleting
a non-existing file. But now we delete the actual file.
Also improve the tests to be able to do moves on the server.
This includes support for file ids.
Issue #5192
In case of the root directory, it may happen that the _item
is empty and the _item->_status is NoStatus. But we still need to report
the proper success or error of the whole propagation. We should really
use _hasError for that. However, _hasError is also NoStatus
if there was no error, so in that case we need to set Success.
This fixes the problem in which the data-fingerprint is not saved in the
database because the SyncEngine thinks that the sync failed. (Issue #5185)
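A minimal sketch of that fallback; the function name is illustrative and it
assumes the SyncFileItem::Status values from libsync.

    #include "syncfileitem.h"   // assumed libsync header providing SyncFileItem::Status

    static SyncFileItem::Status finalStatusForRootJob(SyncFileItem::Status itemStatus,
                                                      SyncFileItem::Status hasError)
    {
        if (itemStatus != SyncFileItem::NoStatus)
            return itemStatus;
        // _hasError is also NoStatus when nothing went wrong, so that case becomes Success.
        return hasError == SyncFileItem::NoStatus ? SyncFileItem::Success : hasError;
    }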
We were not connecting to the right signal from the check server
job, and therefore we were not catching the condition in which the
json was invalid. We would then never terminate the ConnectionValidator job.
Note that instanceNotFound is also emitted if there is a network error.
The log looked like this:
10:25:51.247 OCC::CheckServerJob::finished: status.php from server is not valid JSON!
10:25:51.248 OCC::CheckServerJob::finished: status.php returns: QMap() QNetworkReply::NetworkError(NoError) Reply: QNetworkReplyHttpImpl(0x2b6a790)
10:25:51.248 OCC::CheckServerJob::finished: No proper answer on QUrl("http://localhost/~owncloud/status.php")
10:26:23.235 OCC::AccountState::checkConnectivity: ConnectionValidator already running, ignoring "owncloud@localhost"
10:26:55.235 OCC::AccountState::checkConnectivity: ConnectionValidator already running, ignoring "owncloud@localhost"
[...]
- Replace functions that are provided by MinGW with a Win32-based
implementation
- Explicitly export needed symbols from ocsync.dll
- Rename share.h to sharemanager.h since the name clashes with one
of the Windows headers and gets included from there
- Remove the timestamp from the fallback csync stderr logging, it's
not used since we always provide a log callback