Update from commit 05ce8a23cdc12e825532dc6de06c267fb8d48b4f from
https://github.com/dragotin/QProgressIndicator
Which itself is forked from commit e5ba0fd09bfd43b067ee3646d70b294c7efcb558 from
upstream, with additional license header.
It was relicensed to MIT according to
14bb9d10e2
Relates to issues #5180 and #5184
Issue #5224
Two problems:
- In the discovery phase, we need to check the selective sync entries of
the source path in case of renames.
- When the rename is done, we need to actually update the black list in the
database.
Some tests (such as FolderManTest) can pollute the config file with invalid
accounts.
(That's because most of the code, even in libsync, always instantiates a ConfigFile.)
The crash reporter shows a lot of crashes in sqlite3_clear_bindings
which seems to indicate that _stmt is null. We should guard against
a null value in order to avoid crashing.
This should only happen if the prepare call fails. We don't usually
check the return value of the prepare call, but if _stmt is null, the
exec call should return false, not true. We check the result of the
exec call, so this should then abort the sync with an error, rather
than crashing.
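A minimal sketch of that guard; the wrapper function is illustrative (only the sqlite3_* calls are the real SQLite API):
    #include <sqlite3.h>

    // "stmt" may be null when the earlier prepare call failed.
    static bool safeExec(sqlite3_stmt *stmt)
    {
        if (!stmt)
            return false;        // abort the sync with an error instead of crashing
        sqlite3_clear_bindings(stmt);
        int rc = sqlite3_step(stmt);
        return rc == SQLITE_DONE || rc == SQLITE_ROW;
    }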
- Use a white icon if the context menu is visible.
- Enable `QIcon::setIsMask` if compiled on Qt >= 5.6 to allow automatic
  macOS color handling (see the sketch after this list).
- No changes if the colored icons are used.
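A small sketch of the Qt-version guard mentioned above; the helper name and parameters are assumptions:
    #include <QIcon>
    #include <QString>
    #include <QtGlobal>

    static QIcon makeTrayIcon(const QString &iconPath, bool monochrome)
    {
        QIcon icon(iconPath);
    #if QT_VERSION >= QT_VERSION_CHECK(5, 6, 0)
        if (monochrome)
            icon.setIsMask(true); // macOS treats the icon as a template and recolors it
    #endif
        return icon;
    }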
One local folder can now be configured as sync target for multiple
accounts as long as their url and user differ.
This patch also accepts a sync folder that is behind a symlink.
It also fixes a bug where the user input was previously taken
canonically, which broke the symlink handling.
Instead of using the regular selective-sync UI (where it's unclear what
the "Cancel" button would even mean in this context), provide a
different set of buttons that allow the user to quickly synchronize
all pending big folders, none of them, or perform manual changes
as usual.
Tray: Workaround collection
* QDBus workaround for Qt 5.5.0 only, there were reports of the tray
working fine with 5.5.1. #5164
* OWNCLOUD_FORCE_QDBUS_TRAY_WORKAROUND to force the workaround on and off
  (see the sketch after this list)
* OWNCLOUD_TRAY_UPDATE_WHILE_VISIBLE to enable or disable updating of
the menu while it's visible - disabled by default due to problems on OS X and Xubuntu.
* Track the visibility of the tray menu with aboutToShow/aboutToHide
only on OSX - the aboutToHide signal doesn't trigger reliably on linux
* Refactor such that setupContextMenu is different from updateContextMenu
* Don't use on-demand updating of the tray menu when the qdbus workaround
is active; instead do occasional (30 s) updates of the tray menu.
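A rough sketch of how such an environment override could be read; the variable name comes from the list above, while the helper and its exact on/off semantics are assumptions:
    #include <QByteArray>
    #include <QtGlobal>

    // When the environment variable is unset we fall back to the Qt-version heuristic.
    static bool useQDBusTrayWorkaround(bool qtVersionDefault)
    {
        const QByteArray forced = qgetenv("OWNCLOUD_FORCE_QDBUS_TRAY_WORKAROUND");
        if (forced.isEmpty())
            return qtVersionDefault;
        return forced == "1";   // "1" forces it on, any other value forces it off
    }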
We ignored the csync log, but we also need to silence Qt debug output.
We need to ignore it at the very beginning because there might be
qDebug calls also in account creation.
Issue #5196
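One possible way to silence Qt's debug output very early, sketched with the Qt 5 qInstallMessageHandler API; the handler name is illustrative and the actual test setup may differ:
    #include <QString>
    #include <QtGlobal>

    static void dropQtMessages(QtMsgType type, const QMessageLogContext &context,
                               const QString &message)
    {
        Q_UNUSED(type);
        Q_UNUSED(context);
        Q_UNUSED(message);
        // Swallow qDebug() and friends; a real handler could still forward
        // warnings and critical messages somewhere useful.
    }

    // At the very beginning of the test's main(), before any account is created:
    //     qInstallMessageHandler(dropQtMessages);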
Two bugs:
- Changed files are not considered as a move; they are re-downloaded,
but the old file was not removed from the database. The change in
owncloudpropagator.cpp takes care of removing the old entries.
- The next sync would then remove the file on the server in the old folder.
This was not a problem until we started reusing the sync engine and
the _renamedFolders map was no longer cleared. Before, we were deleting
a non-existing file; now we would delete the actual file.
Also improve the tests to be able to do moves on the server.
This includes support for file ids.
Issue #5192
In case of the root directory, it may happen that the _item
is empty and the _item->_status is NoStatus. But we still need to report
the proper success or error of the whole propagation. We should really
use _hasError for that. However, _hasError is also defined to NoStatus
if there was no error, so in that case we need to set Success.
This fixes the problem in which the data-fingerprint is not saved on the
database because the SyncEngine think that the sync failed. (Issue #5185)
We were not connecting to the right signal from the check server
job, and therefore we were not catching the condition in which the
JSON was invalid. We would then never terminate the ConnectionValidator job.
Note that instanceNotFound is also emitted if there is a network error.
The log looked like this:
10:25:51.247 OCC::CheckServerJob::finished: status.php from server is not valid JSON!
10:25:51.248 OCC::CheckServerJob::finished: status.php returns: QMap() QNetworkReply::NetworkError(NoError) Reply: QNetworkReplyHttpImpl(0x2b6a790)
10:25:51.248 OCC::CheckServerJob::finished: No proper answer on QUrl("http://localhost/~owncloud/status.php")
10:26:23.235 OCC::AccountState::checkConnectivity: ConnectionValidator already running, ignoring "owncloud@localhost"
10:26:55.235 OCC::AccountState::checkConnectivity: ConnectionValidator already running, ignoring "owncloud@localhost"
[...]
- Replace functions that are provided by MinGW with a Win32-based
implementation
- Explicitly export needed symbols from ocsync.dll
- Rename share.h to sharemanager.h since the name clashes with one
of the Windows headers and get included from there
- Remove the timestamp from the fallback csync stderr logging, it's
not used since we always provide a log callback
These are not understood by owncloud yet, but were requested for CernBox
OC-Total-Length in the MKCOL: the full length of the file
OC-Chunk-Offset in the PUT: the offset within the file to which this chunk belongs
OC-Checksum in the MOVE: the transmission checksum
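Illustrative only: roughly how such headers could be attached to the corresponding requests. The header names and their request types come from the list above; the functions and variable names are assumptions.
    #include <QByteArray>
    #include <QNetworkRequest>

    // MKCOL announcing the full length of the upcoming chunked upload.
    void addTotalLengthHeader(QNetworkRequest &mkcolRequest, qint64 fileSize)
    {
        mkcolRequest.setRawHeader("OC-Total-Length", QByteArray::number(fileSize));
    }

    // PUT of a single chunk, telling the server where the chunk belongs.
    void addChunkOffsetHeader(QNetworkRequest &putRequest, qint64 chunkOffset)
    {
        putRequest.setRawHeader("OC-Chunk-Offset", QByteArray::number(chunkOffset));
    }

    // Final MOVE assembling the chunks, carrying the transmission checksum.
    void addChecksumHeader(QNetworkRequest &moveRequest, const QByteArray &checksumHeader)
    {
        moveRequest.setRawHeader("OC-Checksum", checksumHeader);
    }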
Two bugs:
- Changed files are not considered as a move; they are re-downloaded,
but the old file was not removed from the database. The change in
owncloudpropagator.cpp takes care of removing the old entries.
- The next sync would then remove the file on the server in the old folder.
This was not a problem until we started reusing the sync engine and
the _renamedFolders map was no longer cleared. Before, we were deleting
a non-existing file; now we would delete the actual file.
Also improve the tests to be able to do moves on the server.
This includes support for file ids.
Issue #5192
(cherry picked from commit 85b8ab178e)
We were not connecting to the right signal from the check server
job, and therefore we were not catching the condition in which the
JSON was invalid. We would then never terminate the ConnectionValidator job.
Note that instanceNotFound is also emitted if there is a network error.
The log looked like this:
10:25:51.247 OCC::CheckServerJob::finished: status.php from server is not valid JSON!
10:25:51.248 OCC::CheckServerJob::finished: status.php returns: QMap() QNetworkReply::NetworkError(NoError) Reply: QNetworkReplyHttpImpl(0x2b6a790)
10:25:51.248 OCC::CheckServerJob::finished: No proper answer on QUrl("http://localhost/~owncloud/status.php")
10:26:23.235 OCC::AccountState::checkConnectivity: ConnectionValidator already running, ignoring "owncloud@localhost"
10:26:55.235 OCC::AccountState::checkConnectivity: ConnectionValidator already running, ignoring "owncloud@localhost"
[...]
(cherry picked from commit ff701bd473)
This is not sufficient as it is not working for the Socket API.
Next commit will fix it in another layer.
Also, not ignoring paths that are not inside the folder is wrong,
as this might still happen if the name has a different casing.
This reverts commit d5a481f132.
(cherry picked from commit 904cd46f75)
Previously, we only checked the hiddenness of the target file and
ignored the hiddenness of the containing folders. This led to
undesired behavior when people synced their home folders and there
was a folder watcher notification for a non-hidden file in one of
the hidden folders.
I'm not fully sure why, but sometimes notifications for .foo/bar were
already ignored, but notifications for .foo/bar/car were not. This may
be because of how we set up the FolderWatchers on Linux.
The new behavior is to check all path components for hiddenness, up
until the base path (but excluding the base path, so using a hidden
folder as a sync folder will work).
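A sketch of the new check, assuming an absolute path below an absolute base path; the function name and exact component handling are illustrative:
    #include <QFileInfo>
    #include <QString>

    // Walk from the notified path up towards the sync folder root and stop
    // before checking the root itself, so a hidden sync folder still works.
    static bool containsHiddenComponent(const QString &basePath, QString path)
    {
        while (path.size() > basePath.size() && path.startsWith(basePath)) {
            QFileInfo fi(path);
            if (fi.isHidden() || fi.fileName().startsWith(QLatin1Char('.')))
                return true;          // hidden file or hidden containing folder
            path = fi.path();         // strip the last component and continue
        }
        return false;
    }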
This is important because we compare the paths from the file system watcher if it
starts with this path.
Same in the SocketAPI, where we need to use canonical paths in the REGISTER_PATH command,
as the plugins themselves will do this comparison.
Issue #5116
This is not sufficient as it is not working for the Socket API.
Next commit will fix it in another layer.
Also, not ignoring paths that are not inside the folder is wrong,
as this might still happen if the name has a different casing.
This reverts commit d5a481f132.
If the folder has a different case in the settings and on the file system, we should
not ignore all the files. This is important for the file system watcher.
(cherry picked from commit 98268d102f)
Missing deleteLater when the CleanupPollsJob aborts.
This is only a problem if the SyncEngine is kept alive for a long time, which is
usually not the case in the configurations where poll jobs are used.
(cherry picked from commit 3465024898)
I got into a situation where the model would endlessly request the directory
contents from the server because we did not notice yet that the server
is actually in maintenance mode while we were expanding the tree view when
changing the tab to the account or when just expanding it by clicking.
(cherry picked from commit 524220d090)
I got into a situation where the model would endlessly request the directory
contents from the server because we did not notice yet that the server
is actually in maintenance mode while we were expanding the tree view when
changing the tab to the account or when just expanding it by clicking.
Before, it was in Folder; however, the command line client does not
have the Folder class. To avoid duplicating code, the function to generate
the sync journal name moved to the SyncEngine class.
This would happen if the directory would first need to be created
through an mkdir propagation job. This job's itemCompleted signal
would trigger the directory to show as SYNC even though its children
are still propagating.
Fix the issue by tracking the sync count for each file, affecting
its parents. This allows us to get rid of the O(n) vector lookup
for each status query, and properly track the hierarchical sync
status of a directory.
This also removes the itemCompleted signal emission from the
PropagateDirectory job. Since we only needed it for overlay icons, and
since this job doesn't do any direct propagation, we can remove it
to ensure that we won't call itemCompleted twice for the item attached
to Propagate*Mkdir jobs (since the PropagateDirectory is backed by
the same SyncFileItem, instruction and status).
The current way of tracking the need to update the metadata without
propagation using a separate flag makes it difficult to track
priorities between the local and remote tree. The logic is also
difficult to logically cover since the possibilities matrix isn't
100% covered, leaving the flag only used in a few situations
(mostly involving folders, but not only).
The reason we need to change this is to be able to track the sync
state of files for overlay icons. The instruction alone can't be
used since CSYNC_INSTRUCTION_SYNC is used for folders even though
they won't be propagated. Removing this logic is however not possible
without using something else than CSYNC_INSTRUCTION_NONE since too
many codepath interpret (rightfully) this as meaning "nothing to do".
This patch adds a new CSYNC_INSTRUCTION_UPDATE_METADATA instruction
to let the update and reconcile steps tell the SyncEngine to update
the metadata of a file without any propagation. Other flags are left
to be interpreted by the implementation as implicitly needing a
metadata update or not, as this was already the case for most file
propagation jobs. For example, CSYNC_INSTRUCTION_NEW for directories
now also implicitly updates the metadata.
Since it's now impossible for folders to emit CSYNC_INSTRUCTION_SYNC
or CSYNC_INSTRUCTION_CONFLICT, the corresponding code paths in the
sync engine have been removed.
Since the reconcile step can now know if the local tree needs metadata
update while the remote side might want propagation, the
localMetadataUpdate logic in SyncEngine::treewalkFile now simply uses
a CSYNC_INSTRUCTION_UPDATE_METADATA for the local side, which is now
implemented as a different database query.
Add a missing call that we currently only do in slotItemCompleted.
This would normally only affect the first sync and would have
gotten properly updated at the end of the sync anyway.
To be able to test the SyncEngine efficiently, a set of server
mocking classes have been implemented on top of QNetworkAccessManager.
The local disk side hasn't been mocked since this would require adding
a large abstraction layer in csync. The SyncEngine is instead pointed
to a different temporary dir in each test and we test by interacting
with files in this directory instead.
The FakeFolder object wraps the SyncEngine with those abstractions
and allows controlling the local files and the fake remote state
through the FileModifier interface, using a FileInfo tree structure
for the remote-side implementation as well as feeding and comparing
the states on both sides in tests.
Tests run fast and require no setup to be run, but each server feature
that we want to test on the client side needs to be implemented in
this fake objects library. For example, the OC-FileId header isn't
set as of this commit, and we can't test the file move logic properly
without implementing it first.
The TestSyncFileStatusTracker tests already contain a few QEXPECT_FAIL
for what I consider to be issues that need to be fixed in order to catch
up on our test coverage without making this patch too huge.
This is a move away from the original policy where jobs
would only follow redirects in special cases.
Two restrictions are in place:
1. We do not allow protocol downgrades (https -> http)
2. We stop redirects after we find them looping (e.g. old = new url, or
indirectly when looping 10 times).
This is closer to RFC conforming behavior, although currently
we will treat 301 replies like they were 302. This is for a separate
commit.
Error handling (and display) also needs improvement.
Addresses #2791
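A rough sketch of the two restrictions, outside of any real job code; the names and the fixed limit of 10 are illustrative of the description above:
    #include <QString>
    #include <QUrl>

    static bool redirectAllowed(const QUrl &oldUrl, const QUrl &newUrl, int redirectsFollowed)
    {
        // 1. No protocol downgrade from https to http.
        if (oldUrl.scheme() == QLatin1String("https")
                && newUrl.scheme() == QLatin1String("http"))
            return false;
        // 2. Stop when the redirect loops back to the same URL, or indirectly
        //    when we already followed too many redirects.
        if (newUrl == oldUrl || redirectsFollowed >= 10)
            return false;
        return true;
    }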
Once upon a time, the SyncEngine was instantiated once per sync. But now that
the SyncEngine is kept between syncs, we need to reset all these variables between
syncs.
Reverts commit 622017adcf
Could be the cause of #5092 and the cost is higher than the benefit if this is the case.
A network request taking more than 30 seconds isn't something unlikely in this world
and shouldn't be a good reason to abort. We should try to untangle the threads
dependencies to properly fix this if possible instead.
It opens a window and connects to a cipher test
page, showing the output from there, which helps with debugging.
The window is enabled by setting the environment variable
OWNCLOUD_SHIBBOLETH_DEBUG
Missing deleteLater when the CleanupPollsJob aborts.
This is only a problem if the SyncEngine is kept alive for a long time, which is
usually not the case in the configurations where poll jobs are used.
While loading the account, only override the server url if Theme::forceConfigAuthType
is set. This restores the behavior of client 2.1 for themes that did not
use Theme::forceConfigAuthType.
Issue: owncloud/enterprise#1418
This was removed in 0194ebb222
because it breaks on Linux. However, it looks like it is correct
for Windows. In the meantime the surrounding ifdef has changed
from !Q_OS_MAC to Q_OS_WIN, so reverting it makes sense.
Since the SyncEngine now quits and waits for the discovery thread,
the main thread can enter a deadlock where the discovery thread waits
for its directory result.
Add a 2-second timeout to the discovery thread wait condition
to limit the deadlock time.
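A sketch of the bounded wait; the function and parameter names are assumptions. Instead of blocking forever on the directory result, the wait wakes up every 2 seconds so an abort request can break a potential deadlock.
    #include <QMutex>
    #include <QMutexLocker>
    #include <QWaitCondition>

    void waitForDirectoryResult(QMutex &mutex, QWaitCondition &haveResult,
                                const bool &resultReady, const bool &aborting)
    {
        QMutexLocker locker(&mutex);
        while (!resultReady) {
            if (!haveResult.wait(&mutex, 2000 /* ms */) && aborting)
                return;    // timed out and we are shutting down: give up waiting
        }
    }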
This disables the workaround 487e1fdca5ee04fc98c1ed77898df70d740967c8
for servers that are new enough to support fine grained permissions
on federated shares.
The consequence is that the 'reshare' permission is now granted by
default and that users can edit permissions on the usual fine-grained
level again.
The way the client deals with servers <9.1 is unchanged.
Use a QMap to avoid using a full hashtable for only a few entries, and
clear the QMap once we're done with the measuring. This saves a few
hundred bytes per job during propagation that would otherwise only be
freed at the end of the sync.
During propagation, we create a line for each file, taking memory, but
we delete all lines past 2000 right at the beginning of the next sync.
Since the user has little chance of being able to read past those 2000
lines in the log, we might as well keep it capped at 2000 also during
propagation to prevent it from eating memory.
The SyncRunFileLog owned by the Folder must be destroyed after the
SyncEngine since the SyncEngine will abort during destruction, resulting
in all jobs being aborted.
It's possible that this crash only happens with a debug build.
The FolderWatcher inserts files to be marked as SYNC and we
currently assume that all file statuses will be updated by the
following sync. It's however possible that the FolderWatcher
notify us of a change that csync won't consider necessary to
propagate, in which case a new status wouldn't be pushed and
the file manager would continue showing this file as syncing.
Re-push the file status when emptying the dirty files list
before propagating to avoid this issue, most likely the OK
status.
No need to allocate (and initialize to 0) a 10 MiB buffer for each file, even
when most files are much smaller than that.
So make sure the buffer that we allocate is not bigger than the file size.
Also, 10 MiB is a bit big for a buffer; 500 KiB should be more than enough.
(Too big allocations can cause problems because of memory fragmentation and such.)
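The sizing rule above as a tiny sketch; the helper name is illustrative, the 500 KiB cap comes from the text:
    #include <QtGlobal>

    static qint64 readBufferSize(qint64 fileSize)
    {
        const qint64 maxBufferSize = 500 * 1024;   // cap at 500 KiB instead of 10 MiB
        return qMin(fileSize, maxBufferSize);      // never allocate more than the file needs
    }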
We first need to set the abort flag on csync and then abort the discovery
job; otherwise, the discovery thread could start a new job in the meantime.
We also need to make sure that the thread has exited before we destroy the
exclude list.
Events from the crash reporter suggest that the QNAM and its
child replies might get deleted before returning from this method
and the only possible cause we can see is that the inner event
loop has something to do with it.
Try keeping a ref on the QNAM while in this method to make sure
that it won't get deleted by the inner event loop.
Same fix as in commit 60c101d9
From the crash reporter:
Crash
EXCEPTION_ACCESS_VIOLATION_READ at 0x4
qnetworkreply.cpp in QNetworkReply::request at line 476
propagateupload.cpp in OCC::PUTFileJob::slotTimeout at line 100
moc_abstractnetworkjob.cpp in OCC::AbstractNetworkJob::qt_static_metacall at line 98
qobject.cpp in QMetaObject::activate at line 3716
moc_qtimer.cpp in QTimer::timeout at line 192
qtimer.cpp in QTimer::timerEvent at line 247
qobject.cpp in QObject::event at line 1267
qapplication.cpp in QApplicationPrivate::notify_helper at line 3722
qapplication.cpp in QApplication::notify at line 3505
qcoreapplication.cpp in QCoreApplication::notifyInternal at line 932
FolderDefinition::save and load escape the alias. We also need to escape
it when we remove it.
New folders can't be created with an alias that needs escaping, but old folders
from old configs may still exist, and we must allow users to delete them.
For issue #4927:
On Windows 10, we get a notification after the sync is finished for files that were
just downloaded. The guard we have against our "own changes" only works while
the sync is running and the OwncloudPropagator is still alive.
Setting the environment variable only for owncloud makes it inconsistent with
other Qt applications running at the same time.
Users can still set it themselves for the whole desktop if they wish.
Addresses #4840
Previously rejecting any kind of certificate meant that the user
was never asked again, even if the certificate changed.
Now we keep track of which certificates were rejected and ask again
if the ones mentioned in the ssl errors change.
mitmproxy is excellent for testing this.
* Progress: Don't count dirs without propagation jobs #4856
These directory SyncFileItems are necessary for bookkeeping
but should not influence the progress display at all.
* Progress: Skip ignored files #4856
The problem in this case is if we rename the file "xxx" to "invalid\file".
The rename will fail because the new filename contains a slash, and it
will be blacklisted.
But then if the user re-renames the file to "valid_name", we should
invalidate the blacklist entry and retry the upload. But we did not do
that, because renaming doesn't change the mtime and we did not store the
rename target in the database.
IL issue 558
This fixes an issue in which too many jobs are started in parallel
while uploading many files, which could cause too much memory usage as the
chunks are stored in memory.
Probably the fix for #4611
Issue #4855
A typo in the context string made the translation lookup fail.
But the %Ln was also not recognized as a plural form by Transifex, so only
the singular was translated.
The assert was there to make sure that this case wasn't happening
to eventually be properly tested. Remove the assert for now but this
codepath should eventually be unit tested using this specific situation.
Since the windows implementation first does cache lookups using the
path string, directories need to be passed identically as through
RETRIEVE_FILE_STATUS.
Change the convention to never have a trailing slash for directories
in the protocol. This allows the convention to be applied without
having to access the disk (since we'd need to know if the path is
represented by a directory) and also matches the convention of the
rest of the sync engine. Individual file manager plugins are then
responsible for handling pushed paths as not ending with a trailing
slash.
This also:
- Moves the trailing slash removal logic from the SyncFileStatusTracker
to the SocketApi class
- Remove the unneeded QString::normalized call in fileStatus, since
this should already be done by the FolderWatcher and plugins
Since the statuses are cached and we can't invalidate the cache,
sending NOP would need to be overwritten by the default OK status
once the client successfully connected. But instead of remembering
which files we NOPed, rather wait until we are ready to sync before
sending the REGISTER_PATH message to the socket API client. It will
also prevent the client from sending unnecessary RETRIEVE_FILE_STATUS
requests.
Also remove AccountState::canSync, since it does the same as
isConnected and syncing is not an account responsibility.
Go through fileStatus like other cases to make sure that all use
cases go through the same code path. This also makes sure to use
lookupProblem which will use lower_bound which is more efficient
for larger sets of sync problems.
This also fixes the issue with lookupProblem that prevented it from
properly matching an empty pathToMatch, caused by the fact that the
problem map contains relative paths not starting with a slash.
Make sure that we push the new status when the status of the SyncEngine
changed. SyncEngine::started comes a bit late, only when the propagation
starts, although it's better in this case since child folders will
only switch to Sync in aboutToPropagate.
Also fix an issue with SyncEngine::findSyncItem when using an empty
fileName; this would match and return the wrong item, even though
not currently happening with the code since fileStatus won't call
it with an empty fileName anymore.
As before, we rely on metadata-update SyncFileItem entries for parent
directories to notify us that a directory contains files to propagate,
and to know when all children were propagated through its itemCompleted
signal.
Those metadata SyncFileItems however have a None direction and we need
to add an explicit directory check to show them as Sync.
This fix also handles new files as well as existing ones, so no need
to keep a separate logic for new files.
propagatedownload.cpp:712:35: error: 'seenLockedFile' is a protected member of 'OCC::OwncloudPropagator'
Signals are protected in Qt4 but public in Qt5, mark the class accessing it
as friend when compiling with Qt4
Do the visual stuff from designer.
The previous code that was meant to change the color to red did not work
and changed it to gray instead.
Also I don't see why there should be a frame.
Issue #4773
When a conflict-rename or a temporary-rename fails, notify the
LockWatcher. It'll regularly check whether the file has become
accessible again. When it has, another sync is triggered.
owncloud/enterprise#1288
If the downloaded file is empty but the PROPFIND previously announced it
should not have been empty, this might mean the file was somehow corrupted
because of a bug on the server and that we should therefore not accept
the file.
Normally we accept a change between the actual size of the file and what we
got during discovery, because the file might have been updated to a new version
in between. But after this patch we won't accept the file if it was replaced
by an empty file.
Will help for issue #4583
Also requested by IL for issue 548
This uses the file watcher to keep track of files that were modified
in order to assign them the blue icon.
This is transient state that's not persisted across restarts.
Before commit 1a51b6718a, the wizard was
making sure the folder had an alias, but this is no longer the case.
So still generate a unique alias.
The alias is no longer used in the UI; it's just used for internal purposes.
For issue #4737
As discussed on issue #4460.
Having the quota queried on a subfolder is wrong in the generic case,
so add a branding option to configure it.
This partially reverts commit ff4cdc3161
In the before-propagate slot, new files that wait to be
pushed to the server are remembered in the _syncProblems
map. That way, the parents show a sync icon properly as
asked for in #4682.
After the item has been transferred properly, the item is
removed from the map again because success is the default.
Added in previous commit from pull request #4663
As discussed, we do not need this option so no need to introduce
a new dependency on the config file in the sync engine
If an app modifies the expiration date (for example the password policy
app) then on more recent versions of the server we will get the share
object back REST style. We should use that info!
Fixes #4409
* Add checksums/supportedTypes and checksums/preferredUploadType
capabilities. The default is that no checksum types are supported.
* Remove the transmissionChecksum config option. Servers must now
use the capabilities to indicate that they are fine with the
client sending checksums.
Note: This intentionally breaks brandings that overrode
Theme::transmissionChecksum. The override must be removed and the
server's capabilities must be adjusted to include the new values.
The reason is that updateFolderView is invoked by the
emitted signal folderSyncStateChange() anyway.
This will reduce the traffic over the SocketAPI nicely,
maybe this was the reason why it was slower than before.
There were two issues:
* With the refactoring of how Folder and SyncEngine relate, the
ignore_hidden_files flag on the CSync context was reset after
each sync run and not updated from the configuration again.
* The folder watcher failed to enumerate hidden folders and thus
didn't watch for changes inside them. (linux only)
SQLITE_DONE is the indicator for no more query results, which is a legal
condition and not an error.
Also, check _getFileRecordQuery for a null pointer, as close() wipes it.
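A small sketch of interpreting a step result along those lines; the wrapper function is illustrative, the return codes are the real SQLite API:
    #include <sqlite3.h>

    // Returns true when a result row is available. "ok" is set to false only
    // for real errors; SQLITE_DONE just means the query has no more results.
    static bool stepHasData(sqlite3_stmt *stmt, bool *ok)
    {
        if (!stmt) {               // e.g. close() already wiped the prepared query
            *ok = false;
            return false;
        }
        int rc = sqlite3_step(stmt);
        *ok = (rc == SQLITE_ROW || rc == SQLITE_DONE);
        return rc == SQLITE_ROW;
    }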
The problem with QSet is that the QDateTime was part of
the hash, but that does not make sense as it should be unique
per widget and not per <date, widget>
Instead make it a QHash so there is only one entry per widget.
The idea is that the next call to any database operation will try to
reopen the database through the checkConnect() method. So even if there
was a disconnect from the db file, this will reestablish the connection.
Imagine this scenario on a read-only share: you move files from
one location to a new directory in the read-only share.
Creating the read-only directory fails with a permission error.
But we should also restore the files that have been moved.
IL issue 542
Bring back the hardcoded status logic for excluded files.
Since the activity log doesn't even mention those files on purpose,
we can't rely on the SyncEngine to notify us about the status of those files.
Looking up a/aa while an error is present in a/aab/aaba would return
a warning status since a/aa is a substring of a/aab.
Fix the issue by checking if the following character is a slash.
This prevents having to define a Problem structure with dubious
operator overloads to accomplish the same.
Also use std::map::lower_bound to quickly iterate over the
list of problems.
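An illustrative version of that lookup with std::map::lower_bound; the map's value type and the function itself are assumptions, the slash check mirrors the fix described above:
    #include <map>
    #include <string>

    static bool hasProblemAtOrBelow(const std::map<std::string, int> &problems,
                                    const std::string &pathToMatch)
    {
        for (auto it = problems.lower_bound(pathToMatch); it != problems.end(); ++it) {
            const std::string &problemPath = it->first;
            if (problemPath.compare(0, pathToMatch.size(), pathToMatch) != 0)
                break;    // left the range of keys starting with pathToMatch
            // Exact match, the whole tree (empty path), or a child separated by '/';
            // "a/aab" must not count as a problem below "a/aa".
            if (pathToMatch.empty()
                    || problemPath.size() == pathToMatch.size()
                    || problemPath[pathToMatch.size()] == '/')
                return true;
        }
        return false;
    }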
* Remove duplicate remote path
* Use thin progress bar
* Move bandwidth and file info to tooltip
* Shorten overall progress message
This also fixes #4562 by making the layout not dependent on the
width of the displayed text.
This also removes all smartness from the SocketApi about the status
of a file and solely uses info from the current and last sync.
This simplifies the logic a lot and prevents any discrepancy between
the status shown in the activity log and the one displayed on the
overlay icon of a file.
The main benefit of the additional simplicity is that we are able
to push all new status of a file reliably (including warnings for
parent folders) to properly update the icon on overlay implementations
that don't allow us invalidating the status cache, like on OS X.
Both errors and warning from the last sync are now kept in a set,
which is used to also affect parent folders of an error.
To make sure that errors don't become warning icons on a second
sync, SyncFileItem::_hasBlacklistEntry is also interpreted as an error.
This also renames StatusIgnore to StatusWarning to match this semantic.
SyncEngine::aboutToPropagate is used in favor of SyncEngine::syncItemDiscovered
since the latter is emitted before file permission warnings are set on the
SyncFileItem. SyncEngine::finished is not used since we have all the
needed information in SyncEngine::itemCompleted.
SyncFileStatus' purpose is to track overlay icon status.
Instead of putting comments and default: clauses in switch
on both sides about unused enums, use different enums.
This also removes STATUS_NEW, which is the equivalent of
STATUS_SYNC in all shell extension implementations, and
removes STATUS_UPDATED and STATUS_STAT_ERROR, which have
the same semantics as STATUS_UPTODATE and STATUS__ERROR.
This currently is no-op code since the socket API isn't notified
that the tainted folder list changed, and the result is the same
since a sync will be triggered within the next 5 seconds and the
modified folder will be shown as SYNC at that point anyway.
Removing the dependency to the file watcher allows moving the
status estimation logic to libsync.
This will help moving the SyncEngine construction in the constructor
and allow moving functionalities from Folder to SyncEngine or its
delegated objects.
The agreed policy is that if a notification has no action, the
client can and should display a close button. This patch does that.
In addition to that, the client needs a blacklist of closed notifications,
otherwise they would re-appear next time the server notifications are
fetched again.
Also, changed the cleanup of no-longer-used widgets to be more robust.
If a notification is no longer in the list that comes from the
server, the notification is removed.
That is mainly for the notifications that are created by the
announcement application
Since the quota is a per-folder value, this will make the displayed data
more useful when a single sync folder is configured.
Of course each subfolder could have a different quota again.
The date we receive from the server is an ISO8601 datetime that
includes the offset from UTC. Qt does correctly parse this
information and creates the appropriate QDateTime object.
Calling setTimeSpec(UTC) will force the timezone offset to 0 and
thereby change the referenced point in time to an incorrect one.
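For illustration, a correct way to parse such a date with Qt (the helper is hypothetical; Qt::ISODate already honors the UTC offset contained in the string):
    #include <QDateTime>
    #include <QString>

    static QDateTime parseServerDate(const QString &isoDate)
    {
        // e.g. "2016-02-01T12:00:00+01:00" keeps its +01:00 offset;
        // no setTimeSpec(Qt::UTC) afterwards, which would shift the instant.
        return QDateTime::fromString(isoDate, Qt::ISODate);
    }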
Soldiering on with a broken or incomplete response could lead to
incorrect sync behavior.
Since discovery uses LsCol jobs which already handle errors
correctly, this should not have a significant impact.
The size on the server might be different from the size on the client
with certain backends, so it should be ignored.
(cherry picked from commit 9222db6df9b19a21e1bea5a238d745d96a6385e3)
In SQLite bindings are not cleared by sqlite3_reset() calls, so
skipping a sqlite3_bind call to create a NULL value doesn't work,
instead the previous value will be written.
To fix this, I clear all bindings in SqlQuery::reset and make sure
to explicitly bind NULL when desired in SqlQuery::bind.
To make sure there's no confusion about SqlQuery::reset and
sqlite3_reset, I rename our method to reset_and_clear_bindings().
(cherry picked from commit 7bd4f95b8c)
In SQLite bindings are not cleared by sqlite3_reset() calls, so
skipping a sqlite3_bind call to create a NULL value doesn't work,
instead the previous value will be written.
To fix this, I clear all bindings in SqlQuery::reset and make sure
to explicitly bind NULL when desired in SqlQuery::bind.
To make sure there's no confusion about SqlQuery::reset and
sqlite3_reset, I rename our method to reset_and_clear_bindings().
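A sketch of what the renamed method boils down to; the class layout and the _stmt member are assumptions, the two sqlite3 calls are the real API:
    #include <sqlite3.h>

    void SqlQuery::reset_and_clear_bindings()
    {
        if (_stmt) {
            sqlite3_reset(_stmt);            // rewinds the statement...
            sqlite3_clear_bindings(_stmt);   // ...and drops previously bound values,
                                             // which sqlite3_reset() alone keeps
        }
    }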
The isValid check should be used everywhere the capabilities
are used, as the loading of the capabilities happens
in parallel with the startup, so they are not guaranteed to
always be available.
As interaction is required, the notifications are displayed in a
separate widget above the server activity list.
Note that design and also where we display the notifications can
still be discussed and changed.
If the ownCloud server does not have the activity app enabled,
it returns 999 as status code. If all the configured accounts
do that, this code hides the entire tab with the server
activities.
This is supposed to fix #4533
Helps with small file sync #331
When I benchmarked this, it went up to a parallelism of 6 and
was about 1/3 faster than the previous fixed parallelism of 3.
Doing more than 6 is dangerous because QNAM limits to 6 TCP
connections and also the server might become a bottleneck.
Should also help for #4081
Previously one could accidentally call Folder::setSyncPaused() and miss
some expected side effects. Before, the correct call was to FolderMan::
slotSetFolderPaused(). Now the setter on Folder has the expected effect.
This will be useful if we ever want to store account-level gui state.
I built this originally because I thought a paused account would be
this kind of state.
The creation doesn't need to be separated from the SyncEngine anymore.
This allows the SyncEngine to be created in fewer steps if we want to
use it in tests.
This moves most of the direct csync code from Folder into the SyncEngine.
The exclude file logic for the context has been wrapped using the
existing ExcludedFiles class as well.
Given that we control all call sites, the only way that this can fail is during
OOM. Also remove the code in csync itself to make sure that it's obvious that
any new error case wouldn't be handled by call sites.
As discussed with Klaas, this seems to be a better compromise.
10 MB * 3 parallel jobs = 30 MB in memory, and to retry in case of
disconnection, which is still reasonable. And it might make the upload
almost twice as fast on fast networks where the number of chunks is the
bottleneck (because of more server processing).
Relates to issue #4354
If the PROPFIND returns an invalid code (like 200), then we would
not receive the error signal and we would never sync again.
Found while investigating https://github.com/owncloud/enterprise/issues/1068
The ".sys.admin#recall#" is the recall file and should not be ignored
even if hidden.
The remote discovery does not need to detect hidden files because that
is already detected by csync in all cases, so this avoids code duplication.
Users have complained that they don't see the notification when it is
shown and are not aware that their files aren't syncing.
Remove the non-interactive credentials fetch logic and make sure
that the shibboleth popup will flash in the taskbar instead.
This will still not allow the popup to show in front in all cases,
but this is a compromise that we have to choose.
This reverts commit dcb687929f.
Issue https://github.com/owncloud/enterprise/issues/990
This is the fix for issue #4370
Step to reproduce the bug:
1) have lots of files in directory "dir1"
2) do mkdir dir2 && mv dir1/* dir2
3) DURING the sync (which takes time because of the many moves) do mkdir dir3 && mv dir2/* dir3/
4) observe that files are PUT in the next sync
The problem is that SyncJournalFileRecord::SyncJournalFileRecord will fail to
get the inode after the first move because the files are already moved on the
filesystem. Normally it should use the inode from the discovery phase in that
case but that is not working because it comes from the remote node in case of
moves, so the code in SyncEngine::treewalkFile would not set the inode.
Test in https://github.com/owncloud/smashbox/pull/143
Added a new "chunkSize" entry in the General group of the owncloud.cfg
which can be set to the size, in bytes, of the chunks.
This allows users with huge bandwidth to select a more optimal chunk size.
Issue #4354
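A sketch of reading the new entry; the group and key names come from the commit text, while the helper and the fallback value are illustrative assumptions:
    #include <QSettings>
    #include <QString>
    #include <QtGlobal>

    static qint64 configuredChunkSize(QSettings &cfg /* opened on owncloud.cfg */)
    {
        cfg.beginGroup(QStringLiteral("General"));
        const qint64 chunkSize =
            cfg.value(QStringLiteral("chunkSize"), 10 * 1024 * 1024).toLongLong();
        cfg.endGroup();
        return chunkSize;      // size of each upload chunk, in bytes
    }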
QFontMetrics::boundingRect doesn't return the right size for this
font size for some reason, while it works well if we remove the
smaller point size adjustment for the progress font.
To avoid having to debug the font system in Qt just increase the
existing +2px adjustment to +5px so that it renders fine.
Servers older than 8.1 cannot cope with invalid characters in filenames,
so we must not send them from the client. We were already checking
for new files, but not for renames or new directories.
https://github.com/owncloud/enterprise/issues/1009
The height adjustment done to place the button in the middle of the
non-error area was only done for rendering. Make sure that we do the
same adjustment when mapping click events as well.
Also replace some wrong occurrences of aliasMargin*2 with margin.
Previously we would fail to start if the directory did not exist.
This was working for relative directories, but it should also work for
absolute ones.
https://github.com/owncloud/enterprise/issues/970
Use QT_DEVICE_PIXEL_RATIO=auto on Qt<=5.5 to enable automatic
scale factor settings on Windows. Also move the existing
Qt::AA_EnableHighDpiScaling logic to use the equivalent
QT_AUTO_SCREEN_SCALE_FACTOR=1 environment variable just to
keep the 5.5 and >=5.6 code at the same place.
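A sketch of that version switch; this has to run before the QApplication is constructed, and the helper name is illustrative:
    #include <QtGlobal>

    static void setupHighDpiEnvironment()
    {
    #if QT_VERSION >= QT_VERSION_CHECK(5, 6, 0)
        qputenv("QT_AUTO_SCREEN_SCALE_FACTOR", "1");  // same effect as Qt::AA_EnableHighDpiScaling
    #else
        qputenv("QT_DEVICE_PIXEL_RATIO", "auto");     // Qt <= 5.5: automatic scale factor on Windows
    #endif
    }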
* Ensure every time a file becomes a directory or the other way around
the item is flagged as INSTRUCTION_TYPE_CHANGE.
* Delete the badly-typed entity if necessary in the propagation jobs.
QPainter::drawText uses the top of the font for the y position of its
rect argument, but uses the baseline when using a point argument.
Also use the margin variable that matches the font used instead of
the aliasMargin and make sure that the margin is only added between
the box and the text once.
The Qt HTTP thread calls authenticationRequired (indirectly) using a
BlockingQueuedConnection. So when we call invalidateToken from a slot
connected to this signal, we end up calling QNAM::clearAccessCache, which
waits on the thread for 5 seconds.
Backtraces:
Qt HTTP thread:
#0 0x00007ffff20c707f in pthread_cond_wait@@GLIBC_2.3.2 ()
#1 0x00007ffff43f0c0b in QWaitConditionPrivate::wait
#2 QWaitCondition::wait
#3 0x00007ffff43ea06b in QSemaphore::acquire
#4 0x00007ffff45dcf6f in QMetaObject::activate
[...]
#9 0x00007ffff45dd607 in QMetaObject::activate
#10 0x00007ffff4edbaf7 in QHttpNetworkReply::authenticationRequired
#11 0x00007ffff4e0b2b4 in QHttpNetworkConnectionPrivate::handleAuthenticateChallenge
#12 0x00007ffff4e10753 in QHttpNetworkConnectionChannel::handleStatus
#13 0x00007ffff4e11cc9 in QHttpNetworkConnectionChannel::allDone
#14 0x00007ffff4e14605 in QHttpProtocolHandler::_q_receiveReply
Main Thread:
#0 0x00007ffff20c7428 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
#1 0x00007ffff43f0b56 in QWaitConditionPrivate::wait_relative (time=5000, this=0x136c580)
#2 QWaitConditionPrivate::wait (time=5000, this=0x136c580)
#3 QWaitCondition::wait (this=this@entry=0x136c788, mutex=mutex@entry=0x136c760, time=time@entry=5000)
#4 0x00007ffff43efa6e in QThread::wait (this=<optimized out>, time=time@entry=5000)
#5 0x00007ffff4e1edd3 in QNetworkAccessManagerPrivate::clearCache
#6 0x00007ffff7b6fb03 in OCC::HttpCredentials::invalidateToken()
#7 0x000000000057adb4 in OCC::AccountState::slotInvalidCredentials()
#8 0x000000000057ac76 in OCC::AccountState::slotConnectionValidatorResult(OCC::ConnectionValidator::Status, QStringList const&) ()
#9 0x00000000005ab45c in OCC::AccountState::qt_static_metacall(QObject*, QMetaObject::Call, int, void**)
#10 0x00007ffff45dcd30 in QMetaObject::activate
#11 0x00007ffff7b78671 in OCC::ConnectionValidator::connectionResult(OCC::ConnectionValidator::Status, QStringList) ()
#12 0x00007ffff7ae2514 in OCC::ConnectionValidator::reportResult(OCC::ConnectionValidator::Status) ()
#13 0x00007ffff7ae39b7 in OCC::ConnectionValidator::slotAuthFailed(QNetworkReply*) ()
#14 0x00007ffff7b784a9 in OCC::ConnectionValidator::qt_static_metacall(QObject*, QMetaObject::Call, int, void**) ()
#15 0x00007ffff45dcd30 in QMetaObject::activate
#16 0x00007ffff7b766dc in OCC::AbstractNetworkJob::networkError(QNetworkReply*)
#17 0x00007ffff7af9f6e in OCC::AbstractNetworkJob::slotFinished()
#18 0x00007ffff7b7654d in OCC::AbstractNetworkJob::qt_static_metacall(QObject*, QMetaObject::Call, int, void**) ()
#20 0x00007ffff45dd607 in QMetaObject::activate
#21 0x00007ffff4edd143 in QNetworkReply::finished
#22 0x00007ffff4e3fec7 in QNetworkReplyHttpImplPrivate::finished
#23 0x00007ffff4e41818 in QNetworkReplyHttpImpl::close
#24 0x00007ffff7b7047b in OCC::HttpCredentials::slotAuthentication(QNetworkReply*, QAuthenticator*) ()
#25 0x00007ffff7b79092 in OCC::HttpCredentials::qt_static_metacall(QObject*, QMetaObject::Call, int, void**) ()
#27 0x00007ffff45dd607 in QMetaObject::activate
#28 0x00007ffff4e1d6fb in QNetworkAccessManager::authenticationRequired
#29 0x00007ffff4e1ea07 in QNetworkAccessManagerPrivate::authenticationRequired
#30 0x00007ffff4e3c784 in QNetworkReplyHttpImplPrivate::httpAuthenticationRequired
Another case of Main Thread:
#5 0x00007ffff4e1edd3 in QNetworkAccessManagerPrivate::clearCache
#6 0x00007ffff7b6fb03 in OCC::HttpCredentials::invalidateToken()
#7 0x000000000057b1e4 in OCC::AccountState::slotInvalidCredentials() ()
#8 0x00000000005abb8a in OCC::AccountState::qt_static_metacall(QObject*, QMetaObject::Call, int, void**) ()
#9 0x00007ffff45dcd30 in QMetaObject::activate
#10 0x00007ffff7b76ed5 in OCC::Account::invalidCredentials() ()
#11 0x00007ffff7ad55f5 in OCC::Account::handleInvalidCredentials()
#12 0x00007ffff7afa69a in OCC::AbstractNetworkJob::slotFinished()
#13 0x00007ffff7b7654d in OCC::AbstractNetworkJob::qt_static_metacall(QObject*, QMetaObject::Call, int, void**) ()
#15 0x00007ffff45dd607 in QMetaObject::activate
#16 0x00007ffff4edd143 in QNetworkReply::finished
#17 0x00007ffff4e3fec7 in QNetworkReplyHttpImplPrivate::finished
#18 0x00007ffff4e41818 in QNetworkReplyHttpImpl::close
#19 0x00007ffff7b7047b in OCC::HttpCredentials::slotAuthentication(QNetworkReply*, QAuthenticator*) ()
#20 0x00007ffff7b79092 in OCC::HttpCredentials::qt_static_metacall(QObject*, QMetaObject::Call, int, void**) ()
#22 0x00007ffff45dd607 in QMetaObject::activate
#23 0x00007ffff4e1d6fb in QNetworkAccessManager::authenticationRequired
Now that we have a separate list for files that could not be synced,
we can make sure that it only shows entries for files that are still
not in sync with the server. This allows the user to treat this list
as action items in order to get everything synced, including the
blacklist.
Simply remove the keep-errors logic that was used when the lists were
merged to achieve this result.
This has been fixed in the meantime, but we are still shipping
with Qt 5.4. Also, some Linux distros will still have older Qt
versions.
Addresses issue #4301
The error status of children should only be used for the etag logic.
The SocketApi uses a path matching system to do this and the UI should
report errors only for individual involved files/directories.
This uses the previous code by resetting the progress to hide
the progress bar again, and then returns errors in the FolderErrorMsg
data role of the folder model.
This also removes the unused FolderRemotePath role, removes FolderStatus
in favor of invalidating all roles in dataChanged, and makes sure
that the SyncRunning role is transferred properly from the SyncResult
to show the warning icon during sync.
Since the presence of any path in SyncEngine::_syncedItems
would translate in a SYNC status, platforms that don't refresh
all their status cache after an UPDATE_VIEW message like OS X
or Windows would keep displaying that status even after all
files are successfully synchronized.
- Read SyncFileItem::_status to determine the status to display mid-sync
- Match moved paths also to _renameTarget since this might be the
path to match
- Make sure that PropagateDirectory jobs also set SyncFileItem::_status
properly
If all the files bring us to a past timestamp, it is possibly a backup
restoration on the server. In that case we don't want to just
overwrite newer files with the older ones.
Issue #2325
After c3cf6aef7d the invokeMethod calls
should be adjusted to pass the new method arguments.
The result was currently a passing sync with this error message on
the console:
QMetaObject::invokeMethod: No such method OCC::Folder::slotSyncFinished()
The socket api uses native folder separators. We need to use QDir::cleanPath
for anything else so we only work with '/' everywhere else in the code.
This fixes the sharing dialog on Windows.
Issue #4311
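For illustration, the kind of normalization meant here (the helper name is an assumption):
    #include <QDir>
    #include <QString>

    // The socket API hands us paths with native separators (backslashes on
    // Windows); convert to the '/'-only, cleaned form used everywhere else.
    static QString normalizeSocketApiPath(const QString &nativePath)
    {
        return QDir::cleanPath(QDir::fromNativeSeparators(nativePath));
    }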
Since owncloud 2.1, csync_vio_local_stat was optimized because readdir
would already fetch most of the info. This works for the discovery,
but not later. And we used this function later for symlinks and co.
So this fixes the .lnk handling on Windows.
Issue #4300
This secret key was used to wipe the database. In the past this was
useful because of many bugs, but now it is not useful anymore.
And it causes trouble because it also erases the selective sync list.
Issue #4182
It's just a feature that was not there in 2.0
It means that a removed folder stays on the undecided list if it is removed
from the server, until the user presses apply in the selective sync widget.
Not a very bad bug anyway.
As discussed with Jan.
* Detailed permissions displayed in qtoolboxmenu
* Made share rows slightly smaller
Bug fix:
* Do not show delete permissions for file shares
Before the selective sync status text and apply/cancel buttons
were shown as soon as any folder was expanded. This changes it
to only show when the model is dirty (or a big folder confirmation
is needed).
This is nice because we auto-open the folder list sometimes
and having the apply/cancel buttons visible makes users think a
decision is needed.
* Compute the content checksum (in addition to the optional
transmission checksum) during upload (.eml files only)
* Add hook to compute and compare the checksum in csync_update
* Add content checksum to database, remove transmission checksum
The signal jsonReceived() now not only delivers the raw json string, but
also the status code that came as OCS reply.
Also, fixed a typo in the signals name (recieved => received).
Fix the warning:
QLayout: Attempting to add QLayout "" to OCC::ActivitySettings "", which already has a layout
It was caused by one layout being created with the wrong parent.
A 403 is a reply code sent from the file firewall to indicate that
this directory is forbidden to use for the user.
The patch handles it by setting the state to IGNORED.
This addresses #3490
This patch lets a successful ETag check job mark a timestamp.
The next time a connection check is requested, we check whether the
last ETag check happened within the last 30 seconds; if so, the
connection check can be skipped.
This way we avoid half of the PROPFINDs if all goes well.
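A sketch of that 30-second shortcut; the function and member names are assumptions:
    #include <QElapsedTimer>

    // If the last successful ETag check is recent enough, the extra PROPFIND
    // of a full connection check can be skipped.
    static bool connectionCheckNeeded(const QElapsedTimer &timeOfLastETagCheck)
    {
        const qint64 validity = 30 * 1000;   // 30 seconds, in ms
        return !timeOfLastETagCheck.isValid()
            || timeOfLastETagCheck.elapsed() >= validity;
    }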
- Changed sequence of menu items
- lowercased entries
- removed the "Account" from entries, its in the toolbox button already
- added a little space between toolbox button label and the rectangle.
Note that the activity also has entries for files that are not synced, so
not every activity entry has a local counterpart.
Also, one activity entry can reference multiple files, so only the first
one is shown.
The problem was that Share could be deleted *before*
the OcsShareJob itself finished. Since Share was the
parent of the network job, its object would be deleted
too early.
In general, it's unnecessary to assign parents to the OcsJobs
because they delete themselves when finished.
If a folder was paused while being the next item in the scheduling
queue, the whole scheduling could get stuck.
This also fixes the progress information of paused folders possibly
getting stuck.
Now we have 1 simple dialog that includes 2 widgets.
* ShareLinkWidget (for link shares)
* ShareUserGroupWidget (for user/group shares)
The ShareUserGroupWidget is only included if the server version is >=
8.2.0
For <8.2.0 the old behavior is preserved
We changed the discovery code not to ignore files whose filenames contain
characters that are invalid on Windows. (Because newer versions of the server
support them.)
Servers older than 8.1 will just say "Bad Request" as an error, which is a
regression against the previous client version. So keep the nice error even with
older servers.
Relates to #3736
We did not flush or close the file after having modified it from the UI.
So when the socket api was reloading it, it wouldn't be able to load
the newly added rules
When the toolbar is full because there is not enough room, make the extension
of the toolbar work, by using QWidgetAction::createWidget instead of
QToolBar::insertWidget
There should no longer be a problem when the window is too narrow.
Relates #3832
Regressed since d610693af1. The problem
is that the _size vector contains the pathToRemove and that it was removed
before.
Reorganize the code a bit so there is only one loop that still has all the
information.
- Disable the whole group box
- Add a tooltip explaining why it is disabled
- Make sure it is disabled in the settings in case of upgrade
- Do a runtime check in case the running Qt is greater
To use the same logic as the other clients and unify ownBrander
implementations, the switch is now called multiAccount() rather
than singleAccount() with a reverse logic.
The desktop client stays with the default of having multi-account
enabled.
Note that existing brandings need to rename the switch.
https://github.com/owncloud/ownbrander/issues/443
If there are any "none" files, we do not show the dialog saying that all
files have been removed. If a directory contains ignored files, we still
want to show this message box even if the directory will not be deleted.
There is now a generic OCSJob which must be inherited by other jobs. This is in
preparation for the other OCS jobs that will come (for the Sharee API endpoint
for example).
More logic is moved from the sharedialog to the OcsShareJob. So in the GUI code
we now only say what we want (a new share, set the password etc). And the code
in libsync will make that happen. Error handling is for now still done in the
GUI part.
For now the ocsjob and ocssharejob live in gui, but probably we should
create a libshare or libocs at some point.
Progress estimation is usually based on transfer speed. That makes no
sense when we're doing operations like deletes, that need very little
data transfer but nevertheless take a long time.
This hack attempts to detect this case better and switches to a
different estimate.
We should rewrite this to maintain and update estimates for the
transfer speed, per-file overhead and chunk-assembly overhead each
time an item finishes. Then we could provide more consistent progress
estimates without ad-hoc fixes like this one.
Also, there's an issue where resuming a partial download will lead
to exaggerated transfer speed estimates.