All the code will be public anyway, so it can just be compiled in; there is
no need to go through the plugin loading dance. Replaced the loading with
factory functions and kept mostly the same structure otherwise.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
If we use those encrypted propagation code paths, we already know from
the discovery phase (and thus the journal db) that the folders are
encrypted, so there is no need to check again.
This will remove another expensive round trip with the server.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
This step isn't necessary anymore; no one looks at ClientSideEncryption
for that information anyway.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
Thanks to the new discovery algorithm, we get the freshest E2EE
information straight from the database, so reuse it instead of going
through an in-memory copy.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
It had a different path convention from all the other jobs, most likely
for legacy reasons due to its tight coupling to the settings dialog.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
No need to look for a token from the outside; we can work properly by
keeping all the state encapsulated in the job.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
Thanks to the new discovery algorithm, we get both the mangled and the
original file names in the item, so there is no need to go through the
database needlessly.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
Now we pull the encrypted metadata during discovery, which is a better
approach than before. This should remove the need for some of the deep
PROPFINDs we have been using so far, and it already simplifies the code
a bit for the download scenario.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
This code only avoided breaking most cookie handling by accident:
since the raw cookies were not split properly, we added cookies with values like
"key: val; key2 = val2; key3 = val3"
Once the splitting was corrected, the code overwrote the newer values in the
jar with the old ones from a request.
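For illustration, a minimal sketch of what correct splitting looks like with Qt's cookie API; the function name is hypothetical and this is not the removed client code, just the parsing behaviour the old code lacked.

    #include <QNetworkCookie>
    #include <QNetworkCookieJar>
    #include <QUrl>

    // Hypothetical helper: split a raw Set-Cookie header into individual
    // cookies before storing them, instead of treating the whole header as
    // one cookie with a value like "key: val; key2 = val2; key3 = val3".
    void storeCookiesSketch(QNetworkCookieJar &jar, const QByteArray &rawHeader, const QUrl &url)
    {
        const QList<QNetworkCookie> cookies = QNetworkCookie::parseCookies(rawHeader);
        jar.setCookiesFromUrl(cookies, url);
    }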
This was done because the propagator jobs were running in a thread a long
time ago, but this is no longer the case.
(Also, QAtomicInt::load is now marked as deprecated.)
The original code from csync stopped at any error.
But we have been whitelisting some HTTP error codes one by one
so that the affected directory is ignored instead of the sync being aborted.
However, as there are more and more requests to continue the sync in case
of errors, just ignore most HTTP errors.
Issue #7586
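A hypothetical sketch of the resulting rule; the specific status codes that still abort are illustrative assumptions, not the actual list used by the client.

    // Illustrative only: most HTTP errors on a directory now mean "skip this
    // directory and keep syncing" rather than "abort the whole sync".
    static bool abortsWholeSync(int httpStatus)
    {
        switch (httpStatus) {
        case 401: // assumed example: credentials are wrong, nothing else will work
        case 507: // assumed example: server out of space
            return true;
        default:
            return false; // ignore the directory and continue
        }
    }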
On Windows we do a test to know whether we should change the case of the files,
but that conflicts with the test that checks whether the file is still there
when the filename is actually the same. This can happen with virtual files,
as they have two representations (one with and one without the suffix).
When an item is downloaded because it is restored, it shall be shown in the
sync protocol.
(It is also going to be shown in the 'not synchronized' list for a short while,
but that's fine.)
Previously the source was deleted (or a deletion was attempted) even if
the new location was not acceptable for upload. This could make data
unavailable on the server.
For #7410
By introducing a PropagateRootDirectory job that explicitly
separates the directory deletion jobs from all the other jobs.
Note that this means that if there are errors in the subJobs, the
dirDeletionJobs won't get executed.
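Roughly the shape of that separation, as a simplified sketch; the type and member names below are illustrative, not the real classes.

    #include <functional>
    #include <vector>

    // Stand-in for a group of propagator jobs; each job reports success.
    struct JobGroup {
        std::vector<std::function<bool()>> jobs;
        bool run()
        {
            bool ok = true;
            for (auto &job : jobs)
                ok = job() && ok;
            return ok;
        }
    };

    // Sketch of the PropagateRootDirectory idea: directory deletions are kept
    // apart and only run once every other sub-job finished without errors.
    struct RootDirectorySketch {
        JobGroup subJobs;          // uploads, downloads, mkdirs, renames, ...
        JobGroup dirDeletionJobs;  // directory removals, deliberately last
        void run()
        {
            if (!subJobs.run())
                return; // errors in subJobs: dirDeletionJobs are not executed
            dirDeletionJobs.run();
        }
    };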
Previously the job would only become "active" when the downloads
started. That meant that arbitrarily many hash computations could be
queued at the same time.
Since the file was opened during future creation, this could lead to
a "too many open files" problem if there were lots of new-new conflicts.
To address this:
- Make PropagateDownload become active when computing a hash
asynchronously.
- Make the computation future open the file only once it gets run. This
will make it less likely for this problem to occur even if thousands
of these futures are queued.
For #7372
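A minimal sketch of the second point, assuming Qt Concurrent is used for the asynchronous hash (the function name and the choice of SHA-1 are assumptions): the file is opened inside the lambda, so nothing is held open while the work item waits in the thread pool.

    #include <QtConcurrent>
    #include <QCryptographicHash>
    #include <QFile>

    QFuture<QByteArray> computeHashLater(const QString &path)
    {
        return QtConcurrent::run([path]() -> QByteArray {
            // The file is opened only when the future actually runs, not when
            // it is created, so queued futures don't consume file descriptors.
            QFile file(path);
            if (!file.open(QIODevice::ReadOnly))
                return QByteArray(); // caller treats an empty result as an error
            QCryptographicHash hash(QCryptographicHash::Sha1);
            hash.addData(&file);
            return hash.result();
        });
    }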
Previously, fatal error texts were duplicated: they entered the SyncResult
once via the SyncFileItem and once via syncError().
syncError() is intended for folder-wide sync issues that are not pinned
to particular files, so that duplicated path is removed.
For #5088
Previously the pin states of deleted files stayed in the 'flags' table
and could be inadvertently reused when a new file with the same
name appeared. Now they are deleted.
To make this work right, the meaning of the 'path' column in the 'flags'
table was changed: Previously it never had the .owncloud file suffix.
Now it's the same as in metadata.path.
This takes the safe parts from #7274 for inclusion in 2.6. The more
elaborate database schema changes (why use 'path' to join the two
tables in the first place?) shall go into master.
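As a hedged sketch of the cleanup, assuming a plain Qt SQL connection to the sync journal (the client actually uses its own SQL wrapper, and the real statements may differ): because 'flags.path' now uses the same form as 'metadata.path', the same path value can drive both deletes.

    #include <QSqlDatabase>
    #include <QSqlQuery>
    #include <QString>

    // Illustrative only: remove the file record and its pin-state row together.
    void deleteRecordSketch(QSqlDatabase db, const QString &path)
    {
        QSqlQuery metadata(db);
        metadata.prepare("DELETE FROM metadata WHERE path = ?");
        metadata.addBindValue(path);
        metadata.exec();

        // Same path form as metadata.path (including any virtual-file suffix),
        // so stale pin states no longer survive a file's deletion.
        QSqlQuery flags(db);
        flags.prepare("DELETE FROM flags WHERE path = ?");
        flags.addBindValue(path);
        flags.exec();
    }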
Previously, RequestEtagJob returned the etag verbatim (including the extra
quotes) while the db stored the parsed form. That caused the etag
comparison during discovery move detection to always fail. The test
didn't catch it because the etags there didn't have quotes.
Now:
- RequestEtagJob will parse the etag, leading to a consistent format
- Tests have etags with quotes, detecting the problem
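For illustration, a sketch of the kind of normalization this implies (an assumed shape, not the exact helper in the client): strip a weak-validator prefix and the surrounding quotes so the value matches what the db stores.

    #include <QByteArray>

    QByteArray parseEtagSketch(QByteArray etag)
    {
        if (etag.startsWith("W/"))               // weak validator prefix (assumed handling)
            etag = etag.mid(2);
        if (etag.size() >= 2 && etag.startsWith('"') && etag.endsWith('"'))
            etag = etag.mid(1, etag.size() - 2);  // drop the surrounding quotes
        return etag;
    }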
- Close the UploadDevice to close the QFile after the PUT job is done.
This allows winvfs to get an oplock on the file later.
- Don't rely on QFile::fileName() to be valid after openAndSeekFileSharedRead() was called. The way it is opened on Windows makes it have an empty filename.
If one adds a new file to an online-only folder, the previous behavior
was to upload the file in one sync and dehydrate it in the next. Now
these new files get the Unspecified pin state, making them retain
their data.
Read the upload file in smaller pieces instead of all at once, to reduce
peak memory use.
Changing UploadDevice in this way requires keeping the file open for the
duration of the upload. It also means changes to open(), seek() and close()
to ensure that uses of the device work right when a request needs to
be resent.
Since Qt does not yet transparently resend HTTP2 requests in some cases,
we do it manually.
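A rough sketch of that approach, assuming a QIODevice subclass over a window of the file (names and details are illustrative, not the real UploadDevice): the file stays open for the whole upload, readData() pulls bytes on demand, and seek() is relative to the window start so a manually resent request can rewind to the beginning.

    #include <QFile>
    #include <QIODevice>

    class ChunkDeviceSketch : public QIODevice
    {
    public:
        ChunkDeviceSketch(const QString &fileName, qint64 start, qint64 size)
            : _file(fileName), _start(start), _size(size) {}

        bool open(OpenMode mode) override
        {
            // Keep the file open for the duration of the upload.
            if (!_file.open(QIODevice::ReadOnly) || !_file.seek(_start))
                return false;
            return QIODevice::open(mode);
        }

        qint64 size() const override { return _size; }

        bool seek(qint64 pos) override
        {
            // Called when a request has to be resent from the beginning.
            if (!_file.seek(_start + pos))
                return false;
            return QIODevice::seek(pos);
        }

    protected:
        qint64 readData(char *data, qint64 maxlen) override
        {
            const qint64 remaining = _size - pos();
            if (remaining <= 0)
                return 0; // window exhausted
            return _file.read(data, qMin(maxlen, remaining));
        }

        qint64 writeData(const char *, qint64) override { return -1; }

    private:
        QFile _file;
        qint64 _start = 0;
        qint64 _size = 0;
    };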
The test showed a problem where the initial non-200 reply would close
the target temporary file and the follow-up request couldn't store any
data. Removing that close() call is safe because there is also a
_saveBodyToFile flag that guards writes to the target file.