Partial results of long-running operations

Here is some news on the reproducibility project. After going for rather low-hanging fruit in January, in February I started work on a bigger task: supporting the visualization of partial results of long-running operations. This work is available in the 5541-partial-change-support branch of my fork.

I made a quick video to demonstrate the intended use case:

[video demonstrating the feature]

For now this is really a proof of concept of the feature, a validation that it can be done relatively naturally with the new architecture. I am hoping it will also prompt us to think about the design questions around this feature, as mentioned in the video. The ones I am thinking about for now are:

  • how to convey the right expectations around sequencing of updates to the project data (the fact that editing cells before the reconciliation operation reaches their row will not influence the reconciliation value)
  • how to change the way processes are displayed in the UI, making space for multiple processes and more options to pause, resume, cancel and rerun them
  • (not mentioned in the video) how to update the data shown in the grid in a non-disruptive way (for now, the grid is only updated when changing pagination settings, but it’s really just a hack)
  • how to update the representation of the history, to associate running processes and history entries.

Another exciting consequence of this work (also not mentioned in the video) is in the Wikibase extension. The internal architectural changes required for this feature forced me to rework the Wikibase upload operation quite a bit. Until now, this operation first gathered all the edits generated by the entire project, then optimized them so that edits to a given entity were grouped together (resulting in at most two edits per entity, generally a single one), and only then carried out those edits. I originally made this choice to minimize the number of edits (since Wikibase edits are costly), but it conflicts with the new architecture because it requires reading the entire dataset before making any edit.

I was therefore more or less forced to change the Wikibase upload operation so that it does its edit grouping at a smaller scale, using a configurable batch size. This means we will be making somewhat more edits (since edits generated by rows that are far apart will no longer be grouped together), but it has the very big benefit of making it much easier to keep track of which row generated which edit.

This opens the door to in-grid error reporting for the Wikibase upload operation. We could change the operation so that it stores any editing errors it encounters in a dedicated column, making it much easier for users to figure out which parts of their dataset were actually uploaded and which errors prevented the rest from being uploaded. I think the lack of error reporting in this operation is really bad and a major obstacle to its broader adoption, especially for Wikimedia Commons, where upload errors are very common. I think the ability to pause and resume processes should also be useful in that context.
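To make the trade-off a bit more concrete, here is a rough sketch of what grouping edits per entity within a fixed-size batch could look like. This is purely illustrative: the names (RowEdit, GroupedEdit, BatchedEditGrouping) are made up for the example and are not OpenRefine's or the Wikibase extension's actual classes, and the real operation of course involves scheduling, API calls and error handling that are omitted here.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BatchedEditGrouping {

    /** One edit generated by a single row, targeting a single Wikibase entity. Hypothetical type. */
    record RowEdit(long rowId, String entityId, String payload) {}

    /** Edits to the same entity merged into one, remembering which rows produced them. */
    record GroupedEdit(String entityId, List<Long> sourceRows, List<String> payloads) {}

    /**
     * Groups edits per entity, but only within windows of {@code batchSize} consecutive
     * edits, so the first Wikibase edits can be made before the whole dataset is read.
     */
    static List<GroupedEdit> group(List<RowEdit> edits, int batchSize) {
        List<GroupedEdit> result = new ArrayList<>();
        for (int start = 0; start < edits.size(); start += batchSize) {
            int end = Math.min(start + batchSize, edits.size());
            // Within one batch, merge edits targeting the same entity,
            // preserving the order in which entities were first seen.
            Map<String, GroupedEdit> byEntity = new LinkedHashMap<>();
            for (RowEdit edit : edits.subList(start, end)) {
                GroupedEdit grouped = byEntity.computeIfAbsent(edit.entityId(),
                        id -> new GroupedEdit(id, new ArrayList<>(), new ArrayList<>()));
                grouped.sourceRows().add(edit.rowId());
                grouped.payloads().add(edit.payload());
            }
            result.addAll(byEntity.values());
        }
        return result;
    }

    public static void main(String[] args) {
        List<RowEdit> edits = List.of(
                new RowEdit(0, "Q1", "add statement A"),
                new RowEdit(1, "Q2", "add statement B"),
                new RowEdit(2, "Q1", "add statement C"),   // merged with row 0 (same batch)
                new RowEdit(3, "Q1", "add statement D"));  // separate edit (next batch)
        for (GroupedEdit grouped : group(edits, 3)) {
            System.out.println(grouped.entityId() + " <- rows " + grouped.sourceRows());
        }
    }
}
```

Keeping the source rows attached to each grouped edit is what would make it possible to write any editing errors back into a dedicated column of the grid, as suggested above.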

With this prototype out the door, I am thinking about starting to look for a designer to help with the questions raised above (and other design questions in this project). We have some budget planned for such a role in the grant for the reproducibility project. But obviously the community should have its say in where this is going, so I would be curious to hear what people think at this stage already. Do you recognize the need for this? What workflows would you use it for? I am all ears :slight_smile:


Thank you for taking the time to write your thoughts down and produce the video.

Actually, I would prefer to have the processes running so fast that we would not have to deal with these issues at all… but reality tells a different story :grin:

For what it is worth, I think your thoughts (and implementation) are going in the right direction.
