
Tags: hudmol/archivesspace

20200428

Only render View Published toolbar action if AppConfig[:show_view_published]
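
As a rough sketch of that kind of config gate (a hypothetical
staff-frontend helper, not the actual ArchivesSpace toolbar template;
`view_published_url` is a made-up URL helper):

     # Only render the action when the AppConfig flag is switched on.
     def view_published_toolbar_action(record)
       return unless AppConfig[:show_view_published]

       # view_published_url is a hypothetical helper standing in for however
       # the record's public URL is derived.
       link_to('View Published', view_published_url(record), :class => 'btn btn-sm')
     end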

20200424

Only render View Published toolbar action if AppConfig[:show_view_published]

20200423

Only render View Published toolbar action if AppConfig[:show_view_published]

qa

Only render View Published toolbar action if AppConfig[:show_view_published]

20200417

Hide RDE

20200416

Show a link to the conflicting record when conflicting_record error is raised

20200409

Add log rotation to the unix startup script

Hello 2020!

20200330

Bugfix for position calculation when bulk-updating AOs within a resource

On an `update_from_json`, the `position` property on the incoming JSON
record is a logical position (where a value of N means "I'm Nth in the
list of siblings").

This value was being written directly to the `position` column in the
archival_object table as part of doing the update.  That's incorrect
because the database column should only contain physical positions:
position keys that sort correctly but leave room for insertions.
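
As a rough illustration of the distinction (the gap size below is made
up; it is not necessarily the spacing ArchivesSpace actually uses):

     logical_positions  = [0, 1, 2, 3]               # "I'm Nth in the list"
     physical_positions = logical_positions.map {|n| n * 1000}
     # => [0, 1000, 2000, 3000]

     # Inserting a new sibling between the 2nd and 3rd records only needs a
     # key somewhere between 1000 and 2000 (say 1500), so no existing row has
     # to move and no unique-position constraint can be violated.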

Generally this wouldn't matter because, after the initial update, the
`tree_nodes` code would kick in, calculate the correct physical
position for the record and update the row.  But if another record's
physical position corresponded to the incoming record's logical
position, the update would throw an error like:

     Duplicate entry 'root@/repositories/2/resources/102-500' for key 'uniq_ao_pos'

I was able to replicate this issue by choosing a Resource with 1000+
records underneath it and then running some code like this:

     # Collect the ids of every AO under the resource, then round-trip each
     # one through its JSONModel unchanged -- the resave alone is enough to
     # trigger the position clash.
     ids = ArchivalObject.filter(:root_record_id => 102).select(:id).map {|row| row[:id]}

     ids.each do |id|
       ao = ArchivalObject.get_or_die(id)
       ao_json = ArchivalObject.to_jsonmodel(ao)
       ao.update_from_json(ao_json)
     end

This would blow up after around 500 iterations.

The fix is to set the updated row's physical position to what it
already was, and then use the incoming JSON's logical position to
calculate its changed position and update as required.  That way we
never risk stealing another record's spot.
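
A minimal sketch of that approach (hypothetical names on a Sequel-style
row object, not the actual patch; `set_logical_position` stands in for
the tree_nodes repositioning logic):

     def safe_update(row, incoming_json)
       # Step 1: re-save the row with the physical position it already holds,
       # so we can never land on another sibling's slot.
       row.update(incoming_json.merge(:position => row[:position]))

       # Step 2: only afterwards translate the JSON's logical position into a
       # new physical slot, if one was requested.
       row.set_logical_position(incoming_json[:position]) if incoming_json[:position]
     end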

20200325

Load enums eagerly and recursively too

20200316

Use a more distinguished section id for resource notes