async mount option, and data=journal (re: persist-save and live-remaster)


  • This topic has 1 reply, 2 voices, and was last updated Apr 18-8:13 am by ModdIt.
Viewing 2 posts - 1 through 2 (of 2 total)
  • Author
  • #57821

    In a liveboot environment, I have observed that unless/until a “sync” command is issued,
    writes to any of the ext4 devices are, indeed, deferred.
    The delay interval is 15-ish seconds, which is in line with the kernel’s default periodic-writeback interval.
    (Note: the “sync” ext2/3/4 mount option forces synchronous writes; it is the implicit “async” default that permits this deferral.)

    The persist-save script (and live-remaster) takes this likely write-flush delay into consideration:
    subsequent to the rsync operation, it explicitly issues a sync command prior to unmounting the device.

    Recently, I checked… and did not find ANY script which passes “async”
    (nor any other explicit mount option) when calling the “mount_if_needed()” routine.

    I am wondering:

    During these scripted operations, would it be beneficial to explicitly specify “async” when calling mount_if_needed()?


    As a separate consideration:

    Wouldn’t it be beneficial (in terms of data safety) to explicitly specify “data=journal”
    when mounting the device prior to performing the live-remaster and live-persist operations?

    The implicit default for this mount option is data=ordered.
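    (For what it’s worth: one way to see which mode a mounted ext4 filesystem is actually using is to read /proc/mounts; on recent kernels the data= option is only listed when it differs from the default, so data=journal shows up explicitly while the default data=ordered is usually omitted.)

```shell
# list mountpoint and mount options for every mounted ext4
# filesystem; a non-default mode appears as e.g. data=journal
awk '$3 == "ext4" { print $2, $4 }' /proc/mounts
```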


    After reading everything I could find on this subject
    (including multiple 100pp+ academic PDF documents),
    I’ve chosen to modify my copy of the persist-save script so that it now specifies data=journal when calling mount_if_needed().
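    An alternative approach worth noting (my own aside, not what persist-save does): instead of passing the option at mount time, tune2fs can record journal_data as a default mount option in the superblock, so every subsequent plain mount gets data=journal behavior. Demonstrated here on a throwaway image file, no root needed:

```shell
# bake data=journal into an ext4 superblock via tune2fs;
# demo.img is a scratch image, not the real rootfs file
truncate -s 64M demo.img
mkfs.ext4 -q -F demo.img
tune2fs -o journal_data demo.img
dumpe2fs -h demo.img 2>/dev/null | grep 'Default mount options'
rm -f demo.img
```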

    So far, contrary to my expectation, which was based on almost all of the citations I had reviewed…

    …with data=journal specified, my careful testing indicates that
    a persist-save operation
    consistently completes a few seconds FASTER than one run with data=ordered.

                  journal
                  All data is committed into the journal
                  prior to being written into the main filesystem.

                  ordered
                  This is the default mode.  All data is forced directly
                  out to the main file system prior to its metadata
                  being committed to the journal.

                  writeback
                  Data ordering is not preserved – data may be written into the
                  main filesystem after its metadata has been committed to the journal.
                  This is rumoured to be the highest-throughput option.  It guarantees
                  internal filesystem integrity, however it can allow old data
                  to appear in files after a crash and journal recovery.
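    For anyone reproducing such a timing comparison: the measurement has to include the flush, otherwise the deferred (async) writeback flatters the numbers. A minimal sketch (size and path are placeholders):

```shell
# time a write including the flush to stable storage;
# conv=fsync makes dd fsync the output file before exiting
time dd if=/dev/zero of=/tmp/flushtest bs=1M count=64 conv=fsync status=none
rm -f /tmp/flushtest
```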

    Although I was surprised and pleased to find no performance penalty, speed-wise, I’m more focused on the data-safety aspect.

    Why hasn’t use of data=journal become the go-to, the de facto standard, for writing to the persistence files?

    ^— During my reading, I encountered quite a few “ewwwww, gonna wear out my eMMC/SSD/LiveUSB” remarks, but all of these seemed to come from uninformed folks who were simply parroting “technical folklore”.

    Can we pin down the details and arrive at a well-considered BestPractice?

    Personally, my attention is toward BestPractice regarding “semi-automatic, dynamic, root persistence”. I do realize that one BestPractice shoe may not fit all scenarios.

    Toward finding “Which End is Up”, here are some statements:

    The ‘rootfs’ is a sparse file, containing an ext4 filesystem.

    The antiX ‘persist-makefs’ script specifies a 32MB journal size.
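    That sizing can be reproduced and inspected on a scratch image without root; -J size= is the mke2fs knob for it (the exact invocation inside persist-makefs may differ):

```shell
# create a scratch image with a 32 MB journal and confirm
# the has_journal feature; rootfs.img is a throwaway file
truncate -s 256M rootfs.img
mkfs.ext4 -q -F -J size=32 rootfs.img
dumpe2fs -h rootfs.img 2>/dev/null | grep -i journal
rm -f rootfs.img
```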

    Irrespective of any (or no) wear-leveling performed by the persistent drive’s onboard controller,
    throughout its lifetime the ‘rootfs’ file is never moved (re-pathed)
    and, within the filesystem contained in the rootfs file,
    the blocks designated as the “journal(ing) blocks” are never relocated.
    ISTM this is a significant “(here’s why) you should remaster frequently” detail.

    With the data=journal option, every bit of saved persistence data is written across/through this bottleneck.

    Here, my “statements” become questions, wonderments… starting with:

    Under the status quo (implicit) data=ordered scenario,
    how much — or how little — is 32MB “worth”, in terms of providing a data safety buffer?
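    As a very rough model (my own back-of-envelope, not a measured figure): under data=journal every byte passes through the journal, so sustained writes faster than journal-size divided by the commit interval keep the journal full and force continuous checkpointing:

```shell
# back-of-envelope throughput ceiling imposed by the journal;
# 5 s is the ext4 default commit interval (the commit= option)
JOURNAL_MB=32
COMMIT_S=5
echo "roughly $((JOURNAL_MB / COMMIT_S)) MB/s sustained before the journal saturates"
```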


    Thanks skidoo,
    interesting reading.
    In regard to SSDs, USB sticks, and SD cards: as the controller is not transparent to the connected device, where
    things actually get written or moved to is controlled by the manufacturer’s firmware. The same goes for wear levelling.
    So trying to physically write data to a new location from the OS is futile.

    On storage device life, some (even expensive) devices are notoriously unreliable, though serious reports
    are hard to find. As you note, too many myths and legends.

    Whether we should remaster frequently is a good question. I do it after customizing, after every larger update, and/or after
    installing new software. So quite often.
