
Releases: kaitz/fxcm

fxcm_v26

12 Sep 11:11


  • Known dictionary words are now compared by their codeword; previously, the text strings themselves were compared.
  • Adjusted global StateMap prediction.
  • ContextMap (HT 128) reduced predictions from 6/4 to 4/3 per context. It now uses a single internal StateMap, and all context states are updated with it.
  • ContextMap (HT 32) reduced predictions from 5/4 to 3/2 per context. Removed StateMap-based predictions.
  • Added a StationaryMap for 2 contexts.
  • WordsContext now also uses the codeword for dictionary words.
  • Added SentenceContext for sentence management. Holds a maximum of 64 sentences (WordsContexts). Similarity search is performed by comparing codewords (by default, a 53% match means a match is found).
  • Added the Pronoun word type to the stemmer.
  • Added InDirectStateMap with order-w mixing of primary predictions (similar to paq9a/zpaq).
  • Partial sentence contexts.
  • Groups of SentenceContexts for lists ('*'), tables, wikilinks, and regular sentences. Four in total.
  • Removed SparseMatchModel.
  • Removed 4 SmallStationaryContextMap contexts.
  • Mixer count increased from 12 to 24.
  • Added 7 new ContextMaps.
  • Added 22 new InDirectStateMap contexts
  • Adjusted mixer parameters and contexts
  • Adjusted ContextMap memory usage
  • There are 3 mixer layers (+1 in every InDirectStateMap).
  • For layer-0 mixers, about 40% of updates are skipped.
  • Some predictions are skipped if the line is a category link, or after the sections 'See also', 'References', 'Bibliography', or 'External links'.
  • Some low-memory ContextMaps are reset after every page (Wikipedia article). The StateMap is preserved if it exists.
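The codeword-based similarity search described above can be sketched as follows. This is a hypothetical illustration, assuming sentences are stored as lists of integer dictionary codewords and that the 53% default is a fraction of matching positions; fxcm's actual matching rule may differ.

```python
# Hypothetical sketch: sentence similarity by comparing dictionary codewords.
# Assumes each sentence is a list of integer codewords; the 53% default
# threshold comes from the release notes, the matching rule is illustrative.

def sentence_similarity(a, b):
    """Fraction of aligned positions whose codewords agree, over the longer sentence."""
    if not a or not b:
        return 0.0
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def is_match(a, b, threshold=0.53):
    """A match is found when similarity reaches the threshold (default 53%)."""
    return sentence_similarity(a, b) >= threshold

# 3 of 4 codewords agree -> 75% similarity, above the 53% default.
s1 = [101, 202, 303, 404]
s2 = [101, 202, 303, 999]
```

Comparing small integer codewords instead of raw strings keeps each comparison cheap, which matters when scanning up to 64 stored sentences per group.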

fxcm_v24

02 Sep 11:15


  • Update model to fx2-cmix level
  • Add match skip

fxcm_v22

20 Jun 00:57


  • Reverse dictionary transform. The dictionary is loaded once it is found, after decompressing it. Text has a buffer separate from the coded byte-stream buffer.
  • Natural language processing using a stemmer (from paq8px(d)).
  • The stemmer has new word types: Article, Conjunction, Adposition, ConjunctiveAdverb.
  • Some word-related contexts change based on the type of the last word. Some words are removed from word streams depending on the last word type. This improves compression.

There are four word streams:

    1. basic stream of undecoded words.
    2. decoded word stream after stemming, for sentences. Contains all words. Reset when the sentence ends.
    3. decoded word stream after stemming, for paragraphs. Contains words that are not: Conjunction, Article, Male, Female, ConjunctiveAdverb. Reset when the paragraph ends.
    4. decoded word stream after stemming. Contains words that are not: Conjunction, Article, Male, Female, Adposition, AdverbOfManner, ConjunctiveAdverb.
  • Word limit per stream is increased from 64 to 256 words.
  • New context that uses the stemmer and decoded plaintext. Some global contexts change depending on the word type: for ConjunctiveAdverb or Conjunction, updating in stream 1 is skipped; Conjunction also triggers a sentence reset; etc. Recognizing these new word types enabled large compression improvements.
  • In some cases, words between certain character pairs are removed from streams 2 and 3:
    =| - wiki templates
    <> - html/xml tags
    [| - wiki links
    () - usually words in sentences
  • Main predictors are split between three different ContextMaps, which improves compression. Hash-table sizes are 32, 64 (standard for paq8 versions), or 128 bytes per context. The 32-byte size is good for small-memory contexts (below 256 KB), 64 for medium-sized contexts (up to 16 MB), and 128 for large-memory contexts (more than 16 MB).
  • One state table is removed and replaced with another. State tables are generated at runtime to reduce code size.
  • Added a sparse match model, with a gap of 1-2 bytes and a minimum match length of 3-6 bytes. Mostly for escaped UTF-8.
  • Detection of math, pre, nowiki, and text tags in the decoded text stream. Some word-related contexts are not used while the content of the first three tag types is being compressed. This improves compression speed.
  • More parsing of lists and paragraphs, so that predictor contexts are as good as they can be.
  • Optimized context skipping in main predictors.
  • The main predictor's context bias is not forwarded to the cmix floating-point mixers; instead, a single prediction bias is set. This avoids unnecessary expansion of the mixer weight space and keeps memory/CPU usage lower. Some other predictions are also not forwarded, as they make compression worse and slower.
  • Some mixer and APM context sizes are larger so that predictions can be better.
  • The partially/fully decoded word index into the dictionary is used as a mixer context in the fxcm mixer.
  • Some variables are renamed for better readability.
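The word-stream filtering rules above can be sketched as follows. This is an illustrative model, assuming each word carries a single stemmer word type; the type names come from the notes, but the data structures are hypothetical and not fxcm's code.

```python
# Hypothetical sketch of the four word streams' type filters. Stream 1 holds
# undecoded words and stream 2 keeps all decoded words, so only streams 3
# and 4 exclude word types (per the release notes above).
from enum import Enum, auto

class WordType(Enum):
    Conjunction = auto()
    Article = auto()
    Male = auto()
    Female = auto()
    Adposition = auto()
    AdverbOfManner = auto()
    ConjunctiveAdverb = auto()
    Other = auto()

# Word types excluded per stream; streams 1 and 2 exclude nothing.
EXCLUDED = {
    3: {WordType.Conjunction, WordType.Article, WordType.Male,
        WordType.Female, WordType.ConjunctiveAdverb},
    4: {WordType.Conjunction, WordType.Article, WordType.Male,
        WordType.Female, WordType.Adposition, WordType.AdverbOfManner,
        WordType.ConjunctiveAdverb},
}

def accepts(stream, word_type):
    """True if a stemmed word of this type enters the given stream (1-4)."""
    return word_type not in EXCLUDED.get(stream, set())
```

For example, an Adposition enters streams 1-3 but is filtered out of stream 4, while an Article is kept only in streams 1 and 2.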

fxcm_v17-18

13 Dec 17:13


  • Some cleanup
  • Adjust contexts

fxcm_v15-16

17 Oct 13:48


  • Add one new context
  • Adjust one mixer
  • Some cleanup
  • Add comments so readers can understand what is happening
  • Tune some variables and contexts
  • Improve compression speed

fxcm_v13-14

23 Sep 13:07


  • Word&Sentence context.
  • Adjust some contexts
  • Update "for CMIX" to most recent version

fxcm_v11-12

18 Sep 11:10


  • Wiki table/row & column context
  • Add 3 new contexts based on tables
  • Change some contexts

fxcm_v9-10

06 Sep 10:18


  • Change some contexts
  • Increase one mixer size

fxcm_v7-8

05 Sep 20:08


Page

One article as seen by the first-char context

fxcm_v5-6

04 Sep 16:37


  • Add another context based on first char
  • Adjust some contexts