
Fastest CPU-RAM subsystem compressing KJV Bible

Sanmayce

Joined: 15 May 2017
Status: Offline
Points: 3
Posted: 15 May 2017 at 1:09am
Looking to gather results on the fastest machines of today.

The thing that interests me is superfast decompression of textual data, so I made the 'Bible' benchmark package:

- single-threaded;
- Windows;
- reproducible (see below for download);
- 40+ compressors;
- test data file: the Bible;
- integer in nature; stresses the CPU-RAM subsystem;
- results sorted by compression ratio.
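As a minimal stand-in for what the package measures (the sample text and repeat counts here are illustrative, not taken from the actual benchmark), single-threaded compression ratio and decompression throughput can be sketched with Python's built-in zlib:

```python
import time
import zlib

# Illustrative stand-in for the KJV Bible test file: repeated sample text.
data = b"In the beginning God created the heaven and the earth. " * 20000

compressed = zlib.compress(data, 9)
ratio = len(data) / len(compressed)

# Time repeated decompression, the phase the benchmark cares most about.
runs = 10
start = time.perf_counter()
for _ in range(runs):
    out = zlib.decompress(compressed)
elapsed = time.perf_counter() - start

assert out == data  # round-trip sanity check
mb_per_s = (len(data) * runs / (1024 * 1024)) / elapsed
print(f"ratio: {ratio:.2f}x, decompression: {mb_per_s:.0f} MB/s")
```

The real package swaps zlib for 40+ compressors and the sample text for the actual Bible file, but the measurement loop is the same idea.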

Benchmark_Bible_(Washigan+_vs_lzbench_vs_TurboBench), 12.6 MB (13,294,047 bytes), downloadable at:

!AmWWFXGMzDmEgwJoWmaIzTunrZjN

My wish is to give the richest roster of compression programs of today, executed on both the latest AMD and Intel CPUs.

Bible roster

This PDF booklet features a Ryzen 3.9GHz vs Haswell 4.6GHz showdown:

It would be nice if someone could run it on a 7700K at 5+GHz...
Here are the steps to run it:

To me, this benchmark is quite informative since it stresses the CPU-RAM subsystem with all modern general-purpose compressors. During the compression phase we can see how many different algorithms behave: Hash Chains, Binary Search Trees, Suffix Arrays, Neural Networks, Automata, ...
Speed is religion.
elizabethrboatright

Joined: 22 Jan 2018
Status: Offline
Points: 1
Posted: 26 Jan 2018 at 12:56pm
For in-kernel compression to work, the kernel must take byte sequences in memory and compress them, then keep the compressed form in RAM until the data is needed again. While the data is in a compressed state, it is not possible to read or write any individual bytes within it. When the data is needed again, the compressed sequence must be decompressed so individual bytes can once more be accessed directly.

It is possible to compress any number of consecutive bytes, but it is convenient to use a fixed "unit" of compression. A standard storage unit used throughout the kernel is a "page", which consists of a fixed constant PAGE_SIZE bytes (4KB on most architectures supported by Linux). If a page is aligned on a PAGE_SIZE address boundary, it is known as a "page frame"; the kernel maintains a corresponding "struct page" for each page frame in system RAM. All three zprojects use a page as the unit of compression, and allocate and manage page frames to store the compressed pages.
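The page-as-unit scheme described above can be sketched in Python. This is only an illustration of the idea, not kernel code: zlib stands in for whatever compressor the kernel actually uses, and the helper names are invented for the example.

```python
import zlib

PAGE_SIZE = 4096  # common page size on Linux


def compress_page(page: bytes) -> bytes:
    """Compress one fixed-size page; the blob stays in RAM until needed."""
    assert len(page) == PAGE_SIZE
    return zlib.compress(page)


def decompress_page(blob: bytes) -> bytes:
    """Restore the full page so individual bytes are addressable again."""
    page = zlib.decompress(blob)
    assert len(page) == PAGE_SIZE
    return page


# Simulate storing a page compressed, then accessing a byte later.
page = bytes(range(256)) * (PAGE_SIZE // 256)
blob = compress_page(page)
restored = decompress_page(blob)
assert restored[123] == page[123]
```

Note that while the page lives as `blob`, there is no way to read byte 123 without decompressing the whole page first, which is exactly the trade-off the paragraph above describes.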
Forum Software by Web Wiz Forums® version 11.06
Copyright ©2001-2016 Web Wiz Ltd.