LogNDB is fast: searching HTTP/HTTPS request data sets at a 20:1 compression factor means you can search a 100GB (uncompressed) dataset in just over 5 minutes. That's real performance that you can just use!
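The arithmetic behind that claim is simple. A quick sketch (illustrative only; the figures come from the 20:1 ratio and 5-minute search time above):

```python
# Back-of-the-envelope math for the search claim above (illustrative only).
uncompressed_gb = 100        # dataset size before compression
compression_ratio = 20       # LogNDB's ~20:1 factor for HTTP/HTTPS logs
search_minutes = 5           # quoted end-to-end search time

# Data actually scanned is the compressed footprint.
compressed_gb = uncompressed_gb / compression_ratio

# Effective throughput measured against the *uncompressed* data.
effective_gb_per_sec = uncompressed_gb / (search_minutes * 60)

print(f"Data actually scanned: {compressed_gb:.0f} GB")
print(f"Effective throughput:  {effective_gb_per_sec * 1024:.0f} MB/s (uncompressed-equivalent)")
```

In other words, searching compressed data lets a scan of just 5GB deliver the equivalent of roughly 340 MB/s over the raw logs.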
At the core of LogNDB is the LogN Encoder, which was built for austere, low size, weight, and power (SWaP) applications. Each encoder runs on a single thread and uses about 5MB of RAM. Depending on client requirements, including the existing amount of data and the projected rate of new data acquisition, we scale the LogN Encoder to hundreds of thousands of concurrent serverless instances while keeping one-time and ongoing costs low.
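To give a feel for that footprint, here is a hypothetical sizing sketch. The 5MB RAM figure comes from the paragraph above; the per-encoder ingest rate is an assumed placeholder, not a published LogNDB number:

```python
# Hypothetical sizing sketch: encoder count and aggregate RAM for a given
# daily ingest rate. The per-encoder throughput is an ASSUMPTION for
# illustration, not a measured LogN Encoder figure.
ram_per_encoder_mb = 5                 # from the SWaP figures above
assumed_mb_per_sec_per_encoder = 50    # placeholder ingest rate per encoder

daily_ingest_tb = 100                  # example client workload
daily_ingest_mb = daily_ingest_tb * 1024 * 1024
seconds_per_day = 86_400

# Steady-state encoders needed to keep up with the ingest rate.
encoders_needed = daily_ingest_mb / (assumed_mb_per_sec_per_encoder * seconds_per_day)
total_ram_mb = encoders_needed * ram_per_encoder_mb

print(f"Encoders (steady state): {encoders_needed:.1f}")
print(f"Aggregate encoder RAM:   {total_ram_mb:.0f} MB")
```

Even under these rough assumptions, the aggregate memory footprint stays small, which is what makes massive serverless fan-out economical.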
LogNDB and the LogN Encoder compile for x86/ARM chipsets and run in a Linux environment. Most of our clients have a cloud presence that we can integrate with to dramatically reduce TCO and increase the usefulness of their data.
Even small companies are required to collect and store log data generated throughout their technology infrastructure, and larger organizations may need to store and transmit over a petabyte of data each day. With the ability to search compressed log data and a 20x reduction in cloud storage and transmission costs, LogNDB provides outstanding value to our users.
AWS data transfer prices per gigabyte vary depending on whether data moves in or out of AWS and between AWS cloud services. Data transfer often ends up contributing up to 30% of the total AWS bill. LogNDB substantially reduces cloud data transfer costs by reducing both the amount of data transferred and the frequency and number of transfers. More money in your budget!
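A quick illustrative comparison shows how the 20:1 compression factor translates into egress savings. The per-GB rate here is an assumption; check current AWS pricing for your region and tier:

```python
# Illustrative egress-cost comparison. The $/GB rate is an ASSUMPTION
# (roughly in line with published internet-egress pricing); verify against
# current AWS pricing for your region.
assumed_egress_cost_per_gb = 0.09
monthly_egress_gb = 10_000          # example: 10 TB/month leaving AWS
compression_ratio = 20              # LogNDB's ~20:1 factor

cost_raw = monthly_egress_gb * assumed_egress_cost_per_gb
cost_compressed = (monthly_egress_gb / compression_ratio) * assumed_egress_cost_per_gb

print(f"Raw egress cost:        ${cost_raw:,.2f}/month")
print(f"Compressed egress cost: ${cost_compressed:,.2f}/month")
print(f"Savings:                ${cost_raw - cost_compressed:,.2f}/month")
```

Under these example numbers, shipping compressed data cuts the monthly egress bill by 95%, before even counting the reduced number of transfers.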
ML and big data analysis require vast amounts of data to derive meaningful insights. Running the tools is expensive in storage and processing costs, often tens of thousands of dollars per iteration. State-of-the-art ML models such as OpenAI's GPT-3 cost tens of millions of dollars to train. LogNDB reduces the data costs of training these models by up to 20:1. Again, more money in your budget!