LogiCrunch is a sophisticated product built from scratch to process real-time streaming events from hospitals. It receives and processes events in a variety of shapes and formats. The processing pipeline is metadata-driven: tenant-specific patient messages are correlated and features are generated dynamically, prior to pathway predictions and instant notifications.
The platform can handle and transform sensor messages across versions, and can also cope with tenant-specific semantics. All tenant-specific encoded messages are transformed into a canonical form that is received and processed by every downstream component.
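The tenant-to-canonical transformation described above can be sketched as a metadata-driven field mapping. The field names and per-tenant mapping table below are illustrative assumptions, not LogiCrunch's actual schema:

```python
# Hypothetical sketch: normalising a tenant-specific sensor message into a
# canonical form via a per-tenant field map. All names are assumptions.

CANONICAL_FIELDS = {"patient_id", "event_type", "value", "unit", "observed_at"}

# Per-tenant mapping from local field names to canonical ones (illustrative).
TENANT_FIELD_MAPS = {
    "tenant_a": {"pid": "patient_id", "evt": "event_type", "val": "value",
                 "uom": "unit", "ts": "observed_at"},
}

def to_canonical(tenant_id, message):
    """Rename tenant-specific keys to the canonical schema; drop unknown keys."""
    field_map = TENANT_FIELD_MAPS[tenant_id]
    canonical = {field_map[k]: v for k, v in message.items() if k in field_map}
    missing = CANONICAL_FIELDS - canonical.keys()
    if missing:
        raise ValueError(f"incomplete message, missing: {sorted(missing)}")
    return canonical
```

In practice the field maps would come from the tenant metadata store rather than a hard-coded dictionary, so new tenants can be onboarded without code changes.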
The platform is broadly divided into two sets of processing feeds: tenant-agnostic feeds, and feeds that involve tenant-specific profiling and processing. Tenant-agnostic components receive and process the transformed canonical payloads.
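The split between the two sets of feeds can be pictured as a small routing step: tenant-encoded payloads are transformed first, while already-canonical payloads go straight to the tenant-agnostic components. The `format` field and function names are assumptions for this sketch:

```python
# Illustrative feed routing: canonical events pass straight through to the
# tenant-agnostic downstream; tenant-encoded events are transformed first.

def route(event, transform, downstream):
    """Dispatch one event; names and the 'format' marker are assumptions."""
    if event.get("format") != "canonical":
        event = transform(event)   # tenant-specific profiling/transformation
    return downstream(event)       # tenant-agnostic processing
```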
Cardiac and sepsis pathway predictive models are built offline on historical data and injected into the streaming pipeline. Device, ADT, and lab events received in real time pass through complex event processing (CEP) stages that transform, derive features, and predict on the fly. The models are segmented broadly by acuity: clustered CEP processors ("ceptors") receive high-acuity patient events at sub-second latencies, while low-acuity patient events arrive at comparatively longer intervals. The platform receives, processes, and predicts on continuous streaming data in real time, relying heavily on clustered messaging brokers for handshakes.
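A CEP stage of the kind described above can be sketched as a sliding-window feature derivation followed by a model call. The windowed mean and the threshold rule below are deliberately trivial stand-ins for the offline-trained cardiac/sepsis models; the window size and threshold are made-up values:

```python
from collections import deque

# Illustrative CEP step: derive a rolling feature from streaming vitals and
# apply a trivial stand-in for an offline-trained model. All parameters here
# are assumptions for the sketch, not clinical values.

class HeartRateCEP:
    def __init__(self, window=5, threshold=120.0):
        self.window = deque(maxlen=window)  # sliding window of recent readings
        self.threshold = threshold

    def on_event(self, heart_rate):
        """Process one reading; return (derived_feature, alert_flag)."""
        self.window.append(heart_rate)
        rolling_mean = sum(self.window) / len(self.window)  # derived feature
        alert = rolling_mean > self.threshold               # stand-in "model"
        return rolling_mean, alert
```

In the real pipeline the stand-in rule would be replaced by the injected model artefact, and alerts would be published back to the messaging brokers for instant notification.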
The platform generates and stores tenant-specific metadata as it is discovered dynamically during ingestion, while enriching and transforming tenant-specific payloads into canonical ones. The metadata is also loaded into a clustered cache for fast reads by the pipeline processes.
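The cache usage described here follows a read-through pattern: pipeline processes look tenant metadata up in the cache and fall back to the metadata store on a miss. A minimal sketch, with a plain in-process dictionary standing in for the real clustered cache and an injected `loader` standing in for the store query:

```python
# Minimal read-through cache sketch. A dict stands in for the clustered
# cache; 'loader' stands in for a query against the metadata store.

class MetadataCache:
    def __init__(self, loader):
        self._cache = {}
        self._loader = loader  # called only on a cache miss

    def get(self, tenant_id):
        """Return tenant metadata, loading and caching it on first access."""
        if tenant_id not in self._cache:
            self._cache[tenant_id] = self._loader(tenant_id)
        return self._cache[tenant_id]
```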
The pipeline stores the raw data, as received, in tenant-specific data lakes. It also maintains use-case-specific data ponds per tenant, including model features and predictions. Apache Atlas, Ranger, and Solr are configured over the data fabric to support data access, instrumentation, search, and retrieval. The entire data-fabric layer is metadata-driven.
LogiCrunch is pluggable, extensible, and scalable. It has several moving parts: the streaming engine, messaging brokers, the CEP engine, an in-memory cache, and the data fabric in HBase and PostgreSQL. All of these components are clustered and hosted on isolated and/or shared nodes, and each pipeline component can be scaled dynamically based on resource usage.
Furthermore, streaming data is seen and used everywhere, from social networks to mobile and web applications, IoT devices, instrumentation in data centres, and many other sources. As the speed and volume of this data increase, so does the need to analyse it in real time with machine learning algorithms. For example, you might want a continuous monitoring system that detects sentiment changes in a social media feed so you can react in near real time.
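The sentiment-monitoring example can be sketched as comparing the mean sentiment of the most recent messages against the preceding window and flagging a large move. Scores in [-1, 1] and the window/threshold values are assumptions for illustration:

```python
# Sketch of sentiment-shift detection over a stream of scores in [-1, 1].
# Window size and threshold are illustrative assumptions.

def detect_shift(scores, window=3, threshold=0.5):
    """Return True if mean sentiment moved by more than 'threshold'
    between the previous window and the most recent window."""
    if len(scores) < 2 * window:
        return False  # not enough history yet
    recent = sum(scores[-window:]) / window
    prior = sum(scores[-2 * window:-window]) / window
    return abs(recent - prior) > threshold
```

A production system would compute the scores with a trained sentiment model and evaluate this check continuously inside the streaming engine rather than over a list.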
LogiCrunch also has an AWS edition that can be deployed with little or no modification as a cost-effective batch and real-time data analytics architecture on the AWS Cloud, leveraging Apache Spark Streaming. In this edition, LogiCrunch enjoys the flexibility of the AWS Cloud for streaming data and data analysis. For more details, please go through our cloud consulting case studies.
Need help embarking on your Big Data journey? Contact us.