The client had an extensive list of requirements for the new version of the system, covering both the hardware and the software parts. The new version had to be significantly more performant and support deployment to remote offices, with both real-time and offline modes for remote buildings. Remote buildings could be located in different cities, yet the whole system had to work as a single unit.
Data interaction speed posed a number of challenges: every online module had to work in real time. In the security domain, a packet delivery or reaction that takes a full second is an eternity. Reaction to hardware events should take milliseconds, even if a door is located 100 km away from the central database. An installation can have a dozen offices, each with 10-50 controllers, and each controller with 1-4 doors and 1-4 locks and readers (card, biometric, chip, keypad, etc.).
To meet the customer's requirements, our team prepared a couple of system architecture designs and then developed a proof of concept (POC) for each of them. After taking measurements (speed, response time, emulation of a specified number of clients, sensors, signal sources, etc.), we selected the best-performing design and refined it. Once we had the skeleton of the future solution, we started splitting the system into subsystems. Naturally, we reused best practices from the customer's existing solutions and from our team's own experience.
Hardware interaction subsystem
An access control system has many objects, data entities and signal sources, including controllers, repeaters, concentrators, converters, doors, locks, readers and so forth. During a working day, even a system with 100-200 controllers can generate up to 500-1,000 events per second. Obviously, we cannot apply the "classic" architecture to that volume of information, where a single UART/USB port and one hardware driver write data on the fly directly to the database; we simply would not have the computing resources for it.
To solve the challenges listed above, we designed a dedicated multi-level data interaction system that works as a single unit. All data flows are classified by their importance level and by how they affect the system logic. The system includes in-memory cache subsystems, priority queues and other logical blocks.
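As an illustration, here is a minimal sketch of that idea in Python: events are pushed into a priority queue, alarm-level events are handled immediately, and everything else is cached in memory and flushed to the database in batches. The priority levels, event shapes and flush threshold are illustrative assumptions, not the production values.

```python
# A minimal sketch of the multi-level event pipeline described above.
# Priority levels, event names and the flush threshold are illustrative
# assumptions, not the production values.
import queue
import threading
import time
from dataclasses import dataclass, field

ALARM, ACCESS, TELEMETRY = 0, 1, 2     # lower number = higher priority

@dataclass(order=True)
class ControllerEvent:
    priority: int
    payload: dict = field(compare=False)

events: "queue.PriorityQueue[ControllerEvent]" = queue.PriorityQueue()
write_buffer: list[dict] = []          # in-memory cache before the DB write
FLUSH_SIZE = 500                       # assumed batch size

def handle_immediately(payload: dict) -> None:
    print("notify operator:", payload)  # pushed to the operator UI over a socket

def flush_to_db(rows: list[dict]) -> None:
    print(f"bulk insert of {len(rows)} rows")  # single batched DB round-trip

def dispatcher() -> None:
    while True:
        event = events.get()
        if event.priority == ALARM:
            handle_immediately(event.payload)   # millisecond reaction path
        write_buffer.append(event.payload)      # everything is persisted later
        if len(write_buffer) >= FLUSH_SIZE:
            flush_to_db(write_buffer.copy())
            write_buffer.clear()

threading.Thread(target=dispatcher, daemon=True).start()
events.put(ControllerEvent(ALARM, {"door": 17, "type": "forced_open"}))
events.put(ControllerEvent(TELEMETRY, {"controller": 3, "type": "heartbeat"}))
time.sleep(0.1)
```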
At the architecture design stage we took a timeout and decided to brainstorm about the future. What challenges might we face in 5 or even 10 years? What technologies and trends will dictate the rules by then? These are crucial questions whenever you invest in something new.
In the end we built an architecture where each subsystem is assembled from blocks, logical and functional bricks. With this concept we can plug artificial intelligence blocks into the logic, blocks that do not exist today but could be developed in 2-3 years. Today the system uses preconfigured and custom-configured rules and strategies for reacting to system events; in the future we plan to develop self-learning blocks that will detect "strange" system behavior, check system health, catch business rule violations and so forth.
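A minimal sketch of the "brick" idea, assuming a simple rule interface: every reaction block, whether a hand-written rule or a future self-learning model, exposes the same contract, so new blocks can be added without touching the event loop. The rule names and conditions are hypothetical.

```python
# Every reaction rule implements the same small interface, so a future
# self-learning block can be dropped in without changing the event loop.
from abc import ABC, abstractmethod

class ReactionBlock(ABC):
    @abstractmethod
    def react(self, event: dict) -> list[str]:
        """Return a list of actions for the given system event."""

class DoorHeldOpenRule(ReactionBlock):
    def react(self, event: dict) -> list[str]:
        if event.get("type") == "door_held_open" and event.get("seconds", 0) > 30:
            return ["raise_alarm", "notify_guard"]
        return []

class AnomalyDetectorStub(ReactionBlock):
    """Placeholder for a future self-learning block with the same contract."""
    def react(self, event: dict) -> list[str]:
        return []   # a trained model would score the event here

PIPELINE: list[ReactionBlock] = [DoorHeldOpenRule(), AnomalyDetectorStub()]

def process(event: dict) -> list[str]:
    actions: list[str] = []
    for block in PIPELINE:
        actions.extend(block.react(event))
    return actions

print(process({"type": "door_held_open", "seconds": 45, "door": 12}))
```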
Data storage subsystem
As mentioned above, we cannot simply read from and write to the database at 10,000 operations per second (bear in mind that some queries can carry megabytes of data); we need a different approach. Classic systems use a "via the DB" interaction model, where all modules "talk" to each other through data in the database. For instance, after an event from a sensor, the data is inserted into the DB, a DB trigger generates an event and notifies the client applications (the operator's workstation). Under heavy load this approach breaks down: signal delays, crashes caused by memory problems, system overloads and so on.
That is why we applied a "hybrid" concept in the new system: DB read/write operations are used only when they are really necessary. Modules communicate with each other in real time using sockets, websockets, shared memory and other technologies, with double backup protection for the data (if the module responsible for a data package sends it and then crashes, another module holds a backup copy of the same package). When data is ready to be saved to the DB, it is written once, without overloading the database. Data can also be aggregated, stored outside the core data storage (in service-local databases) and so on.
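The following sketch illustrates that hybrid exchange under simplified assumptions: the sender keeps an in-memory backup copy of each in-flight package until it is acknowledged, and the receiver aggregates data so that only a single bulk write reaches the database. The transport is reduced to a plain function call; in the real system it would be a socket or websocket channel, and all names here are illustrative.

```python
# Modules talk to each other directly and keep a copy of every in-flight
# package until the receiver acknowledges it; only aggregated results go
# to the database in a single bulk operation.
import uuid

pending: dict[str, dict] = {}      # packages waiting for acknowledgement

def send_package(payload: dict, deliver) -> None:
    package_id = str(uuid.uuid4())
    pending[package_id] = payload          # backup copy kept by the sender
    deliver(package_id, payload)           # socket/websocket send in reality

def acknowledge(package_id: str) -> None:
    pending.pop(package_id, None)          # safe to drop the backup copy now

def resend_unacknowledged(deliver) -> None:
    for package_id, payload in list(pending.items()):
        deliver(package_id, payload)       # another module can replay these

# Receiving module: aggregates events and writes to the DB in one operation.
aggregated: list[dict] = []

def receiver(package_id: str, payload: dict) -> None:
    aggregated.append(payload)
    acknowledge(package_id)
    if len(aggregated) >= 100:             # assumed aggregation threshold
        print(f"single bulk write of {len(aggregated)} rows")
        aggregated.clear()

send_package({"reader": 5, "card": "A1F3"}, receiver)
```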
Operation and Administration
The client-side software had to be user-friendly, easy to use, modern and clean. The client part of the previous version had a non-obvious structure and unintuitive access to the system's features, which is understandable: the customer's developers had been adding modules and features for 15 years, and it is still impressive software even today. In our case we already had the list of modules, their features and the requirements for future extensions, for example the artificial intelligence modules for decision making and forecasting mentioned earlier.
We built a modular system in which the "host" core application loads the necessary administrative and operation modules depending on the requirements of the working place, who works with the current client (users, roles, permissions), the functionality available to that client and so on (see the sketch after the list below). The modules include, among others:
- Personnel management module;
- Visitor management module;
- Account management;
- System users, roles and permissions management;
- UI module with building schemas and representations of system components, including their states (the module consists of two parts: representation and control for operators, and an administrative part for system administrators, who need a way to configure buildings and schemas, drag and drop doors and controllers onto floor plans and so on);
- Hardware management;
- Hardware diagnostics;
- System failover and backup routines;
- Active log of system events;
- Rules and strategies configuration;
- Plugins for integrations;
- Card design and printing module;
- Access point workstations (for operators or police).
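As a rough sketch of how the host application could decide which of these modules to load for a particular working place, the mapping below keys a few of the modules to the roles allowed to use them. The role names and the mapping itself are illustrative assumptions.

```python
# Hypothetical role-to-module mapping used by the "host" core application.
MODULE_PERMISSIONS = {
    "personnel_management":   {"hr_admin"},
    "visitor_management":     {"reception", "security_operator"},
    "hardware_diagnostics":   {"system_admin"},
    "events_active_log":      {"security_operator", "system_admin"},
    "card_design_and_print":  {"reception"},
}

def modules_for(user_roles: set[str]) -> list[str]:
    """Return the modules the host app should load for this operator."""
    return [name for name, allowed in MODULE_PERMISSIONS.items()
            if user_roles & allowed]

print(modules_for({"security_operator"}))
# ['visitor_management', 'events_active_log']
```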
Reporting & BI
If you have tons of information, you need an easy way to access it. Each client's installation will certainly have standard requirements for data reports and representation, but perhaps 30% of the requirements will be custom ones for data access or data export.
This is why we implemented a solution that combines a classic reporting approach (with the ability to define custom reports) and a Business Intelligence (BI) system for better, more modern data representation and control. Not every customer can afford to pay for standalone or cloud-based BI solutions, but some of them already use systems such as Power BI and connect to the permitted data entry points in the data storage.
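One possible shape for the custom-report side of this, assuming reports are defined as named, parameterized queries over read-only reporting views that BI tools such as Power BI can also connect to; the view and column names are hypothetical.

```python
# A custom report as a named, parameterized query over a reporting view.
# View, column and parameter names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class ReportDefinition:
    name: str
    query: str            # parameterized SQL over read-only reporting views
    parameters: tuple

REPORTS = {
    "late_arrivals": ReportDefinition(
        name="Late arrivals by office",
        query="""
            SELECT office, person, MIN(event_time) AS first_entry
            FROM v_access_events
            WHERE event_time::date = %s AND event_time::time > %s
            GROUP BY office, person
        """,
        parameters=("report_date", "threshold_time"),
    ),
}
# A BI tool such as Power BI would skip this layer and read v_access_events directly.
```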
Integrations and Open API
A modern access control system cannot work in isolation (in at least 90% of cases). Data provided by the access control system can be used in enterprise systems such as ERP, CRM, HRM, payroll and many others.
The previous software version had a lot of integration modules custom-developed for each client. Supporting such a "zoo" of custom integrations was very hard, especially during system modernization and subsequent updates for each client. At times it was a real nightmare for our customer.
To solve at least 90% of the problems listed above, we developed an open REST API for the system, for any external integrations and extensions. Now, when a new version is developed or the DB storage changes, we rely on API versioning: obsolete methods and entry points are still handled, data collisions are resolved and so on. Over time clients migrate to the up-to-date API versions, so the system does not collapse after each new release. Of course, multi-version support also requires data transition structures, regression tests and other disciplines.
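A minimal sketch of the versioning idea, using FastAPI purely as an example framework (the article does not name one): the obsolete v1 endpoint stays available but is marked deprecated, while v2 serves the newer response shape.

```python
# Versioned REST API: v1 is kept alive for existing integrations, v2 is the
# current shape. Field names and the example data are illustrative.
from fastapi import FastAPI

app = FastAPI(title="Access Control Open API")

@app.get("/api/v1/events", deprecated=True)
def events_v1(limit: int = 100):
    # Obsolete response shape kept so existing integrations do not break.
    return [{"id": 1, "door": 17, "ts": "2024-01-01T08:00:00Z"}][:limit]

@app.get("/api/v2/events")
def events_v2(limit: int = 100):
    # Newer response shape; clients migrate here at their own pace.
    return {
        "items": [{"id": 1, "door_id": 17, "occurred_at": "2024-01-01T08:00:00Z"}],
        "limit": limit,
    }

# Run with: uvicorn api:app --reload   (assuming this file is saved as api.py)
```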