Event Management System

Purpose

The project focuses on monitoring and maintaining user activity logs in the VitaliT app to track streaks and assign badges based on engagement, such as daily app usage. The system also provides insights into user behavior by visualizing the data in Kibana, helping the team understand patterns for improving engagement.

Key Goals

1. User Streak Monitoring

  • Track daily app usage and identify users who maintain streaks (e.g., 21 consecutive days).
  • Automatically assign badges and rewards based on user streaks.
  • Trigger actions or notifications based on engagement metrics.
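
The streak-and-badge logic above can be sketched in a few lines of Python (the 21-day threshold comes from the example; the badge name and the simplified date handling are assumptions):

```python
from datetime import date, timedelta

def current_streak(active_days: set, today: date) -> int:
    """Count consecutive days of activity ending today."""
    streak = 0
    day = today
    while day in active_days:
        streak += 1
        day -= timedelta(days=1)
    return streak

def badge_for(streak: int):
    """Assign a badge once the 21-day threshold is reached (badge name is an assumption)."""
    return "21-day-streak" if streak >= 21 else None

# A user active every day for the last 21 days (today inclusive)
today = date(2024, 5, 21)
days = {today - timedelta(days=i) for i in range(21)}
print(current_streak(days, today), badge_for(current_streak(days, today)))  # 21 21-day-streak
```

In a real deployment the set of active days would come from the Elasticsearch aggregation described below rather than being built in memory.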

2. Alerts and Notifications

  • Use Prometheus and Alertmanager to monitor logs.
  • Send notifications when activity thresholds are met (e.g., 21-day streak).

3. Data Visualization

  • Store and index user activity logs in Elasticsearch.
  • Use Kibana for real-time insights and trends on user behavior.

Development Setup

1. Set Up Elasticsearch and Kibana

  • Install Elasticsearch:

    • Follow the official Elasticsearch installation guide for your platform.
    • Start the service and confirm it responds on http://localhost:9200.

  • Install Kibana:

    • Follow the Kibana installation guide that matches your Elasticsearch version.
    • Configure Kibana to connect to your Elasticsearch instance.
    • Verify the setup by accessing Kibana through your web browser.
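
A quick sanity check before moving on is Elasticsearch's /_cluster/health endpoint (e.g. curl http://localhost:9200/_cluster/health), whose JSON body should report a status of green or yellow. A minimal check of that response (the sample body is illustrative):

```python
import json

def cluster_is_healthy(health_body: str) -> bool:
    """True if a /_cluster/health response reports green or yellow status."""
    return json.loads(health_body).get("status") in ("green", "yellow")

# Sample response body from GET /_cluster/health on a single-node dev cluster
sample = '{"cluster_name": "es-dev", "status": "yellow", "number_of_nodes": 1}'
print(cluster_is_healthy(sample))  # True
```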

2. Integrate Flutter App with Elasticsearch

  • Add HTTP Client to Flutter App:

    • Use the http package to make HTTP requests.

    • Add the following package to your pubspec.yaml:

      dependencies:
        http: ^0.14.0
  • Send Logs to Elasticsearch:

    • Implement a function to send log data to Elasticsearch:

      import 'package:http/http.dart' as http;
      import 'dart:convert';

      Future<void> sendLogToElasticsearch(Map<String, dynamic> log) async {
        final url = 'http://your-elasticsearch-instance:9200/logs/_doc/';
        final response = await http.post(
          Uri.parse(url),
          headers: {'Content-Type': 'application/json'},
          body: json.encode(log),
        );

        if (response.statusCode != 201) {
          throw Exception('Failed to send log');
        }
      }
    • Call sendLogToElasticsearch() whenever you need to log an event.
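
Whatever client sends the logs, the document body should carry the fields the index expects (user_id, date, event). A sketch of building such a document in Python (the event name app_opened is an assumption):

```python
from datetime import datetime, timezone

def build_log(user_id: str, event: str) -> dict:
    """Build a log document with the fields used throughout this setup."""
    return {
        "user_id": user_id,
        "date": datetime.now(timezone.utc).isoformat(),  # ISO 8601, parseable as an ES 'date'
        "event": event,
    }

log = build_log("12345", "app_opened")
print(sorted(log))  # ['date', 'event', 'user_id']
```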


3. Configure Elasticsearch Index and Kibana

  • Create an Index in Elasticsearch:

    • You can create an index directly from Kibana or using a PUT request to the Elasticsearch API. Example of creating an index via Kibana Dev Tools:

      PUT /logs
      {
        "mappings": {
          "properties": {
            "user_id": {"type": "keyword"},
            "date": {"type": "date"},
            "event": {"type": "text"}
          }
        }
      }
  • Set Up Kibana Index Pattern:

    • In Kibana, go to the Index Patterns section and create a new pattern that matches your Elasticsearch index (e.g., logs*).
    • Configure visualizations and dashboards based on your log data.

4. Set Up Metrics Collection and Prometheus

  • Create an Intermediate Service or Exporter:
    • Implement a service that queries Elasticsearch for the required metrics and exposes them in a format Prometheus can scrape.

      Example using Python and prometheus_client:

      import time

      from prometheus_client import start_http_server, Gauge
      from elasticsearch import Elasticsearch

      es = Elasticsearch(['http://your-elasticsearch-instance:9200'])
      g = Gauge('app_open_days', 'Number of days user has opened the app', ['user_id'])

      def collect_metrics():
          result = es.search(index='logs', body={
              "size": 0,  # aggregations only; no documents needed
              "aggs": {
                  "by_user": {
                      "terms": {"field": "user_id", "size": 1000},  # default is only 10 users
                      "aggs": {
                          "days_open": {
                              "date_histogram": {
                                  "field": "date",
                                  "calendar_interval": "day",  # 'interval' is deprecated in ES 7.x+
                                  "min_doc_count": 1  # count only days with activity
                              }
                          }
                      }
                  }
              }
          })
          for bucket in result['aggregations']['by_user']['buckets']:
              # One histogram bucket per distinct day the user was active
              g.labels(user_id=bucket['key']).set(len(bucket['days_open']['buckets']))

      if __name__ == '__main__':
          start_http_server(8000)
          while True:
              collect_metrics()
              time.sleep(60)  # poll periodically instead of spinning in a tight loop

5. Configure Prometheus and Alertmanager

  • Prometheus Configuration:

    • Edit prometheus.yml to add your exporter as a scrape target:

      scrape_configs:
        - job_name: 'elasticsearch-logs'
          static_configs:
            - targets: ['localhost:8000']
  • Set Up Alerting Rules:

    • Define alerting rules in a separate rules file and reference it from prometheus.yml via rule_files:

      groups:
        - name: user_activity
          rules:
            - alert: UserOpenedAppFor21Days
              expr: app_open_days >= 21
              for: 1m
              labels:
                severity: critical
              annotations:
                summary: "User {{ $labels.user_id }} has opened the app for 21 days"
  • Alertmanager Configuration:

    • Edit alertmanager.yml to define how alerts should be handled:

      route:
        receiver: 'api-call'

      receivers:
        - name: 'api-call'
          webhook_configs:
            - url: 'http://your-api-endpoint'
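
Alertmanager POSTs firing alerts to the webhook URL as a JSON payload, so the receiving API can read the user_id label off each alert and assign the badge. A minimal sketch of that parsing step (the badge-assignment logic on your side is an assumption; the sample payload follows Alertmanager's webhook format):

```python
import json

def users_with_firing_alerts(payload: str) -> list:
    """Extract user_id labels from firing alerts in an Alertmanager webhook payload."""
    data = json.loads(payload)
    return [
        alert["labels"]["user_id"]
        for alert in data.get("alerts", [])
        if alert.get("status") == "firing" and "user_id" in alert.get("labels", {})
    ]

# Example webhook body as Alertmanager would send it
sample = json.dumps({
    "version": "4",
    "status": "firing",
    "alerts": [{
        "status": "firing",
        "labels": {"alertname": "UserOpenedAppFor21Days", "user_id": "12345"},
        "annotations": {"summary": "User 12345 has opened the app for 21 days"},
    }],
})
print(users_with_firing_alerts(sample))  # ['12345']
```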