Merge branch 'refs/heads/0.9.1-dev'
# Conflicts:
#	README.md
atompie committed Jun 24, 2024
2 parents 0560e5c + 7823be4 commit ea0182d
Showing 643 changed files with 23,431 additions and 10,531 deletions.
Binary file added .DS_Store
Binary file not shown.
32 changes: 20 additions & 12 deletions CHANGES.md
@@ -1,7 +1,15 @@
Version: 0.9.0
----------------------------------------------------------
* MySQL as a metadata store. Migration from Elasticsearch to MySQL as the metadata store for app configuration. Now only big data is stored in Elasticsearch.

* GUI Dark theme.
* MySQL as a metadata store. Now only big data is stored in Elasticsearch.
* New deployment process
* All workers have been replaced by a single worker - this simplifies the commercial installation
* Huge cost savings for multi-tenant installations
* Audiences and Activations replaced Workflow Segmentation
* Post Event Segmentation was removed.
* GitHub Workflow Push
* Auto events - not yet documented
* New built-in event types

Version: 0.8.2
----------------------------------------------------------
@@ -10,6 +18,14 @@ Version: 0.8.2
* Improved performance
* Simpler code
* Asynchronous workflows, destinations, etc. (pro)
* Auto profile merging - Automates merging profiles from different sources/channels.
* Workers
* Redone the scheduler worker (pro)
* New distributed storage worker (pro)
* New in-memory storage for profiles
* New metrics worker (pro)
* Redone visit-ended worker (pro)
* Redone workflow async worker (pro)
* Fixed some GUI errors
* Preparation for Keycloak OAuth authorisation (pro)
* New plugins:
@@ -21,22 +37,14 @@ Version: 0.8.2
* Tag Mailchimp Contact
* Time delay
* Profile Metrics
* Workers
* Entity expiration worker (pro)
* Redone the scheduler worker (pro)
* New distributed storage worker (pro)
* New in-memory storage for profiles
* New metrics worker (pro)
* Redone visit-ended worker (pro)
* Redone workflow async worker (pro)
* Configurable data partitioning
* Preparation for configurable profile merging strategies (pro)
* Fixed missing plugin documentation
* Extended integration with Mailchimp.
* Time is now zone-aware (no more naive dates) - all timestamps are UTC with an explicit UTC zone (see the sketch after this list).
* Fail over database for bigger fault tolerance (pro)
* Fail over database for better fault tolerance (pro)
* All records have insert, create, update timestamps.
* Info on fields updates.
* Information on fields updates.
* Updated the profile details page.
* Numerous error fixes
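
As an aside on the zone-aware timestamps above, a minimal Python sketch of the difference (illustrative only, not Tracardi's actual API):

```python
from datetime import datetime, timezone

# Old behaviour: naive datetime, no zone information attached.
naive = datetime.utcnow()
assert naive.tzinfo is None

# New behaviour: zone-aware datetime that explicitly carries the UTC zone.
aware = datetime.now(timezone.utc)
assert aware.tzinfo is not None
print(aware.isoformat())  # e.g. 2024-06-24T12:00:00.000000+00:00
```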

2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -4,7 +4,7 @@ Contributions are always welcome!!! Before that, please take a moment to review

- When contributing to this repository, please first discuss the change you wish to make with the owners of this repository via issue, email, Slack, or any other method before making the change.
- Don't forget to get yourself assigned before starting to work to avoid any clashes and confusion.
- Please read how to set up a development [environment for API](http://docs.tracardi.com/development/python_env/) or [GUI](http://docs.tracardi.com/development/react_env/)
- Please read how to set up a development [environment for API](http://manual.tracardi.com/development/env/api_source.md) or [GUI](http://manual.tracardi.com/development/env/gui_source.md)


## Pull Request Process
23 changes: 11 additions & 12 deletions README.md
@@ -6,7 +6,7 @@

<p align="center">
<br/>
<a href="https://docs.tracardi.com" rel="dofollow"><strong>Explore Tracardi Documentation</strong></a> ·
<a href="https://manual.tracardi.com" rel="dofollow"><strong>Explore Tracardi Documentation</strong></a> ·
<a href="https://opencollective.com/tracardi-cdp">⭐️ Support the project</a> ·
<a href="https://join.slack.com/t/tracardi/shared_invite/zt-1bpf35skl-8Fr5FX5a6cji7tX18JNNDA">👨‍💻 Join the community</a> ·
<a href="https://youtube.com/@tracardi">:tv: Watch tutorials on YOUTUBE</a>
@@ -31,9 +31,9 @@
</a>
</p>

# Open-source Customer Engagement and Data Platform
# API-First Composable Open-source Customer Data Platform

[TRACARDI](http://www.tracardi.com) is an API-first solution, a low-code or no-code platform aimed at any business that wants to start using user data for automated customer engagement. It is intended for anyone who carries out any type of customer interaction, be it through sales or service delivery.
[TRACARDI](http://www.tracardi.com) is an API-first, composable CDP tailored for any company that wants to integrate a CDP into its platform. Tracardi comes with a low-code/no-code editor aimed at any business that wants to start using customer data for automated engagement. It is intended for anyone who carries out any type of customer interaction, be it through sales or service delivery.

Tracardi __collects data from customer journeys__ and assigns it to a profile, automates __data enhancement__, and facilitates the use of 🚀 __Machine Learning APIs__.

@@ -69,7 +69,7 @@ Want to see Tracardi in action? Subscribe to our [:tv: Youtube channel](https://

## 👇 Installation and getting started

The easiest way to run TRACARDI is to run it as a :whale: **docker container**.

* Install docker and docker-compose on your local machine
* Clone [tracardi/tracardi-api](https://github.com/Tracardi/tracardi-api.git) by executing the following line in your terminal.
@@ -85,11 +85,11 @@
```
cd tracardi-api
docker-compose up
```

* Visit the URL `http://127.0.0.1:8787` and complete the installation in the Tracardi GUI.

## 👇 Alternate Methods of Installation
## 👇 Other Methods of Installation

There are alternate methods of installation available as well. These are described in detail in our [documentation](http://docs.tracardi.com/installation/).
There are other methods of installation available as well. These are described in detail in our [documentation](http://manual.tracardi.com/installation/).

## 👇 Need help?

@@ -114,7 +114,7 @@

## 👇 Documentation

* System documentation is available at: [http://docs.tracardi.com](http://docs.tracardi.com).
* System documentation is available at: [http://manual.tracardi.com](http://manual.tracardi.com).
* API documentation is always available after installation at http://127.0.0.1:8686/docs.
* Tracardi also has documentation built into the system.

@@ -124,17 +124,16 @@ Have you found a bug :bug: ? Or maybe you have a nice feature :sparkles: to contribute? The
[CONTRIBUTING guide](https://github.com/Tracardi/tracardi/blob/master/CONTRIBUTING.md) will help you get your
development environment ready in minutes.

All contributors willing to start coding TRACARDI plugins are urged to read the follwing beginners' tutorial:
All contributors willing to start coding TRACARDI plugins are urged to read the following beginners' tutorial:

* [How to code simple plugin in Tracardi](http://docs.tracardi.com/plugins/tutorial/part1/)
* [Configuring the plugin in Tracardi](http://docs.tracardi.com/plugins/tutorial/part2/)
* [How to code simple plugin in Tracardi](http://manual.tracardi.com/development/tutorial/plugin/)

## 👇 Support us

If you wish to support us, follow us on:

* [Facebook](https://bit.ly/3uPwP5a)
* [X (Twitter)](https://bit.ly/3uVJwLJ), tag TRACARDI and leave your comments.
* Subscribe to our [Youtube channel](https://bit.ly/3pbdbPR) to follow the development process and upcoming features. Don't forget to turn on notifications by pressing the bell icon so you stay informed about the latest updates and releases.
* ⭐️ Star the TRACARDI GitHub project - it really matters and puts a smile on our faces.

5 changes: 3 additions & 2 deletions RELEASE.md
@@ -1,14 +1,15 @@
# Release process

This is a description of a source release process. This tutorial is meant to be for core developers oly.
This is a description of a source release process. This tutorial is meant to be for core developers only.

1. Branch the code with release version e.g. 0.7.3
2. Find all occurrences of the old version, e.g. 0.7.3-dev, and replace them with the release version 0.7.3. There may be places in
   the documentation where the change should not be made. Remember to update requirements.txt (a scripted sketch of this replace follows the list).
3. Commit the code to the branch (0.7.3)
4. Checkout the master branch
4. Checkout the new version branch (0.7.4)
5. Find all occurrences of the old version, e.g. 0.7.3-dev, and replace them with the next development version 0.7.4-dev
6. Commit the code to the master branch
7. Check out the release version (e.g. 0.7.3) and run `bash ./build-http.sh` to build the Docker image. Also build the other
   containers.
8. Uncomment build-as-latest in the release version and build the latest Docker image.
9. Check whether the database version needs to be upgraded.
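
The version find-and-replace in steps 2 and 5 can be scripted. A minimal sketch, assuming the repository root as the working directory (the file-type filter and version strings are illustrative, not part of the official release tooling):

```python
from pathlib import Path

OLD, NEW = "0.7.3-dev", "0.7.3"  # step 2; for step 5 use "0.7.3-dev" -> "0.7.4-dev"

# Replace the version string in every text file; review the result with
# `git diff`, since some documentation may need to keep the old version.
for path in Path(".").rglob("*"):
    if path.is_file() and path.suffix in {".py", ".md", ".txt", ".yaml", ".yml"}:
        text = path.read_text(encoding="utf-8")
        if OLD in text:
            path.write_text(text.replace(OLD, NEW), encoding="utf-8")
            print(f"updated {path}")
```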
161 changes: 161 additions & 0 deletions ai/convert_es_to_mysql.md
@@ -0,0 +1,161 @@
You must convert an Elasticsearch (ES) mapping to a Python script that uses SQLAlchemy to create a MySQL table. To do
this, in the section `Elasticsearch index mapping` you will get a mapping from Elasticsearch. It will look like this:

```json
{
  "mappings": {
    "properties": {
      "name": {
        "type": "text"
      },
      "age": {
        "type": "integer"
      },
      "address": {
        "properties": {
          "street": {
            "type": "text",
            "ignore_above": 64
          },
          "city": {
            "type": "text"
          }
        }
      },
      "tags": {
        "type": "keyword"
      },
      "external_id": {
        "type": "keyword"
      }
    }
  }
}
```

It defines all available fields in the ES index. This mapping creates the following fields:

- name
- age
- address.street
- address.city
- tags
- external_id

Some fields are embedded, like `address.street`. Each field has a data type, for example `"type": "keyword"`.
Map the Elasticsearch data types to MySQL data types as follows (a lookup-table sketch in code follows this list):

keyword - varchar
text - varchar
date - datetime
number - integer
float - float
object - json
flattened - json
boolean - BOOLEAN
binary - BLOB

If there is no mapping, come up with the closest type that you think will fit.
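
Expressed as code, the type lookup above could be a small table like this sketch (the dictionary name and the fallback choice are assumptions, per the rule above):

```python
from sqlalchemy import Boolean, DateTime, Float, Integer, LargeBinary, String, JSON

# ES type -> SQLAlchemy/MySQL column type, following the table above.
ES_TO_SQLALCHEMY = {
    "keyword": String(255),
    "text": String(255),
    "date": DateTime,
    "number": Integer,
    "integer": Integer,   # assumption: ES also names this type 'integer'
    "float": Float,
    "object": JSON,
    "flattened": JSON,
    "boolean": Boolean,
    "binary": LargeBinary,  # rendered as BLOB on MySQL
}


def mysql_type_for(es_type: str):
    # Fall back to a 255-character string when the ES type is unknown.
    return ES_TO_SQLALCHEMY.get(es_type, String(255))
```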

Your task is to create a SQLAlchemy Python script that creates a MySQL table based on the Elasticsearch mapping provided
in the section `Elasticsearch index mapping`. The mapping should be converted to SQLAlchemy code in such a way that all
fields are available in the table. Embedded fields like address.street should be converted to address_street.
The table name should be taken from the `Elasticsearch index name` section.

If there is `ignore_above`, use this value to set the max string length. If the field name indicates an id, e.g.
flow_id, then convert it to `String(40)`. If there is an `id` field, convert it to: `id = Column(String(40), primary_key=True)`.
If there is no `ignore_above` for string values like text or keyword, make the max length 255. If there is a "null_value",
use it as the default field value.
Convert the whole mapping; do not leave any field unconverted, even if there are as many as 100
fields. All fields must be in the script so it can be executed just by copying it. (A per-field sketch of these rules follows.)
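
A hedged per-field sketch of those rules (the helper name is illustrative; it covers the id, `ignore_above`, flattening, and `null_value` cases described above):

```python
from sqlalchemy import Column, String


def string_column(name: str, props: dict) -> Column:
    col_name = name.replace(".", "_")  # address.street -> address_street
    if col_name == "id":
        return Column(col_name, String(40), primary_key=True)
    if col_name.endswith("_id"):
        return Column(col_name, String(40))  # field name indicates an id
    # `ignore_above` caps the string length; otherwise default to 255.
    length = props.get("ignore_above", 255)
    # `null_value`, when present, becomes the column default.
    return Column(col_name, String(length), default=props.get("null_value"))
```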

The expected result of such a conversion for the example above, with `Elasticsearch index name` set to `my_index`, could look
like this:

```python

from sqlalchemy import Column, Integer, String, DateTime, Float, PrimaryKeyConstraint, Boolean
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class MyIndexTable(Base):
    __tablename__ = 'my_index'

    id = Column(String(40))  # No primary key here; part of the composite key below
    tenant = Column(String(40))  # Add this field for multi-tenancy
    production = Column(Boolean)  # Add this field for multi-tenancy
    name = Column(String(128))  # Elasticsearch 'text' type is similar to MySQL 'VARCHAR'. Name field should always be 128
    age = Column(Integer)  # 'integer' in ES is the same as in MySQL
    address_street = Column(String(64))  # Nested 'text' field converted to 'VARCHAR'; ignore_above sets the max string length
    address_city = Column(String(255))
    tags = Column(String(255))  # 'keyword' type in ES corresponds to 'VARCHAR' in MySQL
    external_id = Column(String(40))  # Always a 40-char string

    # Notice that all fields are converted.

    __table_args__ = (
        PrimaryKeyConstraint('id', 'tenant', 'production'),
    )

```

Do not write any explanation, only full code.

# Elasticsearch index name

settings

# Elasticsearch index mapping

```json
{
  "settings": {
    "number_of_shards": %%CONF_SHARDS%%,
    "number_of_replicas": %%REPLICAS%%
  },
  "mappings": {
    "_meta": {
      "version": "%%VERSION%%",
      "name": "%%PREFIX%%"
    },
    "dynamic": "strict",
    "properties": {
      "id": {
        "type": "keyword",
        "ignore_above": 48
      },
      "timestamp": {
        "type": "date"
      },
      "name": {
        "type": "keyword"
      },
      "description": {
        "type": "text"
      },
      "type": {
        "type": "keyword"
      },
      "enabled": {
        "type": "boolean"
      },
      "content": {
        "type": "object",
        "dynamic": "true",
        "enabled": false
      },
      "config": {
        "type": "flattened"
      }
    }
  },
  "aliases": {
    "%%ALIAS%%": {}
  }
}
```