chore: [Hub Bot] Refresh metadata 2025-02-24 #1953
Conversation
Testing plugin
Auto-generated README.md
Setting | Required | Default | Description |
---|---|---|---|
host | False | None | Hostname for redshift instance. |
port | False | 5432 | The port on which redshift is awaiting connection. |
enable_iam_authentication | False | None | If true, use temporary credentials (https://docs.aws.amazon.com/redshift/latest/mgmt/generating-iam-credentials-cli-api.html). |
cluster_identifier | False | None | Redshift cluster identifier. Note if sqlalchemy_url is set or enable_iam_authentication is false this will be ignored. |
user | False | None | User name used to authenticate. Note if sqlalchemy_url is set this will be ignored. |
password | False | None | Password used to authenticate. Note if sqlalchemy_url is set this will be ignored. |
dbname | False | None | Database name. Note if sqlalchemy_url is set this will be ignored. |
aws_redshift_copy_role_arn | True | None | Redshift copy role arn to use for the COPY command from s3 |
s3_bucket | True | None | S3 bucket to save staging files before using COPY command |
s3_region | False | None | AWS region for S3 bucket. If not specified, region will be detected by boto config resolution. See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html |
s3_key_prefix | False |  | S3 key prefix to save staging files before using COPY command |
remove_s3_files | False | 0 | If you want to remove staging files in S3 |
temp_dir | False | temp | Where you want to store your temp data files. |
default_target_schema | False | None | Redshift schema to send data to, example: tap-clickup |
activate_version | False | 0 | If set to false, the tap will ignore activate version messages. If set to true, add_record_metadata must be set to true as well. |
hard_delete | False | 0 | When activate version is sent from a tap, this specifies if we should delete the records that don't match, or mark them with a date in the _sdc_deleted_at column. This config option is ignored if activate_version is set to false. |
add_record_metadata | False | 0 | Note that this must be enabled for activate_version to work! This adds _sdc_extracted_at, _sdc_batched_at, and more to every table. See https://sdk.meltano.com/en/latest/implementation/record_metadata.html for more information. |
ssl_enable | False | 0 | Whether or not to use ssl to verify the server's identity. Use ssl_certificate_authority and ssl_mode for further customization. To use a client certificate to authenticate yourself to the server, use ssl_client_certificate_enable instead. |
ssl_mode | False | verify-full | SSL protection method; see the [Redshift documentation](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-ssl-support.html) for more information. Must be one of disable, allow, prefer, require, verify-ca, or verify-full. |
grants | False | None | List of users/roles/groups that will have select permissions on the tables |
load_method | False | append-only | The method to use when loading data into the destination. append-only will always write all input records whether or not they already exist. upsert will update existing records and insert new records. overwrite will delete all existing records and insert all input records. |
batch_size_rows | False | None | Maximum number of rows in each batch. |
process_activate_version_messages | False | 1 | Whether to process ACTIVATE_VERSION messages. |
validate_records | False | 1 | Whether to validate the schema of the incoming streams. |
stream_maps | False | None | Config object for stream maps capability. For more information check out Stream Maps. |
stream_map_config | False | None | User-defined config values to be used within map expressions. |
faker_config | False | None | Config for the Faker instance variable fake used within map expressions. Only applicable if the plugin specifies faker as an additional dependency (through the singer-sdk faker extra or directly). |
faker_config.seed | False | None | Value to seed the Faker generator for deterministic output: https://faker.readthedocs.io/en/master/#seeding-the-generator |
faker_config.locale | False | None | One or more LCID locale strings to produce localized output for: https://faker.readthedocs.io/en/master/#localization |
flattening_enabled | False | None | 'True' to enable schema flattening and automatically expand nested properties. |
flattening_max_depth | False | None | The max depth to flatten schemas. |
A full list of supported settings and capabilities is available by running: target-redshift --about
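For orientation, here is a minimal, hypothetical config.json for target-redshift assembled only from the settings listed above; all hostnames, ARNs, bucket names, and credentials are placeholder values, and only aws_redshift_copy_role_arn and s3_bucket are required. It would be passed to the target via --config config.json.

```json
{
  "host": "example-cluster.abc123.us-east-1.redshift.amazonaws.com",
  "user": "loader",
  "password": "REDACTED",
  "dbname": "analytics",
  "default_target_schema": "tap_clickup",
  "aws_redshift_copy_role_arn": "arn:aws:iam::123456789012:role/redshift-copy-role",
  "s3_bucket": "example-staging-bucket",
  "s3_region": "us-east-1",
  "s3_key_prefix": "singer-staging/",
  "load_method": "upsert",
  "add_record_metadata": true,
  "activate_version": true,
  "grants": ["analyst_group"]
}
```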
Version info
target-redshift v0.2.1, Meltano SDK v0.44.3
Usage info
melty-bot % target-redshift --help
Usage: target-redshift [OPTIONS]
Execute the Singer target.
Options:
--version Display the package version.
--about Display package metadata and settings.
--format [json|markdown] Specify output style for --about
--config TEXT Configuration file location or 'ENV' to use
environment variables.
--input FILENAME A path to read messages from instead of from
standard in.
--help Show this message and exit.
Detected capabilities
- ❌ 'discover'
- ❌ 'catalog'
- ❌ 'properties'
- ❌ 'state'
- ✅ 'about'
JSON Metadata
{
"name": "target-redshift",
"description": "Target for Redshift.",
"version": "0.2.1",
"sdk_version": "0.44.3",
"supported_python_versions": [
"3.9",
"3.10",
"3.11",
"3.12"
],
"capabilities": [
"about",
"stream-maps",
"schema-flattening",
"validate-records",
"activate-version",
"target-schema",
"hard-delete"
],
"settings": {
"type": "object",
"properties": {
"host": {
"type": [
"string",
"null"
],
"description": "Hostname for redshift instance."
},
"port": {
"type": [
"string",
"null"
],
"default": "5432",
"description": "The port on which redshift is awaiting connection."
},
"enable_iam_authentication": {
"type": [
"boolean",
"null"
],
"title": "Enable IAM Authentication",
"description": "If true, use temporary credentials (https://docs.aws.amazon.com/redshift/latest/mgmt/generating-iam-credentials-cli-api.html)."
},
"cluster_identifier": {
"type": [
"string",
"null"
],
"description": "Redshift cluster identifier. Note if sqlalchemy_url is set or enable_iam_authentication is false this will be ignored."
},
"user": {
"type": [
"string",
"null"
],
"description": "User name used to authenticate. Note if sqlalchemy_url is set this will be ignored."
},
"password": {
"type": [
"string",
"null"
],
"description": "Password used to authenticate. Note if sqlalchemy_url is set this will be ignored."
},
"dbname": {
"type": [
"string",
"null"
],
"title": "Database Name",
"description": "Database name. Note if sqlalchemy_url is set this will be ignored."
},
"aws_redshift_copy_role_arn": {
"type": [
"string"
],
"title": "AWS Redshift Copy Role ARN",
"description": "Redshift copy role arn to use for the COPY command from s3",
"secret": true,
"writeOnly": true
},
"s3_bucket": {
"type": [
"string"
],
"description": "S3 bucket to save staging files before using COPY command"
},
"s3_region": {
"type": [
"string",
"null"
],
"description": "AWS region for S3 bucket. If not specified, region will be detected by boto config resolution. See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html"
},
"s3_key_prefix": {
"type": [
"string",
"null"
],
"default": "",
"description": "S3 key prefix to save staging files before using COPY command"
},
"remove_s3_files": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "If you want to remove staging files in S3"
},
"temp_dir": {
"type": [
"string",
"null"
],
"default": "temp",
"description": "Where you want to store your temp data files."
},
"default_target_schema": {
"type": [
"string",
"null"
],
"description": "Redshift schema to send data to, example: tap-clickup"
},
"activate_version": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "If set to false, the tap will ignore activate version messages. If set to true, add_record_metadata must be set to true as well."
},
"hard_delete": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "When activate version is sent from a tap this specefies if we should delete the records that don't match, or mark them with a date in the `_sdc_deleted_at` column. This config option is ignored if `activate_version` is set to false."
},
"add_record_metadata": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "Note that this must be enabled for activate_version to work!This adds _sdc_extracted_at, _sdc_batched_at, and more to every table. See https://sdk.meltano.com/en/latest/implementation/record_metadata.htmlfor more information."
},
"ssl_enable": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "Whether or not to use ssl to verify the server's identity. Use ssl_certificate_authority and ssl_mode for further customization. To use a client certificate to authenticate yourself to the server, use ssl_client_certificate_enable instead."
},
"ssl_mode": {
"type": [
"string",
"null"
],
"default": "verify-full",
"description": "SSL Protection method, see [redshift documentation](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-ssl-support.html for more information. Must be one of disable, allow, prefer, require, verify-ca, or verify-full."
},
"grants": {
"type": [
"array",
"null"
],
"items": {
"type": [
"string"
]
},
"description": "List of users/roles/groups that will have select permissions on the tables"
},
"load_method": {
"type": [
"string",
"null"
],
"default": "append-only",
"description": "The method to use when loading data into the destination. `append-only` will always write all input records whether that records already exists or not. `upsert` will update existing records and insert new records. `overwrite` will delete all existing records and insert all input records.",
"enum": [
"append-only",
"upsert",
"overwrite"
]
},
"batch_size_rows": {
"type": [
"integer",
"null"
],
"title": "Batch Size Rows",
"description": "Maximum number of rows in each batch."
},
"process_activate_version_messages": {
"type": [
"boolean",
"null"
],
"title": "Process `ACTIVATE_VERSION` messages",
"default": true,
"description": "Whether to process `ACTIVATE_VERSION` messages."
},
"validate_records": {
"type": [
"boolean",
"null"
],
"title": "Validate Records",
"default": true,
"description": "Whether to validate the schema of the incoming streams."
},
"stream_maps": {
"type": [
"object",
"null"
],
"properties": {},
"title": "Stream Maps",
"description": "Config object for stream maps capability. For more information check out [Stream Maps](https://sdk.meltano.com/en/latest/stream_maps.html)."
},
"stream_map_config": {
"type": [
"object",
"null"
],
"properties": {},
"title": "User Stream Map Configuration",
"description": "User-defined config values to be used within map expressions."
},
"faker_config": {
"type": [
"object",
"null"
],
"properties": {
"seed": {
"oneOf": [
{
"type": [
"number"
]
},
{
"type": [
"string"
]
},
{
"type": [
"boolean"
]
},
{
"type": "null"
}
],
"title": "Faker Seed",
"description": "Value to seed the Faker generator for deterministic output: https://faker.readthedocs.io/en/master/#seeding-the-generator"
},
"locale": {
"oneOf": [
{
"type": [
"string"
]
},
{
"type": "array",
"items": {
"type": [
"string"
]
}
},
{
"type": "null"
}
],
"title": "Faker Locale",
"description": "One or more LCID locale strings to produce localized output for: https://faker.readthedocs.io/en/master/#localization"
}
},
"title": "Faker Configuration",
"description": "Config for the [`Faker`](https://faker.readthedocs.io/en/master/) instance variable `fake` used within map expressions. Only applicable if the plugin specifies `faker` as an additional dependency (through the `singer-sdk` `faker` extra or directly)."
},
"flattening_enabled": {
"type": [
"boolean",
"null"
],
"title": "Enable Schema Flattening",
"description": "'True' to enable schema flattening and automatically expand nested properties."
},
"flattening_max_depth": {
"type": [
"integer",
"null"
],
"title": "Max Flattening Depth",
"description": "The max depth to flatten schemas."
}
},
"required": [
"aws_redshift_copy_role_arn",
"s3_bucket"
],
"$schema": "https://json-schema.org/draft/2020-12/schema"
}
}
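Since stream_maps and stream_map_config appear in every plugin in this refresh, a hedged illustration of how they compose in a config may help; the customers stream and its email property are hypothetical, and the expression syntax follows the Stream Maps documentation at https://sdk.meltano.com/en/latest/stream_maps.html.

```json
{
  "stream_maps": {
    "customers": {
      "email": null,
      "email_domain": "email.split('@')[-1]",
      "email_hash": "md5(config['hash_seed'] + email)"
    }
  },
  "stream_map_config": {
    "hash_seed": "01AWZh7A6DzGm6iJZZ2T"
  }
}
```

Per the SDK docs, setting a property to null drops it from the stream, while new keys add derived properties computed from the map expression.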
Testing plugin
Auto-generated README.md
Setting | Required | Default | Description |
---|---|---|---|
api_key | True | None | The token to authenticate against the API service |
start_date | False | None | The earliest record date to sync |
dc | True | None | Your Mailchimp DC |
stream_maps | False | None | Config object for stream maps capability. For more information check out Stream Maps. |
stream_map_config | False | None | User-defined config values to be used within map expressions. |
flattening_enabled | False | None | 'True' to enable schema flattening and automatically expand nested properties. |
flattening_max_depth | False | None | The max depth to flatten schemas. |
A full list of supported settings and capabilities is available by running: tap-mailchimp --about
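A minimal, hypothetical config.json for tap-mailchimp using only the settings above (the key and data-center values are placeholders; api_key and dc are the required settings):

```json
{
  "api_key": "0123456789abcdef0123456789abcdef-us19",
  "dc": "us19",
  "start_date": "2024-01-01T00:00:00Z"
}
```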
Version info
tap-mailchimp v0.0.1, Meltano SDK v0.24.0
Usage info
melty-bot % tap-mailchimp --help
Usage: tap-mailchimp [OPTIONS]
Execute the Singer tap.
Options:
--state PATH Use a bookmarks file for incremental replication.
--catalog PATH Use a Singer catalog file with the tap.
--test TEXT Use --test to sync a single record for each
stream. Use --test=schema to test schema output
without syncing records.
--discover Run the tap in discovery mode.
--config TEXT Configuration file location or 'ENV' to use
environment variables.
--format [json|markdown] Specify output style for --about
--about Display package metadata and settings.
--version Display the package version.
--help Show this message and exit.
Detected capabilities
- ✅ 'discover'
- ✅ 'catalog'
- ✅ 'state'
- ✅ 'about'
JSON Metadata
{
"name": "tap-mailchimp",
"description": "Mailchimp tap class.",
"version": "0.0.1",
"sdk_version": "0.24.0",
"capabilities": [
"catalog",
"state",
"discover",
"about",
"stream-maps",
"schema-flattening"
],
"settings": {
"type": "object",
"properties": {
"api_key": {
"type": [
"string"
],
"description": "The token to authenticate against the API service",
"secret": true,
"writeOnly": true
},
"start_date": {
"type": [
"string",
"null"
],
"format": "date-time",
"description": "The earliest record date to sync"
},
"dc": {
"type": [
"string"
],
"description": "Your Mailchimp DC"
},
"stream_maps": {
"type": [
"object",
"null"
],
"properties": {},
"description": "Config object for stream maps capability. For more information check out [Stream Maps](https://sdk.meltano.com/en/latest/stream_maps.html)."
},
"stream_map_config": {
"type": [
"object",
"null"
],
"properties": {},
"description": "User-defined config values to be used within map expressions."
},
"flattening_enabled": {
"type": [
"boolean",
"null"
],
"description": "'True' to enable schema flattening and automatically expand nested properties."
},
"flattening_max_depth": {
"type": [
"integer",
"null"
],
"description": "The max depth to flatten schemas."
}
},
"required": [
"api_key",
"dc"
]
}
}
Discovered streams
campaigns
lists
lists_members
reports_email_activity
reports_sent_to
reports_unsubscribes
Testing plugin
Auto-generated README.md
Setting | Required | Default | Description |
---|---|---|---|
credentials_path | False | None | The path to a gcp credentials json file. |
credentials_json | False | None | A JSON string of your service account JSON file. |
project | True | None | The target GCP project to materialize data into. |
dataset | True | None | The target dataset to materialize data into. |
location | False | US | The target dataset/bucket location to materialize data into. |
batch_size | False | 500 | The maximum number of rows to send in a single batch or commit. |
fail_fast | False | 1 | Fail the entire load job if any row fails to insert. |
timeout | False | 600 | Default timeout for batch_job and gcs_stage derived LoadJobs. |
denormalized | False | 0 | Determines whether to denormalize the data before writing to BigQuery. A false value will write data using a fixed JSON column based schema, while a true value will write data using a dynamic schema derived from the tap. |
method | True | storage_write_api | The method to use for writing to BigQuery. |
generate_view | False | 0 | Determines whether to generate a view based on the SCHEMA message parsed from the tap. Only valid if denormalized=false meaning you are using the fixed JSON column based schema. |
bucket | False | None | The GCS bucket to use for staging data. Only used if method is gcs_stage. |
partition_granularity | False | month | The granularity of the partitioning strategy. Defaults to month. |
partition_expiration_days | False | None | If set for date- or timestamp-type partitions, the partition will expire that many days after the date it represents. |
cluster_on_key_properties | False | 0 | Determines whether to cluster on the key properties from the tap. Defaults to false. When false, clustering will be based on _sdc_batched_at instead. |
column_name_transforms | False | None | Accepts a JSON object of options with boolean values to enable them. The available options are quote (quote columns in DDL), lower (lowercase column names), add_underscore_when_invalid (add underscore if column starts with digit), and snake_case (convert to snake case naming). For fixed schema, this transform only applies to the generated view if enabled. |
options | False | None | Accepts a JSON object of options with boolean values to enable them. These are more advanced options that shouldn't need tweaking but are here for flexibility. |
upsert | False | 0 | Determines if we should upsert. Defaults to false. A value of true will write to a temporary table and then merge into the target table (upsert). This requires the target table to be unique on the key properties. A value of false will write to the target table directly (append). A value of an array of strings will evaluate the strings in order using fnmatch. At the end of the array, the value of the last match will be used. If not matched, the default value is false (append). |
overwrite | False | 0 | Determines if the target table should be overwritten on load. Defaults to false. A value of true will write to a temporary table and then overwrite the target table inside a transaction (so it is safe). A value of false will write to the target table directly (append). A value of an array of strings will evaluate the strings in order using fnmatch. At the end of the array, the value of the last match will be used. If not matched, the default value is false. This is mutually exclusive with the upsert option. If both are set, upsert will take precedence. |
dedupe_before_upsert | False | 0 | This option is only used if upsert is enabled for a stream. The selection criteria for the stream's candidacy is the same as upsert. If the stream is marked for deduping before upsert, we will create a _session scoped temporary table during the merge transaction to dedupe the ingested records. This is useful for streams that are not unique on the key properties during an ingest but are unique in the source system. Data lake ingestion is often a good example of this where the same unique record may exist in the lake at different points in time from different extracts. |
schema_resolver_version | False | 1 | The version of the schema resolver to use. Defaults to 1. Version 2 uses JSON as a fallback during denormalization. This only has an effect if denormalized=true |
stream_maps | False | None | Config object for stream maps capability. For more information check out Stream Maps. |
stream_map_config | False | None | User-defined config values to be used within map expressions. |
flattening_enabled | False | None | 'True' to enable schema flattening and automatically expand nested properties. |
flattening_max_depth | False | None | The max depth to flatten schemas. |
A full list of supported settings and capabilities is available by running: target-bigquery --about
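A hypothetical config.json for target-bigquery assembled from the settings above; the project, dataset, file path, and stream patterns are placeholders, and project, dataset, and method are the required settings. The array form of upsert uses fnmatch patterns against stream names, as described in the table.

```json
{
  "project": "example-gcp-project",
  "dataset": "raw_singer",
  "location": "US",
  "method": "storage_write_api",
  "credentials_path": "/secrets/service-account.json",
  "denormalized": true,
  "batch_size": 500,
  "partition_granularity": "day",
  "cluster_on_key_properties": true,
  "upsert": ["orders*", "customers"]
}
```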
Version info
target-bigquery v[could not be detected], Meltano SDK v0.22.1
Usage info
melty-bot % target-bigquery --help
Usage: target-bigquery [OPTIONS]
Execute the Singer target.
Options:
--input FILENAME A path to read messages from instead of from
standard in.
--config TEXT Configuration file location or 'ENV' to use
environment variables.
--format [json|markdown] Specify output style for --about
--about Display package metadata and settings.
--version Display the package version.
--help Show this message and exit.
Detected capabilities
- ❌ 'discover'
- ❌ 'catalog'
- ❌ 'properties'
- ❌ 'state'
- ✅ 'about'
JSON Metadata
{
"name": "target-bigquery",
"description": "Target for BigQuery.",
"version": "[could not be detected]",
"sdk_version": "0.22.1",
"capabilities": [
"about",
"stream-maps",
"schema-flattening"
],
"settings": {
"type": "object",
"properties": {
"credentials_path": {
"type": [
"string",
"null"
],
"description": "The path to a gcp credentials json file."
},
"credentials_json": {
"type": [
"string",
"null"
],
"description": "A JSON string of your service account JSON file."
},
"project": {
"type": [
"string"
],
"description": "The target GCP project to materialize data into."
},
"dataset": {
"type": [
"string"
],
"description": "The target dataset to materialize data into."
},
"location": {
"type": [
"string",
"null"
],
"default": "US",
"description": "The target dataset/bucket location to materialize data into."
},
"batch_size": {
"type": [
"integer",
"null"
],
"default": 500,
"description": "The maximum number of rows to send in a single batch or commit."
},
"fail_fast": {
"type": [
"boolean",
"null"
],
"default": true,
"description": "Fail the entire load job if any row fails to insert."
},
"timeout": {
"type": [
"integer",
"null"
],
"default": 600,
"description": "Default timeout for batch_job and gcs_stage derived LoadJobs."
},
"denormalized": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "Determines whether to denormalize the data before writing to BigQuery. A false value will write data using a fixed JSON column based schema, while a true value will write data using a dynamic schema derived from the tap."
},
"method": {
"type": "string",
"enum": [
"storage_write_api",
"batch_job",
"gcs_stage",
"streaming_insert"
],
"default": "storage_write_api",
"description": "The method to use for writing to BigQuery."
},
"generate_view": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "Determines whether to generate a view based on the SCHEMA message parsed from the tap. Only valid if denormalized=false meaning you are using the fixed JSON column based schema."
},
"bucket": {
"type": [
"string",
"null"
],
"description": "The GCS bucket to use for staging data. Only used if method is gcs_stage."
},
"partition_granularity": {
"type": [
"string",
"null"
],
"enum": [
"year",
"month",
"day",
"hour"
],
"default": "month",
"description": "The granularity of the partitioning strategy. Defaults to month."
},
"partition_expiration_days": {
"type": [
"integer",
"null"
],
"description": "If set for date- or timestamp-type partitions, the partition will expire that many days after the date it represents."
},
"cluster_on_key_properties": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "Determines whether to cluster on the key properties from the tap. Defaults to false. When false, clustering will be based on _sdc_batched_at instead."
},
"column_name_transforms": {
"type": [
"object",
"null"
],
"properties": {
"lower": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "Lowercase column names"
},
"quote": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "Quote columns during DDL generation"
},
"add_underscore_when_invalid": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "Add an underscore when a column starts with a digit"
},
"snake_case": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "Convert columns to snake case"
},
"replace_period_with_underscore": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "Convert periods to underscores"
}
},
"description": "Accepts a JSON object of options with boolean values to enable them. The available options are `quote` (quote columns in DDL), `lower` (lowercase column names), `add_underscore_when_invalid` (add underscore if column starts with digit), and `snake_case` (convert to snake case naming). For fixed schema, this transform only applies to the generated view if enabled."
},
"options": {
"type": [
"object",
"null"
],
"properties": {
"storage_write_batch_mode": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "By default, we use the default stream (Committed mode) in the storage_write_api load method which results in streaming records which are immediately available and is generally fastest. If this is set to true, we will use the application created streams (Committed mode) to transactionally batch data on STATE messages and at end of pipe."
},
"process_pool": {
"type": [
"boolean",
"null"
],
"default": false,
"description": "By default we use an autoscaling threadpool to write to BigQuery. If set to true, we will use a process pool."
},
"max_workers": {
"type": [
"integer",
"null"
],
"description": "By default, each sink type has a preconfigured max worker pool limit. This sets an override for maximum number of workers in the pool."
}
},
"description": "Accepts a JSON object of options with boolean values to enable them. These are more advanced options that shouldn't need tweaking but are here for flexibility."
},
"upsert": {
"anyOf": [
{
"type": "boolean"
},
{
"type": "array",
"items": {
"type": "string"
}
},
"null"
],
"default": false,
"description": "Determines if we should upsert. Defaults to false. A value of true will write to a temporary table and then merge into the target table (upsert). This requires the target table to be unique on the key properties. A value of false will write to the target table directly (append). A value of an array of strings will evaluate the strings in order using fnmatch. At the end of the array, the value of the last match will be used. If not matched, the default value is false (append)."
},
"overwrite": {
"anyOf": [
{
"type": "boolean"
},
{
"type": "array",
"items": {
"type": "string"
}
},
"null"
],
"default": false,
"description": "Determines if the target table should be overwritten on load. Defaults to false. A value of true will write to a temporary table and then overwrite the target table inside a transaction (so it is safe). A value of false will write to the target table directly (append). A value of an array of strings will evaluate the strings in order using fnmatch. At the end of the array, the value of the last match will be used. If not matched, the default value is false. This is mutually exclusive with the `upsert` option. If both are set, `upsert` will take precedence."
},
"dedupe_before_upsert": {
"anyOf": [
{
"type": "boolean"
},
{
"type": "array",
"items": {
"type": "string"
}
},
"null"
],
"default": false,
"description": "This option is only used if `upsert` is enabled for a stream. The selection criteria for the stream's candidacy is the same as upsert. If the stream is marked for deduping before upsert, we will create a _session scoped temporary table during the merge transaction to dedupe the ingested records. This is useful for streams that are not unique on the key properties during an ingest but are unique in the source system. Data lake ingestion is often a good example of this where the same unique record may exist in the lake at different points in time from different extracts."
},
"schema_resolver_version": {
"type": [
"integer",
"null"
],
"default": 1,
"description": "The version of the schema resolver to use. Defaults to 1. Version 2 uses JSON as a fallback during denormalization. This only has an effect if denormalized=true",
"enum": [
1,
2
]
},
"stream_maps": {
"type": [
"object",
"null"
],
"properties": {},
"description": "Config object for stream maps capability. For more information check out [Stream Maps](https://sdk.meltano.com/en/latest/stream_maps.html)."
},
"stream_map_config": {
"type": [
"object",
"null"
],
"properties": {},
"description": "User-defined config values to be used within map expressions."
},
"flattening_enabled": {
"type": [
"boolean",
"null"
],
"description": "'True' to enable schema flattening and automatically expand nested properties."
},
"flattening_max_depth": {
"type": [
"integer",
"null"
],
"description": "The max depth to flatten schemas."
}
},
"required": [
"project",
"dataset",
"method"
]
}
}
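To show how the nested column_name_transforms and options objects from the metadata above compose, here is a hedged fragment that could be merged into the target-bigquery config sketched earlier; whether these toggles help depends on the workload.

```json
{
  "column_name_transforms": {
    "lower": true,
    "snake_case": true,
    "add_underscore_when_invalid": true
  },
  "options": {
    "storage_write_batch_mode": true,
    "max_workers": 8
  }
}
```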
Testing plugin
Version info
Usage info
Detected capabilities
JSON Metadata
{
"name": "tap-totango",
"description": "totango tap class.",
"version": "0.5.0",
"sdk_version": "0.27.0",
"capabilities": [
"catalog",
"state",
"discover",
"about",
"stream-maps",
"schema-flattening"
],
"settings": {
"type": "object",
"properties": {
"api_url": {
"type": [
"string"
],
"default": "https://api.totango.com",
"description": "The url for the API services. https://api.totango.com is for US services, whereas https://api-eu1.totango.com is for EU services.",
"enum": [
"https://api.totango.com",
"https://api-eu1.totango.com "
]
},
"auth_token": {
"type": [
"string"
],
"description": "The token to authenticate against the API service",
"secret": true,
"writeOnly": true
},
"events_terms": {
"type": "array",
"items": {
"type": "object",
"properties": {
"type": {
"type": [
"string"
]
}
},
"required": [
"type"
],
"additionalProperties": true
},
"default": [],
"description": "An array containing filter conditions to use for the events stream search.",
"examples": [
[
{
"type": "event_property",
"name": "event_type",
"eq": "note"
}
],
[
{
"type": "or",
"or": [
{
"type": "event_property",
"name": "event_type",
"eq": "note"
},
{
"type": "event_property",
"name": "event_type",
"eq": "campaign_touch"
}
]
}
],
[
{
"type": "date",
"term": "date",
"joker": "yesterday"
},
{
"type": "or",
"or": [
{
"type": "event_property",
"name": "event_type",
"eq": "note"
},
{
"type": "event_property",
"name": "event_type",
"eq": "campaign_touch"
}
]
}
],
[
{
"type": "date",
"term": "date",
"gte": 1587859200000
},
{
"type": "event_property",
"name": "event_type",
"eq": "note"
}
]
]
},
"events_count": {
"type": [
"integer"
],
"default": 1000,
"description": "The maximum number of accounts to return in the events result set. The max. value for count is 1000."
},
"events_offset": {
"type": [
"integer"
],
"default": 0,
"description": "Page number (0 is the 1st-page)."
},
"account_id": {
"type": [
"string",
"null"
],
"description": "Filter the events stream results for a specific account."
},
"accounts_terms": {
"type": "array",
"items": {
"type": "object",
"properties": {
"type": {
"type": [
"string"
]
}
},
"required": [
"type"
],
"additionalProperties": true
},
"default": [],
"description": "An array containing filter conditions to use for the accounts stream search.",
"examples": [
[
{
"type": "string",
"term": "status_group",
"in_list": [
"paying"
]
}
]
]
},
"accounts_fields": {
"type": "array",
"items": {
"type": "object",
"properties": {
"type": {
"type": [
"string"
]
}
},
"required": [
"type"
],
"additionalProperties": true
},
"default": [],
"description": "List of fields to return as results. Note that the account name and account-id are always returned as well.",
"examples": [
[
{
"type": "string",
"term": "health",
"field_display_name": "Health rank "
},
{
"type": "health_trend",
"field_display_name": "Health last change "
},
{
"type": "string_attribute",
"attribute": "Success Manager",
"field_display_name": "Success Manager"
}
]
]
},
"accounts_count": {
"type": [
"integer",
"null"
],
"default": 100,
"description": "The maximum number of accounts to return in the accounts result set. The max. value for count is 1000."
},
"accounts_offset": {
"type": [
"integer",
"null"
],
"default": 0,
"description": "Record number (0 states \"start at record 0\"). The record size can be defined using the count parameter (and limited to 1000). Tip: To page through results, ask for 1000 records (count: 1000). If you receive 1000 records, assume there\u2019s more, in which case you want to pull the next 1000 records (offset: 1000\u2026then 2000\u2026etc.). Repeat paging until the number of records returned is less than 1000."
},
"accounts_sort_by": {
"type": [
"string",
"null"
],
"default": "display_name",
"description": "Field name to sort the accounts stream results set by."
},
"accounts_sort_order": {
"type": [
"string",
"null"
],
"enum": [
"ASC",
"DESC"
],
"default": "ASC",
"description": "Order to sort the accounts stream results set by."
},
"users_terms": {
"type": "array",
"items": {
"type": "object",
"properties": {
"type": {
"type": [
"string"
]
}
},
"required": [
"type"
],
"additionalProperties": true
},
"default": [],
"description": "An array containing filter conditions to use for the users stream search.",
"examples": [
[
{
"type": "parent_account",
"terms": [
{
"type": "string",
"term": "status_group",
"in_list": [
"paying"
]
}
]
}
]
]
},
"users_fields": {
"type": "array",
"items": {
"type": "object",
"properties": {
"type": {
"type": [
"string"
]
}
},
"required": [
"type"
],
"additionalProperties": true
},
"default": [],
"description": "List of fields to return as results. Note that the user name and id along with account name and account-id are always returned as well.",
"examples": [
[
{
"type": "date",
"term": "last_activity_time",
"field_display_name": "Last activity",
"desc": true
},
{
"type": "named_aggregation",
"aggregation": "total_activities",
"duration": 14,
"field_display_name": "Activities (14d)"
}
]
]
},
"users_count": {
"type": [
"integer",
"null"
],
"default": 1000,
"description": "The maximum number of users to return in the users result set. The max. value for count is 1000."
},
"users_offset": {
"type": [
"integer",
"null"
],
"default": 0,
"description": "Record number (0 states \"start at record 0\"). The record size can be defined using the count parameter (and limited to 1000). Tip: To page through results, ask for 1000 records (count: 1000). If you receive 1000 records, assume there\u2019s more, in which case you want to pull the next 1000 records (offset: 1000\u2026then 2000\u2026etc.). Repeat paging until the number of records returned is less than 1000."
},
"users_sort_by": {
"type": [
"string",
"null"
],
"default": "display_name",
"description": "Field name to sort the users stream results set by."
},
"users_sort_order": {
"type": [
"string",
"null"
],
"enum": [
"ASC",
"DESC"
],
"default": "ASC",
"description": "Order to sort the users stream results set by."
},
"stream_maps": {
"type": [
"object",
"null"
],
"properties": {},
"description": "Config object for stream maps capability. For more information check out [Stream Maps](https://sdk.meltano.com/en/latest/stream_maps.html)."
},
"stream_map_config": {
"type": [
"object",
"null"
],
"properties": {},
"description": "User-defined config values to be used within map expressions."
},
"flattening_enabled": {
"type": [
"boolean",
"null"
],
"description": "'True' to enable schema flattening and automatically expand nested properties."
},
"flattening_max_depth": {
"type": [
"integer",
"null"
],
"description": "The max depth to flatten schemas."
}
},
"required": [
"api_url",
"auth_token",
"events_terms",
"events_count",
"events_offset",
"accounts_terms",
"accounts_fields",
"users_terms",
"users_fields"
]
}
}
Discovered streams
Updates Plugin Definitions