diff --git a/auditbeat/docs/auditbeat-filtering.asciidoc b/auditbeat/docs/auditbeat-filtering.asciidoc
deleted file mode 100644
index 6919965ac540..000000000000
--- a/auditbeat/docs/auditbeat-filtering.asciidoc
+++ /dev/null
@@ -1,10 +0,0 @@
-[[filtering-and-enhancing-data]]
-== Filter and enhance data with processors
-
-++++
-Processors
-++++
-
-include::{libbeat-dir}/processors.asciidoc[]
-
-include::{libbeat-dir}/processors-using.asciidoc[]
diff --git a/auditbeat/docs/auditbeat-general-options.asciidoc b/auditbeat/docs/auditbeat-general-options.asciidoc
deleted file mode 100644
index 7aec17cd6095..000000000000
--- a/auditbeat/docs/auditbeat-general-options.asciidoc
+++ /dev/null
@@ -1,11 +0,0 @@
-[[configuration-general-options]]
-== Configure general settings
-
-++++
-General settings
-++++
-
-You can specify settings in the +{beatname_lc}.yml+ config file to control the
-general behavior of {beatname_uc}.
-
-include::{libbeat-dir}/generalconfig.asciidoc[]
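-
-For example, a minimal sketch of a few common general settings (the values are
-illustrative):
-
-[source,yaml]
-----
-name: "audit-host-01"          # reported in the agent.name field
-tags: ["audit", "production"]  # added to the tags field of every event
-fields:                        # added under fields.* unless fields_under_root is set
-  env: production
-----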
diff --git a/auditbeat/docs/auditbeat-modules-config.asciidoc b/auditbeat/docs/auditbeat-modules-config.asciidoc
deleted file mode 100644
index 2071f156b922..000000000000
--- a/auditbeat/docs/auditbeat-modules-config.asciidoc
+++ /dev/null
@@ -1,35 +0,0 @@
-[id="configuration-{beatname_lc}"]
-== Configure modules
-
-++++
-Modules
-++++
-
-To enable specific modules you add entries to the `auditbeat.modules` list in
-the +{beatname_lc}.yml+ config file. Each entry in the list begins with a dash
-(-) and is followed by settings for that module.
-
-The following example shows a configuration that runs the `auditd` and
-`file_integrity` modules.
-
-[source,yaml]
-----
-auditbeat.modules:
-
-- module: auditd
-  audit_rules: |
-    -w /etc/passwd -p wa -k identity
-    -a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
-
-- module: file_integrity
-  paths:
-  - /bin
-  - /usr/bin
-  - /sbin
-  - /usr/sbin
-  - /etc
-----
-
-The configuration details vary by module. See the
-<<{beatname_lc}-modules,module documentation>> for more detail about configuring
-the available modules.
diff --git a/auditbeat/docs/auditbeat-options.asciidoc b/auditbeat/docs/auditbeat-options.asciidoc
deleted file mode 100644
index 8233f79cee1e..000000000000
--- a/auditbeat/docs/auditbeat-options.asciidoc
+++ /dev/null
@@ -1,56 +0,0 @@
-//////////////////////////////////////////////////////////////////////////
-//// This content is shared by all Auditbeat modules. Make sure you keep the
-//// descriptions generic enough to work for all modules. To include
-//// this file, use:
-////
-//// include::{docdir}/auditbeat-options.asciidoc[]
-////
-//////////////////////////////////////////////////////////////////////////
-
-[id="module-standard-options-{modulename}"]
-[float]
-==== Standard configuration options
-
-You can specify the following options for any {beatname_uc} module.
-
-*`module`*:: The name of the module to run.
-
-ifeval::["{modulename}"=="system"]
-*`datasets`*:: A list of datasets to execute.
-endif::[]
-
-*`enabled`*:: A Boolean value that specifies whether the module is enabled.
-
-ifeval::["{modulename}"=="system"]
-*`period`*:: The frequency at which the datasets check for changes. If a system
-is not reachable, {beatname_uc} returns an error for each period. This setting
-is required. For most datasets, especially `process` and `socket`, a shorter
-period is recommended.
-endif::[]
-
-*`fields`*:: A dictionary of fields that will be sent with the dataset event. This setting
-is optional.
-
-*`tags`*:: A list of tags that will be sent with the dataset event. This setting is
-optional.
-
-*`processors`*:: A list of processors to apply to the data generated by the dataset.
-+
-See <> for information about specifying
-processors in your config.
-
-*`index`*:: If present, this formatted string overrides the index for events from this
-module (for elasticsearch outputs), or sets the `raw_index` field of the event's
-metadata (for other outputs). This string can only refer to the agent name and
-version and the event timestamp; for access to dynamic fields, use
-`output.elasticsearch.index` or a processor.
-+
-Example value: `"%{[agent.name]}-myindex-%{+yyyy.MM.dd}"` might
-expand to +"{beatname_lc}-myindex-2019.12.13"+.
-
-*`keep_null`*:: If this option is set to true, fields with `null` values will be published in
-the output document. By default, `keep_null` is set to `false`.
-
-*`service.name`*:: A name given by the user to the service the data is collected from. It can be
-used for example to identify information collected from nodes of different
-clusters with the same `service.type`.
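-
-As an illustration, the following sketch (paths and values are examples only)
-applies several of these standard options to the `file_integrity` module:
-
-[source,yaml]
-----
-auditbeat.modules:
-- module: file_integrity
-  enabled: true
-  paths:
-  - /etc
-  tags: ["fim"]
-  fields:
-    env: production
-  processors:
-  - drop_fields:
-      fields: ["host.architecture"]
-      ignore_missing: true
-  index: "%{[agent.name]}-fim-%{+yyyy.MM.dd}"
-  keep_null: false
-----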
diff --git a/auditbeat/docs/configuring-howto.asciidoc b/auditbeat/docs/configuring-howto.asciidoc
deleted file mode 100644
index a2de4ee5ed61..000000000000
--- a/auditbeat/docs/configuring-howto.asciidoc
+++ /dev/null
@@ -1,70 +0,0 @@
-[id="configuring-howto-{beatname_lc}"]
-= Configure {beatname_uc}
-
-[partintro]
---
-++++
-Configure
-++++
-
-include::{libbeat-dir}/shared/configuring-intro.asciidoc[]
-
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <<{beatname_lc}-reference-yml>>
-
-After changing configuration settings, you need to restart {beatname_uc} to
-pick up the changes.
-
---
-
-include::./auditbeat-modules-config.asciidoc[]
-
-include::./auditbeat-general-options.asciidoc[]
-
-include::{libbeat-dir}/shared-path-config.asciidoc[]
-
-include::./reload-configuration.asciidoc[]
-
-include::{libbeat-dir}/outputconfig.asciidoc[]
-
-ifndef::no_kerberos[]
-include::{libbeat-dir}/shared-kerberos-config.asciidoc[]
-endif::[]
-
-include::{libbeat-dir}/shared-ssl-config.asciidoc[]
-
-include::{libbeat-dir}/shared-ilm.asciidoc[]
-
-include::{libbeat-dir}/setup-config.asciidoc[]
-
-include::./auditbeat-filtering.asciidoc[]
-
-include::{libbeat-dir}/queueconfig.asciidoc[]
-
-include::{libbeat-dir}/loggingconfig.asciidoc[]
-
-include::{libbeat-dir}/http-endpoint.asciidoc[]
-
-include::{libbeat-dir}/regexp.asciidoc[]
-
-include::{libbeat-dir}/shared-instrumentation.asciidoc[]
-
-include::{libbeat-dir}/shared-feature-flags.asciidoc[]
-
-include::{libbeat-dir}/reference-yml.asciidoc[]
diff --git a/auditbeat/docs/faq-ulimit.asciidoc b/auditbeat/docs/faq-ulimit.asciidoc
deleted file mode 100644
index e234d1c9d958..000000000000
--- a/auditbeat/docs/faq-ulimit.asciidoc
+++ /dev/null
@@ -1,28 +0,0 @@
-[[ulimit]]
-=== {beatname_uc} fails to watch folders because too many files are open
-
-Because of the way file monitoring is implemented on macOS, you may see a
-warning similar to the following:
-
-[source,shell]
-----
-eventreader_fsnotify.go:42: WARN [audit.file] Failed to watch /usr/bin: too many
-open files (check the max number of open files allowed with 'ulimit -a')
-----
-
-To resolve this issue, run {beatname_uc} with the `ulimit` set to a larger
-value, for example:
-
-["source","sh",subs="attributes"]
-----
-sudo sh -c 'ulimit -n 8192 && ./{beatname_lc} -e'
-----
-
-Or:
-
-["source","sh",subs="attributes"]
-----
-sudo su
-ulimit -n 8192
-./{beatname_lc} -e
-----
diff --git a/auditbeat/docs/faq.asciidoc b/auditbeat/docs/faq.asciidoc
deleted file mode 100644
index d0f4fbe8235e..000000000000
--- a/auditbeat/docs/faq.asciidoc
+++ /dev/null
@@ -1,12 +0,0 @@
-[[faq]]
-== Common problems
-
-This section describes common problems you might encounter with
-{beatname_uc}. Also check out the
-https://discuss.elastic.co/c/beats/{beatname_lc}[{beatname_uc} discussion forum].
-
-include::./faq-ulimit.asciidoc[]
-
-include::{libbeat-dir}/faq-limit-bandwidth.asciidoc[]
-
-include::{libbeat-dir}/shared-faq.asciidoc[]
diff --git a/auditbeat/docs/fields.asciidoc b/auditbeat/docs/fields.asciidoc
deleted file mode 100644
index 9eee5f008fc1..000000000000
--- a/auditbeat/docs/fields.asciidoc
+++ /dev/null
@@ -1,19467 +0,0 @@
-
-////
-This file is generated! See _meta/fields.yml and scripts/generate_fields_docs.py
-////
-
-:edit_url:
-
-[[exported-fields]]
-= Exported fields
-
-[partintro]
-
---
-This document describes the fields that are exported by Auditbeat. They are
-grouped in the following categories:
-
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-
---
-[[exported-fields-auditd]]
-== Auditd fields
-
-These are the fields generated by the auditd module.
-
-
-
-*`user.auid`*::
-+
---
-type: alias
-
-alias to: user.audit.id
-
---
-
-*`user.uid`*::
-+
---
-type: alias
-
-alias to: user.id
-
---
-
-*`user.fsuid`*::
-+
---
-type: alias
-
-alias to: user.filesystem.id
-
---
-
-*`user.suid`*::
-+
---
-type: alias
-
-alias to: user.saved.id
-
---
-
-*`user.gid`*::
-+
---
-type: alias
-
-alias to: user.group.id
-
---
-
-*`user.sgid`*::
-+
---
-type: alias
-
-alias to: user.saved.group.id
-
---
-
-*`user.fsgid`*::
-+
---
-type: alias
-
-alias to: user.filesystem.group.id
-
---
-
-[float]
-=== name_map
-
-If `resolve_ids` is set to true in the configuration then `name_map` will contain a mapping of uid field names to the resolved name (e.g. auid -> root).
-
-
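-For example, a configuration sketch that enables ID resolution for the
-`auditd` module (the audit rule shown is illustrative):
-
-[source,yaml]
-----
-auditbeat.modules:
-- module: auditd
-  resolve_ids: true
-  audit_rules: |
-    -w /etc/passwd -p wa -k identity
-----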
-
-*`user.name_map.auid`*::
-+
---
-type: alias
-
-alias to: user.audit.name
-
---
-
-*`user.name_map.uid`*::
-+
---
-type: alias
-
-alias to: user.name
-
---
-
-*`user.name_map.fsuid`*::
-+
---
-type: alias
-
-alias to: user.filesystem.name
-
---
-
-*`user.name_map.suid`*::
-+
---
-type: alias
-
-alias to: user.saved.name
-
---
-
-*`user.name_map.gid`*::
-+
---
-type: alias
-
-alias to: user.group.name
-
---
-
-*`user.name_map.sgid`*::
-+
---
-type: alias
-
-alias to: user.saved.group.name
-
---
-
-*`user.name_map.fsgid`*::
-+
---
-type: alias
-
-alias to: user.filesystem.group.name
-
---
-
-[float]
-=== selinux
-
-The SELinux identity of the actor.
-
-
-*`user.selinux.user`*::
-+
---
-account submitted for authentication
-
-type: keyword
-
---
-
-*`user.selinux.role`*::
-+
---
-user's SELinux role
-
-type: keyword
-
---
-
-*`user.selinux.domain`*::
-+
---
-The actor's SELinux domain or type.
-
-type: keyword
-
---
-
-*`user.selinux.level`*::
-+
---
-The actor's SELinux level.
-
-type: keyword
-
-example: s0
-
---
-
-*`user.selinux.category`*::
-+
---
-The actor's SELinux category or compartments.
-
-type: keyword
-
---
-
-[float]
-=== process
-
-Process attributes.
-
-
-*`process.cwd`*::
-+
---
-The current working directory.
-
-type: alias
-
-alias to: process.working_directory
-
---
-
-[float]
-=== source
-
-Source that triggered the event.
-
-
-*`source.path`*::
-+
---
-This is the path associated with a unix socket.
-
-type: keyword
-
---
-
-[float]
-=== destination
-
-Destination address that triggered the event.
-
-
-*`destination.path`*::
-+
---
-This is the path associated with a unix socket.
-
-type: keyword
-
---
-
-
-*`auditd.message_type`*::
-+
---
-The audit message type (e.g. syscall or apparmor_denied).
-
-
-type: keyword
-
-example: syscall
-
---
-
-*`auditd.sequence`*::
-+
---
-The sequence number of the event as assigned by the kernel. Sequence numbers are stored as a uint32 in the kernel and can rollover.
-
-
-type: long
-
---
-
-*`auditd.session`*::
-+
---
-The session ID assigned to a login. All events related to a login session will have the same value.
-
-
-type: keyword
-
---
-
-*`auditd.result`*::
-+
---
-The result of the audited operation (success/fail).
-
-type: keyword
-
-example: success or fail
-
---
-
-
-[float]
-=== actor
-
-The actor is the user that triggered the audit event.
-
-
-*`auditd.summary.actor.primary`*::
-+
---
-The primary identity of the actor. This is the actor's original login ID. It will not change even if the user changes to another account.
-
-
-type: keyword
-
---
-
-*`auditd.summary.actor.secondary`*::
-+
---
-The secondary identity of the actor. This is typically the same as the primary, except for when the user has used `su`.
-
-type: keyword
-
---
-
-[float]
-=== object
-
-This is the thing or object being acted upon in the event.
-
-
-
-*`auditd.summary.object.type`*::
-+
---
-A description of what the "thing" is (e.g. file, socket, user-session).
-
-
-type: keyword
-
---
-
-*`auditd.summary.object.primary`*::
-+
---
-
-
-type: keyword
-
---
-
-*`auditd.summary.object.secondary`*::
-+
---
-
-
-type: keyword
-
---
-
-*`auditd.summary.how`*::
-+
---
-This describes how the action was performed. Usually this is the exe or command that was being executed when the event was triggered.
-
-
-type: keyword
-
---
-
-[float]
-=== paths
-
-List of paths associated with the event.
-
-
-*`auditd.paths.inode`*::
-+
---
-inode number
-
-type: keyword
-
---
-
-*`auditd.paths.dev`*::
-+
---
-device name as found in /dev
-
-type: keyword
-
---
-
-*`auditd.paths.obj_user`*::
-+
---
-
-
-type: keyword
-
---
-
-*`auditd.paths.obj_role`*::
-+
---
-
-
-type: keyword
-
---
-
-*`auditd.paths.obj_domain`*::
-+
---
-
-
-type: keyword
-
---
-
-*`auditd.paths.obj_level`*::
-+
---
-
-
-type: keyword
-
---
-
-*`auditd.paths.objtype`*::
-+
---
-
-
-type: keyword
-
---
-
-*`auditd.paths.ouid`*::
-+
---
-file owner user ID
-
-type: keyword
-
---
-
-*`auditd.paths.rdev`*::
-+
---
-the device identifier (special files only)
-
-type: keyword
-
---
-
-*`auditd.paths.nametype`*::
-+
---
-kind of file operation being referenced
-
-type: keyword
-
---
-
-*`auditd.paths.ogid`*::
-+
---
-file owner group ID
-
-type: keyword
-
---
-
-*`auditd.paths.item`*::
-+
---
-which item is being recorded
-
-type: keyword
-
---
-
-*`auditd.paths.mode`*::
-+
---
-mode flags on a file
-
-type: keyword
-
---
-
-*`auditd.paths.name`*::
-+
---
-file name in avcs
-
-type: keyword
-
---
-
-[float]
-=== data
-
-The data from the audit messages.
-
-
-*`auditd.data.action`*::
-+
---
-netfilter packet disposition
-
-type: keyword
-
---
-
-*`auditd.data.minor`*::
-+
---
-device minor number
-
-type: keyword
-
---
-
-*`auditd.data.acct`*::
-+
---
-a user's account name
-
-type: keyword
-
---
-
-*`auditd.data.addr`*::
-+
---
-the remote address that the user is connecting from
-
-type: keyword
-
---
-
-*`auditd.data.cipher`*::
-+
---
-name of crypto cipher selected
-
-type: keyword
-
---
-
-*`auditd.data.id`*::
-+
---
-during account changes
-
-type: keyword
-
---
-
-*`auditd.data.entries`*::
-+
---
-number of entries in the netfilter table
-
-type: keyword
-
---
-
-*`auditd.data.kind`*::
-+
---
-server or client in crypto operation
-
-type: keyword
-
---
-
-*`auditd.data.ksize`*::
-+
---
-key size for crypto operation
-
-type: keyword
-
---
-
-*`auditd.data.spid`*::
-+
---
-sent process ID
-
-type: keyword
-
---
-
-*`auditd.data.arch`*::
-+
---
-the elf architecture flags
-
-type: keyword
-
---
-
-*`auditd.data.argc`*::
-+
---
-the number of arguments to an execve syscall
-
-type: keyword
-
---
-
-*`auditd.data.major`*::
-+
---
-device major number
-
-type: keyword
-
---
-
-*`auditd.data.unit`*::
-+
---
-systemd unit
-
-type: keyword
-
---
-
-*`auditd.data.table`*::
-+
---
-netfilter table name
-
-type: keyword
-
---
-
-*`auditd.data.terminal`*::
-+
---
-terminal name the user is running programs on
-
-type: keyword
-
---
-
-*`auditd.data.grantors`*::
-+
---
-pam modules approving the action
-
-type: keyword
-
---
-
-*`auditd.data.direction`*::
-+
---
-direction of crypto operation
-
-type: keyword
-
---
-
-*`auditd.data.op`*::
-+
---
-the operation being performed that is audited
-
-type: keyword
-
---
-
-*`auditd.data.tty`*::
-+
---
-tty device the user is running programs on
-
-type: keyword
-
---
-
-*`auditd.data.syscall`*::
-+
---
-syscall number in effect when the event occurred
-
-type: keyword
-
---
-
-*`auditd.data.data`*::
-+
---
-TTY text
-
-type: keyword
-
---
-
-*`auditd.data.family`*::
-+
---
-netfilter protocol
-
-type: keyword
-
---
-
-*`auditd.data.mac`*::
-+
---
-crypto MAC algorithm selected
-
-type: keyword
-
---
-
-*`auditd.data.pfs`*::
-+
---
-perfect forward secrecy method
-
-type: keyword
-
---
-
-*`auditd.data.items`*::
-+
---
-the number of path records in the event
-
-type: keyword
-
---
-
-*`auditd.data.a0`*::
-+
---
-
-
-type: keyword
-
---
-
-*`auditd.data.a1`*::
-+
---
-
-
-type: keyword
-
---
-
-*`auditd.data.a2`*::
-+
---
-
-
-type: keyword
-
---
-
-*`auditd.data.a3`*::
-+
---
-
-
-type: keyword
-
---
-
-*`auditd.data.hostname`*::
-+
---
-the hostname that the user is connecting from
-
-type: keyword
-
---
-
-*`auditd.data.lport`*::
-+
---
-local network port
-
-type: keyword
-
---
-
-*`auditd.data.rport`*::
-+
---
-remote port number
-
-type: keyword
-
---
-
-*`auditd.data.exit`*::
-+
---
-syscall exit code
-
-type: keyword
-
---
-
-*`auditd.data.fp`*::
-+
---
-crypto key fingerprint
-
-type: keyword
-
---
-
-*`auditd.data.laddr`*::
-+
---
-local network address
-
-type: keyword
-
---
-
-*`auditd.data.sport`*::
-+
---
-local port number
-
-type: keyword
-
---
-
-*`auditd.data.capability`*::
-+
---
-posix capabilities
-
-type: keyword
-
---
-
-*`auditd.data.nargs`*::
-+
---
-the number of arguments to a socket call
-
-type: keyword
-
---
-
-*`auditd.data.new-enabled`*::
-+
---
-new TTY audit enabled setting
-
-type: keyword
-
---
-
-*`auditd.data.audit_backlog_limit`*::
-+
---
-audit system's backlog queue size
-
-type: keyword
-
---
-
-*`auditd.data.dir`*::
-+
---
-directory name
-
-type: keyword
-
---
-
-*`auditd.data.cap_pe`*::
-+
---
-process effective capability map
-
-type: keyword
-
---
-
-*`auditd.data.model`*::
-+
---
-security model being used for virt
-
-type: keyword
-
---
-
-*`auditd.data.new_pp`*::
-+
---
-new process permitted capability map
-
-type: keyword
-
---
-
-*`auditd.data.old-enabled`*::
-+
---
-present TTY audit enabled setting
-
-type: keyword
-
---
-
-*`auditd.data.oauid`*::
-+
---
-object's login user ID
-
-type: keyword
-
---
-
-*`auditd.data.old`*::
-+
---
-old value
-
-type: keyword
-
---
-
-*`auditd.data.banners`*::
-+
---
-banners used on printed page
-
-type: keyword
-
---
-
-*`auditd.data.feature`*::
-+
---
-kernel feature being changed
-
-type: keyword
-
---
-
-*`auditd.data.vm-ctx`*::
-+
---
-the vm's context string
-
-type: keyword
-
---
-
-*`auditd.data.opid`*::
-+
---
-object's process ID
-
-type: keyword
-
---
-
-*`auditd.data.seperms`*::
-+
---
-SELinux permissions being used
-
-type: keyword
-
---
-
-*`auditd.data.seresult`*::
-+
---
-SELinux AVC decision granted/denied
-
-type: keyword
-
---
-
-*`auditd.data.new-rng`*::
-+
---
-device name of rng being added from a vm
-
-type: keyword
-
---
-
-*`auditd.data.old-net`*::
-+
---
-present MAC address assigned to vm
-
-type: keyword
-
---
-
-*`auditd.data.sigev_signo`*::
-+
---
-signal number
-
-type: keyword
-
---
-
-*`auditd.data.ino`*::
-+
---
-inode number
-
-type: keyword
-
---
-
-*`auditd.data.old_enforcing`*::
-+
---
-old MAC enforcement status
-
-type: keyword
-
---
-
-*`auditd.data.old-vcpu`*::
-+
---
-present number of CPU cores
-
-type: keyword
-
---
-
-*`auditd.data.range`*::
-+
---
-user's SELinux range
-
-type: keyword
-
---
-
-*`auditd.data.res`*::
-+
---
-result of the audited operation (success/fail)
-
-type: keyword
-
---
-
-*`auditd.data.added`*::
-+
---
-number of new files detected
-
-type: keyword
-
---
-
-*`auditd.data.fam`*::
-+
---
-socket address family
-
-type: keyword
-
---
-
-*`auditd.data.nlnk-pid`*::
-+
---
-pid of netlink packet sender
-
-type: keyword
-
---
-
-*`auditd.data.subj`*::
-+
---
-lspp subject's context string
-
-type: keyword
-
---
-
-*`auditd.data.a[0-3]`*::
-+
---
-the arguments to a syscall
-
-type: keyword
-
---
-
-*`auditd.data.cgroup`*::
-+
---
-path to cgroup in sysfs
-
-type: keyword
-
---
-
-*`auditd.data.kernel`*::
-+
---
-kernel's version number
-
-type: keyword
-
---
-
-*`auditd.data.ocomm`*::
-+
---
-object's command line name
-
-type: keyword
-
---
-
-*`auditd.data.new-net`*::
-+
---
-MAC address being assigned to vm
-
-type: keyword
-
---
-
-*`auditd.data.permissive`*::
-+
---
-SELinux is in permissive mode
-
-type: keyword
-
---
-
-*`auditd.data.class`*::
-+
---
-resource class assigned to vm
-
-type: keyword
-
---
-
-*`auditd.data.compat`*::
-+
---
-is_compat_task result
-
-type: keyword
-
---
-
-*`auditd.data.fi`*::
-+
---
-file assigned inherited capability map
-
-type: keyword
-
---
-
-*`auditd.data.changed`*::
-+
---
-number of changed files
-
-type: keyword
-
---
-
-*`auditd.data.msg`*::
-+
---
-the payload of the audit record
-
-type: keyword
-
---
-
-*`auditd.data.dport`*::
-+
---
-remote port number
-
-type: keyword
-
---
-
-*`auditd.data.new-seuser`*::
-+
---
-new SELinux user
-
-type: keyword
-
---
-
-*`auditd.data.invalid_context`*::
-+
---
-SELinux context
-
-type: keyword
-
---
-
-*`auditd.data.dmac`*::
-+
---
-remote MAC address
-
-type: keyword
-
---
-
-*`auditd.data.ipx-net`*::
-+
---
-IPX network number
-
-type: keyword
-
---
-
-*`auditd.data.iuid`*::
-+
---
-ipc object's user ID
-
-type: keyword
-
---
-
-*`auditd.data.macproto`*::
-+
---
-ethernet packet type ID field
-
-type: keyword
-
---
-
-*`auditd.data.obj`*::
-+
---
-lspp object context string
-
-type: keyword
-
---
-
-*`auditd.data.ipid`*::
-+
---
-IP datagram fragment identifier
-
-type: keyword
-
---
-
-*`auditd.data.new-fs`*::
-+
---
-file system being added to vm
-
-type: keyword
-
---
-
-*`auditd.data.vm-pid`*::
-+
---
-vm's process ID
-
-type: keyword
-
---
-
-*`auditd.data.cap_pi`*::
-+
---
-process inherited capability map
-
-type: keyword
-
---
-
-*`auditd.data.old-auid`*::
-+
---
-previous auid value
-
-type: keyword
-
---
-
-*`auditd.data.oses`*::
-+
---
-object's session ID
-
-type: keyword
-
---
-
-*`auditd.data.fd`*::
-+
---
-file descriptor number
-
-type: keyword
-
---
-
-*`auditd.data.igid`*::
-+
---
-ipc object's group ID
-
-type: keyword
-
---
-
-*`auditd.data.new-disk`*::
-+
---
-disk being added to vm
-
-type: keyword
-
---
-
-*`auditd.data.parent`*::
-+
---
-the inode number of the parent file
-
-type: keyword
-
---
-
-*`auditd.data.len`*::
-+
---
-length
-
-type: keyword
-
---
-
-*`auditd.data.oflag`*::
-+
---
-open syscall flags
-
-type: keyword
-
---
-
-*`auditd.data.uuid`*::
-+
---
-a UUID
-
-type: keyword
-
---
-
-*`auditd.data.code`*::
-+
---
-seccomp action code
-
-type: keyword
-
---
-
-*`auditd.data.nlnk-grp`*::
-+
---
-netlink group number
-
-type: keyword
-
---
-
-*`auditd.data.cap_fp`*::
-+
---
-file permitted capability map
-
-type: keyword
-
---
-
-*`auditd.data.new-mem`*::
-+
---
-new amount of memory in KB
-
-type: keyword
-
---
-
-*`auditd.data.seperm`*::
-+
---
-SELinux permission being decided on
-
-type: keyword
-
---
-
-*`auditd.data.enforcing`*::
-+
---
-new MAC enforcement status
-
-type: keyword
-
---
-
-*`auditd.data.new-chardev`*::
-+
---
-new character device being assigned to vm
-
-type: keyword
-
---
-
-*`auditd.data.old-rng`*::
-+
---
-device name of rng being removed from a vm
-
-type: keyword
-
---
-
-*`auditd.data.outif`*::
-+
---
-out interface number
-
-type: keyword
-
---
-
-*`auditd.data.cmd`*::
-+
---
-command being executed
-
-type: keyword
-
---
-
-*`auditd.data.hook`*::
-+
---
-netfilter hook that packet came from
-
-type: keyword
-
---
-
-*`auditd.data.new-level`*::
-+
---
-new run level
-
-type: keyword
-
---
-
-*`auditd.data.sauid`*::
-+
---
-sent login user ID
-
-type: keyword
-
---
-
-*`auditd.data.sig`*::
-+
---
-signal number
-
-type: keyword
-
---
-
-*`auditd.data.audit_backlog_wait_time`*::
-+
---
-audit system's backlog wait time
-
-type: keyword
-
---
-
-*`auditd.data.printer`*::
-+
---
-printer name
-
-type: keyword
-
---
-
-*`auditd.data.old-mem`*::
-+
---
-present amount of memory in KB
-
-type: keyword
-
---
-
-*`auditd.data.perm`*::
-+
---
-the file permission being used
-
-type: keyword
-
---
-
-*`auditd.data.old_pi`*::
-+
---
-old process inherited capability map
-
-type: keyword
-
---
-
-*`auditd.data.state`*::
-+
---
-audit daemon configuration resulting state
-
-type: keyword
-
---
-
-*`auditd.data.format`*::
-+
---
-audit log's format
-
-type: keyword
-
---
-
-*`auditd.data.new_gid`*::
-+
---
-new group ID being assigned
-
-type: keyword
-
---
-
-*`auditd.data.tcontext`*::
-+
---
-the target's or object's context string
-
-type: keyword
-
---
-
-*`auditd.data.maj`*::
-+
---
-device major number
-
-type: keyword
-
---
-
-*`auditd.data.watch`*::
-+
---
-file name in a watch record
-
-type: keyword
-
---
-
-*`auditd.data.device`*::
-+
---
-device name
-
-type: keyword
-
---
-
-*`auditd.data.grp`*::
-+
---
-group name
-
-type: keyword
-
---
-
-*`auditd.data.bool`*::
-+
---
-name of SELinux boolean
-
-type: keyword
-
---
-
-*`auditd.data.icmp_type`*::
-+
---
-type of icmp message
-
-type: keyword
-
---
-
-*`auditd.data.new_lock`*::
-+
---
-new value of feature lock
-
-type: keyword
-
---
-
-*`auditd.data.old_prom`*::
-+
---
-network promiscuity flag
-
-type: keyword
-
---
-
-*`auditd.data.acl`*::
-+
---
-access mode of resource assigned to vm
-
-type: keyword
-
---
-
-*`auditd.data.ip`*::
-+
---
-network address of a printer
-
-type: keyword
-
---
-
-*`auditd.data.new_pi`*::
-+
---
-new process inherited capability map
-
-type: keyword
-
---
-
-*`auditd.data.default-context`*::
-+
---
-default MAC context
-
-type: keyword
-
---
-
-*`auditd.data.inode_gid`*::
-+
---
-group ID of the inode's owner
-
-type: keyword
-
---
-
-*`auditd.data.new-log_passwd`*::
-+
---
-new value for TTY password logging
-
-type: keyword
-
---
-
-*`auditd.data.new_pe`*::
-+
---
-new process effective capability map
-
-type: keyword
-
---
-
-*`auditd.data.selected-context`*::
-+
---
-new MAC context assigned to session
-
-type: keyword
-
---
-
-*`auditd.data.cap_fver`*::
-+
---
-file system capabilities version number
-
-type: keyword
-
---
-
-*`auditd.data.file`*::
-+
---
-file name
-
-type: keyword
-
---
-
-*`auditd.data.net`*::
-+
---
-network MAC address
-
-type: keyword
-
---
-
-*`auditd.data.virt`*::
-+
---
-kind of virtualization being referenced
-
-type: keyword
-
---
-
-*`auditd.data.cap_pp`*::
-+
---
-process permitted capability map
-
-type: keyword
-
---
-
-*`auditd.data.old-range`*::
-+
---
-present SELinux range
-
-type: keyword
-
---
-
-*`auditd.data.resrc`*::
-+
---
-resource being assigned
-
-type: keyword
-
---
-
-*`auditd.data.new-range`*::
-+
---
-new SELinux range
-
-type: keyword
-
---
-
-*`auditd.data.obj_gid`*::
-+
---
-group ID of object
-
-type: keyword
-
---
-
-*`auditd.data.proto`*::
-+
---
-network protocol
-
-type: keyword
-
---
-
-*`auditd.data.old-disk`*::
-+
---
-disk being removed from vm
-
-type: keyword
-
---
-
-*`auditd.data.audit_failure`*::
-+
---
-audit system's failure mode
-
-type: keyword
-
---
-
-*`auditd.data.inif`*::
-+
---
-in interface number
-
-type: keyword
-
---
-
-*`auditd.data.vm`*::
-+
---
-virtual machine name
-
-type: keyword
-
---
-
-*`auditd.data.flags`*::
-+
---
-mmap syscall flags
-
-type: keyword
-
---
-
-*`auditd.data.nlnk-fam`*::
-+
---
-netlink protocol number
-
-type: keyword
-
---
-
-*`auditd.data.old-fs`*::
-+
---
-file system being removed from vm
-
-type: keyword
-
---
-
-*`auditd.data.old-ses`*::
-+
---
-previous ses value
-
-type: keyword
-
---
-
-*`auditd.data.seqno`*::
-+
---
-sequence number
-
-type: keyword
-
---
-
-*`auditd.data.fver`*::
-+
---
-file system capabilities version number
-
-type: keyword
-
---
-
-*`auditd.data.qbytes`*::
-+
---
-ipc objects quantity of bytes
-
-type: keyword
-
---
-
-*`auditd.data.seuser`*::
-+
---
-user's SELinux user account
-
-type: keyword
-
---
-
-*`auditd.data.cap_fe`*::
-+
---
-file assigned effective capability map
-
-type: keyword
-
---
-
-*`auditd.data.new-vcpu`*::
-+
---
-new number of CPU cores
-
-type: keyword
-
---
-
-*`auditd.data.old-level`*::
-+
---
-old run level
-
-type: keyword
-
---
-
-*`auditd.data.old_pp`*::
-+
---
-old process permitted capability map
-
-type: keyword
-
---
-
-*`auditd.data.daddr`*::
-+
---
-remote IP address
-
-type: keyword
-
---
-
-*`auditd.data.old-role`*::
-+
---
-present SELinux role
-
-type: keyword
-
---
-
-*`auditd.data.ioctlcmd`*::
-+
---
-The request argument to the ioctl syscall
-
-type: keyword
-
---
-
-*`auditd.data.smac`*::
-+
---
-local MAC address
-
-type: keyword
-
---
-
-*`auditd.data.apparmor`*::
-+
---
-apparmor event information
-
-type: keyword
-
---
-
-*`auditd.data.fe`*::
-+
---
-file assigned effective capability map
-
-type: keyword
-
---
-
-*`auditd.data.perm_mask`*::
-+
---
-file permission mask that triggered a watch event
-
-type: keyword
-
---
-
-*`auditd.data.ses`*::
-+
---
-login session ID
-
-type: keyword
-
---
-
-*`auditd.data.cap_fi`*::
-+
---
-file inherited capability map
-
-type: keyword
-
---
-
-*`auditd.data.obj_uid`*::
-+
---
-user ID of object
-
-type: keyword
-
---
-
-*`auditd.data.reason`*::
-+
---
-text string denoting a reason for the action
-
-type: keyword
-
---
-
-*`auditd.data.list`*::
-+
---
-the audit system's filter list number
-
-type: keyword
-
---
-
-*`auditd.data.old_lock`*::
-+
---
-present value of feature lock
-
-type: keyword
-
---
-
-*`auditd.data.bus`*::
-+
---
-name of subsystem bus a vm resource belongs to
-
-type: keyword
-
---
-
-*`auditd.data.old_pe`*::
-+
---
-old process effective capability map
-
-type: keyword
-
---
-
-*`auditd.data.new-role`*::
-+
---
-new SELinux role
-
-type: keyword
-
---
-
-*`auditd.data.prom`*::
-+
---
-network promiscuity flag
-
-type: keyword
-
---
-
-*`auditd.data.uri`*::
-+
---
-URI pointing to a printer
-
-type: keyword
-
---
-
-*`auditd.data.audit_enabled`*::
-+
---
-audit system's enable/disable status
-
-type: keyword
-
---
-
-*`auditd.data.old-log_passwd`*::
-+
---
-present value for TTY password logging
-
-type: keyword
-
---
-
-*`auditd.data.old-seuser`*::
-+
---
-present SELinux user
-
-type: keyword
-
---
-
-*`auditd.data.per`*::
-+
---
-linux personality
-
-type: keyword
-
---
-
-*`auditd.data.scontext`*::
-+
---
-the subject's context string
-
-type: keyword
-
---
-
-*`auditd.data.tclass`*::
-+
---
-target's object classification
-
-type: keyword
-
---
-
-*`auditd.data.ver`*::
-+
---
-audit daemon's version number
-
-type: keyword
-
---
-
-*`auditd.data.new`*::
-+
---
-value being set in feature
-
-type: keyword
-
---
-
-*`auditd.data.val`*::
-+
---
-generic value associated with the operation
-
-type: keyword
-
---
-
-*`auditd.data.img-ctx`*::
-+
---
-the vm's disk image context string
-
-type: keyword
-
---
-
-*`auditd.data.old-chardev`*::
-+
---
-present character device assigned to vm
-
-type: keyword
-
---
-
-*`auditd.data.old_val`*::
-+
---
-current value of SELinux boolean
-
-type: keyword
-
---
-
-*`auditd.data.success`*::
-+
---
-whether the syscall was successful or not
-
-type: keyword
-
---
-
-*`auditd.data.inode_uid`*::
-+
---
-user ID of the inode's owner
-
-type: keyword
-
---
-
-*`auditd.data.removed`*::
-+
---
-number of deleted files
-
-type: keyword
-
---
-
-
-*`auditd.data.socket.port`*::
-+
---
-The port number.
-
-type: keyword
-
---
-
-*`auditd.data.socket.saddr`*::
-+
---
-The raw socket address structure.
-
-type: keyword
-
---
-
-*`auditd.data.socket.addr`*::
-+
---
-The remote address.
-
-type: keyword
-
---
-
-*`auditd.data.socket.family`*::
-+
---
-The socket family (unix, ipv4, ipv6, netlink).
-
-type: keyword
-
-example: unix
-
---
-
-*`auditd.data.socket.path`*::
-+
---
-This is the path associated with a unix socket.
-
-type: keyword
-
---
-
-*`auditd.messages`*::
-+
---
-An ordered list of the raw messages received from the kernel that were used to construct this document. This field is present if an error occurred processing the data or if `include_raw_message` is set in the config.
-
-
-type: alias
-
-alias to: event.original
-
---
-
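-For example, a configuration sketch that enables the `include_raw_message`
-setting mentioned above (other module options omitted):
-
-[source,yaml]
-----
-auditbeat.modules:
-- module: auditd
-  include_raw_message: true
-----
-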
-*`auditd.warnings`*::
-+
---
-The warnings generated by the Beat during the construction of the event. These are disabled by default and are used for development and debug purposes only.
-
-
-type: alias
-
-alias to: error.message
-
---
-
-[float]
-=== geoip
-
-The geoip fields are defined as a convenience in case you decide to enrich the data using a geoip filter in Logstash or an Elasticsearch geoip ingest processor.
-
-
-
-*`geoip.continent_name`*::
-+
---
-The name of the continent.
-
-
-type: keyword
-
---
-
-*`geoip.city_name`*::
-+
---
-The name of the city.
-
-
-type: keyword
-
---
-
-*`geoip.region_name`*::
-+
---
-The name of the region.
-
-
-type: keyword
-
---
-
-*`geoip.country_iso_code`*::
-+
---
-Country ISO code.
-
-
-type: keyword
-
---
-
-*`geoip.location`*::
-+
---
-The longitude and latitude.
-
-
-type: geo_point
-
---
-
-[[exported-fields-beat-common]]
-== Beat fields
-
-Contains common beat fields available in all event types.
-
-
-
-*`agent.hostname`*::
-+
---
-Deprecated - use agent.name or agent.id to identify an agent.
-
-
-type: alias
-
-alias to: agent.name
-
---
-
-*`beat.timezone`*::
-+
---
-type: alias
-
-alias to: event.timezone
-
---
-
-*`fields`*::
-+
---
-Contains user configurable fields.
-
-
-type: object
-
---
-
-*`beat.name`*::
-+
---
-type: alias
-
-alias to: host.name
-
---
-
-*`beat.hostname`*::
-+
---
-type: alias
-
-alias to: agent.name
-
---
-
-*`timeseries.instance`*::
-+
---
-Time series instance id
-
-type: keyword
-
---
-
-[[exported-fields-cloud]]
-== Cloud provider metadata fields
-
-Metadata from cloud providers added by the add_cloud_metadata processor.
-
-
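-To collect these fields, enable the processor in +{beatname_lc}.yml+. A
-minimal sketch using the processor's default settings:
-
-[source,yaml]
-----
-processors:
-- add_cloud_metadata: ~
-----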
-
-*`cloud.image.id`*::
-+
---
-Image ID for the cloud instance.
-
-
-example: ami-abcd1234
-
---
-
-*`meta.cloud.provider`*::
-+
---
-type: alias
-
-alias to: cloud.provider
-
---
-
-*`meta.cloud.instance_id`*::
-+
---
-type: alias
-
-alias to: cloud.instance.id
-
---
-
-*`meta.cloud.instance_name`*::
-+
---
-type: alias
-
-alias to: cloud.instance.name
-
---
-
-*`meta.cloud.machine_type`*::
-+
---
-type: alias
-
-alias to: cloud.machine.type
-
---
-
-*`meta.cloud.availability_zone`*::
-+
---
-type: alias
-
-alias to: cloud.availability_zone
-
---
-
-*`meta.cloud.project_id`*::
-+
---
-type: alias
-
-alias to: cloud.project.id
-
---
-
-*`meta.cloud.region`*::
-+
---
-type: alias
-
-alias to: cloud.region
-
---
-
-[[exported-fields-common]]
-== Common fields
-
-Contains common fields available in all event types.
-
-
-
-[float]
-=== file
-
-File attributes.
-
-
-*`file.setuid`*::
-+
---
-Set if the file has the `setuid` bit set. Omitted otherwise.
-
-type: boolean
-
-example: True
-
---
-
-*`file.setgid`*::
-+
---
-Set if the file has the `setgid` bit set. Omitted otherwise.
-
-type: boolean
-
-example: True
-
---
-
-*`file.origin`*::
-+
---
-An array of strings describing a possible external origin for this file. For example, the URL it was downloaded from. Only supported in macOS, via the kMDItemWhereFroms attribute. Omitted if origin information is not available.
-
-
-type: keyword
-
---
-
-*`file.origin.text`*::
-+
---
-This is an analyzed field that is useful for full text search on the origin data.
-
-
-type: text
-
---
-
-[float]
-=== selinux
-
-The SELinux identity of the file.
-
-
-*`file.selinux.user`*::
-+
---
-The owner of the object.
-
-type: keyword
-
---
-
-*`file.selinux.role`*::
-+
---
-The object's SELinux role.
-
-type: keyword
-
---
-
-*`file.selinux.domain`*::
-+
---
-The object's SELinux domain or type.
-
-type: keyword
-
---
-
-*`file.selinux.level`*::
-+
---
-The object's SELinux level.
-
-type: keyword
-
-example: s0
-
---
-
-[float]
-=== user
-
-User information.
-
-
-[float]
-=== audit
-
-Audit user information.
-
-
-*`user.audit.id`*::
-+
---
-Audit user ID.
-
-type: keyword
-
---
-
-*`user.audit.name`*::
-+
---
-Audit user name.
-
-type: keyword
-
---
-
-[float]
-=== filesystem
-
-Filesystem user information.
-
-
-*`user.filesystem.id`*::
-+
---
-Filesystem user ID.
-
-type: keyword
-
---
-
-*`user.filesystem.name`*::
-+
---
-Filesystem user name.
-
-type: keyword
-
---
-
-[float]
-=== group
-
-Filesystem group information.
-
-
-*`user.filesystem.group.id`*::
-+
---
-Filesystem group ID.
-
-type: keyword
-
---
-
-*`user.filesystem.group.name`*::
-+
---
-Filesystem group name.
-
-type: keyword
-
---
-
-[float]
-=== saved
-
-Saved user information.
-
-
-*`user.saved.id`*::
-+
---
-Saved user ID.
-
-type: keyword
-
---
-
-*`user.saved.name`*::
-+
---
-Saved user name.
-
-type: keyword
-
---
-
-[float]
-=== group
-
-Saved group information.
-
-
-*`user.saved.group.id`*::
-+
---
-Saved group ID.
-
-type: keyword
-
---
-
-*`user.saved.group.name`*::
-+
---
-Saved group name.
-
-type: keyword
-
---
-
-[[exported-fields-docker-processor]]
-== Docker fields
-
-Docker stats collected from Docker.
-
-
-
-
-*`docker.container.id`*::
-+
---
-type: alias
-
-alias to: container.id
-
---
-
-*`docker.container.image`*::
-+
---
-type: alias
-
-alias to: container.image.name
-
---
-
-*`docker.container.name`*::
-+
---
-type: alias
-
-alias to: container.name
-
---
-
-*`docker.container.labels`*::
-+
---
-Image labels.
-
-
-type: object
-
---
-
-[[exported-fields-ecs]]
-== ECS fields
-
-
-This section defines Elastic Common Schema (ECS) fields—a common set of fields
-to be used when storing event data in {es}.
-
-This is an exhaustive list, and fields listed here are not necessarily used by {beatname_uc}.
-The goal of ECS is to enable and encourage users of {es} to normalize their event data,
-so that they can better analyze, visualize, and correlate the data represented in their events.
-
-See the {ecs-ref}[ECS reference] for more information.
-
-*`@timestamp`*::
-+
---
-Date/time when the event originated.
-This is the date/time extracted from the event, typically representing when the event was generated by the source.
-If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline.
-Required field for all events.
-
-type: date
-
-example: 2016-05-23T08:05:34.853Z
-
-required: True
-
---
-
-*`labels`*::
-+
---
-Custom key/value pairs.
-Can be used to add meta information to events. Should not contain nested objects. All values are stored as keyword.
-Example: `docker` and `k8s` labels.
-
-type: object
-
-example: {"application": "foo-bar", "env": "production"}
-
---
-
-*`message`*::
-+
---
-For log events the message field contains the log message, optimized for viewing in a log viewer.
-For structured logs without an original message field, other fields can be concatenated to form a human-readable summary of the event.
-If multiple messages exist, they can be combined into one message.
-
-type: match_only_text
-
-example: Hello World
-
---
-
-*`tags`*::
-+
---
-List of keywords used to tag each event.
-
-type: keyword
-
-example: ["production", "env2"]
-
---
-
-[float]
-=== agent
-
-The agent fields contain the data about the software entity, if any, that collects, detects, or observes events on a host, or takes measurements on a host.
-Examples include Beats. Agents may also run on observers. ECS agent.* fields shall be populated with details of the agent running on the host or observer where the event happened or the measurement was taken.
-
-
-*`agent.build.original`*::
-+
---
-Extended build information for the agent.
-This field is intended to contain any build information that a data source may provide, no specific formatting is required.
-
-type: keyword
-
-example: metricbeat version 7.6.0 (amd64), libbeat 7.6.0 [6a23e8f8f30f5001ba344e4e54d8d9cb82cb107c built 2020-02-05 23:10:10 +0000 UTC]
-
---
-
-*`agent.ephemeral_id`*::
-+
---
-Ephemeral identifier of this agent (if one exists).
-This id normally changes across restarts, but `agent.id` does not.
-
-type: keyword
-
-example: 8a4f500f
-
---
-
-*`agent.id`*::
-+
---
-Unique identifier of this agent (if one exists).
-Example: For Beats this would be beat.id.
-
-type: keyword
-
-example: 8a4f500d
-
---
-
-*`agent.name`*::
-+
---
-Custom name of the agent.
-This is a name that can be given to an agent. This can be helpful if for example two Filebeat instances are running on the same host but a human readable separation is needed on which Filebeat instance data is coming from.
-If no name is given, the name is often left empty.
-
-type: keyword
-
-example: foo
-
---
-
-*`agent.type`*::
-+
---
-Type of the agent.
-The agent type always stays the same and should be given by the agent used. In case of Filebeat the agent would always be Filebeat also if two Filebeat instances are run on the same machine.
-
-type: keyword
-
-example: filebeat
-
---
-
-*`agent.version`*::
-+
---
-Version of the agent.
-
-type: keyword
-
-example: 6.0.0-rc2
-
---
-
-[float]
-=== as
-
-An autonomous system (AS) is a collection of connected Internet Protocol (IP) routing prefixes under the control of one or more network operators on behalf of a single administrative entity or domain that presents a common, clearly defined routing policy to the internet.
-
-
-*`as.number`*::
-+
---
-Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet.
-
-type: long
-
-example: 15169
-
---
-
-*`as.organization.name`*::
-+
---
-Organization name.
-
-type: keyword
-
-example: Google LLC
-
---
-
-*`as.organization.name.text`*::
-+
---
-type: match_only_text
-
---
-
-[float]
-=== client
-
-A client is defined as the initiator of a network connection for events regarding sessions, connections, or bidirectional flow records.
-For TCP events, the client is the initiator of the TCP connection that sends the SYN packet(s). For other protocols, the client is generally the initiator or requestor in the network transaction. Some systems use the term "originator" to refer to the client in TCP connections. The client fields describe details about the system acting as the client in the network event. Client fields are usually populated in conjunction with server fields. Client fields are generally not populated for packet-level events.
-Client / server representations can add semantic context to an exchange, which is helpful to visualize the data in certain situations. If your context falls in that category, you should still ensure that source and destination are filled appropriately.
-
-
-*`client.address`*::
-+
---
-Some event client addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field.
-Then it should be duplicated to `.ip` or `.domain`, depending on which one it is.
-
-type: keyword
-
---
-
-*`client.as.number`*::
-+
---
-Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet.
-
-type: long
-
-example: 15169
-
---
-
-*`client.as.organization.name`*::
-+
---
-Organization name.
-
-type: keyword
-
-example: Google LLC
-
---
-
-*`client.as.organization.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`client.bytes`*::
-+
---
-Bytes sent from the client to the server.
-
-type: long
-
-example: 184
-
-format: bytes
-
---
-
-*`client.domain`*::
-+
---
-The domain name of the client system.
-This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment.
-
-type: keyword
-
-example: foo.example.com
-
---
-
-*`client.geo.city_name`*::
-+
---
-City name.
-
-type: keyword
-
-example: Montreal
-
---
-
-*`client.geo.continent_code`*::
-+
---
-Two-letter code representing continent's name.
-
-type: keyword
-
-example: NA
-
---
-
-*`client.geo.continent_name`*::
-+
---
-Name of the continent.
-
-type: keyword
-
-example: North America
-
---
-
-*`client.geo.country_iso_code`*::
-+
---
-Country ISO code.
-
-type: keyword
-
-example: CA
-
---
-
-*`client.geo.country_name`*::
-+
---
-Country name.
-
-type: keyword
-
-example: Canada
-
---
-
-*`client.geo.location`*::
-+
---
-Longitude and latitude.
-
-type: geo_point
-
-example: { "lon": -73.614830, "lat": 45.505918 }
-
---
-
-*`client.geo.name`*::
-+
---
-User-defined description of a location, at the level of granularity they care about.
-Could be the name of their data centers, the floor number, if this describes a local physical entity, city names.
-Not typically used in automated geolocation.
-
-type: keyword
-
-example: boston-dc
-
---
-
-*`client.geo.postal_code`*::
-+
---
-Postal code associated with the location.
-Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
-
-type: keyword
-
-example: 94040
-
---
-
-*`client.geo.region_iso_code`*::
-+
---
-Region ISO code.
-
-type: keyword
-
-example: CA-QC
-
---
-
-*`client.geo.region_name`*::
-+
---
-Region name.
-
-type: keyword
-
-example: Quebec
-
---
-
-*`client.geo.timezone`*::
-+
---
-The time zone of the location, such as IANA time zone name.
-
-type: keyword
-
-example: America/Argentina/Buenos_Aires
-
---
-
-*`client.ip`*::
-+
---
-IP address of the client (IPv4 or IPv6).
-
-type: ip
-
---
-
-*`client.mac`*::
-+
---
-MAC address of the client.
-The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen.
-
-type: keyword
-
-example: 00-00-5E-00-53-23
-
---
-
-*`client.nat.ip`*::
-+
---
-Translated IP of source based NAT sessions (e.g. internal client to internet).
-Typically connections traversing load balancers, firewalls, or routers.
-
-type: ip
-
---
-
-*`client.nat.port`*::
-+
---
-Translated port of source based NAT sessions (e.g. internal client to internet).
-Typically connections traversing load balancers, firewalls, or routers.
-
-type: long
-
-format: string
-
---
-
-*`client.packets`*::
-+
---
-Packets sent from the client to the server.
-
-type: long
-
-example: 12
-
---
-
-*`client.port`*::
-+
---
-Port of the client.
-
-type: long
-
-format: string
-
---
-
-*`client.registered_domain`*::
-+
---
-The highest registered client domain, stripped of the subdomain.
-For example, the registered domain for "foo.example.com" is "example.com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
-
-type: keyword
-
-example: example.com
-
---
-
-*`client.subdomain`*::
-+
---
-The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain.
-For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
-
-type: keyword
-
-example: east
-
---
-
-*`client.top_level_domain`*::
-+
---
-The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
-
-type: keyword
-
-example: co.uk
-
---
-
-*`client.user.domain`*::
-+
---
-Name of the directory the user is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`client.user.email`*::
-+
---
-User email address.
-
-type: keyword
-
---
-
-*`client.user.full_name`*::
-+
---
-User's full name, if available.
-
-type: keyword
-
-example: Albert Einstein
-
---
-
-*`client.user.full_name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`client.user.group.domain`*::
-+
---
-Name of the directory the group is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`client.user.group.id`*::
-+
---
-Unique identifier for the group on the system/platform.
-
-type: keyword
-
---
-
-*`client.user.group.name`*::
-+
---
-Name of the group.
-
-type: keyword
-
---
-
-*`client.user.hash`*::
-+
---
-Unique user hash to correlate information for a user in anonymized form.
-Useful if `user.id` or `user.name` contain confidential information and cannot be used.
-
-type: keyword
-
---
-
-*`client.user.id`*::
-+
---
-Unique identifier of the user.
-
-type: keyword
-
-example: S-1-5-21-202424912787-2692429404-2351956786-1000
-
---
-
-*`client.user.name`*::
-+
---
-Short name or login of the user.
-
-type: keyword
-
-example: a.einstein
-
---
-
-*`client.user.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`client.user.roles`*::
-+
---
-Array of user roles at the time of the event.
-
-type: keyword
-
-example: ["kibana_admin", "reporting_user"]
-
---
-
-[float]
-=== cloud
-
-Fields related to the cloud or infrastructure the events are coming from.
-
-
-*`cloud.account.id`*::
-+
---
-The cloud account or organization id used to identify different entities in a multi-tenant environment.
-Examples: AWS account id, Google Cloud ORG Id, or other unique identifier.
-
-type: keyword
-
-example: 666777888999
-
---
-
-*`cloud.account.name`*::
-+
---
-The cloud account name or alias used to identify different entities in a multi-tenant environment.
-Examples: AWS account name, Google Cloud ORG display name.
-
-type: keyword
-
-example: elastic-dev
-
---
-
-*`cloud.availability_zone`*::
-+
---
-Availability zone in which this host, resource, or service is located.
-
-type: keyword
-
-example: us-east-1c
-
---
-
-*`cloud.instance.id`*::
-+
---
-Instance ID of the host machine.
-
-type: keyword
-
-example: i-1234567890abcdef0
-
---
-
-*`cloud.instance.name`*::
-+
---
-Instance name of the host machine.
-
-type: keyword
-
---
-
-*`cloud.machine.type`*::
-+
---
-Machine type of the host machine.
-
-type: keyword
-
-example: t2.medium
-
---
-
-*`cloud.origin.account.id`*::
-+
---
-The cloud account or organization id used to identify different entities in a multi-tenant environment.
-Examples: AWS account id, Google Cloud ORG Id, or other unique identifier.
-
-type: keyword
-
-example: 666777888999
-
---
-
-*`cloud.origin.account.name`*::
-+
---
-The cloud account name or alias used to identify different entities in a multi-tenant environment.
-Examples: AWS account name, Google Cloud ORG display name.
-
-type: keyword
-
-example: elastic-dev
-
---
-
-*`cloud.origin.availability_zone`*::
-+
---
-Availability zone in which this host, resource, or service is located.
-
-type: keyword
-
-example: us-east-1c
-
---
-
-*`cloud.origin.instance.id`*::
-+
---
-Instance ID of the host machine.
-
-type: keyword
-
-example: i-1234567890abcdef0
-
---
-
-*`cloud.origin.instance.name`*::
-+
---
-Instance name of the host machine.
-
-type: keyword
-
---
-
-*`cloud.origin.machine.type`*::
-+
---
-Machine type of the host machine.
-
-type: keyword
-
-example: t2.medium
-
---
-
-*`cloud.origin.project.id`*::
-+
---
-The cloud project identifier.
-Examples: Google Cloud Project id, Azure Project id.
-
-type: keyword
-
-example: my-project
-
---
-
-*`cloud.origin.project.name`*::
-+
---
-The cloud project name.
-Examples: Google Cloud Project name, Azure Project name.
-
-type: keyword
-
-example: my project
-
---
-
-*`cloud.origin.provider`*::
-+
---
-Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean.
-
-type: keyword
-
-example: aws
-
---
-
-*`cloud.origin.region`*::
-+
---
-Region in which this host, resource, or service is located.
-
-type: keyword
-
-example: us-east-1
-
---
-
-*`cloud.origin.service.name`*::
-+
---
-The cloud service name is intended to distinguish services running on different platforms within a provider, eg AWS EC2 vs Lambda, GCP GCE vs App Engine, Azure VM vs App Server.
-Examples: app engine, app service, cloud run, fargate, lambda.
-
-type: keyword
-
-example: lambda
-
---
-
-*`cloud.project.id`*::
-+
---
-The cloud project identifier.
-Examples: Google Cloud Project id, Azure Project id.
-
-type: keyword
-
-example: my-project
-
---
-
-*`cloud.project.name`*::
-+
---
-The cloud project name.
-Examples: Google Cloud Project name, Azure Project name.
-
-type: keyword
-
-example: my project
-
---
-
-*`cloud.provider`*::
-+
---
-Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean.
-
-type: keyword
-
-example: aws
-
---
-
-*`cloud.region`*::
-+
---
-Region in which this host, resource, or service is located.
-
-type: keyword
-
-example: us-east-1
-
---
-
-*`cloud.service.name`*::
-+
---
-The cloud service name is intended to distinguish services running on different platforms within a provider, eg AWS EC2 vs Lambda, GCP GCE vs App Engine, Azure VM vs App Server.
-Examples: app engine, app service, cloud run, fargate, lambda.
-
-type: keyword
-
-example: lambda
-
---
-
-*`cloud.target.account.id`*::
-+
---
-The cloud account or organization id used to identify different entities in a multi-tenant environment.
-Examples: AWS account id, Google Cloud ORG Id, or other unique identifier.
-
-type: keyword
-
-example: 666777888999
-
---
-
-*`cloud.target.account.name`*::
-+
---
-The cloud account name or alias used to identify different entities in a multi-tenant environment.
-Examples: AWS account name, Google Cloud ORG display name.
-
-type: keyword
-
-example: elastic-dev
-
---
-
-*`cloud.target.availability_zone`*::
-+
---
-Availability zone in which this host, resource, or service is located.
-
-type: keyword
-
-example: us-east-1c
-
---
-
-*`cloud.target.instance.id`*::
-+
---
-Instance ID of the host machine.
-
-type: keyword
-
-example: i-1234567890abcdef0
-
---
-
-*`cloud.target.instance.name`*::
-+
---
-Instance name of the host machine.
-
-type: keyword
-
---
-
-*`cloud.target.machine.type`*::
-+
---
-Machine type of the host machine.
-
-type: keyword
-
-example: t2.medium
-
---
-
-*`cloud.target.project.id`*::
-+
---
-The cloud project identifier.
-Examples: Google Cloud Project id, Azure Project id.
-
-type: keyword
-
-example: my-project
-
---
-
-*`cloud.target.project.name`*::
-+
---
-The cloud project name.
-Examples: Google Cloud Project name, Azure Project name.
-
-type: keyword
-
-example: my project
-
---
-
-*`cloud.target.provider`*::
-+
---
-Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean.
-
-type: keyword
-
-example: aws
-
---
-
-*`cloud.target.region`*::
-+
---
-Region in which this host, resource, or service is located.
-
-type: keyword
-
-example: us-east-1
-
---
-
-*`cloud.target.service.name`*::
-+
---
-The cloud service name is intended to distinguish services running on different platforms within a provider, eg AWS EC2 vs Lambda, GCP GCE vs App Engine, Azure VM vs App Server.
-Examples: app engine, app service, cloud run, fargate, lambda.
-
-type: keyword
-
-example: lambda
-
---
-
-[float]
-=== code_signature
-
-These fields contain information about binary code signatures.
-
-
-*`code_signature.digest_algorithm`*::
-+
---
-The hashing algorithm used to sign the process.
-This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm.
-
-type: keyword
-
-example: sha256
-
---
-
-*`code_signature.exists`*::
-+
---
-Boolean to capture if a signature is present.
-
-type: boolean
-
-example: true
-
---
-
-*`code_signature.signing_id`*::
-+
---
-The identifier used to sign the process.
-This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: com.apple.xpc.proxy
-
---
-
-*`code_signature.status`*::
-+
---
-Additional information about the certificate status.
-This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked.
-
-type: keyword
-
-example: ERROR_UNTRUSTED_ROOT
-
---
-
-*`code_signature.subject_name`*::
-+
---
-Subject name of the code signer
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`code_signature.team_id`*::
-+
---
-The team identifier used to sign the process.
-This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: EQHXZ8M8AV
-
---
-
-*`code_signature.timestamp`*::
-+
---
-Date and time when the code signature was generated and signed.
-
-type: date
-
-example: 2021-01-01T12:10:30Z
-
---
-
-*`code_signature.trusted`*::
-+
---
-Stores the trust status of the certificate chain.
-Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status.
-
-type: boolean
-
-example: true
-
---
-
-*`code_signature.valid`*::
-+
---
-Boolean to capture if the digital signature is verified against the binary content.
-Leave unpopulated if a certificate was unchecked.
-
-type: boolean
-
-example: true
-
---
-
-[float]
-=== container
-
-Container fields are used for meta information about the specific container that is the source of information. These fields help correlate data based on containers from any runtime.
-
-
-*`container.cpu.usage`*::
-+
---
-Percent CPU used, normalized by the number of CPU cores; the value ranges from 0 to 1. Scaling factor: 1000.
-
-type: scaled_float
-
---
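-
-As a minimal sketch (the values are made up), a container whose two cores measure 80% and 40% utilization might be reported like this; the event carries the 0-to-1 value, and the `scaled_float` mapping multiplies it by the scaling factor of 1000 when indexing:
-
-[source,yaml]
------
-container.id: my-container   # hypothetical container id
-container.cpu.usage: 0.6     # (0.80 + 0.40) / 2 cores = 0.6, indexed as 600 with scaling factor 1000
------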
-
-*`container.disk.read.bytes`*::
-+
---
-The total number of bytes (gauge) read successfully (aggregated from all disks) since the last metric collection.
-
-type: long
-
---
-
-*`container.disk.write.bytes`*::
-+
---
-The total number of bytes (gauge) written successfully (aggregated from all disks) since the last metric collection.
-
-type: long
-
---
-
-*`container.id`*::
-+
---
-Unique container id.
-
-type: keyword
-
---
-
-*`container.image.name`*::
-+
---
-Name of the image the container was built on.
-
-type: keyword
-
---
-
-*`container.image.tag`*::
-+
---
-Container image tags.
-
-type: keyword
-
---
-
-*`container.labels`*::
-+
---
-Image labels.
-
-type: object
-
---
-
-*`container.memory.usage`*::
-+
---
-Memory usage as a percentage, ranging from 0 to 1. Scaling factor: 1000.
-
-type: scaled_float
-
---
-
-*`container.name`*::
-+
---
-Container name.
-
-type: keyword
-
---
-
-*`container.network.egress.bytes`*::
-+
---
-The number of bytes (gauge) sent out on all network interfaces by the container since the last metric collection.
-
-type: long
-
---
-
-*`container.network.ingress.bytes`*::
-+
---
-The number of bytes received (gauge) on all network interfaces by the container since the last metric collection.
-
-type: long
-
---
-
-*`container.runtime`*::
-+
---
-Runtime managing this container.
-
-type: keyword
-
-example: docker
-
---
-
-[float]
-=== data_stream
-
-The data_stream fields take part in defining the new data stream naming scheme.
-In the new data stream naming scheme the values of the data stream fields combine to form the name of the actual data stream in the following manner: `{data_stream.type}-{data_stream.dataset}-{data_stream.namespace}`. This means the fields can only contain characters that are valid as part of names of data streams. More details about this can be found in this https://www.elastic.co/blog/an-introduction-to-the-elastic-data-stream-naming-scheme[blog post].
-An Elasticsearch data stream consists of one or more backing indices, and a data stream name forms part of the backing indices names. Due to this convention, data streams must also follow index naming restrictions. For example, data stream names cannot include `\`, `/`, `*`, `?`, `"`, `<`, `>`, `|`, ` ` (space character), `,`, or `#`. Please see the Elasticsearch reference for additional https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html#indices-create-api-path-params[restrictions].
-
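-As an illustrative sketch (the concrete values are hypothetical), the three fields combine into a data stream name like this:
-
-[source,yaml]
------
-data_stream.type: logs             # "logs" or "metrics"
-data_stream.dataset: nginx.access  # signifies the source of the data
-data_stream.namespace: production  # user-defined grouping
-# resulting data stream name: logs-nginx.access-production
------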
-
-*`data_stream.dataset`*::
-+
---
-The field can contain anything that makes sense to signify the source of the data.
-Examples include `nginx.access`, `prometheus`, `endpoint` etc. For data streams that otherwise fit, but that do not have a dataset set, we use the value "generic" for the dataset value. `event.dataset` should have the same value as `data_stream.dataset`.
-Beyond the Elasticsearch data stream naming criteria noted above, the `dataset` value has additional restrictions:
- * Must not contain `-`
- * No longer than 100 characters
-
-type: constant_keyword
-
-example: nginx.access
-
---
-
-*`data_stream.namespace`*::
-+
---
-A user defined namespace. Namespaces are useful to allow grouping of data.
-Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with `default`. If no value is used, it falls back to `default`.
-Beyond the Elasticsearch index naming criteria noted above, the `namespace` value has the following additional restrictions:
- * Must not contain `-`
- * No longer than 100 characters
-
-type: constant_keyword
-
-example: production
-
---
-
-*`data_stream.type`*::
-+
---
-An overarching type for the data stream.
-Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future.
-
-type: constant_keyword
-
-example: logs
-
---
-
-[float]
-=== destination
-
-Destination fields capture details about the receiver of a network exchange/packet. These fields are populated from a network event, packet, or other event containing details of a network transaction.
-Destination fields are usually populated in conjunction with source fields. The source and destination fields are considered the baseline and should always be filled if an event contains source and destination details from a network transaction. If the event also contains identification of the client and server roles, then the client and server fields should also be populated.
-
-
-*`destination.address`*::
-+
---
-Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain, or a Unix socket. You should always store the raw address in the `.address` field.
-Then it should be duplicated to `.ip` or `.domain`, depending on which one it is (a sketch follows below).
-
-type: keyword
-
---
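-
-A minimal sketch of this duplication, assuming a made-up event whose raw address happens to be an IP:
-
-[source,yaml]
------
-destination.address: 10.42.0.7   # raw address exactly as seen in the event
-destination.ip: 10.42.0.7        # duplicated here because the address is an IP
------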
-
-*`destination.as.number`*::
-+
---
-Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet.
-
-type: long
-
-example: 15169
-
---
-
-*`destination.as.organization.name`*::
-+
---
-Organization name.
-
-type: keyword
-
-example: Google LLC
-
---
-
-*`destination.as.organization.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`destination.bytes`*::
-+
---
-Bytes sent from the destination to the source.
-
-type: long
-
-example: 184
-
-format: bytes
-
---
-
-*`destination.domain`*::
-+
---
-The domain name of the destination system.
-This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment.
-
-type: keyword
-
-example: foo.example.com
-
---
-
-*`destination.geo.city_name`*::
-+
---
-City name.
-
-type: keyword
-
-example: Montreal
-
---
-
-*`destination.geo.continent_code`*::
-+
---
-Two-letter code representing continent's name.
-
-type: keyword
-
-example: NA
-
---
-
-*`destination.geo.continent_name`*::
-+
---
-Name of the continent.
-
-type: keyword
-
-example: North America
-
---
-
-*`destination.geo.country_iso_code`*::
-+
---
-Country ISO code.
-
-type: keyword
-
-example: CA
-
---
-
-*`destination.geo.country_name`*::
-+
---
-Country name.
-
-type: keyword
-
-example: Canada
-
---
-
-*`destination.geo.location`*::
-+
---
-Longitude and latitude.
-
-type: geo_point
-
-example: { "lon": -73.614830, "lat": 45.505918 }
-
---
-
-*`destination.geo.name`*::
-+
---
-User-defined description of a location, at the level of granularity they care about.
-Could be the name of their data centers, the floor number, if this describes a local physical entity, city names.
-Not typically used in automated geolocation.
-
-type: keyword
-
-example: boston-dc
-
---
-
-*`destination.geo.postal_code`*::
-+
---
-Postal code associated with the location.
-Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
-
-type: keyword
-
-example: 94040
-
---
-
-*`destination.geo.region_iso_code`*::
-+
---
-Region ISO code.
-
-type: keyword
-
-example: CA-QC
-
---
-
-*`destination.geo.region_name`*::
-+
---
-Region name.
-
-type: keyword
-
-example: Quebec
-
---
-
-*`destination.geo.timezone`*::
-+
---
-The time zone of the location, such as IANA time zone name.
-
-type: keyword
-
-example: America/Argentina/Buenos_Aires
-
---
-
-*`destination.ip`*::
-+
---
-IP address of the destination (IPv4 or IPv6).
-
-type: ip
-
---
-
-*`destination.mac`*::
-+
---
-MAC address of the destination.
-The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen.
-
-type: keyword
-
-example: 00-00-5E-00-53-23
-
---
-
-*`destination.nat.ip`*::
-+
---
-Translated IP of the destination, based on NAT sessions (e.g. internet to private DMZ).
-Typically used with load balancers, firewalls, or routers.
-
-type: ip
-
---
-
-*`destination.nat.port`*::
-+
---
-Port the source session is translated to by the NAT device.
-Typically used with load balancers, firewalls, or routers.
-
-type: long
-
-format: string
-
---
-
-*`destination.packets`*::
-+
---
-Packets sent from the destination to the source.
-
-type: long
-
-example: 12
-
---
-
-*`destination.port`*::
-+
---
-Port of the destination.
-
-type: long
-
-format: string
-
---
-
-*`destination.registered_domain`*::
-+
---
-The highest registered destination domain, stripped of the subdomain.
-For example, the registered domain for "foo.example.com" is "example.com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
-
-type: keyword
-
-example: example.com
-
---
-
-*`destination.subdomain`*::
-+
---
-The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain.
-For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period (see the sketch below).
-
-type: keyword
-
-example: east
-
---
-
-*`destination.top_level_domain`*::
-+
---
-The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
-
-type: keyword
-
-example: co.uk
-
---
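-
-Putting the domain-related fields above together, a hedged decomposition sketch for the hypothetical name `www.east.mydomain.co.uk` could look like this:
-
-[source,yaml]
------
-destination.domain: www.east.mydomain.co.uk    # full name as observed
-destination.top_level_domain: co.uk            # effective TLD, from the public suffix list
-destination.registered_domain: mydomain.co.uk  # highest registered domain
-destination.subdomain: east                    # names under the registered domain, excluding the host name
------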
-
-*`destination.user.domain`*::
-+
---
-Name of the directory the user is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`destination.user.email`*::
-+
---
-User email address.
-
-type: keyword
-
---
-
-*`destination.user.full_name`*::
-+
---
-User's full name, if available.
-
-type: keyword
-
-example: Albert Einstein
-
---
-
-*`destination.user.full_name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`destination.user.group.domain`*::
-+
---
-Name of the directory the group is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`destination.user.group.id`*::
-+
---
-Unique identifier for the group on the system/platform.
-
-type: keyword
-
---
-
-*`destination.user.group.name`*::
-+
---
-Name of the group.
-
-type: keyword
-
---
-
-*`destination.user.hash`*::
-+
---
-Unique user hash to correlate information for a user in anonymized form.
-Useful if `user.id` or `user.name` contain confidential information and cannot be used.
-
-type: keyword
-
---
-
-*`destination.user.id`*::
-+
---
-Unique identifier of the user.
-
-type: keyword
-
-example: S-1-5-21-202424912787-2692429404-2351956786-1000
-
---
-
-*`destination.user.name`*::
-+
---
-Short name or login of the user.
-
-type: keyword
-
-example: a.einstein
-
---
-
-*`destination.user.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`destination.user.roles`*::
-+
---
-Array of user roles at the time of the event.
-
-type: keyword
-
-example: ["kibana_admin", "reporting_user"]
-
---
-
-[float]
-=== dll
-
-These fields contain information about code libraries dynamically loaded into processes.
-
-Many operating systems refer to "shared code libraries" with different names, but this field set refers to all of the following:
-* Dynamic-link library (`.dll`) commonly used on Windows
-* Shared Object (`.so`) commonly used on Unix-like operating systems
-* Dynamic library (`.dylib`) commonly used on macOS
-
-
-*`dll.code_signature.digest_algorithm`*::
-+
---
-The hashing algorithm used to sign the process.
-This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm.
-
-type: keyword
-
-example: sha256
-
---
-
-*`dll.code_signature.exists`*::
-+
---
-Boolean to capture if a signature is present.
-
-type: boolean
-
-example: true
-
---
-
-*`dll.code_signature.signing_id`*::
-+
---
-The identifier used to sign the process.
-This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: com.apple.xpc.proxy
-
---
-
-*`dll.code_signature.status`*::
-+
---
-Additional information about the certificate status.
-This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked.
-
-type: keyword
-
-example: ERROR_UNTRUSTED_ROOT
-
---
-
-*`dll.code_signature.subject_name`*::
-+
---
-Subject name of the code signer.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`dll.code_signature.team_id`*::
-+
---
-The team identifier used to sign the process.
-This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: EQHXZ8M8AV
-
---
-
-*`dll.code_signature.timestamp`*::
-+
---
-Date and time when the code signature was generated and signed.
-
-type: date
-
-example: 2021-01-01T12:10:30Z
-
---
-
-*`dll.code_signature.trusted`*::
-+
---
-Stores the trust status of the certificate chain.
-Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status.
-
-type: boolean
-
-example: true
-
---
-
-*`dll.code_signature.valid`*::
-+
---
-Boolean to capture if the digital signature is verified against the binary content.
-Leave unpopulated if a certificate was unchecked.
-
-type: boolean
-
-example: true
-
---
-
-*`dll.hash.md5`*::
-+
---
-MD5 hash.
-
-type: keyword
-
---
-
-*`dll.hash.sha1`*::
-+
---
-SHA1 hash.
-
-type: keyword
-
---
-
-*`dll.hash.sha256`*::
-+
---
-SHA256 hash.
-
-type: keyword
-
---
-
-*`dll.hash.sha512`*::
-+
---
-SHA512 hash.
-
-type: keyword
-
---
-
-*`dll.hash.ssdeep`*::
-+
---
-SSDEEP hash.
-
-type: keyword
-
---
-
-*`dll.name`*::
-+
---
-Name of the library.
-This generally maps to the name of the file on disk.
-
-type: keyword
-
-example: kernel32.dll
-
---
-
-*`dll.path`*::
-+
---
-Full file path of the library.
-
-type: keyword
-
-example: C:\Windows\System32\kernel32.dll
-
---
-
-*`dll.pe.architecture`*::
-+
---
-CPU architecture target for the file.
-
-type: keyword
-
-example: x64
-
---
-
-*`dll.pe.company`*::
-+
---
-Internal company name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`dll.pe.description`*::
-+
---
-Internal description of the file, provided at compile-time.
-
-type: keyword
-
-example: Paint
-
---
-
-*`dll.pe.file_version`*::
-+
---
-Internal version of the file, provided at compile-time.
-
-type: keyword
-
-example: 6.3.9600.17415
-
---
-
-*`dll.pe.imphash`*::
-+
---
-A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html.
-
-type: keyword
-
-example: 0c6803c4e922103c4dca5963aad36ddf
-
---
-
-*`dll.pe.original_file_name`*::
-+
---
-Internal name of the file, provided at compile-time.
-
-type: keyword
-
-example: MSPAINT.EXE
-
---
-
-*`dll.pe.product`*::
-+
---
-Internal product name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft® Windows® Operating System
-
---
-
-[float]
-=== dns
-
-Fields describing DNS queries and answers.
-DNS events should either represent a single DNS query prior to getting answers (`dns.type:query`) or they should represent a full exchange and contain the query details as well as all of the answers that were provided for this query (`dns.type:answer`).
-
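-As a rough sketch (all values are hypothetical), a full-exchange event of `dns.type:answer` might carry the question, the answers, and the extracted `dns.resolved_ip` values like this:
-
-[source,yaml]
------
-dns.type: answer
-dns.question.name: www.example.com
-dns.question.type: A
-dns.answers:
-  - name: www.example.com
-    type: A
-    ttl: 180
-    data: 10.10.10.10
-dns.resolved_ip: ["10.10.10.10"]   # every IP seen in answers.data, indexed as ip values
------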
-
-*`dns.answers`*::
-+
---
-An array containing an object for each answer section returned by the server.
-The main keys that should be present in these objects are defined by ECS. Records that have more information may contain more keys than what ECS defines.
-Not all DNS data sources give all details about DNS answers. At minimum, answer objects must contain the `data` key. If more information is available, map as much of it to ECS as possible, and add any additional fields to the answer objects as custom fields.
-
-type: object
-
---
-
-*`dns.answers.class`*::
-+
---
-The class of DNS data contained in this resource record.
-
-type: keyword
-
-example: IN
-
---
-
-*`dns.answers.data`*::
-+
---
-The data describing the resource.
-The meaning of this data depends on the type and class of the resource record.
-
-type: keyword
-
-example: 10.10.10.10
-
---
-
-*`dns.answers.name`*::
-+
---
-The domain name to which this resource record pertains.
-If a chain of CNAME is being resolved, each answer's `name` should be the one that corresponds with the answer's `data`. It should not simply be the original `question.name` repeated.
-
-type: keyword
-
-example: www.example.com
-
---
-
-*`dns.answers.ttl`*::
-+
---
-The time interval in seconds that this resource record may be cached before it should be discarded. Zero values mean that the data should not be cached.
-
-type: long
-
-example: 180
-
---
-
-*`dns.answers.type`*::
-+
---
-The type of data contained in this resource record.
-
-type: keyword
-
-example: CNAME
-
---
-
-*`dns.header_flags`*::
-+
---
-Array of 2 letter DNS header flags.
-Expected values are: AA, TC, RD, RA, AD, CD, DO.
-
-type: keyword
-
-example: ["RD", "RA"]
-
---
-
-*`dns.id`*::
-+
---
-The DNS packet identifier assigned by the program that generated the query. The identifier is copied to the response.
-
-type: keyword
-
-example: 62111
-
---
-
-*`dns.op_code`*::
-+
---
-The DNS operation code that specifies the kind of query in the message. This value is set by the originator of a query and copied into the response.
-
-type: keyword
-
-example: QUERY
-
---
-
-*`dns.question.class`*::
-+
---
-The class of records being queried.
-
-type: keyword
-
-example: IN
-
---
-
-*`dns.question.name`*::
-+
---
-The name being queried.
-If the name field contains non-printable characters (below 32 or above 126), those characters should be represented as escaped base 10 integers (\DDD). Back slashes and quotes should be escaped. Tabs, carriage returns, and line feeds should be converted to \t, \r, and \n respectively.
-
-type: keyword
-
-example: www.example.com
-
---
-
-*`dns.question.registered_domain`*::
-+
---
-The highest registered domain, stripped of the subdomain.
-For example, the registered domain for "foo.example.com" is "example.com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
-
-type: keyword
-
-example: example.com
-
---
-
-*`dns.question.subdomain`*::
-+
---
-The subdomain is all of the labels under the registered_domain.
-If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
-
-type: keyword
-
-example: www
-
---
-
-*`dns.question.top_level_domain`*::
-+
---
-The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
-
-type: keyword
-
-example: co.uk
-
---
-
-*`dns.question.type`*::
-+
---
-The type of record being queried.
-
-type: keyword
-
-example: AAAA
-
---
-
-*`dns.resolved_ip`*::
-+
---
-Array containing all IPs seen in `answers.data`.
-The `answers` array can be difficult to use, because of the variety of data formats it can contain. Extracting all IP addresses seen in there to `dns.resolved_ip` makes it possible to index them as IP addresses, and makes them easier to visualize and query for.
-
-type: ip
-
-example: ["10.10.10.10", "10.10.10.11"]
-
---
-
-*`dns.response_code`*::
-+
---
-The DNS response code.
-
-type: keyword
-
-example: NOERROR
-
---
-
-*`dns.type`*::
-+
---
-The type of DNS event captured, query or answer.
-If your source of DNS events only gives you DNS queries, you should only create dns events of type `dns.type:query`.
-If your source of DNS events gives you answers as well, you should create one event per query (optionally as soon as the query is seen). And a second event containing all query details as well as an array of answers.
-
-type: keyword
-
-example: answer
-
---
-
-[float]
-=== ecs
-
-Meta-information specific to ECS.
-
-
-*`ecs.version`*::
-+
---
-ECS version this event conforms to. `ecs.version` is a required field and must exist in all events.
-When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events.
-
-type: keyword
-
-example: 1.0.0
-
-required: True
-
---
-
-[float]
-=== elf
-
-These fields contain Linux Executable Linkable Format (ELF) metadata.
-
-
-*`elf.architecture`*::
-+
---
-Machine architecture of the ELF file.
-
-type: keyword
-
-example: x86-64
-
---
-
-*`elf.byte_order`*::
-+
---
-Byte sequence of ELF file.
-
-type: keyword
-
-example: Little Endian
-
---
-
-*`elf.cpu_type`*::
-+
---
-CPU type of the ELF file.
-
-type: keyword
-
-example: Intel
-
---
-
-*`elf.creation_date`*::
-+
---
-Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators.
-
-type: date
-
---
-
-*`elf.exports`*::
-+
---
-List of exported element names and types.
-
-type: flattened
-
---
-
-*`elf.header.abi_version`*::
-+
---
-Version of the ELF Application Binary Interface (ABI).
-
-type: keyword
-
---
-
-*`elf.header.class`*::
-+
---
-Header class of the ELF file.
-
-type: keyword
-
---
-
-*`elf.header.data`*::
-+
---
-Data table of the ELF header.
-
-type: keyword
-
---
-
-*`elf.header.entrypoint`*::
-+
---
-Header entrypoint of the ELF file.
-
-type: long
-
-format: string
-
---
-
-*`elf.header.object_version`*::
-+
---
-"0x1" for original ELF files.
-
-type: keyword
-
---
-
-*`elf.header.os_abi`*::
-+
---
-Application Binary Interface (ABI) of the Linux OS.
-
-type: keyword
-
---
-
-*`elf.header.type`*::
-+
---
-Header type of the ELF file.
-
-type: keyword
-
---
-
-*`elf.header.version`*::
-+
---
-Version of the ELF header.
-
-type: keyword
-
---
-
-*`elf.imports`*::
-+
---
-List of imported element names and types.
-
-type: flattened
-
---
-
-*`elf.sections`*::
-+
---
-An array containing an object for each section of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`.
-
-type: nested
-
---
-
-*`elf.sections.chi2`*::
-+
---
-Chi-square probability distribution of the section.
-
-type: long
-
-format: number
-
---
-
-*`elf.sections.entropy`*::
-+
---
-Shannon entropy calculation from the section.
-
-type: long
-
-format: number
-
---
-
-*`elf.sections.flags`*::
-+
---
-ELF Section List flags.
-
-type: keyword
-
---
-
-*`elf.sections.name`*::
-+
---
-ELF Section List name.
-
-type: keyword
-
---
-
-*`elf.sections.physical_offset`*::
-+
---
-ELF Section List offset.
-
-type: keyword
-
---
-
-*`elf.sections.physical_size`*::
-+
---
-ELF Section List physical size.
-
-type: long
-
-format: bytes
-
---
-
-*`elf.sections.type`*::
-+
---
-ELF Section List type.
-
-type: keyword
-
---
-
-*`elf.sections.virtual_address`*::
-+
---
-ELF Section List virtual address.
-
-type: long
-
-format: string
-
---
-
-*`elf.sections.virtual_size`*::
-+
---
-ELF Section List virtual size.
-
-type: long
-
-format: string
-
---
-
-*`elf.segments`*::
-+
---
-An array containing an object for each segment of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`.
-
-type: nested
-
---
-
-*`elf.segments.sections`*::
-+
---
-ELF object segment sections.
-
-type: keyword
-
---
-
-*`elf.segments.type`*::
-+
---
-ELF object segment type.
-
-type: keyword
-
---
-
-*`elf.shared_libraries`*::
-+
---
-List of shared libraries used by this ELF object.
-
-type: keyword
-
---
-
-*`elf.telfhash`*::
-+
---
-telfhash symbol hash for ELF file.
-
-type: keyword
-
---
-
-[float]
-=== error
-
-These fields can represent errors of any kind.
-Use them for errors that happen while fetching events or in cases where the event itself contains an error.
-
-
-*`error.code`*::
-+
---
-Error code describing the error.
-
-type: keyword
-
---
-
-*`error.id`*::
-+
---
-Unique identifier for the error.
-
-type: keyword
-
---
-
-*`error.message`*::
-+
---
-Error message.
-
-type: match_only_text
-
---
-
-*`error.stack_trace`*::
-+
---
-The stack trace of this error in plain text.
-
-type: wildcard
-
---
-
-*`error.stack_trace.text`*::
-+
---
-type: match_only_text
-
---
-
-*`error.type`*::
-+
---
-The type of the error, for example the class name of the exception.
-
-type: keyword
-
-example: java.lang.NullPointerException
-
---
-
-[float]
-=== event
-
-The event fields are used for context information about the log or metric event itself.
-A log is defined as an event containing details of something that happened. Log events must include the time at which the thing happened. Examples of log events include a process starting on a host, a network packet being sent from a source to a destination, or a network connection between a client and a server being initiated or closed. A metric is defined as an event containing one or more numerical measurements and the time at which the measurement was taken. Examples of metric events include memory pressure measured on a host and device temperature. See the `event.kind` definition in this section for additional details about metric and state events.
-
-
-*`event.action`*::
-+
---
-The action captured by the event.
-This describes the information in the event. It is more specific than `event.category`. Examples are `group-add`, `process-started`, `file-created`. The value is normally defined by the implementer.
-
-type: keyword
-
-example: user-password-change
-
---
-
-*`event.agent_id_status`*::
-+
---
-Agents are normally responsible for populating the `agent.id` field value. If the system receiving events is capable of validating the value based on authentication information for the client then this field can be used to reflect the outcome of that validation.
-For example if the agent's connection is authenticated with mTLS and the client cert contains the ID of the agent to which the cert was issued then the `agent.id` value in events can be checked against the certificate. If the values match then `event.agent_id_status: verified` is added to the event, otherwise one of the other allowed values should be used.
-If no validation is performed then the field should be omitted.
-The allowed values are:
-`verified` - The `agent.id` field value matches expected value obtained from auth metadata.
-`mismatch` - The `agent.id` field value does not match the expected value obtained from auth metadata.
-`missing` - There was no `agent.id` field in the event to validate.
-`auth_metadata_missing` - There was no auth metadata or it was missing information about the agent ID.
-
-type: keyword
-
-example: verified
-
---
-
-*`event.category`*::
-+
---
-This is one of four ECS Categorization Fields, and indicates the second level in the ECS category hierarchy.
-`event.category` represents the "big buckets" of ECS categories. For example, filtering on `event.category:process` yields all events relating to process activity. This field is closely related to `event.type`, which is used as a subcategory.
-This field is an array. This will allow proper categorization of some events that fall in multiple categories.
-
-type: keyword
-
-example: authentication
-
---
-
-*`event.code`*::
-+
---
-Identification code for this event, if one exists.
-Some event sources use event codes to identify messages unambiguously, regardless of message language or wording adjustments over time. An example of this is the Windows Event ID.
-
-type: keyword
-
-example: 4648
-
---
-
-*`event.created`*::
-+
---
-event.created contains the date/time when the event was first read by an agent, or by your pipeline.
-This field is distinct from @timestamp in that @timestamp typically contains the time extracted from the original event.
-In most situations, these two timestamps will be slightly different. The difference can be used to calculate the delay between your source generating an event, and the time when your agent first processed it. This can be used to monitor your agent's or pipeline's ability to keep up with your event source (a worked sketch follows below).
-In case the two timestamps are identical, @timestamp should be used.
-
-type: date
-
-example: 2016-05-23T08:05:34.857Z
-
---
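-
-A worked sketch of the relationship described above, with made-up timestamps; the difference between the two fields is the delay introduced before the agent or pipeline first read the event:
-
-[source,yaml]
------
-"@timestamp": "2016-05-23T08:05:34.853Z"    # time extracted from the original event
-event.created: "2016-05-23T08:05:34.857Z"   # time the agent or pipeline first read the event
-# delay = event.created - @timestamp = 4 milliseconds
------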
-
-*`event.dataset`*::
-+
---
-Name of the dataset.
-If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from.
-It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name.
-
-type: keyword
-
-example: apache.access
-
---
-
-*`event.duration`*::
-+
---
-Duration of the event in nanoseconds.
-If event.start and event.end are known, this value should be the difference between the end and start time (see the sketch below).
-
-type: long
-
-format: duration
-
---
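-
-A small worked sketch (hypothetical times), remembering that the value is expressed in nanoseconds:
-
-[source,yaml]
------
-event.start: "2016-05-23T08:05:34.853Z"
-event.end: "2016-05-23T08:05:35.853Z"
-event.duration: 1000000000   # end - start = 1 second = 1,000,000,000 ns
------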
-
-*`event.end`*::
-+
---
-event.end contains the date when the event ended or when the activity was last observed.
-
-type: date
-
---
-
-*`event.hash`*::
-+
---
-Hash (perhaps logstash fingerprint) of the raw field, to be able to demonstrate log integrity.
-
-type: keyword
-
-example: 123456789012345678901234567890ABCD
-
---
-
-*`event.id`*::
-+
---
-Unique ID to describe the event.
-
-type: keyword
-
-example: 8a4f500d
-
---
-
-*`event.ingested`*::
-+
---
-Timestamp when an event arrived in the central data store.
-This is different from `@timestamp`, which is when the event originally occurred. It's also different from `event.created`, which is meant to capture the first time an agent saw the event.
-In normal conditions, assuming no tampering, the timestamps should chronologically look like this: `@timestamp` < `event.created` < `event.ingested`.
-
-type: date
-
-example: 2016-05-23T08:05:35.101Z
-
---
-
-*`event.kind`*::
-+
---
-This is one of four ECS Categorization Fields, and indicates the highest level in the ECS category hierarchy.
-`event.kind` gives high-level information about what type of information the event contains, without being specific to the contents of the event. For example, values of this field distinguish alert events from metric events.
-The value of this field can be used to inform how these kinds of events should be handled. They may warrant different retention or different access control, and it may also help to understand whether the data is coming in at regular intervals or not.
-
-type: keyword
-
-example: alert
-
---
-
-*`event.module`*::
-+
---
-Name of the module this data is coming from.
-If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), `event.module` should contain the name of this module.
-
-type: keyword
-
-example: apache
-
---
-
-*`event.original`*::
-+
---
-Raw text message of entire event. Used to demonstrate log integrity or where the full log message (before splitting it up in multiple parts) may be required, e.g. for reindex.
-This field is not indexed and doc_values are disabled. It cannot be searched, but it can be retrieved from `_source`. If users wish to override this and index this field, please see `Field data types` in the `Elasticsearch Reference`.
-
-type: keyword
-
-example: Sep 19 08:26:10 host CEF:0|Security| threatmanager|1.0|100| worm successfully stopped|10|src=10.0.0.1 dst=2.1.2.2spt=1232
-
-Field is not indexed.
-
---
-
-*`event.outcome`*::
-+
---
-This is one of four ECS Categorization Fields, and indicates the lowest level in the ECS category hierarchy.
-`event.outcome` simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event.
-Note that when a single transaction is described in multiple events, each event may populate different values of `event.outcome`, according to their perspective.
-Also note that in the case of a compound event (a single event that contains multiple logical events), this field should be populated with the value that best captures the overall success or failure from the perspective of the event producer.
-Further note that not all events will have an associated outcome. For example, this field is generally not populated for metric events, events with `event.type:info`, or any events for which an outcome does not make logical sense.
-
-type: keyword
-
-example: success
-
---
-
-*`event.provider`*::
-+
---
-Source of the event.
-Event transports such as Syslog or the Windows Event Log typically mention the source of an event. It can be the name of the software that generated the event (e.g. Sysmon, httpd), or of a subsystem of the operating system (kernel, Microsoft-Windows-Security-Auditing).
-
-type: keyword
-
-example: kernel
-
---
-
-*`event.reason`*::
-+
---
-Reason why this event happened, according to the source.
-This describes the why of a particular action or outcome captured in the event. Where `event.action` captures the action from the event, `event.reason` describes why that action was taken. For example, a web proxy with an `event.action` which denied the request may also populate `event.reason` with the reason why (e.g. `blocked site`).
-
-type: keyword
-
-example: Terminated an unexpected process
-
---
-
-*`event.reference`*::
-+
---
-Reference URL linking to additional information about this event.
-This URL links to a static definition of this event. Alert events, indicated by `event.kind:alert`, are a common use case for this field.
-
-type: keyword
-
-example: https://system.example.com/event/#0001234
-
---
-
-*`event.risk_score`*::
-+
---
-Risk score or priority of the event (e.g. security solutions). Use your system's original value here.
-
-type: float
-
---
-
-*`event.risk_score_norm`*::
-+
---
-Normalized risk score or priority of the event, on a scale of 0 to 100.
-This is mainly useful if you use more than one system that assigns risk scores, and you want to see a normalized value across all systems.
-
-type: float
-
---
-
-*`event.sequence`*::
-+
---
-Sequence number of the event.
-The sequence number is a value published by some event sources, to make the exact ordering of events unambiguous, regardless of the timestamp precision.
-
-type: long
-
-format: string
-
---
-
-*`event.severity`*::
-+
---
-The numeric severity of the event according to your event source.
-What the different severity values mean can be different between sources and use cases. It's up to the implementer to make sure severities are consistent across events from the same source.
-The Syslog severity belongs in `log.syslog.severity.code`. `event.severity` is meant to represent the severity according to the event source (e.g. firewall, IDS). If the event source does not publish its own severity, you may optionally copy the `log.syslog.severity.code` to `event.severity`.
-
-type: long
-
-example: 7
-
-format: string
-
---
-
-*`event.start`*::
-+
---
-event.start contains the date when the event started or when the activity was first observed.
-
-type: date
-
---
-
-*`event.timezone`*::
-+
---
-This field should be populated when the event's timestamp does not include timezone information already (e.g. default Syslog timestamps). It's optional otherwise.
-Acceptable timezone formats are: a canonical ID (e.g. "Europe/Amsterdam"), abbreviated (e.g. "EST") or an HH:mm differential (e.g. "-05:00").
-
-type: keyword
-
---
-
-*`event.type`*::
-+
---
-This is one of four ECS Categorization Fields, and indicates the third level in the ECS category hierarchy.
-`event.type` represents a categorization "sub-bucket" that, when used along with the `event.category` field values, enables filtering events down to a level appropriate for a single visualization.
-This field is an array. This will allow proper categorization of some events that fall in multiple event types.
-
-type: keyword
-
---
-
-*`event.url`*::
-+
---
-URL linking to an external system to continue investigation of this event.
-This URL links to another system where in-depth investigation of the specific occurrence of this event can take place. Alert events, indicated by `event.kind:alert`, are a common use case for this field.
-
-type: keyword
-
-example: https://mysystem.example.com/alert/5271dedb-f5b0-4218-87f0-4ac4870a38fe
-
---
-
-[float]
-=== faas
-
-These fields describe information about the function as a service (FaaS) that is relevant to the event.
-
-
-*`faas.coldstart`*::
-+
---
-Boolean value indicating a cold start of a function.
-
-type: boolean
-
---
-
-*`faas.execution`*::
-+
---
-The execution ID of the current function execution.
-
-type: keyword
-
-example: af9d5aa4-a685-4c5f-a22b-444f80b3cc28
-
---
-
-*`faas.trigger`*::
-+
---
-Details about the function trigger.
-
-type: nested
-
---
-
-*`faas.trigger.request_id`*::
-+
---
-The ID of the trigger request, message, event, etc.
-
-type: keyword
-
-example: 123456789
-
---
-
-*`faas.trigger.type`*::
-+
---
-The trigger for the function execution.
-Expected values are:
- * http
- * pubsub
- * datasource
- * timer
- * other
-
-type: keyword
-
-example: http
-
---
-
-[float]
-=== file
-
-A file is defined as a set of information that has been created on, or has existed on a filesystem.
-File objects can be associated with host events, network events, and/or file events (e.g., those produced by File Integrity Monitoring [FIM] products or services). File fields provide details about the affected file associated with the event or metric.
-
-
-*`file.accessed`*::
-+
---
-Last time the file was accessed.
-Note that not all filesystems keep track of access time.
-
-type: date
-
---
-
-*`file.attributes`*::
-+
---
-Array of file attributes.
-Attribute names will vary by platform. Here's a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write.
-
-type: keyword
-
-example: ["readonly", "system"]
-
---
-
-*`file.code_signature.digest_algorithm`*::
-+
---
-The hashing algorithm used to sign the process.
-This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm.
-
-type: keyword
-
-example: sha256
-
---
-
-*`file.code_signature.exists`*::
-+
---
-Boolean to capture if a signature is present.
-
-type: boolean
-
-example: true
-
---
-
-*`file.code_signature.signing_id`*::
-+
---
-The identifier used to sign the process.
-This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: com.apple.xpc.proxy
-
---
-
-*`file.code_signature.status`*::
-+
---
-Additional information about the certificate status.
-This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked.
-
-type: keyword
-
-example: ERROR_UNTRUSTED_ROOT
-
---
-
-*`file.code_signature.subject_name`*::
-+
---
-Subject name of the code signer.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`file.code_signature.team_id`*::
-+
---
-The team identifier used to sign the process.
-This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: EQHXZ8M8AV
-
---
-
-*`file.code_signature.timestamp`*::
-+
---
-Date and time when the code signature was generated and signed.
-
-type: date
-
-example: 2021-01-01T12:10:30Z
-
---
-
-*`file.code_signature.trusted`*::
-+
---
-Stores the trust status of the certificate chain.
-Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status.
-
-type: boolean
-
-example: true
-
---
-
-*`file.code_signature.valid`*::
-+
---
-Boolean to capture if the digital signature is verified against the binary content.
-Leave unpopulated if a certificate was unchecked.
-
-type: boolean
-
-example: true
-
---
-
-*`file.created`*::
-+
---
-File creation time.
-Note that not all filesystems store the creation time.
-
-type: date
-
---
-
-*`file.ctime`*::
-+
---
-Last time the file attributes or metadata changed.
-Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file.
-
-type: date
-
---
-
-*`file.device`*::
-+
---
-Device that is the source of the file.
-
-type: keyword
-
-example: sda
-
---
-
-*`file.directory`*::
-+
---
-Directory where the file is located. It should include the drive letter, when appropriate.
-
-type: keyword
-
-example: /home/alice
-
---
-
-*`file.drive_letter`*::
-+
---
-Drive letter where the file is located. This field is only relevant on Windows.
-The value should be uppercase, and not include the colon.
-
-type: keyword
-
-example: C
-
---
-
-*`file.elf.architecture`*::
-+
---
-Machine architecture of the ELF file.
-
-type: keyword
-
-example: x86-64
-
---
-
-*`file.elf.byte_order`*::
-+
---
-Byte sequence of ELF file.
-
-type: keyword
-
-example: Little Endian
-
---
-
-*`file.elf.cpu_type`*::
-+
---
-CPU type of the ELF file.
-
-type: keyword
-
-example: Intel
-
---
-
-*`file.elf.creation_date`*::
-+
---
-Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators.
-
-type: date
-
---
-
-*`file.elf.exports`*::
-+
---
-List of exported element names and types.
-
-type: flattened
-
---
-
-*`file.elf.header.abi_version`*::
-+
---
-Version of the ELF Application Binary Interface (ABI).
-
-type: keyword
-
---
-
-*`file.elf.header.class`*::
-+
---
-Header class of the ELF file.
-
-type: keyword
-
---
-
-*`file.elf.header.data`*::
-+
---
-Data table of the ELF header.
-
-type: keyword
-
---
-
-*`file.elf.header.entrypoint`*::
-+
---
-Header entrypoint of the ELF file.
-
-type: long
-
-format: string
-
---
-
-*`file.elf.header.object_version`*::
-+
---
-"0x1" for original ELF files.
-
-type: keyword
-
---
-
-*`file.elf.header.os_abi`*::
-+
---
-Application Binary Interface (ABI) of the Linux OS.
-
-type: keyword
-
---
-
-*`file.elf.header.type`*::
-+
---
-Header type of the ELF file.
-
-type: keyword
-
---
-
-*`file.elf.header.version`*::
-+
---
-Version of the ELF header.
-
-type: keyword
-
---
-
-*`file.elf.imports`*::
-+
---
-List of imported element names and types.
-
-type: flattened
-
---
-
-*`file.elf.sections`*::
-+
---
-An array containing an object for each section of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`.
-
-type: nested
-
---
-
-*`file.elf.sections.chi2`*::
-+
---
-Chi-square probability distribution of the section.
-
-type: long
-
-format: number
-
---
-
-*`file.elf.sections.entropy`*::
-+
---
-Shannon entropy calculation from the section.
-
-type: long
-
-format: number
-
---
-
-*`file.elf.sections.flags`*::
-+
---
-ELF Section List flags.
-
-type: keyword
-
---
-
-*`file.elf.sections.name`*::
-+
---
-ELF Section List name.
-
-type: keyword
-
---
-
-*`file.elf.sections.physical_offset`*::
-+
---
-ELF Section List offset.
-
-type: keyword
-
---
-
-*`file.elf.sections.physical_size`*::
-+
---
-ELF Section List physical size.
-
-type: long
-
-format: bytes
-
---
-
-*`file.elf.sections.type`*::
-+
---
-ELF Section List type.
-
-type: keyword
-
---
-
-*`file.elf.sections.virtual_address`*::
-+
---
-ELF Section List virtual address.
-
-type: long
-
-format: string
-
---
-
-*`file.elf.sections.virtual_size`*::
-+
---
-ELF Section List virtual size.
-
-type: long
-
-format: string
-
---
-
-*`file.elf.segments`*::
-+
---
-An array containing an object for each segment of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`.
-
-type: nested
-
---
-
-*`file.elf.segments.sections`*::
-+
---
-ELF object segment sections.
-
-type: keyword
-
---
-
-*`file.elf.segments.type`*::
-+
---
-ELF object segment type.
-
-type: keyword
-
---
-
-*`file.elf.shared_libraries`*::
-+
---
-List of shared libraries used by this ELF object.
-
-type: keyword
-
---
-
-*`file.elf.telfhash`*::
-+
---
-telfhash symbol hash for ELF file.
-
-type: keyword
-
---
-
-*`file.extension`*::
-+
---
-File extension, excluding the leading dot.
-Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz").
-
-type: keyword
-
-example: png
-
---
-
-*`file.fork_name`*::
-+
---
-A fork is additional data associated with a filesystem object.
-On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist.
-On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name (see the sketch below).
-
-type: keyword
-
-example: Zone.Identifier
-
---
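-
-A hedged decomposition sketch for a hypothetical NTFS path that carries an Alternate Data Stream:
-
-[source,yaml]
------
-file.path: 'C:\Users\alice\report.docx:Zone.Identifier'   # full path, including the fork name
-file.name: report.docx
-file.extension: docx
-file.fork_name: Zone.Identifier
------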
-
-*`file.gid`*::
-+
---
-Primary group ID (GID) of the file.
-
-type: keyword
-
-example: 1001
-
---
-
-*`file.group`*::
-+
---
-Primary group name of the file.
-
-type: keyword
-
-example: alice
-
---
-
-*`file.hash.md5`*::
-+
---
-MD5 hash.
-
-type: keyword
-
---
-
-*`file.hash.sha1`*::
-+
---
-SHA1 hash.
-
-type: keyword
-
---
-
-*`file.hash.sha256`*::
-+
---
-SHA256 hash.
-
-type: keyword
-
---
-
-*`file.hash.sha512`*::
-+
---
-SHA512 hash.
-
-type: keyword
-
---
-
-*`file.hash.ssdeep`*::
-+
---
-SSDEEP hash.
-
-type: keyword
-
---
-
-*`file.inode`*::
-+
---
-Inode representing the file in the filesystem.
-
-type: keyword
-
-example: 256383
-
---
-
-*`file.mime_type`*::
-+
---
-MIME type should identify the format of the file or stream of bytes using https://www.iana.org/assignments/media-types/media-types.xhtml[IANA official types], where possible. When more than one type is applicable, the most specific type should be used.
-
-type: keyword
-
---
-
-*`file.mode`*::
-+
---
-Mode of the file in octal representation.
-
-type: keyword
-
-example: 0640
-
---
-
-*`file.mtime`*::
-+
---
-Last time the file content was modified.
-
-type: date
-
---
-
-*`file.name`*::
-+
---
-Name of the file including the extension, without the directory.
-
-type: keyword
-
-example: example.png
-
---
-
-*`file.owner`*::
-+
---
-File owner's username.
-
-type: keyword
-
-example: alice
-
---
-
-*`file.path`*::
-+
---
-Full path to the file, including the file name. It should include the drive letter, when appropriate.
-
-type: keyword
-
-example: /home/alice/example.png
-
---
-
-*`file.path.text`*::
-+
---
-type: match_only_text
-
---
-
-*`file.pe.architecture`*::
-+
---
-CPU architecture target for the file.
-
-type: keyword
-
-example: x64
-
---
-
-*`file.pe.company`*::
-+
---
-Internal company name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`file.pe.description`*::
-+
---
-Internal description of the file, provided at compile-time.
-
-type: keyword
-
-example: Paint
-
---
-
-*`file.pe.file_version`*::
-+
---
-Internal version of the file, provided at compile-time.
-
-type: keyword
-
-example: 6.3.9600.17415
-
---
-
-*`file.pe.imphash`*::
-+
---
-A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html.
-
-type: keyword
-
-example: 0c6803c4e922103c4dca5963aad36ddf
-
---
-
-*`file.pe.original_file_name`*::
-+
---
-Internal name of the file, provided at compile-time.
-
-type: keyword
-
-example: MSPAINT.EXE
-
---
-
-*`file.pe.product`*::
-+
---
-Internal product name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft® Windows® Operating System
-
---
-
-*`file.size`*::
-+
---
-File size in bytes.
-Only relevant when `file.type` is "file".
-
-type: long
-
-example: 16384
-
---
-
-*`file.target_path`*::
-+
---
-Target path for symlinks.
-
-type: keyword
-
---
-
-*`file.target_path.text`*::
-+
---
-type: match_only_text
-
---
-
-*`file.type`*::
-+
---
-File type (file, dir, or symlink).
-
-type: keyword
-
-example: file
-
---
-
-*`file.uid`*::
-+
---
-The user ID (UID) or security identifier (SID) of the file owner.
-
-type: keyword
-
-example: 1001
-
---
-
-*`file.x509.alternative_names`*::
-+
---
-List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
-
-type: keyword
-
-example: *.elastic.co
-
---
-
-*`file.x509.issuer.common_name`*::
-+
---
-List of common name (CN) of issuing certificate authority.
-
-type: keyword
-
-example: Example SHA2 High Assurance Server CA
-
---
-
-*`file.x509.issuer.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`file.x509.issuer.distinguished_name`*::
-+
---
-Distinguished name (DN) of issuing certificate authority.
-
-type: keyword
-
-example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
-
---
-
-*`file.x509.issuer.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: Mountain View
-
---
-
-*`file.x509.issuer.organization`*::
-+
---
-List of organizations (O) of issuing certificate authority.
-
-type: keyword
-
-example: Example Inc
-
---
-
-*`file.x509.issuer.organizational_unit`*::
-+
---
-List of organizational units (OU) of issuing certificate authority.
-
-type: keyword
-
-example: www.example.com
-
---
-
-*`file.x509.issuer.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`file.x509.not_after`*::
-+
---
-Time at which the certificate is no longer considered valid.
-
-type: date
-
-example: 2020-07-16 03:15:39+00:00
-
---
-
-*`file.x509.not_before`*::
-+
---
-Time at which the certificate is first considered valid.
-
-type: date
-
-example: 2019-08-16 01:40:25+00:00
-
---
-
-*`file.x509.public_key_algorithm`*::
-+
---
-Algorithm used to generate the public key.
-
-type: keyword
-
-example: RSA
-
---
-
-*`file.x509.public_key_curve`*::
-+
---
-The curve used by the elliptic curve public key algorithm. This is algorithm specific.
-
-type: keyword
-
-example: nistp521
-
---
-
-*`file.x509.public_key_exponent`*::
-+
---
-Exponent used to derive the public key. This is algorithm specific.
-
-type: long
-
-example: 65537
-
-Field is not indexed.
-
---
-
-*`file.x509.public_key_size`*::
-+
---
-The size of the public key space in bits.
-
-type: long
-
-example: 2048
-
---
-
-*`file.x509.serial_number`*::
-+
---
-Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters.
-
-type: keyword
-
-example: 55FBB9C7DEBF09809D12CCAA
-
---
-
-*`file.x509.signature_algorithm`*::
-+
---
-Identifier for certificate signature algorithm. We recommend using names found in the Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353.
-
-type: keyword
-
-example: SHA256-RSA
-
---
-
-*`file.x509.subject.common_name`*::
-+
---
-List of common names (CN) of subject.
-
-type: keyword
-
-example: shared.global.example.net
-
---
-
-*`file.x509.subject.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`file.x509.subject.distinguished_name`*::
-+
---
-Distinguished name (DN) of the certificate subject entity.
-
-type: keyword
-
-example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
-
---
-
-*`file.x509.subject.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: San Francisco
-
---
-
-*`file.x509.subject.organization`*::
-+
---
-List of organizations (O) of subject.
-
-type: keyword
-
-example: Example, Inc.
-
---
-
-*`file.x509.subject.organizational_unit`*::
-+
---
-List of organizational units (OU) of subject.
-
-type: keyword
-
---
-
-*`file.x509.subject.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`file.x509.version_number`*::
-+
---
-Version of x509 format.
-
-type: keyword
-
-example: 3
-
---
-
-[float]
-=== geo
-
-Geo fields can carry data about a specific location related to an event.
-This geolocation information can be derived from techniques such as Geo IP, or be user-supplied.
-
-
-*`geo.city_name`*::
-+
---
-City name.
-
-type: keyword
-
-example: Montreal
-
---
-
-*`geo.continent_code`*::
-+
---
-Two-letter code representing continent's name.
-
-type: keyword
-
-example: NA
-
---
-
-*`geo.continent_name`*::
-+
---
-Name of the continent.
-
-type: keyword
-
-example: North America
-
---
-
-*`geo.country_iso_code`*::
-+
---
-Country ISO code.
-
-type: keyword
-
-example: CA
-
---
-
-*`geo.country_name`*::
-+
---
-Country name.
-
-type: keyword
-
-example: Canada
-
---
-
-*`geo.location`*::
-+
---
-Longitude and latitude.
-
-type: geo_point
-
-example: { "lon": -73.614830, "lat": 45.505918 }
-
---
-
-*`geo.name`*::
-+
---
-User-defined description of a location, at the level of granularity they care about.
-Could be the name of their data centers, the floor number, if this describes a local physical entity, city names.
-Not typically used in automated geolocation.
-
-type: keyword
-
-example: boston-dc
-
---
-
-*`geo.postal_code`*::
-+
---
-Postal code associated with the location.
-Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
-
-type: keyword
-
-example: 94040
-
---
-
-*`geo.region_iso_code`*::
-+
---
-Region ISO code.
-
-type: keyword
-
-example: CA-QC
-
---
-
-*`geo.region_name`*::
-+
---
-Region name.
-
-type: keyword
-
-example: Quebec
-
---
-
-*`geo.timezone`*::
-+
---
-The time zone of the location, such as IANA time zone name.
-
-type: keyword
-
-example: America/Argentina/Buenos_Aires
-
---
-
-[float]
-=== group
-
-The group fields are meant to represent groups that are relevant to the event.
-
-
-*`group.domain`*::
-+
---
-Name of the directory the group is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`group.id`*::
-+
---
-Unique identifier for the group on the system/platform.
-
-type: keyword
-
---
-
-*`group.name`*::
-+
---
-Name of the group.
-
-type: keyword
-
---
-
-[float]
-=== hash
-
-The hash fields represent different bitwise hash algorithms and their values.
-Field names for common hashes (e.g. MD5, SHA1) are predefined. Add fields for other hashes by lowercasing the hash algorithm name and using underscore separators as appropriate (snake case, e.g. sha3_512).
-Note that this fieldset is used for common hashes that may be computed over a range of generic bytes. Entity-specific hashes such as ja3 or imphash are placed in the fieldsets to which they relate (tls and pe, respectively).
-
-
-*`hash.md5`*::
-+
---
-MD5 hash.
-
-type: keyword
-
---
-
-*`hash.sha1`*::
-+
---
-SHA1 hash.
-
-type: keyword
-
---
-
-*`hash.sha256`*::
-+
---
-SHA256 hash.
-
-type: keyword
-
---
-
-*`hash.sha512`*::
-+
---
-SHA512 hash.
-
-type: keyword
-
---
-
-*`hash.ssdeep`*::
-+
---
-SSDEEP hash.
-
-type: keyword
-
---
-
-[float]
-=== host
-
-A host is defined as a general computing instance.
-ECS host.* fields should be populated with details about the host on which the event happened, or from which the measurement was taken. Host types include hardware, virtual machines, Docker containers, and Kubernetes nodes.
-
-
-*`host.architecture`*::
-+
---
-Operating system architecture.
-
-type: keyword
-
-example: x86_64
-
---
-
-*`host.cpu.usage`*::
-+
---
-Percent CPU used, normalized by the number of CPU cores so that it ranges from 0 to 1.
-Scaling factor: 1000.
-For example, on a two-core host this value is the average of the two cores and stays between 0 and 1.
-
-type: scaled_float
-
---
-
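-A minimal sketch of this normalization, assuming per-core utilization samples
-are already available as values between 0 and 1:
-
-[source,python]
-----
-# Illustrative per-core utilization samples for a two-core host.
-cores = [0.40, 0.90]
-
-# Normalize by the number of cores so the result stays between 0 and 1.
-host_cpu_usage = sum(cores) / len(cores)  # 0.65
-
-# With a scaled_float scaling factor of 1000, Elasticsearch stores the value
-# internally as round(0.65 * 1000) = 650.
-print(host_cpu_usage)
-----
-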
-*`host.disk.read.bytes`*::
-+
---
-The total number of bytes (gauge) read successfully (aggregated from all disks) since the last metric collection.
-
-type: long
-
---
-
-*`host.disk.write.bytes`*::
-+
---
-The total number of bytes (gauge) written successfully (aggregated from all disks) since the last metric collection.
-
-type: long
-
---
-
-*`host.domain`*::
-+
---
-Name of the domain of which the host is a member.
-For example, on Windows this could be the host's Active Directory domain or NetBIOS domain name. For Linux this could be the domain of the host's LDAP provider.
-
-type: keyword
-
-example: CONTOSO
-
---
-
-*`host.geo.city_name`*::
-+
---
-City name.
-
-type: keyword
-
-example: Montreal
-
---
-
-*`host.geo.continent_code`*::
-+
---
-Two-letter code representing the continent's name.
-
-type: keyword
-
-example: NA
-
---
-
-*`host.geo.continent_name`*::
-+
---
-Name of the continent.
-
-type: keyword
-
-example: North America
-
---
-
-*`host.geo.country_iso_code`*::
-+
---
-Country ISO code.
-
-type: keyword
-
-example: CA
-
---
-
-*`host.geo.country_name`*::
-+
---
-Country name.
-
-type: keyword
-
-example: Canada
-
---
-
-*`host.geo.location`*::
-+
---
-Longitude and latitude.
-
-type: geo_point
-
-example: { "lon": -73.614830, "lat": 45.505918 }
-
---
-
-*`host.geo.name`*::
-+
---
-User-defined description of a location, at whatever level of granularity the user cares about.
-Could be the name of a data center, a floor number (if this describes a local physical entity), or a city name.
-Not typically used in automated geolocation.
-
-type: keyword
-
-example: boston-dc
-
---
-
-*`host.geo.postal_code`*::
-+
---
-Postal code associated with the location.
-Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
-
-type: keyword
-
-example: 94040
-
---
-
-*`host.geo.region_iso_code`*::
-+
---
-Region ISO code.
-
-type: keyword
-
-example: CA-QC
-
---
-
-*`host.geo.region_name`*::
-+
---
-Region name.
-
-type: keyword
-
-example: Quebec
-
---
-
-*`host.geo.timezone`*::
-+
---
-The time zone of the location, such as the IANA time zone name.
-
-type: keyword
-
-example: America/Argentina/Buenos_Aires
-
---
-
-*`host.hostname`*::
-+
---
-Hostname of the host.
-It normally contains what the `hostname` command returns on the host machine.
-
-type: keyword
-
---
-
-*`host.id`*::
-+
---
-Unique host id.
-As hostname is not always unique, use values that are meaningful in your environment.
-Example: The current usage of `beat.name`.
-
-type: keyword
-
---
-
-*`host.ip`*::
-+
---
-Host ip addresses.
-
-type: ip
-
---
-
-*`host.mac`*::
-+
---
-Host MAC addresses.
-The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen.
-
-type: keyword
-
-example: ["00-00-5E-00-53-23", "00-00-5E-00-53-24"]
-
---
-
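-A minimal sketch of normalizing an arbitrary MAC string into this notation
-(the input value is purely illustrative):
-
-[source,python]
-----
-raw = "00:00:5e:00:53:23"  # illustrative colon-separated input
-
-# Uppercase each octet and join with hyphens, per the RFC 7042 suggestion.
-normalized = "-".join(octet.upper() for octet in raw.replace("-", ":").split(":"))
-print(normalized)  # 00-00-5E-00-53-23
-----
-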
-*`host.name`*::
-+
---
-Name of the host.
-It can contain what `hostname` returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use.
-
-type: keyword
-
---
-
-*`host.network.egress.bytes`*::
-+
---
-The number of bytes (gauge) sent out on all network interfaces by the host since the last metric collection.
-
-type: long
-
---
-
-*`host.network.egress.packets`*::
-+
---
-The number of packets (gauge) sent out on all network interfaces by the host since the last metric collection.
-
-type: long
-
---
-
-*`host.network.ingress.bytes`*::
-+
---
-The number of bytes received (gauge) on all network interfaces by the host since the last metric collection.
-
-type: long
-
---
-
-*`host.network.ingress.packets`*::
-+
---
-The number of packets (gauge) received on all network interfaces by the host since the last metric collection.
-
-type: long
-
---
-
-*`host.os.family`*::
-+
---
-OS family (such as redhat, debian, freebsd, windows).
-
-type: keyword
-
-example: debian
-
---
-
-*`host.os.full`*::
-+
---
-Operating system name, including the version or code name.
-
-type: keyword
-
-example: Mac OS Mojave
-
---
-
-*`host.os.full.text`*::
-+
---
-type: match_only_text
-
---
-
-*`host.os.kernel`*::
-+
---
-Operating system kernel version as a raw string.
-
-type: keyword
-
-example: 4.4.0-112-generic
-
---
-
-*`host.os.name`*::
-+
---
-Operating system name, without the version.
-
-type: keyword
-
-example: Mac OS X
-
---
-
-*`host.os.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`host.os.platform`*::
-+
---
-Operating system platform (such as centos, ubuntu, windows).
-
-type: keyword
-
-example: darwin
-
---
-
-*`host.os.type`*::
-+
---
-Use the `os.type` field to categorize the operating system into one of the broad commercial families.
-One of the following values should be used (lowercase): linux, macos, unix, windows.
-If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition.
-
-type: keyword
-
-example: macos
-
---
-
-*`host.os.version`*::
-+
---
-Operating system version as a raw string.
-
-type: keyword
-
-example: 10.14.1
-
---
-
-*`host.type`*::
-+
---
-Type of host.
-For cloud providers this can be the machine type, like `t2.medium`. For VMs, this could be the container type or other information meaningful in your environment.
-
-type: keyword
-
---
-
-*`host.uptime`*::
-+
---
-Seconds the host has been up.
-
-type: long
-
-example: 1325
-
---
-
-[float]
-=== http
-
-Fields related to HTTP activity. Use the `url` field set to store the URL of the request.
-
-
-*`http.request.body.bytes`*::
-+
---
-Size in bytes of the request body.
-
-type: long
-
-example: 887
-
-format: bytes
-
---
-
-*`http.request.body.content`*::
-+
---
-The full HTTP request body.
-
-type: wildcard
-
-example: Hello world
-
---
-
-*`http.request.body.content.text`*::
-+
---
-type: match_only_text
-
---
-
-*`http.request.bytes`*::
-+
---
-Total size in bytes of the request (body and headers).
-
-type: long
-
-example: 1437
-
-format: bytes
-
---
-
-*`http.request.id`*::
-+
---
-A unique identifier for each HTTP request to correlate logs between clients and servers in transactions.
-The id may be contained in a non-standard HTTP header, such as `X-Request-ID` or `X-Correlation-ID`.
-
-type: keyword
-
-example: 123e4567-e89b-12d3-a456-426614174000
-
---
-
-*`http.request.method`*::
-+
---
-HTTP request method.
-The value should retain its casing from the original event. For example, `GET`, `get`, and `GeT` are all considered valid values for this field.
-
-type: keyword
-
-example: POST
-
---
-
-*`http.request.mime_type`*::
-+
---
-Mime type of the body of the request.
-This value must only be populated based on the content of the request body, not on the `Content-Type` header. Comparing the mime type of a request with the request's Content-Type header can be helpful in detecting threats or misconfigured clients.
-
-type: keyword
-
-example: image/gif
-
---
-
-*`http.request.referrer`*::
-+
---
-Referrer for this HTTP request.
-
-type: keyword
-
-example: https://blog.example.com/
-
---
-
-*`http.response.body.bytes`*::
-+
---
-Size in bytes of the response body.
-
-type: long
-
-example: 887
-
-format: bytes
-
---
-
-*`http.response.body.content`*::
-+
---
-The full HTTP response body.
-
-type: wildcard
-
-example: Hello world
-
---
-
-*`http.response.body.content.text`*::
-+
---
-type: match_only_text
-
---
-
-*`http.response.bytes`*::
-+
---
-Total size in bytes of the response (body and headers).
-
-type: long
-
-example: 1437
-
-format: bytes
-
---
-
-*`http.response.mime_type`*::
-+
---
-Mime type of the body of the response.
-This value must only be populated based on the content of the response body, not on the `Content-Type` header. Comparing the mime type of a response with the response's Content-Type header can be helpful in detecting misconfigured servers.
-
-type: keyword
-
-example: image/gif
-
---
-
-*`http.response.status_code`*::
-+
---
-HTTP response status code.
-
-type: long
-
-example: 404
-
-format: string
-
---
-
-*`http.version`*::
-+
---
-HTTP version.
-
-type: keyword
-
-example: 1.1
-
---
-
-[float]
-=== interface
-
-The interface fields are used to record ingress and egress interface information when reported by an observer (e.g. firewall, router, load balancer) in the context of the observer handling a network connection. In the case of a single observer interface (e.g. network sensor on a span port) only the observer.ingress information should be populated.
-
-
-*`interface.alias`*::
-+
---
-Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming.
-
-type: keyword
-
-example: outside
-
---
-
-*`interface.id`*::
-+
---
-Interface ID as reported by an observer (typically SNMP interface ID).
-
-type: keyword
-
-example: 10
-
---
-
-*`interface.name`*::
-+
---
-Interface name as reported by the system.
-
-type: keyword
-
-example: eth0
-
---
-
-[float]
-=== log
-
-Details about the event's logging mechanism or logging transport.
-The log.* fields are typically populated with details about the logging mechanism used to create and/or transport the event. For example, syslog details belong under `log.syslog.*`.
-The details specific to your event source are typically not logged under `log.*`, but rather in `event.*` or in other ECS fields.
-
-
-*`log.file.path`*::
-+
---
-Full path to the log file this event came from, including the file name. It should include the drive letter, when appropriate.
-If the event wasn't read from a log file, do not populate this field.
-
-type: keyword
-
-example: /var/log/fun-times.log
-
---
-
-*`log.level`*::
-+
---
-Original log level of the log event.
-If the source of the event provides a log level or textual severity, this is the one that goes in `log.level`. If your source doesn't specify one, you may put your event transport's severity here (e.g. Syslog severity).
-Some examples are `warn`, `err`, `i`, `informational`.
-
-type: keyword
-
-example: error
-
---
-
-*`log.logger`*::
-+
---
-The name of the logger inside an application. This is usually the name of the class which initialized the logger, or can be a custom name.
-
-type: keyword
-
-example: org.elasticsearch.bootstrap.Bootstrap
-
---
-
-*`log.origin.file.line`*::
-+
---
-The line number of the file containing the source code which originated the log event.
-
-type: long
-
-example: 42
-
---
-
-*`log.origin.file.name`*::
-+
---
-The name of the file containing the source code which originated the log event.
-Note that this field is not meant to capture the log file. The correct field to capture the log file is `log.file.path`.
-
-type: keyword
-
-example: Bootstrap.java
-
---
-
-*`log.origin.function`*::
-+
---
-The name of the function or method which originated the log event.
-
-type: keyword
-
-example: init
-
---
-
-*`log.syslog`*::
-+
---
-The Syslog metadata of the event, if the event was transmitted via Syslog. Please see RFCs 5424 or 3164.
-
-type: object
-
---
-
-*`log.syslog.facility.code`*::
-+
---
-The Syslog numeric facility of the log event, if available.
-According to RFCs 5424 and 3164, this value should be an integer between 0 and 23.
-
-type: long
-
-example: 23
-
-format: string
-
---
-
-*`log.syslog.facility.name`*::
-+
---
-The Syslog text-based facility of the log event, if available.
-
-type: keyword
-
-example: local7
-
---
-
-*`log.syslog.priority`*::
-+
---
-Syslog numeric priority of the event, if available.
-According to RFCs 5424 and 3164, the priority is 8 * facility + severity. This number is therefore expected to contain a value between 0 and 191.
-
-type: long
-
-example: 135
-
-format: string
-
---
-
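-A minimal sketch of recovering the facility and severity from a priority
-value, using the `8 * facility + severity` relationship described above:
-
-[source,python]
-----
-priority = 135                 # the example value above
-
-facility_code = priority // 8  # 16
-severity_code = priority % 8   # 7
-print(facility_code, severity_code)
-----
-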
-*`log.syslog.severity.code`*::
-+
---
-The Syslog numeric severity of the log event, if available.
-If the event source publishing via Syslog provides a different numeric severity value (e.g. firewall, IDS), your source's numeric severity should go to `event.severity`. If the event source does not specify a distinct severity, you can optionally copy the Syslog severity to `event.severity`.
-
-type: long
-
-example: 3
-
---
-
-*`log.syslog.severity.name`*::
-+
---
-The Syslog text-based severity of the log event, if available.
-If the event source publishing via Syslog provides a different severity value (e.g. firewall, IDS), your source's text severity should go to `log.level`. If the event source does not specify a distinct severity, you can optionally copy the Syslog severity to `log.level`.
-
-type: keyword
-
-example: Error
-
---
-
-[float]
-=== network
-
-The network is defined as the communication path over which a host or network event happens.
-The network.* fields should be populated with details about the network activity associated with an event.
-
-
-*`network.application`*::
-+
---
-When a specific application or service is identified from network connection details (source/dest IPs, ports, certificates, or wire format), this field captures the application's or service's name.
-For example, the original event identifies the network connection as being to a specific web service, such as `facebook` or `twitter`, over an `https` connection.
-The field value must be normalized to lowercase for querying.
-
-type: keyword
-
-example: aim
-
---
-
-*`network.bytes`*::
-+
---
-Total bytes transferred in both directions.
-If `source.bytes` and `destination.bytes` are known, `network.bytes` is their sum.
-
-type: long
-
-example: 368
-
-format: bytes
-
---
-
-*`network.community_id`*::
-+
---
-A hash of source and destination IPs and ports, as well as the protocol used in a communication. This is a tool-agnostic standard to identify flows.
-Learn more at https://github.com/corelight/community-id-spec.
-
-type: keyword
-
-example: 1:hO+sN4H+MG5MY/8hIrXPqc4ZQz0=
-
---
-
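-A minimal sketch of the version 1 computation for TCP/UDP flows over IPv4,
-following the published specification; ICMP and IPv6 handling are omitted and
-the addresses in the usage line are purely illustrative:
-
-[source,python]
-----
-import base64
-import hashlib
-import socket
-import struct
-
-def community_id_v1(proto, saddr, daddr, sport, dport, seed=0):
-    src, dst = socket.inet_aton(saddr), socket.inet_aton(daddr)
-    # Order the endpoints so both directions of a flow hash identically.
-    if (src, sport) > (dst, dport):
-        src, dst, sport, dport = dst, src, dport, sport
-    data = struct.pack("!H", seed) + src + dst + struct.pack("!BBHH", proto, 0, sport, dport)
-    return "1:" + base64.b64encode(hashlib.sha1(data).digest()).decode()
-
-print(community_id_v1(6, "10.0.0.1", "10.0.0.2", 51234, 443))  # proto 6 = TCP
-----
-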
-*`network.direction`*::
-+
---
-Direction of the network traffic.
-Recommended values are:
- * ingress
- * egress
- * inbound
- * outbound
- * internal
- * external
- * unknown
-
-When mapping events from a host-based monitoring context, populate this field from the host's point of view, using the values "ingress" or "egress".
-When mapping events from a network or perimeter-based monitoring context, populate this field from the point of view of the network perimeter, using the values "inbound", "outbound", "internal" or "external".
-Note that "internal" is not crossing perimeter boundaries, and is meant to describe communication between two hosts within the perimeter. Note also that "external" is meant to describe traffic between two hosts that are external to the perimeter. This could for example be useful for ISPs or VPN service providers.
-
-type: keyword
-
-example: inbound
-
---
-
-*`network.forwarded_ip`*::
-+
---
-Host IP address when the source IP address is the proxy.
-
-type: ip
-
-example: 192.1.1.2
-
---
-
-*`network.iana_number`*::
-+
---
-IANA Protocol Number (https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml). Standardized list of protocols. This aligns well with NetFlow and sFlow related logs which use the IANA Protocol Number.
-
-type: keyword
-
-example: 6
-
---
-
-*`network.inner`*::
-+
---
-Network.inner fields are added in addition to network.vlan fields to describe the innermost VLAN when q-in-q VLAN tagging is present. Allowed fields include vlan.id and vlan.name. Inner vlan fields are typically used when sending traffic with multiple 802.1q encapsulations to a network sensor (e.g. Zeek, Wireshark).
-
-type: object
-
---
-
-*`network.inner.vlan.id`*::
-+
---
-VLAN ID as reported by the observer.
-
-type: keyword
-
-example: 10
-
---
-
-*`network.inner.vlan.name`*::
-+
---
-Optional VLAN name as reported by the observer.
-
-type: keyword
-
-example: outside
-
---
-
-*`network.name`*::
-+
---
-Name given by operators to sections of their network.
-
-type: keyword
-
-example: Guest Wifi
-
---
-
-*`network.packets`*::
-+
---
-Total packets transferred in both directions.
-If `source.packets` and `destination.packets` are known, `network.packets` is their sum.
-
-type: long
-
-example: 24
-
---
-
-*`network.protocol`*::
-+
---
-In the OSI Model this would be the Application Layer protocol. For example, `http`, `dns`, or `ssh`.
-The field value must be normalized to lowercase for querying.
-
-type: keyword
-
-example: http
-
---
-
-*`network.transport`*::
-+
---
-Same as network.iana_number, but instead using the keyword name of the transport layer (udp, tcp, ipv6-icmp, etc.).
-The field value must be normalized to lowercase for querying.
-
-type: keyword
-
-example: tcp
-
---
-
-*`network.type`*::
-+
---
-In the OSI Model this would be the Network Layer: ipv4, ipv6, ipsec, pim, etc.
-The field value must be normalized to lowercase for querying.
-
-type: keyword
-
-example: ipv4
-
---
-
-*`network.vlan.id`*::
-+
---
-VLAN ID as reported by the observer.
-
-type: keyword
-
-example: 10
-
---
-
-*`network.vlan.name`*::
-+
---
-Optional VLAN name as reported by the observer.
-
-type: keyword
-
-example: outside
-
---
-
-[float]
-=== observer
-
-An observer is defined as a special network, security, or application device used to detect, observe, or create network, security, or application-related events and metrics.
-This could be a custom hardware appliance or a server that has been configured to run special network, security, or application software. Examples include firewalls, web proxies, intrusion detection/prevention systems, network monitoring sensors, web application firewalls, data loss prevention systems, and APM servers. The observer.* fields shall be populated with details of the system, if any, that detects, observes and/or creates a network, security, or application event or metric. Message queues and ETL components used in processing events or metrics are not considered observers in ECS.
-
-
-*`observer.egress`*::
-+
---
-Observer.egress holds information like interface number and name, vlan, and zone information to classify egress traffic. Single armed monitoring such as a network sensor on a span port should only use observer.ingress to categorize traffic.
-
-type: object
-
---
-
-*`observer.egress.interface.alias`*::
-+
---
-Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming.
-
-type: keyword
-
-example: outside
-
---
-
-*`observer.egress.interface.id`*::
-+
---
-Interface ID as reported by an observer (typically SNMP interface ID).
-
-type: keyword
-
-example: 10
-
---
-
-*`observer.egress.interface.name`*::
-+
---
-Interface name as reported by the system.
-
-type: keyword
-
-example: eth0
-
---
-
-*`observer.egress.vlan.id`*::
-+
---
-VLAN ID as reported by the observer.
-
-type: keyword
-
-example: 10
-
---
-
-*`observer.egress.vlan.name`*::
-+
---
-Optional VLAN name as reported by the observer.
-
-type: keyword
-
-example: outside
-
---
-
-*`observer.egress.zone`*::
-+
---
-Network zone of outbound traffic as reported by the observer to categorize the destination area of egress traffic, e.g. Internal, External, DMZ, HR, Legal, etc.
-
-type: keyword
-
-example: Public_Internet
-
---
-
-*`observer.geo.city_name`*::
-+
---
-City name.
-
-type: keyword
-
-example: Montreal
-
---
-
-*`observer.geo.continent_code`*::
-+
---
-Two-letter code representing the continent's name.
-
-type: keyword
-
-example: NA
-
---
-
-*`observer.geo.continent_name`*::
-+
---
-Name of the continent.
-
-type: keyword
-
-example: North America
-
---
-
-*`observer.geo.country_iso_code`*::
-+
---
-Country ISO code.
-
-type: keyword
-
-example: CA
-
---
-
-*`observer.geo.country_name`*::
-+
---
-Country name.
-
-type: keyword
-
-example: Canada
-
---
-
-*`observer.geo.location`*::
-+
---
-Longitude and latitude.
-
-type: geo_point
-
-example: { "lon": -73.614830, "lat": 45.505918 }
-
---
-
-*`observer.geo.name`*::
-+
---
-User-defined description of a location, at whatever level of granularity the user cares about.
-Could be the name of a data center, a floor number (if this describes a local physical entity), or a city name.
-Not typically used in automated geolocation.
-
-type: keyword
-
-example: boston-dc
-
---
-
-*`observer.geo.postal_code`*::
-+
---
-Postal code associated with the location.
-Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
-
-type: keyword
-
-example: 94040
-
---
-
-*`observer.geo.region_iso_code`*::
-+
---
-Region ISO code.
-
-type: keyword
-
-example: CA-QC
-
---
-
-*`observer.geo.region_name`*::
-+
---
-Region name.
-
-type: keyword
-
-example: Quebec
-
---
-
-*`observer.geo.timezone`*::
-+
---
-The time zone of the location, such as the IANA time zone name.
-
-type: keyword
-
-example: America/Argentina/Buenos_Aires
-
---
-
-*`observer.hostname`*::
-+
---
-Hostname of the observer.
-
-type: keyword
-
---
-
-*`observer.ingress`*::
-+
---
-Observer.ingress holds information like interface number and name, vlan, and zone information to classify ingress traffic. Single armed monitoring such as a network sensor on a span port should only use observer.ingress to categorize traffic.
-
-type: object
-
---
-
-*`observer.ingress.interface.alias`*::
-+
---
-Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming.
-
-type: keyword
-
-example: outside
-
---
-
-*`observer.ingress.interface.id`*::
-+
---
-Interface ID as reported by an observer (typically SNMP interface ID).
-
-type: keyword
-
-example: 10
-
---
-
-*`observer.ingress.interface.name`*::
-+
---
-Interface name as reported by the system.
-
-type: keyword
-
-example: eth0
-
---
-
-*`observer.ingress.vlan.id`*::
-+
---
-VLAN ID as reported by the observer.
-
-type: keyword
-
-example: 10
-
---
-
-*`observer.ingress.vlan.name`*::
-+
---
-Optional VLAN name as reported by the observer.
-
-type: keyword
-
-example: outside
-
---
-
-*`observer.ingress.zone`*::
-+
---
-Network zone of incoming traffic as reported by the observer to categorize the source area of ingress traffic, e.g. Internal, External, DMZ, HR, Legal, etc.
-
-type: keyword
-
-example: DMZ
-
---
-
-*`observer.ip`*::
-+
---
-IP addresses of the observer.
-
-type: ip
-
---
-
-*`observer.mac`*::
-+
---
-MAC addresses of the observer.
-The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen.
-
-type: keyword
-
-example: ["00-00-5E-00-53-23", "00-00-5E-00-53-24"]
-
---
-
-*`observer.name`*::
-+
---
-Custom name of the observer.
-This is a name that can be given to an observer. This can be helpful for example if multiple firewalls of the same model are used in an organization.
-If no custom name is needed, the field can be left empty.
-
-type: keyword
-
-example: 1_proxySG
-
---
-
-*`observer.os.family`*::
-+
---
-OS family (such as redhat, debian, freebsd, windows).
-
-type: keyword
-
-example: debian
-
---
-
-*`observer.os.full`*::
-+
---
-Operating system name, including the version or code name.
-
-type: keyword
-
-example: Mac OS Mojave
-
---
-
-*`observer.os.full.text`*::
-+
---
-type: match_only_text
-
---
-
-*`observer.os.kernel`*::
-+
---
-Operating system kernel version as a raw string.
-
-type: keyword
-
-example: 4.4.0-112-generic
-
---
-
-*`observer.os.name`*::
-+
---
-Operating system name, without the version.
-
-type: keyword
-
-example: Mac OS X
-
---
-
-*`observer.os.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`observer.os.platform`*::
-+
---
-Operating system platform (such as centos, ubuntu, windows).
-
-type: keyword
-
-example: darwin
-
---
-
-*`observer.os.type`*::
-+
---
-Use the `os.type` field to categorize the operating system into one of the broad commercial families.
-One of the following values should be used (lowercase): linux, macos, unix, windows.
-If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition.
-
-type: keyword
-
-example: macos
-
---
-
-*`observer.os.version`*::
-+
---
-Operating system version as a raw string.
-
-type: keyword
-
-example: 10.14.1
-
---
-
-*`observer.product`*::
-+
---
-The product name of the observer.
-
-type: keyword
-
-example: s200
-
---
-
-*`observer.serial_number`*::
-+
---
-Observer serial number.
-
-type: keyword
-
---
-
-*`observer.type`*::
-+
---
-The type of the observer the data is coming from.
-There is no predefined list of observer types. Some examples are `forwarder`, `firewall`, `ids`, `ips`, `proxy`, `poller`, `sensor`, `APM server`.
-
-type: keyword
-
-example: firewall
-
---
-
-*`observer.vendor`*::
-+
---
-Vendor name of the observer.
-
-type: keyword
-
-example: Symantec
-
---
-
-*`observer.version`*::
-+
---
-Observer version.
-
-type: keyword
-
---
-
-[float]
-=== orchestrator
-
-Fields that describe the resources which container orchestrators manage or act upon.
-
-
-*`orchestrator.api_version`*::
-+
---
-API version being used to carry out the action.
-
-type: keyword
-
-example: v1beta1
-
---
-
-*`orchestrator.cluster.name`*::
-+
---
-Name of the cluster.
-
-type: keyword
-
---
-
-*`orchestrator.cluster.url`*::
-+
---
-URL of the API used to manage the cluster.
-
-type: keyword
-
---
-
-*`orchestrator.cluster.version`*::
-+
---
-The version of the cluster.
-
-type: keyword
-
---
-
-*`orchestrator.namespace`*::
-+
---
-Namespace in which the action is taking place.
-
-type: keyword
-
-example: kube-system
-
---
-
-*`orchestrator.organization`*::
-+
---
-Organization affected by the event (for multi-tenant orchestrator setups).
-
-type: keyword
-
-example: elastic
-
---
-
-*`orchestrator.resource.name`*::
-+
---
-Name of the resource being acted upon.
-
-type: keyword
-
-example: test-pod-cdcws
-
---
-
-*`orchestrator.resource.type`*::
-+
---
-Type of resource being acted upon.
-
-type: keyword
-
-example: service
-
---
-
-*`orchestrator.type`*::
-+
---
-Orchestrator cluster type (e.g. kubernetes, nomad or cloudfoundry).
-
-type: keyword
-
-example: kubernetes
-
---
-
-[float]
-=== organization
-
-The organization fields enrich data with information about the company or entity the data is associated with.
-These fields help you arrange or filter data stored in an index by one or multiple organizations.
-
-
-*`organization.id`*::
-+
---
-Unique identifier for the organization.
-
-type: keyword
-
---
-
-*`organization.name`*::
-+
---
-Organization name.
-
-type: keyword
-
---
-
-*`organization.name.text`*::
-+
---
-type: match_only_text
-
---
-
-[float]
-=== os
-
-The OS fields contain information about the operating system.
-
-
-*`os.family`*::
-+
---
-OS family (such as redhat, debian, freebsd, windows).
-
-type: keyword
-
-example: debian
-
---
-
-*`os.full`*::
-+
---
-Operating system name, including the version or code name.
-
-type: keyword
-
-example: Mac OS Mojave
-
---
-
-*`os.full.text`*::
-+
---
-type: match_only_text
-
---
-
-*`os.kernel`*::
-+
---
-Operating system kernel version as a raw string.
-
-type: keyword
-
-example: 4.4.0-112-generic
-
---
-
-*`os.name`*::
-+
---
-Operating system name, without the version.
-
-type: keyword
-
-example: Mac OS X
-
---
-
-*`os.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`os.platform`*::
-+
---
-Operating system platform (such as centos, ubuntu, windows).
-
-type: keyword
-
-example: darwin
-
---
-
-*`os.type`*::
-+
---
-Use the `os.type` field to categorize the operating system into one of the broad commercial families.
-One of the following values should be used (lowercase): linux, macos, unix, windows.
-If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition.
-
-type: keyword
-
-example: macos
-
---
-
-*`os.version`*::
-+
---
-Operating system version as a raw string.
-
-type: keyword
-
-example: 10.14.1
-
---
-
-[float]
-=== package
-
-These fields contain information about an installed software package: general information such as name, version, or size, as well as installation details such as time or location.
-
-
-*`package.architecture`*::
-+
---
-Package architecture.
-
-type: keyword
-
-example: x86_64
-
---
-
-*`package.build_version`*::
-+
---
-Additional information about the build version of the installed package.
-For example use the commit SHA of a non-released package.
-
-type: keyword
-
-example: 36f4f7e89dd61b0988b12ee000b98966867710cd
-
---
-
-*`package.checksum`*::
-+
---
-Checksum of the installed package for verification.
-
-type: keyword
-
-example: 68b329da9893e34099c7d8ad5cb9c940
-
---
-
-*`package.description`*::
-+
---
-Description of the package.
-
-type: keyword
-
-example: Open source programming language to build simple/reliable/efficient software.
-
---
-
-*`package.install_scope`*::
-+
---
-Indicates how the package was installed, e.g. user-local, global.
-
-type: keyword
-
-example: global
-
---
-
-*`package.installed`*::
-+
---
-Time when the package was installed.
-
-type: date
-
---
-
-*`package.license`*::
-+
---
-License under which the package was released.
-Use a short name, e.g. the license identifier from SPDX License List where possible (https://spdx.org/licenses/).
-
-type: keyword
-
-example: Apache License 2.0
-
---
-
-*`package.name`*::
-+
---
-Package name.
-
-type: keyword
-
-example: go
-
---
-
-*`package.path`*::
-+
---
-Path where the package is installed.
-
-type: keyword
-
-example: /usr/local/Cellar/go/1.12.9/
-
---
-
-*`package.reference`*::
-+
---
-Home page or reference URL of the software in this package, if available.
-
-type: keyword
-
-example: https://golang.org
-
---
-
-*`package.size`*::
-+
---
-Package size in bytes.
-
-type: long
-
-example: 62231
-
-format: string
-
---
-
-*`package.type`*::
-+
---
-Type of package.
-This should contain the package file type, rather than the package manager name. Examples: rpm, dpkg, brew, npm, gem, nupkg, jar.
-
-type: keyword
-
-example: rpm
-
---
-
-*`package.version`*::
-+
---
-Package version.
-
-type: keyword
-
-example: 1.12.9
-
---
-
-[float]
-=== pe
-
-These fields contain Windows Portable Executable (PE) metadata.
-
-
-*`pe.architecture`*::
-+
---
-CPU architecture target for the file.
-
-type: keyword
-
-example: x64
-
---
-
-*`pe.company`*::
-+
---
-Internal company name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`pe.description`*::
-+
---
-Internal description of the file, provided at compile-time.
-
-type: keyword
-
-example: Paint
-
---
-
-*`pe.file_version`*::
-+
---
-Internal version of the file, provided at compile-time.
-
-type: keyword
-
-example: 6.3.9600.17415
-
---
-
-*`pe.imphash`*::
-+
---
-A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html.
-
-type: keyword
-
-example: 0c6803c4e922103c4dca5963aad36ddf
-
---
-
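-A minimal sketch of computing such an import hash, assuming the third-party
-`pefile` package is installed and using a purely illustrative file path:
-
-[source,python]
-----
-import pefile  # third-party package, assumed installed
-
-pe = pefile.PE("C:/Windows/System32/mspaint.exe")  # illustrative path
-print(pe.get_imphash())  # hex digest suitable for pe.imphash
-----
-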
-*`pe.original_file_name`*::
-+
---
-Internal name of the file, provided at compile-time.
-
-type: keyword
-
-example: MSPAINT.EXE
-
---
-
-*`pe.product`*::
-+
---
-Internal product name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft® Windows® Operating System
-
---
-
-[float]
-=== process
-
-These fields contain information about a process.
-These fields can help you correlate metrics information with a process id/name from a log message. The `process.pid` often stays in the metric itself and is copied to the global field for correlation.
-
-
-*`process.args`*::
-+
---
-Array of process arguments, starting with the absolute path to the executable.
-May be filtered to protect sensitive information.
-
-type: keyword
-
-example: ["/usr/bin/ssh", "-l", "user", "10.0.0.16"]
-
---
-
-*`process.args_count`*::
-+
---
-Length of the process.args array.
-This field can be useful for querying or performing bucket analysis on how many arguments were provided to start a process. More arguments may be an indication of suspicious activity.
-
-type: long
-
-example: 4
-
---
-
-*`process.code_signature.digest_algorithm`*::
-+
---
-The hashing algorithm used to sign the process.
-This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm.
-
-type: keyword
-
-example: sha256
-
---
-
-*`process.code_signature.exists`*::
-+
---
-Boolean to capture if a signature is present.
-
-type: boolean
-
-example: true
-
---
-
-*`process.code_signature.signing_id`*::
-+
---
-The identifier used to sign the process.
-This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: com.apple.xpc.proxy
-
---
-
-*`process.code_signature.status`*::
-+
---
-Additional information about the certificate status.
-This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked.
-
-type: keyword
-
-example: ERROR_UNTRUSTED_ROOT
-
---
-
-*`process.code_signature.subject_name`*::
-+
---
-Subject name of the code signer.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`process.code_signature.team_id`*::
-+
---
-The team identifier used to sign the process.
-This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: EQHXZ8M8AV
-
---
-
-*`process.code_signature.timestamp`*::
-+
---
-Date and time when the code signature was generated and signed.
-
-type: date
-
-example: 2021-01-01T12:10:30Z
-
---
-
-*`process.code_signature.trusted`*::
-+
---
-Stores the trust status of the certificate chain.
-Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status.
-
-type: boolean
-
-example: true
-
---
-
-*`process.code_signature.valid`*::
-+
---
-Boolean to capture if the digital signature is verified against the binary content.
-Leave unpopulated if a certificate was unchecked.
-
-type: boolean
-
-example: true
-
---
-
-*`process.command_line`*::
-+
---
-Full command line that started the process, including the absolute path to the executable, and all arguments.
-Some arguments may be filtered to protect sensitive information.
-
-type: wildcard
-
-example: /usr/bin/ssh -l user 10.0.0.16
-
---
-
-*`process.command_line.text`*::
-+
---
-type: match_only_text
-
---
-
-*`process.elf.architecture`*::
-+
---
-Machine architecture of the ELF file.
-
-type: keyword
-
-example: x86-64
-
---
-
-*`process.elf.byte_order`*::
-+
---
-Byte sequence of ELF file.
-
-type: keyword
-
-example: Little Endian
-
---
-
-*`process.elf.cpu_type`*::
-+
---
-CPU type of the ELF file.
-
-type: keyword
-
-example: Intel
-
---
-
-*`process.elf.creation_date`*::
-+
---
-Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators.
-
-type: date
-
---
-
-*`process.elf.exports`*::
-+
---
-List of exported element names and types.
-
-type: flattened
-
---
-
-*`process.elf.header.abi_version`*::
-+
---
-Version of the ELF Application Binary Interface (ABI).
-
-type: keyword
-
---
-
-*`process.elf.header.class`*::
-+
---
-Header class of the ELF file.
-
-type: keyword
-
---
-
-*`process.elf.header.data`*::
-+
---
-Data table of the ELF header.
-
-type: keyword
-
---
-
-*`process.elf.header.entrypoint`*::
-+
---
-Header entrypoint of the ELF file.
-
-type: long
-
-format: string
-
---
-
-*`process.elf.header.object_version`*::
-+
---
-"0x1" for original ELF files.
-
-type: keyword
-
---
-
-*`process.elf.header.os_abi`*::
-+
---
-Application Binary Interface (ABI) of the Linux OS.
-
-type: keyword
-
---
-
-*`process.elf.header.type`*::
-+
---
-Header type of the ELF file.
-
-type: keyword
-
---
-
-*`process.elf.header.version`*::
-+
---
-Version of the ELF header.
-
-type: keyword
-
---
-
-*`process.elf.imports`*::
-+
---
-List of imported element names and types.
-
-type: flattened
-
---
-
-*`process.elf.sections`*::
-+
---
-An array containing an object for each section of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`.
-
-type: nested
-
---
-
-*`process.elf.sections.chi2`*::
-+
---
-Chi-square probability distribution of the section.
-
-type: long
-
-format: number
-
---
-
-*`process.elf.sections.entropy`*::
-+
---
-Shannon entropy calculation from the section.
-
-type: long
-
-format: number
-
---
-
-*`process.elf.sections.flags`*::
-+
---
-ELF Section List flags.
-
-type: keyword
-
---
-
-*`process.elf.sections.name`*::
-+
---
-ELF Section List name.
-
-type: keyword
-
---
-
-*`process.elf.sections.physical_offset`*::
-+
---
-ELF Section List offset.
-
-type: keyword
-
---
-
-*`process.elf.sections.physical_size`*::
-+
---
-ELF Section List physical size.
-
-type: long
-
-format: bytes
-
---
-
-*`process.elf.sections.type`*::
-+
---
-ELF Section List type.
-
-type: keyword
-
---
-
-*`process.elf.sections.virtual_address`*::
-+
---
-ELF Section List virtual address.
-
-type: long
-
-format: string
-
---
-
-*`process.elf.sections.virtual_size`*::
-+
---
-ELF Section List virtual size.
-
-type: long
-
-format: string
-
---
-
-*`process.elf.segments`*::
-+
---
-An array containing an object for each segment of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`.
-
-type: nested
-
---
-
-*`process.elf.segments.sections`*::
-+
---
-ELF object segment sections.
-
-type: keyword
-
---
-
-*`process.elf.segments.type`*::
-+
---
-ELF object segment type.
-
-type: keyword
-
---
-
-*`process.elf.shared_libraries`*::
-+
---
-List of shared libraries used by this ELF object.
-
-type: keyword
-
---
-
-*`process.elf.telfhash`*::
-+
---
-telfhash symbol hash for ELF file.
-
-type: keyword
-
---
-
-*`process.end`*::
-+
---
-The time the process ended.
-
-type: date
-
-example: 2016-05-23T08:05:34.853Z
-
---
-
-*`process.entity_id`*::
-+
---
-Unique identifier for the process.
-The implementation of this is specified by the data source, but some examples of what could be used here are a process-generated UUID, Sysmon Process GUIDs, or a hash of some uniquely identifying components of a process.
-Constructing a globally unique identifier is a common practice to mitigate PID reuse as well as to identify a specific process over time, across multiple monitored hosts.
-
-type: keyword
-
-example: c2c455d9f99375d
-
---
-
-*`process.executable`*::
-+
---
-Absolute path to the process executable.
-
-type: keyword
-
-example: /usr/bin/ssh
-
---
-
-*`process.executable.text`*::
-+
---
-type: match_only_text
-
---
-
-*`process.exit_code`*::
-+
---
-The exit code of the process, if this is a termination event.
-The field should be absent if there is no exit code for the event (e.g. process start).
-
-type: long
-
-example: 137
-
---
-
-*`process.hash.md5`*::
-+
---
-MD5 hash.
-
-type: keyword
-
---
-
-*`process.hash.sha1`*::
-+
---
-SHA1 hash.
-
-type: keyword
-
---
-
-*`process.hash.sha256`*::
-+
---
-SHA256 hash.
-
-type: keyword
-
---
-
-*`process.hash.sha512`*::
-+
---
-SHA512 hash.
-
-type: keyword
-
---
-
-*`process.hash.ssdeep`*::
-+
---
-SSDEEP hash.
-
-type: keyword
-
---
-
-*`process.name`*::
-+
---
-Process name.
-Sometimes called program name or similar.
-
-type: keyword
-
-example: ssh
-
---
-
-*`process.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`process.parent.args`*::
-+
---
-Array of process arguments, starting with the absolute path to the executable.
-May be filtered to protect sensitive information.
-
-type: keyword
-
-example: ["/usr/bin/ssh", "-l", "user", "10.0.0.16"]
-
---
-
-*`process.parent.args_count`*::
-+
---
-Length of the process.args array.
-This field can be useful for querying or performing bucket analysis on how many arguments were provided to start a process. More arguments may be an indication of suspicious activity.
-
-type: long
-
-example: 4
-
---
-
-*`process.parent.code_signature.digest_algorithm`*::
-+
---
-The hashing algorithm used to sign the process.
-This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm.
-
-type: keyword
-
-example: sha256
-
---
-
-*`process.parent.code_signature.exists`*::
-+
---
-Boolean to capture if a signature is present.
-
-type: boolean
-
-example: true
-
---
-
-*`process.parent.code_signature.signing_id`*::
-+
---
-The identifier used to sign the process.
-This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: com.apple.xpc.proxy
-
---
-
-*`process.parent.code_signature.status`*::
-+
---
-Additional information about the certificate status.
-This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked.
-
-type: keyword
-
-example: ERROR_UNTRUSTED_ROOT
-
---
-
-*`process.parent.code_signature.subject_name`*::
-+
---
-Subject name of the code signer.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`process.parent.code_signature.team_id`*::
-+
---
-The team identifier used to sign the process.
-This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: EQHXZ8M8AV
-
---
-
-*`process.parent.code_signature.timestamp`*::
-+
---
-Date and time when the code signature was generated and signed.
-
-type: date
-
-example: 2021-01-01T12:10:30Z
-
---
-
-*`process.parent.code_signature.trusted`*::
-+
---
-Stores the trust status of the certificate chain.
-Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status.
-
-type: boolean
-
-example: true
-
---
-
-*`process.parent.code_signature.valid`*::
-+
---
-Boolean to capture if the digital signature is verified against the binary content.
-Leave unpopulated if a certificate was unchecked.
-
-type: boolean
-
-example: true
-
---
-
-*`process.parent.command_line`*::
-+
---
-Full command line that started the process, including the absolute path to the executable, and all arguments.
-Some arguments may be filtered to protect sensitive information.
-
-type: wildcard
-
-example: /usr/bin/ssh -l user 10.0.0.16
-
---
-
-*`process.parent.command_line.text`*::
-+
---
-type: match_only_text
-
---
-
-*`process.parent.elf.architecture`*::
-+
---
-Machine architecture of the ELF file.
-
-type: keyword
-
-example: x86-64
-
---
-
-*`process.parent.elf.byte_order`*::
-+
---
-Byte sequence of ELF file.
-
-type: keyword
-
-example: Little Endian
-
---
-
-*`process.parent.elf.cpu_type`*::
-+
---
-CPU type of the ELF file.
-
-type: keyword
-
-example: Intel
-
---
-
-*`process.parent.elf.creation_date`*::
-+
---
-Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators.
-
-type: date
-
---
-
-*`process.parent.elf.exports`*::
-+
---
-List of exported element names and types.
-
-type: flattened
-
---
-
-*`process.parent.elf.header.abi_version`*::
-+
---
-Version of the ELF Application Binary Interface (ABI).
-
-type: keyword
-
---
-
-*`process.parent.elf.header.class`*::
-+
---
-Header class of the ELF file.
-
-type: keyword
-
---
-
-*`process.parent.elf.header.data`*::
-+
---
-Data table of the ELF header.
-
-type: keyword
-
---
-
-*`process.parent.elf.header.entrypoint`*::
-+
---
-Header entrypoint of the ELF file.
-
-type: long
-
-format: string
-
---
-
-*`process.parent.elf.header.object_version`*::
-+
---
-"0x1" for original ELF files.
-
-type: keyword
-
---
-
-*`process.parent.elf.header.os_abi`*::
-+
---
-Application Binary Interface (ABI) of the Linux OS.
-
-type: keyword
-
---
-
-*`process.parent.elf.header.type`*::
-+
---
-Header type of the ELF file.
-
-type: keyword
-
---
-
-*`process.parent.elf.header.version`*::
-+
---
-Version of the ELF header.
-
-type: keyword
-
---
-
-*`process.parent.elf.imports`*::
-+
---
-List of imported element names and types.
-
-type: flattened
-
---
-
-*`process.parent.elf.sections`*::
-+
---
-An array containing an object for each section of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`.
-
-type: nested
-
---
-
-*`process.parent.elf.sections.chi2`*::
-+
---
-Chi-square probability distribution of the section.
-
-type: long
-
-format: number
-
---
-
-*`process.parent.elf.sections.entropy`*::
-+
---
-Shannon entropy calculation from the section.
-
-type: long
-
-format: number
-
---
-
-*`process.parent.elf.sections.flags`*::
-+
---
-ELF Section List flags.
-
-type: keyword
-
---
-
-*`process.parent.elf.sections.name`*::
-+
---
-ELF Section List name.
-
-type: keyword
-
---
-
-*`process.parent.elf.sections.physical_offset`*::
-+
---
-ELF Section List offset.
-
-type: keyword
-
---
-
-*`process.parent.elf.sections.physical_size`*::
-+
---
-ELF Section List physical size.
-
-type: long
-
-format: bytes
-
---
-
-*`process.parent.elf.sections.type`*::
-+
---
-ELF Section List type.
-
-type: keyword
-
---
-
-*`process.parent.elf.sections.virtual_address`*::
-+
---
-ELF Section List virtual address.
-
-type: long
-
-format: string
-
---
-
-*`process.parent.elf.sections.virtual_size`*::
-+
---
-ELF Section List virtual size.
-
-type: long
-
-format: string
-
---
-
-*`process.parent.elf.segments`*::
-+
---
-An array containing an object for each segment of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`.
-
-type: nested
-
---
-
-*`process.parent.elf.segments.sections`*::
-+
---
-ELF object segment sections.
-
-type: keyword
-
---
-
-*`process.parent.elf.segments.type`*::
-+
---
-ELF object segment type.
-
-type: keyword
-
---
-
-*`process.parent.elf.shared_libraries`*::
-+
---
-List of shared libraries used by this ELF object.
-
-type: keyword
-
---
-
-*`process.parent.elf.telfhash`*::
-+
---
-telfhash symbol hash for ELF file.
-
-type: keyword
-
---
-
-*`process.parent.end`*::
-+
---
-The time the process ended.
-
-type: date
-
-example: 2016-05-23T08:05:34.853Z
-
---
-
-*`process.parent.entity_id`*::
-+
---
-Unique identifier for the process.
-The implementation of this is specified by the data source, but some examples of what could be used here are a process-generated UUID, Sysmon Process GUIDs, or a hash of some uniquely identifying components of a process.
-Constructing a globally unique identifier is a common practice to mitigate PID reuse as well as to identify a specific process over time, across multiple monitored hosts.
-
-type: keyword
-
-example: c2c455d9f99375d
-
---
-
-*`process.parent.executable`*::
-+
---
-Absolute path to the process executable.
-
-type: keyword
-
-example: /usr/bin/ssh
-
---
-
-*`process.parent.executable.text`*::
-+
---
-type: match_only_text
-
---
-
-*`process.parent.exit_code`*::
-+
---
-The exit code of the process, if this is a termination event.
-The field should be absent if there is no exit code for the event (e.g. process start).
-
-type: long
-
-example: 137
-
---
-
-*`process.parent.hash.md5`*::
-+
---
-MD5 hash.
-
-type: keyword
-
---
-
-*`process.parent.hash.sha1`*::
-+
---
-SHA1 hash.
-
-type: keyword
-
---
-
-*`process.parent.hash.sha256`*::
-+
---
-SHA256 hash.
-
-type: keyword
-
---
-
-*`process.parent.hash.sha512`*::
-+
---
-SHA512 hash.
-
-type: keyword
-
---
-
-*`process.parent.hash.ssdeep`*::
-+
---
-SSDEEP hash.
-
-type: keyword
-
---
-
-*`process.parent.name`*::
-+
---
-Process name.
-Sometimes called program name or similar.
-
-type: keyword
-
-example: ssh
-
---
-
-*`process.parent.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`process.parent.pe.architecture`*::
-+
---
-CPU architecture target for the file.
-
-type: keyword
-
-example: x64
-
---
-
-*`process.parent.pe.company`*::
-+
---
-Internal company name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`process.parent.pe.description`*::
-+
---
-Internal description of the file, provided at compile-time.
-
-type: keyword
-
-example: Paint
-
---
-
-*`process.parent.pe.file_version`*::
-+
---
-Internal version of the file, provided at compile-time.
-
-type: keyword
-
-example: 6.3.9600.17415
-
---
-
-*`process.parent.pe.imphash`*::
-+
---
-A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html.
-
-type: keyword
-
-example: 0c6803c4e922103c4dca5963aad36ddf
-
---
-
-*`process.parent.pe.original_file_name`*::
-+
---
-Internal name of the file, provided at compile-time.
-
-type: keyword
-
-example: MSPAINT.EXE
-
---
-
-*`process.parent.pe.product`*::
-+
---
-Internal product name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft® Windows® Operating System
-
---
-
-*`process.parent.pgid`*::
-+
---
-Identifier of the group of processes the process belongs to.
-
-type: long
-
-format: string
-
---
-
-*`process.parent.pid`*::
-+
---
-Process id.
-
-type: long
-
-example: 4242
-
-format: string
-
---
-
-*`process.parent.start`*::
-+
---
-The time the process started.
-
-type: date
-
-example: 2016-05-23T08:05:34.853Z
-
---
-
-*`process.parent.thread.id`*::
-+
---
-Thread ID.
-
-type: long
-
-example: 4242
-
-format: string
-
---
-
-*`process.parent.thread.name`*::
-+
---
-Thread name.
-
-type: keyword
-
-example: thread-0
-
---
-
-*`process.parent.title`*::
-+
---
-Process title.
-The proctitle, sometimes the same as the process name. Can also be different: for example, a browser setting its title to the web page currently opened.
-
-type: keyword
-
---
-
-*`process.parent.title.text`*::
-+
---
-type: match_only_text
-
---
-
-*`process.parent.uptime`*::
-+
---
-Seconds the process has been up.
-
-type: long
-
-example: 1325
-
---
-
-*`process.parent.working_directory`*::
-+
---
-The working directory of the process.
-
-type: keyword
-
-example: /home/alice
-
---
-
-*`process.parent.working_directory.text`*::
-+
---
-type: match_only_text
-
---
-
-*`process.pe.architecture`*::
-+
---
-CPU architecture target for the file.
-
-type: keyword
-
-example: x64
-
---
-
-*`process.pe.company`*::
-+
---
-Internal company name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`process.pe.description`*::
-+
---
-Internal description of the file, provided at compile-time.
-
-type: keyword
-
-example: Paint
-
---
-
-*`process.pe.file_version`*::
-+
---
-Internal version of the file, provided at compile-time.
-
-type: keyword
-
-example: 6.3.9600.17415
-
---
-
-*`process.pe.imphash`*::
-+
---
-A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html.
-
-type: keyword
-
-example: 0c6803c4e922103c4dca5963aad36ddf
-
---
-
-*`process.pe.original_file_name`*::
-+
---
-Internal name of the file, provided at compile-time.
-
-type: keyword
-
-example: MSPAINT.EXE
-
---
-
-*`process.pe.product`*::
-+
---
-Internal product name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft® Windows® Operating System
-
---
-
-*`process.pgid`*::
-+
---
-Identifier of the group of processes the process belongs to.
-
-type: long
-
-format: string
-
---
-
-*`process.pid`*::
-+
---
-Process id.
-
-type: long
-
-example: 4242
-
-format: string
-
---
-
-*`process.start`*::
-+
---
-The time the process started.
-
-type: date
-
-example: 2016-05-23T08:05:34.853Z
-
---
-
-*`process.thread.id`*::
-+
---
-Thread ID.
-
-type: long
-
-example: 4242
-
-format: string
-
---
-
-*`process.thread.name`*::
-+
---
-Thread name.
-
-type: keyword
-
-example: thread-0
-
---
-
-*`process.title`*::
-+
---
-Process title.
-The proctitle, sometimes the same as the process name. Can also be different: for example, a browser setting its title to the web page currently opened.
-
-type: keyword
-
---
-
-*`process.title.text`*::
-+
---
-type: match_only_text
-
---
-
-*`process.uptime`*::
-+
---
-Seconds the process has been up.
-
-type: long
-
-example: 1325
-
---
-
-*`process.working_directory`*::
-+
---
-The working directory of the process.
-
-type: keyword
-
-example: /home/alice
-
---
-
-*`process.working_directory.text`*::
-+
---
-type: match_only_text
-
---
-
-[float]
-=== registry
-
-Fields related to Windows Registry operations.
-
-
-*`registry.data.bytes`*::
-+
---
-Original bytes written with base64 encoding.
-For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values.
-
-type: keyword
-
-example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA=
-
---
-
-*`registry.data.strings`*::
-+
---
-Content when writing string types.
-Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of strings with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g. `"1"`).
-
-type: wildcard
-
-example: ["C:\rta\red_ttp\bin\myapp.exe"]
-
---
-
-*`registry.data.type`*::
-+
---
-Standard registry type for encoding contents.
-
-type: keyword
-
-example: REG_SZ
-
---
-
-*`registry.hive`*::
-+
---
-Abbreviated name for the hive.
-
-type: keyword
-
-example: HKLM
-
---
-
-*`registry.key`*::
-+
---
-Hive-relative path of keys.
-
-type: keyword
-
-example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe
-
---
-
-*`registry.path`*::
-+
---
-Full path, including hive, key and value.
-
-type: keyword
-
-example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger
-
---
-
-*`registry.value`*::
-+
---
-Name of the value written.
-
-type: keyword
-
-example: Debugger
-
---
-
-[float]
-=== related
-
-This field set is meant to facilitate pivoting around a piece of data.
-Some pieces of information can be seen in many places in an ECS event. To facilitate searching for them, store an array of all seen values to their corresponding field in `related.`.
-A concrete example is IP addresses, which can be under host, observer, source, destination, client, server, and network.forwarded_ip. If you append all IPs to `related.ip`, you can then search for a given IP trivially, no matter where it appeared, by querying `related.ip:192.0.2.15`.
-
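-A minimal sketch of populating `related.ip` by collecting the addresses seen
-elsewhere in an event (the event shape below is purely illustrative and
-treats each IP field as a single value):
-
-[source,python]
-----
-event = {
-    "source": {"ip": "192.0.2.15"},
-    "destination": {"ip": "198.51.100.7"},
-    "host": {"ip": "10.1.2.3"},
-    "network": {"forwarded_ip": "192.0.2.15"},
-}
-
-ips = set()
-for parent in ("client", "server", "source", "destination", "host", "observer"):
-    ip = event.get(parent, {}).get("ip")
-    if ip:
-        ips.add(ip)
-forwarded = event.get("network", {}).get("forwarded_ip")
-if forwarded:
-    ips.add(forwarded)
-
-event["related"] = {"ip": sorted(ips)}  # every IP is now searchable in one place
-----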
-
-*`related.hash`*::
-+
---
-All the hashes seen on your event. Populating this field, then using it to search for hashes, can help in situations where you're unsure what the hash algorithm is (and therefore which key name to search).
-
-type: keyword
-
---
-
-*`related.hosts`*::
-+
---
-All hostnames or other host identifiers seen on your event. Example identifiers include FQDNs, domain names, workstation names, or aliases.
-
-type: keyword
-
---
-
-*`related.ip`*::
-+
---
-All of the IPs seen on your event.
-
-type: ip
-
---
-
-*`related.user`*::
-+
---
-All the user names or other user identifiers seen on the event.
-
-type: keyword
-
---
-
-[float]
-=== rule
-
-Rule fields are used to capture the specifics of any observer or agent rules that generate alerts or other notable events.
-Examples of data sources that would populate the rule fields include: network admission control platforms, network or host IDS/IPS, network firewalls, web application firewalls, url filters, endpoint detection and response (EDR) systems, etc.
-
-
-*`rule.author`*::
-+
---
-Name, organization, or pseudonym of the author or authors who created the rule used to generate this event.
-
-type: keyword
-
-example: ["Star-Lord"]
-
---
-
-*`rule.category`*::
-+
---
-A categorization value keyword used by the entity using the rule for detection of this event.
-
-type: keyword
-
-example: Attempted Information Leak
-
---
-
-*`rule.description`*::
-+
---
-The description of the rule generating the event.
-
-type: keyword
-
-example: Block requests to public DNS over HTTPS / TLS protocols
-
---
-
-*`rule.id`*::
-+
---
-A rule ID that is unique within the scope of an agent, observer, or other entity using the rule for detection of this event.
-
-type: keyword
-
-example: 101
-
---
-
-*`rule.license`*::
-+
---
-Name of the license under which the rule used to generate this event is made available.
-
-type: keyword
-
-example: Apache 2.0
-
---
-
-*`rule.name`*::
-+
---
-The name of the rule or signature generating the event.
-
-type: keyword
-
-example: BLOCK_DNS_over_TLS
-
---
-
-*`rule.reference`*::
-+
---
-Reference URL to additional information about the rule used to generate this event.
-The URL can point to the vendor's documentation about the rule. If that's not available, it can also be a link to a more general page describing this type of alert.
-
-type: keyword
-
-example: https://en.wikipedia.org/wiki/DNS_over_TLS
-
---
-
-*`rule.ruleset`*::
-+
---
-Name of the ruleset, policy, group, or parent category in which the rule used to generate this event is a member.
-
-type: keyword
-
-example: Standard_Protocol_Filters
-
---
-
-*`rule.uuid`*::
-+
---
-A rule ID that is unique within the scope of a set or group of agents, observers, or other entities using the rule for detection of this event.
-
-type: keyword
-
-example: 1100110011
-
---
-
-*`rule.version`*::
-+
---
-The version / revision of the rule being used for analysis.
-
-type: keyword
-
-example: 1.1
-
---
-
-[float]
-=== server
-
-A Server is defined as the responder in a network connection for events regarding sessions, connections, or bidirectional flow records.
-For TCP events, the server is the receiver of the initial SYN packet(s) of the TCP connection. For other protocols, the server is generally the responder in the network transaction. Some systems actually use the term "responder" to refer to the server in TCP connections. The server fields describe details about the system acting as the server in the network event. Server fields are usually populated in conjunction with client fields. Server fields are generally not populated for packet-level events.
-Client / server representations can add semantic context to an exchange, which is helpful to visualize the data in certain situations. If your context falls in that category, you should still ensure that source and destination are filled appropriately.
-
-
-*`server.address`*::
-+
---
-Some event server addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field.
-Then it should be duplicated to `.ip` or `.domain`, depending on which one it is.
-
-type: keyword
-
---
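-
-A minimal sketch of that rule (a hypothetical helper, shown only to illustrate the duplication into `.ip` or `.domain`; Unix socket paths would need extra handling):
-
-[source,python]
-----
-import ipaddress
-
-def normalize_address(raw):
-    """Keep the raw value in .address and duplicate it to .ip or .domain."""
-    fields = {"address": raw}
-    try:
-        ipaddress.ip_address(raw)
-        fields["ip"] = raw
-    except ValueError:
-        fields["domain"] = raw
-    return fields
-
-# normalize_address("10.0.0.5")       -> {"address": "10.0.0.5", "ip": "10.0.0.5"}
-# normalize_address("db.example.com") -> {"address": "db.example.com", "domain": "db.example.com"}
-----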
-
-*`server.as.number`*::
-+
---
-Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet.
-
-type: long
-
-example: 15169
-
---
-
-*`server.as.organization.name`*::
-+
---
-Organization name.
-
-type: keyword
-
-example: Google LLC
-
---
-
-*`server.as.organization.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`server.bytes`*::
-+
---
-Bytes sent from the server to the client.
-
-type: long
-
-example: 184
-
-format: bytes
-
---
-
-*`server.domain`*::
-+
---
-The domain name of the server system.
-This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment.
-
-type: keyword
-
-example: foo.example.com
-
---
-
-*`server.geo.city_name`*::
-+
---
-City name.
-
-type: keyword
-
-example: Montreal
-
---
-
-*`server.geo.continent_code`*::
-+
---
-Two-letter code representing the continent's name.
-
-type: keyword
-
-example: NA
-
---
-
-*`server.geo.continent_name`*::
-+
---
-Name of the continent.
-
-type: keyword
-
-example: North America
-
---
-
-*`server.geo.country_iso_code`*::
-+
---
-Country ISO code.
-
-type: keyword
-
-example: CA
-
---
-
-*`server.geo.country_name`*::
-+
---
-Country name.
-
-type: keyword
-
-example: Canada
-
---
-
-*`server.geo.location`*::
-+
---
-Longitude and latitude.
-
-type: geo_point
-
-example: { "lon": -73.614830, "lat": 45.505918 }
-
---
-
-*`server.geo.name`*::
-+
---
-User-defined description of a location, at whatever level of granularity the user cares about.
-Could be the name of a data center, the floor number (if this describes a local physical entity), or a city name.
-Not typically used in automated geolocation.
-
-type: keyword
-
-example: boston-dc
-
---
-
-*`server.geo.postal_code`*::
-+
---
-Postal code associated with the location.
-Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
-
-type: keyword
-
-example: 94040
-
---
-
-*`server.geo.region_iso_code`*::
-+
---
-Region ISO code.
-
-type: keyword
-
-example: CA-QC
-
---
-
-*`server.geo.region_name`*::
-+
---
-Region name.
-
-type: keyword
-
-example: Quebec
-
---
-
-*`server.geo.timezone`*::
-+
---
-The time zone of the location, such as IANA time zone name.
-
-type: keyword
-
-example: America/Argentina/Buenos_Aires
-
---
-
-*`server.ip`*::
-+
---
-IP address of the server (IPv4 or IPv6).
-
-type: ip
-
---
-
-*`server.mac`*::
-+
---
-MAC address of the server.
-The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen.
-
-type: keyword
-
-example: 00-00-5E-00-53-23
-
---
-
-*`server.nat.ip`*::
-+
---
-Translated IP of destination-based NAT sessions (e.g. internet to private DMZ).
-Typically used with load balancers, firewalls, or routers.
-
-type: ip
-
---
-
-*`server.nat.port`*::
-+
---
-Translated port of destination-based NAT sessions (e.g. internet to private DMZ).
-Typically used with load balancers, firewalls, or routers.
-
-type: long
-
-format: string
-
---
-
-*`server.packets`*::
-+
---
-Packets sent from the server to the client.
-
-type: long
-
-example: 12
-
---
-
-*`server.port`*::
-+
---
-Port of the server.
-
-type: long
-
-format: string
-
---
-
-*`server.registered_domain`*::
-+
---
-The highest registered server domain, stripped of the subdomain.
-For example, the registered domain for "foo.example.com" is "example.com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
-
-type: keyword
-
-example: example.com
-
---
-
-*`server.subdomain`*::
-+
---
-The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain.
-For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
-
-type: keyword
-
-example: east
-
---
-
-*`server.top_level_domain`*::
-+
---
-The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
-
-type: keyword
-
-example: co.uk
-
---
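-
-To illustrate how `server.registered_domain`, `server.subdomain`, and `server.top_level_domain` relate, here is a sketch that assumes the third-party `tldextract` Python package (which bundles the public suffix list); it is an illustration, not the mechanism any Beat uses:
-
-[source,python]
-----
-import tldextract
-
-ext = tldextract.extract("www.east.mydomain.co.uk")
-top_level_domain = ext.suffix                     # "co.uk"
-registered_domain = f"{ext.domain}.{ext.suffix}"  # "mydomain.co.uk"
-# ext.subdomain is "www.east"; per the definition above, the host name ("www")
-# is not part of `subdomain`, so the stored value would be "east".
-subdomain = ext.subdomain.split(".", 1)[-1]       # "east"
-----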
-
-*`server.user.domain`*::
-+
---
-Name of the directory the user is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`server.user.email`*::
-+
---
-User email address.
-
-type: keyword
-
---
-
-*`server.user.full_name`*::
-+
---
-User's full name, if available.
-
-type: keyword
-
-example: Albert Einstein
-
---
-
-*`server.user.full_name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`server.user.group.domain`*::
-+
---
-Name of the directory the group is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`server.user.group.id`*::
-+
---
-Unique identifier for the group on the system/platform.
-
-type: keyword
-
---
-
-*`server.user.group.name`*::
-+
---
-Name of the group.
-
-type: keyword
-
---
-
-*`server.user.hash`*::
-+
---
-Unique user hash to correlate information for a user in anonymized form.
-Useful if `user.id` or `user.name` contain confidential information and cannot be used.
-
-type: keyword
-
---
-
-*`server.user.id`*::
-+
---
-Unique identifier of the user.
-
-type: keyword
-
-example: S-1-5-21-202424912787-2692429404-2351956786-1000
-
---
-
-*`server.user.name`*::
-+
---
-Short name or login of the user.
-
-type: keyword
-
-example: a.einstein
-
---
-
-*`server.user.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`server.user.roles`*::
-+
---
-Array of user roles at the time of the event.
-
-type: keyword
-
-example: ["kibana_admin", "reporting_user"]
-
---
-
-[float]
-=== service
-
-The service fields describe the service for or from which the data was collected.
-These fields help you find and correlate logs for a specific service and version.
-
-
-*`service.address`*::
-+
---
-Address where data about this service was collected from.
-This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets).
-
-type: keyword
-
-example: 172.26.0.2:5432
-
---
-
-*`service.environment`*::
-+
---
-Identifies the environment where the service is running.
-If the same service runs in different environments (production, staging, QA, development, etc.), the environment can identify other instances of the same service. Can also group services and applications from the same environment.
-
-type: keyword
-
-example: production
-
---
-
-*`service.ephemeral_id`*::
-+
---
-Ephemeral identifier of this service (if one exists).
-This id normally changes across restarts, but `service.id` does not.
-
-type: keyword
-
-example: 8a4f500f
-
---
-
-*`service.id`*::
-+
---
-Unique identifier of the running service. If the service consists of many nodes, the `service.id` should be the same for all nodes.
-This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event.
-Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead.
-
-type: keyword
-
-example: d37e5ebfe0ae6c4972dbe9f0174a1637bb8247f6
-
---
-
-*`service.name`*::
-+
---
-Name of the service data is collected from.
-The name of the service is normally given by the user. This allows distributed services that run on multiple hosts to be correlated by name.
-In the case of Elasticsearch, the `service.name` could contain the cluster name. For Beats, the `service.name` is by default a copy of the `service.type` field if no name is specified.
-
-type: keyword
-
-example: elasticsearch-metrics
-
---
-
-*`service.node.name`*::
-+
---
-Name of a service node.
-This allows for two nodes of the same service running on the same host to be differentiated. Therefore, `service.node.name` should typically be unique across nodes of a given service.
-In the case of Elasticsearch, the `service.node.name` could contain the unique node name within the Elasticsearch cluster. In cases where the service doesn't have the concept of a node name, the host name or container name can be used to distinguish running instances that make up this service. If those do not provide uniqueness (e.g. multiple instances of the service running on the same host), the node name can be set manually.
-
-type: keyword
-
-example: instance-0000000016
-
---
-
-*`service.origin.address`*::
-+
---
-Address where data about this service was collected from.
-This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets).
-
-type: keyword
-
-example: 172.26.0.2:5432
-
---
-
-*`service.origin.environment`*::
-+
---
-Identifies the environment where the service is running.
-If the same service runs in different environments (production, staging, QA, development, etc.), the environment can identify other instances of the same service. Can also group services and applications from the same environment.
-
-type: keyword
-
-example: production
-
---
-
-*`service.origin.ephemeral_id`*::
-+
---
-Ephemeral identifier of this service (if one exists).
-This id normally changes across restarts, but `service.id` does not.
-
-type: keyword
-
-example: 8a4f500f
-
---
-
-*`service.origin.id`*::
-+
---
-Unique identifier of the running service. If the service consists of many nodes, the `service.id` should be the same for all nodes.
-This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event.
-Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead.
-
-type: keyword
-
-example: d37e5ebfe0ae6c4972dbe9f0174a1637bb8247f6
-
---
-
-*`service.origin.name`*::
-+
---
-Name of the service data is collected from.
-The name of the service is normally given by the user. This allows distributed services that run on multiple hosts to be correlated by name.
-In the case of Elasticsearch, the `service.name` could contain the cluster name. For Beats, the `service.name` is by default a copy of the `service.type` field if no name is specified.
-
-type: keyword
-
-example: elasticsearch-metrics
-
---
-
-*`service.origin.node.name`*::
-+
---
-Name of a service node.
-This allows for two nodes of the same service running on the same host to be differentiated. Therefore, `service.node.name` should typically be unique across nodes of a given service.
-In the case of Elasticsearch, the `service.node.name` could contain the unique node name within the Elasticsearch cluster. In cases where the service doesn't have the concept of a node name, the host name or container name can be used to distinguish running instances that make up this service. If those do not provide uniqueness (e.g. multiple instances of the service running on the same host), the node name can be set manually.
-
-type: keyword
-
-example: instance-0000000016
-
---
-
-*`service.origin.state`*::
-+
---
-Current state of the service.
-
-type: keyword
-
---
-
-*`service.origin.type`*::
-+
---
-The type of the service the data is collected from.
-The type can be used to group and correlate logs and metrics from one service type.
-Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`.
-
-type: keyword
-
-example: elasticsearch
-
---
-
-*`service.origin.version`*::
-+
---
-Version of the service the data was collected from.
-This allows you to look at a data set for only a specific version of a service.
-
-type: keyword
-
-example: 3.2.4
-
---
-
-*`service.state`*::
-+
---
-Current state of the service.
-
-type: keyword
-
---
-
-*`service.target.address`*::
-+
---
-Address where data about this service was collected from.
-This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets).
-
-type: keyword
-
-example: 172.26.0.2:5432
-
---
-
-*`service.target.environment`*::
-+
---
-Identifies the environment where the service is running.
-If the same service runs in different environments (production, staging, QA, development, etc.), the environment can identify other instances of the same service. Can also group services and applications from the same environment.
-
-type: keyword
-
-example: production
-
---
-
-*`service.target.ephemeral_id`*::
-+
---
-Ephemeral identifier of this service (if one exists).
-This id normally changes across restarts, but `service.id` does not.
-
-type: keyword
-
-example: 8a4f500f
-
---
-
-*`service.target.id`*::
-+
---
-Unique identifier of the running service. If the service consists of many nodes, the `service.id` should be the same for all nodes.
-This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event.
-Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead.
-
-type: keyword
-
-example: d37e5ebfe0ae6c4972dbe9f0174a1637bb8247f6
-
---
-
-*`service.target.name`*::
-+
---
-Name of the service data is collected from.
-The name of the service is normally given by the user. This allows distributed services that run on multiple hosts to be correlated by name.
-In the case of Elasticsearch, the `service.name` could contain the cluster name. For Beats, the `service.name` is by default a copy of the `service.type` field if no name is specified.
-
-type: keyword
-
-example: elasticsearch-metrics
-
---
-
-*`service.target.node.name`*::
-+
---
-Name of a service node.
-This allows for two nodes of the same service running on the same host to be differentiated. Therefore, `service.node.name` should typically be unique across nodes of a given service.
-In the case of Elasticsearch, the `service.node.name` could contain the unique node name within the Elasticsearch cluster. In cases where the service doesn't have the concept of a node name, the host name or container name can be used to distinguish running instances that make up this service. If those do not provide uniqueness (e.g. multiple instances of the service running on the same host), the node name can be set manually.
-
-type: keyword
-
-example: instance-0000000016
-
---
-
-*`service.target.state`*::
-+
---
-Current state of the service.
-
-type: keyword
-
---
-
-*`service.target.type`*::
-+
---
-The type of the service the data is collected from.
-The type can be used to group and correlate logs and metrics from one service type.
-Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`.
-
-type: keyword
-
-example: elasticsearch
-
---
-
-*`service.target.version`*::
-+
---
-Version of the service the data was collected from.
-This allows you to look at a data set for only a specific version of a service.
-
-type: keyword
-
-example: 3.2.4
-
---
-
-*`service.type`*::
-+
---
-The type of the service the data is collected from.
-The type can be used to group and correlate logs and metrics from one service type.
-Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`.
-
-type: keyword
-
-example: elasticsearch
-
---
-
-*`service.version`*::
-+
---
-Version of the service the data was collected from.
-This allows you to look at a data set for only a specific version of a service.
-
-type: keyword
-
-example: 3.2.4
-
---
-
-[float]
-=== source
-
-Source fields capture details about the sender of a network exchange/packet. These fields are populated from a network event, packet, or other event containing details of a network transaction.
-Source fields are usually populated in conjunction with destination fields. The source and destination fields are considered the baseline and should always be filled if an event contains source and destination details from a network transaction. If the event also contains identification of the client and server roles, then the client and server fields should also be populated.
-
-
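-As a small illustration of that guidance, a single TCP connection event could carry both pairs of field sets, with the initiator mirrored into `client.*` and the responder into `server.*` (all values below are made up):
-
-[source,python]
-----
-event = {
-    "source":      {"ip": "10.1.2.3",   "port": 51234, "bytes": 184},
-    "destination": {"ip": "192.0.2.15", "port": 443,   "bytes": 1024},
-    # Same endpoints, expressed in terms of their roles in the exchange:
-    "client":      {"ip": "10.1.2.3",   "port": 51234},
-    "server":      {"ip": "192.0.2.15", "port": 443},
-}
-----
-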
-*`source.address`*::
-+
---
-Some event source addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field.
-Then it should be duplicated to `.ip` or `.domain`, depending on which one it is.
-
-type: keyword
-
---
-
-*`source.as.number`*::
-+
---
-Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet.
-
-type: long
-
-example: 15169
-
---
-
-*`source.as.organization.name`*::
-+
---
-Organization name.
-
-type: keyword
-
-example: Google LLC
-
---
-
-*`source.as.organization.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`source.bytes`*::
-+
---
-Bytes sent from the source to the destination.
-
-type: long
-
-example: 184
-
-format: bytes
-
---
-
-*`source.domain`*::
-+
---
-The domain name of the source system.
-This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment.
-
-type: keyword
-
-example: foo.example.com
-
---
-
-*`source.geo.city_name`*::
-+
---
-City name.
-
-type: keyword
-
-example: Montreal
-
---
-
-*`source.geo.continent_code`*::
-+
---
-Two-letter code representing the continent's name.
-
-type: keyword
-
-example: NA
-
---
-
-*`source.geo.continent_name`*::
-+
---
-Name of the continent.
-
-type: keyword
-
-example: North America
-
---
-
-*`source.geo.country_iso_code`*::
-+
---
-Country ISO code.
-
-type: keyword
-
-example: CA
-
---
-
-*`source.geo.country_name`*::
-+
---
-Country name.
-
-type: keyword
-
-example: Canada
-
---
-
-*`source.geo.location`*::
-+
---
-Longitude and latitude.
-
-type: geo_point
-
-example: { "lon": -73.614830, "lat": 45.505918 }
-
---
-
-*`source.geo.name`*::
-+
---
-User-defined description of a location, at whatever level of granularity the user cares about.
-Could be the name of a data center, the floor number (if this describes a local physical entity), or a city name.
-Not typically used in automated geolocation.
-
-type: keyword
-
-example: boston-dc
-
---
-
-*`source.geo.postal_code`*::
-+
---
-Postal code associated with the location.
-Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
-
-type: keyword
-
-example: 94040
-
---
-
-*`source.geo.region_iso_code`*::
-+
---
-Region ISO code.
-
-type: keyword
-
-example: CA-QC
-
---
-
-*`source.geo.region_name`*::
-+
---
-Region name.
-
-type: keyword
-
-example: Quebec
-
---
-
-*`source.geo.timezone`*::
-+
---
-The time zone of the location, such as IANA time zone name.
-
-type: keyword
-
-example: America/Argentina/Buenos_Aires
-
---
-
-*`source.ip`*::
-+
---
-IP address of the source (IPv4 or IPv6).
-
-type: ip
-
---
-
-*`source.mac`*::
-+
---
-MAC address of the source.
-The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen.
-
-type: keyword
-
-example: 00-00-5E-00-53-23
-
---
-
-*`source.nat.ip`*::
-+
---
-Translated IP of source-based NAT sessions (e.g. internal client to internet).
-Typically used for connections traversing load balancers, firewalls, or routers.
-
-type: ip
-
---
-
-*`source.nat.port`*::
-+
---
-Translated port of source-based NAT sessions (e.g. internal client to internet).
-Typically used with load balancers, firewalls, or routers.
-
-type: long
-
-format: string
-
---
-
-*`source.packets`*::
-+
---
-Packets sent from the source to the destination.
-
-type: long
-
-example: 12
-
---
-
-*`source.port`*::
-+
---
-Port of the source.
-
-type: long
-
-format: string
-
---
-
-*`source.registered_domain`*::
-+
---
-The highest registered source domain, stripped of the subdomain.
-For example, the registered domain for "foo.example.com" is "example.com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
-
-type: keyword
-
-example: example.com
-
---
-
-*`source.subdomain`*::
-+
---
-The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain.
-For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
-
-type: keyword
-
-example: east
-
---
-
-*`source.top_level_domain`*::
-+
---
-The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
-
-type: keyword
-
-example: co.uk
-
---
-
-*`source.user.domain`*::
-+
---
-Name of the directory the user is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`source.user.email`*::
-+
---
-User email address.
-
-type: keyword
-
---
-
-*`source.user.full_name`*::
-+
---
-User's full name, if available.
-
-type: keyword
-
-example: Albert Einstein
-
---
-
-*`source.user.full_name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`source.user.group.domain`*::
-+
---
-Name of the directory the group is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`source.user.group.id`*::
-+
---
-Unique identifier for the group on the system/platform.
-
-type: keyword
-
---
-
-*`source.user.group.name`*::
-+
---
-Name of the group.
-
-type: keyword
-
---
-
-*`source.user.hash`*::
-+
---
-Unique user hash to correlate information for a user in anonymized form.
-Useful if `user.id` or `user.name` contain confidential information and cannot be used.
-
-type: keyword
-
---
-
-*`source.user.id`*::
-+
---
-Unique identifier of the user.
-
-type: keyword
-
-example: S-1-5-21-202424912787-2692429404-2351956786-1000
-
---
-
-*`source.user.name`*::
-+
---
-Short name or login of the user.
-
-type: keyword
-
-example: a.einstein
-
---
-
-*`source.user.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`source.user.roles`*::
-+
---
-Array of user roles at the time of the event.
-
-type: keyword
-
-example: ["kibana_admin", "reporting_user"]
-
---
-
-[float]
-=== threat
-
-Fields to classify events and alerts according to a threat taxonomy such as the MITRE ATT&CK® framework.
-These fields are for users to classify alerts from all of their sources (e.g. IDS, NGFW, etc.) within a common taxonomy. The threat.tactic.* fields are meant to capture the high level category of the threat (e.g. "impact"). The threat.technique.* fields are meant to capture which kind of approach is used by this detected threat, to accomplish the goal (e.g. "endpoint denial of service").
-
-
-*`threat.enrichments`*::
-+
---
-A list of associated indicators objects enriching the event, and the context of that association/enrichment.
-
-type: nested
-
---
-
-*`threat.enrichments.indicator`*::
-+
---
-Object containing associated indicators enriching the event.
-
-type: object
-
---
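-
-A minimal, hand-written example of the nested shape (values taken from the field examples in this section, not from a real enrichment):
-
-[source,python]
-----
-event = {
-    "threat": {
-        "enrichments": [
-            {
-                "indicator": {
-                    "type": "ipv4-addr",
-                    "ip": "1.2.3.4",
-                    "confidence": "Medium",
-                    "provider": "lrz_urlhaus",
-                }
-            }
-        ]
-    }
-}
-----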
-
-*`threat.enrichments.indicator.as.number`*::
-+
---
-Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet.
-
-type: long
-
-example: 15169
-
---
-
-*`threat.enrichments.indicator.as.organization.name`*::
-+
---
-Organization name.
-
-type: keyword
-
-example: Google LLC
-
---
-
-*`threat.enrichments.indicator.as.organization.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.enrichments.indicator.confidence`*::
-+
---
-Identifies the vendor-neutral confidence rating using the None/Low/Medium/High scale defined in Appendix A of the STIX 2.1 framework. Vendor-specific confidence scales may be added as custom fields.
-Expected values are:
- * Not Specified
- * None
- * Low
- * Medium
- * High
-
-type: keyword
-
-example: Medium
-
---
-
-*`threat.enrichments.indicator.description`*::
-+
---
-Describes the type of action conducted by the threat.
-
-type: keyword
-
-example: IP x.x.x.x was observed delivering the Angler EK.
-
---
-
-*`threat.enrichments.indicator.email.address`*::
-+
---
-Identifies a threat indicator as an email address (irrespective of direction).
-
-type: keyword
-
-example: phish@example.com
-
---
-
-*`threat.enrichments.indicator.file.accessed`*::
-+
---
-Last time the file was accessed.
-Note that not all filesystems keep track of access time.
-
-type: date
-
---
-
-*`threat.enrichments.indicator.file.attributes`*::
-+
---
-Array of file attributes.
-Attribute names will vary by platform. Here's a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write.
-
-type: keyword
-
-example: ["readonly", "system"]
-
---
-
-*`threat.enrichments.indicator.file.code_signature.digest_algorithm`*::
-+
---
-The hashing algorithm used to sign the process.
-This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm.
-
-type: keyword
-
-example: sha256
-
---
-
-*`threat.enrichments.indicator.file.code_signature.exists`*::
-+
---
-Boolean to capture if a signature is present.
-
-type: boolean
-
-example: true
-
---
-
-*`threat.enrichments.indicator.file.code_signature.signing_id`*::
-+
---
-The identifier used to sign the process.
-This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: com.apple.xpc.proxy
-
---
-
-*`threat.enrichments.indicator.file.code_signature.status`*::
-+
---
-Additional information about the certificate status.
-This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked.
-
-type: keyword
-
-example: ERROR_UNTRUSTED_ROOT
-
---
-
-*`threat.enrichments.indicator.file.code_signature.subject_name`*::
-+
---
-Subject name of the code signer
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`threat.enrichments.indicator.file.code_signature.team_id`*::
-+
---
-The team identifier used to sign the process.
-This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: EQHXZ8M8AV
-
---
-
-*`threat.enrichments.indicator.file.code_signature.timestamp`*::
-+
---
-Date and time when the code signature was generated and signed.
-
-type: date
-
-example: 2021-01-01T12:10:30Z
-
---
-
-*`threat.enrichments.indicator.file.code_signature.trusted`*::
-+
---
-Stores the trust status of the certificate chain.
-Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status.
-
-type: boolean
-
-example: true
-
---
-
-*`threat.enrichments.indicator.file.code_signature.valid`*::
-+
---
-Boolean to capture if the digital signature is verified against the binary content.
-Leave unpopulated if a certificate was unchecked.
-
-type: boolean
-
-example: true
-
---
-
-*`threat.enrichments.indicator.file.created`*::
-+
---
-File creation time.
-Note that not all filesystems store the creation time.
-
-type: date
-
---
-
-*`threat.enrichments.indicator.file.ctime`*::
-+
---
-Last time the file attributes or metadata changed.
-Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file.
-
-type: date
-
---
-
-*`threat.enrichments.indicator.file.device`*::
-+
---
-Device that is the source of the file.
-
-type: keyword
-
-example: sda
-
---
-
-*`threat.enrichments.indicator.file.directory`*::
-+
---
-Directory where the file is located. It should include the drive letter, when appropriate.
-
-type: keyword
-
-example: /home/alice
-
---
-
-*`threat.enrichments.indicator.file.drive_letter`*::
-+
---
-Drive letter where the file is located. This field is only relevant on Windows.
-The value should be uppercase, and not include the colon.
-
-type: keyword
-
-example: C
-
---
-
-*`threat.enrichments.indicator.file.elf.architecture`*::
-+
---
-Machine architecture of the ELF file.
-
-type: keyword
-
-example: x86-64
-
---
-
-*`threat.enrichments.indicator.file.elf.byte_order`*::
-+
---
-Byte sequence of ELF file.
-
-type: keyword
-
-example: Little Endian
-
---
-
-*`threat.enrichments.indicator.file.elf.cpu_type`*::
-+
---
-CPU type of the ELF file.
-
-type: keyword
-
-example: Intel
-
---
-
-*`threat.enrichments.indicator.file.elf.creation_date`*::
-+
---
-Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators.
-
-type: date
-
---
-
-*`threat.enrichments.indicator.file.elf.exports`*::
-+
---
-List of exported element names and types.
-
-type: flattened
-
---
-
-*`threat.enrichments.indicator.file.elf.header.abi_version`*::
-+
---
-Version of the ELF Application Binary Interface (ABI).
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.header.class`*::
-+
---
-Header class of the ELF file.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.header.data`*::
-+
---
-Data table of the ELF header.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.header.entrypoint`*::
-+
---
-Header entrypoint of the ELF file.
-
-type: long
-
-format: string
-
---
-
-*`threat.enrichments.indicator.file.elf.header.object_version`*::
-+
---
-"0x1" for original ELF files.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.header.os_abi`*::
-+
---
-Application Binary Interface (ABI) of the Linux OS.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.header.type`*::
-+
---
-Header type of the ELF file.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.header.version`*::
-+
---
-Version of the ELF header.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.imports`*::
-+
---
-List of imported element names and types.
-
-type: flattened
-
---
-
-*`threat.enrichments.indicator.file.elf.sections`*::
-+
---
-An array containing an object for each section of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`.
-
-type: nested
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.chi2`*::
-+
---
-Chi-square probability distribution of the section.
-
-type: long
-
-format: number
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.entropy`*::
-+
---
-Shannon entropy calculation from the section.
-
-type: long
-
-format: number
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.flags`*::
-+
---
-ELF Section List flags.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.name`*::
-+
---
-ELF Section List name.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.physical_offset`*::
-+
---
-ELF Section List offset.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.physical_size`*::
-+
---
-ELF Section List physical size.
-
-type: long
-
-format: bytes
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.type`*::
-+
---
-ELF Section List type.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.virtual_address`*::
-+
---
-ELF Section List virtual address.
-
-type: long
-
-format: string
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.virtual_size`*::
-+
---
-ELF Section List virtual size.
-
-type: long
-
-format: string
-
---
-
-*`threat.enrichments.indicator.file.elf.segments`*::
-+
---
-An array containing an object for each segment of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`.
-
-type: nested
-
---
-
-*`threat.enrichments.indicator.file.elf.segments.sections`*::
-+
---
-ELF object segment sections.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.segments.type`*::
-+
---
-ELF object segment type.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.shared_libraries`*::
-+
---
-List of shared libraries used by this ELF object.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.telfhash`*::
-+
---
-telfhash symbol hash for ELF file.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.extension`*::
-+
---
-File extension, excluding the leading dot.
-Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz").
-
-type: keyword
-
-example: png
-
---
-
-*`threat.enrichments.indicator.file.fork_name`*::
-+
---
-A fork is additional data associated with a filesystem object.
-On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist.
-On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name.
-
-type: keyword
-
-example: Zone.Identifier
-
---
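-
-A sketch of how such a path could be split into the file fields above (illustrative only; it assumes the path actually contains a stream name after the last colon):
-
-[source,python]
-----
-from pathlib import PureWindowsPath
-
-ads = r"C:\path\to\filename.extension:some_fork_name"
-path_part, _, fork_name = ads.rpartition(":")   # fork_name = "some_fork_name"
-name = PureWindowsPath(path_part).name          # "filename.extension"
-extension = name.rsplit(".", 1)[-1]             # "extension"
-file_fields = {
-    "path": ads,        # file.path keeps the full value, including the fork name
-    "name": name,
-    "extension": extension,
-    "fork_name": fork_name,
-}
-----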
-
-*`threat.enrichments.indicator.file.gid`*::
-+
---
-Primary group ID (GID) of the file.
-
-type: keyword
-
-example: 1001
-
---
-
-*`threat.enrichments.indicator.file.group`*::
-+
---
-Primary group name of the file.
-
-type: keyword
-
-example: alice
-
---
-
-*`threat.enrichments.indicator.file.hash.md5`*::
-+
---
-MD5 hash.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.hash.sha1`*::
-+
---
-SHA1 hash.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.hash.sha256`*::
-+
---
-SHA256 hash.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.hash.sha512`*::
-+
---
-SHA512 hash.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.hash.ssdeep`*::
-+
---
-SSDEEP hash.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.inode`*::
-+
---
-Inode representing the file in the filesystem.
-
-type: keyword
-
-example: 256383
-
---
-
-*`threat.enrichments.indicator.file.mime_type`*::
-+
---
-MIME type should identify the format of the file or stream of bytes using https://www.iana.org/assignments/media-types/media-types.xhtml[IANA official types], where possible. When more than one type is applicable, the most specific type should be used.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.mode`*::
-+
---
-Mode of the file in octal representation.
-
-type: keyword
-
-example: 0640
-
---
-
-*`threat.enrichments.indicator.file.mtime`*::
-+
---
-Last time the file content was modified.
-
-type: date
-
---
-
-*`threat.enrichments.indicator.file.name`*::
-+
---
-Name of the file including the extension, without the directory.
-
-type: keyword
-
-example: example.png
-
---
-
-*`threat.enrichments.indicator.file.owner`*::
-+
---
-File owner's username.
-
-type: keyword
-
-example: alice
-
---
-
-*`threat.enrichments.indicator.file.path`*::
-+
---
-Full path to the file, including the file name. It should include the drive letter, when appropriate.
-
-type: keyword
-
-example: /home/alice/example.png
-
---
-
-*`threat.enrichments.indicator.file.path.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.enrichments.indicator.file.pe.architecture`*::
-+
---
-CPU architecture target for the file.
-
-type: keyword
-
-example: x64
-
---
-
-*`threat.enrichments.indicator.file.pe.company`*::
-+
---
-Internal company name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`threat.enrichments.indicator.file.pe.description`*::
-+
---
-Internal description of the file, provided at compile-time.
-
-type: keyword
-
-example: Paint
-
---
-
-*`threat.enrichments.indicator.file.pe.file_version`*::
-+
---
-Internal version of the file, provided at compile-time.
-
-type: keyword
-
-example: 6.3.9600.17415
-
---
-
-*`threat.enrichments.indicator.file.pe.imphash`*::
-+
---
-A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html.
-
-type: keyword
-
-example: 0c6803c4e922103c4dca5963aad36ddf
-
---
-
-*`threat.enrichments.indicator.file.pe.original_file_name`*::
-+
---
-Internal name of the file, provided at compile-time.
-
-type: keyword
-
-example: MSPAINT.EXE
-
---
-
-*`threat.enrichments.indicator.file.pe.product`*::
-+
---
-Internal product name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft® Windows® Operating System
-
---
-
-*`threat.enrichments.indicator.file.size`*::
-+
---
-File size in bytes.
-Only relevant when `file.type` is "file".
-
-type: long
-
-example: 16384
-
---
-
-*`threat.enrichments.indicator.file.target_path`*::
-+
---
-Target path for symlinks.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.target_path.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.enrichments.indicator.file.type`*::
-+
---
-File type (file, dir, or symlink).
-
-type: keyword
-
-example: file
-
---
-
-*`threat.enrichments.indicator.file.uid`*::
-+
---
-The user ID (UID) or security identifier (SID) of the file owner.
-
-type: keyword
-
-example: 1001
-
---
-
-*`threat.enrichments.indicator.file.x509.alternative_names`*::
-+
---
-List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
-
-type: keyword
-
-example: *.elastic.co
-
---
-
-*`threat.enrichments.indicator.file.x509.issuer.common_name`*::
-+
---
-List of common name (CN) of issuing certificate authority.
-
-type: keyword
-
-example: Example SHA2 High Assurance Server CA
-
---
-
-*`threat.enrichments.indicator.file.x509.issuer.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`threat.enrichments.indicator.file.x509.issuer.distinguished_name`*::
-+
---
-Distinguished name (DN) of issuing certificate authority.
-
-type: keyword
-
-example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
-
---
-
-*`threat.enrichments.indicator.file.x509.issuer.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: Mountain View
-
---
-
-*`threat.enrichments.indicator.file.x509.issuer.organization`*::
-+
---
-List of organizations (O) of issuing certificate authority.
-
-type: keyword
-
-example: Example Inc
-
---
-
-*`threat.enrichments.indicator.file.x509.issuer.organizational_unit`*::
-+
---
-List of organizational units (OU) of issuing certificate authority.
-
-type: keyword
-
-example: www.example.com
-
---
-
-*`threat.enrichments.indicator.file.x509.issuer.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`threat.enrichments.indicator.file.x509.not_after`*::
-+
---
-Time at which the certificate is no longer considered valid.
-
-type: date
-
-example: 2020-07-16 03:15:39+00:00
-
---
-
-*`threat.enrichments.indicator.file.x509.not_before`*::
-+
---
-Time at which the certificate is first considered valid.
-
-type: date
-
-example: 2019-08-16 01:40:25+00:00
-
---
-
-*`threat.enrichments.indicator.file.x509.public_key_algorithm`*::
-+
---
-Algorithm used to generate the public key.
-
-type: keyword
-
-example: RSA
-
---
-
-*`threat.enrichments.indicator.file.x509.public_key_curve`*::
-+
---
-The curve used by the elliptic curve public key algorithm. This is algorithm specific.
-
-type: keyword
-
-example: nistp521
-
---
-
-*`threat.enrichments.indicator.file.x509.public_key_exponent`*::
-+
---
-Exponent used to derive the public key. This is algorithm specific.
-
-type: long
-
-example: 65537
-
-Field is not indexed.
-
---
-
-*`threat.enrichments.indicator.file.x509.public_key_size`*::
-+
---
-The size of the public key space in bits.
-
-type: long
-
-example: 2048
-
---
-
-*`threat.enrichments.indicator.file.x509.serial_number`*::
-+
---
-Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and with uppercase characters.
-
-type: keyword
-
-example: 55FBB9C7DEBF09809D12CCAA
-
---
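-
-For example (illustrative only), a colon-separated serial would be normalized as:
-
-[source,python]
-----
-raw = "55:fb:b9:c7:de:bf:09:80:9d:12:cc:aa"
-serial_number = raw.replace(":", "").upper()   # "55FBB9C7DEBF09809D12CCAA"
-----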
-
-*`threat.enrichments.indicator.file.x509.signature_algorithm`*::
-+
---
-Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353.
-
-type: keyword
-
-example: SHA256-RSA
-
---
-
-*`threat.enrichments.indicator.file.x509.subject.common_name`*::
-+
---
-List of common names (CN) of subject.
-
-type: keyword
-
-example: shared.global.example.net
-
---
-
-*`threat.enrichments.indicator.file.x509.subject.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`threat.enrichments.indicator.file.x509.subject.distinguished_name`*::
-+
---
-Distinguished name (DN) of the certificate subject entity.
-
-type: keyword
-
-example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
-
---
-
-*`threat.enrichments.indicator.file.x509.subject.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: San Francisco
-
---
-
-*`threat.enrichments.indicator.file.x509.subject.organization`*::
-+
---
-List of organizations (O) of subject.
-
-type: keyword
-
-example: Example, Inc.
-
---
-
-*`threat.enrichments.indicator.file.x509.subject.organizational_unit`*::
-+
---
-List of organizational units (OU) of subject.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.x509.subject.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`threat.enrichments.indicator.file.x509.version_number`*::
-+
---
-Version of x509 format.
-
-type: keyword
-
-example: 3
-
---
-
-*`threat.enrichments.indicator.first_seen`*::
-+
---
-The date and time when intelligence source first reported sighting this indicator.
-
-type: date
-
-example: 2020-11-05T17:25:47.000Z
-
---
-
-*`threat.enrichments.indicator.geo.city_name`*::
-+
---
-City name.
-
-type: keyword
-
-example: Montreal
-
---
-
-*`threat.enrichments.indicator.geo.continent_code`*::
-+
---
-Two-letter code representing the continent's name.
-
-type: keyword
-
-example: NA
-
---
-
-*`threat.enrichments.indicator.geo.continent_name`*::
-+
---
-Name of the continent.
-
-type: keyword
-
-example: North America
-
---
-
-*`threat.enrichments.indicator.geo.country_iso_code`*::
-+
---
-Country ISO code.
-
-type: keyword
-
-example: CA
-
---
-
-*`threat.enrichments.indicator.geo.country_name`*::
-+
---
-Country name.
-
-type: keyword
-
-example: Canada
-
---
-
-*`threat.enrichments.indicator.geo.location`*::
-+
---
-Longitude and latitude.
-
-type: geo_point
-
-example: { "lon": -73.614830, "lat": 45.505918 }
-
---
-
-*`threat.enrichments.indicator.geo.name`*::
-+
---
-User-defined description of a location, at whatever level of granularity the user cares about.
-Could be the name of a data center, the floor number (if this describes a local physical entity), or a city name.
-Not typically used in automated geolocation.
-
-type: keyword
-
-example: boston-dc
-
---
-
-*`threat.enrichments.indicator.geo.postal_code`*::
-+
---
-Postal code associated with the location.
-Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
-
-type: keyword
-
-example: 94040
-
---
-
-*`threat.enrichments.indicator.geo.region_iso_code`*::
-+
---
-Region ISO code.
-
-type: keyword
-
-example: CA-QC
-
---
-
-*`threat.enrichments.indicator.geo.region_name`*::
-+
---
-Region name.
-
-type: keyword
-
-example: Quebec
-
---
-
-*`threat.enrichments.indicator.geo.timezone`*::
-+
---
-The time zone of the location, such as IANA time zone name.
-
-type: keyword
-
-example: America/Argentina/Buenos_Aires
-
---
-
-*`threat.enrichments.indicator.ip`*::
-+
---
-Identifies a threat indicator as an IP address (irrespective of direction).
-
-type: ip
-
-example: 1.2.3.4
-
---
-
-*`threat.enrichments.indicator.last_seen`*::
-+
---
-The date and time when intelligence source last reported sighting this indicator.
-
-type: date
-
-example: 2020-11-05T17:25:47.000Z
-
---
-
-*`threat.enrichments.indicator.marking.tlp`*::
-+
---
-Traffic Light Protocol sharing markings. Recommended values are:
- * WHITE
- * GREEN
- * AMBER
- * RED
-
-type: keyword
-
-example: WHITE
-
---
-
-*`threat.enrichments.indicator.modified_at`*::
-+
---
-The date and time when intelligence source last modified information for this indicator.
-
-type: date
-
-example: 2020-11-05T17:25:47.000Z
-
---
-
-*`threat.enrichments.indicator.port`*::
-+
---
-Identifies a threat indicator as a port number (irrespective of direction).
-
-type: long
-
-example: 443
-
---
-
-*`threat.enrichments.indicator.provider`*::
-+
---
-The name of the indicator's provider.
-
-type: keyword
-
-example: lrz_urlhaus
-
---
-
-*`threat.enrichments.indicator.reference`*::
-+
---
-Reference URL linking to additional information about this indicator.
-
-type: keyword
-
-example: https://system.example.com/indicator/0001234
-
---
-
-*`threat.enrichments.indicator.registry.data.bytes`*::
-+
---
-Original bytes written with base64 encoding.
-For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed to by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values.
-
-type: keyword
-
-example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA=
-
---
-
-*`threat.enrichments.indicator.registry.data.strings`*::
-+
---
-Content when writing string types.
-Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of strings with REG_MULTI_SZ, this array will be of variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g. `"1"`).
-
-type: wildcard
-
-example: ["C:\rta\red_ttp\bin\myapp.exe"]
-
---
-
-*`threat.enrichments.indicator.registry.data.type`*::
-+
---
-Standard registry type for encoding contents
-
-type: keyword
-
-example: REG_SZ
-
---
-
-*`threat.enrichments.indicator.registry.hive`*::
-+
---
-Abbreviated name for the hive.
-
-type: keyword
-
-example: HKLM
-
---
-
-*`threat.enrichments.indicator.registry.key`*::
-+
---
-Hive-relative path of keys.
-
-type: keyword
-
-example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe
-
---
-
-*`threat.enrichments.indicator.registry.path`*::
-+
---
-Full path, including hive, key and value
-
-type: keyword
-
-example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger
-
---
-
-*`threat.enrichments.indicator.registry.value`*::
-+
---
-Name of the value written.
-
-type: keyword
-
-example: Debugger
-
---
-
-*`threat.enrichments.indicator.scanner_stats`*::
-+
---
-Count of AV/EDR vendors that successfully detected malicious file or URL.
-
-type: long
-
-example: 4
-
---
-
-*`threat.enrichments.indicator.sightings`*::
-+
---
-Number of times this indicator was observed conducting threat activity.
-
-type: long
-
-example: 20
-
---
-
-*`threat.enrichments.indicator.type`*::
-+
---
-Type of indicator as represented by Cyber Observable in STIX 2.0. Recommended values:
- * autonomous-system
- * artifact
- * directory
- * domain-name
- * email-addr
- * file
- * ipv4-addr
- * ipv6-addr
- * mac-addr
- * mutex
- * port
- * process
- * software
- * url
- * user-account
- * windows-registry-key
- * x509-certificate
-
-type: keyword
-
-example: ipv4-addr
-
---
-
-*`threat.enrichments.indicator.url.domain`*::
-+
---
-Domain of the url, such as "www.elastic.co".
-In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field.
-If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field.
-
-type: keyword
-
-example: www.elastic.co
-
---
-
-*`threat.enrichments.indicator.url.extension`*::
-+
---
-The field contains the file extension from the original request url, excluding the leading dot.
-The file extension is only set if it exists, as not every url has a file extension.
-The leading period must not be included. For example, the value must be "png", not ".png".
-Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz").
-
-type: keyword
-
-example: png
-
---
-
-*`threat.enrichments.indicator.url.fragment`*::
-+
---
-Portion of the url after the `#`, such as "top".
-The `#` is not part of the fragment.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.url.full`*::
-+
---
-If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source.
-
-type: wildcard
-
-example: https://www.elastic.co:443/search?q=elasticsearch#top
-
---
-
-*`threat.enrichments.indicator.url.full.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.enrichments.indicator.url.original`*::
-+
---
-Unmodified original url as seen in the event source.
-Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path.
-This field is meant to represent the URL as it was observed, complete or not.
-
-type: wildcard
-
-example: https://www.elastic.co:443/search?q=elasticsearch#top or /search?q=elasticsearch
-
---
-
-*`threat.enrichments.indicator.url.original.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.enrichments.indicator.url.password`*::
-+
---
-Password of the request.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.url.path`*::
-+
---
-Path of the request, such as "/search".
-
-type: wildcard
-
---
-
-*`threat.enrichments.indicator.url.port`*::
-+
---
-Port of the request, such as 443.
-
-type: long
-
-example: 443
-
-format: string
-
---
-
-*`threat.enrichments.indicator.url.query`*::
-+
---
-The query field describes the query string of the request, such as "q=elasticsearch".
-The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases.
-
-type: keyword
-
---
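-
-The following sketch (illustrative only, using Python's standard `urllib.parse`) shows how one URL decomposes into the `url.*` fields described in this section:
-
-[source,python]
-----
-from urllib.parse import urlsplit
-
-url = "https://www.elastic.co:443/search?q=elasticsearch#top"
-parts = urlsplit(url)
-url_fields = {
-    "original": url,
-    "scheme": parts.scheme,      # "https" (no trailing ":")
-    "domain": parts.hostname,    # "www.elastic.co"
-    "port": parts.port,          # 443
-    "path": parts.path,          # "/search"
-    "query": parts.query,        # "q=elasticsearch" (no leading "?")
-    "fragment": parts.fragment,  # "top" (no leading "#")
-}
-----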
-
-*`threat.enrichments.indicator.url.registered_domain`*::
-+
---
-The highest registered url domain, stripped of the subdomain.
-For example, the registered domain for "foo.example.com" is "example.com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
-
-type: keyword
-
-example: example.com
-
---
-
-*`threat.enrichments.indicator.url.scheme`*::
-+
---
-Scheme of the request, such as "https".
-Note: The `:` is not part of the scheme.
-
-type: keyword
-
-example: https
-
---
-
-*`threat.enrichments.indicator.url.subdomain`*::
-+
---
-The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain.
-For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
-
-type: keyword
-
-example: east
-
---
-
-*`threat.enrichments.indicator.url.top_level_domain`*::
-+
---
-The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
-
-type: keyword
-
-example: co.uk
-
---
-
-*`threat.enrichments.indicator.url.username`*::
-+
---
-Username of the request.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.x509.alternative_names`*::
-+
---
-List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
-
-type: keyword
-
-example: *.elastic.co
-
---
-
-*`threat.enrichments.indicator.x509.issuer.common_name`*::
-+
---
-List of common names (CN) of issuing certificate authority.
-
-type: keyword
-
-example: Example SHA2 High Assurance Server CA
-
---
-
-*`threat.enrichments.indicator.x509.issuer.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`threat.enrichments.indicator.x509.issuer.distinguished_name`*::
-+
---
-Distinguished name (DN) of issuing certificate authority.
-
-type: keyword
-
-example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
-
---
-
-*`threat.enrichments.indicator.x509.issuer.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: Mountain View
-
---
-
-*`threat.enrichments.indicator.x509.issuer.organization`*::
-+
---
-List of organizations (O) of issuing certificate authority.
-
-type: keyword
-
-example: Example Inc
-
---
-
-*`threat.enrichments.indicator.x509.issuer.organizational_unit`*::
-+
---
-List of organizational units (OU) of issuing certificate authority.
-
-type: keyword
-
-example: www.example.com
-
---
-
-*`threat.enrichments.indicator.x509.issuer.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`threat.enrichments.indicator.x509.not_after`*::
-+
---
-Time at which the certificate is no longer considered valid.
-
-type: date
-
-example: 2020-07-16 03:15:39+00:00
-
---
-
-*`threat.enrichments.indicator.x509.not_before`*::
-+
---
-Time at which the certificate is first considered valid.
-
-type: date
-
-example: 2019-08-16 01:40:25+00:00
-
---
-
-*`threat.enrichments.indicator.x509.public_key_algorithm`*::
-+
---
-Algorithm used to generate the public key.
-
-type: keyword
-
-example: RSA
-
---
-
-*`threat.enrichments.indicator.x509.public_key_curve`*::
-+
---
-The curve used by the elliptic curve public key algorithm. This is algorithm specific.
-
-type: keyword
-
-example: nistp521
-
---
-
-*`threat.enrichments.indicator.x509.public_key_exponent`*::
-+
---
-Exponent used to derive the public key. This is algorithm specific.
-
-type: long
-
-example: 65537
-
-Field is not indexed.
-
---
-
-*`threat.enrichments.indicator.x509.public_key_size`*::
-+
---
-The size of the public key space in bits.
-
-type: long
-
-example: 2048
-
---
-
-*`threat.enrichments.indicator.x509.serial_number`*::
-+
---
-Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and in uppercase.
-
-type: keyword
-
-example: 55FBB9C7DEBF09809D12CCAA
-
---
-
-*`threat.enrichments.indicator.x509.signature_algorithm`*::
-+
---
-Identifier for certificate signature algorithm. We recommend using names found in the Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353.
-
-type: keyword
-
-example: SHA256-RSA
-
---
-
-*`threat.enrichments.indicator.x509.subject.common_name`*::
-+
---
-List of common names (CN) of subject.
-
-type: keyword
-
-example: shared.global.example.net
-
---
-
-*`threat.enrichments.indicator.x509.subject.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`threat.enrichments.indicator.x509.subject.distinguished_name`*::
-+
---
-Distinguished name (DN) of the certificate subject entity.
-
-type: keyword
-
-example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
-
---
-
-*`threat.enrichments.indicator.x509.subject.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: San Francisco
-
---
-
-*`threat.enrichments.indicator.x509.subject.organization`*::
-+
---
-List of organizations (O) of subject.
-
-type: keyword
-
-example: Example, Inc.
-
---
-
-*`threat.enrichments.indicator.x509.subject.organizational_unit`*::
-+
---
-List of organizational units (OU) of subject.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.x509.subject.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`threat.enrichments.indicator.x509.version_number`*::
-+
---
-Version of x509 format.
-
-type: keyword
-
-example: 3
-
---
-
-*`threat.enrichments.matched.atomic`*::
-+
---
-Identifies the atomic indicator value that matched a local environment endpoint or network event.
-
-type: keyword
-
-example: bad-domain.com
-
---
-
-*`threat.enrichments.matched.field`*::
-+
---
-Identifies the field of the atomic indicator that matched a local environment endpoint or network event.
-
-type: keyword
-
-example: file.hash.sha256
-
---
-
-*`threat.enrichments.matched.id`*::
-+
---
-Identifies the _id of the indicator document enriching the event.
-
-type: keyword
-
-example: ff93aee5-86a1-4a61-b0e6-0cdc313d01b5
-
---
-
-*`threat.enrichments.matched.index`*::
-+
---
-Identifies the _index of the indicator document enriching the event.
-
-type: keyword
-
-example: filebeat-8.0.0-2021.05.23-000011
-
---
-
-*`threat.enrichments.matched.type`*::
-+
---
-Identifies the type of match that caused the event to be enriched with the given indicator.
-
-type: keyword
-
-example: indicator_match_rule
-
---
-
-*`threat.framework`*::
-+
---
-Name of the threat framework used to further categorize and classify the tactic and technique of the reported threat. Framework classification can be provided by detecting systems, evaluated at ingest time, or retrospectively tagged to events.
-
-type: keyword
-
-example: MITRE ATT&CK
-
---
-
-*`threat.group.alias`*::
-+
---
-The alias(es) of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group alias(es).
-
-type: keyword
-
-example: [ "Magecart Group 6" ]
-
---
-
-*`threat.group.id`*::
-+
---
-The id of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group id.
-
-type: keyword
-
-example: G0037
-
---
-
-*`threat.group.name`*::
-+
---
-The name of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group name.
-
-type: keyword
-
-example: FIN6
-
---
-
-*`threat.group.reference`*::
-+
---
-The reference URL of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group reference URL.
-
-type: keyword
-
-example: https://attack.mitre.org/groups/G0037/
-
---
-
-*`threat.indicator.as.number`*::
-+
---
-Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet.
-
-type: long
-
-example: 15169
-
---
-
-*`threat.indicator.as.organization.name`*::
-+
---
-Organization name.
-
-type: keyword
-
-example: Google LLC
-
---
-
-*`threat.indicator.as.organization.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.indicator.confidence`*::
-+
---
-Identifies the vendor-neutral confidence rating using the None/Low/Medium/High scale defined in Appendix A of the STIX 2.1 framework. Vendor-specific confidence scales may be added as custom fields.
-Expected values are:
- * Not Specified
- * None
- * Low
- * Medium
- * High
-
-type: keyword
-
-example: Medium
-
---
-
-*`threat.indicator.description`*::
-+
---
-Describes the type of action conducted by the threat.
-
-type: keyword
-
-example: IP x.x.x.x was observed delivering the Angler EK.
-
---
-
-*`threat.indicator.email.address`*::
-+
---
-Identifies a threat indicator as an email address (irrespective of direction).
-
-type: keyword
-
-example: phish@example.com
-
---
-
-*`threat.indicator.file.accessed`*::
-+
---
-Last time the file was accessed.
-Note that not all filesystems keep track of access time.
-
-type: date
-
---
-
-*`threat.indicator.file.attributes`*::
-+
---
-Array of file attributes.
-Attributes names will vary by platform. Here's a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write.
-
-type: keyword
-
-example: ["readonly", "system"]
-
---
-
-*`threat.indicator.file.code_signature.digest_algorithm`*::
-+
---
-The hashing algorithm used to sign the process.
-This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm.
-
-type: keyword
-
-example: sha256
-
---
-
-*`threat.indicator.file.code_signature.exists`*::
-+
---
-Boolean to capture if a signature is present.
-
-type: boolean
-
-example: true
-
---
-
-*`threat.indicator.file.code_signature.signing_id`*::
-+
---
-The identifier used to sign the process.
-This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: com.apple.xpc.proxy
-
---
-
-*`threat.indicator.file.code_signature.status`*::
-+
---
-Additional information about the certificate status.
-This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked.
-
-type: keyword
-
-example: ERROR_UNTRUSTED_ROOT
-
---
-
-*`threat.indicator.file.code_signature.subject_name`*::
-+
---
-Subject name of the code signer.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`threat.indicator.file.code_signature.team_id`*::
-+
---
-The team identifier used to sign the process.
-This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: EQHXZ8M8AV
-
---
-
-*`threat.indicator.file.code_signature.timestamp`*::
-+
---
-Date and time when the code signature was generated and signed.
-
-type: date
-
-example: 2021-01-01T12:10:30Z
-
---
-
-*`threat.indicator.file.code_signature.trusted`*::
-+
---
-Stores the trust status of the certificate chain.
-Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status.
-
-type: boolean
-
-example: true
-
---
-
-*`threat.indicator.file.code_signature.valid`*::
-+
---
-Boolean to capture if the digital signature is verified against the binary content.
-Leave unpopulated if a certificate was unchecked.
-
-type: boolean
-
-example: true
-
---
-
-*`threat.indicator.file.created`*::
-+
---
-File creation time.
-Note that not all filesystems store the creation time.
-
-type: date
-
---
-
-*`threat.indicator.file.ctime`*::
-+
---
-Last time the file attributes or metadata changed.
-Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file.
-
-type: date
-
---
-
-*`threat.indicator.file.device`*::
-+
---
-Device that is the source of the file.
-
-type: keyword
-
-example: sda
-
---
-
-*`threat.indicator.file.directory`*::
-+
---
-Directory where the file is located. It should include the drive letter, when appropriate.
-
-type: keyword
-
-example: /home/alice
-
---
-
-*`threat.indicator.file.drive_letter`*::
-+
---
-Drive letter where the file is located. This field is only relevant on Windows.
-The value should be uppercase, and not include the colon.
-
-type: keyword
-
-example: C
-
---
-
-*`threat.indicator.file.elf.architecture`*::
-+
---
-Machine architecture of the ELF file.
-
-type: keyword
-
-example: x86-64
-
---
-
-*`threat.indicator.file.elf.byte_order`*::
-+
---
-Byte sequence of ELF file.
-
-type: keyword
-
-example: Little Endian
-
---
-
-*`threat.indicator.file.elf.cpu_type`*::
-+
---
-CPU type of the ELF file.
-
-type: keyword
-
-example: Intel
-
---
-
-*`threat.indicator.file.elf.creation_date`*::
-+
---
-Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators.
-
-type: date
-
---
-
-*`threat.indicator.file.elf.exports`*::
-+
---
-List of exported element names and types.
-
-type: flattened
-
---
-
-*`threat.indicator.file.elf.header.abi_version`*::
-+
---
-Version of the ELF Application Binary Interface (ABI).
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.header.class`*::
-+
---
-Header class of the ELF file.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.header.data`*::
-+
---
-Data table of the ELF header.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.header.entrypoint`*::
-+
---
-Header entrypoint of the ELF file.
-
-type: long
-
-format: string
-
---
-
-*`threat.indicator.file.elf.header.object_version`*::
-+
---
-"0x1" for original ELF files.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.header.os_abi`*::
-+
---
-Application Binary Interface (ABI) of the Linux OS.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.header.type`*::
-+
---
-Header type of the ELF file.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.header.version`*::
-+
---
-Version of the ELF header.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.imports`*::
-+
---
-List of imported element names and types.
-
-type: flattened
-
---
-
-*`threat.indicator.file.elf.sections`*::
-+
---
-An array containing an object for each section of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`.
-
-type: nested
-
---
-
-*`threat.indicator.file.elf.sections.chi2`*::
-+
---
-Chi-square probability distribution of the section.
-
-type: long
-
-format: number
-
---
-
-*`threat.indicator.file.elf.sections.entropy`*::
-+
---
-Shannon entropy calculation from the section.
-
-type: long
-
-format: number
-
---
-
-*`threat.indicator.file.elf.sections.flags`*::
-+
---
-ELF Section List flags.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.sections.name`*::
-+
---
-ELF Section List name.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.sections.physical_offset`*::
-+
---
-ELF Section List offset.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.sections.physical_size`*::
-+
---
-ELF Section List physical size.
-
-type: long
-
-format: bytes
-
---
-
-*`threat.indicator.file.elf.sections.type`*::
-+
---
-ELF Section List type.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.sections.virtual_address`*::
-+
---
-ELF Section List virtual address.
-
-type: long
-
-format: string
-
---
-
-*`threat.indicator.file.elf.sections.virtual_size`*::
-+
---
-ELF Section List virtual size.
-
-type: long
-
-format: string
-
---
-
-*`threat.indicator.file.elf.segments`*::
-+
---
-An array containing an object for each segment of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`.
-
-type: nested
-
---
-
-*`threat.indicator.file.elf.segments.sections`*::
-+
---
-ELF object segment sections.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.segments.type`*::
-+
---
-ELF object segment type.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.shared_libraries`*::
-+
---
-List of shared libraries used by this ELF object.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.telfhash`*::
-+
---
-telfhash symbol hash for ELF file.
-
-type: keyword
-
---
-
-*`threat.indicator.file.extension`*::
-+
---
-File extension, excluding the leading dot.
-Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz").
-
-type: keyword
-
-example: png
-
---
-
-*`threat.indicator.file.fork_name`*::
-+
---
-A fork is additional data associated with a filesystem object.
-On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist.
-On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name.
-
-type: keyword
-
-example: Zone.Identifier
-
---
-
-*`threat.indicator.file.gid`*::
-+
---
-Primary group ID (GID) of the file.
-
-type: keyword
-
-example: 1001
-
---
-
-*`threat.indicator.file.group`*::
-+
---
-Primary group name of the file.
-
-type: keyword
-
-example: alice
-
---
-
-*`threat.indicator.file.hash.md5`*::
-+
---
-MD5 hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.hash.sha1`*::
-+
---
-SHA1 hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.hash.sha256`*::
-+
---
-SHA256 hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.hash.sha512`*::
-+
---
-SHA512 hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.hash.ssdeep`*::
-+
---
-SSDEEP hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.inode`*::
-+
---
-Inode representing the file in the filesystem.
-
-type: keyword
-
-example: 256383
-
---
-
-*`threat.indicator.file.mime_type`*::
-+
---
-MIME type should identify the format of the file or stream of bytes using https://www.iana.org/assignments/media-types/media-types.xhtml[IANA official types], where possible. When more than one type is applicable, the most specific type should be used.
-
-type: keyword
-
---
-
-*`threat.indicator.file.mode`*::
-+
---
-Mode of the file in octal representation.
-
-type: keyword
-
-example: 0640
-
---
-
-*`threat.indicator.file.mtime`*::
-+
---
-Last time the file content was modified.
-
-type: date
-
---
-
-*`threat.indicator.file.name`*::
-+
---
-Name of the file including the extension, without the directory.
-
-type: keyword
-
-example: example.png
-
---
-
-*`threat.indicator.file.owner`*::
-+
---
-File owner's username.
-
-type: keyword
-
-example: alice
-
---
-
-*`threat.indicator.file.path`*::
-+
---
-Full path to the file, including the file name. It should include the drive letter, when appropriate.
-
-type: keyword
-
-example: /home/alice/example.png
-
---
-
-*`threat.indicator.file.path.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.indicator.file.pe.architecture`*::
-+
---
-CPU architecture target for the file.
-
-type: keyword
-
-example: x64
-
---
-
-*`threat.indicator.file.pe.company`*::
-+
---
-Internal company name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`threat.indicator.file.pe.description`*::
-+
---
-Internal description of the file, provided at compile-time.
-
-type: keyword
-
-example: Paint
-
---
-
-*`threat.indicator.file.pe.file_version`*::
-+
---
-Internal version of the file, provided at compile-time.
-
-type: keyword
-
-example: 6.3.9600.17415
-
---
-
-*`threat.indicator.file.pe.imphash`*::
-+
---
-A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html.
-
-type: keyword
-
-example: 0c6803c4e922103c4dca5963aad36ddf
-
---
-
-*`threat.indicator.file.pe.original_file_name`*::
-+
---
-Internal name of the file, provided at compile-time.
-
-type: keyword
-
-example: MSPAINT.EXE
-
---
-
-*`threat.indicator.file.pe.product`*::
-+
---
-Internal product name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft® Windows® Operating System
-
---
-
-*`threat.indicator.file.size`*::
-+
---
-File size in bytes.
-Only relevant when `file.type` is "file".
-
-type: long
-
-example: 16384
-
---
-
-*`threat.indicator.file.target_path`*::
-+
---
-Target path for symlinks.
-
-type: keyword
-
---
-
-*`threat.indicator.file.target_path.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.indicator.file.type`*::
-+
---
-File type (file, dir, or symlink).
-
-type: keyword
-
-example: file
-
---
-
-*`threat.indicator.file.uid`*::
-+
---
-The user ID (UID) or security identifier (SID) of the file owner.
-
-type: keyword
-
-example: 1001
-
---
-
-*`threat.indicator.file.x509.alternative_names`*::
-+
---
-List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
-
-type: keyword
-
-example: *.elastic.co
-
---
-
-*`threat.indicator.file.x509.issuer.common_name`*::
-+
---
-List of common names (CN) of issuing certificate authority.
-
-type: keyword
-
-example: Example SHA2 High Assurance Server CA
-
---
-
-*`threat.indicator.file.x509.issuer.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`threat.indicator.file.x509.issuer.distinguished_name`*::
-+
---
-Distinguished name (DN) of issuing certificate authority.
-
-type: keyword
-
-example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
-
---
-
-*`threat.indicator.file.x509.issuer.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: Mountain View
-
---
-
-*`threat.indicator.file.x509.issuer.organization`*::
-+
---
-List of organizations (O) of issuing certificate authority.
-
-type: keyword
-
-example: Example Inc
-
---
-
-*`threat.indicator.file.x509.issuer.organizational_unit`*::
-+
---
-List of organizational units (OU) of issuing certificate authority.
-
-type: keyword
-
-example: www.example.com
-
---
-
-*`threat.indicator.file.x509.issuer.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`threat.indicator.file.x509.not_after`*::
-+
---
-Time at which the certificate is no longer considered valid.
-
-type: date
-
-example: 2020-07-16 03:15:39+00:00
-
---
-
-*`threat.indicator.file.x509.not_before`*::
-+
---
-Time at which the certificate is first considered valid.
-
-type: date
-
-example: 2019-08-16 01:40:25+00:00
-
---
-
-*`threat.indicator.file.x509.public_key_algorithm`*::
-+
---
-Algorithm used to generate the public key.
-
-type: keyword
-
-example: RSA
-
---
-
-*`threat.indicator.file.x509.public_key_curve`*::
-+
---
-The curve used by the elliptic curve public key algorithm. This is algorithm specific.
-
-type: keyword
-
-example: nistp521
-
---
-
-*`threat.indicator.file.x509.public_key_exponent`*::
-+
---
-Exponent used to derive the public key. This is algorithm specific.
-
-type: long
-
-example: 65537
-
-Field is not indexed.
-
---
-
-*`threat.indicator.file.x509.public_key_size`*::
-+
---
-The size of the public key space in bits.
-
-type: long
-
-example: 2048
-
---
-
-*`threat.indicator.file.x509.serial_number`*::
-+
---
-Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and in uppercase.
-
-type: keyword
-
-example: 55FBB9C7DEBF09809D12CCAA
-
---
-
-*`threat.indicator.file.x509.signature_algorithm`*::
-+
---
-Identifier for certificate signature algorithm. We recommend using names found in the Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353.
-
-type: keyword
-
-example: SHA256-RSA
-
---
-
-*`threat.indicator.file.x509.subject.common_name`*::
-+
---
-List of common names (CN) of subject.
-
-type: keyword
-
-example: shared.global.example.net
-
---
-
-*`threat.indicator.file.x509.subject.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`threat.indicator.file.x509.subject.distinguished_name`*::
-+
---
-Distinguished name (DN) of the certificate subject entity.
-
-type: keyword
-
-example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
-
---
-
-*`threat.indicator.file.x509.subject.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: San Francisco
-
---
-
-*`threat.indicator.file.x509.subject.organization`*::
-+
---
-List of organizations (O) of subject.
-
-type: keyword
-
-example: Example, Inc.
-
---
-
-*`threat.indicator.file.x509.subject.organizational_unit`*::
-+
---
-List of organizational units (OU) of subject.
-
-type: keyword
-
---
-
-*`threat.indicator.file.x509.subject.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`threat.indicator.file.x509.version_number`*::
-+
---
-Version of x509 format.
-
-type: keyword
-
-example: 3
-
---
-
-*`threat.indicator.first_seen`*::
-+
---
-The date and time when intelligence source first reported sighting this indicator.
-
-type: date
-
-example: 2020-11-05T17:25:47.000Z
-
---
-
-*`threat.indicator.geo.city_name`*::
-+
---
-City name.
-
-type: keyword
-
-example: Montreal
-
---
-
-*`threat.indicator.geo.continent_code`*::
-+
---
-Two-letter code representing the continent's name.
-
-type: keyword
-
-example: NA
-
---
-
-*`threat.indicator.geo.continent_name`*::
-+
---
-Name of the continent.
-
-type: keyword
-
-example: North America
-
---
-
-*`threat.indicator.geo.country_iso_code`*::
-+
---
-Country ISO code.
-
-type: keyword
-
-example: CA
-
---
-
-*`threat.indicator.geo.country_name`*::
-+
---
-Country name.
-
-type: keyword
-
-example: Canada
-
---
-
-*`threat.indicator.geo.location`*::
-+
---
-Longitude and latitude.
-
-type: geo_point
-
-example: { "lon": -73.614830, "lat": 45.505918 }
-
---
-
-*`threat.indicator.geo.name`*::
-+
---
-User-defined description of a location, at the level of granularity they care about.
-Could be the name of their data centers, the floor number, if this describes a local physical entity, city names.
-Not typically used in automated geolocation.
-
-type: keyword
-
-example: boston-dc
-
---
-
-*`threat.indicator.geo.postal_code`*::
-+
---
-Postal code associated with the location.
-Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
-
-type: keyword
-
-example: 94040
-
---
-
-*`threat.indicator.geo.region_iso_code`*::
-+
---
-Region ISO code.
-
-type: keyword
-
-example: CA-QC
-
---
-
-*`threat.indicator.geo.region_name`*::
-+
---
-Region name.
-
-type: keyword
-
-example: Quebec
-
---
-
-*`threat.indicator.geo.timezone`*::
-+
---
-The time zone of the location, such as IANA time zone name.
-
-type: keyword
-
-example: America/Argentina/Buenos_Aires
-
---
-
-*`threat.indicator.ip`*::
-+
---
-Identifies a threat indicator as an IP address (irrespective of direction).
-
-type: ip
-
-example: 1.2.3.4
-
---
-
-*`threat.indicator.last_seen`*::
-+
---
-The date and time when intelligence source last reported sighting this indicator.
-
-type: date
-
-example: 2020-11-05T17:25:47.000Z
-
---
-
-*`threat.indicator.marking.tlp`*::
-+
---
-Traffic Light Protocol sharing markings.
-Recommended values are:
- * WHITE
- * GREEN
- * AMBER
- * RED
-
-type: keyword
-
-example: WHITE
-
---
-
-*`threat.indicator.modified_at`*::
-+
---
-The date and time when intelligence source last modified information for this indicator.
-
-type: date
-
-example: 2020-11-05T17:25:47.000Z
-
---
-
-*`threat.indicator.port`*::
-+
---
-Identifies a threat indicator as a port number (irrespective of direction).
-
-type: long
-
-example: 443
-
---
-
-*`threat.indicator.provider`*::
-+
---
-The name of the indicator's provider.
-
-type: keyword
-
-example: lrz_urlhaus
-
---
-
-*`threat.indicator.reference`*::
-+
---
-Reference URL linking to additional information about this indicator.
-
-type: keyword
-
-example: https://system.example.com/indicator/0001234
-
---
-
-*`threat.indicator.registry.data.bytes`*::
-+
---
-Original bytes written with base64 encoding.
-For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values.
-
-type: keyword
-
-example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA=
-
---
-
-*`threat.indicator.registry.data.strings`*::
-+
---
-Content when writing string types.
-Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of string with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g `"1"`).
-
-type: wildcard
-
-example: ["C:\rta\red_ttp\bin\myapp.exe"]
-
---
-
-*`threat.indicator.registry.data.type`*::
-+
---
-Standard registry type for encoding contents.
-
-type: keyword
-
-example: REG_SZ
-
---
-
-*`threat.indicator.registry.hive`*::
-+
---
-Abbreviated name for the hive.
-
-type: keyword
-
-example: HKLM
-
---
-
-*`threat.indicator.registry.key`*::
-+
---
-Hive-relative path of keys.
-
-type: keyword
-
-example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe
-
---
-
-*`threat.indicator.registry.path`*::
-+
---
-Full path, including hive, key and value.
-
-type: keyword
-
-example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger
-
---
-
-*`threat.indicator.registry.value`*::
-+
---
-Name of the value written.
-
-type: keyword
-
-example: Debugger
-
---
-
-*`threat.indicator.scanner_stats`*::
-+
---
-Count of AV/EDR vendors that successfully detected malicious file or URL.
-
-type: long
-
-example: 4
-
---
-
-*`threat.indicator.sightings`*::
-+
---
-Number of times this indicator was observed conducting threat activity.
-
-type: long
-
-example: 20
-
---
-
-*`threat.indicator.type`*::
-+
---
-Type of indicator as represented by Cyber Observable in STIX 2.0.
-Recommended values:
- * autonomous-system
- * artifact
- * directory
- * domain-name
- * email-addr
- * file
- * ipv4-addr
- * ipv6-addr
- * mac-addr
- * mutex
- * port
- * process
- * software
- * url
- * user-account
- * windows-registry-key
- * x509-certificate
-
-type: keyword
-
-example: ipv4-addr
-
---
-
-*`threat.indicator.url.domain`*::
-+
---
-Domain of the url, such as "www.elastic.co".
-In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field.
-If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field.
-
-type: keyword
-
-example: www.elastic.co
-
---
-
-*`threat.indicator.url.extension`*::
-+
---
-The field contains the file extension from the original request url, excluding the leading dot.
-The file extension is only set if it exists, as not every url has a file extension.
-The leading period must not be included. For example, the value must be "png", not ".png".
-Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz").
-
-type: keyword
-
-example: png
-
---
-
-*`threat.indicator.url.fragment`*::
-+
---
-Portion of the url after the `#`, such as "top".
-The `#` is not part of the fragment.
-
-type: keyword
-
---
-
-*`threat.indicator.url.full`*::
-+
---
-If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source.
-
-type: wildcard
-
-example: https://www.elastic.co:443/search?q=elasticsearch#top
-
---
-
-*`threat.indicator.url.full.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.indicator.url.original`*::
-+
---
-Unmodified original url as seen in the event source.
-Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path.
-This field is meant to represent the URL as it was observed, complete or not.
-
-type: wildcard
-
-example: https://www.elastic.co:443/search?q=elasticsearch#top or /search?q=elasticsearch
-
---
-
-*`threat.indicator.url.original.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.indicator.url.password`*::
-+
---
-Password of the request.
-
-type: keyword
-
---
-
-*`threat.indicator.url.path`*::
-+
---
-Path of the request, such as "/search".
-
-type: wildcard
-
---
-
-*`threat.indicator.url.port`*::
-+
---
-Port of the request, such as 443.
-
-type: long
-
-example: 443
-
-format: string
-
---
-
-*`threat.indicator.url.query`*::
-+
---
-The query field describes the query string of the request, such as "q=elasticsearch".
-The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases.
-
-type: keyword
-
---
-
-*`threat.indicator.url.registered_domain`*::
-+
---
-The highest registered url domain, stripped of the subdomain.
-For example, the registered domain for "foo.example.com" is "example.com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
-
-type: keyword
-
-example: example.com
-
---
-
-*`threat.indicator.url.scheme`*::
-+
---
-Scheme of the request, such as "https".
-Note: The `:` is not part of the scheme.
-
-type: keyword
-
-example: https
-
---
-
-*`threat.indicator.url.subdomain`*::
-+
---
-The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain.
-For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
-
-type: keyword
-
-example: east
-
---
-
-*`threat.indicator.url.top_level_domain`*::
-+
---
-The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
-
-type: keyword
-
-example: co.uk
-
---
-
-*`threat.indicator.url.username`*::
-+
---
-Username of the request.
-
-type: keyword
-
---
-
-*`threat.indicator.x509.alternative_names`*::
-+
---
-List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
-
-type: keyword
-
-example: *.elastic.co
-
---
-
-*`threat.indicator.x509.issuer.common_name`*::
-+
---
-List of common names (CN) of issuing certificate authority.
-
-type: keyword
-
-example: Example SHA2 High Assurance Server CA
-
---
-
-*`threat.indicator.x509.issuer.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`threat.indicator.x509.issuer.distinguished_name`*::
-+
---
-Distinguished name (DN) of issuing certificate authority.
-
-type: keyword
-
-example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
-
---
-
-*`threat.indicator.x509.issuer.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: Mountain View
-
---
-
-*`threat.indicator.x509.issuer.organization`*::
-+
---
-List of organizations (O) of issuing certificate authority.
-
-type: keyword
-
-example: Example Inc
-
---
-
-*`threat.indicator.x509.issuer.organizational_unit`*::
-+
---
-List of organizational units (OU) of issuing certificate authority.
-
-type: keyword
-
-example: www.example.com
-
---
-
-*`threat.indicator.x509.issuer.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`threat.indicator.x509.not_after`*::
-+
---
-Time at which the certificate is no longer considered valid.
-
-type: date
-
-example: 2020-07-16 03:15:39+00:00
-
---
-
-*`threat.indicator.x509.not_before`*::
-+
---
-Time at which the certificate is first considered valid.
-
-type: date
-
-example: 2019-08-16 01:40:25+00:00
-
---
-
-*`threat.indicator.x509.public_key_algorithm`*::
-+
---
-Algorithm used to generate the public key.
-
-type: keyword
-
-example: RSA
-
---
-
-*`threat.indicator.x509.public_key_curve`*::
-+
---
-The curve used by the elliptic curve public key algorithm. This is algorithm specific.
-
-type: keyword
-
-example: nistp521
-
---
-
-*`threat.indicator.x509.public_key_exponent`*::
-+
---
-Exponent used to derive the public key. This is algorithm specific.
-
-type: long
-
-example: 65537
-
-Field is not indexed.
-
---
-
-*`threat.indicator.x509.public_key_size`*::
-+
---
-The size of the public key space in bits.
-
-type: long
-
-example: 2048
-
---
-
-*`threat.indicator.x509.serial_number`*::
-+
---
-Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and in uppercase.
-
-type: keyword
-
-example: 55FBB9C7DEBF09809D12CCAA
-
---
-
-*`threat.indicator.x509.signature_algorithm`*::
-+
---
-Identifier for certificate signature algorithm. We recommend using names found in the Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353.
-
-type: keyword
-
-example: SHA256-RSA
-
---
-
-*`threat.indicator.x509.subject.common_name`*::
-+
---
-List of common names (CN) of subject.
-
-type: keyword
-
-example: shared.global.example.net
-
---
-
-*`threat.indicator.x509.subject.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`threat.indicator.x509.subject.distinguished_name`*::
-+
---
-Distinguished name (DN) of the certificate subject entity.
-
-type: keyword
-
-example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
-
---
-
-*`threat.indicator.x509.subject.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: San Francisco
-
---
-
-*`threat.indicator.x509.subject.organization`*::
-+
---
-List of organizations (O) of subject.
-
-type: keyword
-
-example: Example, Inc.
-
---
-
-*`threat.indicator.x509.subject.organizational_unit`*::
-+
---
-List of organizational units (OU) of subject.
-
-type: keyword
-
---
-
-*`threat.indicator.x509.subject.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`threat.indicator.x509.version_number`*::
-+
---
-Version of x509 format.
-
-type: keyword
-
-example: 3
-
---
-
-*`threat.software.alias`*::
-+
---
-The alias(es) of the software for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® associated software description.
-
-type: keyword
-
-example: [ "X-Agent" ]
-
---
-
-*`threat.software.id`*::
-+
---
-The id of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®.
-While not required, you can use a MITRE ATT&CK® software id.
-
-type: keyword
-
-example: S0552
-
---
-
-*`threat.software.name`*::
-+
---
-The name of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®.
-While not required, you can use a MITRE ATT&CK® software name.
-
-type: keyword
-
-example: AdFind
-
---
-
-*`threat.software.platforms`*::
-+
---
-The platforms of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®.
-Recommended Values:
- * AWS
- * Azure
- * Azure AD
- * GCP
- * Linux
- * macOS
- * Network
- * Office 365
- * SaaS
- * Windows
-
-While not required, you can use MITRE ATT&CK® software platforms.
-
-type: keyword
-
-example: [ "Windows" ]
-
---
-
-*`threat.software.reference`*::
-+
---
-The reference URL of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®.
-While not required, you can use a MITRE ATT&CK® software reference URL.
-
-type: keyword
-
-example: https://attack.mitre.org/software/S0552/
-
---
-
-*`threat.software.type`*::
-+
---
-The type of software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®.
-Recommended values:
- * Malware
- * Tool
-
-While not required, you can use a MITRE ATT&CK® software type.
-
-type: keyword
-
-example: Tool
-
---
-
-*`threat.tactic.id`*::
-+
---
-The id of the tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example: https://attack.mitre.org/tactics/TA0002/
-
-type: keyword
-
-example: TA0002
-
---
-
-*`threat.tactic.name`*::
-+
---
-Name of the type of tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example: https://attack.mitre.org/tactics/TA0002/
-
-type: keyword
-
-example: Execution
-
---
-
-*`threat.tactic.reference`*::
-+
---
-The reference URL of the tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example: https://attack.mitre.org/tactics/TA0002/
-
-type: keyword
-
-example: https://attack.mitre.org/tactics/TA0002/
-
---
-
-*`threat.technique.id`*::
-+
---
-The id of the technique used by this threat. You can use a MITRE ATT&CK® technique, for example: https://attack.mitre.org/techniques/T1059/
-
-type: keyword
-
-example: T1059
-
---
-
-*`threat.technique.name`*::
-+
---
-The name of the technique used by this threat. You can use a MITRE ATT&CK® technique, for example: https://attack.mitre.org/techniques/T1059/
-
-type: keyword
-
-example: Command and Scripting Interpreter
-
---
-
-*`threat.technique.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.technique.reference`*::
-+
---
-The reference URL of the technique used by this threat. You can use a MITRE ATT&CK® technique, for example: https://attack.mitre.org/techniques/T1059/
-
-type: keyword
-
-example: https://attack.mitre.org/techniques/T1059/
-
---
-
-*`threat.technique.subtechnique.id`*::
-+
---
-The full id of the subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example: https://attack.mitre.org/techniques/T1059/001/
-
-type: keyword
-
-example: T1059.001
-
---
-
-*`threat.technique.subtechnique.name`*::
-+
---
-The name of the subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example: https://attack.mitre.org/techniques/T1059/001/
-
-type: keyword
-
-example: PowerShell
-
---
-
-*`threat.technique.subtechnique.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.technique.subtechnique.reference`*::
-+
---
-The reference URL of the subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example: https://attack.mitre.org/techniques/T1059/001/
-
-type: keyword
-
-example: https://attack.mitre.org/techniques/T1059/001/
-
---
-
-[float]
-=== tls
-
-Fields related to a TLS connection. These fields focus on the TLS protocol itself and intentionally avoid in-depth analysis of the related x.509 certificate files.
-
-
-*`tls.cipher`*::
-+
---
-String indicating the cipher used during the current connection.
-
-type: keyword
-
-example: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
-
---
-
-*`tls.client.certificate`*::
-+
---
-PEM-encoded stand-alone certificate offered by the client. This is usually mutually-exclusive of `client.certificate_chain` since this value also exists in that list.
-
-type: keyword
-
-example: MII...
-
---
-
-*`tls.client.certificate_chain`*::
-+
---
-Array of PEM-encoded certificates that make up the certificate chain offered by the client. This is usually mutually-exclusive of `client.certificate` since that value should be the first certificate in the chain.
-
-type: keyword
-
-example: ["MII...", "MII..."]
-
---
-
-*`tls.client.hash.md5`*::
-+
---
-Certificate fingerprint using the MD5 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash.
-
-type: keyword
-
-example: 0F76C7F2C55BFD7D8E8B8F4BFBF0C9EC
-
---
-
-*`tls.client.hash.sha1`*::
-+
---
-Certificate fingerprint using the SHA1 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash.
-
-type: keyword
-
-example: 9E393D93138888D288266C2D915214D1D1CCEB2A
-
---
-
-*`tls.client.hash.sha256`*::
-+
---
-Certificate fingerprint using the SHA256 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash.
-
-type: keyword
-
-example: 0687F666A054EF17A08E2F2162EAB4CBC0D265E1D7875BE74BF3C712CA92DAF0
-
---
-
-*`tls.client.issuer`*::
-+
---
-Distinguished name of subject of the issuer of the x.509 certificate presented by the client.
-
-type: keyword
-
-example: CN=Example Root CA, OU=Infrastructure Team, DC=example, DC=com
-
---
-
-*`tls.client.ja3`*::
-+
---
-A hash that identifies clients based on how they perform an SSL/TLS handshake.
-
-type: keyword
-
-example: d4e5b18d6b55c71272893221c96ba240
-
---
-
-*`tls.client.not_after`*::
-+
---
-Date/Time indicating when client certificate is no longer considered valid.
-
-type: date
-
-example: 2021-01-01T00:00:00.000Z
-
---
-
-*`tls.client.not_before`*::
-+
---
-Date/Time indicating when client certificate is first considered valid.
-
-type: date
-
-example: 1970-01-01T00:00:00.000Z
-
---
-
-*`tls.client.server_name`*::
-+
---
-Also called an SNI, this tells the server the hostname to which the client is attempting to connect. When this value is available, it should be copied to `destination.domain`.
-
-type: keyword
-
-example: www.elastic.co
-
---
-
-*`tls.client.subject`*::
-+
---
-Distinguished name of subject of the x.509 certificate presented by the client.
-
-type: keyword
-
-example: CN=myclient, OU=Documentation Team, DC=example, DC=com
-
---
-
-*`tls.client.supported_ciphers`*::
-+
---
-Array of ciphers offered by the client during the client hello.
-
-type: keyword
-
-example: ["TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "..."]
-
---
-
-*`tls.client.x509.alternative_names`*::
-+
---
-List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
-
-type: keyword
-
-example: *.elastic.co
-
---
-
-*`tls.client.x509.issuer.common_name`*::
-+
---
-List of common names (CN) of issuing certificate authority.
-
-type: keyword
-
-example: Example SHA2 High Assurance Server CA
-
---
-
-*`tls.client.x509.issuer.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`tls.client.x509.issuer.distinguished_name`*::
-+
---
-Distinguished name (DN) of issuing certificate authority.
-
-type: keyword
-
-example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
-
---
-
-*`tls.client.x509.issuer.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: Mountain View
-
---
-
-*`tls.client.x509.issuer.organization`*::
-+
---
-List of organizations (O) of issuing certificate authority.
-
-type: keyword
-
-example: Example Inc
-
---
-
-*`tls.client.x509.issuer.organizational_unit`*::
-+
---
-List of organizational units (OU) of issuing certificate authority.
-
-type: keyword
-
-example: www.example.com
-
---
-
-*`tls.client.x509.issuer.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`tls.client.x509.not_after`*::
-+
---
-Time at which the certificate is no longer considered valid.
-
-type: date
-
-example: 2020-07-16 03:15:39+00:00
-
---
-
-*`tls.client.x509.not_before`*::
-+
---
-Time at which the certificate is first considered valid.
-
-type: date
-
-example: 2019-08-16 01:40:25+00:00
-
---
-
-*`tls.client.x509.public_key_algorithm`*::
-+
---
-Algorithm used to generate the public key.
-
-type: keyword
-
-example: RSA
-
---
-
-*`tls.client.x509.public_key_curve`*::
-+
---
-The curve used by the elliptic curve public key algorithm. This is algorithm specific.
-
-type: keyword
-
-example: nistp521
-
---
-
-*`tls.client.x509.public_key_exponent`*::
-+
---
-Exponent used to derive the public key. This is algorithm specific.
-
-type: long
-
-example: 65537
-
-Field is not indexed.
-
---
-
-*`tls.client.x509.public_key_size`*::
-+
---
-The size of the public key space in bits.
-
-type: long
-
-example: 2048
-
---
-
-*`tls.client.x509.serial_number`*::
-+
---
-Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and in uppercase.
-
-type: keyword
-
-example: 55FBB9C7DEBF09809D12CCAA
-
---
-
-*`tls.client.x509.signature_algorithm`*::
-+
---
-Identifier for certificate signature algorithm. We recommend using names found in the Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353.
-
-type: keyword
-
-example: SHA256-RSA
-
---
-
-*`tls.client.x509.subject.common_name`*::
-+
---
-List of common names (CN) of subject.
-
-type: keyword
-
-example: shared.global.example.net
-
---
-
-*`tls.client.x509.subject.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`tls.client.x509.subject.distinguished_name`*::
-+
---
-Distinguished name (DN) of the certificate subject entity.
-
-type: keyword
-
-example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
-
---
-
-*`tls.client.x509.subject.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: San Francisco
-
---
-
-*`tls.client.x509.subject.organization`*::
-+
---
-List of organizations (O) of subject.
-
-type: keyword
-
-example: Example, Inc.
-
---
-
-*`tls.client.x509.subject.organizational_unit`*::
-+
---
-List of organizational units (OU) of subject.
-
-type: keyword
-
---
-
-*`tls.client.x509.subject.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`tls.client.x509.version_number`*::
-+
---
-Version of x509 format.
-
-type: keyword
-
-example: 3
-
---
-
-*`tls.curve`*::
-+
---
-String indicating the curve used for the given cipher, when applicable.
-
-type: keyword
-
-example: secp256r1
-
---
-
-*`tls.established`*::
-+
---
-Boolean flag indicating if the TLS negotiation was successful and transitioned to an encrypted tunnel.
-
-type: boolean
-
---
-
-*`tls.next_protocol`*::
-+
---
-String indicating the protocol being tunneled. Per the values in the IANA registry (https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids), this string should be lower case.
-
-type: keyword
-
-example: http/1.1
-
---
-
-*`tls.resumed`*::
-+
---
-Boolean flag indicating if this TLS connection was resumed from an existing TLS negotiation.
-
-type: boolean
-
---
-
-*`tls.server.certificate`*::
-+
---
-PEM-encoded stand-alone certificate offered by the server. This is usually mutually-exclusive of `server.certificate_chain` since this value also exists in that list.
-
-type: keyword
-
-example: MII...
-
---
-
-*`tls.server.certificate_chain`*::
-+
---
-Array of PEM-encoded certificates that make up the certificate chain offered by the server. This is usually mutually-exclusive of `server.certificate` since that value should be the first certificate in the chain.
-
-type: keyword
-
-example: ["MII...", "MII..."]
-
---
-
-*`tls.server.hash.md5`*::
-+
---
-Certificate fingerprint using the MD5 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash.
-
-type: keyword
-
-example: 0F76C7F2C55BFD7D8E8B8F4BFBF0C9EC
-
---
-
-*`tls.server.hash.sha1`*::
-+
---
-Certificate fingerprint using the SHA1 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash.
-
-type: keyword
-
-example: 9E393D93138888D288266C2D915214D1D1CCEB2A
-
---
-
-*`tls.server.hash.sha256`*::
-+
---
-Certificate fingerprint using the SHA256 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash.
-
-type: keyword
-
-example: 0687F666A054EF17A08E2F2162EAB4CBC0D265E1D7875BE74BF3C712CA92DAF0
-
---
-
-*`tls.server.issuer`*::
-+
---
-Subject of the issuer of the x.509 certificate presented by the server.
-
-type: keyword
-
-example: CN=Example Root CA, OU=Infrastructure Team, DC=example, DC=com
-
---
-
-*`tls.server.ja3s`*::
-+
---
-A hash that identifies servers based on how they perform an SSL/TLS handshake.
-
-type: keyword
-
-example: 394441ab65754e2207b1e1b457b3641d
-
---
-
-*`tls.server.not_after`*::
-+
---
-Timestamp indicating when server certificate is no longer considered valid.
-
-type: date
-
-example: 2021-01-01T00:00:00.000Z
-
---
-
-*`tls.server.not_before`*::
-+
---
-Timestamp indicating when server certificate is first considered valid.
-
-type: date
-
-example: 1970-01-01T00:00:00.000Z
-
---
-
-*`tls.server.subject`*::
-+
---
-Subject of the x.509 certificate presented by the server.
-
-type: keyword
-
-example: CN=www.example.com, OU=Infrastructure Team, DC=example, DC=com
-
---
-
-*`tls.server.x509.alternative_names`*::
-+
---
-List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
-
-type: keyword
-
-example: *.elastic.co
-
---
-
-*`tls.server.x509.issuer.common_name`*::
-+
---
-List of common names (CN) of issuing certificate authority.
-
-type: keyword
-
-example: Example SHA2 High Assurance Server CA
-
---
-
-*`tls.server.x509.issuer.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`tls.server.x509.issuer.distinguished_name`*::
-+
---
-Distinguished name (DN) of issuing certificate authority.
-
-type: keyword
-
-example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
-
---
-
-*`tls.server.x509.issuer.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: Mountain View
-
---
-
-*`tls.server.x509.issuer.organization`*::
-+
---
-List of organizations (O) of issuing certificate authority.
-
-type: keyword
-
-example: Example Inc
-
---
-
-*`tls.server.x509.issuer.organizational_unit`*::
-+
---
-List of organizational units (OU) of issuing certificate authority.
-
-type: keyword
-
-example: www.example.com
-
---
-
-*`tls.server.x509.issuer.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`tls.server.x509.not_after`*::
-+
---
-Time at which the certificate is no longer considered valid.
-
-type: date
-
-example: 2020-07-16 03:15:39+00:00
-
---
-
-*`tls.server.x509.not_before`*::
-+
---
-Time at which the certificate is first considered valid.
-
-type: date
-
-example: 2019-08-16 01:40:25+00:00
-
---
-
-*`tls.server.x509.public_key_algorithm`*::
-+
---
-Algorithm used to generate the public key.
-
-type: keyword
-
-example: RSA
-
---
-
-*`tls.server.x509.public_key_curve`*::
-+
---
-The curve used by the elliptic curve public key algorithm. This is algorithm specific.
-
-type: keyword
-
-example: nistp521
-
---
-
-*`tls.server.x509.public_key_exponent`*::
-+
---
-Exponent used to derive the public key. This is algorithm specific.
-
-type: long
-
-example: 65537
-
-Field is not indexed.
-
---
-
-*`tls.server.x509.public_key_size`*::
-+
---
-The size of the public key space in bits.
-
-type: long
-
-example: 2048
-
---
-
-*`tls.server.x509.serial_number`*::
-+
---
-Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters.
-
-type: keyword
-
-example: 55FBB9C7DEBF09809D12CCAA
-
---
-
-*`tls.server.x509.signature_algorithm`*::
-+
---
-Identifier for certificate signature algorithm. We recommend using names found in the Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353.
-
-type: keyword
-
-example: SHA256-RSA
-
---
-
-*`tls.server.x509.subject.common_name`*::
-+
---
-List of common names (CN) of subject.
-
-type: keyword
-
-example: shared.global.example.net
-
---
-
-*`tls.server.x509.subject.country`*::
-+
---
-List of country (C) codes.
-
-type: keyword
-
-example: US
-
---
-
-*`tls.server.x509.subject.distinguished_name`*::
-+
---
-Distinguished name (DN) of the certificate subject entity.
-
-type: keyword
-
-example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
-
---
-
-*`tls.server.x509.subject.locality`*::
-+
---
-List of locality names (L).
-
-type: keyword
-
-example: San Francisco
-
---
-
-*`tls.server.x509.subject.organization`*::
-+
---
-List of organizations (O) of subject.
-
-type: keyword
-
-example: Example, Inc.
-
---
-
-*`tls.server.x509.subject.organizational_unit`*::
-+
---
-List of organizational units (OU) of subject.
-
-type: keyword
-
---
-
-*`tls.server.x509.subject.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P).
-
-type: keyword
-
-example: California
-
---
-
-*`tls.server.x509.version_number`*::
-+
---
-Version of x509 format.
-
-type: keyword
-
-example: 3
-
---
-
-*`tls.version`*::
-+
---
-Numeric part of the version parsed from the original string.
-
-type: keyword
-
-example: 1.2
-
---
-
-*`tls.version_protocol`*::
-+
---
-Normalized lowercase protocol name parsed from original string.
-
-type: keyword
-
-example: tls
-
---
-
-*`span.id`*::
-+
---
-Unique identifier of the span within the scope of its trace.
-A span represents an operation within a transaction, such as a request to another service, or a database query.
-
-type: keyword
-
-example: 3ff9a8981b7ccd5a
-
---
-
-*`trace.id`*::
-+
---
-Unique identifier of the trace.
-A trace groups multiple events like transactions that belong together. For example, a user request handled by multiple inter-connected services.
-
-type: keyword
-
-example: 4bf92f3577b34da6a3ce929d0e0e4736
-
---
-
-*`transaction.id`*::
-+
---
-Unique identifier of the transaction within the scope of its trace.
-A transaction is the highest level of work measured within a service, such as a request to a server.
-
-type: keyword
-
-example: 00f067aa0ba902b7
-
---
-
-[float]
-=== url
-
-URL fields provide support for complete or partial URLs, and support breaking them down into scheme, domain, path, and so on.
-
-
-*`url.domain`*::
-+
---
-Domain of the url, such as "www.elastic.co".
-In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field.
-If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field.
-
-type: keyword
-
-example: www.elastic.co
-
---
-
-*`url.extension`*::
-+
---
-The field contains the file extension from the original request url, excluding the leading dot.
-The file extension is only set if it exists, as not every url has a file extension.
-The leading period must not be included. For example, the value must be "png", not ".png".
-Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz").
-
-type: keyword
-
-example: png
-
---
-
-*`url.fragment`*::
-+
---
-Portion of the url after the `#`, such as "top".
-The `#` is not part of the fragment.
-
-type: keyword
-
---
-
-*`url.full`*::
-+
---
-If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source.
-
-type: wildcard
-
-example: https://www.elastic.co:443/search?q=elasticsearch#top
-
---
-
-*`url.full.text`*::
-+
---
-type: match_only_text
-
---
-
-*`url.original`*::
-+
---
-Unmodified original url as seen in the event source.
-Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path.
-This field is meant to represent the URL as it was observed, complete or not.
-
-type: wildcard
-
-example: https://www.elastic.co:443/search?q=elasticsearch#top or /search?q=elasticsearch
-
---
-
-*`url.original.text`*::
-+
---
-type: match_only_text
-
---
-
-*`url.password`*::
-+
---
-Password of the request.
-
-type: keyword
-
---
-
-*`url.path`*::
-+
---
-Path of the request, such as "/search".
-
-type: wildcard
-
---
-
-*`url.port`*::
-+
---
-Port of the request, such as 443.
-
-type: long
-
-example: 443
-
-format: string
-
---
-
-*`url.query`*::
-+
---
-The query field describes the query string of the request, such as "q=elasticsearch".
-The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases.
-
-type: keyword
-
---
-
-*`url.registered_domain`*::
-+
---
-The highest registered url domain, stripped of the subdomain.
-For example, the registered domain for "foo.example.com" is "example.com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
-
-type: keyword
-
-example: example.com
-
---
-
-*`url.scheme`*::
-+
---
-Scheme of the request, such as "https".
-Note: The `:` is not part of the scheme.
-
-type: keyword
-
-example: https
-
---
-
-*`url.subdomain`*::
-+
---
-The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain.
-For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
-
-type: keyword
-
-example: east
-
---
-
-*`url.top_level_domain`*::
-+
---
-The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
-
-type: keyword
-
-example: co.uk
-
---
-
-*`url.username`*::
-+
---
-Username of the request.
-
-type: keyword
-
---
-
-[float]
-=== user
-
-The user fields describe information about the user that is relevant to the event.
-Fields can have one entry or multiple entries. If a user has more than one id, provide an array that includes all of them.
-
-
-*`user.changes.domain`*::
-+
---
-Name of the directory the user is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`user.changes.email`*::
-+
---
-User email address.
-
-type: keyword
-
---
-
-*`user.changes.full_name`*::
-+
---
-User's full name, if available.
-
-type: keyword
-
-example: Albert Einstein
-
---
-
-*`user.changes.full_name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`user.changes.group.domain`*::
-+
---
-Name of the directory the group is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`user.changes.group.id`*::
-+
---
-Unique identifier for the group on the system/platform.
-
-type: keyword
-
---
-
-*`user.changes.group.name`*::
-+
---
-Name of the group.
-
-type: keyword
-
---
-
-*`user.changes.hash`*::
-+
---
-Unique user hash to correlate information for a user in anonymized form.
-Useful if `user.id` or `user.name` contain confidential information and cannot be used.
-
-type: keyword
-
---
-
-*`user.changes.id`*::
-+
---
-Unique identifier of the user.
-
-type: keyword
-
-example: S-1-5-21-202424912787-2692429404-2351956786-1000
-
---
-
-*`user.changes.name`*::
-+
---
-Short name or login of the user.
-
-type: keyword
-
-example: a.einstein
-
---
-
-*`user.changes.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`user.changes.roles`*::
-+
---
-Array of user roles at the time of the event.
-
-type: keyword
-
-example: ["kibana_admin", "reporting_user"]
-
---
-
-*`user.domain`*::
-+
---
-Name of the directory the user is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`user.effective.domain`*::
-+
---
-Name of the directory the user is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`user.effective.email`*::
-+
---
-User email address.
-
-type: keyword
-
---
-
-*`user.effective.full_name`*::
-+
---
-User's full name, if available.
-
-type: keyword
-
-example: Albert Einstein
-
---
-
-*`user.effective.full_name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`user.effective.group.domain`*::
-+
---
-Name of the directory the group is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`user.effective.group.id`*::
-+
---
-Unique identifier for the group on the system/platform.
-
-type: keyword
-
---
-
-*`user.effective.group.name`*::
-+
---
-Name of the group.
-
-type: keyword
-
---
-
-*`user.effective.hash`*::
-+
---
-Unique user hash to correlate information for a user in anonymized form.
-Useful if `user.id` or `user.name` contain confidential information and cannot be used.
-
-type: keyword
-
---
-
-*`user.effective.id`*::
-+
---
-Unique identifier of the user.
-
-type: keyword
-
-example: S-1-5-21-202424912787-2692429404-2351956786-1000
-
---
-
-*`user.effective.name`*::
-+
---
-Short name or login of the user.
-
-type: keyword
-
-example: a.einstein
-
---
-
-*`user.effective.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`user.effective.roles`*::
-+
---
-Array of user roles at the time of the event.
-
-type: keyword
-
-example: ["kibana_admin", "reporting_user"]
-
---
-
-*`user.email`*::
-+
---
-User email address.
-
-type: keyword
-
---
-
-*`user.full_name`*::
-+
---
-User's full name, if available.
-
-type: keyword
-
-example: Albert Einstein
-
---
-
-*`user.full_name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`user.group.domain`*::
-+
---
-Name of the directory the group is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`user.group.id`*::
-+
---
-Unique identifier for the group on the system/platform.
-
-type: keyword
-
---
-
-*`user.group.name`*::
-+
---
-Name of the group.
-
-type: keyword
-
---
-
-*`user.hash`*::
-+
---
-Unique user hash to correlate information for a user in anonymized form.
-Useful if `user.id` or `user.name` contain confidential information and cannot be used.
-
-type: keyword
-
---
-
-*`user.id`*::
-+
---
-Unique identifier of the user.
-
-type: keyword
-
-example: S-1-5-21-202424912787-2692429404-2351956786-1000
-
---
-
-*`user.name`*::
-+
---
-Short name or login of the user.
-
-type: keyword
-
-example: a.einstein
-
---
-
-*`user.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`user.roles`*::
-+
---
-Array of user roles at the time of the event.
-
-type: keyword
-
-example: ["kibana_admin", "reporting_user"]
-
---
-
-*`user.target.domain`*::
-+
---
-Name of the directory the user is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`user.target.email`*::
-+
---
-User email address.
-
-type: keyword
-
---
-
-*`user.target.full_name`*::
-+
---
-User's full name, if available.
-
-type: keyword
-
-example: Albert Einstein
-
---
-
-*`user.target.full_name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`user.target.group.domain`*::
-+
---
-Name of the directory the group is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`user.target.group.id`*::
-+
---
-Unique identifier for the group on the system/platform.
-
-type: keyword
-
---
-
-*`user.target.group.name`*::
-+
---
-Name of the group.
-
-type: keyword
-
---
-
-*`user.target.hash`*::
-+
---
-Unique user hash to correlate information for a user in anonymized form.
-Useful if `user.id` or `user.name` contain confidential information and cannot be used.
-
-type: keyword
-
---
-
-*`user.target.id`*::
-+
---
-Unique identifier of the user.
-
-type: keyword
-
-example: S-1-5-21-202424912787-2692429404-2351956786-1000
-
---
-
-*`user.target.name`*::
-+
---
-Short name or login of the user.
-
-type: keyword
-
-example: a.einstein
-
---
-
-*`user.target.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`user.target.roles`*::
-+
---
-Array of user roles at the time of the event.
-
-type: keyword
-
-example: ["kibana_admin", "reporting_user"]
-
---
-
-[float]
-=== user_agent
-
-The user_agent fields normally come from a browser request.
-They often show up in web service logs coming from the parsed user agent string.
-
-
-*`user_agent.device.name`*::
-+
---
-Name of the device.
-
-type: keyword
-
-example: iPhone
-
---
-
-*`user_agent.name`*::
-+
---
-Name of the user agent.
-
-type: keyword
-
-example: Safari
-
---
-
-*`user_agent.original`*::
-+
---
-Unparsed user_agent string.
-
-type: keyword
-
-example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1
-
---
-
-*`user_agent.original.text`*::
-+
---
-type: match_only_text
-
---
-
-*`user_agent.os.family`*::
-+
---
-OS family (such as redhat, debian, freebsd, windows).
-
-type: keyword
-
-example: debian
-
---
-
-*`user_agent.os.full`*::
-+
---
-Operating system name, including the version or code name.
-
-type: keyword
-
-example: Mac OS Mojave
-
---
-
-*`user_agent.os.full.text`*::
-+
---
-type: match_only_text
-
---
-
-*`user_agent.os.kernel`*::
-+
---
-Operating system kernel version as a raw string.
-
-type: keyword
-
-example: 4.4.0-112-generic
-
---
-
-*`user_agent.os.name`*::
-+
---
-Operating system name, without the version.
-
-type: keyword
-
-example: Mac OS X
-
---
-
-*`user_agent.os.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`user_agent.os.platform`*::
-+
---
-Operating system platform (such as centos, ubuntu, windows).
-
-type: keyword
-
-example: darwin
-
---
-
-*`user_agent.os.type`*::
-+
---
-Use the `os.type` field to categorize the operating system into one of the broad commercial families.
-One of the following values should be used (lowercase): linux, macos, unix, windows.
-If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition.
-
-type: keyword
-
-example: macos
-
---
-
-*`user_agent.os.version`*::
-+
---
-Operating system version as a raw string.
-
-type: keyword
-
-example: 10.14.1
-
---
-
-*`user_agent.version`*::
-+
---
-Version of the user agent.
-
-type: keyword
-
-example: 12.0
-
---
-
-[float]
-=== vlan
-
-The VLAN fields are used to identify 802.1q tag(s) of a packet, as well as ingress and egress VLAN associations of an observer in relation to a specific packet or connection.
-Network.vlan fields are used to record a single VLAN tag, or the outer tag in the case of q-in-q encapsulations, for a packet or connection as observed, typically provided by a network sensor (e.g. Zeek, Wireshark) passively reporting on traffic.
-Network.inner VLAN fields are used to report inner q-in-q 802.1q tags (multiple 802.1q encapsulations) as observed, typically provided by a network sensor (e.g. Zeek, Wireshark) passively reporting on traffic. Network.inner VLAN fields should only be used in addition to network.vlan fields to indicate q-in-q tagging.
-Observer.ingress and observer.egress VLAN values are used to record observer specific information when observer events contain discrete ingress and egress VLAN information, typically provided by firewalls, routers, or load balancers.
-
-
-*`vlan.id`*::
-+
---
-VLAN ID as reported by the observer.
-
-type: keyword
-
-example: 10
-
---
-
-*`vlan.name`*::
-+
---
-Optional VLAN name as reported by the observer.
-
-type: keyword
-
-example: outside
-
---
-
-[float]
-=== vulnerability
-
-The vulnerability fields describe information about a vulnerability that is relevant to an event.
-
-
-*`vulnerability.category`*::
-+
---
-The type of system or architecture that the vulnerability affects. These may be platform-specific (for example, Debian or SUSE) or general (for example, Database or Firewall). For example (https://qualysguard.qualys.com/qwebhelp/fo_portal/knowledgebase/vulnerability_categories.htm[Qualys vulnerability categories])
-This field must be an array.
-
-type: keyword
-
-example: ["Firewall"]
-
---
-
-*`vulnerability.classification`*::
-+
---
-The classification of the vulnerability scoring system. For example (https://www.first.org/cvss/)
-
-type: keyword
-
-example: CVSS
-
---
-
-*`vulnerability.description`*::
-+
---
-The description of the vulnerability that provides additional context of the vulnerability. For example (https://cve.mitre.org/about/faqs.html#cve_entry_descriptions_created[Common Vulnerabilities and Exposure CVE description])
-
-type: keyword
-
-example: In macOS before 2.12.6, there is a vulnerability in the RPC...
-
---
-
-*`vulnerability.description.text`*::
-+
---
-type: match_only_text
-
---
-
-*`vulnerability.enumeration`*::
-+
---
-The type of identifier used for this vulnerability. For example (https://cve.mitre.org/about/)
-
-type: keyword
-
-example: CVE
-
---
-
-*`vulnerability.id`*::
-+
---
-The identification (ID) is the number portion of a vulnerability entry. It includes a unique identification number for the vulnerability. For example (https://cve.mitre.org/about/faqs.html#what_is_cve_id[Common Vulnerabilities and Exposure CVE ID])
-
-type: keyword
-
-example: CVE-2019-00001
-
---
-
-*`vulnerability.reference`*::
-+
---
-A resource that provides additional information, context, and mitigations for the identified vulnerability.
-
-type: keyword
-
-example: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6111
-
---
-
-*`vulnerability.report_id`*::
-+
---
-The report or scan identification number.
-
-type: keyword
-
-example: 20191018.0001
-
---
-
-*`vulnerability.scanner.vendor`*::
-+
---
-The name of the vulnerability scanner vendor.
-
-type: keyword
-
-example: Tenable
-
---
-
-*`vulnerability.score.base`*::
-+
---
-Scores can range from 0.0 to 10.0, with 10.0 being the most severe.
-Base scores cover an assessment for exploitability metrics (attack vector, complexity, privileges, and user interaction), impact metrics (confidentiality, integrity, and availability), and scope. For example (https://www.first.org/cvss/specification-document)
-
-type: float
-
-example: 5.5
-
---
-
-*`vulnerability.score.environmental`*::
-+
---
-Scores can range from 0.0 to 10.0, with 10.0 being the most severe.
-Environmental scores cover an assessment for any modified Base metrics, confidentiality, integrity, and availability requirements. For example (https://www.first.org/cvss/specification-document)
-
-type: float
-
-example: 5.5
-
---
-
-*`vulnerability.score.temporal`*::
-+
---
-Scores can range from 0.0 to 10.0, with 10.0 being the most severe.
-Temporal scores cover an assessment for code maturity, remediation level, and confidence. For example (https://www.first.org/cvss/specification-document)
-
-type: float
-
---
-
-*`vulnerability.score.version`*::
-+
---
-The National Vulnerability Database (NVD) provides qualitative severity rankings of "Low", "Medium", and "High" for CVSS v2.0 base score ranges in addition to the severity ratings for CVSS v3.0 as they are defined in the CVSS v3.0 specification.
-CVSS is owned and managed by FIRST.Org, Inc. (FIRST), a US-based non-profit organization, whose mission is to help computer security incident response teams across the world. For example (https://nvd.nist.gov/vuln-metrics/cvss)
-
-type: keyword
-
-example: 2.0
-
---
-
-*`vulnerability.severity`*::
-+
---
-The severity of the vulnerability can help with metrics and internal prioritization regarding remediation. For example (https://nvd.nist.gov/vuln-metrics/cvss)
-
-type: keyword
-
-example: Critical
-
---
-
-[float]
-=== x509
-
-This implements the common core fields for x509 certificates. This information is likely logged with TLS sessions, digital signatures found in executable binaries, S/MIME information in email bodies, or analysis of files on disk.
-When the certificate relates to a file, use the fields at `file.x509`. When hashes of the DER-encoded certificate are available, the `hash` data set should be populated as well (e.g. `file.hash.sha256`).
-Events that contain certificate information about network connections, should use the x509 fields under the relevant TLS fields: `tls.server.x509` and/or `tls.client.x509`.
-
-
-*`x509.alternative_names`*::
-+
---
-List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
-
-type: keyword
-
-example: *.elastic.co
-
---
-
-*`x509.issuer.common_name`*::
-+
---
-List of common names (CN) of the issuing certificate authority.
-
-type: keyword
-
-example: Example SHA2 High Assurance Server CA
-
---
-
-*`x509.issuer.country`*::
-+
---
-List of country (C) codes.
-
-type: keyword
-
-example: US
-
---
-
-*`x509.issuer.distinguished_name`*::
-+
---
-Distinguished name (DN) of issuing certificate authority.
-
-type: keyword
-
-example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
-
---
-
-*`x509.issuer.locality`*::
-+
---
-List of locality names (L).
-
-type: keyword
-
-example: Mountain View
-
---
-
-*`x509.issuer.organization`*::
-+
---
-List of organizations (O) of issuing certificate authority.
-
-type: keyword
-
-example: Example Inc
-
---
-
-*`x509.issuer.organizational_unit`*::
-+
---
-List of organizational units (OU) of issuing certificate authority.
-
-type: keyword
-
-example: www.example.com
-
---
-
-*`x509.issuer.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P).
-
-type: keyword
-
-example: California
-
---
-
-*`x509.not_after`*::
-+
---
-Time at which the certificate is no longer considered valid.
-
-type: date
-
-example: 2020-07-16 03:15:39+00:00
-
---
-
-*`x509.not_before`*::
-+
---
-Time at which the certificate is first considered valid.
-
-type: date
-
-example: 2019-08-16 01:40:25+00:00
-
---
-
-*`x509.public_key_algorithm`*::
-+
---
-Algorithm used to generate the public key.
-
-type: keyword
-
-example: RSA
-
---
-
-*`x509.public_key_curve`*::
-+
---
-The curve used by the elliptic curve public key algorithm. This is algorithm specific.
-
-type: keyword
-
-example: nistp521
-
---
-
-*`x509.public_key_exponent`*::
-+
---
-Exponent used to derive the public key. This is algorithm specific.
-
-type: long
-
-example: 65537
-
-Field is not indexed.
-
---
-
-*`x509.public_key_size`*::
-+
---
-The size of the public key space in bits.
-
-type: long
-
-example: 2048
-
---
-
-*`x509.serial_number`*::
-+
---
-Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters.
-
-type: keyword
-
-example: 55FBB9C7DEBF09809D12CCAA
-
---
-
-*`x509.signature_algorithm`*::
-+
---
-Identifier for certificate signature algorithm. We recommend using names found in the Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353.
-
-type: keyword
-
-example: SHA256-RSA
-
---
-
-*`x509.subject.common_name`*::
-+
---
-List of common names (CN) of subject.
-
-type: keyword
-
-example: shared.global.example.net
-
---
-
-*`x509.subject.country`*::
-+
---
-List of country (C) codes.
-
-type: keyword
-
-example: US
-
---
-
-*`x509.subject.distinguished_name`*::
-+
---
-Distinguished name (DN) of the certificate subject entity.
-
-type: keyword
-
-example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
-
---
-
-*`x509.subject.locality`*::
-+
---
-List of locality names (L).
-
-type: keyword
-
-example: San Francisco
-
---
-
-*`x509.subject.organization`*::
-+
---
-List of organizations (O) of subject.
-
-type: keyword
-
-example: Example, Inc.
-
---
-
-*`x509.subject.organizational_unit`*::
-+
---
-List of organizational units (OU) of subject.
-
-type: keyword
-
---
-
-*`x509.subject.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P).
-
-type: keyword
-
-example: California
-
---
-
-*`x509.version_number`*::
-+
---
-Version of x509 format.
-
-type: keyword
-
-example: 3
-
---
-
-[[exported-fields-file_integrity]]
-== File Integrity fields
-
-These are the fields generated by the file_integrity module.
-
-
-[float]
-=== file
-
-File attributes.
-
-
-[float]
-=== elf
-
-These fields contain Linux Executable and Linkable Format (ELF) metadata.
-
-
-*`file.elf.go_imports`*::
-+
---
-List of imported Go language element names and types.
-
-type: flattened
-
---
-
-*`file.elf.go_imports_names_entropy`*::
-+
---
-Shannon entropy calculation from the list of Go imports.
-
-type: long
-
-format: number
-
---
-
-*`file.elf.go_imports_names_var_entropy`*::
-+
---
-Variance for Shannon entropy calculation from the list of Go imports.
-
-type: long
-
-format: number
-
---
-
-*`file.elf.go_import_hash`*::
-+
---
-A hash of the Go language imports in an ELF file excluding standard library imports. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-The algorithm used to calculate the Go symbol hash and a reference implementation are available at https://github.com/elastic/toutoumomoma.
-
-type: keyword
-
-example: 10bddcb4cee42080f76c88d9ff964491
-
---
-
-*`file.elf.go_stripped`*::
-+
---
-Set to true if the file is a Go executable that has had its symbols stripped or obfuscated, and false if it is an unobfuscated Go executable.
-
-type: boolean
-
---
-
-*`file.elf.imports_names_entropy`*::
-+
---
-Shannon entropy calculation from the list of imported element names and types.
-
-type: long
-
-format: number
-
---
-
-*`file.elf.imports_names_var_entropy`*::
-+
---
-Variance for Shannon entropy calculation from the list of imported element names and types.
-
-type: long
-
-format: number
-
---
-
-*`file.elf.import_hash`*::
-+
---
-A hash of the imports in an ELF file. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-This is an ELF implementation of the Windows PE imphash.
-
-type: keyword
-
-example: d41d8cd98f00b204e9800998ecf8427e
-
---
-
-*`file.elf.sections.var_entropy`*::
-+
---
-Variance for Shannon entropy calculation from the section.
-
-type: long
-
-format: number
-
---
-
-[float]
-=== macho
-
-These fields contain Mach object file Format (Mach-O) metadata.
-
-
-*`file.macho.go_imports`*::
-+
---
-List of imported Go language element names and types.
-
-type: flattened
-
---
-
-*`file.macho.go_imports_names_entropy`*::
-+
---
-Shannon entropy calculation from the list of Go imports.
-
-type: long
-
-format: number
-
---
-
-*`file.macho.go_imports_names_var_entropy`*::
-+
---
-Variance for Shannon entropy calculation from the list of Go imports.
-
-type: long
-
-format: number
-
---
-
-*`file.macho.go_import_hash`*::
-+
---
-A hash of the Go language imports in a Mach-O file excluding standard library imports. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-The algorithm used to calculate the Go symbol hash and a reference implementation are available at https://github.com/elastic/toutoumomoma.
-
-type: keyword
-
-example: 10bddcb4cee42080f76c88d9ff964491
-
---
-
-*`file.macho.go_stripped`*::
-+
---
-Set to true if the file is a Go executable that has had its symbols stripped or obfuscated, and false if it is an unobfuscated Go executable.
-
-type: boolean
-
---
-
-*`file.macho.imports`*::
-+
---
-List of imported element names and types.
-
-type: flattened
-
---
-
-*`file.macho.imports_names_entropy`*::
-+
---
-Shannon entropy calculation from the list of imported element names and types.
-
-type: long
-
-format: number
-
---
-
-*`file.macho.imports_names_var_entropy`*::
-+
---
-Variance for Shannon entropy calculation from the list of imported element names and types.
-
-type: long
-
-format: number
-
---
-
-*`file.macho.import_hash`*::
-+
---
-A hash of the imports in a Mach-O file. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-This is a synonym for symhash.
-
-type: keyword
-
-example: d3ccf195b62a9279c3c19af1080497ec
-
---
-
-*`file.macho.sections`*::
-+
---
-An array containing an object for each section of the Mach-O file.
-The keys that should be present in these objects are defined by sub-fields underneath `macho.sections.*`.
-
-type: nested
-
---
-
-*`file.macho.sections.entropy`*::
-+
---
-Shannon entropy calculation from the section.
-
-type: long
-
-format: number
-
---
-
-*`file.macho.sections.var_entropy`*::
-+
---
-Variance for Shannon entropy calculation from the section.
-
-type: long
-
-format: number
-
---
-
-*`file.macho.sections.name`*::
-+
---
-Mach-O Section List name.
-
-type: keyword
-
---
-
-*`file.macho.sections.physical_size`*::
-+
---
-Mach-O Section List physical size.
-
-type: long
-
-format: string
-
---
-
-*`file.macho.sections.virtual_size`*::
-+
---
-Mach-O Section List virtual size.
-
-type: long
-
-format: string
-
---
-
-*`file.macho.symhash`*::
-+
---
-A hash of the imports in a Mach-O file. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-
-type: keyword
-
-example: d3ccf195b62a9279c3c19af1080497ec
-
---
-
-[float]
-=== pe
-
-These fields contain Windows Portable Executable (PE) metadata.
-
-
-*`file.pe.go_imports`*::
-+
---
-List of imported Go language element names and types.
-
-type: flattened
-
---
-
-*`file.pe.go_imports_names_entropy`*::
-+
---
-Shannon entropy calculation from the list of Go imports.
-
-type: long
-
-format: number
-
---
-
-*`file.pe.go_imports_names_var_entropy`*::
-+
---
-Variance for Shannon entropy calculation from the list of Go imports.
-
-type: long
-
-format: number
-
---
-
-*`file.pe.go_import_hash`*::
-+
---
-A hash of the Go language imports in a PE file excluding standard library imports. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-The algorithm used to calculate the Go symbol hash and a reference implementation are available at https://github.com/elastic/toutoumomoma.
-
-type: keyword
-
-example: 10bddcb4cee42080f76c88d9ff964491
-
---
-
-*`file.pe.go_stripped`*::
-+
---
-Set to true if the file is a Go executable that has had its symbols stripped or obfuscated, and false if it is an unobfuscated Go executable.
-
-type: boolean
-
---
-
-*`file.pe.imports`*::
-+
---
-List of imported element names and types.
-
-type: flattened
-
---
-
-*`file.pe.imports_names_entropy`*::
-+
---
-Shannon entropy calculation from the list of imported element names and types.
-
-type: long
-
-format: number
-
---
-
-*`file.pe.imports_names_var_entropy`*::
-+
---
-Variance for Shannon entropy calculation from the list of imported element names and types.
-
-type: long
-
-format: number
-
---
-
-*`file.pe.import_hash`*::
-+
---
-A hash of the imports in a PE file. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-This is a synonym for imphash.
-
-type: keyword
-
---
-
-*`file.pe.sections`*::
-+
---
-An array containing an object for each section of the PE file.
-The keys that should be present in these objects are defined by sub-fields underneath `pe.sections.*`.
-
-type: nested
-
---
-
-*`file.pe.sections.entropy`*::
-+
---
-Shannon entropy calculation from the section.
-
-type: long
-
-format: number
-
---
-
-*`file.pe.sections.var_entropy`*::
-+
---
-Variance for Shannon entropy calculation from the section.
-
-type: long
-
-format: number
-
---
-
-*`file.pe.sections.name`*::
-+
---
-PE Section List name.
-
-type: keyword
-
---
-
-*`file.pe.sections.physical_size`*::
-+
---
-PE Section List physical size.
-
-type: long
-
-format: string
-
---
-
-*`file.pe.sections.virtual_size`*::
-+
---
-PE Section List virtual size.
-
-type: long
-
-format: string
-
---
-
-[float]
-=== hash
-
-Hashes of the file. The keys are algorithm names and the values are the hex encoded digest values.
-
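-Which of these hashes appear in events depends on the module configuration. As a
-minimal sketch (the selected values are illustrative), the algorithms to compute
-can be chosen with the module's `hash_types` setting:
-
-[source,yaml]
-----
-- module: file_integrity
-  paths:
-    - /etc
-  # Compute only these digests for changed files (illustrative selection).
-  hash_types: [sha1, sha256, xxh64]
-----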
-
-
-*`hash.blake2b_256`*::
-+
---
-BLAKE2b-256 hash of the file.
-
-type: keyword
-
---
-
-*`hash.blake2b_384`*::
-+
---
-BLAKE2b-384 hash of the file.
-
-type: keyword
-
---
-
-*`hash.blake2b_512`*::
-+
---
-BLAKE2b-512 hash of the file.
-
-type: keyword
-
---
-
-*`hash.md5`*::
-+
---
-MD5 hash of the file.
-
-type: keyword
-
---
-
-*`hash.sha1`*::
-+
---
-SHA1 hash of the file.
-
-type: keyword
-
---
-
-*`hash.sha224`*::
-+
---
-SHA224 hash of the file.
-
-type: keyword
-
---
-
-*`hash.sha256`*::
-+
---
-SHA256 hash of the file.
-
-type: keyword
-
---
-
-*`hash.sha384`*::
-+
---
-SHA384 hash of the file.
-
-type: keyword
-
---
-
-*`hash.sha3_224`*::
-+
---
-SHA3_224 hash of the file.
-
-type: keyword
-
---
-
-*`hash.sha3_256`*::
-+
---
-SHA3_256 hash of the file.
-
-type: keyword
-
---
-
-*`hash.sha3_384`*::
-+
---
-SHA3_384 hash of the file.
-
-type: keyword
-
---
-
-*`hash.sha3_512`*::
-+
---
-SHA3_512 hash of the file.
-
-type: keyword
-
---
-
-*`hash.sha512`*::
-+
---
-SHA512 hash of the file.
-
-type: keyword
-
---
-
-*`hash.sha512_224`*::
-+
---
-SHA512/224 hash of the file.
-
-type: keyword
-
---
-
-*`hash.sha512_256`*::
-+
---
-SHA512/256 hash of the file.
-
-type: keyword
-
---
-
-*`hash.xxh64`*::
-+
---
-XX64 hash of the file.
-
-type: keyword
-
---
-
-[[exported-fields-host-processor]]
-== Host fields
-
-Info collected for the host machine.
-
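-These fields are typically added by the `add_host_metadata` processor. A minimal,
-illustrative way to enable it with default settings in +{beatname_lc}.yml+:
-
-[source,yaml]
-----
-processors:
-  - add_host_metadata: ~
-----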
-
-
-
-*`host.containerized`*::
-+
---
-If the host is a container.
-
-
-type: boolean
-
---
-
-*`host.os.build`*::
-+
---
-OS build information.
-
-
-type: keyword
-
-example: 18D109
-
---
-
-*`host.os.codename`*::
-+
---
-OS codename, if any.
-
-
-type: keyword
-
-example: stretch
-
---
-
-[[exported-fields-jolokia-autodiscover]]
-== Jolokia Discovery autodiscover provider fields
-
-Metadata from Jolokia Discovery added by the jolokia provider.
-
-
-
-*`jolokia.agent.version`*::
-+
---
-Version number of jolokia agent.
-
-
-type: keyword
-
---
-
-*`jolokia.agent.id`*::
-+
---
-Each agent has a unique id which can either be provided during startup of the agent in the form of a configuration parameter or be autodetected. If autodetected, the id has several parts: the IP, the process id, the hashcode of the agent, and its type.
-
-
-type: keyword
-
---
-
-*`jolokia.server.product`*::
-+
---
-The container product if detected.
-
-
-type: keyword
-
---
-
-*`jolokia.server.version`*::
-+
---
-The container's version (if detected).
-
-
-type: keyword
-
---
-
-*`jolokia.server.vendor`*::
-+
---
-The vendor of the container the agent is running in.
-
-
-type: keyword
-
---
-
-*`jolokia.url`*::
-+
---
-The URL under which this agent can be contacted.
-
-
-type: keyword
-
---
-
-*`jolokia.secured`*::
-+
---
-Whether the agent was configured for authentication or not.
-
-
-type: boolean
-
---
-
-[[exported-fields-kubernetes-processor]]
-== Kubernetes fields
-
-Kubernetes metadata added by the kubernetes processor
-
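-These fields are added by the `add_kubernetes_metadata` processor. A minimal,
-illustrative way to enable it with default settings in +{beatname_lc}.yml+:
-
-[source,yaml]
-----
-processors:
-  - add_kubernetes_metadata: ~
-----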
-
-
-
-*`kubernetes.pod.name`*::
-+
---
-Kubernetes pod name
-
-
-type: keyword
-
---
-
-*`kubernetes.pod.uid`*::
-+
---
-Kubernetes Pod UID
-
-
-type: keyword
-
---
-
-*`kubernetes.pod.ip`*::
-+
---
-Kubernetes Pod IP
-
-
-type: ip
-
---
-
-*`kubernetes.namespace`*::
-+
---
-Kubernetes namespace
-
-
-type: keyword
-
---
-
-*`kubernetes.node.name`*::
-+
---
-Kubernetes node name
-
-
-type: keyword
-
---
-
-*`kubernetes.node.hostname`*::
-+
---
-Kubernetes hostname as reported by the node’s kernel
-
-
-type: keyword
-
---
-
-*`kubernetes.labels.*`*::
-+
---
-Kubernetes labels map
-
-
-type: object
-
---
-
-*`kubernetes.annotations.*`*::
-+
---
-Kubernetes annotations map
-
-
-type: object
-
---
-
-*`kubernetes.selectors.*`*::
-+
---
-Kubernetes selectors map
-
-
-type: object
-
---
-
-*`kubernetes.replicaset.name`*::
-+
---
-Kubernetes replicaset name
-
-
-type: keyword
-
---
-
-*`kubernetes.deployment.name`*::
-+
---
-Kubernetes deployment name
-
-
-type: keyword
-
---
-
-*`kubernetes.statefulset.name`*::
-+
---
-Kubernetes statefulset name
-
-
-type: keyword
-
---
-
-*`kubernetes.container.name`*::
-+
---
-Kubernetes container name (different from the name reported by the runtime)
-
-
-type: keyword
-
---
-
-[[exported-fields-process]]
-== Process fields
-
-Process metadata fields
-
-
-
-
-*`process.exe`*::
-+
---
-type: alias
-
-alias to: process.executable
-
---
-
-[float]
-=== owner
-
-Process owner information.
-
-
-*`process.owner.id`*::
-+
---
-Unique identifier of the user.
-
-type: keyword
-
---
-
-*`process.owner.name`*::
-+
---
-Short name or login of the user.
-
-type: keyword
-
-example: albert
-
---
-
-*`process.owner.name.text`*::
-+
---
-type: text
-
---
-
-[[exported-fields-system]]
-== System fields
-
-These are the fields generated by the system module.
-
-
-
-
-*`event.origin`*::
-+
---
-Origin of the event. This can be a file path (e.g. `/var/log/log.1`), or the name of the system component that supplied the data (e.g. `netlink`).
-
-
-type: keyword
-
---
-
-
-*`user.entity_id`*::
-+
---
-ID uniquely identifying the user on a host. It is computed as a SHA-256 hash of the host ID, user ID, and user name.
-
-
-type: keyword
-
---
-
-*`user.terminal`*::
-+
---
-Terminal of the user.
-
-
-type: keyword
-
---
-
-
-*`process.thread.capabilities.effective`*::
-+
---
-This is the set of capabilities used by the kernel to perform permission checks for the thread.
-
-type: keyword
-
-example: ["CAP_BPF", "CAP_SYS_ADMIN"]
-
---
-
-*`process.thread.capabilities.permitted`*::
-+
---
-This is a limiting superset for the effective capabilities that the thread may assume.
-
-type: keyword
-
-example: ["CAP_BPF", "CAP_SYS_ADMIN"]
-
---
-
-[float]
-=== hash
-
-Hashes of the executable. The keys are algorithm names and the values are the hex encoded digest values.
-
-
-
-*`process.hash.blake2b_256`*::
-+
---
-BLAKE2b-256 hash of the executable.
-
-type: keyword
-
---
-
-*`process.hash.blake2b_384`*::
-+
---
-BLAKE2b-384 hash of the executable.
-
-type: keyword
-
---
-
-*`process.hash.blake2b_512`*::
-+
---
-BLAKE2b-512 hash of the executable.
-
-type: keyword
-
---
-
-*`process.hash.sha224`*::
-+
---
-SHA224 hash of the executable.
-
-type: keyword
-
---
-
-*`process.hash.sha384`*::
-+
---
-SHA384 hash of the executable.
-
-type: keyword
-
---
-
-*`process.hash.sha3_224`*::
-+
---
-SHA3_224 hash of the executable.
-
-type: keyword
-
---
-
-*`process.hash.sha3_256`*::
-+
---
-SHA3_256 hash of the executable.
-
-type: keyword
-
---
-
-*`process.hash.sha3_384`*::
-+
---
-SHA3_384 hash of the executable.
-
-type: keyword
-
---
-
-*`process.hash.sha3_512`*::
-+
---
-SHA3_512 hash of the executable.
-
-type: keyword
-
---
-
-*`process.hash.sha512_224`*::
-+
---
-SHA512/224 hash of the executable.
-
-type: keyword
-
---
-
-*`process.hash.sha512_256`*::
-+
---
-SHA512/256 hash of the executable.
-
-type: keyword
-
---
-
-*`process.hash.xxh64`*::
-+
---
-XX64 hash of the executable.
-
-type: keyword
-
---
-
-[float]
-=== system.audit
-
-
-
-
-[float]
-=== host
-
-`host` contains general host information.
-
-
-
-*`system.audit.host.uptime`*::
-+
---
-Uptime in nanoseconds.
-
-
-type: long
-
-format: duration
-
---
-
-*`system.audit.host.boottime`*::
-+
---
-Boot time.
-
-
-type: date
-
---
-
-*`system.audit.host.containerized`*::
-+
---
-Set if host is a container.
-
-
-type: boolean
-
---
-
-*`system.audit.host.timezone.name`*::
-+
---
-Name of the timezone of the host, e.g. BST.
-
-
-type: keyword
-
---
-
-*`system.audit.host.timezone.offset.sec`*::
-+
---
-Timezone offset in seconds.
-
-
-type: long
-
---
-
-*`system.audit.host.hostname`*::
-+
---
-Hostname.
-
-
-type: keyword
-
---
-
-*`system.audit.host.id`*::
-+
---
-Host ID.
-
-
-type: keyword
-
---
-
-*`system.audit.host.architecture`*::
-+
---
-Host architecture (e.g. x86_64).
-
-
-type: keyword
-
---
-
-*`system.audit.host.mac`*::
-+
---
-MAC addresses.
-
-
-type: keyword
-
---
-
-*`system.audit.host.ip`*::
-+
---
-IP addresses.
-
-
-type: ip
-
---
-
-[float]
-=== os
-
-`os` contains information about the operating system.
-
-
-
-*`system.audit.host.os.codename`*::
-+
---
-OS codename, if any (e.g. stretch).
-
-
-type: keyword
-
---
-
-*`system.audit.host.os.platform`*::
-+
---
-OS platform (e.g. centos, ubuntu, windows).
-
-
-type: keyword
-
---
-
-*`system.audit.host.os.name`*::
-+
---
-OS name (e.g. Mac OS X).
-
-
-type: keyword
-
---
-
-*`system.audit.host.os.family`*::
-+
---
-OS family (e.g. redhat, debian, freebsd, windows).
-
-
-type: keyword
-
---
-
-*`system.audit.host.os.version`*::
-+
---
-OS version.
-
-
-type: keyword
-
---
-
-*`system.audit.host.os.kernel`*::
-+
---
-The operating system's kernel version.
-
-
-type: keyword
-
---
-
-*`system.audit.host.os.type`*::
-+
---
-OS type (see ECS os.type).
-
-
-type: keyword
-
---
-
-[float]
-=== package
-
-`package` contains information about an installed or removed package.
-
-
-
-*`system.audit.package.entity_id`*::
-+
---
-ID uniquely identifying the package. It is computed as a SHA-256 hash of the
- host ID, package name, and package version.
-
-
-type: keyword
-
---
-
-*`system.audit.package.name`*::
-+
---
-Package name.
-
-
-type: keyword
-
---
-
-*`system.audit.package.version`*::
-+
---
-Package version.
-
-
-type: keyword
-
---
-
-*`system.audit.package.release`*::
-+
---
-Package release.
-
-
-type: keyword
-
---
-
-*`system.audit.package.arch`*::
-+
---
-Package architecture.
-
-
-type: keyword
-
---
-
-*`system.audit.package.license`*::
-+
---
-Package license.
-
-
-type: keyword
-
---
-
-*`system.audit.package.installtime`*::
-+
---
-Package install time.
-
-
-type: date
-
---
-
-*`system.audit.package.size`*::
-+
---
-Package size.
-
-
-type: long
-
---
-
-*`system.audit.package.summary`*::
-+
---
-Package summary.
-
-
---
-
-*`system.audit.package.url`*::
-+
---
-Package URL.
-
-
-type: keyword
-
---
-
-[float]
-=== user
-
-`user` contains information about the users on a system.
-
-
-
-*`system.audit.user.name`*::
-+
---
-User name.
-
-
-type: keyword
-
---
-
-*`system.audit.user.uid`*::
-+
---
-User ID.
-
-
-type: keyword
-
---
-
-*`system.audit.user.gid`*::
-+
---
-Group ID.
-
-
-type: keyword
-
---
-
-*`system.audit.user.dir`*::
-+
---
-User's home directory.
-
-
-type: keyword
-
---
-
-*`system.audit.user.shell`*::
-+
---
-Program to run at login.
-
-
-type: keyword
-
---
-
-*`system.audit.user.user_information`*::
-+
---
-General user information. On Linux, this is the gecos field.
-
-
-type: keyword
-
---
-
-*`system.audit.user.group`*::
-+
---
-`group` contains information about any groups the user is part of (beyond the user's primary group).
-
-
-type: object
-
---
-
-[float]
-=== password
-
-`password` contains information about a user's password (not the password itself).
-
-
-
-*`system.audit.user.password.type`*::
-+
---
-A user's password type. Possible values are `shadow_password` (the password hash is in the shadow file), `password_disabled`, `no_password` (this is dangerous as anyone can log in), and `crypt_password` (when the password field in /etc/passwd seems to contain an encrypted password).
-
-
-type: keyword
-
---
-
-*`system.audit.user.password.last_changed`*::
-+
---
-The day the user's password was last changed.
-
-
-type: date
-
---
-
-:edit_url!:
\ No newline at end of file
diff --git a/auditbeat/docs/getting-started.asciidoc b/auditbeat/docs/getting-started.asciidoc
deleted file mode 100644
index 0e7cb1d38da8..000000000000
--- a/auditbeat/docs/getting-started.asciidoc
+++ /dev/null
@@ -1,151 +0,0 @@
-[id="{beatname_lc}-installation-configuration"]
-== {beatname_uc} quick start: installation and configuration
-
-++++
-Quick start: installation and configuration
-++++
-
-This guide describes how to get started quickly with audit data collection.
-You'll learn how to:
-
-* install {beatname_uc} on each system you want to monitor
-* specify the location of your audit data
-* parse log data into fields and send it to {es}
-* visualize the log data in {kib}
-
-[role="screenshot"]
-image::./images/auditbeat-auditd-dashboard.png[{beatname_uc} Auditd dashboard]
-
-[float]
-=== Before you begin
-
-You need {es} for storing and searching your data, and {kib} for visualizing and
-managing it.
-
-include::{libbeat-dir}/tab-widgets/spinup-stack-widget.asciidoc[]
-
-[float]
-[[install]]
-=== Step 1: Install {beatname_uc}
-
-Install {beatname_uc} on all the servers you want to monitor.
-
-To download and install {beatname_uc}, use the commands that work with your
-system:
-
-include::{libbeat-dir}/tab-widgets/install-widget.asciidoc[]
-
-The commands shown are for AMD platforms, but ARM packages are also available.
-Refer to the https://www.elastic.co/downloads/beats/{beatname_lc}[download page]
-for the full list of available packages.
-
-[float]
-[[other-installation-options]]
-==== Other installation options
-
-* <>
-* https://www.elastic.co/downloads/beats/{beatname_lc}[Download page]
-* <>
-* <>
-
-[float]
-[[set-connection]]
-=== Step 2: Connect to the {stack}
-
-include::{libbeat-dir}/shared/connecting-to-es.asciidoc[]
-
-[float]
-[[enable-modules]]
-=== Step 3: Configure data collection modules
-
-{beatname_uc} uses <> to collect audit information.
-
-By default, {beatname_uc} uses a configuration that's tailored to the operating
-system where {beatname_uc} is running.
-
-To use a different configuration, change the module settings in
-+{beatname_lc}.yml+.
-
-The following example shows the `file_integrity` module configured to generate
-events whenever a file in one of the specified paths changes on disk:
-
-["source","sh",subs="attributes"]
--------------------------------------
-auditbeat.modules:
-
-- module: file_integrity
- paths:
- - /bin
- - /usr/bin
- - /sbin
- - /usr/sbin
- - /etc
--------------------------------------
-
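-Module-specific options vary. For instance, assuming the `file_integrity`
-module's `recursive` setting, watching a directory tree recursively could look
-like this (an illustrative sketch only):
-
-[source,yaml]
-----
-auditbeat.modules:
-
-- module: file_integrity
-  paths:
-    - /etc
-  recursive: true
-----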
-
-include::{libbeat-dir}/shared/config-check.asciidoc[]
-
-[float]
-[[setup-assets]]
-=== Step 4: Set up assets
-
-{beatname_uc} comes with predefined assets for parsing, indexing, and
-visualizing your data. To load these assets:
-
-. Make sure the user specified in +{beatname_lc}.yml+ is
-<>.
-
-. From the installation directory, run:
-+
---
-include::{libbeat-dir}/tab-widgets/setup-widget.asciidoc[]
---
-+
-`-e` is optional and sends output to standard error instead of the configured log output.
-
-This step loads the recommended {ref}/index-templates.html[index template] for writing to {es}
-and deploys the sample dashboards for visualizing the data in {kib}.
-
-[TIP]
-=====
-A connection to {es} (or {ess}) is required to set up the initial
-environment. If you're using a different output, such as {ls}, see
-<> and <>.
-=====
-
-[float]
-[[start]]
-=== Step 5: Start {beatname_uc}
-
-Before starting {beatname_uc}, modify the user credentials in
-+{beatname_lc}.yml+ and specify a user who is
-<>.
-
-To start {beatname_uc}, run:
-
-// tag::start-step[]
-include::{libbeat-dir}/tab-widgets/start-widget.asciidoc[]
-// end::start-step[]
-
-{beatname_uc} should begin streaming events to {es}.
-
-If you see a warning about too many open files, you need to increase the
-`ulimit`. See the <> for more details.
-
-[float]
-[[view-data]]
-=== Step 6: View your data in {kib}
-
-To make it easier for you to start auditing the activities of users and
-processes on your system, {beatname_uc} comes with pre-built {kib} dashboards
-and UIs for visualizing your data.
-
-include::{libbeat-dir}/shared/opendashboards.asciidoc[tag=open-dashboards]
-
-[float]
-=== What's next?
-
-Now that you have audit data streaming into {es}, learn how to unify your logs,
-metrics, uptime, and application performance data.
-
-include::{libbeat-dir}/shared/obs-apps.asciidoc[]
diff --git a/auditbeat/docs/howto/howto.asciidoc b/auditbeat/docs/howto/howto.asciidoc
deleted file mode 100644
index 0c0334f29021..000000000000
--- a/auditbeat/docs/howto/howto.asciidoc
+++ /dev/null
@@ -1,39 +0,0 @@
-[[howto-guides]]
-= How to guides
-
-[partintro]
---
-Learn how to perform common {beatname_uc} configuration tasks.
-
-* <<{beatname_lc}-template>>
-* <>
-* <>
-* <<{beatname_lc}-geoip>>
-* <>
-* <>
-* <>
-* <>
-
-
---
-
-include::{libbeat-dir}/howto/load-index-templates.asciidoc[]
-
-include::{libbeat-dir}/howto/change-index-name.asciidoc[]
-
-include::{libbeat-dir}/howto/load-dashboards.asciidoc[]
-
-include::{libbeat-dir}/shared-geoip.asciidoc[]
-
-include::{libbeat-dir}/shared-config-ingest.asciidoc[]
-
-:standalone:
-include::{libbeat-dir}/shared-env-vars.asciidoc[]
-:standalone!:
-
-:standalone:
-include::{libbeat-dir}/yaml.asciidoc[]
-:standalone!:
-
-
-
diff --git a/auditbeat/docs/images/auditbeat-kernel-executions-dashboard.png b/auditbeat/docs/images/auditbeat-kernel-executions-dashboard.png
deleted file mode 100644
index 855bbc5eb37e..000000000000
Binary files a/auditbeat/docs/images/auditbeat-kernel-executions-dashboard.png and /dev/null differ
diff --git a/auditbeat/docs/images/auditbeat-kernel-overview-dashboard.png b/auditbeat/docs/images/auditbeat-kernel-overview-dashboard.png
deleted file mode 100644
index 2f08cdcddbef..000000000000
Binary files a/auditbeat/docs/images/auditbeat-kernel-overview-dashboard.png and /dev/null differ
diff --git a/auditbeat/docs/images/auditbeat-kernel-sockets-dashboard.png b/auditbeat/docs/images/auditbeat-kernel-sockets-dashboard.png
deleted file mode 100644
index 156c3f38f526..000000000000
Binary files a/auditbeat/docs/images/auditbeat-kernel-sockets-dashboard.png and /dev/null differ
diff --git a/auditbeat/docs/index.asciidoc b/auditbeat/docs/index.asciidoc
deleted file mode 100644
index bf2db3607ce7..000000000000
--- a/auditbeat/docs/index.asciidoc
+++ /dev/null
@@ -1,58 +0,0 @@
-= Auditbeat Reference
-
-:libbeat-dir: {docdir}/../../libbeat/docs
-
-include::{libbeat-dir}/version.asciidoc[]
-
-include::{asciidoc-dir}/../../shared/versions/stack/{source_branch}.asciidoc[]
-
-include::{asciidoc-dir}/../../shared/attributes.asciidoc[]
-
-:beatname_lc: auditbeat
-:beatname_uc: Auditbeat
-:beatname_pkg: {beatname_lc}
-:github_repo_name: beats
-:discuss_forum: beats/{beatname_lc}
-:beat_default_index_prefix: {beatname_lc}
-:deb_os:
-:rpm_os:
-:mac_os:
-:docker_platform:
-:win_os:
-:linux_os:
-:no_cache_processor:
-:no_decode_cef_processor:
-:no_decode_csv_fields_processor:
-:no_parse_aws_vpc_flow_log_processor:
-:no_script_processor:
-:no_timestamp_processor:
-
-include::{libbeat-dir}/shared-beats-attributes.asciidoc[]
-
-include::./overview.asciidoc[]
-
-include::./getting-started.asciidoc[]
-
-include::./setting-up-running.asciidoc[]
-
-include::./upgrading.asciidoc[]
-
-include::./configuring-howto.asciidoc[]
-
-include::{docdir}/howto/howto.asciidoc[]
-
-include::./modules.asciidoc[]
-
-include::./fields.asciidoc[]
-
-include::{libbeat-dir}/monitoring/monitoring-beats.asciidoc[]
-
-include::{libbeat-dir}/shared-securing-beat.asciidoc[]
-
-include::./troubleshooting.asciidoc[]
-
-include::./faq.asciidoc[]
-
-include::{libbeat-dir}/contributing-to-beats.asciidoc[]
-
-
diff --git a/auditbeat/docs/modules.asciidoc b/auditbeat/docs/modules.asciidoc
deleted file mode 100644
index d94daa75bad1..000000000000
--- a/auditbeat/docs/modules.asciidoc
+++ /dev/null
@@ -1,10 +0,0 @@
-[id="{beatname_lc}-modules"]
-= Modules
-
-[partintro]
---
-This section contains detailed information about the metric collecting modules
-contained in {beatname_uc}. More details about each module can be found under
-the links below.
-
-include::modules_list.asciidoc[]
diff --git a/auditbeat/docs/modules/auditd.asciidoc b/auditbeat/docs/modules/auditd.asciidoc
deleted file mode 100644
index 0361dc56097e..000000000000
--- a/auditbeat/docs/modules/auditd.asciidoc
+++ /dev/null
@@ -1,327 +0,0 @@
-////
-This file is generated! See scripts/docs_collector.py
-////
-
-:modulename: auditd
-
-[id="{beatname_lc}-module-auditd"]
-== Auditd Module
-
-The `auditd` module receives audit events from the Linux Audit Framework that
-is a part of the Linux kernel.
-
-This module is available only for Linux.
-
-[float]
-=== How it works
-
-This module establishes a subscription to the kernel to receive the events
-as they occur. So unlike most other modules, the `period` configuration
-option is unused because it is not implemented using polling.
-
-The Linux Audit Framework can send multiple messages for a single auditable
-event. For example, a `rename` syscall causes the kernel to send eight separate
-messages. Each message describes a different aspect of the activity that is
-occurring (the syscall itself, file paths, current working directory, process
-title). This module will combine all of the data from each of the messages
-into a single event.
-
-Messages for one event can be interleaved with messages from another event. This
-module will buffer the messages in order to combine related messages into a
-single event even if they arrive interleaved or out of order.
-
-[float]
-=== Useful commands
-
-When running {beatname_uc} with the `auditd` module enabled, you might find
-that other monitoring tools interfere with {beatname_uc}.
-
-For example, you might encounter errors if another process, such as `auditd`, is
-registered to receive data from the Linux Audit Framework. You can use these
-commands to see if the `auditd` service is running and stop it:
-
-* See if `auditd` is running:
-+
-[source,shell]
------
-service auditd status
------
-
-* Stop the `auditd` service:
-+
-[source,shell]
------
-service auditd stop
------
-
-* Disable `auditd` from starting on boot:
-+
-[source,shell]
------
-chkconfig auditd off
------
-
-To save CPU usage and disk space, you can use this command to stop `journald`
-from listening to audit messages:
-
-[source,shell]
------
-systemctl mask systemd-journald-audit.socket
------
-
-[float]
-=== Inspect the kernel audit system status
-
-{beatname_uc} provides useful commands to query the state of the audit system
-in the Linux kernel.
-
-* See the list of installed audit rules:
-+
-[source,shell]
------
-auditbeat show auditd-rules
------
-+
-Prints the list of loaded rules, similar to `auditctl -l`:
-+
-[source,shell]
------
--a never,exit -S all -F pid=26253
--a always,exit -F arch=b32 -S all -F key=32bit-abi
--a always,exit -F arch=b64 -S execve,execveat -F key=exec
--a always,exit -F arch=b64 -S connect,accept,bind -F key=external-access
--w /etc/group -p wa -k identity
--w /etc/passwd -p wa -k identity
--w /etc/gshadow -p wa -k identity
--a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EACCES -F key=access
--a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EPERM -F key=access
------
-
-* See the status of the audit system:
-+
-[source,shell]
------
-auditbeat show auditd-status
------
-+
-Prints the status of the kernel audit system, similar to `auditctl -s`:
-+
-[source,shell]
------
-enabled 1
-failure 0
-pid 0
-rate_limit 0
-backlog_limit 8192
-lost 14407
-backlog 0
-backlog_wait_time 0
-features 0xf
------
-
-[float]
-=== Configuration options
-
-This module has some configuration options for tuning its behavior. The
-following example shows all configuration options with their default values.
-
-[source,yaml]
-----
-- module: auditd
- resolve_ids: true
- failure_mode: silent
- backlog_limit: 8192
- rate_limit: 0
- include_raw_message: false
- include_warnings: false
- backpressure_strategy: auto
- immutable: false
-----
-
-This module also supports the
-<<module-standard-options-auditd,standard configuration options>>
-described later.
-
-*`socket_type`*:: This optional setting controls the type of
-socket that {beatname_uc} uses to receive events from the kernel. The two
-options are `unicast` and `multicast`.
-+
-`unicast` should be used when {beatname_uc} is the primary userspace daemon for
-receiving audit events and managing the rules. Only a single process can receive
-audit events through the "unicast" connection so any other daemons should be
-stopped (e.g. stop `auditd`).
-+
-`multicast` can be used in kernel versions 3.16 and newer. By using `multicast`
-{beatname_uc} will receive an audit event broadcast that is not exclusive to
-a single process. This is ideal for situations where `auditd` is running and
-managing the rules.
-+
-By default {beatname_uc} will use `multicast` if the kernel version is 3.16 or
-newer and no rules have been defined. Otherwise `unicast` will be used.
-
-*`immutable`*:: This boolean setting sets the audit config as immutable (`-e 2`).
-This option can only be used with `socket_type: unicast`, since {beatname_uc}
-needs to manage the rules to be able to set it.
-+
-It is important to note that with this setting enabled, if {beatname_uc} is
-stopped and resumed, events will continue to be processed, but the
-configuration won't be updated until the system is restarted entirely.
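-+
-For example, a minimal configuration sketch of a module entry that forces the
-unicast socket and locks the audit configuration (the values shown are
-illustrative, not defaults):
-+
-[source,yaml]
-----
-- module: auditd
- socket_type: unicast
- immutable: true
-----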
-
-*`resolve_ids`*:: This boolean setting enables the resolution of UIDs and
-GIDs to their associated names. The default value is true.
-
-*`failure_mode`*:: This determines the kernel's behavior on critical
-failures such as errors sending events to {beatname_uc}, exceeding the backlog
-limit, running out of kernel memory, or exceeding the rate limit. The
-options are `silent`, `log`, or `panic`. `silent` makes the kernel
-ignore the errors, `log` makes the kernel write the audit messages using
-`printk` so they show up in the system's syslog, and `panic` causes the kernel
-to panic to prevent use of the machine. {beatname_uc}'s default is `silent`.
-
-*`backlog_limit`*:: This controls the maximum number of audit messages
-that will be buffered by the kernel.
-
-*`rate_limit`*:: This sets a rate limit on the number of messages/sec
-delivered by the kernel. The default is 0, which disables rate limiting.
-Changing this value to anything other than zero can cause messages to be lost.
-The preferred approach to reduce the messaging rate is to be more selective in
-the audit ruleset.
-
-*`include_raw_message`*:: This boolean setting causes {beatname_uc} to
-include each of the raw messages that contributed to the event in the document
-as a field called `event.original`. The default value is false. This setting is
-primarily used for development and debugging purposes.
-
-*`include_warnings`*:: This boolean setting causes {beatname_uc} to
-include as warnings any issues that were encountered while parsing the raw
-messages. The messages are written to the `error.message` field. The default
-value is false. When this setting is enabled the raw messages will be included
-in the event regardless of the `include_raw_message` config setting. This
-setting is primarily used for development and debugging purposes.
-
-*`audit_rules`*:: A string containing the audit rules that should be
-installed to the kernel. There should be one rule per line. Comments can be
-embedded in the string using `#` as a prefix. The format for rules is the same
-used by the Linux `auditctl` utility. {beatname_uc} supports adding file watches
-(`-w`) and syscall rules (`-a` or `-A`). For more information, see
-<<audit-rules>>.
-
-*`audit_rule_files`*:: A list of files to load audit rules from. These files
-are loaded after the rules declared in `audit_rules` are loaded. Wildcards are
-supported and will expand in lexicographical order. The format is the same as
-that of the `audit_rules` field.
-
-*`ignore_errors`*:: This setting allows errors during rule loading and parsing
-to be ignored, but logged as warnings.
-
-*`backpressure_strategy`*:: Specifies the strategy that {beatname_uc} uses to
-prevent backpressure from propagating to the kernel and impacting audited
-processes.
-+
---
-The possible values are:
-
-- `auto` (default): {beatname_uc} uses the `kernel` strategy, if supported, or
-falls back to the `userspace` strategy.
-- `kernel`: {beatname_uc} sets the `backlog_wait_time` in the kernel's
-audit framework to 0. This causes events to be discarded in the kernel if
-the audit backlog queue fills to capacity. Requires a 3.14 kernel or
-newer.
-- `userspace`: {beatname_uc} drops events when there is backpressure
-from the publishing pipeline. If no `rate_limit` is set, {beatname_uc} sets a rate
-limit of 5000. Users should test their setup and adjust the `rate_limit`
-option accordingly.
-- `both`: {beatname_uc} uses the `kernel` and `userspace` strategies at the same
-time.
-- `none`: No backpressure mitigation measures are enabled.
---
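-+
-For example, a sketch that relies only on the userspace mitigation with an
-explicit rate limit (the value of 1000 is illustrative and should be tuned for
-your workload):
-+
-[source,yaml]
-----
-- module: auditd
- backpressure_strategy: userspace
- rate_limit: 1000
-----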
-
-include::{docdir}/auditbeat-options.asciidoc[]
-
-[float]
-[[audit-rules]]
-=== Audit rules
-
-The audit rules are where you configure the activities that are audited. These
-rules are configured as either syscalls or files that should be monitored. For
-example, you can track all `connect` syscalls or file system writes to
-`/etc/passwd`.
-
-Auditing a large number of syscalls can place a heavy load on the system, so
-consider carefully the rules you define and try to apply filters in the rules
-themselves to be as selective as possible.
-
-The kernel evaluates the rules in the order in which they were defined, so place
-the most active rules first to speed up evaluation.
-
-You can assign keys to each rule for better identification of the rule that
-triggered an event and easier filtering later in Elasticsearch.
-
-Defining any audit rules in the config causes {beatname_uc} to purge all
-existing audit rules prior to adding the rules specified in the config.
-Therefore it is unnecessary and unsupported to include a `-D` (delete all) rule.
-
-["source","sh",subs="attributes"]
-----
-{beatname_lc}.modules:
-- module: auditd
- audit_rules: |
- # Things that affect identity.
- -w /etc/group -p wa -k identity
- -w /etc/passwd -p wa -k identity
- -w /etc/gshadow -p wa -k identity
- -w /etc/shadow -p wa -k identity
-
- # Unauthorized access attempts to files (unsuccessful).
- -a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -F key=access
- -a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -F key=access
- -a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -F key=access
- -a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -F key=access
-----
-
-
-[float]
-=== Example configuration
-
-The Auditd module supports the common configuration options that are
-described under <<configuration-{beatname_lc},configuring {beatname_lc}>>. Here
-is an example configuration:
-
-[source,yaml]
-----
-auditbeat.modules:
-- module: auditd
- # Load audit rules from separate files. Same format as audit.rules(7).
- audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
- audit_rules: |
- ## Define audit rules here.
- ## Create file watches (-w) or syscall audits (-a or -A). Uncomment these
- ## examples or add your own rules.
-
- ## If you are on a 64 bit platform, everything should be running
- ## in 64 bit mode. This rule will detect any use of the 32 bit syscalls
- ## because this might be a sign of someone exploiting a hole in the 32
- ## bit API.
- #-a always,exit -F arch=b32 -S all -F key=32bit-abi
-
- ## Executions.
- #-a always,exit -F arch=b64 -S execve,execveat -k exec
-
- ## External access (warning: these can be expensive to audit).
- #-a always,exit -F arch=b64 -S accept,bind,connect -F key=external-access
-
- ## Identity changes.
- #-w /etc/group -p wa -k identity
- #-w /etc/passwd -p wa -k identity
- #-w /etc/gshadow -p wa -k identity
-
- ## Unauthorized access attempts.
- #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
- #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
-
-
-----
-
-
-:modulename!:
-
diff --git a/auditbeat/docs/modules/file_integrity.asciidoc b/auditbeat/docs/modules/file_integrity.asciidoc
deleted file mode 100644
index 872ba5189255..000000000000
--- a/auditbeat/docs/modules/file_integrity.asciidoc
+++ /dev/null
@@ -1,183 +0,0 @@
-////
-This file is generated! See scripts/docs_collector.py
-////
-
-:modulename: file_integrity
-
-[id="{beatname_lc}-module-file_integrity"]
-== File Integrity Module
-
-The `file_integrity` module sends events when a file is changed (created,
-updated, or deleted) on disk. The events contain file metadata and hashes.
-
-The module is implemented for Linux, macOS (Darwin), and Windows.
-
-[float]
-=== How it works
-
-This module uses features of the operating system to monitor file changes in
-realtime. When the module starts it creates a subscription with the OS to
-receive notifications of changes to the specified files or directories. Upon
-receiving notification of a change the module will read the file's metadata
-and then compute a hash of the file's contents.
-
-At startup this module will perform an initial scan of the configured files
-and directories to generate baseline data for the monitored paths and detect
-changes since the last time it was run. It uses locally persisted data in order
-to only send events for new or modified files.
-
-The operating system features that power this feature are as follows.
-
-* Linux - Multiple backends are supported: `auto`, `fsnotify`, `kprobes`, and `ebpf`.
-By default, `fsnotify` is used, and therefore the kernel must have inotify support.
-Inotify was initially merged into the 2.6.13 Linux kernel.
-The `fsnotify` backend cannot associate user data with file events.
-The `ebpf` backend uses modern eBPF features and supports 5.10.16+ kernels.
-The `kprobes` backend uses tracefs and supports 3.10+ kernels.
-The preferred backend can be selected with the `backend` config option.
-Because `ebpf` and `kprobes` are in technical preview, `auto` defaults to `fsnotify`.
-* macOS (Darwin) - Uses the `FSEvents` API, present since macOS 10.5. This API
-coalesces multiple changes to a file into a single event. {beatname_uc} translates
-these coalesced changes into a meaningful sequence of actions. However,
-in rare situations the reported events may have a different ordering than what
-actually happened.
-* Windows - `ReadDirectoryChangesW` is used.
-
-The file integrity module should not be used to monitor paths on network file
-systems.
-
-[float]
-=== Configuration options
-
-This module has some configuration options for tuning its behavior. The
-following example shows all configuration options with their default values for
-Linux.
-
-[source,yaml]
-----
-- module: file_integrity
- paths:
- - /bin
- - /usr/bin
- - /sbin
- - /usr/sbin
- - /etc
- recursive: false
- exclude_files:
- - '(?i)\.sw[nop]$'
- - '~$'
- - '/\.git($|/)'
- include_files: []
- scan_at_start: true
- scan_rate_per_sec: 50 MiB
- max_file_size: 100 MiB
- hash_types: [sha1]
-----
-
-This module also supports the
-<<module-standard-options-file_integrity,standard configuration options>>
-described later.
-
-*`paths`*:: A list of paths (directories or files) to watch. Globs are
-not supported. The specified paths should exist when the metricset is started.
-Paths should be absolute, although the file integrity module will attempt to
-resolve relative path events to their absolute file path. Symbolic links will
-be resolved on module start and the link target will be watched if link resolution
-is successful. Changes to the symbolic link after module start will not change
-the watch target. If the link does not resolve to a valid target, the symbolic
-link itself will be watched; if the symlink target becomes valid after module
-start up this will not be picked up by the file system watches.
-
-*`recursive`*:: By default, the watches set on the paths specified in
-`paths` are not recursive. This means that only changes to the contents
-of these directories are watched. If `recursive` is set to `true`, the
-`file_integrity` module will watch for changes in these directories and all
-of their subdirectories.
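-+
-For example, a minimal sketch that watches `/etc` and everything below it (the
-path is illustrative):
-+
-[source,yaml]
-----
-- module: file_integrity
- paths:
- - /etc
- recursive: true
-----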
-
-*`exclude_files`*:: A list of regular expressions used to filter out events
-for unwanted files. The expressions are matched against the full path of every
-file and directory. When used in conjunction with `include_files`, file paths need
-to match both `include_files` and not match `exclude_files` to be selected.
-By default, no files are excluded. See <<regexp-support,regular expression support>>
-for a list of supported regexp patterns. It is recommended to wrap regular
-expressions in single quotation marks to avoid issues with YAML escaping
-rules.
-If `recursive` is set to true, subdirectories can also be excluded here by
-specifying them.
-
-*`include_files`*:: A list of regular expressions used to specify which files to
-select. When configured, only files matching the pattern will be monitored.
-The expressions are matched against the full path of every file and directory.
-When used in conjunction with `exclude_files`, file paths need
-to match both `include_files` and not match `exclude_files` to be selected.
-By default, all files are selected. See <<regexp-support,regular expression support>>
-for a list of supported regexp patterns. It is recommended to wrap regular
-expressions in single quotation marks to avoid issues with YAML escaping
-rules.
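-+
-For example, a sketch that monitors only shell scripts under `/usr/local/bin`
-while still skipping editor backup files (the path and patterns are
-illustrative):
-+
-[source,yaml]
-----
-- module: file_integrity
- paths:
- - /usr/local/bin
- include_files:
- - '\.sh$'
- exclude_files:
- - '~$'
-----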
-
-*`scan_at_start`*:: A boolean value that controls whether {beatname_uc} scans
-over the configured file paths at startup and sends events for the files
-that have been modified since the last time {beatname_uc} was running. The
-default value is true.
-+
-This feature depends on data stored locally in `path.data` in order to determine
-if a file has changed. The first time {beatname_uc} runs it will send an event
-for each file it encounters.
-
-*`scan_rate_per_sec`*:: When `scan_at_start` is enabled this sets an
-average read rate defined in bytes per second for the initial scan. This
-throttles the amount of CPU and I/O that {beatname_uc} consumes at startup.
-The default value is "50 MiB". Setting the value to "0" disables throttling.
-For convenience units can be specified as a suffix to the value. The supported
-units are `b` (default), `kib`, `kb`, `mib`, `mb`, `gib`, `gb`, `tib`, `tb`,
-`pib`, `pb`, `eib`, and `eb`.
-
-*`max_file_size`*:: The maximum size of a file in bytes for which
-{beatname_uc} will compute hashes and run file parsers. Files larger than this
-size will not be hashed or analysed by configured file parsers. The default
-value is 100 MiB. For convenience, units can be specified as a suffix to the
-value. The supported units are `b` (default), `kib`, `kb`, `mib`, `mb`, `gib`,
-`gb`, `tib`, `tb`, `pib`, `pb`, `eib`, and `eb`.
-
-*`hash_types`*:: A list of hash types to compute when the file changes.
-The supported hash types are `blake2b_256`, `blake2b_384`, `blake2b_512`, `md5`,
-`sha1`, `sha224`, `sha256`, `sha384`, `sha512`, `sha512_224`, `sha512_256`,
-`sha3_224`, `sha3_256`, `sha3_384`, `sha3_512`, and `xxh64`. The default value is `sha1`.
-
-*`file_parsers`*:: A list of `file_integrity` fields under `file` that will be
-populated by file format parsers. The available fields that can be analysed
-are listed in the auditbeat.reference.yml file. File parsers are run on all
-files within the `max_file_size` limit in the configured paths during a scan or
-when a file event involves the file. Files that are not targets of the specific
-file parser are only sniffed to examine whether analysis should proceed. This will
-usually only involve reading a small number of bytes.
-
-*`backend`*:: (*Linux only*) Select the backend which will be used to
-source events. Valid values: `auto`, `fsnotify`, `kprobes`, `ebpf`. Default: `fsnotify`.
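-+
-For example, a sketch that computes two hashes per change and explicitly selects
-the `kprobes` backend on Linux (the values are illustrative):
-+
-[source,yaml]
-----
-- module: file_integrity
- paths:
- - /bin
- hash_types: [sha256, xxh64]
- backend: kprobes
-----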
-
-include::{docdir}/auditbeat-options.asciidoc[]
-
-
-[float]
-=== Example configuration
-
-The File Integrity module supports the common configuration options that are
-described under <<configuration-{beatname_lc},configuring {beatname_lc}>>. Here
-is an example configuration:
-
-[source,yaml]
-----
-auditbeat.modules:
-- module: file_integrity
- paths:
- - /bin
- - /usr/bin
- - /sbin
- - /usr/sbin
- - /etc
-
-----
-
-
-:modulename!:
-
diff --git a/auditbeat/docs/modules_list.asciidoc b/auditbeat/docs/modules_list.asciidoc
deleted file mode 100644
index ed367bac1d09..000000000000
--- a/auditbeat/docs/modules_list.asciidoc
+++ /dev/null
@@ -1,14 +0,0 @@
-////
-This file is generated! See scripts/docs_collector.py
-////
-
- * <<{beatname_lc}-module-auditd,Auditd>>
- * <<{beatname_lc}-module-file_integrity,File Integrity>>
- * <<{beatname_lc}-module-system,System>>
-
-
---
-
-include::./modules/auditd.asciidoc[]
-include::./modules/file_integrity.asciidoc[]
-include::../../x-pack/auditbeat/docs/modules/system.asciidoc[]
diff --git a/auditbeat/docs/overview.asciidoc b/auditbeat/docs/overview.asciidoc
deleted file mode 100644
index 547638ff509f..000000000000
--- a/auditbeat/docs/overview.asciidoc
+++ /dev/null
@@ -1,11 +0,0 @@
-[id="{beatname_lc}-overview"]
-== {beatname_uc} overview
-
-{beatname_uc} is a lightweight shipper that you can install on your servers to
-audit the activities of users and processes on your systems. For example, you
-can use {beatname_uc} to collect and centralize audit events from the Linux
-Audit Framework. You can also use {beatname_uc} to detect changes to critical
-files, like binaries and configuration files, and identify potential security
-policy violations.
-
-include::{libbeat-dir}/shared-libbeat-description.asciidoc[]
diff --git a/auditbeat/docs/reload-configuration.asciidoc b/auditbeat/docs/reload-configuration.asciidoc
deleted file mode 100644
index dab510164d89..000000000000
--- a/auditbeat/docs/reload-configuration.asciidoc
+++ /dev/null
@@ -1,51 +0,0 @@
-[id="{beatname_lc}-configuration-reloading"]
-== Reload the configuration dynamically
-
-++++
-Config file reloading
-++++
-
-beta[]
-
-You can configure {beatname_uc} to dynamically reload configuration files when
-there are changes. To do this, you specify a path
-(https://golang.org/pkg/path/filepath/#Glob[glob]) to watch for module
-configuration changes. When the files found by the glob change, new modules are
-started/stopped according to changes in the configuration files.
-
-To enable dynamic config reloading, you specify the `path` and `reload` options
-in the main +{beatname_lc}.yml+ config file. For example:
-
-["source","sh"]
-------------------------------------------------------------------------------
-auditbeat.config.modules:
- path: ${path.config}/conf.d/*.yml
- reload.enabled: true
- reload.period: 10s
-------------------------------------------------------------------------------
-
-*`path`*:: A glob that defines the files to check for changes.
-
-*`reload.enabled`*:: When set to `true`, enables dynamic config reload.
-
-*`reload.period`*:: Specifies how often the files are checked for changes. Do not
-set the `period` to less than 1s because the modification time of files is often
-stored in seconds. Setting the `period` to less than 1s will result in
-unnecessary overhead.
-
-Each file found by the glob must contain a list of one or more module
-definitions. For example:
-
-[source,yaml]
-------------------------------------------------------------------------------
-- module: file_integrity
- paths:
- - /www/wordpress
- - /www/wordpress/wp-admin
- - /www/wordpress/wp-content
- - /www/wordpress/wp-includes
-------------------------------------------------------------------------------
-
-NOTE: On systems with POSIX file permissions, all Beats configuration files are
-subject to ownership and file permission checks. If you encounter config loading
-errors related to file ownership, see {beats-ref}/config-file-permissions.html.
diff --git a/auditbeat/docs/running-on-docker.asciidoc b/auditbeat/docs/running-on-docker.asciidoc
deleted file mode 100644
index dee50fa254a3..000000000000
--- a/auditbeat/docs/running-on-docker.asciidoc
+++ /dev/null
@@ -1,14 +0,0 @@
-include::{libbeat-dir}/shared-docker.asciidoc[]
-
-==== Special requirements
-
-Under Docker, {beatname_uc} runs as a non-root user, but requires some privileged
-capabilities to operate correctly. Ensure that the +AUDIT_CONTROL+ and +AUDIT_READ+
-capabilities are available to the container.
-
-It is also essential to run {beatname_uc} in the host PID namespace.
-
-["source","sh",subs="attributes"]
-----
-docker run --cap-add=AUDIT_CONTROL --cap-add=AUDIT_READ --user=root --pid=host {dockerimage}
-----
diff --git a/auditbeat/docs/running-on-kubernetes.asciidoc b/auditbeat/docs/running-on-kubernetes.asciidoc
deleted file mode 100644
index f5f4f0f4715e..000000000000
--- a/auditbeat/docs/running-on-kubernetes.asciidoc
+++ /dev/null
@@ -1,101 +0,0 @@
-[[running-on-kubernetes]]
-=== Running {beatname_uc} on Kubernetes
-
-The {beatname_uc} <<{beatname_lc}-module-file_integrity,`file_integrity` module>> can be
-used on Kubernetes to check file integrity.
-
-TIP: Running {ecloud} on Kubernetes? See {eck-ref}/k8s-beat.html[Run {beats} on ECK].
-
-ifeval::["{release-state}"=="unreleased"]
-
-However, version {version} of {beatname_uc} has not yet been
-released, so no Docker image is currently available for this version.
-
-endif::[]
-
-
-[float]
-==== Kubernetes deploy manifests
-
-By deploying {beatname_uc} as a https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/[DaemonSet]
-we ensure we get a running instance on each node of the cluster.
-
-Everything is deployed under the `kube-system` namespace. You can change that by
-updating the YAML file.
-
-To get the manifests just run:
-
-["source", "sh", subs="attributes"]
-------------------------------------------------
-curl -L -O https://raw.githubusercontent.com/elastic/beats/{branch}/deploy/kubernetes/{beatname_lc}-kubernetes.yaml
-------------------------------------------------
-
-[WARNING]
-=======================================
-If you are using Kubernetes 1.7 or earlier: {beatname_uc} uses a hostPath volume to persist internal data. It's located
-under `/var/lib/{beatname_lc}-data`. The manifest uses folder autocreation (`DirectoryOrCreate`), which was introduced in
-Kubernetes 1.8. You will need to remove `type: DirectoryOrCreate` from the manifest and create the host folder yourself.
-=======================================
-
-[float]
-==== Settings
-
-Some parameters are exposed in the manifest to configure the logs destination. By
-default they use an existing Elasticsearch deployment if it's present. If you
-want to change that behavior, edit the YAML file and modify them:
-
-["source", "yaml", subs="attributes"]
-------------------------------------------------
-- name: ELASTICSEARCH_HOST
- value: elasticsearch
-- name: ELASTICSEARCH_PORT
- value: "9200"
-- name: ELASTICSEARCH_USERNAME
- value: elastic
-- name: ELASTICSEARCH_PASSWORD
- value: changeme
-------------------------------------------------
-
-[float]
-===== Running {beatname_uc} on control plane nodes
-
-Kubernetes control plane nodes can use https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/[taints]
-to limit the workloads that can run on them. To run {beatname_uc} on control plane nodes, you may need to
-update the DaemonSet spec to include proper tolerations:
-
-[source,yaml]
-------------------------------------------------
-spec:
- tolerations:
- - key: node-role.kubernetes.io/control-plane
- effect: NoSchedule
-------------------------------------------------
-
-[float]
-==== Deploy
-
-To deploy {beatname_uc} to Kubernetes just run:
-
-["source", "sh", subs="attributes"]
-------------------------------------------------
-kubectl create -f {beatname_lc}-kubernetes.yaml
-------------------------------------------------
-
-Then you should be able to check the status by running:
-
-["source", "sh", subs="attributes"]
-------------------------------------------------
-$ kubectl --namespace=kube-system get ds/{beatname_lc}
-
-NAME        DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
-{beatname_lc}   32        32        0         32           0           <none>          1m
-------------------------------------------------
-
-[WARNING]
-=======================================
-{beatname_uc} is able to monitor the file integrity of files in pods.
-To do that, the directories with the container root file systems have to be
-mounted as volumes in the {beatname_uc} container. For example, containers
-executed with containerd have their root file systems under `/run/containerd`.
-The https://raw.githubusercontent.com/elastic/beats/{branch}/deploy/kubernetes/{beatname_lc}-kubernetes.yaml[reference manifest] contains an example of this.
-=======================================
diff --git a/auditbeat/docs/setting-up-running.asciidoc b/auditbeat/docs/setting-up-running.asciidoc
deleted file mode 100644
index 4e2bd8265f90..000000000000
--- a/auditbeat/docs/setting-up-running.asciidoc
+++ /dev/null
@@ -1,58 +0,0 @@
-/////
-// NOTE:
-// Each beat has its own setup overview to allow for the addition of content
-// that is unique to each beat.
-/////
-
-[[setting-up-and-running]]
-== Set up and run {beatname_uc}
-
-++++
-Set up and run
-++++
-
-Before reading this section, see
-<<{beatname_lc}-installation-configuration>> for basic
-installation instructions to get you started.
-
-This section includes additional information on how to install, set up, and run
-{beatname_uc}, including:
-
-* <<directory-layout>>
-
-* <<keystore>>
-
-* <<command-line-options>>
-
-* <<setup-repositories>>
-
-* <<running-on-docker>>
-
-* <<running-on-kubernetes>>
-
-* <<running-with-systemd>>
-
-* <<{beatname_lc}-starting>>
-
-* <<shutdown>>
-
-
-//MAINTAINERS: If you add a new file to this section, make sure you update the bulleted list ^^ too.
-
-include::{libbeat-dir}/shared-directory-layout.asciidoc[]
-
-include::{libbeat-dir}/keystore.asciidoc[]
-
-include::{libbeat-dir}/command-reference.asciidoc[]
-
-include::{libbeat-dir}/repositories.asciidoc[]
-
-include::./running-on-docker.asciidoc[]
-
-include::./running-on-kubernetes.asciidoc[]
-
-include::{libbeat-dir}/shared-systemd.asciidoc[]
-
-include::{libbeat-dir}/shared/start-beat.asciidoc[]
-
-include::{libbeat-dir}/shared/shutdown.asciidoc[]
diff --git a/auditbeat/docs/troubleshooting.asciidoc b/auditbeat/docs/troubleshooting.asciidoc
deleted file mode 100644
index 19eb279272b4..000000000000
--- a/auditbeat/docs/troubleshooting.asciidoc
+++ /dev/null
@@ -1,41 +0,0 @@
-[[troubleshooting]]
-= Troubleshoot
-
-[partintro]
---
-If you have issues installing or running {beatname_uc}, read the
-following tips:
-
-* <<getting-help>>
-* <<enable-{beatname_lc}-debugging>>
-* <<understand-{beatname_lc}-logs>>
-* <<faq>>
-
-//sets block macro for getting-help.asciidoc included in next section
-
---
-
-[[getting-help]]
-== Get Help
-
-include::{libbeat-dir}/getting-help.asciidoc[]
-
-//sets block macro for debugging.asciidoc included in next section
-
-[id="enable-{beatname_lc}-debugging"]
-== Debug
-
-include::{libbeat-dir}/debugging.asciidoc[]
-
-//sets block macro for metrics-in-logs.asciidoc included in next section
-
-[id="understand-{beatname_lc}-logs"]
-[role="xpack"]
-== Understand metrics in {beatname_uc} logs
-
-++++
-Understand logged metrics
-++++
-
-include::{libbeat-dir}/metrics-in-logs.asciidoc[]
-
diff --git a/auditbeat/docs/upgrading.asciidoc b/auditbeat/docs/upgrading.asciidoc
deleted file mode 100644
index 132cb1db8434..000000000000
--- a/auditbeat/docs/upgrading.asciidoc
+++ /dev/null
@@ -1,7 +0,0 @@
-[[upgrading-auditbeat]]
-== Upgrade Auditbeat
-
-For information about upgrading to a new version, see:
-
-* {beats-ref}/breaking-changes.html[Breaking Changes]
-* {beats-ref}/upgrading.html[Upgrade]
diff --git a/docs/devguide/contributing.asciidoc b/docs/devguide/contributing.asciidoc
deleted file mode 100644
index 0637052b96c7..000000000000
--- a/docs/devguide/contributing.asciidoc
+++ /dev/null
@@ -1,245 +0,0 @@
-[[beats-contributing]]
-== Contributing to Beats
-
-If you have a bugfix or new feature that you would like to contribute, please
-start by opening a topic on the https://discuss.elastic.co/c/beats[forums].
-It may be that somebody is already working on it, or that there are particular
-issues that you should know about before implementing the change.
-
-We enjoy working with contributors to get their code accepted. There are many
-approaches to fixing a problem and it is important to find the best approach
-before writing too much code. After committing your code, check out the
-https://www.elastic.co/community/contributor[Elastic Contributor Program]
-where you can earn points and rewards for your contributions.
-
-The process for contributing to any of the Elastic repositories is similar.
-
-[float]
-[[contribution-steps]]
-=== Contribution Steps
-
-. Please make sure you have signed our
-https://www.elastic.co/contributor-agreement/[Contributor License Agreement]. We
-are not asking you to assign copyright to us, but to give us the right to
-distribute your code without restriction. We ask this of all contributors in
-order to assure our users of the origin and continuing existence of the code.
-You only need to sign the CLA once.
-
-. Send a pull request! Push your changes to your fork of the repository and
-https://help.github.com/articles/using-pull-requests[submit a pull request] using our
-<>. New PRs go to the main branch. The Beats
-core team will backport your PR if it is necessary.
-
-
-In the pull request, describe what your changes do and mention
-any bugs/issues related to the pull request. Please also add a changelog entry to
-https://github.com/elastic/beats/blob/main/CHANGELOG.next.asciidoc[CHANGELOG.next.asciidoc].
-
-[float]
-[[setting-up-dev-environment]]
-=== Setting Up Your Dev Environment
-
-The Beats are Go programs, so install the {go-version} version of
-http://golang.org/[Go] which is being used for Beats development.
-
-After https://golang.org/doc/install[installing Go], set the
-https://golang.org/doc/code.html#GOPATH[GOPATH] environment variable to point to
-your workspace location, and make sure `$GOPATH/bin` is in your PATH.
-
-NOTE: One deterministic way to install the proper Go version to work with Beats is to use the
-https://github.com/andrewkroh/gvm[GVM] Go version manager. An example for Mac users would be:
-
-[source,shell,subs=attributes+]
-----------------------------------------------------------------------
-gvm use {go-version}
-eval $(gvm {go-version})
-----------------------------------------------------------------------
-
-Then you can clone Beats git repository:
-
-[source,shell]
-----------------------------------------------------------------------
-mkdir -p ${GOPATH}/src/github.com/elastic
-git clone https://github.com/elastic/beats ${GOPATH}/src/github.com/elastic/beats
-----------------------------------------------------------------------
-
-NOTE: If you have multiple go paths, use `${GOPATH%%:*}` instead of `${GOPATH}`.
-
-Beats developers primarily use https://github.com/magefile/mage[Mage] for development.
-You can install mage using a make target:
-
-[source,shell]
---------------------------------------------------------------------------------
-make mage
---------------------------------------------------------------------------------
-
-Then you can compile a particular Beat by using Mage. For example, for Filebeat:
-
-[source,shell]
---------------------------------------------------------------------------------
-cd beats/filebeat
-mage build
---------------------------------------------------------------------------------
-
-You can list all available mage targets with:
-
-[source,shell]
---------------------------------------------------------------------------------
-mage -l
---------------------------------------------------------------------------------
-
-Some of the Beats might have extra development requirements, in which case
-you'll find a CONTRIBUTING.md file in the Beat directory.
-
-We use an http://editorconfig.org/[EditorConfig] file in the beats repository
-to standardise how different editors handle whitespace, line endings, and other
-coding styles in our files. Most popular editors have a
-http://editorconfig.org/#download[plugin] for EditorConfig and we strongly
-recommend that you install it.
-
-[float]
-[[update-scripts]]
-=== Update scripts
-
-The Beats use a variety of scripts based on Python, make, and mage to generate configuration files
-and documentation. Make sure to use the version of Python listed in the https://github.com/elastic/beats/blob/main/.python-version[.python-version] file.
-
-The primary command for updating generated files is:
-
-[source,shell]
---------------------------------------------------------------------------------
-make update
---------------------------------------------------------------------------------
-Each Beat has its own `update` target (for both `make` and `mage`), as well as a master `update` in the repository root.
-If a PR adds or removes a dependency, run `make update` in the root `beats` directory.
-
-Another command properly formats Go source files and adds a copyright header:
-
-[source,shell]
---------------------------------------------------------------------------------
-make fmt
---------------------------------------------------------------------------------
-
-Both of these commands should be run before submitting a PR. You can view all
-the available make targets with `make help`.
-
-These commands have the following dependencies:
-
-* Python >= {python}
-* Python https://docs.python.org/3/library/venv.html[venv module]
-* https://github.com/magefile/mage[Mage]
-
-The Python venv module is included in the standard library in Python 3. On Debian/Ubuntu
-systems you also need to install the `python3-venv` package, which includes
-additional support scripts:
-
-[source,shell]
---------------------------------------------------------------------------------
-sudo apt-get install python3-venv
---------------------------------------------------------------------------------
-
-[float]
-[[build-target-env-vars]]
-=== Selecting Build Targets
-
-Beats is built using the `make release` target. By default, make will select from a limited number of preset build targets:
-
-- darwin/amd64
-- darwin/arm64
-- linux/amd64
-- windows/amd64
-
-You can change build targets using the `PLATFORMS` environment variable. Targets set with the `PLATFORMS` variable can either be a GOOS value, or a GOOS/arch pair.
-For example, `linux` and `linux/amd64` are both valid targets. You can select multiple targets, and the `PLATFORMS` list is space delimited; for example, `darwin windows` will build on all supported darwin and windows architectures.
-In addition, you can add or remove from the list of build targets by prepending `+` or `-` to a given target. For example: `+bsd` or `-darwin`.
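-
-For example, assuming a POSIX shell, the following invocations show both forms
-(the exact target lists are illustrative):
-
-[source,shell]
---------------------------------------------------------------------------------
-# Build release artifacts only for linux/amd64 and windows/amd64.
-PLATFORMS="linux/amd64 windows/amd64" make release
-
-# Start from the default target list and drop the darwin targets.
-PLATFORMS="-darwin" make release
---------------------------------------------------------------------------------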
-
-You can find the complete list of supported build targets with `go tool dist list`.
-
-[float]
-[[running-linter]]
-=== Linting
-
-Beats uses https://golangci-lint.run/[golangci-lint]. You can run the pre-configured linter against your change:
-
-[source,shell]
---------------------------------------------------------------------------------
-mage llc
---------------------------------------------------------------------------------
-
-`llc` stands for `Lint Last Change`, which includes all the Go files that were changed in either the last commit (if you're on the `main` branch) or in the difference between your feature branch and the `main` branch.
-
-It's expected that sometimes a contributor will be asked to fix linter issues unrelated to their contribution since the linter was introduced later than changes in some of the files.
-
-You can also run the linter against an individual package, for example the filebeat command package:
-
-[source,shell]
---------------------------------------------------------------------------------
-golangci-lint run ./filebeat/cmd/...
---------------------------------------------------------------------------------
-
-[float]
-[[running-testsuite]]
-=== Testing
-
-You can run the whole testsuite with the following command:
-
-[source,shell]
---------------------------------------------------------------------------------
-make testsuite
---------------------------------------------------------------------------------
-
-Running the testsuite has the following requirements:
-
-* Python >= {python}
-* Docker >= {docker}
-* Docker-compose >= {docker-compose}
-
-For more details, refer to the <<testing>> guide.
-
-[float]
-[[documentation]]
-=== Documentation
-
-The main documentation for each Beat is located under `/docs` and is
-based on https://docs.asciidoctor.org/asciidoc/latest/[AsciiDoc]. The Beats
-documentation also makes extensive use of conditionals and content reuse to
-ensure consistency and accuracy. Before contributing to the documentation, read
-the following resources:
-
-* https://github.com/elastic/docs/blob/master/README.asciidoc[Docs HOWTO]
-* <<contributing-docs>>
-
-[float]
-[[dependencies]]
-=== Dependencies
-
-In order to create Beats we rely on Golang libraries and other
-external tools.
-
-[float]
-==== Other dependencies
-
-Besides Go libraries, we are using development tools to generate parsers for inputs and processors.
-
-The following packages are required to run `go generate`:
-
-[float]
-===== Auditbeat
-
-* FlatBuffers >= 1.9
-
-[float]
-===== Filebeat
-
-* Graphviz >= 2.43.0
-* Ragel >= 6.10
-
-
-[float]
-[[changelog]]
-=== Changelog
-
-To keep up to date with changes to the official Beats for community developers,
-follow the developer changelog
-https://github.com/elastic/beats/blob/main/CHANGELOG-developer.next.asciidoc[here].
-
diff --git a/docs/devguide/create-metricset.asciidoc b/docs/devguide/create-metricset.asciidoc
deleted file mode 100644
index 2c2d798086b1..000000000000
--- a/docs/devguide/create-metricset.asciidoc
+++ /dev/null
@@ -1,320 +0,0 @@
-[[creating-metricsets]]
-=== Creating a Metricset
-
-include::generator-support-note.asciidoc[tag=metricset-generator]
-
-A metricset is the part of a Metricbeat module that fetches and structures the
-data from the remote service. Each module can have multiple metricsets. In this guide, you learn how to create your own metricset.
-
-When creating a metricset for the first time, it generally helps to look at the
-implementation of existing metricsets for inspiration.
-
-To create a new metricset:
-
-. Run the following command inside the metricbeat beat directory:
-+
-[source,bash]
-----
-make create-metricset
-----
-+
-You need Python to run this command. You'll then be prompted to enter a module and metricset name. Remember that a module represents the service you want to retrieve metrics from (like Redis), and a metricset is a specific set of grouped metrics (like `info` on Redis). Only use characters `[a-z]`
-and, if required, underscores (`_`). No other characters are allowed.
-+
-When you run `make create-metricset`, it creates all the basic files for your metricset, along with the required module
-files if the module does not already exist. See <<creating-metricbeat-module>> for more details about the module files.
-+
-NOTE: We use `{metricset}`, `{module}`, and `{beat}` in this guide as placeholders. You need to replace these with
-the actual names of your metricset, module, and beat.
-+
-The metricset that you created is already a functioning metricset and can be compiled.
-+
-. Compile your new metricset by running the following command:
-+
-[source,bash]
-----
-mage update
-mage build
-----
-+
-The first command, `mage update`, updates all generated files with the most recent files, data, and meta information from the metricset. The second command,
-`mage build`, compiles your source code and provides you with a binary called metricbeat in the same folder. You can run the
-binary in debug mode with the following command:
-+
-[source,bash]
-----
-./metricbeat -e -d "*"
-----
-
-After running the mage commands, you'll find the metricset, along with its generated files, under `module/{module}/{metricset}`. This directory
-contains the following files:
-
-* `\{metricset}.go`
-* `_meta/docs.asciidoc`
-* `_meta/data.json`
-* `_meta/fields.yml`
-
-Let's look at the files in more detail next.
-
-[float]
-==== \{metricset}.go File
-
-The first file is `{metricset}.go`. It contains the logic on how to fetch data from the service and convert it for sending to the output.
-
-The generated file looks like this:
-
-https://github.com/elastic/beats/blob/main/metricbeat/scripts/module/metricset/metricset.go.tmpl
-
-[source,go]
-----
-include::../../metricbeat/scripts/module/metricset/metricset.go.tmpl[]
-----
-
-The `package` clause and `import` declaration are part of the base structure of each Go file. You should only
-modify this part of the file if your implementation requires more imports.
-
-[float]
-===== Initialisation
-
-The init method registers the metricset with the central registry. In Go the `init()` function is called
-before the execution of all other code. This means the module will be automatically registered with the global registry.
-
-The `New` method, which is passed to `MustAddMetricSet`, will be called after the setup of the module and before starting to fetch data. You normally don't need to change this part of the file.
-
-[source,go]
-----
-func init() {
- mb.Registry.MustAddMetricSet("{module}", "{metricset}", New)
-}
-----
-
-[float]
-===== Definition
-
-The MetricSet type defines all fields of the metricset. As a minimum it must be composed of the `mb.BaseMetricSet` fields,
-but can be extended with additional entries. These variables can be used to persist data or configuration between
-multiple fetch calls.
-
-You can add more fields to the MetricSet type, as you can see in the following example where the `username` and `password` string fields are added:
-
-[source,go]
-----
-type MetricSet struct {
- mb.BaseMetricSet
- username string
- password string
-}
-----
-
-
-[float]
-===== Creation
-
-The `New` function creates a new instance of the MetricSet. The setup process
-of the MetricSet is also part of `New`. This method will be called before `Fetch`
-is called the first time.
-
-
-The `New` function also sets up the configuration by processing additional
-configuration entries, if needed.
-
-[source,go]
-----
-
-func New(base mb.BaseMetricSet) (mb.MetricSet, error) {
-
- config := struct{}{}
-
- if err := base.Module().UnpackConfig(&config); err != nil {
- return nil, err
- }
-
- return &MetricSet{
- BaseMetricSet: base,
- }, nil
-}
-----
-
-[float]
-===== Fetching
-
-The `Fetch` method is the central part of the metricset. `Fetch` is called every
-time new data is retrieved. If more than one host is defined, `Fetch` is
-called once for each host. The frequency of calling `Fetch` is based on the `period`
-defined in the configuration file.
-
-`Fetch` must publish the event using the `mb.ReporterV2.Event` method. If an error
-happens, `Fetch` can return an error, or, if `Event` is being called in a loop,
-the error can be published using the `mb.ReporterV2.Error` method. This means
-that Metricbeat always sends an event, even on failure. You must make sure that the
-error message helps to identify the actual error.
-
-The following example shows a metricset `Fetch` method with a counter that is
-incremented for each `Fetch` call:
-
-[source,go]
-----
-func (m *MetricSet) Fetch(report mb.ReporterV2) error {
-
- report.Event(mb.Event{
- MetricSetFields: common.MapStr{
- "counter": m.counter,
- },
- })
- m.counter++
-
- return nil
-}
-----
-
-The JSON output derived from the reported event will be identical to the naming and
-structure you use in `common.MapStr`. For more details about `MapStr` and its functions, see the
-https://godoc.org/github.com/elastic/beats/libbeat/common#MapStr[MapStr API docs].
-
-
-[float]
-===== Multi Fetching
-
-`Event` can be called multiple times inside of the `Fetch` method for metricsets that might expose multiple events.
-`Event` returns a bool that indicates if the metricset is already closed and no further events can be processed,
-in which case `Fetch` should return immediately. If there is an error while processing one of many events,
-it can be published using the `mb.ReporterV2.Error` method, as opposed to returning an error value.
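-
-The following sketch illustrates this pattern. The `m.entries` field and the
-`parseEntry` helper are hypothetical; only the use of the bool returned by
-`Event` and of the `mb.ReporterV2.Error` method comes from the API described
-above:
-
-[source,go]
-----
-func (m *MetricSet) Fetch(report mb.ReporterV2) error {
-    for _, entry := range m.entries {
-        value, err := parseEntry(entry) // parseEntry is a hypothetical helper
-        if err != nil {
-            // Publish the error for this entry and continue with the others.
-            report.Error(err)
-            continue
-        }
-
-        ok := report.Event(mb.Event{
-            MetricSetFields: common.MapStr{
-                "value": value,
-            },
-        })
-        if !ok {
-            // The metricset is closed; no further events can be processed.
-            return nil
-        }
-    }
-    return nil
-}
-----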
-
-[float]
-===== Parsing and Normalizing Fields
-
-In Metricbeat we aim to normalize the metric names from all metricsets to
-respect a common <<event-conventions,naming convention>>. This
-makes it easy for users to find and interpret metrics. To simplify parsing,
-converting, renaming, and restructuring of the object read from the monitored
-system to the Metricbeat format, we have created the
-https://godoc.org/github.com/elastic/beats/libbeat/common/schema[schema] package
-that allows you to declaratively define transformations.
-
-For example, assuming this input object:
-
-[source,go]
-----
-input := map[string]interface{}{
- "testString": "hello",
- "testInt": "42",
- "testBool": "true",
- "testFloat": "42.1",
- "testObjString": "hello, object",
-}
-----
-
-And the requirement to transform it into this one:
-
-[source,go]
-----
-common.MapStr{
- "test_string": "hello",
- "test_int": int64(42),
- "test_bool": true,
- "test_float": 42.1,
- "test_obj": common.MapStr{
- "test_obj_string": "hello, object",
- },
-}
-----
-
-You can use the schema package to transform the data, and optionally mark some fields in a schema as required or not. For example:
-
-[source,go]
-----
-import (
- s "github.com/elastic/beats/libbeat/common/schema"
- c "github.com/elastic/beats/libbeat/common/schema/mapstrstr"
-)
-
-var (
- schema = s.Schema{
- "test_string": c.Str("testString", s.Required), <1>
- "test_int": c.Int("testInt"), <2>
- "test_bool": c.Bool("testBool", s.Optional), <3>
- "test_float": c.Float("testFloat"),
- "test_obj": s.Object{
- "test_obj_string": c.Str("testObjString", s.IgnoreAllErrors), <4>
- },
- }
-)
-
-func eventMapping(input map[string]interface{}) common.MapStr {
- return schema.Apply(input) <5>
-}
-----
-<1> Marks a field as required.
-<2> If a field has no schema option set, it is equivalent to `Required`.
-<3> Marks the field as optional.
-<4> Ignore any value conversion error
-<5> By default, `Apply` will fail and return an error if any required field is missing. Using the optional second argument, you can specify how `Apply` handles different fields of the schema. The possible values are:
-- `AllRequired` is the default behavior. Returns an error if any required field is missing, including fields that are required because no schema option is set.
-- `FailOnRequired` will fail if a field explicitly marked as `required` is missing.
-- `NotFoundKeys(cb func([]string))` takes a callback function that will be called with a list of missing keys, allowing for finer-grained error handling.
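-
-For example, a sketch of passing one of these options as the optional second
-argument to `Apply`, reusing the `schema` and `input` defined above (the
-fallback handling shown here is illustrative):
-
-[source,go]
-----
-// Fail only when a field explicitly marked as required is missing.
-data, err := schema.Apply(input, s.FailOnRequired)
-if err != nil {
-    return err
-}
-
-// Alternatively, collect the keys that were not found and handle them yourself.
-data, _ = schema.Apply(input, s.NotFoundKeys(func(keys []string) {
-    fmt.Printf("keys not present in the input: %v\n", keys)
-}))
-----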
-
-In the above example, note that it is possible to create the schema object once
-and apply it to all events. You can also use `ApplyTo` to add additional data to an existing `MapStr` object:
-[source,go]
-----
-
-var (
- schema = s.Schema{
- "test_string": c.Str("testString"),
- "test_int": c.Int("testInt"),
- "test_bool": c.Bool("testBool"),
- "test_float": c.Float("testFloat"),
- "test_obj": s.Object{
- "test_obj_string": c.Str("testObjString"),
- },
- }
-
- additionalSchema = s.Schema{
- "second_string": c.Str("secondString"),
- "second_int": c.Int("secondInt"),
- }
-)
-
- data, err := schema.Apply(input)
- if err != nil {
- return err
- }
-
- if m.parseMoreData {
- _, err := additionalSchema.ApplyTo(data, input)
- if len(err) > 0 { <1>
- return err.Err()
- }
- }
-
-----
-<1> `ApplyTo` returns a raw MultiError object, making it suitable for finer-grained error handling.
-
-
-[float]
-==== Configuration File
-The configuration file for a metricset is handled by the module. If there are
-multiple metricsets in one module, make sure you add all metricsets to the configuration.
-For example:
-
-[source,yaml]
-----
-metricbeat:
- modules:
- - module: {module-name}
- metricsets: ["{metricset1}", "{metricset2}"]
-----
-
-NOTE: Make sure that you run `make collect` after updating the config file
-so that your changes are also applied to the global configuration file and the docs.
-
-For more details about the Metricbeat configuration file, see the topic about
-{metricbeat-ref}/configuration-metricbeat.html[Modules] in the Metricbeat
-documentation.
-
-
-[float]
-==== What to Do Next
-This topic provides basic steps for creating a metricset. For more details about metricsets
-and how to extend your metricset further, see <<metricset-details>>.
-
diff --git a/docs/devguide/create-module.asciidoc b/docs/devguide/create-module.asciidoc
deleted file mode 100644
index 002ec717364b..000000000000
--- a/docs/devguide/create-module.asciidoc
+++ /dev/null
@@ -1,185 +0,0 @@
-[[creating-metricbeat-module]]
-=== Creating a Metricbeat Module
-
-Metricbeat modules are used to group multiple metricsets together and to implement shared functionality
-of the metricsets. In most cases, no implementation of the module is needed and the default module
-implementation is automatically picked.
-
-It's important to complete the configuration and documentation files for a module. When you create a new
-metricset by running `make create-metricset`, default versions of these files are generated in the `_meta` directory.
-
-[float]
-==== Module Files
-
-* `config.yml` and `config.reference.yml`
-* `docs.asciidoc`
-* `fields.yml`
-
-After updating any of these files, make sure you run `make update` in your beat directory so all generated
-files are updated.
-
-
-[float]
-===== config.yml and config.reference.yml
-
-The `config.yml` file contains the basic configuration options and looks like this:
-
-[source,yaml]
-----
-include::../../metricbeat/scripts/module/config.yml[]
-----
-
-It contains the module name, your metricset, and the default period. If you have multiple
-metricsets in your module, make sure that you extend the metricset array:
-
-[source,yaml]
-----
- metricsets: ["{metricset1}", "{metricset2}"]
-----
-
-The `config.reference.yml` file is optional and by default has the same content as the `config.yml`. It is used
-to add and document more advanced configuration options that should not be part of the minimal
-config file shipped by default.
-
-[float]
-===== docs.asciidoc
-
-The `docs.asciidoc` file contains the documentation about your module. During generation of the
-documentation, the default config file will be appended to the docs. Use this file to describe your
-module in more detail and to document specific configuration options.
-
-[source,asciidoc]
-----
-include::../../metricbeat/scripts/module/docs.asciidoc[]
-----
-
-[float]
-===== fields.yml
-
-The `fields.yml` file contains the top level structure for the fields in your metricset. It's used in combination with
-the `fields.yml` file in each metricset to generate the template and documentation for the fields.
-
-The default file looks like this:
-
-[source,yaml]
-----
-include::../../metricbeat/scripts/module/fields.yml[]
-----
-
-Make sure that you update at least the description of the module.
-
-
-[float]
-==== Testing
-
-It's a common pattern to use a `testing.go` file in the module package to share some testing functionality among
-the metricsets. This file does not have `_test.go` in the name because otherwise it would not be compiled for sub packages.
-
-To see an example of the `testing.go` file, look at the https://github.com/elastic/beats/tree/{branch}/metricbeat/module/mysql[mysql module].
-
-[float]
-===== Test a Metricbeat module manually
-
-To test a Metricbeat module manually, follow the steps below.
-
-First we have to build the Docker image which is available for the modules. The Dockerfile is located inside a `_meta` folder within each module folder. As an example, let's take the MySQL module.
-
-These steps assume you have checked out the Beats repository from GitHub and are inside the `beats` directory. First, we have to enter the `_meta` folder mentioned above and build the Docker image called `metricbeat-mysql`:
-
-[source,bash]
-----
-$ cd metricbeat/module/mysql/_meta/
-$ docker build -t metricbeat-mysql .
-...
-Removing intermediate container 0e58cfb7b197
- ---> 9492074840ea
-Step 5/5 : COPY test.cnf /etc/mysql/conf.d/test.cnf
- ---> 002969e1d810
-Successfully built 002969e1d810
-Successfully tagged metricbeat-mysql:latest
-----
-
-Before we run the container we have just created, we also need to know which port to expose. The port is listed in the `metricbeat/module/{module}/_meta/env` file:
-
-[source,bash]
-----
-$ cat env
-MYSQL_DSN=root:test@tcp(mysql:3306)/
-MYSQL_HOST=mysql
-MYSQL_PORT=3306
-----
-
-As we see, the port is 3306. We now have all the information to start our MySQL service locally:
-
-[source,bash]
-----
-$ docker run -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret metricbeat-mysql
-----
-
-This starts the container and you can now use it for testing the MySQL module.
-
-To run Metricbeat with the module, we need to build the binary and enable the module first. These steps assume you are back in the `beats` directory:
-
-[source,bash]
-----
-$ cd metricbeat
-$ mage build
-$ ./metricbeat modules enable mysql
-----
-
-This will enable the module and rename the file `metricbeat/modules.d/mysql.yml.disabled` to `metricbeat/modules.d/mysql.yml`. According to our {metricbeat-ref}/metricbeat-module-mysql.html[documentation], we should specify a username and password to use MySQL. It's always a good idea to take a look at the docs, which also show that a pre-built dashboard is available. After tweaking the config a bit, this is how it looks:
-
-[source,yaml]
-----
-$ cat modules.d/mysql.yml
-
-# Module: mysql
-# Docs: https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-mysql.html
-
-- module: mysql
- metricsets:
- - status
- # - galera_status
- period: 10s
-
- # Host DSN should be defined as "user:pass@tcp(127.0.0.1:3306)/"
- # or "unix(/var/lib/mysql/mysql.sock)/",
- # or another DSN format supported by .
- # The username and password can either be set in the DSN or using the username
- # and password config options. Those specified in the DSN take precedence.
- hosts: ["tcp(127.0.0.1:3306)/"]
-
- # Username of hosts. Empty by default.
- username: root
-
- # Password of hosts. Empty by default.
- password: secret
-----
-
-It's now sending data to your local Elasticsearch instance. If you need to modify the mysql config, adjust `modules.d/mysql.yml` and restart Metricbeat.
-
-
-
-
-[float]
-===== Run Environment tests for one module
-
-All the environments are set up with Docker. `make integration-tests-environment` and `make system-tests-environment` can be used to run tests for all modules. When you are developing a module, it is convenient to run the tests for only that module and to run them directly on your machine.
-
-First, you need to start the environment for the module under test and expose its port to your local machine. To do this, run the following command inside the metricbeat directory:
-
-[source,bash]
-----
-MODULE=apache PORT=80 make run-module
-----
-
-Note: The apache module with port 80 is used here as an example. Replace the name and port with those of your own module.
-
-This will start the environment; wait until the service is completely started. After that, you can run the tests which require an environment:
-
-[source,bash]
-----
-MODULE=apache make test-module
-----
-
-This will run the integration and system tests connecting to the environment in your docker container.
diff --git a/docs/devguide/documentation.asciidoc b/docs/devguide/documentation.asciidoc
deleted file mode 100644
index 82e12a2721bb..000000000000
--- a/docs/devguide/documentation.asciidoc
+++ /dev/null
@@ -1,114 +0,0 @@
-[[contributing-docs]]
-=== Contributing to the docs
-
-The Beats documentation follows the tagging guidelines described in the
-https://github.com/elastic/docs/blob/master/README.asciidoc[Docs HOWTO]. However,
-it extends these capabilities in a couple of ways:
-
-* The documentation makes extensive use of
-https://docs.asciidoctor.org/asciidoc/latest/directives/conditionals/[AsciiDoc conditionals]
-to provide content that is reused across multiple books. This means that there
-might not be a single source file for each published HTML page. Some files are
-shared across multiple books, either as complete pages or snippets. For more
-details, refer to <<where-to-find-files>>.
-
-* The documentation includes some files that are generated from YAML source or
-pieced together from content that lives in `_meta` directories under the code
-(for example, the module and exported fields documentation). For more details,
-refer to <<generated-docs>>.
-
-[float]
-[[where-to-find-files]]
-==== Where to find the Beats docs source
-
-Because the Beats documentation makes use of shared content, doc generation
-scripts, and componentization, the source files are located in several places:
-
-|===
-| Documentation | Location of source files
-
-| Main docs for the Beat, including index files
-| `<beatname>/docs`
-
-| Shared docs and Beats Platform Reference
-| `libbeat/docs`
-
-| Processor docs
-| `docs` folders under processors in `libbeat/processors/`,
-`x-pack/<beatname>/processors/`, and `x-pack/libbeat/processors/`
-
-| Output docs
-| `docs` folders under outputs in `libbeat/outputs/`
-
-| Module docs
-| `_meta` folders under modules and datasets in `libbeat/module/`,
-`<beatname>/module/`, and `x-pack/<beatname>/module/`
-|===
-
-The https://github.com/elastic/docs/blob/master/conf.yaml[conf.yaml] file in the
-`docs` repo shows all the resources used to build each book. This file is used
-to drive the classic docs build and is the source of truth for file locations.
-
-TIP: If you can't find the source for a page you want to update, go to the
-published page at www.elastic.co and click the Edit link to navigate to the
-source.
-
-The Beats documentation build also has dependencies on the following files in
-the https://github.com/elastic/docs[docs] repo:
-
-* `shared/versions/stack/.asciidoc`
-* `shared/attributes.asciidoc`
-
-[float]
-[[generated-docs]]
-==== Generated docs
-
-After updating `docs.asciidoc` files in `_meta` directories, you must run the
-doc collector scripts to regenerate the docs.
-
-Make sure you
-<> and use
-the correct Go version. The Go version is listed in the `version.asciidoc` file
-for the branch you want to update.
-
-To run the docs collector scripts, change to the beats directory and run:
-
-`make update`
-
-WARNING: The `make update` command overwrites files in the `docs` directories
-**without warning**. If you accidentally update a generated file and run
-`make update`, your changes will be overwritten.
-
-To format your files, you might also need to run this command:
-
-`make fmt`
-
-The make command calls the following scripts to generate the docs:
-
-https://github.com/elastic/beats/blob/main/auditbeat/scripts/docs_collector.py[auditbeat/scripts/docs_collector.py]
-generates:
-
-* `auditbeat/docs/modules_list.asciidoc`
-* `auditbeat/docs/modules/*.asciidoc`
-
-https://github.com/elastic/beats/blob/main/filebeat/scripts/docs_collector.py[filebeat/scripts/docs_collector.py]
-generates:
-
-* `filebeat/docs/modules_list.asciidoc`
-* `filebeat/docs/modules/*.asciidoc`
-
-https://github.com/elastic/beats/blob/main/metricbeat/scripts/mage/docs_collector.go[metricbeat/scripts/mage/docs_collector.go]
-generates:
-
-* `metricbeat/docs/modules_list.asciidoc`
-* `metricbeat/docs/modules/*.asciidoc`
-
-https://github.com/elastic/beats/blob/main/libbeat/scripts/generate_fields_docs.py[libbeat/scripts/generate_fields_docs.py]
-generates:
-
-* `auditbeat/docs/fields.asciidoc`
-* `filebeat/docs/fields.asciidoc`
-* `heartbeat/docs/fields.asciidoc`
-* `metricbeat/docs/fields.asciidoc`
-* `packetbeat/docs/fields.asciidoc`
-* `winlogbeat/docs/fields.asciidoc`
diff --git a/docs/devguide/event-conventions.asciidoc b/docs/devguide/event-conventions.asciidoc
deleted file mode 100644
index 3d2c09513272..000000000000
--- a/docs/devguide/event-conventions.asciidoc
+++ /dev/null
@@ -1,75 +0,0 @@
-[[event-conventions]]
-=== Naming Conventions
-
-When creating events, use the following conventions for field names and abbreviations.
-
-[[field-names]]
-==== Field Names
-
-Use the following naming conventions for field names:
-
-- All fields must be lower case.
-- Use snake case (underscores) for combining words.
-- Group related fields into subdocuments by using dot (.) notation. Groups typically have common prefixes. For example, if you have fields called `CPULoad` and `CPUSystem` in a service, you would convert
-them into `cpu.load` and `cpu.system` in the event.
-- Avoid repeating the namespace in field names. If a word or abbreviation appears in the namespace, it's not needed in the field name. For example, instead of `cpu.cpu_load`, use `cpu.load`.
-- Use <<units>> when the metric matches one of the known units.
-- Use <<abbreviations,standardised names>> and avoid using abbreviations that aren't commonly known.
-- Organise the documents from general to specific to allow for namespacing. The type, such as `.pct`, should always be last. For example, `system.core.user.pct`.
-- If two fields are the same, but with different units, remove the less granular one. For example, include `timeout.sec`, but don't include `timeout.min`. If a less granular value is required, you can calculate it later.
-- If a field name matches the namespace used for nested fields, add `.value` to the field name. For example, instead of:
-+
-[source,yaml]
-----------
-workers
-workers.busy
-workers.idle
-----------
-+
-Use:
-+
-[source,yaml]
-----------
-workers.value
-workers.busy
-workers.idle
-----------
-- Do not use dots (.) in individual field names. Dots are reserved for grouping related fields into subdocuments.
-- Use singular and plural names properly to reflect the field content. For example, use `requests_per_sec` rather than `request_per_sec`.
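-
-As an illustration only (the metric names and values below are hypothetical), an
-event that follows these conventions might be structured like this:
-
-[source,yaml]
-----
-system:
-  cpu:
-    load: 1.42      # lower case, snake case, grouped under a common prefix
-    user:
-      pct: 0.12     # the type suffix (.pct) comes last
-----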
-
-[[units]]
-==== Units
-
-These are well-known suffixes that represent units of stored values. Use them as a dotted suffix when
-possible, for example `system.memory.used.bytes` or `system.diskio.read.count`:
-
-[options="header"]
-|=======================
-|Suffix |Units
-|count |item count
-|pct |percentage
-|day |days
-|sec |seconds
-|ms |milliseconds
-|us |microseconds
-|ns |nanoseconds
-|bytes |bytes
-|mb |megabytes
-|=======================
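-
-For example, hypothetical metrics that use these suffixes could be named as
-follows (the names and values are illustrative only):
-
-[source,yaml]
-----
-memory:
-  used:
-    bytes: 1073741824   # "bytes" suffix for a size in bytes
-uptime:
-  sec: 86400            # "sec" suffix for a duration in seconds
-----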
-
-
-[[abbreviations]]
-==== Standardised Names
-
-Here is a list of standardised names and units that are used across all Beats:
-
-[options="header"]
-|=======================
-|Use... |Instead of...
-|avg |average
-|connection |conn
-|max |maximum
-|min |minimum
-|request |req
-|msg |message
-|=======================
diff --git a/docs/devguide/faq.asciidoc b/docs/devguide/faq.asciidoc
deleted file mode 100644
index 2f37bf0553dd..000000000000
--- a/docs/devguide/faq.asciidoc
+++ /dev/null
@@ -1,21 +0,0 @@
-[[dev-faq]]
-=== Metricbeat Developer FAQ
-
-This is a list of common questions when creating a metricset and the potential answers.
-
-[float]
-==== Metricset is not compiled
-
-Is your Beat compiling, but the newly created metricset is not being compiled?
-
-Make sure that the path to your module and metricset are added as an import path either in your `main.go`
-file or your `include/list.go` file. You can do this manually or by running `make imports`.
-
-[float]
-==== Metricset is not started
-
-Is the metricset compiled, but not started when you start Metricbeat?
-
-After creating your metricset, make sure you run `make collect`. This command adds the configuration
-of your metricset to the default configuration. If the metricset still doesn't start, check your
-default configuration file to see if the metricset is listed there.
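-
-As a rough sketch of what to look for (the module and metricset names below are
-placeholders, not a real module), the entry added to the default configuration by
-`make collect` typically looks similar to this:
-
-[source,yaml]
-----
-metricbeat.modules:
-- module: mymodule
-  metricsets: ["mymetricset"]
-  enabled: true
-  period: 10s
-----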
diff --git a/docs/devguide/fields-yml.asciidoc b/docs/devguide/fields-yml.asciidoc
deleted file mode 100644
index 87197fc2fe91..000000000000
--- a/docs/devguide/fields-yml.asciidoc
+++ /dev/null
@@ -1,163 +0,0 @@
-[[event-fields-yml]]
-=== Defining field mappings
-
-You must define the fields used by your Beat, along with their mapping details,
-in `_meta/fields.yml`. After editing this file, run `make update`.
-
-Define the field mappings in the `fields` array:
-
-[source,yaml]
-----------------------------------------------------------------------
-- key: mybeat
-  title: mybeat
-  description: These are the fields used by mybeat.
-  fields:
-    - name: last_name <1>
-      type: keyword <2>
-      required: true <3>
-      description: > <4>
-        The last name.
-    - name: first_name
-      type: keyword
-      required: true
-      description: >
-        The first name.
-    - name: comment
-      type: text
-      required: false
-      description: >
-        Comment made by the user.
-----------------------------------------------------------------------
-
-<1> `name`: The field name
-<2> `type`: The field type. The value of `type` can be any datatype {ref}/mapping-types.html[available in {es}]. If no value is specified, the default type is `keyword`.
-<3> `required`: Whether or not a field value is required
-<4> `description`: Some information about the field contents
-
-==== Mapping parameters
-
-You can specify other mapping parameters for each field. See the
-{ref}/mapping-params.html[{es} Reference] for more details about each
-parameter.
-
-[horizontal]
-`format`:: Specify a custom date format used by the field.
-`multi_fields`:: For `text` or `keyword` fields, use `multi_fields` to define
-multi-field mappings.
-`enabled`:: Whether or not the field is enabled.
-`analyzer`:: Which analyzer to use when indexing.
-`search_analyzer`:: Which analyzer to use when searching.
-`norms`:: Applies to `text` and `keyword` fields. Default is `false`.
-`dynamic`:: Dynamic field control. Can be one of `true` (default), `false`, or
-`strict`.
-`index`:: Whether or not the field should be indexed.
-`doc_values`:: Whether or not the field should have doc values generated.
-`copy_to`:: Which field to copy the field value into.
-`ignore_above`:: {es} ignores (does not index) strings that are longer than the
-specified value. When this property value is missing or `0`, the `libbeat`
-default value of `1024` characters is used. If the value is `-1`, the {es}
-default value is used.
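-
-As an illustration (the field names below are hypothetical), several of these
-parameters can be combined in a single field definition:
-
-[source,yaml]
-----
-- key: mybeat
-  title: mybeat
-  description: These are the fields used by mybeat.
-  fields:
-    - name: created
-      type: date
-      format: "yyyy-MM-dd HH:mm:ss"
-      description: >
-        Creation time of the record.
-    - name: agent
-      type: keyword
-      ignore_above: 256
-      description: >
-        Raw user agent string reported by the client.
-----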
-
-For example, you can use the `copy_to` mapping parameter to copy the
-`last_name` and `first_name` fields into the `full_name` field at index time:
-
-[source,yaml]
-----------------------------------------------------------------------
-- key: mybeat
-  title: mybeat
-  description: These are the fields used by mybeat.
-  fields:
-    - name: last_name
-      type: text
-      required: true
-      copy_to: full_name <1>
-      description: >
-        The last name.
-    - name: first_name
-      type: text
-      required: true
-      copy_to: full_name <2>
-      description: >
-        The first name.
-    - name: full_name
-      type: text
-      required: false
-      description: >
-        The last_name and first_name combined into one field for easy searchability.
-----------------------------------------------------------------------
-<1> Copy the value of `last_name` into `full_name`
-<2> Copy the value of `first_name` into `full_name`
-
-There are also some {kib}-specific properties, not detailed here. These are:
-`analyzed`, `count`, `searchable`, `aggregatable`, and `script`. {kib}
-parameters can also be described using `pattern`, `input_format`,
-`output_format`, `output_precision`, `label_template`, `url_template`, and
-`open_link_in_current_tab`.
-
-==== Defining text multi-fields
-
-There are various options that you can apply when using text fields. You can
-define a simple text field using the default analyzer without any other options,
-as in the example shown earlier.
-
-To keep the original keyword value when using `text` mappings, for instance to
-use in aggregations or ordering, you can use a multi-field mapping:
-
-[source,yaml]
-----------------------------------------------------------------------
-- key: mybeat
-  title: mybeat
-  description: These are the fields used by mybeat.
-  fields:
-    - name: city
-      type: text
-      multi_fields: <1>
-        - name: keyword <2>
-          type: keyword <3>
-----------------------------------------------------------------------
-<1> `multi_fields`: Define the `multi_fields` mapping parameter.
-<2> `name`: This is a conventional name for a multi-field. It can be anything (`raw` is another common option) but the convention is to use `keyword`.
-<3> `type`: Specify the `keyword` type to use the field in aggregations or to order documents.
-
-For more information, see the {ref}/multi-fields.html[{es} documentation about
-multi-fields].
-
-==== Defining a text analyzer in-line
-
-It is possible to define a new text analyzer or search analyzer in-line with
-the field definition in the field's mapping parameters.
-
-For example, you can define a new text analyzer that does not break hyphenated names:
-
-[source,yaml]
-----------------------------------------------------------------------
-- key: mybeat
-  title: mybeat
-  description: These are the fields used by mybeat.
-  fields:
-    - name: last_name
-      type: text
-      required: true
-      description: >
-        The last name.
-      analyzer:
-        mybeat_hyphenated_name: <1>
-          type: pattern <2>
-          pattern: "[\\W&&[^-]]+" <3>
-      search_analyzer:
-        mybeat_hyphenated_name: <4>
-          type: pattern
-          pattern: "[\\W&&[^-]]+"
-----------------------------------------------------------------------
-<1> Use a newly defined text analyzer
-<2> Define the custom analyzer type
-<3> Specify the analyzer behaviour
-<4> Use the same analyzer for the search
-
-The name of a custom analyzer that is defined in-line may not be reused for a different
-text analyzer. If a text analyzer name is reused, it is checked against existing
-instances of the analyzer for a match. It is recommended that you prefix the analyzer name with the
-Beat name to avoid name clashes.
-
-For more information, see {ref}/analysis-custom-analyzer.html[{es} documentation about
-defining custom text analyzers].
diff --git a/docs/devguide/generator-support-note.asciidoc b/docs/devguide/generator-support-note.asciidoc
deleted file mode 100644
index 25579798ed28..000000000000
--- a/docs/devguide/generator-support-note.asciidoc
+++ /dev/null
@@ -1,13 +0,0 @@
-// tag::metricset-generator[]
-IMPORTANT: Elastic provides no warranty or support for the code used to generate
-metricsets. The generator is mainly offered as guidance for developers who want
-to create their own data shippers.
-
-// end::metricset-generator[]
-
-// tag::filebeat-generator[]
-IMPORTANT: Elastic provides no warranty or support for the code used to generate
-modules and filesets. The generator is mainly offered as guidance for developers
-who want to create their own data shippers.
-
-// end::filebeat-generator[]
\ No newline at end of file
diff --git a/docs/devguide/images/beat_overview.png b/docs/devguide/images/beat_overview.png
deleted file mode 100644
index 55621249ec6a..000000000000
Binary files a/docs/devguide/images/beat_overview.png and /dev/null differ
diff --git a/docs/devguide/index.asciidoc b/docs/devguide/index.asciidoc
deleted file mode 100644
index 3f554ee45540..000000000000
--- a/docs/devguide/index.asciidoc
+++ /dev/null
@@ -1,42 +0,0 @@
-[[beats-reference]]
-= Beats Developer Guide
-
-:libbeat-dir: {docdir}/../../libbeat/docs
-
-include::{libbeat-dir}/version.asciidoc[]
-
-include::{asciidoc-dir}/../../shared/versions/stack/{source_branch}.asciidoc[]
-
-:dev-guide: true
-:beatname_lc: beatname
-:beatname_uc: a Beat
-
-include::{asciidoc-dir}/../../shared/attributes.asciidoc[]
-
-include::{libbeat-dir}/shared-beats-attributes.asciidoc[]
-
-include::./pull-request-guidelines.asciidoc[]
-
-include::./contributing.asciidoc[]
-
-include::./documentation.asciidoc[]
-
-include::./testing.asciidoc[]
-
-include::{libbeat-dir}/communitybeats.asciidoc[]
-
-include::./fields-yml.asciidoc[]
-
-include::./event-conventions.asciidoc[]
-
-include::./python.asciidoc[]
-
-include::./newdashboards.asciidoc[]
-
-include::./new_protocol.asciidoc[]
-
-include::./metricbeat-devguide.asciidoc[]
-
-include::./modules-dev-guide.asciidoc[]
-
-include::./migrate-dashboards.asciidoc[]
diff --git a/docs/devguide/metricbeat-devguide.asciidoc b/docs/devguide/metricbeat-devguide.asciidoc
deleted file mode 100644
index 265bef2b8dd6..000000000000
--- a/docs/devguide/metricbeat-devguide.asciidoc
+++ /dev/null
@@ -1,61 +0,0 @@
-
-[[metricbeat-developer-guide]]
-== Extending Metricbeat
-
-Metricbeat periodically interrogates other services to fetch key metrics
-information. As a developer, you can use Metricbeat in two different ways:
-
-* Extend Metricbeat directly
-* Create your own Beat and use Metricbeat as a library
-
-We recommend that you start by creating your own Beat to keep the development of your own module or metricset
-independent of Metricbeat. At a later stage, if you decide to add a module to Metricbeat, you can reuse
-the code without making additional changes.
-
-The following topics describe how to contribute to Metricbeat by adding metricsets, modules, and new Beats based on Metricbeat:
-
-* <>
-* <>
-* <>
-* <>
-* <>
-
-If you would like to contribute to Metricbeat or the Beats project, also see
-<>.
-
-[[metricbeat-dev-overview]]
-=== Overview
-
-Metricbeat consists of modules and metricsets. A Metricbeat module is typically
-named after the service the metrics are fetched from, such as redis,
-mysql, and so on. Each module can contain multiple metricsets. A metricset represents
-multiple metrics that are normally retrieved with one request from the remote
-system. For example, the Redis `info` metricset retrieves info that you get when you
-run the Redis `INFO` command, and the MySQL `status` metricset retrieves
-info that you get when you issue the MySQL `SHOW GLOBAL STATUS` query.
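-
-For example, enabling a module with two of its metricsets in `metricbeat.yml`
-looks like this (a minimal sketch; the host and period values are illustrative):
-
-[source,yaml]
-----
-metricbeat.modules:
-- module: redis
-  metricsets: ["info", "keyspace"]
-  period: 10s
-  hosts: ["127.0.0.1:6379"]
-----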
-
-[float]
-==== Module and Metricsets Requirements
-
-To guarantee the best user experience, it's important to us that only high quality
-modules are part of Metricbeat. The modules and metricsets that are contributed
-must meet the following requirements:
-
-* Complete `fields.yml` file to generate docs and Elasticsearch templates
-* Documentation files
-* Integration tests
-* 80% test coverage (unit, integration, and system tests combined)
-
-Metricbeat allows you to build a wide variety of modules and metricsets on top of it.
-For a module to be accepted, it should focus on fetching service metrics
-directly from the service itself and not via a third-party tool. The goal is to
-have as few movable parts as possible and for Metricbeat to run as close as
-possible to the service that it needs to monitor.
-
-include::./create-metricset.asciidoc[]
-
-include::./metricset-details.asciidoc[]
-
-include::./create-module.asciidoc[]
-
-include::./faq.asciidoc[]
diff --git a/docs/devguide/metricset-details.asciidoc b/docs/devguide/metricset-details.asciidoc
deleted file mode 100644
index acb7209d0e82..000000000000
--- a/docs/devguide/metricset-details.asciidoc
+++ /dev/null
@@ -1,326 +0,0 @@
-[[metricset-details]]
-=== Metricset Details
-
-This topic provides additional details about creating metricsets.
-
-[float]
-=== Adding Special Configuration Options
-
-Each metricset can have its own configuration variables defined. To make use of
-these variables, you must extend the `New` method. For example, let's assume that
-you want to add a `password` config option to the metricset. You would extend
-`beat.yml` in the following way:
-
-[source,yaml]
-----
-metricbeat.modules:
-- module: {module}
- metricsets: ["{metricset}"]
- password: "test1234"
-----
-
-To read in the new `password` config option, you need to modify the `New` method. First you define a config
-struct that contains the value types to be read. You can set default values, as needed. Then you pass the config to
-the `UnpackConfig` method for loading the configuration.
-
-Your implementation should look something like this:
-
-[source,go]
-----
-type MetricSet struct {
-    mb.BaseMetricSet
-    password string
-}
-
-func New(base mb.BaseMetricSet) (mb.MetricSet, error) {
-
-    // Unpack additional configuration options.
-    config := struct {
-        Password string `config:"password"`
-    }{
-        Password: "",
-    }
-    err := base.Module().UnpackConfig(&config)
-    if err != nil {
-        return nil, err
-    }
-
-    return &MetricSet{
-        BaseMetricSet: base,
-        password:      config.Password,
-    }, nil
-}
-----
-
-
-[float]
-==== Timeout Connections to Services
-
-Each time the `Fetch` method is called, it makes a request to the service, so it's
-important to handle the connections correctly. We recommend that you set up the
-connections in the `New` method and persist them in the `MetricSet` object. This allows
-connections to be reused.
-
-One very important point is that connections must respect the timeout variable:
-`base.Module().Config().Timeout`. If the timeout elapses before the request completes,
-the request must be ended and an error must be returned to make sure the next request
-can be started on time. By default the Timeout is set to Period, so one request gets
-ended before a new request is made.
-
-If a request must be ended or has an error, make sure that you return a useful error
-message. This error message is also sent to Elasticsearch, making it possible to not
-only fetch metrics from the service, but also report potential problems or errors with
-the metricset.
-
-
-[float]
-==== Data Transformation
-
-If the data transformation that has to happen in the `Fetch` method is
-extensive, we recommend that you create a second file called `data.go` in the same package
-as the metricset. The `data.go` file should contain a function called `eventMapping(...)`.
-A separate file is not required, but is currently a best practice because it isolates the
-functionality of the metricset and `Fetch` method from the data mapping.
-
-
-
-[float]
-==== fields.yml
-
-You can find up to three different types of files named `fields.yml` in the Beats repository for each Metricbeat module:
-
-* `metricbeat/fields.yml`: Contains the definitions to create the Elasticsearch template, the Kibana index pattern configuration and the exported fields documentation for metricsets. To make sure the Elasticsearch template is correct, it's important to keep this file up-to-date with all the changes. Generally, you shouldn't touch this file manually because it's generated by some commands in the build environment.
-* `metricbeat/module/{module}/_meta/fields.yml`: Contains the general top level structure for all metricsets in a module.
-Normally you only need to modify the description in this file. Here is an example for the `fields.yml` file from the MySQL module.
-+
-[source,yaml]
-----
-include::../../metricbeat/module/mysql/_meta/fields.yml[]
-----
-+
-* `metricbeat/module/{module}/{metricset}/_meta/fields.yml`: Contains all field definitions retrieved by the metricset.
-As field types, each field must have a core data type
-{ref}/mapping-types.html#_core_datatypes[supported by elasticsearch]. Here's a very basic example that shows one group from the MySQL `status` metricset:
-+
-[source,yaml]
-----
-- name: status
-  type: group
-  description: >
-    `status` contains the metrics that were obtained by the status SQL query.
-  fields:
-    - name: aborted
-      type: group
-      description: Aborted status fields.
-      fields:
-        - name: clients
-          type: integer
-          description: >
-            The number of connections that were aborted because the client died without closing the connection properly.
-
-        - name: connects
-          type: integer
-          description: >
-            The number of failed attempts to connect to the MySQL server.
-----
-+
-
-// TODO: Add link to general fields.yml developer guide
-
-[float]
-==== Testing
-
-It's important to also add tests for your metricset. There are three different types of tests that you need for testing a Beat:
-
-* unit tests
-* integration tests
-* system tests
-
-We recommend that you use all three when you create a metricset. Unit tests are
-written in Go and have no dependencies. Integration tests are also written
-in Go but require the service from which the module collects metrics to also be running.
-System tests for Metricbeat also require the service to be running in most cases and are
-written in Python {python_major_version} based on our small Python test framework.
-We use https://docs.python.org/3/library/venv.html[venv] to deal with Python dependencies.
-You can simply run the command `make python-env` and then `. build/python-env/bin/activate`.
-
-You should use a combination of the three test types to test your metricsets because
-each method has advantages and disadvantages. To get started with your own tests, it's best
-to look at the existing tests. You'll find the unit and integration tests
-in the `_test.go` files under existing modules and metricsets.
-Integration tests usually take the form of `TestFetch` and `TestData`.
-The system tests are under `tests/systems`.
-
-
-[float]
-===== Adding a Test Environment
-
-Integration and system tests need an environment that's running the service. You
-can create this environment by using Docker and a docker-compose file. If you add a
-module that requires a service, you must add the service to the virtual environment.
-To do this, you:
-
-* Update the `docker-compose.yml` file with your environment
-* Update the `docker-entrypoint.sh` script
-
-The `docker-compose.yml` file is at the root of Metricbeat. Most services have
-existing Docker images and can be added as simply as Redis:
-
-[source,yaml]
-----
-redis:
-  image: redis:3.2.3
-----
-
-To allow the Beat to access your service, make sure that you define the environment
-variables in the docker-compose file and add the link to the container:
-
-[source,yaml]
-----
-beat:
-  links:
-    - redis
-  environment:
-    - REDIS_HOST=redis
-    - REDIS_PORT=6379
-----
-
-To make sure the service is running before the tests are started, modify the
-`docker-entrypoint.sh` script to add a check that verifies your service is
-running. For example, the check for Redis looks like this:
-
-[source,shell]
-----
-waitFor ${REDIS_HOST} ${REDIS_PORT} Redis
-----
-
-The environment expects your service to be available as soon as it receives a response from
-the given address and port.
-
-[float]
-===== Adding the standard metricset integration tests
-
-There are normally two integration tests that are part of every metricset: `TestFetch` and `TestData`.
-Both tests will start up a new instance of your metricset and fetch an event. In order to start a metricset, you need to create a configuration object:
-
-[source,go]
-----
-func getConfig() map[string]interface{} {
-    return map[string]interface{}{
-        "module":     "{module}",
-        "metricsets": []string{"{metricset}"},
-        "hosts":      []string{GetEnvHost() + ":" + GetEnvPort()}, <1>
-    }
-}
-
-func GetEnvHost() string { <2>
-    host := os.Getenv("{module}_HOST")
-    if len(host) == 0 {
-        host = "127.0.0.1"
-    }
-    return host
-}
-
-func GetEnvPort() string { <2>
-    port := os.Getenv("{module}_PORT")
-
-    if len(port) == 0 {
-        port = "1234"
-    }
-    return port
-}
-
-----
-<1> Add any additional config options your metricset needs here.
-<2> The endpoint used by the metricset needs to be configurable for manual and automated testing.
-Environment variables should be defined in the module under `_meta/env` and included in the `docker-compose.yml` file.
-
-The `TestFetch` integration test will return a single event from your metricset, which you can use to test the validity of the data.
-`TestData` will (re)generate the `_meta/data.json` file that documents the data reported by the metricset.
-
-[source,go]
-----
-import (
-    "os"
-    "testing"
-
-    "github.com/stretchr/testify/assert"
-
-    "github.com/elastic/beats/libbeat/tests/compose"
-    mbtest "github.com/elastic/beats/metricbeat/mb/testing"
-)
-
-func TestFetch(t *testing.T) {
-    compose.EnsureUp(t, "{module}") <1>
-
-    f := mbtest.NewReportingMetricSetV2Error(t, getConfig())
-
-    events, errs := mbtest.ReportingFetchV2Error(f)
-    if len(errs) > 0 {
-        t.Fatalf("Expected 0 errors, had %d. %v\n", len(errs), errs)
-    }
-
-    assert.NotEmpty(t, events) <2>
-}
-
-func TestData(t *testing.T) {
-    f := mbtest.NewReportingMetricSetV2Error(t, getConfig())
-
-    err := mbtest.WriteEventsReporterV2Error(f, t, "") <3>
-    if !assert.NoError(t, err) {
-        t.FailNow()
-    }
-}
-----
-<1> Use this to start the docker service associated with your metricset.
-<2> Add any further validity checks to verify the metricset is working.
-<3> `WriteEventsReporterV2Error` will take the first valid event from the metricset and write it to `_meta/data.json`
-
-[float]
-===== Running the Tests
-
-To run all the tests, run `make testsuite`. To only run unit tests, run
-`mage unitTest`, or for integration tests `mage integTest`.
-Be aware that a running Docker environment is needed for integration and system
-tests.
-
-To run `TestData` and generate the `data.json` file, run
-`go test -tags=integration -data -run TestData` in the directory where your test is located.
-
-To run the integration tests for a single module, set the `MODULE` environment
-variable to the name of the module's directory. For example, you can run the
-following command to run the integration tests for the `apache` module:
-
-[source,shell]
-----
-MODULE=apache mage integTest
-----
-
-
-[float]
-=== Documentation
-
-Each module must be documented. The documentation is based on asciidoc and is in
-the file `module/{module}/_meta/docs.asciidoc` for the module and in `module/{module}/{metricset}/_meta/docs.asciidoc`
- for the metricset. Basic documentation with the config file and an example output is automatically
- generated. Use these files to document specific configuration options or usage examples.
-
-
-
-
-////
-TODO: The following parts should be added as soon as the content exists or the implementation is completed.
-
-[float]
-== Field naming
-https://github.com/elastic/beats/blob/main/metricbeat/module/doc.go
-
-[float]
-== Dashboards
-
-Dashboards are an important part of each metricset. Data gets much more useful
-when visualized. To create dashboards for the metricset, follow the guide here
-(link to dashboard guide).
-////
diff --git a/docs/devguide/migrate-dashboards.asciidoc b/docs/devguide/migrate-dashboards.asciidoc
deleted file mode 100644
index 453b065c90e2..000000000000
--- a/docs/devguide/migrate-dashboards.asciidoc
+++ /dev/null
@@ -1,98 +0,0 @@
-== Migrating dashboards from Kibana 5.x to 6.x
-
-This section is useful for community Beats developers who need to migrate their Kibana 5.x dashboards to 6.x.
-
-In Kibana 5.x, the saved dashboards consist of multiple JSON files, one for each dashboard, search, visualization
-and index-pattern. To import a dashboard in Kibana, you need to load not only the JSON file containing the dashboard, but
-also all its dependencies (searches, visualizations).
-
-Starting with Kibana 6.0, the dashboards are loaded by default via the Kibana API. In this case, the saved dashboard
-consists of a single JSON file that includes not only the dashboard content, but also all its dependencies.
-
-As the format of the dashboards and index-pattern for Kibana 5.x is different from the ones for Kibana 6.x, they are placed in different
-directories. Depending on the Kibana version, the 5.x or 6.x dashboards are loaded.
-
-The Kibana 5.x dashboards are placed under the 5.x directory that contains the following directories:
-- search
-- visualization
-- dashboard
-- index-pattern
-
-The Kibana 6.x dashboards and later are placed under the default directory that contains the following directories:
-- dashboard
-- index-pattern
-
-NOTE: Please make sure the `5.x` and `default` directories are created before running the following commands.
-
-To migrate your Kibana 5.x dashboards to Kibana 6.0 and above, you can import the dashboards into Kibana 5.6 and then
-export them using Beats 6.0 version.
-
-* Start Kibana 5.6
-* Import Kibana 5.x dashboards using Beats 6.0 version.
-
-Before importing the dashboards, make sure you run `make update` in the Beat directory, which updates the `_meta/kibana` directory. It generates the index-pattern from
-the `fields.yml` file and places it under the `5.x/index-pattern` and `default/index-pattern` directories. In the case of Metricbeat, Filebeat, and Auditbeat,
-it also collects the dashboards from all the modules into the `_meta/kibana` directory.
-
-[source,shell]
------------------
-make update
------------------
-
-Then load all the Beat's dashboards. For example, to load the Metricbeat rabbitmq dashboards together with the Metricbeat index-pattern into Kibana 5.6,
-using the Kibana API:
-
-[source,shell]
------------------
-make update
-./metricbeat setup -E setup.dashboards.directory=_meta/kibana
------------------
-
-* Export the dashboards using Beats 6.0 version.
-
-You can export the dashboards via the Kibana API by using the
-https://github.com/elastic/beats/blob/main/dev-tools/cmd/dashboards/export_dashboards.go[export_dashboards.go]
-application.
-
-For example, to export the Metricbeat rabbitmq dashboard:
-
-[source,shell]
------------------
-cd beats/metricbeat
-go run ../dev-tools/cmd/dashboards/export_dashboards.go -dashboards Metricbeat-Rabbitmq -output
-module/rabbitmq/_meta/kibana/default/Metricbeat-Rabbitmq.json <1>
------------------
-<1> `Metricbeat-Rabbitmq` is the ID of the dashboard that you want to export.
-
-Note: You can get the dashboard ID from the URL of the dashboard in Kibana. Depending on the Kibana version the
-dashboard was created with, the ID consists of a name or of random characters that can be separated by `-`.
-
-This command creates a single JSON file (`Metricbeat-Rabbitmq.json`) that contains the dashboard and all its dependencies, such as searches and
-visualizations. The name of the output file has the format: -.json.
-
-Starting with Beats 6.0.0, you can create a `yml` file for each module, or for the entire Beat, that lists all the dashboards.
-Below is an example of the `module.yml` file for the system module in Metricbeat.
-
-[source,yaml]
-----------------
-dashboards:
-  - id: Metricbeat-system-overview <1>
-    file: Metricbeat-system-overview.json <2>
-
-  - id: 79ffd6e0-faa0-11e6-947f-177f697178b8
-    file: Metricbeat-host-overview.json
-
-  - id: CPU-slash-Memory-per-container
-    file: Metricbeat-docker-overview.json
-----------------
-<1> Dashboard ID.
-<2> The JSON file where the dashboard is saved on disk.
-
-Using the yml file, you can export all the dashboards for a single module or for the entire Beat using a single command:
-
-[source,shell]
-----
-cd metricbeat/module/system
-go run ../../../dev-tools/cmd/dashboards/export_dashboards.go -yml module.yml
-----
-
diff --git a/docs/devguide/modules-dev-guide.asciidoc b/docs/devguide/modules-dev-guide.asciidoc
deleted file mode 100644
index 7e5178cd651c..000000000000
--- a/docs/devguide/modules-dev-guide.asciidoc
+++ /dev/null
@@ -1,530 +0,0 @@
-[[filebeat-modules-devguide]]
-== Creating a New Filebeat Module
-
-include::generator-support-note.asciidoc[tag=filebeat-generator]
-
-This guide will walk you through creating a new Filebeat module.
-
-All Filebeat modules currently live in the main
-https://github.com/elastic/beats[Beats] repository. To clone the repository and
-build Filebeat (which you will need for testing), please follow the general
-instructions in <>.
-
-[float]
-=== Overview
-
-Each Filebeat module is composed of one or more "filesets". We usually create a
-module for each service that we support (`nginx` for Nginx, `mysql` for Mysql,
-and so on) and a fileset for each type of log that the service creates. For
-example, the Nginx module has `access` and `error` filesets. You can contribute
-a new module (with at least one fileset), or a new fileset for an existing
-module.
-
-NOTE: In this guide we use `{module}` and `{fileset}` as placeholders for the
-module and fileset names. You need to replace these with the actual names you
-entered when you created the module and fileset. Only use characters `[a-z]` and, if required, underscores (`_`). No other characters are allowed.
-
-[float]
-=== Creating a new module
-
-Run the following command in the `filebeat` folder:
-
-[source,bash]
-----
-make create-module MODULE={module}
-----
-
-After running the `make create-module` command, you'll find the module,
-along with its generated files, under `module/{module}`. This
-directory contains the following files:
-
-[source,bash]
-----
-module/{module}
-├── module.yml
-└── _meta
-    └── docs.asciidoc
-    └── fields.yml
-    └── kibana
-----
-
-Let's look at these files one by one.
-
-[float]
-==== module.yml
-
-This file contains a list of all the dashboards available for the module and is used by the `export_dashboards.go` script to export dashboards.
-Each dashboard is defined by an ID and the name of the JSON file where the dashboard is saved locally.
-When a new fileset is generated, this file is automatically updated with "default" dashboard settings for the new fileset.
-Please ensure that these settings are correct.
-
-[float]
-==== _meta/docs.asciidoc
-
-This file contains module-specific documentation. You should include information
-about which versions of the service were tested and the variables that are
-defined in each fileset.
-
-[float]
-==== _meta/fields.yml
-
-The module level `fields.yml` contains descriptions for the module-level fields.
-Please review and update the title and the descriptions in this file. The title
-is used as a title in the docs, so it's best to capitalize it.
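-
-As a rough sketch only (the names are placeholders), a module-level `fields.yml`
-typically looks similar to this:
-
-[source,yaml]
-----
-- key: {module}
-  title: "My Module"
-  description: >
-    Module for parsing {module} log files.
-  fields:
-    - name: {module}
-      type: group
-      description: >
-        Fields from the {module} log files.
-----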
-
-[float]
-==== _meta/kibana
-
-This folder contains the sample Kibana dashboards for this module. To create
-them, you can build them visually in Kibana and then export them with `export_dashboards`.
-
-The tool will export all of the dashboard dependencies (visualizations,
-saved searches) automatically.
-
-You can see various ways of using `export_dashboards` at <>.
-The recommended way to export them is to list your dashboards in your module's
-`module.yml` file:
-
-[source,yaml]
-----
-dashboards:
-- id: 69f5ae20-eb02-11e7-8f04-beef1daadb05
-  file: mymodule-overview.json
-- id: c0a7ce90-cafe-4242-8647-534bb4c21040
-  file: mymodule-errors.json
-----
-
-Then run `export_dashboards` like this:
-
-[source,shell]
-----
-$ cd dev-tools/cmd/dashboards
-$ make # if export_dashboard is not built yet
-$ ./export_dashboards --yml '../../../filebeat/module/{module}/module.yml'
-----
-
-New Filebeat modules might not be compatible with Kibana 5.x. To export dashboards
-that are compatible with 5.x, run the following command inside the developer
-virtual environment:
-
-[source,shell]
-----
-$ cd filebeat
-$ make python-env
-$ cd module/{module}/
-$ python ../../../dev-tools/export_5x_dashboards.py --regex {module} --dir _meta/kibana/5.x
-----
-
-Where the `--regex` parameter should match the dashboard you want to export.
-
-Please note that dashboards exported from Kibana 5.x are not compatible with Kibana 6.x.
-
-You can find more details about the process of creating and exporting the Kibana
-dashboards by reading {beatsdevguide}/new-dashboards.html[this guide].
-
-[float]
-=== Creating a new fileset
-
-Run the following command in the `filebeat` folder:
-
-[source,bash]
-----
-make create-fileset MODULE={module} FILESET={fileset}
-----
-
-After running the `make create-fileset` command, you'll find the fileset,
-along with its generated files, under `module/{module}/{fileset}`. This
-directory contains the following files:
-
-[source,bash]
-----
-module/{module}/{fileset}
-├── manifest.yml
-├── config
-│   └── {fileset}.yml
-├── ingest
-│   └── pipeline.json
-├── _meta
-│   └── fields.yml
-│   └── kibana
-│       └── default
-└── test
-----
-
-Let's look at these files one by one.
-
-[float]
-==== manifest.yml
-
-The `manifest.yml` is the control file for the module, where variables are
-defined and the other files are referenced. It is a YAML file, but in many
-places in the file, you can use built-in or defined variables by using the
-`{{.variable}}` syntax.
-
-The `var` section of the file defines the fileset variables and their default
-values. The module variables can be referenced in other configuration files,
-and their value can be overridden at runtime by the Filebeat configuration.
-
-As the fileset creator, you can use any names for the variables you define. Each
-variable must have a default value. So in its simplest form, this is how you
-can define a new variable:
-
-[source,yaml]
-----
-var:
-  - name: pipeline
-    default: with_plugins
-----
-
-Most filesets should have a `paths` variable defined, which sets the default
-paths where the log files are located:
-
-[source,yaml]
-----
-var:
-  - name: paths
-    default:
-      - /example/test.log*
-    os.darwin:
-      - /usr/local/example/test.log*
-      - /example/test.log*
-    os.windows:
-      - c:/programdata/example/logs/test.log*
-----
-
-There's quite a lot going on in this file, so let's break it down:
-
-* The name of the variable is `paths` and the default value is an array with one
- element: `"/example/test.log*"`.
-* Note that variable values don't have to be strings.
- They can be also numbers, objects, or as shown in this example, arrays.
-* We will use the `paths` variable to set the input `paths`
- setting, so "glob" values can be used here.
-* Besides the `default` value, the file defines values for particular
- operating systems: a default for darwin/OS X/macOS systems and a default for
- Windows systems. These are introduced via the `os.darwin` and `os.windows`
- keywords. The values under these keys become the default for the variable, if
- Filebeat is executed on the respective OS.
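-
-For example, a user can override the `paths` variable at runtime from the
-Filebeat configuration like this (a sketch; the path is illustrative):
-
-[source,yaml]
-----
-filebeat.modules:
-- module: {module}
-  {fileset}:
-    enabled: true
-    var.paths: ["/custom/path/test.log*"]
-----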
-
-Besides the variable definition, the `manifest.yml` file also contains
-references to the ingest pipeline and input configuration to use (see next
-sections):
-
-[source,yaml]
-----
-ingest_pipeline: ingest/pipeline.json
-input: config/testfileset.yml
-----
-
-These should point to the respective files from the fileset.
-
-Note that when evaluating the contents of these files, the variables are
-expanded, which enables you to select one file or the other depending on the
-value of a variable. For example:
-
-[source,yaml]
-----
-ingest_pipeline: ingest/{{.pipeline}}.json
-----
-
-This example selects the ingest pipeline file based on the value of the
-`pipeline` variable. For the `pipeline` variable shown earlier, the path would
-resolve to `ingest/with_plugins.json` (assuming the variable value isn't
-overridden at runtime.)
-
-In 6.6 and later, you can specify multiple ingest pipelines.
-
-[source,yaml]
-----
-ingest_pipeline:
- - ingest/main.json
- - ingest/plain_logs.json
- - ingest/json_logs.json
-----
-
-When multiple ingest pipelines are specified the first one in the list is
-considered to be the entry point pipeline.
-
-One reason for using multiple pipelines might be to send all logs harvested
-by this fileset to the entry point pipeline and have it delegate different parts of
-the processing to other pipelines. You can read details about setting
-this up in <>.
-
-[float]
-==== config/*.yml
-
-The `config/` folder contains template files that generate Filebeat input
-configurations. The Filebeat inputs are primarily responsible for tailing
-files, filtering, and multi-line stitching, so that's what you configure in the
-template files.
-
-A typical example looks like this:
-
-[source,yaml]
-----
-type: log
-paths:
-{{ range $i, $path := .paths }}
- - {{$path}}
-{{ end }}
-exclude_files: [".gz$"]
-----
-
-You'll find this example in the template file that gets generated automatically
-when you run `make create-fileset`. In this example, the `paths` variable is
-used to construct the `paths` list for the input `paths` option.
-
-Any template files that you add to the `config/` folder need to generate a valid
-Filebeat input configuration in YAML format. The options accepted by the
-input configuration are documented in the
-{filebeat-ref}/configuration-filebeat-options.html[Filebeat Inputs] section of
-the Filebeat documentation.
-
-The template files use the templating language defined by the
-https://golang.org/pkg/text/template/[Go standard library].
-
-Here is another example that also configures multiline stitching:
-
-[source,yaml]
-----
-type: log
-paths:
-{{ range $i, $path := .paths }}
- - {{$path}}
-{{ end }}
-exclude_files: [".gz$"]
-multiline:
- pattern: "^# User@Host: "
- negate: true
- match: after
-----
-
-Although you can add multiple configuration files under the `config/` folder,
-only the file indicated by the `manifest.yml` file will be loaded. You can use
-variables to dynamically switch between configurations.
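-
-For example (a sketch with an illustrative variable name), the `manifest.yml`
-could select between `config/plain.yml` and `config/json.yml` based on a
-variable:
-
-[source,yaml]
-----
-var:
-  - name: input_config
-    default: plain
-input: config/{{.input_config}}.yml
-----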
-
-[float]
-==== ingest/*.json
-
-The `ingest/` folder contains {es} {ref}/ingest.html[ingest pipeline]
-configurations. Ingest pipelines are responsible for parsing the log lines and
-doing other manipulations on the data.
-
-The files in this folder are JSON or YAML documents representing
-{ref}/pipeline.html[pipeline definitions]. Just like with the `config/`
-folder, you can define multiple pipelines, but a single one is loaded at runtime
-based on the information from `manifest.yml`.
-
-The generator creates a JSON object similar to this one:
-
-[source,json]
-----
-{
- "description": "Pipeline for parsing {module} {fileset} logs",
- "processors": [
- ],
- "on_failure" : [{
- "set" : {
- "field" : "error.message",
- "value" : "{{ _ingest.on_failure_message }}"
- }
- }]
-}
-----
-
-Alternatively, you can use YAML formatted pipelines, which use a simpler syntax:
-
-[source,yaml]
-----
-description: "Pipeline for parsing {module} {fileset} logs"
-processors:
-on_failure:
-  - set:
-      field: error.message
-      value: "{{ _ingest.on_failure_message }}"
-----
-
-From here, you would typically add processors to the `processors` array to do
-the actual parsing. For information about available ingest processors, see the
-{ref}/processors.html[processor reference documentation]. In
-particular, you will likely find the
-{ref}/grok-processor.html[grok processor] to be useful for parsing.
-Here is an example for parsing the Nginx access logs.
-
-[source,json]
-----
-{
- "grok": {
- "field": "message",
- "patterns":[
- "%{IPORHOST:nginx.access.remote_ip} - %{DATA:nginx.access.user_name} \\[%{HTTPDATE:nginx.access.time}\\] \"%{WORD:nginx.access.method} %{DATA:nginx.access.url} HTTP/%{NUMBER:nginx.access.http_version}\" %{NUMBER:nginx.access.response_code} %{NUMBER:nginx.access.body_sent.bytes} \"%{DATA:nginx.access.referrer}\" \"%{DATA:nginx.access.agent}\""
- ],
- "ignore_missing": true
- }
-}
-----
-
-Note that you should follow the convention of naming fields prefixed with the
-module and fileset name: `{module}.{fileset}.field`, e.g.
-`nginx.access.remote_ip`. Also, please review our <>.
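-
-Tying this together with the YAML pipeline format shown earlier, a minimal sketch
-(the module, fileset, and grok pattern are purely illustrative) might look like
-this:
-
-[source,yaml]
-----
-description: "Pipeline for parsing mymodule access logs"
-processors:
-  - grok:
-      field: message
-      patterns:
-        - '%{IPORHOST:mymodule.access.remote_ip} %{GREEDYDATA:mymodule.access.message}'
-      ignore_missing: true
-on_failure:
-  - set:
-      field: error.message
-      value: "{{ _ingest.on_failure_message }}"
-----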
-
-[[ingest-json-entry-point-pipeline]]
-In 6.6 and later, ingest pipelines can use the
-{ref}/conditionals-with-multiple-pipelines.html[`pipeline` processor] to delegate
-parts of the processing to other pipelines.
-
-This can be useful if you want a fileset to ingest the same _logical_ information
-presented in different formats, e.g. csv vs. json versions of the same log files.
-Imagine an entry point ingest pipeline that detects the format of a log entry and then conditionally
-delegates further processing of that log entry, depending on the format, to another
-pipeline.
-
-["source","json",subs="callouts"]
-----
-{
- "processors": [
- {
- "grok": {
- "field": "message",
- "patterns": [
- "^%{CHAR:first_char}"
- ],
- "pattern_definitions": {
- "CHAR": "."
- }
- }
- },
- {
- "pipeline": {
- "if": "ctx.first_char == '{'",
- "name": "{< IngestPipeline "json-log-processing-pipeline" >}" <1>
- }
- },
- {
- "pipeline": {
- "if": "ctx.first_char != '{'",
- "name": "{< IngestPipeline "plain-log-processing-pipeline" >}"
- }
- }
- ]
-}
-----
-<1> Use the `IngestPipeline` template function to resolve the name. This function converts the
-specified name into the fully qualified pipeline ID that is stored in Elasticsearch.
-
-In order for the above pipeline to work, Filebeat must load the entry point pipeline
-as well as any sub-pipelines into Elasticsearch. You can tell Filebeat to do
-so by specifying all the necessary pipelines for the fileset in its `manifest.yml`
-file. The first pipeline in the list is considered to be the entry point pipeline.
-
-[source,yaml]
-----
-ingest_pipeline:
- - ingest/main.json
- - ingest/plain_logs.yml
- - ingest/json_logs.json
-----
-
-While developing the pipeline definition, we recommend making use of the
-{ref}/simulate-pipeline-api.html[Simulate Pipeline API] for testing
-and quick iteration.
-
-By default, Filebeat does not update ingest pipelines that are already loaded. If you
-want to force updating your pipeline during development, use the
-`./filebeat setup --pipelines` command. This uploads the pipelines even if they
-are already available on the node.
-
-[float]
-==== _meta/fields.yml
-
-The `fields.yml` file contains the top-level structure for the fields in your
-fileset. It is used as the source of truth for:
-
-* the generated Elasticsearch mapping template
-* the generated Kibana index pattern
-* the generated documentation for the exported fields
-
-Besides the `fields.yml` file in the fileset, there is also a `fields.yml` file
-at the module level, placed under `module/{module}/_meta/fields.yml`, which
-should contain the fields defined at the module level, and the description of
-the module itself. In most cases, you should add the fields at the fileset
-level.
-
-After `pipeline.json` is created, it is possible to generate a base `fields.yml`.
-
-[source,bash]
-----
-make create-fields MODULE={module} FILESET={fileset}
-----
-
-Please always check the generated file and make sure the fields are correct.
-You must add field documentation manually.
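-
-For instance, a documented fileset-level entry might look like the following
-sketch (the field is illustrative only):
-
-[source,yaml]
-----
-- name: {fileset}
-  type: group
-  description: >
-    Fields exported by the {fileset} fileset.
-  fields:
-    - name: remote_ip
-      type: ip
-      description: >
-        Client IP address extracted from the log line.
-----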
-
-If the fields are correct, it is time to generate documentation, configuration
-and Kibana index patterns.
-
-[source,bash]
-----
-make update
-----
-
-[float]
-==== test
-
-In the `test/` directory, you should place sample log files generated by the
-service. We have integration tests, automatically executed by CI, that will run
-Filebeat on each of the log files under the `test/` folder and check that there
-are no parsing errors and that all fields are documented.
-
-In addition, assuming you have a `test.log` file, you can add a
-`test.log-expected.json` file in the same directory that contains the expected
-documents as they are found via an Elasticsearch search. In this case, the
-integration tests will automatically check that the result is the same on each
-run.
-
-To test the filesets with the sample logs and/or generate the expected output, run the tests
-locally for a specific module, using the following procedure from the Filebeat directory:
-
-. Start an Elasticsearch instance locally. For example, using Docker:
-+
-[source,bash]
-----
-docker run \
- --name elasticsearch \
- -p 9200:9200 -p 9300:9300 \
- -e "xpack.security.http.ssl.enabled=false" -e "ELASTIC_PASSWORD=changeme" \
- -e "discovery.type=single-node" \
- --pull always --rm --detach \
- docker.elastic.co/elasticsearch/elasticsearch:master-SNAPSHOT
-----
-. Create an "admin" user on that Elasticsearch instance:
-+
-[source,bash]
-----
-curl -u elastic:changeme \
- http://localhost:9200/_security/user/admin \
- -X POST -H 'Content-Type: application/json' \
- -d '{"password": "changeme", "roles": ["superuser"]}'
-----
-. Create the testing binary: `make filebeat.test`
-. Update fields yaml: `make update`
-. Create python env: `make python-env`
-. Source python env: `source ./build/python-env/bin/activate`
-. Run a test, for example to check nginx access log parsing:
-+
-[source,bash]
-----
-INTEGRATION_TESTS=1 BEAT_STRICT_PERMS=false ES_PASS=changeme \
-TESTING_FILEBEAT_MODULES=nginx \
-pytest tests/system/test_modules.py -v --full-trace
-----
-. Add and remove optional env vars as required. Here are some useful ones:
-* `TESTING_FILEBEAT_ALLOW_OLDER`: if set to 1, allow connecting to older versions of Elasticsearch
-* `TESTING_FILEBEAT_MODULES`: comma separated list of modules to test.
-* `TESTING_FILEBEAT_FILESETS`: comma separated list of filesets to test.
-* `TESTING_FILEBEAT_FILEPATTERN`: glob pattern for log files within the fileset to test.
-* `GENERATE`: if set to 1, the expected documents will be generated.
-
-The Filebeat logs are written to the `build` directory. It may be useful to tail them in another terminal using `tail -F build/system-tests/run/test_modules.Test.*/output.log`.
-
-For example, if there's a syntax error in an ingest pipeline, the test will probably just hang. The Filebeat log output will contain the error message from Elasticsearch.
diff --git a/docs/devguide/new_protocol.asciidoc b/docs/devguide/new_protocol.asciidoc
deleted file mode 100644
index defd50c0bc3f..000000000000
--- a/docs/devguide/new_protocol.asciidoc
+++ /dev/null
@@ -1,101 +0,0 @@
-[[new-protocol]]
-== Adding a New Protocol to Packetbeat
-
-The following topics describe how to add a new protocol to Packetbeat:
-
-* <>
-* <>
-* <>
-
-[[getting-ready-new-protocol]]
-=== Getting Ready
-
-Packetbeat is written in http://golang.org/[Go], so having Go installed and knowing the basics are prerequisites for understanding this guide. But don't worry if you aren't a Go expert. Go is a relatively new language, and very few people are experts in it. In fact, several people learned Go by contributing to Packetbeat and libbeat, including the original Packetbeat authors.
-
-You will also need a good understanding of the wire protocol that you want to
-add support for. For standard protocols or protocols used in open source
-projects, you can usually find detailed specifications and example source code.
-Wireshark is a very useful tool for understanding the inner workings of the
-protocols it supports.
-
-In some cases you can even make use of existing libraries for doing the actual
-parsing and decoding of the protocol. If the particular protocol has a Go
-implementation with a liberal enough license, you might be able to use it to
-parse and decode individual messages instead of writing your own parser.
-
-Before starting, please also read the <>.
-
-[float]
-==== Cloning and Compiling
-
-After you have https://golang.org/doc/install[installed Go] and set up the
-https://golang.org/doc/code.html#GOPATH[GOPATH] environment variable to point to
-your preferred workspace location, you can clone Packetbeat with the
-following commands:
-
-[source,shell]
-----------------------------------------------------------------------
-$ mkdir -p ${GOPATH}/src/github.com/elastic
-$ cd ${GOPATH}/src/github.com/elastic
-$ git clone https://github.com/elastic/beats.git
-----------------------------------------------------------------------
-
-Note: If you have multiple Go paths, use `${GOPATH%%:*}` instead of `${GOPATH}`.
-
-Then you can compile it with:
-
-[source,shell]
-----------------------------------------------------------------------
-$ cd beats
-$ make
-----------------------------------------------------------------------
-
-Note that the location where you clone is important. If you prefer working
-outside of the `GOPATH` environment, you can clone to another directory and only
-create a symlink to the `$GOPATH/src/github.com/elastic/` directory.
-
-[float]
-=== Forking and Branching
-
-We recommend the following work flow for contributing to Packetbeat:
-
-* Fork Beats in GitHub to your own account
-
-* In the `$GOPATH/src/github.com/elastic/beats` folder, add your fork
- as a new remote. For example (replace `tsg` with your GitHub account):
-
-[source,shell]
-----------------------------------------------------------------------
-$ git remote add tsg git@github.com:tsg/beats.git
-----------------------------------------------------------------------
-
-* Create a new branch for your work:
-
-[source,shell]
-----------------------------------------------------------------------
-$ git checkout -b cool_new_protocol
-----------------------------------------------------------------------
-
-* Commit as often as you like, and then push to your private fork with:
-
-[source,shell]
-----------------------------------------------------------------------
-$ git push --set-upstream tsg cool_new_protocol
-----------------------------------------------------------------------
-
-* When you are ready to submit your PR, simply do so from the GitHub web
- interface. Feel free to submit your PR early. You can still add commits to
- the branch after creating the PR. Submitting the PR early gives us more time to
- provide feedback and perhaps help you with it.
-
-[[protocol-modules]]
-=== Protocol Modules
-
-We are working on updating this section. While you're waiting for updates, you
-might want to try out the TCP protocol generator at
-https://github.com/elastic/beats/tree/master/packetbeat/scripts/tcp-protocol.
-
-[[protocol-testing]]
-=== Testing
-
-We are working on updating this section.
diff --git a/docs/devguide/newdashboards.asciidoc b/docs/devguide/newdashboards.asciidoc
deleted file mode 100644
index 9e540abb025a..000000000000
--- a/docs/devguide/newdashboards.asciidoc
+++ /dev/null
@@ -1,389 +0,0 @@
-[[new-dashboards]]
-== Creating New Kibana Dashboards for a Beat or a Beat module
-
-++++
-Creating New Kibana Dashboards
-++++
-
-
-When contributing to Beats development, you may want to add new dashboards or
-customize the existing ones. To get started, you can
-<> that come with the official
-Beats and use them as a starting point for your own dashboards. When you're done
-making changes to the dashboards in Kibana, you can use the `export_dashboards`
-script to <>, along with all
-dependencies, to a local directory.
-
-To make sure the dashboards are compatible with the latest version of Kibana and Elasticsearch, we
-recommend that you use the virtual environment under
-https://github.com/elastic/beats/tree/master/testing/environments[beats/testing/environments] to import, create, and
-export the Kibana dashboards.
-
-The following topics provide more detail about importing and working with Beats dashboards:
-
-* <<import-dashboards>>
-* <<build-dashboards>>
-* <<generate-index-pattern>>
-* <<export-dashboards>>
-* <<archive-dashboards>>
-* <<share-beat-dashboards>>
-
-[[import-dashboards]]
-=== Importing Existing Beat Dashboards
-
-The official Beats come with Kibana dashboards, and starting with 6.0.0, they
-are part of every Beat package.
-
-You can use the Beat executable to import all the dashboards and the index pattern for a Beat, including the dependencies such as visualizations and searches.
-
-To import the dashboards, run the `setup` command.
-
-
-[source,shell]
--------------------------
-./metricbeat setup
--------------------------
-
-The `setup` phase loads several dependencies, such as:
-
-- Index mapping template in Elasticsearch
-- Kibana dashboards
-- Ingest pipelines
-- ILM policy
-
-The dependencies vary depending on the Beat you're setting up.
-
-For more details about the `setup` command, see the command-line help. For example:
-
-[source,shell]
-----
-./metricbeat help setup
-
-This command does initial setup of the environment:
-
- * Index mapping template in Elasticsearch to ensure fields are mapped.
- * Kibana dashboards (where available).
- * ML jobs (where available).
- * Ingest pipelines (where available).
- * ILM policy (for Elasticsearch 6.5 and newer).
-
-Usage:
- metricbeat setup [flags]
-
-Flags:
- --dashboards Setup dashboards
- -h, --help help for setup
- --index-management Setup all components related to Elasticsearch index management, including template, ilm policy and rollover alias
- --pipelines Setup Ingest pipelines
-----
-
-The flags are useful when you don't want to load everything. For example, to
-import only the dashboards, use the `--dashboards` flag:
-
-[source,shell]
-----
-./metricbeat setup --dashboards
-----
-
-Starting with Beats 6.0.0, the dashboards are no longer loaded directly into Elasticsearch. Instead, they are imported directly into Kibana.
-Thus, if your Kibana instance is not listening on localhost, or you enabled
-{xpack} for Kibana, you need to either configure the Kibana endpoint in
-the config for the Beat, or pass the Kibana host and credentials as
-arguments to the `setup` command. For example:
-
-[source,shell]
-----
-./metricbeat setup -E setup.kibana.host=192.168.3.206:5601 -E setup.kibana.username=elastic -E setup.kibana.password=secret
-----
-
-By default, the `setup` command imports the dashboards from the `kibana`
-directory, which is available in the Beat package.
-
-NOTE: The format of the saved dashboards is not compatible between Kibana 5.x and 6.x. Thus, the Kibana 5.x dashboards are available in
-the `5.x` directory, and the Kibana 6.0 and newer dashboards are in the `default` directory.
-
-In case you are using customized dashboards, you can import them:
-
-- from a local directory:
-+
-[source,shell]
-----------------------------------------------------------------------
-./metricbeat setup -E setup.dashboards.directory=kibana
-----------------------------------------------------------------------
-
-- from a local zip archive:
-+
-[source,shell]
-----------------------------------------------------------------------
-./metricbeat setup -E setup.dashboards.file=metricbeat-dashboards-6.0.zip
-----------------------------------------------------------------------
-
-- from a zip archive available online:
-+
-[source,shell]
-----------------------------------------------------------------------
-./metricbeat setup -E setup.dashboards.url=path/to/url
-----------------------------------------------------------------------
-+
-
-See <<import-dashboard-options>> for a description of the `setup.dashboards` configuration options.
-
-
-[[import-dashboards-for-development]]
-==== Import Dashboards for Development
-
-You can make use of the Magefile from the Beat GitHub repository to import the
-dashboards. If Kibana is running on localhost, then you can run the following command
-from the root of the Beat:
-
-[source,shell]
---------------------------------
-mage dashboards
---------------------------------
-
-[[import-dashboard-options]]
-==== Kibana dashboards configuration
-
-The configuration file (`*.reference.yml`) of each Beat contains the `setup.dashboards` section for configuring where to get the Kibana dashboards from, as well as the name of the index pattern.
-Each of these configuration options can be overridden on the command line by using the `-E` flag.
-
-
-*`setup.dashboards.directory=`*::
-Local directory that contains the saved dashboards and their dependencies.
-The default value is the `kibana` directory available in the Beat package.
-
-*`setup.dashboards.file=`*::
-Local zip archive with the dashboards. The archive can contain Kibana dashboards for a single Beat or for multiple Beats. The dashboards of each Beat are placed under a separate directory with the name of the Beat.
-
-*`setup.dashboards.url=`*::
-Zip archive with the dashboards, available online. The archive can contain Kibana dashboards for a single Beat or for
-multiple Beats. The dashboards for each Beat are placed under a separate directory with the name of the Beat.
-
-*`setup.dashboards.index`*::
-You should only use this option if you want to change the index pattern name that's used by default. For example, if the
-default is `metricbeat-*`, you can change it to `custombeat-*`.
-
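-For example, to load dashboards from a local zip archive and use a custom index
-pattern name, you could combine these options on the command line (the archive
-name and index pattern shown here are only illustrative):
-
-[source,shell]
-----
-./metricbeat setup --dashboards -E setup.dashboards.file=custom-dashboards.zip -E setup.dashboards.index=custombeat-*
-----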
-
-[[build-dashboards]]
-=== Building Your Own Beat Dashboards
-
-NOTE: If you want to modify a dashboard that comes with a Beat, it's better to modify a copy of the dashboard because the Beat overwrites the dashboards during the setup phase in order to have the latest version. To duplicate a dashboard, use the `Clone` button at the top of the page.
-
-
-Before building your own dashboards or customizing the existing ones, you need to load:
-
-* the Beat index pattern, which specifies how Kibana should display the Beat fields
-* the Beat dashboards that you want to customize
-
-For the Elastic Beats, the index pattern is available in the Beat package under
-`kibana/*/index-pattern`. The index-pattern is automatically generated from the `fields.yml` file, available in the Beat package. For more details
-check the <<generate-index-pattern>> section.
-
-All Beats dashboards, visualizations, and saved searches must follow common naming conventions:
-
-* Dashboard names are prefixed with `[BeatName Module]`, e.g. `[Filebeat Nginx] Access logs`
-* Visualization and search names are suffixed with `[BeatName Module]`, e.g. `Top processes [Filebeat Nginx]`
-
-NOTE: You can set a custom name (and skip the suffix) for a visualization placed on a dashboard. The original visualization will
-stay intact.
-
-The naming convention rules can be verified with the `mage check` tool. The command fails if it detects:
-
-* empty description on a dashboard
-* unexpected dashboard title format (missing prefix `[BeatName ModuleName]`)
-* unexpected visualization title format (missing suffix `[BeatName Module]`)
-
-After creating your own dashboards in Kibana, you can <<export-dashboards,export them>> to a local
-directory, and then <<archive-dashboards,archive them>> in order to share the dashboards with the community.
-
-[[generate-index-pattern]]
-=== Generating the Beat Index Pattern
-
-The index-pattern defines the format of each field, and it's used by Kibana to know how to display the field.
-If you change the fields exported by the Beat, you need to generate a new index pattern for your Beat. Otherwise, you can just use the index pattern available under the `kibana/*/index-pattern` directory.
-
-The Beat index pattern is generated from the `fields.yml`, which contains all
-the fields exported by the Beat. For each field, besides the `type`, you can configure the
-`format` field. The format informs Kibana about how to display a certain field. A good example is `percentage` or `bytes`
-to display fields as `50%` or `5MB`.
-
-To generate the index pattern from the `fields.yml`, you need to run the following command in the Beat repository:
-
-[source,shell]
----------------
-make update
----------------
-
-[[export-dashboards]]
-=== Exporting New and Modified Beat Dashboards
-
-To export all the dashboards for any Elastic Beat or community Beat, including any new or modified dashboards and all dependencies such as
-visualizations and searches, you can use the Go script `export_dashboards.go` from
-https://github.com/elastic/beats/tree/master/dev-tools/cmd/dashboards[dev-tools].
-See the dev-tools https://github.com/elastic/beats/tree/master/dev-tools/README.md[readme] for more info.
-
-Alternatively, if the scripts above are not available, you can use your Beat binary to export Kibana 6.0 or later dashboards.
-
-==== Exporting from Kibana 6.0 to 7.14
-
-The `dev-tools/cmd/export_dashboards.go` script helps you export your customized Kibana dashboards up to and including the v7.14.x release.
-You might need to export a single dashboard or all the dashboards available for a module or Beat.
-
-It is also possible to use a Beat binary to export dashboards.
-
-==== Exporting from Kibana 7.15 or newer
-
-Starting with 7.15, your Beats version must be the same as your Kibana version
-to make sure the required export API is available.
-
-===== Migrate legacy dashboards made with Kibana 7.14 or older
-
-After you update your Kibana instance to at least 7.15, you have to
-export your dashboards again with either the `export_dashboards.go` tool or
-with your Beat.
-
-===== Export a single Kibana dashboard
-
-To export a single dashboard for a module, you can use one of the following commands inside a Beat with modules:
-
-[source,shell]
----------------
-MODULE=redis ID=AV4REOpp5NkDleZmzKkE mage exportDashboard
----------------
-
-[source,shell]
----------------
-./filebeat export dashboard --id 7fea2930-478e-11e7-b1f0-cb29bac6bf8b --folder module/redis
----------------
-
-This generates an appropriate folder under `module/redis` for the dashboard, separating assets into dashboards, searches, visualizations, etc.
-Each exported file is JSON, and its name is the ID of the asset.
-
-NOTE: The dashboard ID is available in the dashboard URL. For example, if the dashboard URL is
-`app/kibana#/dashboard/AV4REOpp5NkDleZmzKkE?_g=()&_a=(description:'Overview%2...`, the dashboard ID is `AV4REOpp5NkDleZmzKkE`.
-
-===== Export all module/Beat dashboards
-
-Each module should contain a `module.yml` file with a list of all the dashboards available for the module. For the Beats that don't have support for modules (e.g. Packetbeat),
-there is a `dashboards.yml` file that defines all the Packetbeat dashboards.
-
-Below is an example of the `module.yml` file for the system module in Metricbeat:
-
-[source,shell]
----------------
-dashboards:
-- id: Metricbeat-system-overview
- file: Metricbeat-system-overview.ndjson
-
-- id: 79ffd6e0-faa0-11e6-947f-177f697178b8
- file: Metricbeat-host-overview.ndjson
-
-- id: CPU-slash-Memory-per-container
- file: Metricbeat-containers-overview.ndjson
----------------
-
-
-Each dashboard is defined by an `id` and the name of the ndjson `file` where the dashboard is saved locally.
-
-By passing the `yml` file to the `export_dashboards.go` script or to the Beat, you can export all the dashboards it defines:
-
-[source,shell]
--------------------
-go run dev-tools/cmd/dashboards/export_dashboards.go --yml filebeat/module/system/module.yml --folder dashboards
--------------------
-
-[source,shell]
--------------------
-./filebeat export dashboard --yml filebeat/module/system/module.yml
--------------------
-
-
-===== Export dashboards from a Kibana Space
-
-If you are using the Kibana Spaces feature and want to export dashboards from a specific Space, pass the Space ID to the `export_dashboards.go` script:
-
-[source,shell]
--------------------
-go run dev-tools/cmd/dashboards/export_dashboards.go -space-id my-space [other-options]
--------------------
-
-If you run `export dashboard` with a Beat binary, you need to set the Space ID in the `setup.kibana.space.id` setting.
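-
-For example, a sketch of such an invocation (the Space ID and module file shown are only illustrative) could be:
-
-[source,shell]
-----
-./filebeat export dashboard --yml filebeat/module/system/module.yml -E setup.kibana.space.id=my-space
-----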
-
-
-==== Exporting Kibana 5.x dashboards
-
-To export only some Kibana dashboards for an Elastic Beat or community Beat, you can simply pass a regular expression to
-the `export_dashboards.py` script to match the selected Kibana dashboards.
-
-Before running the `export_dashboards.py` script for the first time, you
-need to create an environment that contains all the required Python packages.
-
-[source,shell]
--------------------------
-make python-env
--------------------------
-
-For example, to export all Kibana dashboards that start with the **Packetbeat** name:
-
-[source,shell]
-----------------------------------------------------------------------
-python ../dev-tools/cmd/dashboards/export_dashboards.py --regex Packetbeat*
-----------------------------------------------------------------------
-
-To see all the available options, read the descriptions below or run:
-
-[source,shell]
-----------------------------------------------------------------------
-python ../dev-tools/cmd/dashboards/export_dashboards.py -h
-----------------------------------------------------------------------
-
-*`--url `*::
-The Elasticsearch URL. The default value is http://localhost:9200.
-
-*`--regex `*::
-Regular expression to match all the Kibana dashboards to be exported. This argument is required.
-
-*`--kibana `*::
-The Elasticsearch index pattern where Kibana saves its configuration. The default value is `.kibana`.
-
-*`--dir `*::
-The output directory where the dashboards and all dependencies will be saved. The default value is `output`.
-
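-For example, putting these options together (the URL and regular expression shown are only illustrative):
-
-[source,shell]
-----
-python ../dev-tools/cmd/dashboards/export_dashboards.py --url http://localhost:9200 --regex 'Metricbeat*' --dir output
-----
-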
-The output directory has the following structure:
-
-[source,shell]
---------------
-output/
- index-pattern/
- dashboard/
- visualization/
- search/
---------------
-
-[[archive-dashboards]]
-=== Archiving Your Beat Dashboards
-
-The Kibana dashboards for the Elastic Beats are saved under the `kibana` directory. To create a zip archive with the
-dashboards, including visualizations and searches and the index pattern, you can run the following command in the Beat
-repository:
-
-[source,shell]
---------------
-make package-dashboards
---------------
-
-The Makefile is part of libbeat, which means that community Beats contributors can use the commands shown here to
-archive dashboards. The dashboards must be available under the `kibana` directory.
-
-Another option would be to create a repository only with the dashboards, and use the GitHub release functionality to
-create a zip archive.
-
-Share the Kibana dashboards archive with the community, so other users can use your cool Kibana visualizations!
-
-
-
-[[share-beat-dashboards]]
-=== Sharing Your Beat Dashboards
-
-When you're done with your own Beat dashboards, how about letting everyone know? You can create a topic on the https://discuss.elastic.co/c/beats[Beats
-forum], and provide the link to the zip archive together with a short description.
diff --git a/docs/devguide/pull-request-guidelines.asciidoc b/docs/devguide/pull-request-guidelines.asciidoc
deleted file mode 100644
index 113c8aa5d53a..000000000000
--- a/docs/devguide/pull-request-guidelines.asciidoc
+++ /dev/null
@@ -1,18 +0,0 @@
-[[pr-review]]
-== Pull request review guidelines
-
-Every change made to Beats must be held to a high standard, and while the responsibility for quality in a pull request ultimately lies with the author, Beats team members, as reviewers, have the responsibility to verify that quality during their review process. Where this document is unclear or inappropriate, let common sense and consensus override it.
-
-[float]
-=== Code Style
-
-Everyone's got an opinion on style. To avoid spending time on this issue, we rely almost exclusively on `go fmt` and https://houndci.com/[hound] to police style. If neither of these tools complains, the code is almost certainly fine. There may be exceptions to this, but they should be extremely rare. Only override the judgement of these tools in the most unusual of situations.
-
-[float]
-=== Flaky Tests
-
-As software projects grow, so does the complexity of their test cases, and with that the probability of some tests becoming 'flaky'. It is everyone's responsibility to handle flaky tests. If you notice a pull request build failing for a reason that is unrelated to the pushed code, follow the procedure below:
-
-1. Create an issue using the "Flaky Test" github issue template with the "Flaky Test" label attached.
-2. Create a PR to mute or fix the flaky test.
-3. Merge that PR and rebase off of it before continuing with the normal PR process for your original PR.
diff --git a/docs/devguide/python.asciidoc b/docs/devguide/python.asciidoc
deleted file mode 100644
index 8f86e81fcc39..000000000000
--- a/docs/devguide/python.asciidoc
+++ /dev/null
@@ -1,90 +0,0 @@
-[[python-beats]]
-=== Python in Beats
-
-Python is used in Beats development; it is the language used to implement
-system tests and some other tools. Python dependencies are managed by the use of
-virtual environments, supported by
-https://docs.python.org/3/library/venv.html[venv].
-
-Beats development requires Python >= {python}.
-
-[[installing-python]]
-==== Installing Python and venv
-
-Python is usually already installed on many operating systems. If it is not installed on
-your system, you can follow the instructions available at https://www.python.org/downloads/
-
-In Ubuntu/Debian systems, Python 3 can be installed with:
-
-["source","sh"]
-----
-sudo apt-get install python3 python3-venv
-----
-
-There are packages for specific minor versions. For example, if you want to use
-Python 3.7, you can install it with the following command:
-
-["source","sh"]
-----
-sudo apt-get install python3.7 python3.7-venv
-----
-
-It is recommended to use Python >= {python}.
-
-[[python-virtual-environments]]
-==== Working with virtual environments
-
-All `make` and `mage` targets manage their own virtual environments in a transparent
-way, so for the most common operations required when contributing to Beats,
-nothing special needs to be done.
-
-Virtual environments used by `make` can be found in most Beats directories under
-`build/python-env`. They are created by targets that need them, or can be
-explicitly created by running `make python-env`. The ones used by `mage` are
-created when required under `build/ve`.
-
-There are some environment variables that can be used to customize the creation
-of these virtual environments:
-
-* `PYTHON_EXE`: Python executable to be used in the virtual environment. It has
- to exist in the path.
-* `PYTHON_ENV`: Path to the virtual environment to use. If it doesn't exist, it
- is created by `make` or `mage` targets when needed.
-
-Virtual environments can also be used without `make` or `mage`; this is usual,
-for example, when running individual system tests with `pytest`. There are two
-ways to run commands from the virtual environment:
-
-* "Activating" the virtual environment in your current terminal running
- `source ./build/python-env/bin/activate`. Virtual environment can be
- deactivated by running `deactivate`.
-* Directly running commands from the virtual environment path. For example
- `pytest` can be executed as `./build/python-env/bin/pytest`.
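-
-As a short illustration of the first approach (the test file path is only an
-example and may differ in your Beat):
-
-["source","sh"]
-----
-source ./build/python-env/bin/activate
-pytest tests/system/test_base.py
-deactivate
-----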
-
-To recreate a virtual environment, remove its directory. All virtual
-environments are also removed with `make clean`.
-
-[[python-older-versions]]
-==== Working with older versions
-
-Older versions of Beats were not compatible with Python 3. If you need to
-temporarily work on one of these versions of Beats, and you don't want to remove
-your current virtual environments, you can use environment variables to run
-commands in a temporary virtual environment.
-
-For example, you can run `make update` with Python 2.7 with the following
-command:
-
-["source","sh"]
------
-PYTHON_EXE=python2.7 PYTHON_ENV=/tmp/venv2 make update
------
-
-If you need to run tests, you can also create a virtual environment and then
-activate it to run commands from there:
-
-["source","sh"]
------
-PYTHON_EXE=python2.7 PYTHON_ENV=/tmp/venv2 make python-env
-source /tmp/venv2/bin/activate
-...
------
diff --git a/docs/devguide/terraform.asciidoc b/docs/devguide/terraform.asciidoc
deleted file mode 100644
index 0cdd0198f214..000000000000
--- a/docs/devguide/terraform.asciidoc
+++ /dev/null
@@ -1,81 +0,0 @@
-[[terraform-beats]]
-== Terraform in Beats
-
-Terraform is used to provision scenarios for integration testing of some cloud
-features. Features implementing integration tests that require the presence of
-cloud resources should have their own Terraform configuration. This configuration
-can be used when developing locally to create (and destroy) the resources needed
-to test these features.
-
-Tests requiring access to cloud providers should be disabled by default with the
-use of build tags.
-
-[[installing-terraform]]
-=== Installing Terraform
-
-Terraform is available at https://www.terraform.io/downloads.html
-
-Download it and place it in a directory in your PATH.
-
-`terraform` is the main command for Terraform and the only one that is usually
-needed to manage configurations. Terraform will also download other plugins that
-implement the specific functionality for each provider. These plugins are
-automatically managed and stored in the working copy. If you want to share the
-plugins between multiple working copies, you can manually install them in the
-user plugins directory located at `~/.terraform.d/plugins`,
-or `%APPDATA%\terraform.d\plugins` on Windows.
-
-Plugins are available at https://registry.terraform.io/
-
-[[using-terraform]]
-=== Using Terraform
-
-The most important commands when using Terraform are:
-
-* `terraform init` to do some initial checks and install the required plugins.
-* `terraform apply` to create the resources defined in the configuration.
-* `terraform destroy` to destroy resources previously created.
-
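-A typical local workflow against one of these configurations might look like
-this (the directory path is a placeholder; use the configuration of the feature
-you are testing):
-
-[source,shell]
-----
-# Change to the directory containing the feature's Terraform configuration.
-cd path/to/terraform/configuration
-terraform init
-terraform apply
-# ... run the integration tests locally ...
-terraform destroy
-----
-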
-Cloud providers usually require credentials. They can be provided with the usual
-methods supported by these providers, using environment variables and/or
-credential files.
-
-Terraform stores the last known state of the resources managed by a
-configuration in a `terraform.tfstate` file. It is important to keep this file
-as it is used as input by `terraform destroy`. This file is created in the same
-directory where `terraform apply` is executed.
-
-Please take a look at the Terraform documentation for more details: https://www.terraform.io/intro/index.html
-
-[[terraform-configurations]]
-=== Terraform configuration guidelines
-
-The main purpose of Terraform in Beats is to create and destroy cloud resources
-required by integration tests. For these configurations there are some things to
-take into account:
-
-* Apply should work without additional inputs or files. The only input should be
-  what is required for specific providers, using environment variables or
-  credential files.
-* You must be able to apply the same configuration multiple times in the same
-  account. This allows multiple builds to use the same configuration
-  but with different instances of the resources. Some resources are already
-  created with unique identifiers (such as EC2 instances); others have to be
-  explicitly created with unique names (e.g. S3 buckets). For these cases, random
-  suffixes can be added to identifiers.
-* Destroy must work without additional input, and should be able to destroy all
-  the resources created by the configuration. Some resources need
-  specific flags to be destroyed by `terraform destroy`. For example, S3 buckets
-  need a flag to force emptying the bucket before deleting it, and RDS instances
-  need a flag to disable snapshots on deletion.
-
-[[terraform-in-ci]]
-=== Terraform in CI
-
-Integration tests that need the presence of certain resources to work can be
-executed in CI if they provide a Terraform configuration to start these
-resources. These tests are disabled by default in CI.
-
-Terraform states are archived as build artifacts. This makes it possible to manually
-destroy resources created by builds that were not able to do a proper cleanup.
-
-
-
diff --git a/docs/devguide/testing.asciidoc b/docs/devguide/testing.asciidoc
deleted file mode 100644
index 07f2ae21025c..000000000000
--- a/docs/devguide/testing.asciidoc
+++ /dev/null
@@ -1,110 +0,0 @@
-[[testing]]
-=== Testing
-
-Beats has various sets of tests. This guide should help you understand how the different test suites work, how they are used, and how new tests are added.
-
-In general there are two major test suites:
-
-* Tests written in Go
-* Tests written in Python
-
-The tests written in Go use the https://golang.org/pkg/testing/[Go Testing
-package]. The tests written in Python depend on https://docs.pytest.org/en/latest/[pytest] and require a compiled and executable binary from the Go code. The Python tests run a Beat with a specific config and parameters and either check if the output is as expected or if the correct things show up in the logs.
-
-For both of the above test suites, so-called integration tests exist. Integration tests in Beats are tests which require an external system like Elasticsearch to test if the integration with this service works as expected. The Beats test suite provides Docker containers and docker-compose files to start these environments, but a developer can also run the required services locally.
-
-==== Running Go Tests
-
-The Go tests can be executed in each Go package by running `go test .`. This will execute all tests which don't require an external service to be running. To run all non-integration tests for a Beat, run `mage unitTest`.
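-
-For example, to run a single unit test by name (the package path and test name here are only illustrative):
-
-[source,bash]
-----
-cd libbeat/common
-go test -v -run TestMyHelper .
-----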
-
-All Go tests are in the same package as the tested code itself and have the suffix `_test` in the file name. Most of the tests are in the same package as the rest of the code. Some of the tests which should be separate from the rest of the code or should not use private variables go under `{packagename}_test`.
-
-===== Running Go Integration Tests
-
-Integration tests are labelled with the `//go:build integration` build tag and use the `_integration_test.go` suffix.
-
-To run the integration tests use the `mage goIntegTest` target, which will start the required services using https://docs.docker.com/compose/[docker-compose] and run all integration tests.
-
-It is also possible to run module-specific integration tests. For example, to run only the Kafka tests, use `MODULE=kafka mage integTest -v`.
-
-It is possible to start the `docker-compose` services manually to allow selecting which specific tests should be run. An example follows for Filebeat:
-
-[source,bash]
-----
-cd filebeat
-# Pull and build the containers. Only needs to be done once unless you change the containers.
-mage docker:composeBuild
-# Bring up all containers, wait until they are healthy, and put them in the background.
-mage docker:composeUp
-# Run all integration tests.
-go test ./filebeat/... -tags integration
-# Stop all started containers.
-mage docker:composeDown
-----
-
-===== Generate sample events
-
-Go tests support generating sample events to be used as fixtures.
-
-This generation can be performed by running `go test --data`. This functionality is supported by Packetbeat and Metricbeat.
-
-In Metricbeat, run the command from within a module like this: `go test --tags integration,azure --data --run "TestData"`. Make sure to add the relevant tags (`integration` is common; then add module- and metricset-specific tags).
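-
-For example, a sketch of generating data for one metricset (the module and metricset directory are only illustrative):
-
-[source,bash]
-----
-cd metricbeat/module/azure/monitor
-go test --tags integration,azure --data --run "TestData"
-----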
-
-A note about tags: the `--data` flag is a custom flag added by the Metricbeat and Packetbeat frameworks. It will not be present if the tags do not match, as the relevant code will not be run and will be silently skipped (without the tag, the test file is ignored by the Go compiler, so the framework doesn't load). This may happen if there are different tags in the build tags of the metricset under test (e.g. the GCP billing metricset also requires the `billing` tag).
-
-==== Running System (integration) Tests (Python and Go)
-
-The system tests are defined in the `tests/system` (for legacy Python tests) and `tests/integration` (for Go tests) directories. They require a testing binary to be available and the Python environment to be set up.
-
-To create the testing binary run `mage buildSystemTestBinary`. This will create the test binary in the beat directory. To set up the Python testing environment run `mage pythonVirtualEnv` which will create a virtual environment with all test dependencies and print its location. To activate it, the instructions depend on your operating system. See the https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#activating-a-virtual-environment[virtualenv documentation].
-
-To run the system and integration tests use the `mage pythonIntegTest` target, which will start the required services using https://docs.docker.com/compose/[docker-compose] and run all integration tests. Similar to Go integration tests, the individual steps can be done manually to allow selecting which tests should be run:
-
-[source,bash]
-----
-# Create and activate the system test virtual environment (assumes a Unix system).
-source $(mage pythonVirtualEnv)/bin/activate
-
-# Pull and build the containers. Only needs to be done once unless you change the containers.
-mage docker:composeBuild
-
-# Bring up all containers, wait until they are healthy, and put them in the background.
-mage docker:composeUp
-
-# Run all system and integration tests.
-INTEGRATION_TESTS=1 pytest ./tests/system
-
-# Stop all started containers.
-mage docker:composeDown
-----
-
-Filebeat's module Python tests have additional documentation, found in the <> guide.
-
-==== Test commands
-
-To list all mage commands, run `mage -l`. A quick summary of the available test Make commands is:
-
-* `unit`: Go tests
-* `unit-tests`: Go tests with coverage reports
-* `integration-tests`: Go tests with services in local docker
-* `integration-tests-environment`: Go tests inside docker with service in docker
-* `fast-system-tests`: Python tests
-* `system-tests`: Python tests with coverage report
-* `INTEGRATION_TESTS=1 system-tests`: Python tests with local services
-* `system-tests-environment`: Python tests inside docker with service in docker
-* `testsuite`: Complete test suite in docker environment is run
-* `test`: Runs testsuite without environment
-
-There are two experimental test commands:
-
-* `benchmark-tests`: Running Go tests with `-bench` flag
-* `load-tests`: Running system tests with `LOAD_TESTS=1` flag
-
-
-==== Coverage report
-
-If the tests were run with test coverage enabled, the coverage report files can be found under `build/docs`. To create a more human-readable file out of the `.cov` files, use `make coverage-report`. It creates an `.html` file for each report and a `full.html` file as a summary of all reports together in the `build/coverage` directory.
-
-==== Race detection
-
-All tests can be run with the Go race detector enabled by setting the environment variable `RACE_DETECTOR=1`. This applies to tests in Go and Python. For Python, the test binary has to be recompiled when the flag is changed. Having race detection enabled will slow down the tests.
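-
-For example, assuming the `RACE_DETECTOR` variable is honored by the target you invoke, running the unit tests with the race detector enabled could look like this:
-
-[source,bash]
-----
-RACE_DETECTOR=1 mage unitTest
-----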
diff --git a/docs/docset.yml b/docs/docset.yml
new file mode 100644
index 000000000000..48aef0b01f44
--- /dev/null
+++ b/docs/docset.yml
@@ -0,0 +1,491 @@
+project: 'Beats docs'
+cross_links:
+ - docs-content
+ - ecs
+ - elasticsearch
+ - integration-docs
+ - logstash
+toc:
+ - toc: reference
+ - toc: release-notes
+ - toc: extend
+subs:
+ ref: "https://www.elastic.co/guide/en/elasticsearch/reference/current"
+ ref-bare: "https://www.elastic.co/guide/en/elasticsearch/reference"
+ ref-8x: "https://www.elastic.co/guide/en/elasticsearch/reference/8.1"
+ ref-80: "https://www.elastic.co/guide/en/elasticsearch/reference/8.0"
+ ref-7x: "https://www.elastic.co/guide/en/elasticsearch/reference/7.17"
+ ref-70: "https://www.elastic.co/guide/en/elasticsearch/reference/7.0"
+ ref-60: "https://www.elastic.co/guide/en/elasticsearch/reference/6.0"
+ ref-64: "https://www.elastic.co/guide/en/elasticsearch/reference/6.4"
+ xpack-ref: "https://www.elastic.co/guide/en/x-pack/6.2"
+ logstash-ref: "https://www.elastic.co/guide/en/logstash/current"
+ kibana-ref: "https://www.elastic.co/guide/en/kibana/current"
+ kibana-ref-all: "https://www.elastic.co/guide/en/kibana"
+ beats-ref-root: "https://www.elastic.co/guide/en/beats"
+ beats-ref: "https://www.elastic.co/guide/en/beats/libbeat/current"
+ beats-ref-60: "https://www.elastic.co/guide/en/beats/libbeat/6.0"
+ beats-ref-63: "https://www.elastic.co/guide/en/beats/libbeat/6.3"
+ beats-devguide: "https://www.elastic.co/guide/en/beats/devguide/current"
+ auditbeat-ref: "https://www.elastic.co/guide/en/beats/auditbeat/current"
+ packetbeat-ref: "https://www.elastic.co/guide/en/beats/packetbeat/current"
+ metricbeat-ref: "https://www.elastic.co/guide/en/beats/metricbeat/current"
+ filebeat-ref: "https://www.elastic.co/guide/en/beats/filebeat/current"
+ functionbeat-ref: "https://www.elastic.co/guide/en/beats/functionbeat/current"
+ winlogbeat-ref: "https://www.elastic.co/guide/en/beats/winlogbeat/current"
+ heartbeat-ref: "https://www.elastic.co/guide/en/beats/heartbeat/current"
+ journalbeat-ref: "https://www.elastic.co/guide/en/beats/journalbeat/current"
+ ingest-guide: "https://www.elastic.co/guide/en/ingest/current"
+ fleet-guide: "https://www.elastic.co/guide/en/fleet/current"
+ apm-guide-ref: "https://www.elastic.co/guide/en/apm/guide/current"
+ apm-guide-7x: "https://www.elastic.co/guide/en/apm/guide/7.17"
+ apm-app-ref: "https://www.elastic.co/guide/en/kibana/current"
+ apm-agents-ref: "https://www.elastic.co/guide/en/apm/agent"
+ apm-android-ref: "https://www.elastic.co/guide/en/apm/agent/android/current"
+ apm-py-ref: "https://www.elastic.co/guide/en/apm/agent/python/current"
+ apm-py-ref-3x: "https://www.elastic.co/guide/en/apm/agent/python/3.x"
+ apm-node-ref-index: "https://www.elastic.co/guide/en/apm/agent/nodejs"
+ apm-node-ref: "https://www.elastic.co/guide/en/apm/agent/nodejs/current"
+ apm-node-ref-1x: "https://www.elastic.co/guide/en/apm/agent/nodejs/1.x"
+ apm-rum-ref: "https://www.elastic.co/guide/en/apm/agent/rum-js/current"
+ apm-ruby-ref: "https://www.elastic.co/guide/en/apm/agent/ruby/current"
+ apm-java-ref: "https://www.elastic.co/guide/en/apm/agent/java/current"
+ apm-go-ref: "https://www.elastic.co/guide/en/apm/agent/go/current"
+ apm-dotnet-ref: "https://www.elastic.co/guide/en/apm/agent/dotnet/current"
+ apm-php-ref: "https://www.elastic.co/guide/en/apm/agent/php/current"
+ apm-ios-ref: "https://www.elastic.co/guide/en/apm/agent/swift/current"
+ apm-lambda-ref: "https://www.elastic.co/guide/en/apm/lambda/current"
+ apm-attacher-ref: "https://www.elastic.co/guide/en/apm/attacher/current"
+ docker-logging-ref: "https://www.elastic.co/guide/en/beats/loggingplugin/current"
+ esf-ref: "https://www.elastic.co/guide/en/esf/current"
+ kinesis-firehose-ref: "https://www.elastic.co/guide/en/kinesis/{{kinesis_version}}"
+ estc-welcome-current: "https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current"
+ estc-welcome: "https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current"
+ estc-welcome-all: "https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions"
+ hadoop-ref: "https://www.elastic.co/guide/en/elasticsearch/hadoop/current"
+ stack-ref: "https://www.elastic.co/guide/en/elastic-stack/current"
+ stack-ref-67: "https://www.elastic.co/guide/en/elastic-stack/6.7"
+ stack-ref-68: "https://www.elastic.co/guide/en/elastic-stack/6.8"
+ stack-ref-70: "https://www.elastic.co/guide/en/elastic-stack/7.0"
+ stack-ref-80: "https://www.elastic.co/guide/en/elastic-stack/8.0"
+ stack-ov: "https://www.elastic.co/guide/en/elastic-stack-overview/current"
+ stack-gs: "https://www.elastic.co/guide/en/elastic-stack-get-started/current"
+ stack-gs-current: "https://www.elastic.co/guide/en/elastic-stack-get-started/current"
+ javaclient: "https://www.elastic.co/guide/en/elasticsearch/client/java-api/current"
+ java-api-client: "https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current"
+ java-rest: "https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current"
+ jsclient: "https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current"
+ jsclient-current: "https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current"
+ es-ruby-client: "https://www.elastic.co/guide/en/elasticsearch/client/ruby-api/current"
+ es-dotnet-client: "https://www.elastic.co/guide/en/elasticsearch/client/net-api/current"
+ es-php-client: "https://www.elastic.co/guide/en/elasticsearch/client/php-api/current"
+ es-python-client: "https://www.elastic.co/guide/en/elasticsearch/client/python-api/current"
+ defguide: "https://www.elastic.co/guide/en/elasticsearch/guide/2.x"
+ painless: "https://www.elastic.co/guide/en/elasticsearch/painless/current"
+ plugins: "https://www.elastic.co/guide/en/elasticsearch/plugins/current"
+ plugins-8x: "https://www.elastic.co/guide/en/elasticsearch/plugins/8.1"
+ plugins-7x: "https://www.elastic.co/guide/en/elasticsearch/plugins/7.17"
+ plugins-6x: "https://www.elastic.co/guide/en/elasticsearch/plugins/6.8"
+ glossary: "https://www.elastic.co/guide/en/elastic-stack-glossary/current"
+ upgrade_guide: "https://www.elastic.co/products/upgrade_guide"
+ blog-ref: "https://www.elastic.co/blog/"
+ curator-ref: "https://www.elastic.co/guide/en/elasticsearch/client/curator/current"
+ curator-ref-current: "https://www.elastic.co/guide/en/elasticsearch/client/curator/current"
+ metrics-ref: "https://www.elastic.co/guide/en/metrics/current"
+ metrics-guide: "https://www.elastic.co/guide/en/metrics/guide/current"
+ logs-ref: "https://www.elastic.co/guide/en/logs/current"
+ logs-guide: "https://www.elastic.co/guide/en/logs/guide/current"
+ uptime-guide: "https://www.elastic.co/guide/en/uptime/current"
+ observability-guide: "https://www.elastic.co/guide/en/observability/current"
+ observability-guide-all: "https://www.elastic.co/guide/en/observability"
+ siem-guide: "https://www.elastic.co/guide/en/siem/guide/current"
+ security-guide: "https://www.elastic.co/guide/en/security/current"
+ security-guide-all: "https://www.elastic.co/guide/en/security"
+ endpoint-guide: "https://www.elastic.co/guide/en/endpoint/current"
+ sql-odbc: "https://www.elastic.co/guide/en/elasticsearch/sql-odbc/current"
+ ecs-ref: "https://www.elastic.co/guide/en/ecs/current"
+ ecs-logging-ref: "https://www.elastic.co/guide/en/ecs-logging/overview/current"
+ ecs-logging-go-logrus-ref: "https://www.elastic.co/guide/en/ecs-logging/go-logrus/current"
+ ecs-logging-go-zap-ref: "https://www.elastic.co/guide/en/ecs-logging/go-zap/current"
+ ecs-logging-go-zerolog-ref: "https://www.elastic.co/guide/en/ecs-logging/go-zap/current"
+ ecs-logging-java-ref: "https://www.elastic.co/guide/en/ecs-logging/java/current"
+ ecs-logging-dotnet-ref: "https://www.elastic.co/guide/en/ecs-logging/dotnet/current"
+ ecs-logging-nodejs-ref: "https://www.elastic.co/guide/en/ecs-logging/nodejs/current"
+ ecs-logging-php-ref: "https://www.elastic.co/guide/en/ecs-logging/php/current"
+ ecs-logging-python-ref: "https://www.elastic.co/guide/en/ecs-logging/python/current"
+ ecs-logging-ruby-ref: "https://www.elastic.co/guide/en/ecs-logging/ruby/current"
+ ml-docs: "https://www.elastic.co/guide/en/machine-learning/current"
+ eland-docs: "https://www.elastic.co/guide/en/elasticsearch/client/eland/current"
+ eql-ref: "https://eql.readthedocs.io/en/latest/query-guide"
+ extendtrial: "https://www.elastic.co/trialextension"
+ wikipedia: "https://en.wikipedia.org/wiki"
+ forum: "https://discuss.elastic.co/"
+ xpack-forum: "https://discuss.elastic.co/c/50-x-pack"
+ security-forum: "https://discuss.elastic.co/c/x-pack/shield"
+ watcher-forum: "https://discuss.elastic.co/c/x-pack/watcher"
+ monitoring-forum: "https://discuss.elastic.co/c/x-pack/marvel"
+ graph-forum: "https://discuss.elastic.co/c/x-pack/graph"
+ apm-forum: "https://discuss.elastic.co/c/apm"
+ enterprise-search-ref: "https://www.elastic.co/guide/en/enterprise-search/current"
+ app-search-ref: "https://www.elastic.co/guide/en/app-search/current"
+ workplace-search-ref: "https://www.elastic.co/guide/en/workplace-search/current"
+ enterprise-search-node-ref: "https://www.elastic.co/guide/en/enterprise-search-clients/enterprise-search-node/current"
+ enterprise-search-php-ref: "https://www.elastic.co/guide/en/enterprise-search-clients/php/current"
+ enterprise-search-python-ref: "https://www.elastic.co/guide/en/enterprise-search-clients/python/current"
+ enterprise-search-ruby-ref: "https://www.elastic.co/guide/en/enterprise-search-clients/ruby/current"
+ elastic-maps-service: "https://maps.elastic.co"
+ integrations-docs: "https://docs.elastic.co/en/integrations"
+ integrations-devguide: "https://www.elastic.co/guide/en/integrations-developer/current"
+ time-units: "https://www.elastic.co/guide/en/elasticsearch/reference/current/api-conventions.html#time-units"
+ byte-units: "https://www.elastic.co/guide/en/elasticsearch/reference/current/api-conventions.html#byte-units"
+ apm-py-ref-v: "https://www.elastic.co/guide/en/apm/agent/python/current"
+ apm-node-ref-v: "https://www.elastic.co/guide/en/apm/agent/nodejs/current"
+ apm-rum-ref-v: "https://www.elastic.co/guide/en/apm/agent/rum-js/current"
+ apm-ruby-ref-v: "https://www.elastic.co/guide/en/apm/agent/ruby/current"
+ apm-java-ref-v: "https://www.elastic.co/guide/en/apm/agent/java/current"
+ apm-go-ref-v: "https://www.elastic.co/guide/en/apm/agent/go/current"
+ apm-ios-ref-v: "https://www.elastic.co/guide/en/apm/agent/swift/current"
+ apm-dotnet-ref-v: "https://www.elastic.co/guide/en/apm/agent/dotnet/current"
+ apm-php-ref-v: "https://www.elastic.co/guide/en/apm/agent/php/current"
+ ecloud: "Elastic Cloud"
+ esf: "Elastic Serverless Forwarder"
+ ess: "Elasticsearch Service"
+ ece: "Elastic Cloud Enterprise"
+ eck: "Elastic Cloud on Kubernetes"
+ serverless-full: "Elastic Cloud Serverless"
+ serverless-short: "Serverless"
+ es-serverless: "Elasticsearch Serverless"
+ es3: "Elasticsearch Serverless"
+ obs-serverless: "Elastic Observability Serverless"
+ sec-serverless: "Elastic Security Serverless"
+ serverless-docs: "https://docs.elastic.co/serverless"
+ cloud: "https://www.elastic.co/guide/en/cloud/current"
+ ess-utm-params: "?page=docs&placement=docs-body"
+ ess-baymax: "?page=docs&placement=docs-body"
+ ess-trial: "https://cloud.elastic.co/registration?page=docs&placement=docs-body"
+ ess-product: "https://www.elastic.co/cloud/elasticsearch-service?page=docs&placement=docs-body"
+ ess-console: "https://cloud.elastic.co?page=docs&placement=docs-body"
+ ess-console-name: "Elasticsearch Service Console"
+ ess-deployments: "https://cloud.elastic.co/deployments?page=docs&placement=docs-body"
+ ece-ref: "https://www.elastic.co/guide/en/cloud-enterprise/current"
+ eck-ref: "https://www.elastic.co/guide/en/cloud-on-k8s/current"
+ ess-leadin: "You can run Elasticsearch on your own hardware or use our hosted Elasticsearch Service that is available on AWS, GCP, and Azure. https://cloud.elastic.co/registration{ess-utm-params}[Try the Elasticsearch Service for free]."
+ ess-leadin-short: "Our hosted Elasticsearch Service is available on AWS, GCP, and Azure, and you can https://cloud.elastic.co/registration{ess-utm-params}[try it for free]."
+ ess-icon: "image:https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg[link=\"https://cloud.elastic.co/registration{ess-utm-params}\", title=\"Supported on Elasticsearch Service\"]"
+ ece-icon: "image:https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud_ece.svg[link=\"https://cloud.elastic.co/registration{ess-utm-params}\", title=\"Supported on Elastic Cloud Enterprise\"]"
+ cloud-only: "This feature is designed for indirect use by https://cloud.elastic.co/registration{ess-utm-params}[Elasticsearch Service], https://www.elastic.co/guide/en/cloud-enterprise/{ece-version-link}[Elastic Cloud Enterprise], and https://www.elastic.co/guide/en/cloud-on-k8s/current[Elastic Cloud on Kubernetes]. Direct use is not supported."
+ ess-setting-change: "image:https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg[link=\"{ess-trial}\", title=\"Supported on {ess}\"] indicates a change to a supported https://www.elastic.co/guide/en/cloud/current/ec-add-user-settings.html[user setting] for Elasticsearch Service."
+ ess-skip-section: "If you use Elasticsearch Service, skip this section. Elasticsearch Service handles these changes for you."
+ api-cloud: "https://www.elastic.co/docs/api/doc/cloud"
+ api-ece: "https://www.elastic.co/docs/api/doc/cloud-enterprise"
+ api-kibana-serverless: "https://www.elastic.co/docs/api/doc/serverless"
+ es-feature-flag: "This feature is in development and not yet available for use. This documentation is provided for informational purposes only."
+ es-ref-dir: "'{{elasticsearch-root}}/docs/reference'"
+ apm-app: "APM app"
+ uptime-app: "Uptime app"
+ synthetics-app: "Synthetics app"
+ logs-app: "Logs app"
+ metrics-app: "Metrics app"
+ infrastructure-app: "Infrastructure app"
+ siem-app: "SIEM app"
+ security-app: "Elastic Security app"
+ ml-app: "Machine Learning"
+ dev-tools-app: "Dev Tools"
+ ingest-manager-app: "Ingest Manager"
+ stack-manage-app: "Stack Management"
+ stack-monitor-app: "Stack Monitoring"
+ alerts-ui: "Alerts and Actions"
+ rules-ui: "Rules"
+ rac-ui: "Rules and Connectors"
+ connectors-ui: "Connectors"
+ connectors-feature: "Actions and Connectors"
+ stack-rules-feature: "Stack Rules"
+ user-experience: "User Experience"
+ ems: "Elastic Maps Service"
+ ems-init: "EMS"
+ hosted-ems: "Elastic Maps Server"
+ ipm-app: "Index Pattern Management"
+ ingest-pipelines: "ingest pipelines"
+ ingest-pipelines-app: "Ingest Pipelines"
+ ingest-pipelines-cap: "Ingest pipelines"
+ ls-pipelines: "Logstash pipelines"
+ ls-pipelines-app: "Logstash Pipelines"
+ maint-windows: "maintenance windows"
+ maint-windows-app: "Maintenance Windows"
+ maint-windows-cap: "Maintenance windows"
+ custom-roles-app: "Custom Roles"
+ data-source: "data view"
+ data-sources: "data views"
+ data-source-caps: "Data View"
+ data-sources-caps: "Data Views"
+ data-source-cap: "Data view"
+ data-sources-cap: "Data views"
+ project-settings: "Project settings"
+ manage-app: "Management"
+ index-manage-app: "Index Management"
+ data-views-app: "Data Views"
+ rules-app: "Rules"
+ saved-objects-app: "Saved Objects"
+ tags-app: "Tags"
+ api-keys-app: "API keys"
+ transforms-app: "Transforms"
+ connectors-app: "Connectors"
+ files-app: "Files"
+ reports-app: "Reports"
+ maps-app: "Maps"
+ alerts-app: "Alerts"
+ crawler: "Enterprise Search web crawler"
+ ents: "Enterprise Search"
+ app-search-crawler: "App Search web crawler"
+ agent: "Elastic Agent"
+ agents: "Elastic Agents"
+ fleet: "Fleet"
+ fleet-server: "Fleet Server"
+ integrations-server: "Integrations Server"
+ ingest-manager: "Ingest Manager"
+ ingest-management: "ingest management"
+ package-manager: "Elastic Package Manager"
+ integrations: "Integrations"
+ package-registry: "Elastic Package Registry"
+ artifact-registry: "Elastic Artifact Registry"
+ aws: "AWS"
+ stack: "Elastic Stack"
+ xpack: "X-Pack"
+ es: "Elasticsearch"
+ kib: "Kibana"
+ esms: "Elastic Stack Monitoring Service"
+ esms-init: "ESMS"
+ ls: "Logstash"
+ beats: "Beats"
+ auditbeat: "Auditbeat"
+ filebeat: "Filebeat"
+ heartbeat: "Heartbeat"
+ metricbeat: "Metricbeat"
+ packetbeat: "Packetbeat"
+ winlogbeat: "Winlogbeat"
+ functionbeat: "Functionbeat"
+ journalbeat: "Journalbeat"
+ es-sql: "Elasticsearch SQL"
+ esql: "ES|QL"
+ elastic-agent: "Elastic Agent"
+ k8s: "Kubernetes"
+ log-driver-long: "Elastic Logging Plugin for Docker"
+ security: "X-Pack security"
+ security-features: "security features"
+ operator-feature: "operator privileges feature"
+ es-security-features: "Elasticsearch security features"
+ stack-security-features: "Elastic Stack security features"
+ endpoint-sec: "Endpoint Security"
+ endpoint-cloud-sec: "Endpoint and Cloud Security"
+ elastic-defend: "Elastic Defend"
+ elastic-sec: "Elastic Security"
+ elastic-endpoint: "Elastic Endpoint"
+ swimlane: "Swimlane"
+ sn: "ServiceNow"
+ sn-itsm: "ServiceNow ITSM"
+ sn-itom: "ServiceNow ITOM"
+ sn-sir: "ServiceNow SecOps"
+ jira: "Jira"
+ ibm-r: "IBM Resilient"
+ webhook: "Webhook"
+ webhook-cm: "Webhook - Case Management"
+ opsgenie: "Opsgenie"
+ bedrock: "Amazon Bedrock"
+ gemini: "Google Gemini"
+ hive: "TheHive"
+ monitoring: "X-Pack monitoring"
+ monitor-features: "monitoring features"
+ stack-monitor-features: "Elastic Stack monitoring features"
+ watcher: "Watcher"
+ alert-features: "alerting features"
+ reporting: "X-Pack reporting"
+ report-features: "reporting features"
+ graph: "X-Pack graph"
+ graph-features: "graph analytics features"
+ searchprofiler: "Search Profiler"
+ xpackml: "X-Pack machine learning"
+ ml: "machine learning"
+ ml-cap: "Machine learning"
+ ml-init: "ML"
+ ml-features: "machine learning features"
+ stack-ml-features: "Elastic Stack machine learning features"
+ ccr: "cross-cluster replication"
+ ccr-cap: "Cross-cluster replication"
+ ccr-init: "CCR"
+ ccs: "cross-cluster search"
+ ccs-cap: "Cross-cluster search"
+ ccs-init: "CCS"
+ ilm: "index lifecycle management"
+ ilm-cap: "Index lifecycle management"
+ ilm-init: "ILM"
+ dlm: "data lifecycle management"
+ dlm-cap: "Data lifecycle management"
+ dlm-init: "DLM"
+ search-snap: "searchable snapshot"
+ search-snaps: "searchable snapshots"
+ search-snaps-cap: "Searchable snapshots"
+ slm: "snapshot lifecycle management"
+ slm-cap: "Snapshot lifecycle management"
+ slm-init: "SLM"
+ rollup-features: "data rollup features"
+ ipm: "index pattern management"
+ ipm-cap: "Index pattern"
+ rollup: "rollup"
+ rollup-cap: "Rollup"
+ rollups: "rollups"
+ rollups-cap: "Rollups"
+ rollup-job: "rollup job"
+ rollup-jobs: "rollup jobs"
+ rollup-jobs-cap: "Rollup jobs"
+ dfeed: "datafeed"
+ dfeeds: "datafeeds"
+ dfeed-cap: "Datafeed"
+ dfeeds-cap: "Datafeeds"
+ ml-jobs: "machine learning jobs"
+ ml-jobs-cap: "Machine learning jobs"
+ anomaly-detect: "anomaly detection"
+ anomaly-detect-cap: "Anomaly detection"
+ anomaly-job: "anomaly detection job"
+ anomaly-jobs: "anomaly detection jobs"
+ anomaly-jobs-cap: "Anomaly detection jobs"
+ dataframe: "data frame"
+ dataframes: "data frames"
+ dataframe-cap: "Data frame"
+ dataframes-cap: "Data frames"
+ watcher-transform: "payload transform"
+ watcher-transforms: "payload transforms"
+ watcher-transform-cap: "Payload transform"
+ watcher-transforms-cap: "Payload transforms"
+ transform: "transform"
+ transforms: "transforms"
+ transform-cap: "Transform"
+ transforms-cap: "Transforms"
+ dataframe-transform: "transform"
+ dataframe-transform-cap: "Transform"
+ dataframe-transforms: "transforms"
+ dataframe-transforms-cap: "Transforms"
+ dfanalytics-cap: "Data frame analytics"
+ dfanalytics: "data frame analytics"
+ dataframe-analytics-config: "'{dataframe} analytics config'"
+ dfanalytics-job: "'{dataframe} analytics job'"
+ dfanalytics-jobs: "'{dataframe} analytics jobs'"
+ dfanalytics-jobs-cap: "'{dataframe-cap} analytics jobs'"
+ cdataframe: "continuous data frame"
+ cdataframes: "continuous data frames"
+ cdataframe-cap: "Continuous data frame"
+ cdataframes-cap: "Continuous data frames"
+ cdataframe-transform: "continuous transform"
+ cdataframe-transforms: "continuous transforms"
+ cdataframe-transforms-cap: "Continuous transforms"
+ ctransform: "continuous transform"
+ ctransform-cap: "Continuous transform"
+ ctransforms: "continuous transforms"
+ ctransforms-cap: "Continuous transforms"
+ oldetection: "outlier detection"
+ oldetection-cap: "Outlier detection"
+ olscore: "outlier score"
+ olscores: "outlier scores"
+ fiscore: "feature influence score"
+ evaluatedf-api: "evaluate {dataframe} analytics API"
+ evaluatedf-api-cap: "Evaluate {dataframe} analytics API"
+ binarysc: "binary soft classification"
+ binarysc-cap: "Binary soft classification"
+ regression: "regression"
+ regression-cap: "Regression"
+ reganalysis: "regression analysis"
+ reganalysis-cap: "Regression analysis"
+ depvar: "dependent variable"
+ feature-var: "feature variable"
+ feature-vars: "feature variables"
+ feature-vars-cap: "Feature variables"
+ classification: "classification"
+ classification-cap: "Classification"
+ classanalysis: "classification analysis"
+ classanalysis-cap: "Classification analysis"
+ infer-cap: "Inference"
+ infer: "inference"
+ lang-ident-cap: "Language identification"
+ lang-ident: "language identification"
+ data-viz: "Data Visualizer"
+ file-data-viz: "File Data Visualizer"
+ feat-imp: "feature importance"
+ feat-imp-cap: "Feature importance"
+ nlp: "natural language processing"
+ nlp-cap: "Natural language processing"
+ apm-agent: "APM agent"
+ apm-go-agent: "Elastic APM Go agent"
+ apm-go-agents: "Elastic APM Go agents"
+ apm-ios-agent: "Elastic APM iOS agent"
+ apm-ios-agents: "Elastic APM iOS agents"
+ apm-java-agent: "Elastic APM Java agent"
+ apm-java-agents: "Elastic APM Java agents"
+ apm-dotnet-agent: "Elastic APM .NET agent"
+ apm-dotnet-agents: "Elastic APM .NET agents"
+ apm-node-agent: "Elastic APM Node.js agent"
+ apm-node-agents: "Elastic APM Node.js agents"
+ apm-php-agent: "Elastic APM PHP agent"
+ apm-php-agents: "Elastic APM PHP agents"
+ apm-py-agent: "Elastic APM Python agent"
+ apm-py-agents: "Elastic APM Python agents"
+ apm-ruby-agent: "Elastic APM Ruby agent"
+ apm-ruby-agents: "Elastic APM Ruby agents"
+ apm-rum-agent: "Elastic APM Real User Monitoring (RUM) JavaScript agent"
+ apm-rum-agents: "Elastic APM RUM JavaScript agents"
+ apm-lambda-ext: "Elastic APM AWS Lambda extension"
+ project-monitors: "project monitors"
+ project-monitors-cap: "Project monitors"
+ private-location: "Private Location"
+ private-locations: "Private Locations"
+ pwd: "YOUR_PASSWORD"
+ esh: "ES-Hadoop"
+ default-dist: "default distribution"
+ oss-dist: "OSS-only distribution"
+ observability: "Observability"
+ api-request-title: "Request"
+ api-prereq-title: "Prerequisites"
+ api-description-title: "Description"
+ api-path-parms-title: "Path parameters"
+ api-query-parms-title: "Query parameters"
+ api-request-body-title: "Request body"
+ api-response-codes-title: "Response codes"
+ api-response-body-title: "Response body"
+ api-example-title: "Example"
+ api-examples-title: "Examples"
+ api-definitions-title: "Properties"
+ multi-arg: "†footnoteref:[multi-arg,This parameter accepts multiple arguments.]"
+ multi-arg-ref: "†footnoteref:[multi-arg]"
+ yes-icon: "image:https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png[Yes,20,15]"
+ no-icon: "image:https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png[No,20,15]"
+ es-repo: "https://github.com/elastic/elasticsearch/"
+ es-issue: "https://github.com/elastic/elasticsearch/issues/"
+ es-pull: "https://github.com/elastic/elasticsearch/pull/"
+ es-commit: "https://github.com/elastic/elasticsearch/commit/"
+ kib-repo: "https://github.com/elastic/kibana/"
+ kib-issue: "https://github.com/elastic/kibana/issues/"
+ kibana-issue: "'{kib-repo}issues/'"
+ kib-pull: "https://github.com/elastic/kibana/pull/"
+ kibana-pull: "'{kib-repo}pull/'"
+ kib-commit: "https://github.com/elastic/kibana/commit/"
+ ml-repo: "https://github.com/elastic/ml-cpp/"
+ ml-issue: "https://github.com/elastic/ml-cpp/issues/"
+ ml-pull: "https://github.com/elastic/ml-cpp/pull/"
+ ml-commit: "https://github.com/elastic/ml-cpp/commit/"
+ apm-repo: "https://github.com/elastic/apm-server/"
+ apm-issue: "https://github.com/elastic/apm-server/issues/"
+ apm-pull: "https://github.com/elastic/apm-server/pull/"
+ kibana-blob: "https://github.com/elastic/kibana/blob/current/"
+ apm-get-started-ref: "https://www.elastic.co/guide/en/apm/get-started/current"
+ apm-server-ref: "https://www.elastic.co/guide/en/apm/server/current"
+ apm-server-ref-v: "https://www.elastic.co/guide/en/apm/server/current"
+ apm-server-ref-m: "https://www.elastic.co/guide/en/apm/server/master"
+ apm-server-ref-62: "https://www.elastic.co/guide/en/apm/server/6.2"
+ apm-server-ref-64: "https://www.elastic.co/guide/en/apm/server/6.4"
+ apm-server-ref-70: "https://www.elastic.co/guide/en/apm/server/7.0"
+ apm-overview-ref-v: "https://www.elastic.co/guide/en/apm/get-started/current"
+ apm-overview-ref-70: "https://www.elastic.co/guide/en/apm/get-started/7.0"
+ apm-overview-ref-m: "https://www.elastic.co/guide/en/apm/get-started/master"
+ infra-guide: "https://www.elastic.co/guide/en/infrastructure/guide/current"
+ a-data-source: "a data view"
+ icon-bug: "pass:[]"
+ icon-checkInCircleFilled: "pass:[]"
+ icon-warningFilled: "pass:[]"
diff --git a/docs/extend/_migrating_dashboards_from_kibana_5_x_to_6_x.md b/docs/extend/_migrating_dashboards_from_kibana_5_x_to_6_x.md
new file mode 100644
index 000000000000..fdc6a7b93fe2
--- /dev/null
+++ b/docs/extend/_migrating_dashboards_from_kibana_5_x_to_6_x.md
@@ -0,0 +1,84 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/_migrating_dashboards_from_kibana_5_x_to_6_x.html
+---
+
+# Migrating dashboards from Kibana 5.x to 6.x [_migrating_dashboards_from_kibana_5_x_to_6_x]
+
+This section helps community Beats developers migrate Kibana 5.x dashboards to the 6.x format.
+
+In Kibana 5.x, the saved dashboards consist of multiple JSON files, one for each dashboard, search, visualization, and index-pattern. To import a dashboard into Kibana, you need to load not only the JSON file containing the dashboard, but also all its dependencies (searches, visualizations).
+
+Starting with Kibana 6.0, the dashboards are loaded by default via the Kibana API. In this case, the saved dashboard consists of a single JSON file that includes not only the dashboard content, but also all its dependencies.
+
+Because the format of the dashboards and index-pattern for Kibana 5.x differs from the format for Kibana 6.x, they are placed in different directories. Depending on the Kibana version, the 5.x or 6.x dashboards are loaded.
+
+The Kibana 5.x dashboards are placed under the `5.x` directory, which contains the following directories:
+
+* search
+* visualization
+* dashboard
+* index-pattern
+
+The Kibana 6.x and later dashboards are placed under the `default` directory, which contains the following directories:
+
+* dashboard
+* index-pattern
+
+::::{note}
+Make sure the `5.x` and `default` directories are created before running the following commands.
+::::
+
+To migrate your Kibana 5.x dashboards to Kibana 6.0 and above, you can import the dashboards into Kibana 5.6 and then export them using the Beats 6.0 version.
+
+* Start Kibana 5.6
+* Import the Kibana 5.x dashboards using the Beats 6.0 version.
+
+Before importing the dashboards, make sure you run `make update` in the Beat directory, which updates the `_meta/kibana` directory. It generates the index-pattern from the `fields.yml` file and places it under the `5.x/index-pattern` and `default/index-pattern` directories. In the case of Metricbeat, Filebeat, and Auditbeat, it collects the dashboards from all the modules into the `_meta/kibana` directory.
+
+```shell
+make update
+```
+
+Then load all the Beat’s dashboards. For example, to load the Metricbeat rabbitmq dashboards together with the Metricbeat index-pattern into Kibana 5.6, using the Kibana API:
+
+```shell
+make update
+./metricbeat setup -E setup.dashboards.directory=_meta/kibana
+```
+
+* Export the dashboards using the Beats 6.0 version.
+
+You can export the dashboards via the Kibana API by using the [export_dashboards.go](https://github.com/elastic/beats/blob/main/dev-tools/cmd/dashboards/export_dashboards.go) application.
+
+For example, to export the Metricbeat rabbitmq dashboard:
+
+```shell
+cd beats/metricbeat
+go run ../dev-tools/cmd/dashboards/export_dashboards.go -dashboards Metricbeat-Rabbitmq -output
+module/rabbitmq/_meta/kibana/default/Metricbeat-Rabbitmq.json <1>
+```
+
+1. `Metricbeat-Rabbitmq` is the ID of the dashboard that you want to export.
+
+
+Note: You can get the dashboard ID from the URL of the dashboard in Kibana. Depending on the Kibana version the dashboard was created with, the ID consists of a name or of random characters that can be separated by `-`.
+
+This command creates a single JSON file (`Metricbeat-Rabbitmq.json`) that contains the dashboard and all its dependencies, such as searches and visualizations. The name of the output file matches the dashboard ID, with a `.json` extension.
+
+Starting with Beats 6.0.0, you can create a `yml` file for each module or for the entire Beat with all the dashboards. Below is an example of the `module.yml` file for the system module in Metricbeat.
+
+```yaml
+dashboards:
+ - id: Metricbeat-system-overview <1>
+ file: Metricbeat-system-overview.json <2>
+
+ - id: 79ffd6e0-faa0-11e6-947f-177f697178b8
+ file: Metricbeat-host-overview.json
+
+ - id: CPU-slash-Memory-per-container
+ file: Metricbeat-docker-overview.json
+```
+
+1. Dashboard ID.
+2. The JSON file where the dashboard is saved on disk.
+
+
+Using the yml file, you can export all the dashboards for a single module or for the entire Beat using a single command:
+
+```shell
+cd metricbeat/module/system
+go run ../../../dev-tools/cmd/dashboards/export_dashboards.go -yml module.yml
+```
+
diff --git a/docs/extend/archive-dashboards.md b/docs/extend/archive-dashboards.md
new file mode 100644
index 000000000000..09b36e0606e6
--- /dev/null
+++ b/docs/extend/archive-dashboards.md
@@ -0,0 +1,19 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/archive-dashboards.html
+---
+
+# Archiving Your Beat Dashboards [archive-dashboards]
+
+The Kibana dashboards for the Elastic Beats are saved under the `kibana` directory. To create a zip archive with the dashboards, including visualizations, searches, and the index pattern, you can run the following command in the Beat repository:
+
+```shell
+make package-dashboards
+```
+
+The Makefile is part of libbeat, which means that community Beats contributors can use the commands shown here to archive dashboards. The dashboards must be available under the `kibana` directory.
+
+Another option would be to create a repository only with the dashboards, and use the GitHub release functionality to create a zip archive.
+
+Share the Kibana dashboards archive with the community, so other users can use your cool Kibana visualizations!
+
diff --git a/docs/extend/build-dashboards.md b/docs/extend/build-dashboards.md
new file mode 100644
index 000000000000..be67376a072b
--- /dev/null
+++ b/docs/extend/build-dashboards.md
@@ -0,0 +1,37 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/build-dashboards.html
+---
+
+# Building Your Own Beat Dashboards [build-dashboards]
+
+::::{note}
+If you want to modify a dashboard that comes with a Beat, it’s better to modify a copy of the dashboard because the Beat overwrites the dashboards during the setup phase in order to have the latest version. To duplicate a dashboard, use the `Clone` button at the top of the page.
+::::
+
+
+Before building your own dashboards or customizing the existing ones, you need to load:
+
+* the Beat index pattern, which specifies how Kibana should display the Beat fields
+* the Beat dashboards that you want to customize
+
+For the Elastic Beats, the index pattern is available in the Beat package under `kibana/*/index-pattern`. The index-pattern is automatically generated from the `fields.yml` file, available in the Beat package. For more details check the [generate index pattern](/extend/generate-index-pattern.md) section.
+
+All Beats dashboards, visualizations and saved searches must follow common naming conventions:
+
+* Dashboard names have the prefix `[BeatName Module]`, e.g. `[Filebeat Nginx] Access logs`
+* Visualizations and searches have the suffix `[BeatName Module]`, e.g. `Top processes [Filebeat Nginx]`
+
+::::{note}
+You can set a custom name (and skip the suffix) for a visualization placed on a dashboard. The original visualization will stay intact.
+::::
+
+
+The naming convention rules can be verified with the tool `mage check`. The command fails if it detects:
+
+* empty description on a dashboard
+* unexpected dashboard title format (missing prefix `[BeatName ModuleName]`)
+* unexpected visualization title format (missing suffix `[BeatName Module]`)
+
+After creating your own dashboards in Kibana, you can [export the Kibana dashboards](/extend/export-dashboards.md) to a local directory, and then [archive the dashboards](/extend/archive-dashboards.md) so that you can share them with the community.
+
diff --git a/docs/extend/community-beats.md b/docs/extend/community-beats.md
new file mode 100644
index 000000000000..279a8e5df5e5
--- /dev/null
+++ b/docs/extend/community-beats.md
@@ -0,0 +1,336 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/community-beats.html
+---
+
+# Community {{beats}} [community-beats]
+
+::::{admonition}
+**Custom Beat generator code no longer available in 8.0 and later**
+
+The custom Beat generator was a helper tool that allowed developers to bootstrap their custom {{beats}}. This tool was deprecated in 7.16 and is no longer available starting in 8.0.
+
+Developers can continue to create custom {{beats}} to address specific and targeted use cases. If you need to create a Beat from scratch, you can use the custom Beat generator tool available in version 7.16 or 7.17 to generate the custom Beat, then upgrade its various components to the 8.x release.
+
+::::
+
+
+This page lists some of the {{beats}} developed by the open source community.
+
+Have a question about developing a community Beat? You can post questions and discuss issues in the [{{beats}} discussion forum](https://discuss.elastic.co/tags/c/elastic-stack/beats/28/beats-development).
+
+Have you created a Beat that’s not listed? Add the name and description of your Beat to the source document for [Community {{beats}}](https://github.com/elastic/beats/blob/main/libbeat/docs/communitybeats.asciidoc) and [open a pull request](https://help.github.com/articles/using-pull-requests) in the [{{beats}} GitHub repository](https://github.com/elastic/beats) to get your change merged. When you’re ready, go ahead and [announce](https://discuss.elastic.co/c/announcements) your new Beat in the Elastic discussion forum.
+
+::::{note}
+Elastic provides no warranty or support for community-sourced {{beats}}.
+::::
+
+
+[amazonbeat](https://github.com/awormuth/amazonbeat)
+: Reads data from a specified Amazon product.
+
+[apachebeat](https://github.com/radoondas/apachebeat)
+: Reads status from Apache HTTPD server-status.
+
+[apexbeat](https://github.com/verticle-io/apexbeat)
+: Extracts configurable contextual data and metrics from Java applications via the [APEX](http://toolkits.verticle.io) toolkit.
+
+[browserbeat](https://github.com/MelonSmasher/browserbeat)
+: Reads and ships browser history (Chrome, Firefox, & Safari) to an Elastic output.
+
+[cborbeat](https://github.com/toravir/cborbeat)
+: Reads from cbor encoded files (specifically log files). More: [CBOR Encoding](https://cbor.io) [Decoder](https://github.com/toravir/csd)
+
+[cloudflarebeat](https://github.com/hartfordfive/cloudflarebeat)
+: Indexes log entries from the Cloudflare Enterprise Log Share API.
+
+[cloudfrontbeat](https://github.com/jarl-tornroos/cloudfrontbeat)
+: Reads log events from Amazon Web Services [CloudFront](https://aws.amazon.com/cloudfront/).
+
+[cloudtrailbeat](https://github.com/aidan-/cloudtrailbeat)
+: Reads events from Amazon Web Services' [CloudTrail](https://aws.amazon.com/cloudtrail/).
+
+[cloudwatchmetricbeat](https://github.com/narmitech/cloudwatchmetricbeat)
+: A beat for Amazon Web Services' [CloudWatch Metrics](https://aws.amazon.com/cloudwatch/details/#other-aws-resource-monitoring).
+
+[cloudwatchlogsbeat](https://github.com/e-travel/cloudwatchlogsbeat)
+: Reads log events from Amazon Web Services' [CloudWatch Logs](https://aws.amazon.com/cloudwatch/details/#log-monitoring).
+
+[collectbeat](https://github.com/eBay/collectbeat)
+: Adds discovery on top of Filebeat and Metricbeat in environments like Kubernetes.
+
+[connbeat](https://github.com/raboof/connbeat)
+: Exposes metadata about TCP connections.
+
+[consulbeat](https://github.com/Pravoru/consulbeat)
+: Reads services health checks from consul and pushes them to Elastic.
+
+[discobeat](https://github.com/hellmouthengine/discobeat)
+: Reads messages from Discord and indexes them in Elasticsearch
+
+[dockbeat](https://github.com/Ingensi/dockbeat)
+: Reads Docker container statistics and indexes them in Elasticsearch.
+
+[earthquakebeat](https://github.com/radoondas/earthquakebeat)
+: Pulls data from [USGS](https://earthquake.usgs.gov/fdsnws/event/1/) earthquake API.
+
+[elasticbeat](https://github.com/radoondas/elasticbeat)
+: Reads status from an Elasticsearch cluster and indexes them in Elasticsearch.
+
+[envoyproxybeat](https://github.com/berfinsari/envoyproxybeat)
+: Reads stats from the Envoy Proxy and indexes them into Elasticsearch.
+
+[etcdbeat](https://github.com/gamegos/etcdbeat)
+: Reads stats from the Etcd v2 API and indexes them into Elasticsearch.
+
+[etherbeat](https://gitlab.com/hatricker/etherbeat)
+: Reads blocks from Ethereum compatible blockchain and indexes them into Elasticsearch.
+
+[execbeat](https://github.com/christiangalsterer/execbeat)
+: Periodically executes shell commands and sends the standard output and standard error to Logstash or Elasticsearch.
+
+[factbeat](https://github.com/jarpy/factbeat)
+: Collects facts from [Facter](https://github.com/puppetlabs/facter).
+
+[fastcombeat](https://github.com/ctindel/fastcombeat)
+: Periodically gather internet download speed from [fast.com](https://fast.com).
+
+[fileoccurencebeat](https://github.com/cloudronics/fileoccurancebeat)
+: Checks for file existence recursively under a given directory, handy while handling queues/pipeline buffers.
+
+[flowbeat](https://github.com/FStelzer/flowbeat)
+: Collects, parses, and indexes [sflow](http://www.sflow.org/index.php) samples.
+
+[gabeat](https://github.com/GeneralElectric/GABeat)
+: Collects data from Google Analytics Realtime API.
+
+[gcsbeat](https://github.com/GoogleCloudPlatform/gcsbeat)
+: Reads data from [Google Cloud Storage](https://cloud.google.com/storage/) buckets.
+
+[gelfbeat](https://github.com/threatstack/gelfbeat)
+: Collects and parses GELF-encoded UDP messages.
+
+[githubbeat](https://github.com/josephlewis42/githubbeat)
+: Easily monitors GitHub repository activity.
+
+[gpfsbeat](https://github.com/hpcugent/gpfsbeat)
+: Collects GPFS metric and quota information.
+
+[hackerbeat](https://github.com/ullaakut/hackerbeat)
+: Indexes the top stories of HackerNews into an ElasticSearch instance.
+
+[hsbeat](https://github.com/YaSuenag/hsbeat)
+: Reads all performance counters in Java HotSpot VM.
+
+[httpbeat](https://github.com/christiangalsterer/httpbeat)
+: Polls multiple HTTP(S) endpoints and sends the data to Logstash or Elasticsearch. Supports all HTTP methods and proxies.
+
+[hsnburrowbeat](https://github.com/hsngerami/hsnburrowbeat)
+: Monitors Kafka consumer lag for Burrow V1.0.0(API V3).
+
+[hwsensorsbeat](https://github.com/jasperla/hwsensorsbeat)
+: Reads sensors information from OpenBSD.
+
+[icingabeat](https://github.com/icinga/icingabeat)
+: Icingabeat ships events and states from Icinga 2 to Elasticsearch or Logstash.
+
+[IIBBeat](https://github.com/visasimbu/IIBBeat)
+: Periodically executes shell commands or batch commands to collect IBM Integration node, Integration server, app status, bar file deployment time and bar file location to Logstash or Elasticsearch.
+
+[iobeat](https://github.com/devopsmakers/iobeat)
+: Reads IO stats from /proc/diskstats on Linux.
+
+[jmxproxybeat](https://github.com/radoondas/jmxproxybeat)
+: Reads Tomcat JMX metrics exposed over *JMX Proxy Servlet* to HTTP.
+
+[journalbeat](https://github.com/mheese/journalbeat)
+: Used for log shipping from systemd/journald based Linux systems.
+
+[kafkabeat](https://github.com/justsocialapps/kafkabeat)
+: Reads data from Kafka topics.
+
+[kafkabeat2](https://github.com/arkady-emelyanov/kafkabeat)
+: Reads data (json or plain) from Kafka topics.
+
+[krakenbeat](https://github.com/PPACI/krakenbeat)
+: Collect information on each transaction on the Kraken crypto platform.
+
+[lmsensorsbeat](https://github.com/eskibars/lmsensorsbeat)
+: Collects data from lm-sensors (such as CPU temperatures, fan speeds, and voltages from i2c and smbus).
+
+[logstashbeat](https://github.com/consulthys/logstashbeat)
+: Collects data from Logstash monitoring API (v5 onwards) and indexes them in Elasticsearch.
+
+[macwifibeat](https://github.com/bozdag/macwifibeat)
+: Reads various indicators for a MacBook’s WiFi Signal Strength
+
+[mcqbeat](https://github.com/yedamao/mcqbeat)
+: Reads the status of queues from memcacheq.
+
+[merakibeat](https://developer.cisco.com/codeexchange/github/repo/CiscoDevNet/merakibeat)
+: Collects [wireless health](https://dashboard.meraki.com/api_docs#wireless-health) and users [location analytics](https://documentation.meraki.com/MR/Monitoring_and_Reporting/Scanning_API) data using Cisco Meraki APIs.
+
+[mesosbeat](https://github.com/berfinsari/mesosbeat)
+: Reads stats from the Mesos API and indexes them into Elasticsearch.
+
+[mongobeat](https://github.com/scottcrespo/mongobeat)
+: Monitors MongoDB instances and can be configured to send multiple document types to Elasticsearch.
+
+[mqttbeat](https://github.com/nathan-K-/mqttbeat)
+: Add messages from mqtt topics to Elasticsearch.
+
+[mysqlbeat](https://github.com/adibendahan/mysqlbeat)
+: Run any query on MySQL and send results to Elasticsearch.
+
+[nagioscheckbeat](https://github.com/PhaedrusTheGreek/nagioscheckbeat)
+: For Nagios checks and performance data.
+
+[natsbeat](https://github.com/nfvsap/natsbeat)
+: Collects data from NATS monitoring endpoints
+
+[netatmobeat](https://github.com/radoondas/netatmobeat)
+: Reads data from Netatmo weather station.
+
+[netbeat](https://github.com/hmschreck/netbeat)
+: Reads configurable data from SNMP-enabled devices.
+
+[nginxbeat](https://github.com/mrkschan/nginxbeat)
+: Reads status from Nginx.
+
+[nginxupstreambeat](https://github.com/2Fast2BCn/nginxupstreambeat)
+: Reads upstream status from nginx upstream module.
+
+[nsqbeat](https://github.com/mschneider82/nsqbeat)
+: Reads data from a NSQ topic.
+
+[nvidiagpubeat](https://github.com/eBay/nvidiagpubeat)
+: Uses nvidia-smi to grab metrics of NVIDIA GPUs.
+
+[o365beat](https://github.com/counteractive/o365beat)
+: Ships Office 365 logs from the O365 Management Activities API
+
+[openconfigbeat](https://github.com/aristanetworks/openconfigbeat)
+: Streams data from [OpenConfig](http://openconfig.net)-enabled network devices
+
+[openvpnbeat](https://github.com/nabeel-shakeel/openvpnbeat)
+: Collects OpenVPN connection metrics
+
+[owmbeat](https://github.com/radoondas/owmbeat)
+: Open Weather Map beat to pull weather data from all around the world and store and visualize them in Elastic Stack
+
+[packagebeat](https://github.com/joehillen/packagebeat)
+: Collects information about system packages from package managers.
+
+[perfstatbeat](https://github.com/WuerthIT/perfstatbeat)
+: Collects performance metrics on the AIX operating system.
+
+[phishbeat](https://github.com/stric-co/phishbeat)
+: Monitors Certificate Transparency logs for phishing and defamatory domains.
+
+[phpfpmbeat](https://github.com/kozlice/phpfpmbeat)
+: Reads status from PHP-FPM.
+
+[pingbeat](https://github.com/joshuar/pingbeat)
+: Sends ICMP pings to a list of targets and stores the round trip time (RTT) in Elasticsearch.
+
+[powermaxbeat](https://github.com/kckecheng/powermaxbeat)
+: Collects performance metrics from Dell EMC PowerMax storage array.
+
+[processbeat](https://github.com/pawankt/processbeat)
+: Collects process health status and performance.
+
+[prombeat](https://github.com/carlpett/prombeat)
+: Indexes [Prometheus](https://prometheus.io) metrics.
+
+[prometheusbeat](https://github.com/infonova/prometheusbeat)
+: Send Prometheus metrics to Elasticsearch via the remote write feature.
+
+[protologbeat](https://github.com/hartfordfive/protologbeat)
+: Accepts structured and unstructured logs via UDP or TCP. Can also be used to receive syslog messages or GELF formatted messages. (To be used as a successor to udplogbeat)
+
+[pubsubbeat](https://github.com/GoogleCloudPlatform/pubsubbeat)
+: Reads data from [Google Cloud Pub/Sub](https://cloud.google.com/pubsub/).
+
+[redditbeat](https://github.com/voigt/redditbeat)
+: Collects new Reddit Submissions of one or multiple Subreddits.
+
+[redisbeat](https://github.com/chrsblck/redisbeat)
+: Used for Redis monitoring.
+
+[retsbeat](https://github.com/consulthys/retsbeat)
+: Collects counts of [RETS](http://www.reso.org) resource/class records from [Multiple Listing Service](https://en.wikipedia.org/wiki/Multiple_listing_service) (MLS) servers.
+
+[rsbeat](https://github.com/yourdream/rsbeat)
+: Ships Redis slow logs to Elasticsearch for analysis in Kibana.
+
+[safecastbeat](https://github.com/radoondas/safecastbeat)
+: Pulls data from the Safecast API and stores it in Elasticsearch.
+
+[saltbeat](https://github.com/martinhoefling/saltbeat)
+: Reads events from salt master event bus.
+
+[serialbeat](https://github.com/benben/serialbeat)
+: Reads from a serial device.
+
+[servicebeat](https://github.com/Corwind/servicebeat)
+: Send services status to Elasticsearch
+
+[springbeat](https://github.com/consulthys/springbeat)
+: Collects health and metrics data from Spring Boot applications running with the actuator module.
+
+[springboot2beat](https://github.com/philkra/springboot2beat)
+: Query and accumulate all metrics endpoints of a Spring Boot 2 web app via the web channel, leveraging the [mircometer.io](http://micrometer.io/) metrics facade.
+
+[statsdbeat](https://github.com/sentient/statsdbeat)
+: Receives UDP [statsd](https://github.com/etsy/statsd/wiki) events from a statsd client.
+
+[supervisorctlbeat](https://github.com/Corwind/supervisorctlbeat.git)
+: This beat aims to parse the supervisorctl status command output and send it to elasticsearch for indexation
+
+[terminalbeat](https://github.com/live-wire/terminalbeat)
+: Runs an external command and forwards the [stdout](https://www.computerhope.com/jargon/s/stdout.htm) for the same to Elasticsearch/Logstash.
+
+[timebeat](https://timebeat.app/download.php)
+: NTP and PTP clock synchronisation beat that reports accuracy metrics to Elastic. Includes Kibana dashboards.
+
+[tracebeat](https://github.com/berfinsari/tracebeat)
+: Reads traceroute output and indexes it into Elasticsearch.
+
+[trivybeat](https://github.com/DmitryZ-outten/trivybeat)
+: Fetches Docker containers running on the same machine, scans them for CVEs using the Trivy server, and indexes the results into Elasticsearch.
+
+[twitterbeat](https://github.com/buehler/go-elastic-twitterbeat)
+: Reads tweets for specified screen names.
+
+[udpbeat](https://github.com/gravitational/udpbeat)
+: Ships structured logs via UDP.
+
+[udplogbeat](https://github.com/hartfordfive/udplogbeat)
+: Accept events via local UDP socket (in plain-text or JSON with ability to enforce schemas). Can also be used for applications only supporting syslog logging.
+
+[unifiedbeat](https://github.com/cleesmith/unifiedbeat)
+: Reads records from Unified2 binary files generated by network intrusion detection software and indexes the records in Elasticsearch.
+
+[unitybeat](https://github.com/kckecheng/unitybeat)
+: Collects performance metrics from Dell EMC Unity storage array.
+
+[uwsgibeat](https://github.com/mrkschan/uwsgibeat)
+: Reads stats from uWSGI.
+
+[varnishlogbeat](https://github.com/phenomenes/varnishlogbeat)
+: Reads log data from a Varnish instance and ships it to Elasticsearch.
+
+[varnishstatbeat](https://github.com/phenomenes/varnishstatbeat)
+: Reads stats data from a Varnish instance and ships it to Elasticsearch.
+
+[vaultbeat](https://gitlab.com/msvechla/vaultbeat)
+: Collects performance metrics and statistics from Hashicorp’s Vault.
+
+[wmibeat](https://github.com/eskibars/wmibeat)
+: Uses WMI to grab your favorite, configurable Windows metrics.
+
+[yarnbeat](https://github.com/IBM/yarnbeat)
+: Polls YARN and MapReduce APIs for cluster and application metrics.
+
+[zfsbeat](https://github.com/maireanu/zfsbeat)
+: Querying ZFS Storage and Pool Status
diff --git a/docs/extend/contributing-docs.md b/docs/extend/contributing-docs.md
new file mode 100644
index 000000000000..c8201b2335c4
--- /dev/null
+++ b/docs/extend/contributing-docs.md
@@ -0,0 +1,84 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/contributing-docs.html
+applies_to:
+ stack: discontinued 8.18
+---
+
+# Contributing to the docs [contributing-docs]
+
+The Beats documentation follows the tagging guidelines described in the [Docs HOWTO](https://github.com/elastic/docs/blob/master/README.asciidoc). However, it extends these capabilities in a couple of ways:
+
+* The documentation makes extensive use of [AsciiDoc conditionals](https://docs.asciidoctor.org/asciidoc/latest/directives/conditionals/) to provide content that is reused across multiple books. This means that there might not be a single source file for each published HTML page. Some files are shared across multiple books, either as complete pages or snippets. For more details, refer to [Where to find the Beats docs source](#where-to-find-files).
+* The documentation includes some files that are generated from YAML source or pieced together from content that lives in `_meta` directories under the code (for example, the module and exported fields documentation). For more details, refer to [Generated docs](#generated-docs).
+
+
+## Where to find the Beats docs source [where-to-find-files]
+
+Because the Beats documentation makes use of shared content, doc generation scripts, and componentization, the source files are located in several places:
+
+| Documentation | Location of source files |
+| --- | --- |
+| Main docs for the Beat, including index files | `/docs` |
+| Shared docs and Beats Platform Reference | `libbeat/docs` |
+| Processor docs | `docs` folders under processors in `libbeat/processors/`,`x-pack//processors/`, and `x-pack/libbeat/processors/` |
+| Output docs | `docs` folders under outputs in `libbeat/outputs/` |
+| Module docs | `_meta` folders under modules and datasets in `libbeat/module/`,`/module/`, and `x-pack//module/` |
+
+The [conf.yaml](https://github.com/elastic/docs/blob/master/conf.yaml) file in the `docs` repo shows all the resources used to build each book. This file is used to drive the classic docs build and is the source of truth for file locations.
+
+::::{tip}
+If you can’t find the source for a page you want to update, go to the published page at www.elastic.co and click the Edit link to navigate to the source.
+::::
+
+
+The Beats documentation build also has dependencies on the following files in the [docs](https://github.com/elastic/docs) repo:
+
+* `shared/versions/stack/.asciidoc`
+* `shared/attributes.asciidoc`
+
+
+## Generated docs [generated-docs]
+
+After updating `docs.asciidoc` files in `_meta` directories, you must run the doc collector scripts to regenerate the docs.
+
+Make sure you [set up your Beats development environment](./index.md#setting-up-dev-environment) and use the correct Go version. The Go version is listed in the `version.asciidoc` file for the branch you want to update.
+
+To run the docs collector scripts, change to the beats directory and run:
+
+`make update`
+
+::::{warning}
+The `make update` command overwrites files in the `docs` directories **without warning**. If you accidentally update a generated file and run `make update`, your changes will be overwritten.
+::::
+
+
+To format your files, you might also need to run this command:
+
+`make fmt`
+
+The make command calls the following scripts to generate the docs:
+
+[auditbeat/scripts/docs_collector.py](https://github.com/elastic/beats/blob/main/auditbeat/scripts/docs_collector.py) generates:
+
+* `auditbeat/docs/modules_list.asciidoc`
+* `auditbeat/docs/modules/*.asciidoc`
+
+[filebeat/scripts/docs_collector.py](https://github.com/elastic/beats/blob/main/filebeat/scripts/docs_collector.py) generates:
+
+* `filebeat/docs/modules_list.asciidoc`
+* `filebeat/docs/modules/*.asciidoc`
+
+[metricbeat/scripts/mage/docs_collector.go](https://github.com/elastic/beats/blob/main/metricbeat/scripts/mage/docs_collector.go) generates:
+
+* `metricbeat/docs/modules_list.asciidoc`
+* `metricbeat/docs/modules/*.asciidoc`
+
+[libbeat/scripts/generate_fields_docs.py](https://github.com/elastic/beats/blob/main/libbeat/scripts/generate_fields_docs.py) generates:
+
+* `auditbeat/docs/fields.asciidoc`
+* `filebeat/docs/fields.asciidoc`
+* `heartbeat/docs/fields.asciidoc`
+* `metricbeat/docs/fields.asciidoc`
+* `packetbeat/docs/fields.asciidoc`
+* `winlogbeat/docs/fields.asciidoc`
diff --git a/docs/extend/creating-metricbeat-module.md b/docs/extend/creating-metricbeat-module.md
new file mode 100644
index 000000000000..69accfff000c
--- /dev/null
+++ b/docs/extend/creating-metricbeat-module.md
@@ -0,0 +1,176 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/creating-metricbeat-module.html
+---
+
+# Creating a Metricbeat Module [creating-metricbeat-module]
+
+Metricbeat modules are used to group multiple metricsets together and to implement shared functionality of the metricsets. In most cases, no implementation of the module is needed and the default module implementation is automatically picked.
+
+It’s important to complete the configuration and documentation files for a module. When you create a new metricset by running `make create-metricset`, default versions of these files are generated in the `_meta` directory.
+
+
+## Module Files [_module_files]
+
+* `config.yml` and `config.reference.yml`
+* `docs.asciidoc`
+* `fields.yml`
+
+After updating any of these files, make sure you run `make update` in your beat directory so all generated files are updated.
+
+
+### config.yml and config.reference.yml [_config_yml_and_config_reference_yml]
+
+The `config.yml` file contains the basic configuration options and looks like this:
+
+```yaml
+- module: {module}
+ metricsets: ["{metricset}"]
+ enabled: false
+ period: 10s
+ hosts: ["localhost"]
+```
+
+It contains the module name, your metricset, and the default period. If you have multiple metricsets in your module, make sure that you extend the metricset array:
+
+```yaml
+ metricsets: ["{metricset1}", "{metricset2}"]
+```
+
+The `config.reference.yml` file is optional and by default has the same content as the `config.yml`. It is used to add and document more advanced configuration options that should not be part of the minimal config file shipped by default.
+
+
+### docs.asciidoc [_docs_asciidoc]
+
+The `docs.asciidoc` file contains the documentation about your module. During generation of the documentation, the default config file will be appended to the docs. Use this file to describe your module in more detail and to document specific configuration options.
+
+```asciidoc
+This is the {module} module.
+```
+
+
+### fields.yml [_fields_yml_2]
+
+The `fields.yml` file contains the top level structure for the fields in your metricset. It’s used in combination with the `fields.yml` file in each metricset to generate the template and documentation for the fields.
+
+The default file looks like this:
+
+```yaml
+- key: {module}
+ title: "{module}"
+ release: beta
+ description: >
+ {module} module
+ fields:
+ - name: {module}
+ type: group
+ description: >
+ fields:
+```
+
+Make sure that you update at least the description of the module.
+
+
+## Testing [_testing_2]
+
+It’s a common pattern to use a `testing.go` file in the module package to share some testing functionality among the metricsets. This file does not have `_test.go` in the name because otherwise it would not be compiled for sub packages.
+
+To see an example of the `testing.go` file, look at the [mysql module](https://github.com/elastic/beats/tree/master/metricbeat/module/mysql).
+
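+A minimal sketch of such a shared helper, assuming a hypothetical module named `mymodule` (the function name `GetConfig` and the `MYMODULE_HOST` environment variable are illustrative, not part of the Beats code base):
+
+```go
+package mymodule
+
+import "os"
+
+// GetConfig returns a module configuration that all metricset tests of the
+// module can share. The host is read from an environment variable so the
+// same tests work both locally and in CI.
+func GetConfig(metricsets ...string) map[string]interface{} {
+	host := os.Getenv("MYMODULE_HOST")
+	if host == "" {
+		host = "localhost:8080"
+	}
+	return map[string]interface{}{
+		"module":     "mymodule",
+		"metricsets": metricsets,
+		"hosts":      []string{host},
+	}
+}
+```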
+
+### Test a Metricbeat module manually [_test_a_metricbeat_module_manually]
+
+To test a Metricbeat module manually, follow the steps below.
+
+First we have to build the Docker image which is available for the modules. The Dockerfile is located inside a `_meta` folder within each module folder. As an example, let’s take the MySQL module.
+
+These steps assume you have checked out the Beats repository from GitHub and are inside the `beats` directory. First, we have to enter the `_meta` folder mentioned above and build the Docker image called `metricbeat-mysql`:
+
+```bash
+$ cd metricbeat/module/mysql/_meta/
+$ docker build -t metricbeat-mysql .
+...
+Removing intermediate container 0e58cfb7b197
+ ---> 9492074840ea
+Step 5/5 : COPY test.cnf /etc/mysql/conf.d/test.cnf
+ ---> 002969e1d810
+Successfully built 002969e1d810
+Successfully tagged metricbeat-mysql:latest
+```
+
+Before we run the container we have just created, we also need to know which port to expose. The port is listed in the `metricbeat/{{module}}/_meta/env` file:
+
+```bash
+$ cat env
+MYSQL_DSN=root:test@tcp(mysql:3306)/
+MYSQL_HOST=mysql
+MYSQL_PORT=3306
+```
+
+As we see, the port is 3306. We now have all the information to start our MySQL service locally:
+
+```bash
+$ docker run -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret metricbeat-mysql
+```
+
+This starts the container and you can now use it for testing the MySQL module.
+
+To run Metricbeat with the module, we need to build the binary and enable the module first. The assumption now is that you are back in the `beats` folder:
+
+```bash
+$ cd metricbeat
+$ mage build
+$ ./metricbeat modules enable mysql
+```
+
+This will enable the module and rename the file `metricbeat/modules.d/mysql.yml.disabled` to `metricbeat/modules.d/mysql.yml`. According to our [documentation](/reference/metricbeat/metricbeat-module-mysql.md), we should specify a username and password to use MySQL. It’s always a good idea to take a look at the docs, which also mention that a pre-built dashboard is available. After tweaking the config a bit, this is how it looks:
+
+```yaml
+$ cat modules.d/mysql.yml
+
+# Module: mysql
+# Docs: /beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-module-mysql.md
+
+- module: mysql
+ metricsets:
+ - status
+ # - galera_status
+ period: 10s
+
+ # Host DSN should be defined as "user:pass@tcp(127.0.0.1:3306)/"
+ # or "unix(/var/lib/mysql/mysql.sock)/",
+ # or another DSN format supported by .
+ # The username and password can either be set in the DSN or using the username
+ # and password config options. Those specified in the DSN take precedence.
+ hosts: ["tcp(127.0.0.1:3306)/"]
+
+ # Username of hosts. Empty by default.
+ username: root
+
+ # Password of hosts. Empty by default.
+ password: secret
+```
+
+It’s now sending data to your local Elasticsearch instance. If you need to modify the mysql config, adjust `modules.d/mysql.yml` and restart Metricbeat.
+
+
+### Run Environment tests for one module [_run_environment_tests_for_one_module]
+
+All the environments are set up with Docker. `make integration-tests-environment` and `make system-tests-environment` can be used to run tests for all modules. In case you are developing a module, it is convenient to run the tests for only one module and to run them directly on your machine.
+
+First you need to start the environment for your module to test and expose the port to your local machine. For this you can run the following command inside the metricbeat directory:
+
+```bash
+MODULE=apache PORT=80 make run-module
+```
+
+Note: The apache module with port 80 is taken here as an example. You must put the name and port for your own module here.
+
+This will start the environment, and you must wait until the service is completely started. After that you can run the tests which require an environment:
+
+```bash
+MODULE=apache make test-module
+```
+
+This will run the integration and system tests connecting to the environment in your Docker container.
+
diff --git a/docs/extend/creating-metricsets.md b/docs/extend/creating-metricsets.md
new file mode 100644
index 000000000000..134078c4b929
--- /dev/null
+++ b/docs/extend/creating-metricsets.md
@@ -0,0 +1,332 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/creating-metricsets.html
+---
+
+# Creating a Metricset [creating-metricsets]
+
+::::{important}
+Elastic provides no warranty or support for the code used to generate metricsets. The generator is mainly offered as guidance for developers who want to create their own data shippers.
+::::
+
+
+A metricset is the part of a Metricbeat module that fetches and structures the data from the remote service. Each module can have multiple metricsets. In this guide, you learn how to create your own metricset.
+
+When creating a metricset for the first time, it generally helps to look at the implementation of existing metricsets for inspiration.
+
+To create a new metricset:
+
+1. Run the following command inside the metricbeat beat directory:
+
+ ```bash
+ make create-metricset
+ ```
+
+ You need Python to run this command, then, you’ll be prompted to enter a module and metricset name. Remember that a module represents the service you want to retrieve metrics from (like Redis) and a metricset is a specific set of grouped metrics (like `info` on Redis). Only use characters `[a-z]` and, if required, underscores (`_`). No other characters are allowed.
+
+ When you run `make create-metricset`, it creates all the basic files for your metricset, along with the required module files if the module does not already exist. See [Creating a Metricbeat Module](/extend/creating-metricbeat-module.md) for more details about the module files.
+
+ ::::{note}
+ We use `{{metricset}}`, `{{module}}`, and `{{beat}}` in this guide as placeholders. You need to replace these with the actual names of your metricset, module, and beat.
+ ::::
+
+
+ The metricset that you created is already a functioning metricset and can be compiled.
+
+2. Compile your new metricset by running the following command:
+
+ ```bash
+ mage update
+ mage build
+ ```
+
+ The first command, `mage update`, updates all generated files with the most recent files, data, and meta information from the metricset. The second command, `mage build`, compiles your source code and provides you with a binary called metricbeat in the same folder. You can run the binary in debug mode with the following command:
+
+ ```bash
+ ./metricbeat -e -d "*"
+ ```
+
+
+After running the mage commands, you’ll find the metricset, along with its generated files, under `module/{{module}}/{{metricset}}`. This directory contains the following files:
+
+* `{{metricset}}.go`
+* `_meta/docs.asciidoc`
+* `_meta/data.json`
+* `_meta/fields.yml`
+
+Let’s look at the files in more detail next.
+
+
+## `{{metricset}}.go` File [_metricset_go_file]
+
+The first file is `{{metricset}}.go`. It contains the logic on how to fetch data from the service and convert it for sending to the output.
+
+The generated file looks like this:
+
+[https://github.com/elastic/beats/blob/main/metricbeat/scripts/module/metricset/metricset.go.tmpl](https://github.com/elastic/beats/blob/main/metricbeat/scripts/module/metricset/metricset.go.tmpl)
+
+```go
+package {metricset}
+
+import (
+ "github.com/elastic/elastic-agent-libs/mapstr"
+ "github.com/elastic/beats/v7/libbeat/common/cfgwarn"
+ "github.com/elastic/beats/v7/metricbeat/mb"
+)
+
+// init registers the MetricSet with the central registry as soon as the program
+// starts. The New function will be called later to instantiate an instance of
+// the MetricSet for each host is defined in the module's configuration. After the
+// MetricSet has been created then Fetch will begin to be called periodically.
+func init() {
+ mb.Registry.MustAddMetricSet("{module}", "{metricset}", New)
+}
+
+// MetricSet holds any configuration or state information. It must implement
+// the mb.MetricSet interface. And this is best achieved by embedding
+// mb.BaseMetricSet because it implements all of the required mb.MetricSet
+// interface methods except for Fetch.
+type MetricSet struct {
+ mb.BaseMetricSet
+ counter int
+}
+
+// New creates a new instance of the MetricSet. New is responsible for unpacking
+// any MetricSet specific configuration options if there are any.
+func New(base mb.BaseMetricSet) (mb.MetricSet, error) {
+ cfgwarn.Beta("The {module} {metricset} metricset is beta.")
+
+ config := struct{}{}
+ if err := base.Module().UnpackConfig(&config); err != nil {
+ return nil, err
+ }
+
+ return &MetricSet{
+ BaseMetricSet: base,
+ counter: 1,
+ }, nil
+}
+
+// Fetch method implements the data gathering and data conversion to the right
+// format. It publishes the event which is then forwarded to the output. In case
+// of an error set the Error field of mb.Event or simply call report.Error().
+func (m *MetricSet) Fetch(report mb.ReporterV2) error {
+ report.Event(mb.Event{
+ MetricSetFields: mapstr.M{
+ "counter": m.counter,
+ },
+ })
+ m.counter++
+
+ return nil
+}
+```
+
+The `package` clause and `import` declaration are part of the base structure of each Go file. You should only modify this part of the file if your implementation requires more imports.
+
+
+### Initialisation [_initialisation]
+
+The init method registers the metricset with the central registry. In Go the `init()` function is called before the execution of all other code. This means the module will be automatically registered with the global registry.
+
+The `New` method, which is passed to `MustAddMetricSet`, will be called after the setup of the module and before starting to fetch data. You normally don’t need to change this part of the file.
+
+```go
+func init() {
+ mb.Registry.MustAddMetricSet("{module}", "{metricset}", New)
+}
+```
+
+
+### Definition [_definition]
+
+The MetricSet type defines all fields of the metricset. As a minimum it must be composed of the `mb.BaseMetricSet` fields, but can be extended with additional entries. These variables can be used to persist data or configuration between multiple fetch calls.
+
+You can add more fields to the MetricSet type, as you can see in the following example where the `username` and `password` string fields are added:
+
+```go
+type MetricSet struct {
+ mb.BaseMetricSet
+ username string
+ password string
+}
+```
+
+
+### Creation [_creation]
+
+The `New` function creates a new instance of the MetricSet. The setup process of the MetricSet is also part of `New`. This method will be called before `Fetch` is called the first time.
+
+The `New` function also sets up the configuration by processing additional configuration entries, if needed.
+
+```go
+func New(base mb.BaseMetricSet) (mb.MetricSet, error) {
+
+ config := struct{}{}
+
+ if err := base.Module().UnpackConfig(&config); err != nil {
+ return nil, err
+ }
+
+ return &MetricSet{
+ BaseMetricSet: base,
+ }, nil
+}
+```
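+If your metricset defines extra options, such as the `username` and `password` fields shown earlier, `New` is also where you unpack them. The following is a sketch under that assumption; the option names are illustrative, not a fixed Beats API:
+
+```go
+func New(base mb.BaseMetricSet) (mb.MetricSet, error) {
+	// Illustrative options matching the extra MetricSet fields shown above.
+	config := struct {
+		Username string `config:"username"`
+		Password string `config:"password"`
+	}{}
+
+	if err := base.Module().UnpackConfig(&config); err != nil {
+		return nil, err
+	}
+
+	return &MetricSet{
+		BaseMetricSet: base,
+		username:      config.Username,
+		password:      config.Password,
+	}, nil
+}
+```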
+
+
+### Fetching [_fetching]
+
+The `Fetch` method is the central part of the metricset. `Fetch` is called every time new data is retrieved. If more than one host is defined, `Fetch` is called once for each host. The frequency of calling `Fetch` is based on the `period` defined in the configuration file.
+
+`Fetch` must publish the event using the `mb.ReporterV2.Event` method. If an error happens, `Fetch` can return an error, or, if `Event` is being called in a loop, the error can be published using the `mb.ReporterV2.Error` method. This means that Metricbeat always sends an event, even on failure. You must make sure that the error message helps to identify the actual error.
+
+The following example shows a metricset `Fetch` method with a counter that is incremented for each `Fetch` call:
+
+```go
+func (m *MetricSet) Fetch(report mb.ReporterV2) error {
+
+	report.Event(mb.Event{
+		MetricSetFields: common.MapStr{
+			"counter": m.counter,
+		},
+	})
+	m.counter++
+
+	return nil
+}
+```
+
+The JSON output derived from the reported event will be identical to the naming and structure you use in `common.MapStr`. For more details about `MapStr` and its functions, see the [MapStr API docs](https://godoc.org/github.com/elastic/beats/libbeat/common#MapStr).
+
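+For example, a nested `common.MapStr` such as the following (a sketch with made-up metric names) is serialized with exactly the same nesting under the module and metricset namespace:
+
+```go
+report.Event(mb.Event{
+	MetricSetFields: common.MapStr{
+		"requests": common.MapStr{
+			"count": 10,
+		},
+		"latency": common.MapStr{
+			"ms": 3.2,
+		},
+	},
+})
+// In the published event this appears as:
+//   "requests": {"count": 10}, "latency": {"ms": 3.2}
+```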
+
+### Multi Fetching [_multi_fetching]
+
+`Event` can be called multiple times inside the `Fetch` method for metricsets that might expose multiple events. `Event` returns a bool that indicates whether the metricset is already closed and no further events can be processed, in which case `Fetch` should return immediately. If there is an error while processing one of many events, it can be published using the `mb.ReporterV2.Error` method, as opposed to returning an error value.
+
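+A sketch of that pattern, assuming a hypothetical `fetchEntries` helper that returns the raw items and a per-entry `eventMapping` conversion that may fail:
+
+```go
+func (m *MetricSet) Fetch(report mb.ReporterV2) error {
+	rawEntries, err := m.fetchEntries() // hypothetical helper for this sketch
+	if err != nil {
+		return err
+	}
+
+	for _, raw := range rawEntries {
+		fields, err := eventMapping(raw) // hypothetical per-entry conversion
+		if err != nil {
+			// Publish the error for this entry and keep processing the rest.
+			report.Error(err)
+			continue
+		}
+		// Event returns false once the metricset is closed; stop publishing.
+		if !report.Event(mb.Event{MetricSetFields: fields}) {
+			return nil
+		}
+	}
+	return nil
+}
+```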
+
+### Parsing and Normalizing Fields [_parsing_and_normalizing_fields]
+
+In Metricbeat we aim to normalize the metric names from all metricsets to respect a common [set of conventions](/extend/event-conventions.md). This makes it easy for users to find and interpret metrics. To simplify parsing, converting, renaming, and restructuring of the object read from the monitored system to the Metricbeat format, we have created the [schema](https://godoc.org/github.com/elastic/beats/libbeat/common/schema) package that allows you to declaratively define transformations.
+
+For example, assuming this input object:
+
+```go
+input := map[string]interface{}{
+ "testString": "hello",
+ "testInt": "42",
+ "testBool": "true",
+ "testFloat": "42.1",
+ "testObjString": "hello, object",
+}
+```
+
+And the requirement to transform it into this one:
+
+```go
+common.MapStr{
+ "test_string": "hello",
+ "test_int": int64(42),
+ "test_bool": true,
+ "test_float": 42.1,
+ "test_obj": common.MapStr{
+ "test_obj_string": "hello, object",
+ },
+}
+```
+
+You can use the schema package to transform the data, and optionally mark some fields in a schema as required or not. For example:
+
+```go
+import (
+ s "github.com/elastic/beats/libbeat/common/schema"
+ c "github.com/elastic/beats/libbeat/common/schema/mapstrstr"
+)
+
+var (
+ schema = s.Schema{
+ "test_string": c.Str("testString", s.Required), <1>
+ "test_int": c.Int("testInt"), <2>
+ "test_bool": c.Bool("testBool", s.Optional), <3>
+ "test_float": c.Float("testFloat"),
+ "test_obj": s.Object{
+ "test_obj_string": c.Str("testObjString", s.IgnoreAllErrors), <4>
+ },
+ }
+)
+
+func eventMapping(input map[string]interface{}) (common.MapStr, error) {
+	return schema.Apply(input) <5>
+}
+```
+
+1. Marks a field as required.
+2. If a field has no schema option set, it is equivalent to `Required`.
+3. Marks the field as optional.
+4. Ignore any value conversion error.
+5. By default, `Apply` will fail and return an error if any required field is missing. Using the optional second argument, you can specify how `Apply` handles different fields of the schema. The possible values are:
+* `AllRequired` is the default behavior. It returns an error if any required field is missing, including fields that are required because no schema option is set.
+* `FailOnRequired` will fail if a field explicitly marked as `required` is missing.
+* `NotFoundKeys(cb func([]string))` takes a callback function that will be called with a list of missing keys, allowing for finer-grained error handling.
+
+
+
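+For example, a sketch of passing one of these options as the optional second argument to `Apply`, reusing the `schema` and `input` from the example above:
+
+```go
+// Only fail when a field explicitly marked as required is missing.
+data, err := schema.Apply(input, s.FailOnRequired)
+if err != nil {
+	return err
+}
+
+// Or collect the names of any missing keys for custom handling.
+var missing []string
+data, err = schema.Apply(input, s.NotFoundKeys(func(keys []string) {
+	missing = append(missing, keys...)
+}))
+```
+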
+In the above example, note that it is possible to create the schema object once and apply it to all events. You can also use `ApplyTo` to add additional data to an existing `MapStr` object:
+
+```go
+var (
+ schema = s.Schema{
+ "test_string": c.Str("testString"),
+ "test_int": c.Int("testInt"),
+ "test_bool": c.Bool("testBool"),
+ "test_float": c.Float("testFloat"),
+ "test_obj": s.Object{
+ "test_obj_string": c.Str("testObjString"),
+ },
+ }
+
+ additionalSchema = s.Schema{
+ "second_string": c.Str("secondString"),
+ "second_int": c.Int("secondInt"),
+ }
+)
+
+ data, err := schema.Apply(input)
+ if err != nil {
+ return err
+ }
+
+ if m.parseMoreData{
+ _, err := additionalSchema.ApplyTo(data, input)
+ if len(err) > 0 { <1>
+ return err.Err()
+ }
+ }
+```
+
+1. `ApplyTo` returns a raw MultiError object, making it suitable for finer-grained error handling.
+
+
+
+## Configuration File [_configuration_file]
+
+The configuration file for a metricset is handled by the module. If there are multiple metricsets in one module, make sure you add all metricsets to the configuration. For example:
+
+```yaml
+metricbeat:
+ modules:
+ - module: {module-name}
+ metricsets: ["{metricset1}", "{metricset2}"]
+```
+
+::::{note}
+Make sure that you run `make collect` after updating the config file so that your changes are also applied to the global configuration file and the docs.
+::::
+
+
+For more details about the Metricbeat configuration file, see the topic about [Modules](/reference/metricbeat/configuration-metricbeat.md) in the Metricbeat documentation.
+
+
+## What to Do Next [_what_to_do_next]
+
+This topic provides basic steps for creating a metricset. For more details about metricsets and how to extend your metricset further, see [Metricset Details](/extend/metricset-details.md).
+
diff --git a/docs/extend/dev-faq.md b/docs/extend/dev-faq.md
new file mode 100644
index 000000000000..c51349b7abdc
--- /dev/null
+++ b/docs/extend/dev-faq.md
@@ -0,0 +1,23 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/dev-faq.html
+---
+
+# Metricbeat Developer FAQ [dev-faq]
+
+This is a list of common questions when creating a metricset and the potential answers.
+
+
+## Metricset is not compiled [_metricset_is_not_compiled]
+
+You are compiling your Beat, but the newly created metricset is not compiled?
+
+Make sure that the paths to your module and metricset are added as import paths either in your `main.go` file or your `include/list.go` file. You can do this manually or by running `make imports`.
+
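+These entries are blank imports whose only purpose is to trigger each package’s `init()` registration with the central registry. A sketch of what they might look like for a hypothetical `mymodule`/`mymetricset` (the exact import paths depend on your Beat):
+
+```go
+package include
+
+import (
+	// Imported for their side effects: init() registers the module and
+	// metricset with the central registry.
+	_ "github.com/elastic/beats/v7/metricbeat/module/mymodule"
+	_ "github.com/elastic/beats/v7/metricbeat/module/mymodule/mymetricset"
+)
+```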
+
+## Metricset is not started [_metricset_is_not_started]
+
+The metricset is compiled, but not started when starting Metricbeat?
+
+After creating your metricset, make sure you run `make collect`. This command adds the configuration of your metricset to the default configuration. If the metricset still doesn’t start, check your default configuration file to see if the metricset is listed there.
+
diff --git a/docs/extend/event-conventions.md b/docs/extend/event-conventions.md
new file mode 100644
index 000000000000..add697dc01f6
--- /dev/null
+++ b/docs/extend/event-conventions.md
@@ -0,0 +1,72 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/event-conventions.html
+---
+
+# Naming Conventions [event-conventions]
+
+When creating events, use the following conventions for field names and abbreviations.
+
+## Field Names [field-names]
+
+Use the following naming conventions for field names:
+
+* All fields must be lower case.
+* Use snake case (underscores) for combining words.
+* Group related fields into subdocuments by using dot (.) notation. Groups typically have common prefixes. For example, if you have fields called `CPULoad` and `CPUSystem` in a service, you would convert them into `cpu.load` and `cpu.system` in the event.
+* Avoid repeating the namespace in field names. If a word or abbreviation appears in the namespace, it’s not needed in the field name. For example, instead of `cpu.cpu_load`, use `cpu.load`.
+* Use [units suffix](#units) when the metric matches one of the known units.
+* Use [standardised names](#abbreviations) and avoid using abbreviations that aren’t commonly known.
+* Organise the documents from general to specific to allow for namespacing. The type, such as `.pct`, should always be last. For example, `system.core.user.pct`.
+* If two fields are the same, but with different units, remove the less granular one. For example, include `timeout.sec`, but don’t include `timeout.min`. If a less granular value is required, you can calculate it later.
+* If a field name matches the namespace used for nested fields, add `.value` to the field name. For example, instead of:
+
+ ```yaml
+ workers
+ workers.busy
+ workers.idle
+ ```
+
+ Use:
+
+ ```yaml
+ workers.value
+ workers.busy
+ workers.idle
+ ```
+
+* Do not use dots (.) in individual field names. Dots are reserved for grouping related fields into subdocuments.
+* Use singular and plural names properly to reflect the field content. For example, use `requests_per_sec` rather than `request_per_sec`.
+
+
+## Units [units]
+
+These are well-known suffixes to represent units of stored values. Use them as a dotted suffix when possible, for example `system.memory.used.bytes` or `system.diskio.read.count`:
+
+| Suffix | Units |
+| --- | --- |
+| count | item count |
+| pct | percentage |
+| day | days |
+| sec | seconds |
+| ms | millisecond |
+| us | microseconds |
+| ns | nanoseconds |
+| bytes | bytes |
+| mb | megabytes |
+
+
+## Standardised Names [abbreviations]
+
+Here is a list of standardised names and units that are used across all Beats:
+
+| Use… | Instead of… |
+| --- | --- |
+| avg | average |
+| connection | conn |
+| max | maximum |
+| min | minimum |
+| request | req |
+| msg | message |
+
+
diff --git a/docs/extend/event-fields-yml.md b/docs/extend/event-fields-yml.md
new file mode 100644
index 000000000000..9d58a112cb52
--- /dev/null
+++ b/docs/extend/event-fields-yml.md
@@ -0,0 +1,172 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/event-fields-yml.html
+---
+
+# Defining field mappings [event-fields-yml]
+
+You must define the fields used by your Beat, along with their mapping details, in `_meta/fields.yml`. After editing this file, run `make update`.
+
+Define the field mappings in the `fields` array:
+
+```yaml
+- key: mybeat
+ title: mybeat
+ description: These are the fields used by mybeat.
+ fields:
+ - name: last_name <1>
+ type: keyword <2>
+ required: true <3>
+ description: > <4>
+ The last name.
+ - name: first_name
+ type: keyword
+ required: true
+ description: >
+ The first name.
+ - name: comment
+ type: text
+ required: false
+ description: >
+ Comment made by the user.
+```
+
+1. `name`: The field name
+2. `type`: The field type. The value of `type` can be any datatype [available in {{es}}](elasticsearch://reference/elasticsearch/mapping-reference/field-data-types.md). If no value is specified, the default type is `keyword`.
+3. `required`: Whether or not a field value is required
+4. `description`: Some information about the field contents
+
+
+## Mapping parameters [_mapping_parameters]
+
+You can specify other mapping parameters for each field. See the [{{es}} Reference](elasticsearch://reference/elasticsearch/mapping-reference/mapping-parameters.md) for more details about each parameter.
+
+`format`
+: Specify a custom date format used by the field.
+
+`multi_fields`
+: For `text` or `keyword` fields, use `multi_fields` to define multi-field mappings.
+
+`enabled`
+: Whether or not the field is enabled.
+
+`analyzer`
+: Which analyzer to use when indexing.
+
+`search_analyzer`
+: Which analyzer to use when searching.
+
+`norms`
+: Applies to `text` and `keyword` fields. Default is `false`.
+
+`dynamic`
+: Dynamic field control. Can be one of `true` (default), `false`, or `strict`.
+
+`index`
+: Whether or not the field should be indexed.
+
+`doc_values`
+: Whether or not the field should have doc values generated.
+
+`copy_to`
+: Which field to copy the field value into.
+
+`ignore_above`
+: {{es}} ignores (does not index) strings that are longer than the specified value. When this property value is missing or `0`, the `libbeat` default value of `1024` characters is used. If the value is `-1`, the {{es}} default value is used.
+
+For example, you can use the `copy_to` mapping parameter to copy the `last_name` and `first_name` fields into the `full_name` field at index time:
+
+```yaml
+- key: mybeat
+ title: mybeat
+ description: These are the fields used by mybeat.
+ fields:
+ - name: last_name
+ type: text
+ required: true
+ copy_to: full_name <1>
+ description: >
+ The last name.
+ - name: first_name
+ type: text
+ required: true
+ copy_to: full_name <2>
+ description: >
+ The first name.
+ - name: full_name
+ type: text
+ required: false
+ description: >
+ The last_name and first_name combined into one field for easy searchability.
+```
+
+1. Copy the value of `last_name` into `full_name`
+2. Copy the value of `first_name` into `full_name`
+
+
+There are also some {{kib}}-specific properties, not detailed here. These are: `analyzed`, `count`, `searchable`, `aggregatable`, and `script`. {{kib}} parameters can also be described using `pattern`, `input_format`, `output_format`, `output_precision`, `label_template`, `url_template`, and `open_link_in_current_tab`.
+
+
+## Defining text multi-fields [_defining_text_multi_fields]
+
+There are various options that you can apply when using text fields. You can define a simple text field using the default analyzer without any other options, as in the example shown earlier.
+
+To keep the original keyword value when using `text` mappings, for instance to use in aggregations or ordering, you can use a multi-field mapping:
+
+```yaml
+- key: mybeat
+ title: mybeat
+ description: These are the fields used by mybeat.
+ fields:
+ - name: city
+ type: text
+ multi_fields: <1>
+ - name: keyword <2>
+ type: keyword <3>
+```
+
+1. `multi_fields`: Define the `multi_fields` mapping parameter.
+2. `name`: This is a conventional name for a multi-field. It can be anything (`raw` is another common option) but the convention is to use `keyword`.
+3. `type`: Specify the `keyword` type to use the field in aggregations or to order documents.
+
+
+For more information, see the [{{es}} documentation about multi-fields](elasticsearch://reference/elasticsearch/mapping-reference/multi-fields.md).
+
+
+## Defining a text analyzer in-line [_defining_a_text_analyzer_in_line]
+
+It is possible to define a new text analyzer or search analyzer in-line with the field definition in the field’s mapping parameters.
+
+For example, you can define a new text analyzer that does not break hyphenated names:
+
+```yaml
+- key: mybeat
+ title: mybeat
+ description: These are the fields used by mybeat.
+ fields:
+ - name: last_name
+ type: text
+ required: true
+ description: >
+ The last name.
+ analyzer:
+ mybeat_hyphenated_name: <1>
+ type: pattern <2>
+ pattern: "[\\W&&[^-]]+" <3>
+ search_analyzer:
+ mybeat_hyphenated_name: <4>
+ type: pattern
+ pattern: "[\\W&&[^-]]+"
+```
+
+1. Use a newly defined text analyzer
+2. Define the custom analyzer type
+3. Specify the analyzer behavior
+4. Use the same analyzer for the search
+
+
+The name of a custom analyzer that is defined in-line may not be reused for a different text analyzer. If an analyzer name is reused, the definition is checked for a match against the existing instances of that analyzer. It is recommended to prefix the analyzer name with the beat name to avoid name clashes.
+
+For more information, see [{{es}} documentation about defining custom text analyzers](docs-content://manage-data/data-store/text-analysis/create-custom-analyzer.md).
+
+
diff --git a/docs/extend/export-dashboards.md b/docs/extend/export-dashboards.md
new file mode 100644
index 000000000000..e5f339aded7f
--- /dev/null
+++ b/docs/extend/export-dashboards.md
@@ -0,0 +1,133 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/export-dashboards.html
+---
+
+# Exporting New and Modified Beat Dashboards [export-dashboards]
+
+To export all the dashboards for any Elastic Beat or community Beat, including any new or modified dashboards and all dependencies such as visualizations and searches, you can use the Go script `export_dashboards.go` from [dev-tools](https://github.com/elastic/beats/tree/master/dev-tools/cmd/dashboards). See the dev-tools [readme](https://github.com/elastic/beats/tree/master/dev-tools/README.md) for more info.
+
+Alternatively, if the script above is not available, you can use your Beat binary to export dashboards from Kibana 6.0 or later.
+
+## Exporting from Kibana 6.0 to 7.14 [_exporting_from_kibana_6_0_to_7_14]
+
+The `dev-tools/cmd/export_dashboards.go` script helps you export your customized Kibana dashboards up to the v7.14.x release. You might need to export a single dashboard or all the dashboards available for a module or Beat.
+
+It is also possible to use a Beat binary to export.
+
+
+## Exporting from Kibana 7.15 or newer [_exporting_from_kibana_7_15_or_newer]
+
+From 7.15, your Beats version must be the same as your Kibana version to make sure the required export API is available.
+
+### Migrate legacy dashboards made with Kibana 7.14 or older [_migrate_legacy_dashboards_made_with_kibana_7_14_or_older]
+
+After you update your Kibana instance to at least 7.15, you have to export your dashboards again with either the `export_dashboards.go` tool or your Beat.
+
+
+### Export a single Kibana dashboard [_export_a_single_kibana_dashboard]
+
+To export a single dashboard for a module you can use the following command inside a Beat with modules:
+
+```shell
+MODULE=redis ID=AV4REOpp5NkDleZmzKkE mage exportDashboard
+```
+
+```shell
+./filebeat export dashboard --id 7fea2930-478e-11e7-b1f0-cb29bac6bf8b --folder module/redis
+```
+
+This generates an appropriate folder under `module/redis` for the dashboard, separating assets into dashboards, searches, visualizations, etc. Each exported file is JSON, and its name is the ID of the asset.
+
+::::{note}
+The dashboard ID is available in the dashboard URL. For example, in case the dashboard URL is `app/kibana#/dashboard/AV4REOpp5NkDleZmzKkE?_g=()&_a=(description:'Overview%2...`, the dashboard ID is `AV4REOpp5NkDleZmzKkE`.
+::::
+
+
+
+### Export all module/Beat dashboards [_export_all_modulebeat_dashboards]
+
+Each module should contain a `module.yml` file with a list of all the dashboards available for the module. For the Beats that don’t have support for modules (e.g. Packetbeat), there is a `dashboards.yml` file that defines all the Packetbeat dashboards.
+
+Below is an example of the `module.yml` file for the system module in Metricbeat:
+
+```yaml
+dashboards:
+- id: Metricbeat-system-overview
+ file: Metricbeat-system-overview.ndjson
+
+- id: 79ffd6e0-faa0-11e6-947f-177f697178b8
+ file: Metricbeat-host-overview.ndjson
+
+- id: CPU-slash-Memory-per-container
+ file: Metricbeat-containers-overview.ndjson
+```
+
+Each dashboard is defined by an `id` and the name of the ndjson `file` where the dashboard is saved locally.
+
+By passing the yml file to the `export_dashboards.go` script or to the Beat, you can export all the defined dashboards:
+
+```shell
+go run dev-tools/cmd/dashboards/export_dashboards.go --yml filebeat/module/system/module.yml --folder dashboards
+```
+
+```shell
+./filebeat export dashboard --yml filebeat/module/system/module.yml
+```
+
+
+### Export dashboards from a Kibana Space [_export_dashboards_from_a_kibana_space]
+
+If you are using the Kibana Spaces feature and want to export dashboards from a specific Space, pass the Space ID to the `export_dashboards.go` script:
+
+```shell
+go run dev-tools/cmd/dashboards/export_dashboards.go -space-id my-space [other-options]
+```
+
+If you run `export dashboard` from a Beat, you need to set the Space ID in the `setup.kibana.space.id` setting.
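+
+For example, a sketch of how this could look with the Filebeat binary (the space ID `my-space` is a placeholder, and the dashboard ID is the one used earlier):
+
+```shell
+./filebeat export dashboard --id 7fea2930-478e-11e7-b1f0-cb29bac6bf8b \
+  -E setup.kibana.space.id=my-space
+```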
+
+
+
+## Exporting Kibana 5.x dashboards [_exporting_kibana_5_x_dashboards]
+
+To export only some Kibana dashboards for an Elastic Beat or community Beat, you can simply pass a regular expression to the `export_dashboards.py` script to match the selected Kibana dashboards.
+
+Before running the `export_dashboards.py` script for the first time, you need to create an environment that contains all the required Python packages.
+
+```shell
+make python-env
+```
+
+For example, to export all Kibana dashboards that start with the **Packetbeat** name:
+
+```shell
+python ../dev-tools/cmd/dashboards/export_dashboards.py --regex Packetbeat*
+```
+
+To see all the available options, read the descriptions below or run:
+
+```shell
+python ../dev-tools/cmd/dashboards/export_dashboards.py -h
+```
+
+**`--url`**
+: The Elasticsearch URL. The default value is [http://localhost:9200](http://localhost:9200).
+
+**`--regex`**
+: Regular expression to match all the Kibana dashboards to be exported. This argument is required.
+
+**`--kibana`**
+: The Elasticsearch index pattern where Kibana saves its configuration. The default value is `.kibana`.
+
+**`--dir`**
+: The output directory where the dashboards and all dependencies will be saved. The default value is `output`.
+
+The output directory has the following structure:
+
+```shell
+output/
+ index-pattern/
+ dashboard/
+ visualization/
+ search/
+```
diff --git a/docs/extend/filebeat-modules-devguide.md b/docs/extend/filebeat-modules-devguide.md
new file mode 100644
index 000000000000..46b158280d2d
--- /dev/null
+++ b/docs/extend/filebeat-modules-devguide.md
@@ -0,0 +1,416 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/filebeat-modules-devguide.html
+---
+
+# Creating a New Filebeat Module [filebeat-modules-devguide]
+
+::::{important}
+Elastic provides no warranty or support for the code used to generate modules and filesets. The generator is mainly offered as guidance for developers who want to create their own data shippers.
+::::
+
+
+This guide will walk you through creating a new Filebeat module.
+
+All Filebeat modules currently live in the main [Beats](https://github.com/elastic/beats) repository. To clone the repository and build Filebeat (which you will need for testing), please follow the general instructions in [*Contributing to Beats*](./index.md).
+
+
+## Overview [_overview]
+
+Each Filebeat module is composed of one or more "filesets". We usually create a module for each service that we support (`nginx` for Nginx, `mysql` for Mysql, and so on) and a fileset for each type of log that the service creates. For example, the Nginx module has `access` and `error` filesets. You can contribute a new module (with at least one fileset), or a new fileset for an existing module.
+
+::::{note}
+In this guide we use `{{module}}` and `{{fileset}}` as placeholders for the module and fileset names. You need to replace these with the actual names you entered when you created the module and fileset. Only use characters `[a-z]` and, if required, underscores (`_`). No other characters are allowed.
+::::
+
+
+
+## Creating a new module [_creating_a_new_module]
+
+Run the following command in the `filebeat` folder:
+
+```bash
+make create-module MODULE={module}
+```
+
+After running the `make create-module` command, you’ll find the module, along with its generated files, under `module/{{module}}`. This directory contains the following files:
+
+```bash
+module/{module}
+├── module.yml
+└── _meta
+ └── docs.asciidoc
+ └── fields.yml
+ └── kibana
+```
+
+Let’s look at these files one by one.
+
+
+### module.yml [_module_yml]
+
+This file contains a list of all the dashboards available for the module and is used by the `export_dashboards.go` script when exporting dashboards. Each dashboard is defined by an id and the name of the JSON file where the dashboard is saved locally. When a new fileset is generated, this file is automatically updated with "default" dashboard settings for the new fileset. Please ensure that these settings are correct.
+
+
+### _meta/docs.asciidoc [_metadocs_asciidoc]
+
+This file contains module-specific documentation. You should include information about which versions of the service were tested and the variables that are defined in each fileset.
+
+
+### _meta/fields.yml [_metafields_yml]
+
+The module level `fields.yml` contains descriptions for the module-level fields. Please review and update the title and the descriptions in this file. The title is used as a title in the docs, so it’s best to capitalize it.
+
+
+### _meta/kibana [_metakibana]
+
+This folder contains the sample Kibana dashboards for this module. To create them, you can build them visually in Kibana and then export them with `export_dashboards`.
+
+The tool will export all of the dashboard dependencies (visualizations, saved searches) automatically.
+
+You can see various ways of using `export_dashboards` at [Exporting New and Modified Beat Dashboards](/extend/export-dashboards.md). The recommended way to export them is to list your dashboards in your module’s `module.yml` file:
+
+```yaml
+dashboards:
+- id: 69f5ae20-eb02-11e7-8f04-beef1daadb05
+ file: mymodule-overview.json
+- id: c0a7ce90-cafe-4242-8647-534bb4c21040
+ file: mymodule-errors.json
+```
+
+Then run `export_dashboards` like this:
+
+```shell
+$ cd dev-tools/cmd/dashboards
+$ make # if export_dashboard is not built yet
+$ ./export_dashboards --yml '../../../filebeat/module/{module}/module.yml'
+```
+
+New Filebeat modules might not be compatible with Kibana 5.x. To export dashboards that are compatible with 5.x, run the following command inside the developer virtual environment:
+
+```shell
+$ cd filebeat
+$ make python-env
+$ cd module/{module}/
+$ python ../../../dev-tools/export_5x_dashboards.py --regex {module} --dir _meta/kibana/5.x
+```
+
+The `--regex` parameter should match the dashboard you want to export.
+
+Please note that dashboards exported from Kibana 5.x are not compatible with Kibana 6.x.
+
+You can find more details about the process of creating and exporting the Kibana dashboards by reading [this guide](/extend/new-dashboards.md).
+
+
+## Creating a new fileset [_creating_a_new_fileset]
+
+Run the following command in the `filebeat` folder:
+
+```bash
+make create-fileset MODULE={module} FILESET={fileset}
+```
+
+After running the `make create-fileset` command, you’ll find the fileset, along with its generated files, under `module/{{module}}/{fileset}`. This directory contains the following files:
+
+```bash
+module/{module}/{fileset}
+├── manifest.yml
+├── config
+│ └── {fileset}.yml
+├── ingest
+│ └── pipeline.json
+├── _meta
+│ └── fields.yml
+│ └── kibana
+│ └── default
+└── test
+```
+
+Let’s look at these files one by one.
+
+
+### manifest.yml [_manifest_yml]
+
+The `manifest.yml` is the control file for the module, where variables are defined and the other files are referenced. It is a YAML file, but in many places in the file, you can use built-in or defined variables by using the `{{.variable}}` syntax.
+
+The `var` section of the file defines the fileset variables and their default values. The module variables can be referenced in other configuration files, and their value can be overridden at runtime by the Filebeat configuration.
+
+As the fileset creator, you can use any names for the variables you define. Each variable must have a default value. So in its simplest form, this is how you can define a new variable:
+
+```yaml
+var:
+ - name: pipeline
+ default: with_plugins
+```
+
+Most filesets should have a `paths` variable defined, which sets the default paths where the log files are located:
+
+```yaml
+var:
+ - name: paths
+ default:
+ - /example/test.log*
+ os.darwin:
+ - /usr/local/example/test.log*
+ - /example/test.log*
+ os.windows:
+ - c:/programdata/example/logs/test.log*
+```
+
+There’s quite a lot going on in this file, so let’s break it down:
+
+* The name of the variable is `paths` and the default value is an array with one element: `"/example/test.log*"`.
+* Note that variable values don’t have to be strings. They can also be numbers, objects, or, as shown in this example, arrays.
+* We will use the `paths` variable to set the input `paths` setting, so "glob" values can be used here.
+* Besides the `default` value, the file defines values for particular operating systems: a default for darwin/OS X/macOS systems and a default for Windows systems. These are introduced via the `os.darwin` and `os.windows` keywords. The values under these keys become the default for the variable, if Filebeat is executed on the respective OS.
+
+Besides the variable definition, the `manifest.yml` file also contains references to the ingest pipeline and input configuration to use (see next sections):
+
+```yaml
+ingest_pipeline: ingest/pipeline.json
+input: config/testfileset.yml
+```
+
+These should point to the respective files from the fileset.
+
+Note that when evaluating the contents of these files, the variables are expanded, which enables you to select one file or the other depending on the value of a variable. For example:
+
+```yaml
+ingest_pipeline: ingest/{{.pipeline}}.json
+```
+
+This example selects the ingest pipeline file based on the value of the `pipeline` variable. For the `pipeline` variable shown earlier, the path would resolve to `ingest/with_plugins.json` (assuming the variable value isn’t overridden at runtime).
+
+In 6.6 and later, you can specify multiple ingest pipelines.
+
+```yaml
+ingest_pipeline:
+ - ingest/main.json
+ - ingest/plain_logs.json
+ - ingest/json_logs.json
+```
+
+When multiple ingest pipelines are specified, the first one in the list is considered to be the entry point pipeline.
+
+One reason for using multiple pipelines might be to send all logs harvested by this fileset to the entry point pipeline and have it delegate different parts of the processing to other pipelines. You can read details about setting this up in [the `ingest/*.json` section](#ingest-json-entry-point-pipeline).
+
+
+### config/*.yml [_config_yml]
+
+The `config/` folder contains template files that generate Filebeat input configurations. The Filebeat inputs are primarily responsible for tailing files, filtering, and multi-line stitching, so that’s what you configure in the template files.
+
+A typical example looks like this:
+
+```yaml
+type: log
+paths:
+{{ range $i, $path := .paths }}
+ - {{$path}}
+{{ end }}
+exclude_files: [".gz$"]
+```
+
+You’ll find this example in the template file that gets generated automatically when you run `make create-fileset`. In this example, the `paths` variable is used to construct the `paths` list for the input `paths` option.
+
+Any template files that you add to the `config/` folder need to generate a valid Filebeat input configuration in YAML format. The options accepted by the input configuration are documented in the [Filebeat Inputs](/reference/filebeat/configuration-filebeat-options.md) section of the Filebeat documentation.
+
+The template files use the templating language defined by the [Go standard library](https://golang.org/pkg/text/template/).
+
+Here is another example that also configures multiline stitching:
+
+```yaml
+type: log
+paths:
+{{ range $i, $path := .paths }}
+ - {{$path}}
+{{ end }}
+exclude_files: [".gz$"]
+multiline:
+ pattern: "^# User@Host: "
+ negate: true
+ match: after
+```
+
+Although you can add multiple configuration files under the `config/` folder, only the file indicated by the `manifest.yml` file will be loaded. You can use variables to dynamically switch between configurations.
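+
+As a sketch (the `format` variable here is purely illustrative), the `manifest.yml` could switch between two input templates in the same way the ingest pipeline was selected earlier:
+
+```yaml
+var:
+  - name: format
+    default: plain
+
+input: config/{{.format}}.yml
+```
+
+With this definition, `config/plain.yml` is loaded unless the `format` variable is overridden at runtime.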
+
+
+### ingest/*.json [_ingest_json]
+
+The `ingest/` folder contains {{es}} [ingest pipeline](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md) configurations. Ingest pipelines are responsible for parsing the log lines and doing other manipulations on the data.
+
+The files in this folder are JSON or YAML documents representing [pipeline definitions](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md). Just like with the `config/` folder, you can define multiple pipelines, but a single one is loaded at runtime based on the information from `manifest.yml`.
+
+The generator creates a JSON object similar to this one:
+
+```json
+{
+ "description": "Pipeline for parsing {module} {fileset} logs",
+ "processors": [
+ ],
+ "on_failure" : [{
+ "set" : {
+ "field" : "error.message",
+ "value" : "{{ _ingest.on_failure_message }}"
+ }
+ }]
+}
+```
+
+Alternatively, you can use YAML formatted pipelines, which use a simpler syntax:
+
+```yaml
+description: "Pipeline for parsing {module} {fileset} logs"
+processors:
+on_failure:
+ - set:
+ field: error.message
+ value: "{{ _ingest.on_failure_message }}"
+```
+
+From here, you would typically add processors to the `processors` array to do the actual parsing. For information about available ingest processors, see the [processor reference documentation](elasticsearch://reference/ingestion-tools/enrich-processor/index.md). In particular, you will likely find the [grok processor](elasticsearch://reference/ingestion-tools/enrich-processor/grok-processor.md) to be useful for parsing. Here is an example for parsing the Nginx access logs.
+
+```json
+{
+ "grok": {
+ "field": "message",
+ "patterns":[
+ "%{IPORHOST:nginx.access.remote_ip} - %{DATA:nginx.access.user_name} \\[%{HTTPDATE:nginx.access.time}\\] \"%{WORD:nginx.access.method} %{DATA:nginx.access.url} HTTP/%{NUMBER:nginx.access.http_version}\" %{NUMBER:nginx.access.response_code} %{NUMBER:nginx.access.body_sent.bytes} \"%{DATA:nginx.access.referrer}\" \"%{DATA:nginx.access.agent}\""
+ ],
+ "ignore_missing": true
+ }
+}
+```
+
+Note that you should follow the convention of naming fields prefixed with the module and fileset name: `{{module}}.{fileset}.field`, e.g. `nginx.access.remote_ip`. Also, please review our [Naming Conventions](/extend/event-conventions.md).
+
+$$$ingest-json-entry-point-pipeline$$$
+In 6.6 and later, ingest pipelines can use the [`pipeline` processor](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md) to delegate parts of the processing to other pipelines.
+
+This can be useful if you want a fileset to ingest the same *logical* information presented in different formats, e.g. csv vs. json versions of the same log files. Imagine an entry point ingest pipeline that detects the format of a log entry and then conditionally delegates further processing of that log entry, depending on the format, to another pipeline.
+
+```json
+{
+ "processors": [
+ {
+ "grok": {
+ "field": "message",
+ "patterns": [
+ "^%{CHAR:first_char}"
+ ],
+ "pattern_definitions": {
+ "CHAR": "."
+ }
+ }
+ },
+ {
+ "pipeline": {
+ "if": "ctx.first_char == '{'",
+ "name": "{< IngestPipeline "json-log-processing-pipeline" >}" <1>
+ }
+ },
+ {
+ "pipeline": {
+ "if": "ctx.first_char != '{'",
+ "name": "{< IngestPipeline "plain-log-processing-pipeline" >}"
+ }
+ }
+ ]
+}
+```
+
+1. Use the `IngestPipeline` template function to resolve the name. This function converts the specified name into the fully qualified pipeline ID that is stored in Elasticsearch.
+
+
+In order for the above pipeline to work, Filebeat must load the entry point pipeline as well as any sub-pipelines into Elasticsearch. You can tell Filebeat to do so by specifying all the necessary pipelines for the fileset in its `manifest.yml` file. The first pipeline in the list is considered to be the entry point pipeline.
+
+```yaml
+ingest_pipeline:
+ - ingest/main.json
+ - ingest/plain_logs.yml
+ - ingest/json_logs.json
+```
+
+While developing the pipeline definition, we recommend making use of the [Simulate Pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) for testing and quick iteration.
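+
+For instance, here is a minimal sketch of a simulate request, assuming a local Elasticsearch instance with the credentials used elsewhere in this guide:
+
+```shell
+curl -u elastic:changeme -H 'Content-Type: application/json' \
+  -X POST 'http://localhost:9200/_ingest/pipeline/_simulate?pretty' -d '
+{
+  "pipeline": {
+    "processors": [
+      { "grok": { "field": "message", "patterns": ["%{IPORHOST:ip} %{GREEDYDATA:rest}"] } }
+    ]
+  },
+  "docs": [
+    { "_source": { "message": "127.0.0.1 GET /index.html" } }
+  ]
+}'
+```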
+
+By default, Filebeat does not update ingest pipelines that are already loaded. If you want to force updating your pipeline during development, use the `./filebeat setup --pipelines` command. This uploads pipelines even if they are already available on the node.
+
+
+### _meta/fields.yml [_metafields_yml_2]
+
+The `fields.yml` file contains the top-level structure for the fields in your fileset. It is used as the source of truth for:
+
+* the generated Elasticsearch mapping template
+* the generated Kibana index pattern
+* the generated documentation for the exported fields
+
+Besides the `fields.yml` file in the fileset, there is also a `fields.yml` file at the module level, placed under `module/{{module}}/_meta/fields.yml`, which should contain the fields defined at the module level, and the description of the module itself. In most cases, you should add the fields at the fileset level.
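+
+As a sketch, a fileset-level `fields.yml` usually defines a group for the fileset with the individual fields nested under it (the concrete fields below are illustrative; follow the structure of existing modules):
+
+```yaml
+- name: access
+  type: group
+  description: >
+    Fields from the access logs.
+  fields:
+    - name: remote_ip
+      type: keyword
+      description: >
+        Client IP address of the request.
+    - name: response_code
+      type: long
+      description: >
+        HTTP status code returned to the client.
+```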
+
+After `pipeline.json` is created, it is possible to generate a base `fields.yml`.
+
+```bash
+make create-fields MODULE={module} FILESET={fileset}
+```
+
+Please always check the generated file and make sure the fields are correct. You must add field documentation manually.
+
+If the fields are correct, it is time to generate documentation, configuration and Kibana index patterns.
+
+```bash
+make update
+```
+
+
+### test [_test]
+
+In the `test/` directory, you should place sample log files generated by the service. We have integration tests, automatically executed by CI, that will run Filebeat on each of the log files under the `test/` folder and check that there are no parsing errors and that all fields are documented.
+
+In addition, assuming you have a `test.log` file, you can add a `test.log-expected.json` file in the same directory that contains the expected documents as they are found via an Elasticsearch search. In this case, the integration tests will automatically check that the result is the same on each run.
+
+To test the filesets with the sample logs and/or generate the expected output, run the tests locally for a specific module, using the following procedure under the Filebeat directory:
+
+1. Start an Elasticsearch instance locally. For example, using Docker:
+
+ ```bash
+ docker run \
+ --name elasticsearch \
+ -p 9200:9200 -p 9300:9300 \
+ -e "xpack.security.http.ssl.enabled=false" -e "ELASTIC_PASSWORD=changeme" \
+ -e "discovery.type=single-node" \
+ --pull always --rm --detach \
+ docker.elastic.co/elasticsearch/elasticsearch:master-SNAPSHOT
+ ```
+
+2. Create an "admin" user on that Elasticsearch instance:
+
+ ```bash
+ curl -u elastic:changeme \
+ http://localhost:9200/_security/user/admin \
+ -X POST -H 'Content-Type: application/json' \
+ -d '{"password": "changeme", "roles": ["superuser"]}'
+ ```
+
+3. Create the testing binary: `make filebeat.test`
+4. Update fields yaml: `make update`
+5. Create python env: `make python-env`
+6. Source python env: `source ./build/python-env/bin/activate`
+7. Run a test, for example to check nginx access log parsing:
+
+ ```bash
+ INTEGRATION_TESTS=1 BEAT_STRICT_PERMS=false ES_PASS=changeme \
+ TESTING_FILEBEAT_MODULES=nginx \
+ pytest tests/system/test_modules.py -v --full-trace
+ ```
+
+8. Add and remove optional environment variables as required. Here are some useful ones:
+
+ * `TESTING_FILEBEAT_ALLOW_OLDER`: if set to 1, allows connecting to older versions of Elasticsearch.
+ * `TESTING_FILEBEAT_MODULES`: comma-separated list of modules to test.
+ * `TESTING_FILEBEAT_FILESETS`: comma-separated list of filesets to test.
+ * `TESTING_FILEBEAT_FILEPATTERN`: glob pattern for log files within the fileset to test.
+ * `GENERATE`: if set to 1, the expected documents will be generated (see the example after this list).
+
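+For example, a sketch of regenerating the expected documents for the nginx access fileset, reusing the placeholder values from the earlier steps:
+
+```bash
+GENERATE=1 INTEGRATION_TESTS=1 BEAT_STRICT_PERMS=false ES_PASS=changeme \
+TESTING_FILEBEAT_MODULES=nginx TESTING_FILEBEAT_FILESETS=access \
+pytest tests/system/test_modules.py -v --full-trace
+```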
+
+The Filebeat logs are written to the `build` directory. It may be useful to tail them in another terminal using `tail -F build/system-tests/run/test_modules.Test.*/output.log`.
+
+For example, if there’s a syntax error in an ingest pipeline, the test will probably just hang. The Filebeat log output will contain the error message from Elasticsearch.
+
diff --git a/docs/extend/generate-index-pattern.md b/docs/extend/generate-index-pattern.md
new file mode 100644
index 000000000000..ac1b7cc795e1
--- /dev/null
+++ b/docs/extend/generate-index-pattern.md
@@ -0,0 +1,17 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/generate-index-pattern.html
+---
+
+# Generating the Beat Index Pattern [generate-index-pattern]
+
+The index-pattern defines the format of each field, and it’s used by Kibana to know how to display the field. If you change the fields exported by the Beat, you need to generate a new index pattern for your Beat. Otherwise, you can just use the index pattern available under the `kibana/*/index-pattern` directory.
+
+The Beat index pattern is generated from the `fields.yml`, which contains all the fields exported by the Beat. For each field, besides the `type`, you can configure the `format` field. The format informs Kibana about how to display a certain field. Good examples are `percentage` or `bytes`, which display fields as `50%` or `5MB`.
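+
+For example, a sketch of a `fields.yml` entry that sets such a format (the field name is illustrative):
+
+```yaml
+- name: example.disk.used.bytes
+  type: long
+  format: bytes
+  description: >
+    Used disk space, displayed by Kibana as a human-readable size such as 5MB.
+```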
+
+To generate the index pattern from the `fields.yml`, you need to run the following command in the Beat repository:
+
+```shell
+make update
+```
+
diff --git a/docs/extend/getting-ready-new-protocol.md b/docs/extend/getting-ready-new-protocol.md
new file mode 100644
index 000000000000..1a427bccbccb
--- /dev/null
+++ b/docs/extend/getting-ready-new-protocol.md
@@ -0,0 +1,63 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/getting-ready-new-protocol.html
+---
+
+# Getting Ready [getting-ready-new-protocol]
+
+Packetbeat is written in [Go](http://golang.org/), so having Go installed and knowing the basics are prerequisites for understanding this guide. But don’t worry if you aren’t a Go expert. Go is a relatively new language, and very few people are experts in it. In fact, several people learned Go by contributing to Packetbeat and libbeat, including the original Packetbeat authors.
+
+You will also need a good understanding of the wire protocol that you want to add support for. For standard protocols or protocols used in open source projects, you can usually find detailed specifications and example source code. Wireshark is a very useful tool for understanding the inner workings of the protocols it supports.
+
+In some cases you can even make use of existing libraries for doing the actual parsing and decoding of the protocol. If the particular protocol has a Go implementation with a liberal enough license, you might be able to use it to parse and decode individual messages instead of writing your own parser.
+
+Before starting, please also read the [*Contributing to Beats*](./index.md).
+
+
+### Cloning and Compiling [_cloning_and_compiling]
+
+After you have [installed Go](https://golang.org/doc/install) and set up the [GOPATH](https://golang.org/doc/code.md#GOPATH) environment variable to point to your preferred workspace location, you can clone Packetbeat with the following commands:
+
+```shell
+$ mkdir -p ${GOPATH}/src/github.com/elastic
+$ cd ${GOPATH}/src/github.com/elastic
+$ git clone https://github.com/elastic/beats.git
+```
+
+Note: If you have multiple Go paths, use `${GOPATH%%:*}` instead of `${GOPATH}`.
+
+Then you can compile it with:
+
+```shell
+$ cd beats
+$ make
+```
+
+Note that the location where you clone is important. If you prefer working outside of the `GOPATH` environment, you can clone to another directory and only create a symlink to the `$GOPATH/src/github.com/elastic/` directory.
+
+
+## Forking and Branching [_forking_and_branching]
+
+We recommend the following workflow for contributing to Packetbeat:
+
+* Fork Beats in GitHub to your own account
+* In the `$GOPATH/src/github.com/elastic/beats` folder, add your fork as a new remote. For example (replace `tsg` with your GitHub account):
+
+```shell
+$ git remote add tsg git@github.com:tsg/beats.git
+```
+
+* Create a new branch for your work:
+
+```shell
+$ git checkout -b cool_new_protocol
+```
+
+* Commit as often as you like, and then push to your private fork with:
+
+```shell
+$ git push --set-upstream tsg cool_new_protocol
+```
+
+* When you are ready to submit your PR, simply do so from the GitHub web interface. Feel free to submit your PR early. You can still add commits to the branch after creating the PR. Submitting the PR early gives us more time to provide feedback and perhaps help you with it.
+
diff --git a/docs/extend/import-dashboards.md b/docs/extend/import-dashboards.md
new file mode 100644
index 000000000000..2f4bff91c611
--- /dev/null
+++ b/docs/extend/import-dashboards.md
@@ -0,0 +1,117 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/import-dashboards.html
+---
+
+# Importing Existing Beat Dashboards [import-dashboards]
+
+The official Beats come with Kibana dashboards, and starting with 6.0.0, they are part of every Beat package.
+
+You can use the Beat executable to import all the dashboards and the index pattern for a Beat, including the dependencies such as visualizations and searches.
+
+To import the dashboards, run the `setup` command.
+
+```shell
+./metricbeat setup
+```
+
+The `setup` phase loads several dependencies, such as:
+
+* Index mapping template in Elasticsearch
+* Kibana dashboards
+* Ingest pipelines
+* ILM policy
+
+The dependencies vary depending on the Beat you’re setting up.
+
+For more details about the `setup` command, see the command-line help. For example:
+
+```shell
+./metricbeat help setup
+
+This command does initial setup of the environment:
+
+ * Index mapping template in Elasticsearch to ensure fields are mapped.
+ * Kibana dashboards (where available).
+ * ML jobs (where available).
+ * Ingest pipelines (where available).
+ * ILM policy (for Elasticsearch 6.5 and newer).
+
+Usage:
+ metricbeat setup [flags]
+
+Flags:
+ --dashboards Setup dashboards
+ -h, --help help for setup
+ --index-management Setup all components related to Elasticsearch index management, including template, ilm policy and rollover alias
+ --pipelines Setup Ingest pipelines
+```
+
+The flags are useful when you don’t want to load everything. For example, to import only the dashboards, use the `--dashboards` flag:
+
+```shell
+./metricbeat setup --dashboards
+```
+
+Starting with Beats 6.0.0, the dashboards are no longer loaded directly into Elasticsearch. Instead, they are imported directly into Kibana. Thus, if your Kibana instance is not listening on localhost, or you enabled {{xpack}} for Kibana, you need to either configure the Kibana endpoint in the config for the Beat, or pass the Kibana host and credentials as arguments to the `setup` command. For example:
+
+```shell
+./metricbeat setup -E setup.kibana.host=192.168.3.206:5601 -E setup.kibana.username=elastic -E setup.kibana.password=secret
+```
+
+By default, the `setup` command imports the dashboards from the `kibana` directory, which is available in the Beat package.
+
+::::{note}
+The format of the saved dashboards is not compatible between Kibana 5.x and 6.x. Thus, the Kibana 5.x dashboards are available in the `5.x` directory, and the Kibana 6.0 and newer dashboards are in the `default` directory.
+::::
+
+
+If you are using customized dashboards, you can import them:
+
+* from a local directory:
+
+ ```shell
+ ./metricbeat setup -E setup.dashboards.directory=kibana
+ ```
+
+* from a local zip archive:
+
+ ```shell
+ ./metricbeat setup -E setup.dashboards.file=metricbeat-dashboards-6.0.zip
+ ```
+
+* from a zip archive available online:
+
+ ```shell
+ ./metricbeat setup -E setup.dashboards.url=path/to/url
+ ```
+
+ See [Kibana dashboards configuration](#import-dashboard-options) for a description of the `setup.dashboards` configuration options.
+
+
+## Import Dashboards for Development [import-dashboards-for-development]
+
+You can make use of the Magefile from the Beat GitHub repository to import the dashboards. If Kibana is running on localhost, then you can run the following command from the root of the Beat:
+
+```shell
+mage dashboards
+```
+
+
+## Kibana dashboards configuration [import-dashboard-options]
+
+The configuration file (`*.reference.yml`) of each Beat contains the `setup.dashboards` section for configuring from where to get the Kibana dashboards, as well as the name of the index pattern. Each of these configuration options can be overridden on the command line by using the `-E` flag.
+
+**`setup.dashboards.directory=`**
+: Local directory that contains the saved dashboards and their dependencies. The default value is the `kibana` directory available in the Beat package.
+
+**`setup.dashboards.file=`**
+: Local zip archive with the dashboards. The archive can contain Kibana dashboards for a single Beat or for multiple Beats. The dashboards of each Beat are placed under a separate directory with the name of the Beat.
+
+**`setup.dashboards.url=`**
+: Zip archive with the dashboards, available online. The archive can contain Kibana dashboards for a single Beat or for multiple Beats. The dashboards for each Beat are placed under a separate directory with the name of the Beat.
+
+**`setup.dashboards.index`**
+: You should only use this option if you want to change the index pattern name that’s used by default. For example, if the default is `metricbeat-*`, you can change it to `custombeat-*`.
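+
+For example, a sketch of overriding it on the command line together with the dashboards flag:
+
+```shell
+./metricbeat setup --dashboards -E 'setup.dashboards.index=custombeat-*'
+```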
+
+
diff --git a/docs/extend/index.md b/docs/extend/index.md
new file mode 100644
index 000000000000..9d1fc2d86f7b
--- /dev/null
+++ b/docs/extend/index.md
@@ -0,0 +1,194 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/beats-contributing.html
+---
+
+# Contribute to Beats [beats-contributing]
+
+If you have a bugfix or new feature that you would like to contribute, please start by opening a topic on the [forums](https://discuss.elastic.co/c/beats). It may be that somebody is already working on it, or that there are particular issues that you should know about before implementing the change.
+
+We enjoy working with contributors to get their code accepted. There are many approaches to fixing a problem and it is important to find the best approach before writing too much code. After committing your code, check out the [Elastic Contributor Program](https://www.elastic.co/community/contributor) where you can earn points and rewards for your contributions.
+
+The process for contributing to any of the Elastic repositories is similar.
+
+
+## Contribution Steps [contribution-steps]
+
+1. Please make sure you have signed our [Contributor License Agreement](https://www.elastic.co/contributor-agreement/). We are not asking you to assign copyright to us, but to give us the right to distribute your code without restriction. We ask this of all contributors in order to assure our users of the origin and continuing existence of the code. You only need to sign the CLA once.
+2. Send a pull request! Push your changes to your fork of the repository and [submit a pull request](https://help.github.com/articles/using-pull-requests) using our [pull request guidelines](/extend/pr-review.md). New PRs go to the main branch. The Beats core team will backport your PR if it is necessary.
+
+In the pull request, describe what your changes do and mention any bugs/issues related to the pull request. Please also add a changelog entry to [CHANGELOG.next.asciidoc](https://github.com/elastic/beats/blob/main/CHANGELOG.next.asciidoc).
+
+
+## Setting Up Your Dev Environment [setting-up-dev-environment]
+
+The Beats are Go programs, so install the 1.22.10 version of [Go](http://golang.org/), which is the version being used for Beats development.
+
+After [installing Go](https://golang.org/doc/install), set the [GOPATH](https://golang.org/doc/code.md#GOPATH) environment variable to point to your workspace location, and make sure `$GOPATH/bin` is in your PATH.
+
+::::{note}
+One deterministic way to install the proper Go version to work with Beats is to use the [GVM](https://github.com/andrewkroh/gvm) Go version manager. An example for Mac users would be:
+::::
+
+
+```shell
+gvm use 1.22.10
+eval $(gvm 1.22.10)
+```
+
+Then you can clone Beats git repository:
+
+```shell
+mkdir -p ${GOPATH}/src/github.com/elastic
+git clone https://github.com/elastic/beats ${GOPATH}/src/github.com/elastic/beats
+```
+
+::::{note}
+If you have multiple go paths, use `${GOPATH%%:*}` instead of `${GOPATH}`.
+::::
+
+
+Beats developers primarily use [Mage](https://github.com/magefile/mage) for development. You can install mage using a make target:
+
+```shell
+make mage
+```
+
+Then you can compile a particular Beat by using Mage. For example, for Filebeat:
+
+```shell
+cd beats/filebeat
+mage build
+```
+
+You can list all available mage targets with:
+
+```shell
+mage -l
+```
+
+Some of the Beats might have extra development requirements, in which case you’ll find a CONTRIBUTING.md file in the Beat directory.
+
+We use an [EditorConfig](http://editorconfig.org/) file in the beats repository to standardise how different editors handle whitespace, line endings, and other coding styles in our files. Most popular editors have a [plugin](http://editorconfig.org/#download) for EditorConfig and we strongly recommend that you install it.
+
+
+## Update scripts [update-scripts]
+
+The Beats use a variety of scripts based on Python, make, and mage to generate configuration files and documentation. Make sure to use the version of Python listed in the [.python-version](https://github.com/elastic/beats/blob/main/.python-version) file.
+
+The primary command for updating generated files is:
+
+```shell
+make update
+```
+
+Each Beat has its own `update` target (for both `make` and `mage`), as well as a master `update` in the repository root. If a PR adds or removes a dependency, run `make update` in the root `beats` directory.
+
+Another command properly formats go source files and adds a copyright header:
+
+```shell
+make fmt
+```
+
+Both of these commands should be run before submitting a PR. You can view all the available make targets with `make help`.
+
+These commands have the following dependencies:
+
+* Python >= 3.7
+* Python [venv module](https://docs.python.org/3/library/venv.html)
+* [Mage](https://github.com/magefile/mage)
+
+The Python venv module is included in the standard library in Python 3. On Debian/Ubuntu systems, you also need to install the `python3-venv` package, which includes additional support scripts:
+
+```shell
+sudo apt-get install python3-venv
+```
+
+
+## Selecting Build Targets [build-target-env-vars]
+
+Beats is built using the `make release` target. By default, make will select from a limited number of preset build targets:
+
+* darwin/amd64
+* darwin/arm64
+* linux/amd64
+* windows/amd64
+
+You can change build targets using the `PLATFORMS` environment variable. Targets set with the `PLATFORMS` variable can either be a GOOS value, or a GOOS/arch pair. For example, `linux` and `linux/amd64` are both valid targets. You can select multiple targets, and the `PLATFORMS` list is space delimited, for example `darwin windows` will build on all supported darwin and windows architectures. In addition, you can add or remove from the list of build targets by prepending `+` or `-` to a given target. For example: `+bsd` or `-darwin`.
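+
+For example, here is a sketch of a couple of invocations using these rules:
+
+```shell
+# Build only for 64-bit Linux
+PLATFORMS="linux/amd64" make release
+
+# Add 64-bit ARM Linux to the default targets and drop all darwin targets
+PLATFORMS="+linux/arm64 -darwin" make release
+```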
+
+You can find the complete list of supported build targets with `go tool dist list`.
+
+
+## Linting [running-linter]
+
+Beats uses [golangci-lint](https://golangci-lint.run/). You can run the pre-configured linter against your change:
+
+```shell
+mage llc
+```
+
+`llc` stands for `Lint Last Change` which includes all the Go files that were changed in either the last commit (if you’re on the `main` branch) or in a difference between your feature branch and the `main` branch.
+
+It’s expected that sometimes a contributor will be asked to fix linter issues unrelated to their contribution since the linter was introduced later than changes in some of the files.
+
+You can also run the linter against an individual package, for example the Filebeat command package:
+
+```shell
+golangci-lint run ./filebeat/cmd/...
+```
+
+
+## Testing [running-testsuite]
+
+You can run the whole testsuite with the following command:
+
+```shell
+make testsuite
+```
+
+Running the testsuite has the following requirements:
+
+* Python >= 3.7
+* Docker >= 1.12
+* Docker-compose >= 1.11
+
+For more details, refer to the [Testing](/extend/testing.md) guide.
+
+
+## Documentation [documentation]
+
+The main documentation for each Beat is located under `/docs` and is based on [AsciiDoc](https://docs.asciidoctor.org/asciidoc/latest/). The Beats documentation also makes extensive use of conditionals and content reuse to ensure consistency and accuracy. Before contributing to the documentation, read the following resources:
+
+* [Docs HOWTO](https://github.com/elastic/docs/blob/master/README.asciidoc)
+* [Contributing to the docs](/extend/contributing-docs.md)
+
+
+## Dependencies [dependencies]
+
+To create Beats, we rely on Go libraries and other external tools.
+
+
+### Other dependencies [_other_dependencies]
+
+Besides Go libraries, we are using development tools to generate parsers for inputs and processors.
+
+The following packages are required to run `go generate`:
+
+
+#### Auditbeat [_auditbeat]
+
+* FlatBuffers >= 1.9
+
+
+#### Filebeat [_filebeat]
+
+* Graphviz >= 2.43.0
+* Ragel >= 6.10
+
+
+## Changelog [changelog]
+
+To keep up to date with changes to the official Beats for community developers, follow the developer changelog [here](https://github.com/elastic/beats/blob/main/CHANGELOG-developer.next.asciidoc).
+
+
+
diff --git a/docs/extend/metricbeat-dev-overview.md b/docs/extend/metricbeat-dev-overview.md
new file mode 100644
index 000000000000..bcd6d25c472e
--- /dev/null
+++ b/docs/extend/metricbeat-dev-overview.md
@@ -0,0 +1,21 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/metricbeat-dev-overview.html
+---
+
+# Overview [metricbeat-dev-overview]
+
+Metricbeat consists of modules and metricsets. A Metricbeat module is typically named after the service the metrics are fetched from, such as redis, mysql, and so on. Each module can contain multiple metricsets. A metricset represents multiple metrics that are normally retrieved with one request from the remote system. For example, the Redis `info` metricset retrieves info that you get when you run the Redis `INFO` command, and the MySQL `status` metricset retrieves info that you get when you issue the MySQL `SHOW GLOBAL STATUS` query.
+
+
+## Module and Metricsets Requirements [_module_and_metricsets_requirements]
+
+To guarantee the best user experience, it’s important to us that only high quality modules are part of Metricbeat. The modules and metricsets that are contributed must meet the following requirements:
+
+* Complete `fields.yml` file to generate docs and Elasticsearch templates
+* Documentation files
+* Integration tests
+* 80% test coverage (unit, integration, and system tests combined)
+
+Metricbeat allows you to build a wide variety of modules and metricsets on top of it. For a module to be accepted, it should focus on fetching service metrics directly from the service itself and not via a third-party tool. The goal is to have as few movable parts as possible and for Metricbeat to run as close as possible to the service that it needs to monitor.
+
diff --git a/docs/extend/metricbeat-developer-guide.md b/docs/extend/metricbeat-developer-guide.md
new file mode 100644
index 000000000000..264f1d0b8916
--- /dev/null
+++ b/docs/extend/metricbeat-developer-guide.md
@@ -0,0 +1,29 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/metricbeat-developer-guide.html
+---
+
+# Extending Metricbeat [metricbeat-developer-guide]
+
+Metricbeat periodically interrogates other services to fetch key metrics information. As a developer, you can use Metricbeat in two different ways:
+
+* Extend Metricbeat directly
+* Create your own Beat and use Metricbeat as a library
+
+We recommend that you start by creating your own Beat to keep the development of your own module or metricset independent of Metricbeat. At a later stage, if you decide to add a module to Metricbeat, you can reuse the code without making additional changes.
+
+The following topics describe how to contribute to Metricbeat by adding metricsets, modules, and new Beats based on Metricbeat:
+
+* [Overview](./metricbeat-dev-overview.md)
+* [Creating a Metricset](./creating-metricsets.md)
+* [Metricset Details](./metricset-details.md)
+* [Creating a Metricbeat Module](./creating-metricbeat-module.md)
+* [Metricbeat Developer FAQ](./dev-faq.md)
+
+If you would like to contribute to Metricbeat or the Beats project, also see [*Contributing to Beats*](./index.md).
+
+
+
+
+
+
diff --git a/docs/extend/metricset-details.md b/docs/extend/metricset-details.md
new file mode 100644
index 000000000000..c1831564bb95
--- /dev/null
+++ b/docs/extend/metricset-details.md
@@ -0,0 +1,257 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/metricset-details.html
+---
+
+# Metricset Details [metricset-details]
+
+This topic provides additional details about creating metricsets.
+
+
+## Adding Special Configuration Options [_adding_special_configuration_options]
+
+Each metricset can have its own configuration variables defined. To make use of these variables, you must extend the `New` method. For example, let’s assume that you want to add a `password` config option to the metricset. You would extend `beat.yml` in the following way:
+
+```yaml
+metricbeat.modules:
+- module: {module}
+ metricsets: ["{metricset}"]
+ password: "test1234"
+```
+
+To read in the new `password` config option, you need to modify the `New` method. First you define a config struct that contains the value types to be read. You can set default values, as needed. Then you pass the config to the `UnpackConfig` method for loading the configuration.
+
+Your implementation should look something like this:
+
+```go
+type MetricSet struct {
+ mb.BaseMetricSet
+ password string
+}
+
+func New(base mb.BaseMetricSet) (mb.MetricSet, error) {
+
+ // Unpack additional configuration options.
+ config := struct {
+ Password string `config:"password"`
+ }{
+ Password: "",
+ }
+ err := base.Module().UnpackConfig(&config)
+ if err != nil {
+ return nil, err
+ }
+
+ return &MetricSet{
+ BaseMetricSet: base,
+ password: config.Password,
+ }, nil
+}
+```
+
+
+### Timeout Connections to Services [_timeout_connections_to_services]
+
+Each time the `Fetch` method is called, it makes a request to the service, so it’s important to handle the connections correctly. We recommend that you set up the connections in the `New` method and persist them in the `MetricSet` object. This allows connections to be reused.
+
+One very important point is that connections must respect the timeout variable: `base.Module().Config().Timeout`. If the timeout elapses before the request completes, the request must be ended and an error must be returned to make sure the next request can be started on time. By default the timeout is set to the period, so one request is ended before a new request is made.
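+
+As a non-authoritative sketch, a metricset that queries an HTTP endpoint could honor this setting by building its client in `New` and reusing it for every `Fetch`. The `client` field and the `net/http` import are assumptions, not part of the generated skeleton:
+
+```go
+type MetricSet struct {
+    mb.BaseMetricSet
+    client *http.Client // reused across Fetch calls
+}
+
+func New(base mb.BaseMetricSet) (mb.MetricSet, error) {
+    return &MetricSet{
+        BaseMetricSet: base,
+        // Bound every request by the module timeout so a slow service
+        // cannot delay the next collection period.
+        client: &http.Client{Timeout: base.Module().Config().Timeout},
+    }, nil
+}
+```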
+
+If a request must be ended or has an error, make sure that you return a useful error message. This error message is also sent to Elasticsearch, making it possible to not only fetch metrics from the service, but also report potential problems or errors with the metricset.
+
+
+### Data Transformation [_data_transformation]
+
+If the data transformation that has to happen in the `Fetch` method is extensive, we recommend that you create a second file called `data.go` in the same package as the metricset. The `data.go` file should contain a function called `eventMapping(...)`. A separate file is not required, but is currently a best practice because it isolates the functionality of the metricset and `Fetch` method from the data mapping.
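+
+A minimal sketch of what such a file can contain (the exact signature and field names vary per metricset; this is not a prescribed interface):
+
+```go
+// eventMapping converts the raw response from the service into the
+// fields that the Fetch method reports as an event.
+func eventMapping(status map[string]interface{}) map[string]interface{} {
+    return map[string]interface{}{
+        "uptime":      status["uptime_in_seconds"],
+        "connections": status["connected_clients"],
+    }
+}
+```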
+
+
+### fields.yml [_fields_yml]
+
+You can find up to three different types of files named `fields.yml` in the beats repository for each Metricbeat module:
+
+* `metricbeat/fields.yml`: Contains the definitions to create the Elasticsearch template, the Kibana index pattern configuration and the exported fields documentation for metricsets. To make sure the Elasticsearch template is correct, it’s important to keep this file up-to-date with all the changes. Generally, you shouldn’t touch this file manually because it’s generated by some commands in the build environment.
+* `metricbeat/module/{{module}}/_meta/fields.yml`: Contains the general top level structure for all metricsets in a module. Normally you only need to modify the description in this file. Here is an example for the `fields.yml` file from the MySQL module.
+
+ ```yaml
+ - key: mysql
+ title: "MySQL"
+ description: >
+ MySQL server status metrics collected from MySQL.
+ short_config: false
+ release: ga
+ fields:
+ - name: mysql
+ type: group
+ description: >
+ `mysql` contains the metrics that were obtained from MySQL
+ query.
+ fields:
+ ```
+
+* `metricbeat/module/{{module}}/{metricset}/_meta/fields.yml`: Contains all fields definitions retrieved by the metricset. As field types, each field must have a core data type [supported by elasticsearch](elasticsearch://reference/elasticsearch/mapping-reference/field-data-types.md#_core_datatypes). Here’s a very basic example that shows one group from the MySQL `status` metricset:
+
+ ```yaml
+ - name: status
+ type: group
+ description: >
+ `status` contains the metrics that were obtained by the status SQL query.
+ fields:
+ - name: aborted
+ type: group
+ description: Aborted status fields.
+ fields:
+ - name: clients
+ type: integer
+ description: >
+ The number of connections that were aborted because the client died without closing the connection properly.
+
+ - name: connects
+ type: integer
+ description: >
+ The number of failed attempts to connect to the MySQL server.
+ ```
+
+
+
+### Testing [_testing]
+
+It’s important to also add tests for your metricset. There are three different types of tests that you need for testing a Beat:
+
+* unit tests
+* integration tests
+* system tests
+
+We recommend that you use all three when you create a metricset. Unit tests are written in Go and have no dependencies. Integration tests are also written in Go but require the service from which the module collects metrics to also be running. System tests for Metricbeat also require the service to be running in most cases and are written in Python based on our small Python test framework. We use [venv](https://docs.python.org/3/library/venv.html) to deal with Python dependencies. You can simply run the command `make python-env` and then `. build/python-env/bin/activate`.
+
+You should use a combination of the three test types to test your metricsets because each method has advantages and disadvantages. To get started with your own tests, it’s best to look at the existing tests. You’ll find the unit and integration tests in the `_test.go` files under existing modules and metricsets. Integration tests usually take the form of `TestFetch` and `TestData`. The system tests are under `tests/systems`.
+
+
+#### Adding a Test Environment [_adding_a_test_environment]
+
+Integration and system tests need an environment that’s running the service. You can create this environment by using Docker and a docker-compose file. If you add a module that requires a service, you must add the service to the virtual environment. To do this, you:
+
+* Update the `docker-compose.yml` file with your environment
+* Update the `docker-entrypoint.sh` script
+
+The `docker-compose.yml` file is at the root of Metricbeat. Most services have existing Docker modules and can be added as simply as Redis:
+
+```yaml
+redis:
+ image: redis:3.2.3
+```
+
+To allow the Beat to access your service, make sure that you define the environment variables in the docker-compose file and add the link to the container:
+
+```yaml
+beat:
+ links:
+ - redis
+ environment:
+ - REDIS_HOST=redis
+ - REDIS_PORT=6379
+```
+
+To make sure the service is running before the tests are started, modify the `docker-entrypoint.sh` script to add a check that verifies your service is running. For example, the check for Redis looks like this:
+
+```shell
+waitFor ${REDIS_HOST} ${REDIS_PORT} Redis
+```
+
+The environment expects your service to be available as soon as it receives a response from the given address and port.
+
+
+#### Adding the standard metricset integration tests [_adding_the_standard_metricset_integration_tests]
+
+There are normally two integration tests that are part of every metricset: `TestFetch` and `TestData`. Both tests will start up a new instance of your metricset and fetch an event. In order to start a metricset, you need to create a configuration object:
+
+```go
+func getConfig() map[string]interface{} {
+ return map[string]interface{}{
+ "module": "{module}",
+ "metricsets": []string{"{metricset}"},
+ "hosts": []string{GetEnvHost() + ":" + GetEnvPort()}, <1>
+ }
+}
+
+func GetEnvHost() string { <2>
+ host := os.Getenv("{module}_HOST")
+ if len(host) == 0 {
+ host = "127.0.0.1"
+ }
+ return host
+}
+
+func GetEnvPort() string { <2>
+ port := os.Getenv("{module}_PORT")
+
+ if len(port) == 0 {
+ port = "1234"
+ }
+ return port
+}
+```
+
+1. Add any additional config options your metricset needs here.
+2. The endpoint used by the metricset needs to be configurable for manual and automated testing. Environment variables should be defined in the module under `_meta/env` and included in the `docker-compose.yml` file.
+
+
+The `TestFetch` integration test will return a single event from your metricset, which you can use to test the validity of the data. `TestData` will (re)generate the `_meta/data.json` file that documents the data reported by the metricset.
+
+```go
+import (
+ "os"
+ "testing"
+
+ "github.com/stretchr/testify/assert"
+
+ "github.com/elastic/beats/libbeat/tests/compose"
+ mbtest "github.com/elastic/beats/metricbeat/mb/testing"
+)
+
+func TestFetch(t *testing.T) {
+ compose.EnsureUp(t, "{module}") <1>
+
+ f := mbtest.NewReportingMetricSetV2Error(t, getConfig())
+
+ events, errs := mbtest.ReportingFetchV2Error(f)
+ if len(errs) > 0 {
+ t.Fatalf("Expected 0 errord, had %d. %v\n", len(errs), errs)
+ }
+
+ assert.NotEmpty(t, events) <2>
+
+}
+
+func TestData(t *testing.T) {
+
+ f := mbtest.NewReportingMetricSetV2Error(t, getConfig())
+
+ err := mbtest.WriteEventsReporterV2Error(f, t, "") <3>
+ if !assert.NoError(t, err) {
+ t.FailNow()
+ }
+}
+```
+
+1. Use this to start the docker service associated with your metricset.
+2. Add any further validity checks to verify the metricset is working.
+3. `WriteEventsReporterV2Error` will take the first valid event from the metricset and write it to `_meta/data.json`.
+
+
+
+#### Running the Tests [_running_the_tests]
+
+To run all the tests, run `make testsuite`. To only run unit tests, run `mage unitTest`, or for integration tests `mage integTest`. Be aware that a running Docker environment is needed for integration and system tests.
+
+To run `TestData` and generate the `data.json` file, run `go test -tags=integration -data -run TestData` in the directory where your test is located.
+
+To run the integration tests for a single module, set the `MODULE` environment variable to the name of the directory of the module. For example, you can run the following command to run the integration tests for the `apache` module:
+
+```shell
+MODULE=apache mage integTest
+```
+
+
+## Documentation [_documentation]
+
+Each module must be documented. The documentation is based on asciidoc and is in the file `module/{{module}}/_meta/docs.asciidoc` for the module and in `module/{{module}}/{metricset}/_meta/docs.asciidoc` for the metricset. Basic documentation with the config file and an example output is automatically generated. Use these files to document specific configuration options or usage examples.
+
diff --git a/docs/extend/new-dashboards.md b/docs/extend/new-dashboards.md
new file mode 100644
index 000000000000..912c383ae4a6
--- /dev/null
+++ b/docs/extend/new-dashboards.md
@@ -0,0 +1,28 @@
+---
+navigation_title: "Creating New Kibana Dashboards"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/new-dashboards.html
+---
+
+# Creating New Kibana Dashboards for a Beat or a Beat module [new-dashboards]
+
+
+When contributing to Beats development, you may want to add new dashboards or customize the existing ones. To get started, you can [import the Kibana dashboards](/extend/import-dashboards.md) that come with the official Beats and use them as a starting point for your own dashboards. When you’re done making changes to the dashboards in Kibana, you can use the `export_dashboards` script to [export the dashboards](/extend/export-dashboards.md), along with all dependencies, to a local directory.
+
+To make sure the dashboards are compatible with the latest version of Kibana and Elasticsearch, we recommend that you use the virtual environment under [beats/testing/environments](https://github.com/elastic/beats/tree/master/testing/environments) to import, create, and export the Kibana dashboards.
+
+The following topics provide more detail about importing and working with Beats dashboards:
+
+* [Importing Existing Beat Dashboards](/extend/import-dashboards.md)
+* [Building Your Own Beat Dashboards](/extend/build-dashboards.md)
+* [Generating the Beat Index Pattern](/extend/generate-index-pattern.md)
+* [Exporting New and Modified Beat Dashboards](/extend/export-dashboards.md)
+* [Archiving Your Beat Dashboards](/extend/archive-dashboards.md)
+* [Sharing Your Beat Dashboards](/extend/share-beat-dashboards.md)
+
+
+
+
+
+
+
diff --git a/docs/extend/new-protocol.md b/docs/extend/new-protocol.md
new file mode 100644
index 000000000000..1ad4793aae66
--- /dev/null
+++ b/docs/extend/new-protocol.md
@@ -0,0 +1,16 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/new-protocol.html
+---
+
+# Adding a New Protocol to Packetbeat [new-protocol]
+
+The following topics describe how to add a new protocol to Packetbeat:
+
+* [Getting Ready](/extend/getting-ready-new-protocol.md)
+* [Protocol Modules](/extend/protocol-modules.md)
+* [Testing](/extend/protocol-testing.md)
+
+
+
+
diff --git a/docs/extend/pr-review.md b/docs/extend/pr-review.md
new file mode 100644
index 000000000000..c764b9fff0b2
--- /dev/null
+++ b/docs/extend/pr-review.md
@@ -0,0 +1,23 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/pr-review.html
+---
+
+# Pull request review guidelines [pr-review]
+
+Every change made to Beats must be held to a high standard, and while the responsibility for quality in a pull request ultimately lies with the author, Beats team members have a responsibility as reviewers to verify that quality during their review process. Where this document is unclear or inappropriate, let common sense and consensus override it.
+
+
+## Code Style [_code_style]
+
+Everyone’s got an opinion on style. To avoid spending time on this issue, we rely almost exclusively on `go fmt` and [hound](https://houndci.com/) to police style. If neither of these tools complains, the code is almost certainly fine. There may be exceptions to this, but they should be extremely rare. Only override the judgement of these tools in the most unusual of situations.
+
+
+## Flaky Tests [_flaky_tests]
+
+As software projects grow, so does the complexity of their test cases, and with that the probability of some tests becoming *flaky*. It is everyone’s responsibility to handle flaky tests. If you notice a pull request build failing for a reason that is unrelated to the pushed code, follow the procedure below:
+
+1. Create an issue using the "Flaky Test" github issue template with the "Flaky Test" label attached.
+2. Create a PR to mute or fix the flaky test.
+3. Merge that PR and rebase off of it before continuing with the normal PR process for your original PR.
+
diff --git a/docs/extend/protocol-modules.md b/docs/extend/protocol-modules.md
new file mode 100644
index 000000000000..fde1f19979fc
--- /dev/null
+++ b/docs/extend/protocol-modules.md
@@ -0,0 +1,9 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/protocol-modules.html
+---
+
+# Protocol Modules [protocol-modules]
+
+We are working on updating this section. While you’re waiting for updates, you might want to try out the TCP protocol generator at [https://github.com/elastic/beats/tree/master/packetbeat/scripts/tcp-protocol](https://github.com/elastic/beats/tree/master/packetbeat/scripts/tcp-protocol).
+
diff --git a/docs/extend/protocol-testing.md b/docs/extend/protocol-testing.md
new file mode 100644
index 000000000000..9b48102bf0e0
--- /dev/null
+++ b/docs/extend/protocol-testing.md
@@ -0,0 +1,9 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/protocol-testing.html
+---
+
+# Testing [protocol-testing]
+
+We are working on updating this section.
+
diff --git a/docs/extend/python-beats.md b/docs/extend/python-beats.md
new file mode 100644
index 000000000000..b04754cc8dcb
--- /dev/null
+++ b/docs/extend/python-beats.md
@@ -0,0 +1,68 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/python-beats.html
+---
+
+# Python in Beats [python-beats]
+
+Python is used for Beats development; it is the language used to implement system tests and some other tools. Python dependencies are managed by the use of virtual environments, supported by [venv](https://docs.python.org/3/library/venv.html).
+
+Beats development requires Python >= 3.7.
+
+## Installing Python and venv [installing-python]
+
+Python comes preinstalled on many operating systems. If it is not installed on your system, you can follow the instructions available at [https://www.python.org/downloads/](https://www.python.org/downloads/).
+
+In Ubuntu/Debian systems, Python 3 can be installed with:
+
+```sh
+sudo apt-get install python3 python3-venv
+```
+
+There are packages for specific minor versions, so, for example, if you want to use Python 3.7, it can be installed with the following command:
+
+```sh
+sudo apt-get install python3.7 python3.7-venv
+```
+
+It is recommended to use Python >= 3.7.
+
+
+## Working with virtual environments [python-virtual-environments]
+
+All `make` and `mage` targets manage their own virtual environments in a transparent way, so for the most common operations required when contributing to Beats, nothing special needs to be done.
+
+Virtual environments used by `make` can be found in most Beats directories under `build/python-env`. They are created by targets that need them, or can be explicitly created by running `make python-env`. The ones used by `mage` are created when required under `build/ve`.
+
+There are some environment variables that can be used to customize the creation of these virtual environments:
+
+* `PYTHON_EXE`: Python executable to be used in the virtual environment. It has to exist in the path.
+* `PYTHON_ENV`: Path to the virtual environment to use. If it doesn’t exist, it is created by `make` or `mage` targets when needed.
+
+Virtual environments can also be used without `make` or `mage`; this is common, for example, when running individual system tests with `pytest`. There are two ways to run commands from the virtual environment:
+
+* "Activating" the virtual environment in your current terminal running `source ./build/python-env/bin/activate`. Virtual environment can be deactivated by running `deactivate`.
+* Directly running commands from the virtual environment path. For example `pytest` can be executed as `./build/python-env/bin/pytest`.
+
+To recreate a virtual environment, remove its directory. All virtual environments are also removed with `make clean`.
+
+
+## Working with older versions [python-older-versions]
+
+Older versions of Beats were not compatible with Python 3. If you need to temporarily work on one of these versions of Beats, and you don’t want to remove your current virtual environments, you can use environment variables to run commands in a temporary virtual environment.
+
+For example you can run `make update` with Python 2.7 with the following command:
+
+```sh
+PYTHON_EXE=python2.7 PYTHON_ENV=/tmp/venv2 make update
+```
+
+If you need to run tests you can also create a virtual environment and then activate it to run commands from there:
+
+```sh
+PYTHON_EXE=python2.7 PYTHON_ENV=/tmp/venv2 make python-env
+source /tmp/venv2/bin/activate
+...
+```
+
+
diff --git a/docs/extend/share-beat-dashboards.md b/docs/extend/share-beat-dashboards.md
new file mode 100644
index 000000000000..782cb21a22c3
--- /dev/null
+++ b/docs/extend/share-beat-dashboards.md
@@ -0,0 +1,9 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/share-beat-dashboards.html
+---
+
+# Sharing Your Beat Dashboards [share-beat-dashboards]
+
+When you’re done with your own Beat dashboards, how about letting everyone know? You can create a topic on the [Beats forum](https://discuss.elastic.co/c/beats), and provide the link to the zip archive together with a short description.
+
diff --git a/docs/extend/testing.md b/docs/extend/testing.md
new file mode 100644
index 000000000000..ed288f78f817
--- /dev/null
+++ b/docs/extend/testing.md
@@ -0,0 +1,118 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/devguide/current/testing.html
+---
+
+# Testing [testing]
+
+Beats has various sets of tests. This guide should help you understand how the different test suites work, how they are used, and how new tests are added.
+
+In general there are two major test suites:
+
+* Tests written in Go
+* Tests written in Python
+
+The tests written in Go use the [Go Testing package](https://golang.org/pkg/testing/). The tests written in Python depend on [pytest](https://docs.pytest.org/en/latest/) and require a compiled and executable binary from the Go code. The Python tests run a Beat with a specific configuration and parameters and either check if the output is as expected or if the correct things show up in the logs.
+
+For both of the above test suites, so-called integration tests exist. Integration tests in Beats are tests which require an external system like Elasticsearch to verify that the integration with this service works as expected. The Beats test suite provides Docker containers and docker-compose files to start these environments, but a developer can also run the required services locally.
+
+## Running Go Tests [_running_go_tests]
+
+The Go tests can be executed in each Go package by running `go test .`. This will execute all tests which don’t require an external service to be running. To run all non-integration tests for a Beat, run `mage unitTest`.
+
+All Go tests are in the same package as the tested code itself and have the suffix `_test` in the file name. Most of the tests are in the same package as the rest of the code. Some of the tests which should be separate from the rest of the code or should not use private variables go under `{{packagename}}_test`.
+
+### Running Go Integration Tests [_running_go_integration_tests]
+
+Integration tests are labelled with the `//go:build integration` build tag and use the `_integration_test.go` suffix.
+
+To run the integration tests use the `mage goIntegTest` target, which will start the required services using [docker-compose](https://docs.docker.com/compose/) and run all integration tests.
+
+It is also possible to run module-specific integration tests. For example, to run Kafka-only tests, use `MODULE=kafka mage integTest -v`.
+
+It is possible to start the `docker-compose` services manually to allow selecting which specific tests should be run. An example follows for filebeat:
+
+```bash
+cd filebeat
+# Pull and build the containers. Only needs to be done once unless you change the containers.
+mage docker:composeBuild
+# Bring up all containers, wait until they are healthy, and put them in the background.
+mage docker:composeUp
+# Run all integration tests.
+go test ./filebeat/... -tags integration
+# Stop all started containers.
+mage docker:composeDown
+```
+
+
+### Generate sample events [_generate_sample_events]
+
+Go tests support generating sample events to be used as fixtures.
+
+This generation can be performed by running `go test --data`. This functionality is supported by Packetbeat and Metricbeat.
+
+In Metricbeat, run the command from within a module like this: `go test --tags integration,azure --data --run "TestData"`. Make sure to add the relevant tags (`integration` is common; then add module- and metricset-specific tags).
+
+A note about tags: the `--data` flag is a custom flag added by the Metricbeat and Packetbeat frameworks. It will not be present if the build tags do not match, as the relevant code will not be run and will be silently skipped (without the tag, the test file is ignored by the Go compiler, so the framework doesn’t load). This may happen if the metricset under test requires additional build tags (for example, the GCP billing metricset also requires the `billing` tag).
+
+
+
+## Running System (integration) Tests (Python and Go) [_running_system_integration_tests_python_and_go]
+
+The system tests are defined in the `tests/system` (for legacy Python tests) and `tests/integration` (for Go tests) directories. They require a testing binary to be available and the Python environment to be set up.
+
+To create the testing binary run `mage buildSystemTestBinary`. This will create the test binary in the beat directory. To set up the Python testing environment run `mage pythonVirtualEnv` which will create a virtual environment with all test dependencies and print its location. To activate it, the instructions depend on your operating system. See the [virtualenv documentation](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#activating-a-virtual-environment).
+
+To run the system and integration tests use the `mage pythonIntegTest` target, which will start the required services using [docker-compose](https://docs.docker.com/compose/) and run all integration tests. Similar to Go integration tests, the individual steps can be done manually to allow selecting which tests should be run:
+
+```bash
+# Create and activate the system test virtual environment (assumes a Unix system).
+source $(mage pythonVirtualEnv)/bin/activate
+
+# Pull and build the containers. Only needs to be done once unless you change the containers.
+mage docker:composeBuild
+
+# Bring up all containers, wait until they are healthy, and put them in the background.
+mage docker:composeUp
+
+# Run all system and integration tests.
+INTEGRATION_TESTS=1 pytest ./tests/system
+
+# Stop all started containers.
+mage docker:composeDown
+```
+
+Filebeat’s module python tests have additional documentation found in the [Filebeat module](/extend/filebeat-modules-devguide.md) guide.
+
+
+## Test commands [_test_commands]
+
+To list all mage commands run `mage -l`. A quick summary of the available test Make commands is:
+
+* `unit`: Go tests
+* `unit-tests`: Go tests with coverage reports
+* `integration-tests`: Go tests with services in local docker
+* `integration-tests-environment`: Go tests inside docker with service in docker
+* `fast-system-tests`: Python tests
+* `system-tests`: Python tests with coverage report
+* `INTEGRATION_TESTS=1 system-tests`: Python tests with local services
+* `system-tests-environment`: Python tests inside docker with service in docker
+* `testsuite`: Runs the complete test suite in the Docker environment
+* `test`: Runs testsuite without environment
+
+There are two experimental test commands:
+
+* `benchmark-tests`: Running Go tests with `-bench` flag
+* `load-tests`: Running system tests with `LOAD_TESTS=1` flag
+
+
+## Coverage report [_coverage_report]
+
+If the tests were run with coverage enabled, the coverage report files can be found under `build/docs`. To create more human-readable files out of the `.cov` files, use `make coverage-report`. It creates an `.html` file for each report and a `full.html` file as a summary of all reports in the directory `build/coverage`.
+
+
+## Race detection [_race_detection]
+
+All tests can be run with the Go race detector enabled by setting the environment variable `RACE_DETECTOR=1`. This applies to tests in Go and Python. For Python, the test binary has to be recompiled when the flag is changed. Having race detection enabled will slow down the tests.
+
+
diff --git a/docs/extend/toc.yml b/docs/extend/toc.yml
new file mode 100644
index 000000000000..1774ca5cf8da
--- /dev/null
+++ b/docs/extend/toc.yml
@@ -0,0 +1,32 @@
+toc:
+ - file: index.md
+ - file: pr-review.md
+ - file: contributing-docs.md
+ - file: testing.md
+ - file: community-beats.md
+ children:
+ - file: event-fields-yml.md
+ - file: event-conventions.md
+ - file: python-beats.md
+ - file: new-dashboards.md
+ children:
+ - file: import-dashboards.md
+ - file: build-dashboards.md
+ - file: generate-index-pattern.md
+ - file: export-dashboards.md
+ - file: archive-dashboards.md
+ - file: share-beat-dashboards.md
+ - file: new-protocol.md
+ children:
+ - file: getting-ready-new-protocol.md
+ - file: protocol-modules.md
+ - file: protocol-testing.md
+ - file: metricbeat-developer-guide.md
+ children:
+ - file: metricbeat-dev-overview.md
+ - file: creating-metricsets.md
+ - file: metricset-details.md
+ - file: creating-metricbeat-module.md
+ - file: dev-faq.md
+ - file: filebeat-modules-devguide.md
+ - file: _migrating_dashboards_from_kibana_5_x_to_6_x.md
diff --git a/docs/reference/auditbeat/add-cloud-metadata.md b/docs/reference/auditbeat/add-cloud-metadata.md
new file mode 100644
index 000000000000..32b1a459cb05
--- /dev/null
+++ b/docs/reference/auditbeat/add-cloud-metadata.md
@@ -0,0 +1,205 @@
+---
+navigation_title: "add_cloud_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-cloud-metadata.html
+---
+
+# Add cloud metadata [add-cloud-metadata]
+
+
+The `add_cloud_metadata` processor enriches each event with instance metadata from the machine’s hosting provider. At startup it will query a list of hosting providers and cache the instance metadata.
+
+The following cloud providers are supported:
+
+* Amazon Web Services (AWS)
+* Digital Ocean
+* Google Compute Engine (GCE)
+* [Tencent Cloud](https://www.qcloud.com/?lang=en) (QCloud)
+* Alibaba Cloud (ECS)
+* Huawei Cloud (ECS)
+* Azure Virtual Machine
+* Openstack Nova
+* Hetzner Cloud
+
+
+## Special notes [_special_notes]
+
+`huawei` is an alias for `openstack`. Huawei cloud runs on OpenStack platform, and when viewed from a metadata API standpoint, it is impossible to differentiate it from OpenStack. If you know that your deployments run on Huawei Cloud exclusively, and you wish to have `cloud.provider` value as `huawei`, you can achieve this by overwriting the value using an `add_fields` processor.
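+
+For example, a minimal sketch (not taken from the original documentation) that forces the reported provider to `huawei` by letting `add_cloud_metadata` run first and then overwriting the field with `add_fields`:
+
+```yaml
+processors:
+  - add_cloud_metadata: ~
+  - add_fields:
+      target: ''           # write at the top level instead of under `fields`
+      fields:
+        cloud:
+          provider: huawei # assumption: the deployment is known to run on Huawei Cloud
+```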
+
+The Alibaba Cloud and Tencent cloud providers are disabled by default, because they require access to a remote host. The `providers` setting allows users to select a list of default providers to query.
+
+Cloud providers tend to maintain metadata services compliant with other cloud providers. For example, Openstack supports an [EC2-compliant metadata service](https://docs.openstack.org/nova/latest/user/metadata.html#ec2-compatible-metadata). This makes it impossible to differentiate the cloud provider (`cloud.provider` property) with auto discovery (when the `providers` configuration is omitted). The processor implementation incorporates a priority mechanism where priority is given to some providers over others when there are multiple successful metadata results. Currently, `aws/ec2` and `azure` have priority over any other provider, as their metadata retrieval relies on SDKs. The expectation here is that SDK methods should fail if run in an environment that is not configured accordingly (for example, missing configuration or credentials).
+
+
+## Configurations [_configurations]
+
+The simple configuration below enables the processor.
+
+```yaml
+processors:
+ - add_cloud_metadata: ~
+```
+
+The `add_cloud_metadata` processor has three optional configuration settings. The first one is `timeout` which specifies the maximum amount of time to wait for a successful response when detecting the hosting provider. The default timeout value is `3s`.
+
+If a timeout occurs then no instance metadata will be added to the events. This makes it possible to enable this processor for all your deployments (in the cloud or on-premise).
+
+The second optional setting is `providers`. The `providers` setting accepts a list of cloud provider names to be used. If `providers` is not configured, then all providers that do not access a remote endpoint are enabled by default. The list of providers may alternatively be configured with the environment variable `BEATS_ADD_CLOUD_METADATA_PROVIDERS`, by setting it to a comma-separated list of provider names.
+
+List of names the `providers` setting supports:
+
+* "alibaba", or "ecs" for the Alibaba Cloud provider (disabled by default).
+* "azure" for Azure Virtual Machine (enabled by default). If the virtual machine is part of an AKS managed cluster, the fields `orchestrator.cluster.name` and `orchestrator.cluster.id` can also be retrieved. "TENANT_ID", "CLIENT_ID" and "CLIENT_SECRET" environment variables need to be set for authentication purposes. If not set we fallback to [DefaultAzureCredential](https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authentication?tabs=bash#2-authenticate-with-azure) and user can choose different authentication methods (e.g. workload identity).
+* "digitalocean" for Digital Ocean (enabled by default).
+* "aws", or "ec2" for Amazon Web Services (enabled by default).
+* "gcp" for Google Copmute Enging (enabled by default).
+* "openstack", "nova", or "huawei" for Openstack Nova (enabled by default).
+* "openstack-ssl", or "nova-ssl" for Openstack Nova when SSL metadata APIs are enabled (enabled by default).
+* "tencent", or "qcloud" for Tencent Cloud (disabled by default).
+* "hetzner" for Hetzner Cloud (enabled by default).
+
+For example, the configuration below only uses the `aws` metadata retrieval mechanism:
+
+```yaml
+processors:
+ - add_cloud_metadata:
+ providers:
+ aws
+```
+
+The third optional configuration setting is `overwrite`. When `overwrite` is `true`, `add_cloud_metadata` overwrites existing `cloud.*` fields (`false` by default).
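+
+Putting the optional settings together, a minimal sketch (the values are illustrative) could look like this:
+
+```yaml
+processors:
+  - add_cloud_metadata:
+      timeout: 10s      # wait up to 10s for a provider to respond
+      providers:        # only query these providers
+        - aws
+        - gcp
+      overwrite: true   # replace any existing cloud.* fields
+```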
+
+The `add_cloud_metadata` processor supports SSL options to configure the http client used to query cloud metadata. See [SSL](/reference/auditbeat/configuration-ssl.md) for more information.
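+
+For instance, a hedged sketch that points the metadata client at a custom certificate authority (the `ssl` option names follow the generic SSL settings linked above; the path is illustrative):
+
+```yaml
+processors:
+  - add_cloud_metadata:
+      ssl:
+        certificate_authorities: ["/etc/pki/custom/ca.pem"]  # illustrative path
+```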
+
+
+## Provided metadata [_provided_metadata]
+
+The metadata that is added to events varies by hosting provider. Below are examples for each of the supported providers.
+
+*AWS*
+
+The metadata given below is extracted from the [instance identity document](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html):
+
+```json
+{
+ "cloud": {
+ "account.id": "123456789012",
+ "availability_zone": "us-east-1c",
+ "instance.id": "i-4e123456",
+ "machine.type": "t2.medium",
+ "image.id": "ami-abcd1234",
+ "provider": "aws",
+ "region": "us-east-1"
+ }
+}
+```
+
+If the EC2 instance has IMDS enabled and tags are allowed through the IMDS endpoint, the processor will also append the tags to the metadata. Please refer to the official documentation on the [IMDS endpoint](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) for further details.
+
+```json
+{
+ "aws": {
+ "tags": {
+ "org" : "myOrg",
+ "owner": "userID"
+ }
+ }
+}
+```
+
+*Digital Ocean*
+
+```json
+{
+ "cloud": {
+ "instance.id": "1234567",
+ "provider": "digitalocean",
+ "region": "nyc2"
+ }
+}
+```
+
+*GCP*
+
+```json
+{
+ "cloud": {
+ "availability_zone": "us-east1-b",
+ "instance.id": "1234556778987654321",
+ "machine.type": "f1-micro",
+ "project.id": "my-dev",
+ "provider": "gcp"
+ }
+}
+```
+
+*Tencent Cloud*
+
+```json
+{
+ "cloud": {
+ "availability_zone": "gz-azone2",
+ "instance.id": "ins-qcloudv5",
+ "provider": "qcloud",
+ "region": "china-south-gz"
+ }
+}
+```
+
+*Alibaba Cloud*
+
+This metadata is only available when VPC is selected as the network type of the ECS instance.
+
+```json
+{
+ "cloud": {
+ "availability_zone": "cn-shenzhen",
+ "instance.id": "i-wz9g2hqiikg0aliyun2b",
+ "provider": "ecs",
+ "region": "cn-shenzhen-a"
+ }
+}
+```
+
+*Azure Virtual Machine*
+
+```json
+{
+ "cloud": {
+ "provider": "azure",
+ "instance.id": "04ab04c3-63de-4709-a9f9-9ab8c0411d5e",
+ "instance.name": "test-az-vm",
+ "machine.type": "Standard_D3_v2",
+ "region": "eastus2"
+ }
+}
+```
+
+*Openstack Nova*
+
+```json
+{
+ "cloud": {
+ "instance.name": "test-998d932195.mycloud.tld",
+ "instance.id": "i-00011a84",
+ "availability_zone": "xxxx-az-c",
+ "provider": "openstack",
+ "machine.type": "m2.large"
+ }
+}
+```
+
+*Hetzner Cloud*
+
+```json
+{
+ "cloud": {
+ "availability_zone": "hel1-dc2",
+ "instance.name": "my-hetzner-instance",
+ "instance.id": "111111",
+ "provider": "hetzner",
+ "region": "eu-central"
+ }
+}
+```
+
diff --git a/docs/reference/auditbeat/add-cloudfoundry-metadata.md b/docs/reference/auditbeat/add-cloudfoundry-metadata.md
new file mode 100644
index 000000000000..92dd2afbdc54
--- /dev/null
+++ b/docs/reference/auditbeat/add-cloudfoundry-metadata.md
@@ -0,0 +1,70 @@
+---
+navigation_title: "add_cloudfoundry_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-cloudfoundry-metadata.html
+---
+
+# Add Cloud Foundry metadata [add-cloudfoundry-metadata]
+
+
+The `add_cloudfoundry_metadata` processor annotates each event with relevant metadata from Cloud Foundry applications. Events are annotated with Cloud Foundry metadata only if the event contains a reference to a Cloud Foundry application (using the field `cloudfoundry.app.id`) and the configured Cloud Foundry client is able to retrieve information for the application.
+
+Each event is annotated with:
+
+* Application Name
+* Space ID
+* Space Name
+* Organization ID
+* Organization Name
+
+::::{note}
+Pivotal Application Service and Tanzu Application Service include this metadata in all events from the firehose since version 2.8. In these cases the metadata in the events is used, and the `add_cloudfoundry_metadata` processor doesn’t modify these fields.
+::::
+
+
+For efficient annotation, application metadata retrieved by the Cloud Foundry client is stored in a persistent cache on the filesystem under the `path.data` directory. This is done so the metadata can persist across restarts of Auditbeat. For control over this cache, use the `cache_duration` and `cache_retry_delay` settings.
+
+```yaml
+processors:
+ - add_cloudfoundry_metadata:
+ api_address: https://api.dev.cfdev.sh
+ client_id: uaa-filebeat
+ client_secret: verysecret
+ ssl:
+ verification_mode: none
+ # To connect to Cloud Foundry over verified TLS you can specify a client and CA certificate.
+ #ssl:
+ # certificate_authorities: ["/etc/pki/cf/ca.pem"]
+ # certificate: "/etc/pki/cf/cert.pem"
+ # key: "/etc/pki/cf/cert.key"
+```
+
+It has the following settings:
+
+`api_address`
+: (Optional) The URL of the Cloud Foundry API. It uses `http://api.bosh-lite.com` by default.
+
+`doppler_address`
+: (Optional) The URL of the Cloud Foundry Doppler Websocket. By default, the value is retrieved from `${api_address}/v2/info`.
+
+`uaa_address`
+: (Optional) The URL of the Cloud Foundry UAA API. By default, the value is retrieved from `${api_address}/v2/info`.
+
+`rlp_address`
+: (Optional) The URL of the Cloud Foundry RLP Gateway. By default, the value is retrieved from `${api_address}/v2/info`.
+
+`client_id`
+: Client ID to authenticate with Cloud Foundry.
+
+`client_secret`
+: Client Secret to authenticate with Cloud Foundry.
+
+`cache_duration`
+: (Optional) Maximum amount of time to cache an application’s metadata. Defaults to 120 seconds.
+
+`cache_retry_delay`
+: (Optional) Time to wait before trying to obtain an application’s metadata again in case of error. Defaults to 20 seconds.
+
+`ssl`
+: (Optional) SSL configuration to use when connecting to Cloud Foundry.
+
diff --git a/docs/reference/auditbeat/add-docker-metadata.md b/docs/reference/auditbeat/add-docker-metadata.md
new file mode 100644
index 000000000000..fe9b442b5765
--- /dev/null
+++ b/docs/reference/auditbeat/add-docker-metadata.md
@@ -0,0 +1,80 @@
+---
+navigation_title: "add_docker_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-docker-metadata.html
+---
+
+# Add Docker metadata [add-docker-metadata]
+
+
+The `add_docker_metadata` processor annotates each event with relevant metadata from Docker containers. At startup it detects a Docker environment and caches the metadata. Events are annotated with Docker metadata only if a valid configuration is detected and the processor is able to reach the Docker API.
+
+Each event is annotated with:
+
+* Container ID
+* Name
+* Image
+* Labels
+
+::::{note}
+When running Auditbeat in a container, you need to provide access to Docker’s unix socket in order for the `add_docker_metadata` processor to work. You can do this by mounting the socket inside the container. For example:
+
+`docker run -v /var/run/docker.sock:/var/run/docker.sock ...`
+
+To avoid privilege issues, you may also need to add `--user=root` to the `docker run` flags. Because the user must be part of the docker group in order to access `/var/run/docker.sock`, root access is required if Auditbeat is running as non-root inside the container.
+
+If the Docker daemon is restarted, the mounted socket will become invalid and metadata will stop working. In these situations there are two options:
+
+* Restart Auditbeat every time Docker is restarted
+* Mount the entire `/var/run` directory (instead of just the socket)
+
+::::
+
+
+```yaml
+processors:
+ - add_docker_metadata:
+ host: "unix:///var/run/docker.sock"
+ #match_fields: ["system.process.cgroup.id"]
+ #match_pids: ["process.pid", "process.parent.pid"]
+ #match_source: true
+ #match_source_index: 4
+ #match_short_id: true
+ #cleanup_timeout: 60
+ #labels.dedot: false
+ # To connect to Docker over TLS you must specify a client and CA certificate.
+ #ssl:
+ # certificate_authority: "/etc/pki/root/ca.pem"
+ # certificate: "/etc/pki/client/cert.pem"
+ # key: "/etc/pki/client/cert.key"
+```
+
+It has the following settings:
+
+`host`
+: (Optional) Docker socket (UNIX or TCP socket). It uses `unix:///var/run/docker.sock` by default.
+
+`ssl`
+: (Optional) SSL configuration to use when connecting to the Docker socket.
+
+`match_fields`
+: (Optional) A list of fields to match a container ID; at least one of them should hold a container ID to get the event enriched.
+
+`match_pids`
+: (Optional) A list of fields that contain process IDs. If the process is running in Docker then the event will be enriched. The default value is `["process.pid", "process.parent.pid"]`.
+
+`match_source`
+: (Optional) Match container ID from a log path present in the `log.file.path` field. Enabled by default.
+
+`match_short_id`
+: (Optional) Match the container short ID from a log path present in the `log.file.path` field. Disabled by default. This allows matching directory names that contain the first 12 characters of the container ID. For example, `/var/log/containers/b7e3460e2b21/*.log`.
+
+`match_source_index`
+: (Optional) Index in the source path split by `/` to look for container ID. It defaults to 4 to match `/var/lib/docker/containers//*.log`
+
+`cleanup_timeout`
+: (Optional) Time of inactivity before the metadata for a container is cleaned up and forgotten. 60s by default.
+
+`labels.dedot`
+: (Optional) Defaults to `false`. If set to `true`, dots in labels are replaced with `_`.
+
diff --git a/docs/reference/auditbeat/add-fields.md b/docs/reference/auditbeat/add-fields.md
new file mode 100644
index 000000000000..7430ad82e2ef
--- /dev/null
+++ b/docs/reference/auditbeat/add-fields.md
@@ -0,0 +1,51 @@
+---
+navigation_title: "add_fields"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-fields.html
+---
+
+# Add fields [add-fields]
+
+
+The `add_fields` processor adds additional fields to the event. Fields can be scalar values, arrays, dictionaries, or any nested combination of these. The `add_fields` processor will overwrite the target field if it already exists. By default the fields that you specify will be grouped under the `fields` sub-dictionary in the event. To group the fields under a different sub-dictionary, use the `target` setting. To store the fields as top-level fields, set `target: ''`.
+
+`target`
+: (Optional) Sub-dictionary to put all fields into. Defaults to `fields`. Setting this to `@metadata` will add values to the event metadata instead of fields.
+
+`fields`
+: Fields to be added.
+
+For example, this configuration:
+
+```yaml
+processors:
+ - add_fields:
+ target: project
+ fields:
+ name: myproject
+ id: '574734885120952459'
+```
+
+Adds these fields to any event:
+
+```json
+{
+ "project": {
+ "name": "myproject",
+ "id": "574734885120952459"
+ }
+}
+```
+
+This configuration will alter the event metadata:
+
+```yaml
+processors:
+ - add_fields:
+ target: '@metadata'
+ fields:
+ op_type: "index"
+```
+
+When the event is ingested (e.g. by Elasticsearch) the document will have `op_type: "index"` set as a metadata field.
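+
+As a further sketch (the field name is illustrative), setting `target: ''` stores the added fields at the top level of the event instead of under `fields`:
+
+```yaml
+processors:
+  - add_fields:
+      target: ''
+      fields:
+        environment: staging
+```
+
+With this configuration the event contains a top-level `environment: staging` field.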
+
diff --git a/docs/reference/auditbeat/add-host-metadata.md b/docs/reference/auditbeat/add-host-metadata.md
new file mode 100644
index 000000000000..bad0295d7310
--- /dev/null
+++ b/docs/reference/auditbeat/add-host-metadata.md
@@ -0,0 +1,92 @@
+---
+navigation_title: "add_host_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-host-metadata.html
+---
+
+# Add Host metadata [add-host-metadata]
+
+
+```yaml
+processors:
+ - add_host_metadata:
+ cache.ttl: 5m
+ geo:
+ name: nyc-dc1-rack1
+ location: 40.7128, -74.0060
+ continent_name: North America
+ country_iso_code: US
+ region_name: New York
+ region_iso_code: NY
+ city_name: New York
+```
+
+It has the following settings:
+
+`netinfo.enabled`
+: (Optional) Default true. Include IP addresses and MAC addresses as fields `host.ip` and `host.mac`.
+
+`cache.ttl`
+: (Optional) The processor uses an internal cache for the host metadata. This sets the cache expiration time. The default is `5m`; negative values disable caching altogether.
+
+`geo.name`
+: (Optional) User definable token to be used for identifying a discrete location. Frequently a datacenter, rack, or similar.
+
+`geo.location`
+: (Optional) Longitude and latitude in comma separated format.
+
+`geo.continent_name`
+: (Optional) Name of the continent.
+
+`geo.country_name`
+: (Optional) Name of the country.
+
+`geo.region_name`
+: (Optional) Name of the region.
+
+`geo.city_name`
+: (Optional) Name of the city.
+
+`geo.country_iso_code`
+: (Optional) ISO country code.
+
+`geo.region_iso_code`
+: (Optional) ISO region code.
+
+`replace_fields`
+: (Optional) Default true. If set to false, original host fields from the event will not be replaced by host fields from `add_host_metadata`.
+
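+For example, a minimal sketch that disables network information and keeps any `host.*` fields already present on the event:
+
+```yaml
+processors:
+  - add_host_metadata:
+      netinfo.enabled: false   # do not add host.ip and host.mac
+      replace_fields: false    # keep pre-existing host.* fields
+```
+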
+The `add_host_metadata` processor annotates each event with relevant metadata from the host machine. The fields added to the event look like the following:
+
+```json
+{
+ "host":{
+ "architecture":"x86_64",
+ "name":"example-host",
+ "id":"",
+ "os":{
+ "family":"darwin",
+ "type":"macos",
+ "build":"16G1212",
+ "platform":"darwin",
+ "version":"10.12.6",
+ "kernel":"16.7.0",
+ "name":"Mac OS X"
+ },
+ "ip": ["192.168.0.1", "10.0.0.1"],
+ "mac": ["00:25:96:12:34:56", "72:00:06:ff:79:f1"],
+ "geo": {
+ "continent_name": "North America",
+ "country_iso_code": "US",
+ "region_name": "New York",
+ "region_iso_code": "NY",
+ "city_name": "New York",
+ "name": "nyc-dc1-rack1",
+ "location": "40.7128, -74.0060"
+ }
+ }
+}
+```
+
+Note: with the default setting `replace_fields: true`, the `add_host_metadata` processor will overwrite `host.*` fields if they already exist in the event. Please use `add_observer_metadata` if the Beat is being used to monitor external systems.
+
diff --git a/docs/reference/auditbeat/add-id.md b/docs/reference/auditbeat/add-id.md
new file mode 100644
index 000000000000..10e2f87ba69f
--- /dev/null
+++ b/docs/reference/auditbeat/add-id.md
@@ -0,0 +1,24 @@
+---
+navigation_title: "add_id"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-id.html
+---
+
+# Generate an ID for an event [add-id]
+
+
+The `add_id` processor generates a unique ID for an event.
+
+```yaml
+processors:
+ - add_id: ~
+```
+
+The following settings are supported:
+
+`target_field`
+: (Optional) Field where the generated ID will be stored. Default is `@metadata._id`.
+
+`type`
+: (Optional) Type of ID to generate. Currently only `elasticsearch` is supported and is the default. The `elasticsearch` type generates IDs using the same algorithm that Elasticsearch uses for auto-generating document IDs.
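+
+For example, a minimal sketch (the target field name is illustrative) that stores the generated ID in a regular event field instead of the default `@metadata._id`:
+
+```yaml
+processors:
+  - add_id:
+      target_field: event.custom_id
+```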
+
diff --git a/docs/reference/auditbeat/add-kubernetes-metadata.md b/docs/reference/auditbeat/add-kubernetes-metadata.md
new file mode 100644
index 000000000000..bbdc594cbef6
--- /dev/null
+++ b/docs/reference/auditbeat/add-kubernetes-metadata.md
@@ -0,0 +1,244 @@
+---
+navigation_title: "add_kubernetes_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-kubernetes-metadata.html
+---
+
+# Add Kubernetes metadata [add-kubernetes-metadata]
+
+
+The `add_kubernetes_metadata` processor annotates each event with relevant metadata based on which Kubernetes pod the event originated from. This processor only adds metadata to events that do not already have it.
+
+At startup, it detects an `in_cluster` environment and caches the Kubernetes-related metadata. Events are only annotated if a valid configuration is detected. If it’s not able to detect a valid Kubernetes configuration, the events are not annotated with Kubernetes-related metadata.
+
+Each event is annotated with:
+
+* Pod Name
+* Pod UID
+* Namespace
+* Labels
+
+In addition, the node and namespace metadata are added to the pod metadata.
+
+The `add_kubernetes_metadata` processor has two basic building blocks:
+
+* Indexers
+* Matchers
+
+Indexers use pod metadata to create unique identifiers for each one of the pods. These identifiers help to correlate the metadata of the observed pods with actual events. For example, the `ip_port` indexer can take a Kubernetes pod and create identifiers for it based on all its `pod_ip:container_port` combinations.
+
+Matchers use information in events to construct lookup keys that match the identifiers created by the indexers. For example, when the `fields` matcher takes `["metricset.host"]` as a lookup field, it would construct a lookup key with the value of the field `metricset.host`. When one of these lookup keys matches with one of the identifiers, the event is enriched with the metadata of the identified pod.
+
+Each Beat can define its own default indexers and matchers which are enabled by default. For example, Filebeat enables the `container` indexer, which identifies pod metadata based on all container IDs, and a `logs_path` matcher, which takes the `log.file.path` field, extracts the container ID, and uses it to retrieve metadata.
+
+You can find more information about the available indexers and matchers, and some examples in [Indexers and matchers](#kubernetes-indexers-and-matchers).
+
+The configuration below enables the processor when auditbeat is run as a pod in Kubernetes.
+
+```yaml
+processors:
+ - add_kubernetes_metadata:
+ # Defining indexers and matchers manually is required for auditbeat, for instance:
+ #indexers:
+ # - ip_port:
+ #matchers:
+ # - fields:
+ # lookup_fields: ["metricset.host"]
+ #labels.dedot: true
+ #annotations.dedot: true
+```
+
+The configuration below enables the processor on a Beat running as a process on the Kubernetes node.
+
+```yaml
+processors:
+ - add_kubernetes_metadata:
+ host:
+ # If kube_config is not set, KUBECONFIG environment variable will be checked
+ # and if not present it will fall back to InCluster
+ kube_config: ~/.kube/config
+ # Defining indexers and matchers manually is required for auditbeat, for instance:
+ #indexers:
+ # - ip_port:
+ #matchers:
+ # - fields:
+ # lookup_fields: ["metricset.host"]
+ #labels.dedot: true
+ #annotations.dedot: true
+```
+
+The configuration below has the default indexers and matchers disabled and enables ones that the user is interested in.
+
+```yaml
+processors:
+ - add_kubernetes_metadata:
+ host:
+ # If kube_config is not set, KUBECONFIG environment variable will be checked
+ # and if not present it will fall back to InCluster
+ kube_config: ~/.kube/config
+ default_indexers.enabled: false
+ default_matchers.enabled: false
+ indexers:
+ - ip_port:
+ matchers:
+ - fields:
+ lookup_fields: ["metricset.host"]
+ #labels.dedot: true
+ #annotations.dedot: true
+```
+
+The `add_kubernetes_metadata` processor has the following configuration settings:
+
+`host`
+: (Optional) Specify the node to scope auditbeat to in case it cannot be accurately detected, as when running auditbeat in host network mode.
+
+`scope`
+: (Optional) Specify if the processor should have visibility at the node level or at the entire cluster level. Possible values are `node` and `cluster`. Scope is `node` by default.
+
+`namespace`
+: (Optional) Select the namespace from which to collect the metadata. If it is not set, the processor collects metadata from all namespaces. It is unset by default.
+
+`add_resource_metadata`
+: (Optional) Specify filters and configuration for the extra metadata, that will be added to the event. Configuration parameters:
+
+ * `node` or `namespace`: Specify labels and annotations filters for the extra metadata coming from node and namespace. By default all labels are included while annotations are not. To change the default behaviour, `include_labels`, `exclude_labels` and `include_annotations` can be defined. Those settings are useful when storing labels and annotations that require special handling to avoid overloading the storage output. Note: wildcards are not supported for those settings. The enrichment of `node` or `namespace` metadata can be individually disabled by setting `enabled: false`.
+ * `deployment`: If the resource is `pod` and it is created from a `deployment`, the deployment name is added by default; this can be disabled by setting `deployment: false`.
+ * `cronjob`: If the resource is `pod` and it is created from a `cronjob`, the cronjob name is added by default; this can be disabled by setting `cronjob: false`.
+
+ Example:
+
+
+```yaml
+ add_resource_metadata:
+ namespace:
+ include_labels: ["namespacelabel1"]
+ #labels.dedot: true
+ #annotations.dedot: true
+ node:
+ include_labels: ["nodelabel2"]
+ include_annotations: ["nodeannotation1"]
+ #labels.dedot: true
+ #annotations.dedot: true
+ deployment: false
+ cronjob: false
+```
+
+`kube_config`
+: (Optional) Use given config file as configuration for Kubernetes client. It defaults to `KUBECONFIG` environment variable if present.
+
+`use_kubeadm`
+: (Optional) Default true. By default, requests to the kubeadm config map are made in order to enrich the cluster name, by requesting the `/api/v1/namespaces/kube-system/configmaps/kubeadm-config` API endpoint.
+
+`kube_client_options`
+: (Optional) Additional options can be configured for the Kubernetes client. Currently, client QPS and burst are supported; if not set, the Kubernetes client’s [default QPS and burst](https://pkg.go.dev/k8s.io/client-go/rest#pkg-constants) will be used. Example:
+
+```yaml
+ kube_client_options:
+ qps: 5
+ burst: 10
+```
+
+`cleanup_timeout`
+: (Optional) Specify the time of inactivity before stopping the running configuration for a container. This is `60s` by default.
+
+`sync_period`
+: (Optional) Specify the timeout for listing historical resources.
+
+`default_indexers.enabled`
+: (Optional) Enable or disable default pod indexers when you want to specify your own.
+
+`default_matchers.enabled`
+: (Optional) Enable or disable default pod matchers when you want to specify your own.
+
+`labels.dedot`
+: (Optional) Defaults to `true`. If set to `true`, then `.` in labels will be replaced with `_`.
+
+`annotations.dedot`
+: (Optional) Defaults to `true`. If set to `true`, then `.` in annotations will be replaced with `_`.
+
+
+## Indexers and matchers [kubernetes-indexers-and-matchers]
+
+## Indexers [_indexers]
+
+Indexers use pod metadata to create unique identifiers for each of the pods.
+
+Available indexers are:
+
+`container`
+: Identifies the pod metadata using the IDs of its containers.
+
+`ip_port`
+: Identifies the pod metadata using combinations of its IP and its exposed ports. When using this indexer, metadata is identified using the IP of the pod and the combination of `ip:port` for each of the ports exposed by its containers.
+
+`pod_name`
+: Identifies the pod metadata using its namespace and its name as `namespace/pod_name`.
+
+`pod_uid`
+: Identifies the pod metadata using the UID of the pod.
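+
+As an illustrative sketch only (the lookup fields are assumptions, not documented defaults), the `pod_name` indexer can be paired with a `field_format` matcher that builds the `namespace/pod_name` key from event fields:
+
+```yaml
+processors:
+  - add_kubernetes_metadata:
+      default_indexers.enabled: false
+      default_matchers.enabled: false
+      indexers:
+        - pod_name:
+      matchers:
+        - field_format:
+            # assumes these Kubernetes fields are present in the event
+            format: '%{[kubernetes.namespace]}/%{[kubernetes.pod.name]}'
+```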
+
+
+## Matchers [_matchers]
+
+Matchers are used to construct the lookup keys that match the identifiers created by indexers.
+
+### `field_format` [_field_format]
+
+Looks up pod metadata using a key created with a string format that can include event fields.
+
+This matcher has an option `format` to define the string format. This string format can contain placeholders for any field in the event.
+
+For example, the following configuration uses the `ip_port` indexer to identify the pod metadata by combinations of the pod IP and its exposed ports, and uses the destination IP and port in events as match keys:
+
+```yaml
+processors:
+- add_kubernetes_metadata:
+ ...
+ default_indexers.enabled: false
+ default_matchers.enabled: false
+ indexers:
+ - ip_port:
+ matchers:
+ - field_format:
+ format: '%{[destination.ip]}:%{[destination.port]}'
+```
+
+
+### `fields` [_fields]
+
+Looks up pod metadata using the value of some specific fields as the key. When multiple fields are defined, the first one included in the event is used.
+
+This matcher has an option `lookup_fields` to define the fields whose value will be used for lookup.
+
+For example, the following configuration uses the `ip_port` indexer to identify pods, and defines a matcher that uses the destination IP or the server IP for the lookup, whichever it finds first in the event:
+
+```yaml
+processors:
+- add_kubernetes_metadata:
+ ...
+ default_indexers.enabled: false
+ default_matchers.enabled: false
+ indexers:
+ - ip_port:
+ matchers:
+ - fields:
+ lookup_fields: ['destination.ip', 'server.ip']
+```
+
+It’s also possible to extract the matching key from fields using a regex pattern. The optional `regex_pattern` field can be used to set the pattern. The pattern **must** contain a capture group named `key`, whose value will be used as the matching key.
+
+For example, the following configuration uses the `container` indexer to identify containers by their ID, and extracts the matching key from the cgroup ID field added to system process metrics. This field has the form `cri-containerd-<container_id>.scope`, so we need a regex pattern to obtain the container ID.
+
+```yaml
+processors:
+ - add_kubernetes_metadata:
+ indexers:
+ - container:
+ matchers:
+ - fields:
+ lookup_fields: ['system.process.cgroup.id']
+ regex_pattern: 'cri-containerd-(?P<key>[0-9a-z]+)\.scope'
+```
+
+
+
diff --git a/docs/reference/auditbeat/add-labels.md b/docs/reference/auditbeat/add-labels.md
new file mode 100644
index 000000000000..54494150390b
--- /dev/null
+++ b/docs/reference/auditbeat/add-labels.md
@@ -0,0 +1,45 @@
+---
+navigation_title: "add_labels"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-labels.html
+---
+
+# Add labels [add-labels]
+
+
+The `add_labels` processor adds a set of key-value pairs to an event. The processor will flatten nested configuration objects like arrays or dictionaries into a fully qualified name by merging nested names with a `.`. Array entries create numeric names starting with 0. Labels are always stored under the Elastic Common Schema compliant `labels` sub-dictionary.
+
+`labels`
+: dictionaries of labels to be added.
+
+For example, this configuration:
+
+```yaml
+processors:
+ - add_labels:
+ labels:
+ number: 1
+ with.dots: test
+ nested:
+ with.dots: nested
+ array:
+ - do
+ - re
+ - with.field: mi
+```
+
+Adds these fields to every event:
+
+```json
+{
+ "labels": {
+ "number": 1,
+ "with.dots": "test",
+ "nested.with.dots": "nested",
+ "array.0": "do",
+ "array.1": "re",
+ "array.2.with.field": "mi"
+ }
+}
+```
+
diff --git a/docs/reference/auditbeat/add-locale.md b/docs/reference/auditbeat/add-locale.md
new file mode 100644
index 000000000000..a2c61f897003
--- /dev/null
+++ b/docs/reference/auditbeat/add-locale.md
@@ -0,0 +1,31 @@
+---
+navigation_title: "add_locale"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-locale.html
+---
+
+# Add the local time zone [add-locale]
+
+
+The `add_locale` processor enriches each event with the machine’s time zone offset from UTC or with the name of the time zone. It supports one configuration option named `format` that controls whether an offset or time zone abbreviation is added to the event. The default format is `offset`. The processor adds an `event.timezone` value to each event.
+
+The configuration below enables the processor with the default settings.
+
+```yaml
+processors:
+ - add_locale: ~
+```
+
+This configuration enables the processor and configures it to add the time zone abbreviation to events.
+
+```yaml
+processors:
+ - add_locale:
+ format: abbreviation
+```
+
+::::{note}
+Please note that `add_locale` differentiates between daylight savings time (DST) and regular time. For example, `CEST` indicates DST and `CET` is regular time.
+::::
+
+
diff --git a/docs/reference/auditbeat/add-network-direction.md b/docs/reference/auditbeat/add-network-direction.md
new file mode 100644
index 000000000000..64b6a9d4614f
--- /dev/null
+++ b/docs/reference/auditbeat/add-network-direction.md
@@ -0,0 +1,22 @@
+---
+navigation_title: "add_network_direction"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-network-direction.html
+---
+
+# Add network direction [add-network-direction]
+
+
+The `add_network_direction` processor attempts to compute the perimeter-based network direction given a source and destination IP address and a list of internal networks. The key `internal_networks` can contain either CIDR blocks or a list of special values enumerated in the network section of [Conditions](/reference/auditbeat/defining-processors.md#conditions).
+
+```yaml
+processors:
+ - add_network_direction:
+ source: source.ip
+ destination: destination.ip
+ target: network.direction
+ internal_networks: [ private ]
+```
+
+See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions.
+
diff --git a/docs/reference/auditbeat/add-nomad-metadata.md b/docs/reference/auditbeat/add-nomad-metadata.md
new file mode 100644
index 000000000000..cd1f670065fd
--- /dev/null
+++ b/docs/reference/auditbeat/add-nomad-metadata.md
@@ -0,0 +1,137 @@
+---
+navigation_title: "add_nomad_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-nomad-metadata.html
+---
+
+# Add Nomad metadata [add-nomad-metadata]
+
+
+::::{warning}
+This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
+::::
+
+
+The `add_nomad_metadata` processor adds fields with relevant metadata for applications deployed in Nomad.
+
+Each event is annotated with the following information:
+
+* Allocation name, identifier and status.
+* Job name and type.
+* Namespace where the job is deployed.
+* Datacenter and region where the agent running the allocation is located.
+
+```yaml
+processors:
+ - add_nomad_metadata: ~
+```
+
+It has the following settings to configure the connection:
+
+`address`
+: (Optional) The URL of the agent API used to request the metadata. It uses `http://127.0.0.1:4646` by default.
+
+`namespace`
+: (Optional) Namespace to watch. If set, only events for allocations in this namespace will be annotated.
+
+`region`
+: (Optional) Region to watch. If set, only events for allocations in this region will be annotated.
+
+`secret_id`
+: (Optional) SecretID to use when connecting with the agent API. This is an example ACL policy to apply to the token.
+
+```json
+namespace "*" {
+ policy = "read"
+}
+node {
+ policy = "read"
+}
+agent {
+ policy = "read"
+}
+```
+
+`refresh_interval`
+: (Optional) Interval used to update the cached metadata. It defaults to 30 seconds.
+
+`cleanup_timeout`
+: (Optional) After an allocation has been removed, the time to wait before cleaning up its associated resources. This is useful if you expect to receive events after an allocation has been removed, which can happen when collecting logs. It defaults to 60 seconds.
+
+You can decide if Auditbeat should annotate events related to allocations on the local node or in the whole cluster by configuring the scope with the following settings:
+
+`scope`
+: (Optional) Scope of the resources to watch. It can be `node` to get metadata only for the allocations in a single agent, or `global`, to get metadata for allocations running on any agent. It defaults to `node`.
+
+`node`
+: (Optional) When using `scope: node`, use `node` to specify the name of the local node if it cannot be discovered automatically.
+
+For example the following configuration could be used if Auditbeat is collecting events from all the allocations in the cluster:
+
+```yaml
+processors:
+ - add_nomad_metadata:
+ scope: global
+```
+
+## Indexers and matchers [_indexers_and_matchers]
+
+Indexers and matchers are used to correlate fields in events with actual metadata. Auditbeat uses this information to know what metadata to include in each event.
+
+### Indexers [_indexers_2]
+
+Indexers use allocation metadata to create unique identifiers for each of the allocations.
+
+Available indexers are:
+
+`allocation_name`
+: Identifies allocations by their name and namespace (as `namespace/allocation_name`).
+
+`allocation_uuid`
+: Identifies allocations by their unique identifier.
+
+
+### Matchers [_matchers_2]
+
+Matchers are used to construct the lookup keys that match with the identifiers created by indexes.
+
+
+### `field_format` [_field_format_2]
+
+Looks up allocation metadata using a key created with a string format that can include event fields.
+
+This matcher has an option `format` to define the string format. This string format can contain placeholders for any field in the event.
+
+For example, the following configuration uses the `allocation_name` indexer to identify the allocation metadata by its name and namespace, and uses custom fields existing in the event as match keys:
+
+```yaml
+processors:
+- add_nomad_metadata:
+ ...
+ default_indexers.enabled: false
+ default_matchers.enabled: false
+ indexers:
+ - allocation_name:
+ matchers:
+ - field_format:
+ format: '%{[labels.nomad_namespace]}/%{[fields.nomad_alloc_name]}'
+```
+
+
+### `fields` [_fields_2]
+
+Looks up allocation metadata using the value of some specific fields as the key. When multiple fields are defined, the first one included in the event is used.
+
+This matcher has an option `lookup_fields` to define the fields whose value will be used for lookup.
+
+For example, the following configuration uses the `allocation_uuid` indexer to identify allocations, and defines a matcher that uses some fields where the allocation UUID can be found for the lookup, using the first one it finds in the event:
+
+```yaml
+processors:
+- add_nomad_metadata:
+ ...
+ default_indexers.enabled: false
+ default_matchers.enabled: false
+ indexers:
+ - allocation_uuid:
+ matchers:
+ - fields:
+ lookup_fields: ['host.name', 'fields.nomad_alloc_uuid']
+```
+
+
+
diff --git a/docs/reference/auditbeat/add-observer-metadata.md b/docs/reference/auditbeat/add-observer-metadata.md
new file mode 100644
index 000000000000..68ea963b0359
--- /dev/null
+++ b/docs/reference/auditbeat/add-observer-metadata.md
@@ -0,0 +1,88 @@
+---
+navigation_title: "add_observer_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-observer-metadata.html
+---
+
+# Add Observer metadata [add-observer-metadata]
+
+
+::::{warning}
+This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+::::
+
+
+```yaml
+processors:
+ - add_observer_metadata:
+ cache.ttl: 5m
+ geo:
+ name: nyc-dc1-rack1
+ location: 40.7128, -74.0060
+ continent_name: North America
+ country_iso_code: US
+ region_name: New York
+ region_iso_code: NY
+ city_name: New York
+```
+
+It has the following settings:
+
+`netinfo.enabled`
+: (Optional) Default: `true`. Include IP addresses and MAC addresses as the fields `observer.ip` and `observer.mac`.
+
+`cache.ttl`
+: (Optional) The processor uses an internal cache for the observer metadata. This sets the cache expiration time. The default is `5m`. Negative values disable caching altogether.
+
+`geo.name`
+: (Optional) User-definable token used to identify a discrete location, frequently a datacenter, rack, or similar.
+
+`geo.location`
+: (Optional) Longitude and latitude in comma-separated format.
+
+`geo.continent_name`
+: (Optional) Name of the continent.
+
+`geo.country_name`
+: (Optional) Name of the country.
+
+`geo.region_name`
+: (Optional) Name of the region.
+
+`geo.city_name`
+: (Optional) Name of the city.
+
+`geo.country_iso_code`
+: (Optional) ISO country code.
+
+`geo.region_iso_code`
+: (Optional) ISO region code.
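+
+For instance, a minimal sketch that disables network information collection while still supplying a manually defined location (the values are illustrative):
+
+```yaml
+processors:
+  - add_observer_metadata:
+      # Skip the observer.ip and observer.mac fields
+      netinfo.enabled: false
+      geo:
+        name: nyc-dc1-rack1
+        location: 40.7128, -74.0060
+```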
+
+The `add_observer_metadata` processor annotates each event with relevant metadata from the observer machine. The fields added to the event look like the following:
+
+```json
+{
+ "observer" : {
+ "hostname" : "avce",
+ "type" : "heartbeat",
+ "vendor" : "elastic",
+ "ip" : [
+ "192.168.1.251",
+ "fe80::64b2:c3ff:fe5b:b974",
+ ],
+ "mac" : [
+ "dc:c1:02:6f:1b:ed",
+ ],
+ "geo": {
+ "continent_name": "North America",
+ "country_iso_code": "US",
+ "region_name": "New York",
+ "region_iso_code": "NY",
+ "city_name": "New York",
+ "name": "nyc-dc1-rack1",
+ "location": "40.7128, -74.0060"
+ }
+ }
+}
+```
+
diff --git a/docs/reference/auditbeat/add-process-metadata.md b/docs/reference/auditbeat/add-process-metadata.md
new file mode 100644
index 000000000000..c79193bfe66c
--- /dev/null
+++ b/docs/reference/auditbeat/add-process-metadata.md
@@ -0,0 +1,94 @@
+---
+navigation_title: "add_process_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-process-metadata.html
+---
+
+# Add process metadata [add-process-metadata]
+
+
+The `add_process_metadata` processor enriches events with information from running processes, identified by their process ID (PID).
+
+```yaml
+processors:
+ - add_process_metadata:
+ match_pids:
+ - process.pid
+```
+
+The fields added to the event look as follows:
+
+```json
+{
+ "container": {
+ "id": "b5285682fba7449c86452b89a800609440ecc88a7ba5f2d38bedfb85409b30b1"
+ },
+ "process": {
+ "args": [
+ "/usr/lib/systemd/systemd",
+ "--switched-root",
+ "--system",
+ "--deserialize",
+ "22"
+ ],
+ "executable": "/usr/lib/systemd/systemd",
+ "name": "systemd",
+ "owner": {
+ "id": "0",
+ "name": "root"
+ },
+ "parent": {
+ "pid": 0
+ },
+ "pid": 1,
+ "start_time": "2018-08-22T08:44:50.684Z",
+ "title": "/usr/lib/systemd/systemd --switched-root --system --deserialize 22"
+ }
+}
+```
+
+Optionally, the process environment can be included, too:
+
+```json
+ ...
+ "env": {
+ "HOME": "/",
+ "TERM": "linux",
+ "BOOT_IMAGE": "/boot/vmlinuz-4.11.8-300.fc26.x86_64",
+ "LANG": "en_US.UTF-8",
+ }
+ ...
+```
+
+It has the following settings:
+
+`match_pids`
+: List of fields to look up for a PID. The processor searches the list sequentially until the field is found in the current event, and the PID lookup is then applied to the value of this field.
+
+`target`
+: (Optional) Destination prefix where the `process` object will be created. The default is the event’s root.
+
+`include_fields`
+: (Optional) List of fields to add. By default, the processor will add all the available fields except `process.env`.
+
+`ignore_missing`
+: (Optional) When set to `false`, events that don’t contain any of the fields in `match_pids` will be discarded and an error will be generated. By default, this condition is ignored.
+
+`overwrite_keys`
+: (Optional) By default, if a target field already exists, it will not be overwritten, and an error will be logged. If `overwrite_keys` is set to `true`, this condition will be ignored.
+
+`restricted_fields`
+: (Optional) By default, the `process.env` field is not output, to avoid leaking sensitive data. If `restricted_fields` is `true`, the field will be present in the output.
+
+`host_path`
+: (Optional) By default, the `host_path` field is set to the root directory of the host `/`. This is the path where `/proc` is mounted. For different runtime configurations of Kubernetes or Docker, the `host_path` can be set to overwrite the default.
+
+`cgroup_prefixes`
+: (Optional) List of prefixes that will be matched against cgroup paths. When a cgroup path begins with a prefix in the list, the last element of the path is returned as the container ID. Only one of `cgroup_prefixes` and `cgroup_regex` should be configured. If neither is configured, a default `cgroup_regex` value is used that matches cgroup paths containing 64-character container IDs (like those from Docker, Kubernetes, and Podman).
+
+`cgroup_regex`
+: (Optional) A regular expression that will be matched against cgroup paths. It must contain one capturing group. When a cgroup path matches the regular expression, the value of the capturing group is returned as the container ID. Only one of `cgroup_prefixes` and `cgroup_regex` should be configured. If neither is configured, a default `cgroup_regex` value is used that matches cgroup paths containing 64-character container IDs (like those from Docker, Kubernetes, and Podman).
+
+`cgroup_cache_expire_time`
+: (Optional) By default, the `cgroup_cache_expire_time` is set to 30 seconds. This is the length of time, in seconds, before cgroup cache elements expire. It can be set to 0 to disable the cgroup cache. In some container runtime technologies, such as runc, the container’s processes are also processes in the host kernel and are affected by PID rollover/reuse. The expire time needs to be set smaller than the PID wraparound time to avoid wrong container IDs.
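+
+As an illustrative sketch of how several of these options combine (the lookup field names and the `parent` target prefix are examples, not required values):
+
+```yaml
+processors:
+  - add_process_metadata:
+      # Try these fields in order until one is present in the event
+      match_pids: [process.parent.pid, process.ppid]
+      # Write the enriched process object under "parent" instead of the event root
+      target: parent
+      # Copy only a subset of the available fields
+      include_fields: [process.name, process.executable, process.start_time]
+      # Silently skip events that have none of the match_pids fields
+      ignore_missing: true
+```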
+
diff --git a/docs/reference/auditbeat/add-session-metadata.md b/docs/reference/auditbeat/add-session-metadata.md
new file mode 100644
index 000000000000..771521fcc5b7
--- /dev/null
+++ b/docs/reference/auditbeat/add-session-metadata.md
@@ -0,0 +1,89 @@
+---
+navigation_title: "add_session_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-session-metadata.html
+---
+
+# Add session metadata [add-session-metadata]
+
+
+The `add_session_metadata` processor enriches process events with additional information that users can see using the [Session View](docs-content://solutions/security/investigate/session-view.md) tool in the {{elastic-sec}} platform.
+
+::::{note}
+The current release of `add_session_metadata` processor for {{auditbeat}} is limited to virtual machines (VMs) and bare metal environments.
+::::
+
+
+Here’s an example using the `add_session_metadata` processor to enhance process events generated by the `auditd` module of {{auditbeat}}.
+
+```yaml
+auditbeat.modules:
+- module: auditd
+ processors:
+ - add_session_metadata:
+ backend: "auto"
+```
+
+## How the `add_session_metadata` processor works [add-session-metadata-explained]
+
+Using the available Linux kernel technology, the processor collects comprehensive information on all running system processes, compiling this data into a process database. When processing an event (such as those generated by the {{auditbeat}} `auditd` module), the processor queries this database to retrieve information about related processes, including the parent process, session leader, process group leader, and entry leader. It then enriches the original event with this metadata, providing a more complete picture of process relationships and system activities.
+
+This enhanced data enables the powerful [Session View](docs-content://solutions/security/investigate/session-view.md) tool in the {{elastic-sec}} platform, offering users deeper insights for analysis and investigation.
+
+### Backends [add-session-metadata-backends]
+
+The `add_session_metadata` processor operates using various backend options.
+
+* `auto` is the recommended setting. It attempts to use `kernel_tracing` first, falling back to `procfs` if necessary, ensuring compatibility even on systems without `kernel_tracing` support.
+* `kernel_tracing` gathers information about processes using either eBPF or kprobes. It will use eBPF if available, but if not, it will fall back to kprobes. eBPF requires a system with kernel support for eBPF enabled, support for eBPF ring buffer, and auditbeat running as superuser. Kprobe support requires Linux kernel 3.10.0 or above, and auditbeat running as a superuser.
+* `procfs` collects process information from the proc filesystem. This is compatible with older systems that may not support eBPF. To gather complete process info, auditbeat requires permissions to read all process data in procfs; for example, run as a superuser or have the `SYS_PTRACE` capability. A configuration sketch that pins this backend explicitly follows this list.
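+
+For example, a minimal sketch that pins the `procfs` backend on an older system instead of relying on auto-detection:
+
+```yaml
+auditbeat.modules:
+- module: auditd
+  processors:
+    - add_session_metadata:
+        # Use the proc filesystem backend explicitly
+        backend: "procfs"
+```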
+
+
+### Containers [add-session-metadata-containers]
+
+If you are running {{auditbeat}} in a container, the container must run in the host’s PID namespace. With the `auto` or `kernel_tracing` backend, these host directories must also be mounted to the same path within the container: `/sys/kernel/debug`, `/sys/fs/bpf`.
+
+
+
+## Enable and configure Session View in {{auditbeat}} [add-session-metadata-enable]
+
+To configure and enable [Session View](docs-content://solutions/security/investigate/session-view.md) functionality, you’ll:
+
+* Add the `add_session_metadata` processor to your `auditbeat.yml` file.
+* Configure audit rules in your `auditbeat.yml` file.
+* Restart {{auditbeat}}.
+
+We’ll walk you through these steps in more detail.
+
+1. Edit your `auditbeat.yml` file and add this info to the modules configuration section:
+
+ ```yaml
+ auditbeat.modules:
+ - module: auditd
+ processors:
+ - add_session_metadata:
+ backend: "auto"
+ ```
+
+2. Add audit rules in the modules configuration section of `auditbeat.yml` or the `audit.rules.d` config file, depending on your configuration:
+
+ ```yaml
+ auditbeat.modules:
+ - module: auditd
+ audit_rules: |
+ ## executions
+ -a always,exit -F arch=b64 -S execve,execveat -k exec
+ -a always,exit -F arch=b64 -S exit_group
+ ## set_sid
+ -a always,exit -F arch=b64 -S setsid
+ ```
+
+3. Save your configuration changes.
+4. Restart {{auditbeat}}:
+
+ ```sh
+ sudo systemctl restart auditbeat
+ ```
+
+
+
diff --git a/docs/reference/auditbeat/add-tags.md b/docs/reference/auditbeat/add-tags.md
new file mode 100644
index 000000000000..91b45734da0b
--- /dev/null
+++ b/docs/reference/auditbeat/add-tags.md
@@ -0,0 +1,34 @@
+---
+navigation_title: "add_tags"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/add-tags.html
+---
+
+# Add tags [add-tags]
+
+
+The `add_tags` processor adds tags to a list of tags. If the target field already exists, the tags are appended to the existing list of tags.
+
+`tags`
+: List of tags to add.
+
+`target`
+: (Optional) Field the tags will be added to. Defaults to `tags`. Setting tags in `@metadata` is not supported.
+
+For example, this configuration:
+
+```yaml
+processors:
+ - add_tags:
+ tags: [web, production]
+ target: "environment"
+```
+
+Adds the `environment` field to every event:
+
+```json
+{
+ "environment": ["web", "production"]
+}
+```
+
diff --git a/docs/reference/auditbeat/append.md b/docs/reference/auditbeat/append.md
new file mode 100644
index 000000000000..8ef7a1c1f7f2
--- /dev/null
+++ b/docs/reference/auditbeat/append.md
@@ -0,0 +1,73 @@
+---
+navigation_title: "append"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/append.html
+---
+
+# Append Processor [append]
+
+
+The `append` processor appends one or more values to an existing array if the target field already exists and is an array. If the target field exists and is a scalar, it is converted to an array and the values are appended to it. The values can either be one or more static values or one or more values from the fields listed under the *fields* key.
+
+`target_field`
+: The field in which you want to append the data.
+
+`fields`
+: (Optional) List of fields from which you want to copy data. If the value is of a concrete type, it will be appended directly to the target. However, if the value is an array, all the elements of the array are pushed individually to the target field.
+
+`values`
+: (Optional) List of static values you want to append to target field.
+
+`ignore_empty_values`
+: (Optional) If set to `true`, all `""` (empty string) and `nil` values are omitted from being appended to the target field.
+
+`fail_on_error`
+: (Optional) If set to `true` and an error occurs, the changes are reverted and the original is returned. If set to `false`, processing continues if an error occurs. Default is `true`.
+
+`allow_duplicate`
+: (Optional) If set to `false`, the processor does not append values already present in the field. The default is `true`, which will append duplicate values in the array.
+
+`ignore_missing`
+: (Optional) Indicates whether to ignore events that lack the source field. The default is `false`, which will fail processing of an event if a field is missing.
+
+Note: If you want to use the `fields` parameter with fields under `message`, make sure you use `decode_json_fields` first with `target: ""`.
+
+For example, this configuration:
+
+```yaml
+processors:
+ - decode_json_fields:
+ fields: message
+ target: ""
+ - append:
+ target_field: target-field
+ fields:
+ - concrete.field
+ - array.one
+ values:
+ - static-value
+ - ""
+ ignore_missing: true
+ fail_on_error: true
+ ignore_empty_values: true
+```
+
+Copies the values of the `concrete.field` and `array.one` fields and the static values to `target-field`:
+
+```json
+{
+ "concrete": {
+ "field": "val0"
+ },
+ "array": {
+ "one": [ "val1", "val2" ]
+ },
+ "target-field": [
+ "val0",
+ "val1",
+ "val2",
+ "static-value"
+ ]
+}
+```
+
diff --git a/docs/reference/auditbeat/auditbeat-configuration-reloading.md b/docs/reference/auditbeat/auditbeat-configuration-reloading.md
new file mode 100644
index 000000000000..4b7af2a8d589
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-configuration-reloading.md
@@ -0,0 +1,50 @@
+---
+navigation_title: "Config file reloading"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-configuration-reloading.html
+---
+
+# Reload the configuration dynamically [auditbeat-configuration-reloading]
+
+
+::::{warning}
+This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+::::
+
+
+You can configure Auditbeat to dynamically reload configuration files when there are changes. To do this, you specify a path ([glob](https://golang.org/pkg/path/filepath/#Glob)) to watch for module configuration changes. When the files found by the glob change, new modules are started/stopped according to changes in the configuration files.
+
+To enable dynamic config reloading, you specify the `path` and `reload` options in the main `auditbeat.yml` config file. For example:
+
+```yaml
+auditbeat.config.modules:
+ path: ${path.config}/conf.d/*.yml
+ reload.enabled: true
+ reload.period: 10s
+```
+
+**`path`**
+: A glob that defines the files to check for changes.
+
+**`reload.enabled`**
+: When set to `true`, enables dynamic config reload.
+
+**`reload.period`**
+: Specifies how often the files are checked for changes. Do not set the `period` to less than 1s because the modification time of files is often stored in seconds. Setting the `period` to less than 1s will result in unnecessary overhead.
+
+Each file found by the glob must contain a list of one or more module definitions. For example:
+
+```yaml
+- module: file_integrity
+ paths:
+ - /www/wordpress
+ - /www/wordpress/wp-admin
+ - /www/wordpress/wp-content
+ - /www/wordpress/wp-includes
+```
+
+::::{note}
+On systems with POSIX file permissions, all Beats configuration files are subject to ownership and file permission checks. If you encounter config loading errors related to file ownership, see [Config File Ownership and Permissions](/reference/libbeat/config-file-permissions.md).
+::::
+
+
diff --git a/docs/reference/auditbeat/auditbeat-dataset-system-host.md b/docs/reference/auditbeat/auditbeat-dataset-system-host.md
new file mode 100644
index 000000000000..d1abb1df5020
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-dataset-system-host.md
@@ -0,0 +1,91 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-dataset-system-host.html
+---
+
+# System host dataset [auditbeat-dataset-system-host]
+
+::::{warning}
+This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+::::
+
+
+This is the `host` dataset of the system module.
+
+It is implemented for Linux, macOS (Darwin), and Windows.
+
+
+### Example dashboard [_example_dashboard_2]
+
+This dataset comes with a sample dashboard:
+
+:::{image} images/auditbeat-system-host-dashboard.png
+:alt: Auditbeat System Host Dashboard
+:class: screenshot
+:::
+
+## Fields [_fields_3]
+
+For a description of each field in the dataset, see the [exported fields](/reference/auditbeat/exported-fields-system.md) section.
+
+Here is an example document generated by this dataset:
+
+```json
+{
+ "@timestamp": "2017-10-12T08:05:34.853Z",
+ "agent": {
+ "hostname": "host.example.com",
+ "name": "host.example.com"
+ },
+ "event": {
+ "action": "host",
+ "dataset": "host",
+ "module": "system",
+ "kind": "state"
+ },
+ "message": "Ubuntu host ubuntu-bionic (IP: 10.0.2.15) is up for 0 days, 5 hours, 11 minutes",
+ "service": {
+ "type": "system"
+ },
+ "system": {
+ "audit": {
+ "host": {
+ "architecture": "x86_64",
+ "boottime": "2018-12-10T15:48:44Z",
+ "containerized": false,
+ "hostname": "ubuntu-bionic",
+ "id": "6f7be6fb33e6c77f057266415c094408",
+ "ip": [
+ "10.0.2.15",
+ "fe80::2d:fdff:fe81:e747",
+ "172.28.128.3",
+ "fe80::a00:27ff:fe1f:7160",
+ "172.17.0.1",
+ "fe80::42:83ff:febe:1a3a",
+ "172.18.0.1",
+ "fe80::42:9eff:fed3:d888"
+ ],
+ "mac": [
+ "02-2D-FD-81-E7-47",
+ "08-00-27-1F-71-60",
+ "02-42-83-BE-1A-3A",
+ "02-42-9E-D3-D8-88"
+ ],
+ "os": {
+ "family": "debian",
+ "kernel": "4.15.0-42-generic",
+ "name": "Ubuntu",
+ "platform": "ubuntu",
+ "version": "18.04.1 LTS (Bionic Beaver)"
+ },
+ "timezone.name": "UTC",
+ "timezone.offset.sec": 0,
+ "type": "linux",
+ "uptime": 18661357350265
+ }
+ }
+ }
+}
+```
+
+
diff --git a/docs/reference/auditbeat/auditbeat-dataset-system-login.md b/docs/reference/auditbeat/auditbeat-dataset-system-login.md
new file mode 100644
index 000000000000..e786ccd3fb00
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-dataset-system-login.md
@@ -0,0 +1,73 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-dataset-system-login.html
+---
+
+# System login dataset [auditbeat-dataset-system-login]
+
+::::{warning}
+This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+::::
+
+
+This is the `login` dataset of the system module.
+
+
+## Implementation [_implementation]
+
+The `login` dataset is implemented for Linux only.
+
+On Linux, the dataset reads the [utmp](https://en.wikipedia.org/wiki/Utmp) files that keep track of logins and logouts to the system. They are usually located at `/var/log/wtmp` (successful logins) and `/var/log/btmp` (failed logins).
+
+The file patterns used to locate the files can be configured using `login.wtmp_file_pattern` and `login.btmp_file_pattern`. By default, both the current files and any rotated files (e.g. `wtmp.1`, `wtmp.2`) are read.
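+
+For example, a sketch of a `system` module configuration that sets both patterns explicitly (the glob patterns shown are illustrative and also match rotated files):
+
+```yaml
+auditbeat.modules:
+- module: system
+  datasets:
+    - login
+  # Illustrative glob patterns; the defaults already cover rotated files
+  login.wtmp_file_pattern: "/var/log/wtmp*"
+  login.btmp_file_pattern: "/var/log/btmp*"
+```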
+
+utmp files are binary, but you can display their contents using the `utmpdump` utility.
+
+
+### Example dashboard [_example_dashboard_3]
+
+The dataset comes with a sample dashboard:
+
+:::{image} images/auditbeat-system-login-dashboard.png
+:alt: Auditbeat System Login Dashboard
+:class: screenshot
+:::
+
+## Fields [_fields_4]
+
+For a description of each field in the dataset, see the [exported fields](/reference/auditbeat/exported-fields-system.md) section.
+
+Here is an example document generated by this dataset:
+
+```json
+{
+ "@timestamp": "2017-10-12T08:05:34.853Z",
+ "event": {
+ "action": "user_login",
+ "category": "authentication",
+ "dataset": "login",
+ "kind": "event",
+ "module": "system",
+ "origin": "/var/log/wtmp",
+ "outcome": "success",
+ "type": "authentication_success"
+ },
+ "message": "Login by user vagrant (UID: 1000) on pts/2 (PID: 14962) from 10.0.2.2 (IP: 10.0.2.2)",
+ "process": {
+ "pid": 14962
+ },
+ "service": {
+ "type": "system"
+ },
+ "source": {
+ "ip": "10.0.2.2"
+ },
+ "user": {
+ "id": 1000,
+ "name": "vagrant",
+ "terminal": "pts/2"
+ }
+}
+```
+
+
diff --git a/docs/reference/auditbeat/auditbeat-dataset-system-package.md b/docs/reference/auditbeat/auditbeat-dataset-system-package.md
new file mode 100644
index 000000000000..1b4ed1d06266
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-dataset-system-package.md
@@ -0,0 +1,71 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-dataset-system-package.html
+---
+
+# System package dataset [auditbeat-dataset-system-package]
+
+This is the `package` dataset of the system module.
+
+It is implemented for Linux distributions using dpkg or rpm as their package manager, and for Homebrew on macOS (Darwin).
+
+
+### Example dashboard [_example_dashboard_4]
+
+The dataset comes with a sample dashboard:
+
+:::{image} images/auditbeat-system-package-dashboard.png
+:alt: Auditbeat System Package Dashboard
+:class: screenshot
+:::
+
+## Fields [_fields_5]
+
+For a description of each field in the dataset, see the [exported fields](/reference/auditbeat/exported-fields-system.md) section.
+
+Here is an example document generated by this dataset:
+
+```json
+{
+ "@timestamp": "2017-10-12T08:05:34.853Z",
+ "event": {
+ "action": "existing_package",
+ "category": [
+ "package"
+ ],
+ "dataset": "package",
+ "id": "6bed65c5-9797-4fb7-9ec7-2d1873c54371",
+ "kind": "state",
+ "module": "system",
+ "type": [
+ "info"
+ ]
+ },
+ "message": "Package zstd (1.5.4) is already installed",
+ "package": {
+ "description": "Zstandard is a real-time compression algorithm",
+ "installed": "2023-02-15T20:40:24.390086982-05:00",
+ "name": "zstd",
+ "reference": "https://facebook.github.io/zstd/",
+ "type": "brew",
+ "version": "1.5.4"
+ },
+ "service": {
+ "type": "system"
+ },
+ "system": {
+ "audit": {
+ "package": {
+ "entity_id": "SxYD3ZMh/Ym0lBIk",
+ "installtime": "2023-02-15T20:40:24.390086982-05:00",
+ "name": "zstd",
+ "summary": "Zstandard is a real-time compression algorithm",
+ "url": "https://facebook.github.io/zstd/",
+ "version": "1.5.4"
+ }
+ }
+ }
+}
+```
+
+
diff --git a/docs/reference/auditbeat/auditbeat-dataset-system-process.md b/docs/reference/auditbeat/auditbeat-dataset-system-process.md
new file mode 100644
index 000000000000..265851b885b0
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-dataset-system-process.md
@@ -0,0 +1,96 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-dataset-system-process.html
+---
+
+# System process dataset [auditbeat-dataset-system-process]
+
+::::{warning}
+This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+::::
+
+
+This is the `process` dataset of the system module. It generates an event when a process starts and stops.
+
+It is implemented for Linux, macOS (Darwin), and Windows.
+
+
+## Configuration options [_configuration_options_20]
+
+**`process.state.period`**
+: The interval at which the dataset sends full state information. If set, this will take precedence over `state.period`. The default value is `12h`.
+
+**`process.hash.max_file_size`**
+: The maximum size of a file in bytes for which Auditbeat will compute hashes. Files larger than this size will not be hashed. The default value is 100 MiB. For convenience units can be specified as a suffix to the value. The supported units are `b` (default), `kib`, `kb`, `mib`, `mb`, `gib`, `gb`, `tib`, `tb`, `pib`, `pb`, `eib`, and `eb`.
+
+**`process.hash.hash_types`**
+: A list of hash types to compute when the file changes. The supported hash types are `blake2b_256`, `blake2b_384`, `blake2b_512`, `md5`, `sha1`, `sha224`, `sha256`, `sha384`, `sha512`, `sha512_224`, `sha512_256`, `sha3_224`, `sha3_256`, `sha3_384`, `sha3_512`, and `xxh64`. The default value is `sha1`.
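+
+For example, a configuration sketch combining these options (the values are illustrative, not recommendations):
+
+```yaml
+auditbeat.modules:
+- module: system
+  datasets:
+    - process
+  # Send full process state information every 12 hours
+  process.state.period: 12h
+  # Skip hashing files larger than 100 MiB
+  process.hash.max_file_size: 100 mib
+  # Compute both SHA-1 and SHA-256 hashes
+  process.hash.hash_types: [sha1, sha256]
+```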
+
+
+### Example dashboard [_example_dashboard_5]
+
+The dataset comes with a sample dashboard:
+
+:::{image} images/auditbeat-system-process-dashboard.png
+:alt: Auditbeat System Process Dashboard
+:class: screenshot
+:::
+
+## Fields [_fields_6]
+
+For a description of each field in the dataset, see the [exported fields](/reference/auditbeat/exported-fields-system.md) section.
+
+Here is an example document generated by this dataset:
+
+```json
+{
+ "@timestamp": "2017-10-12T08:05:34.853Z",
+ "event": {
+ "action": "process_stopped",
+ "dataset": "process",
+ "kind": "event",
+ "module": "system"
+ },
+ "message": "Process zsh (PID: 9086) by user elastic STOPPED",
+ "process": {
+ "args": [
+ "zsh"
+ ],
+ "entity_id": "+fYshazplsMYlr0y",
+ "executable": "/bin/zsh",
+ "hash": {
+ "sha1": "33646536613061316366353134643135613631643363383733653261373130393737633131303364"
+ },
+ "name": "zsh",
+ "pid": 9086,
+ "ppid": 9085,
+ "start": "2019-01-01T00:00:01Z",
+ "working_directory": "/home/elastic"
+ },
+ "service": {
+ "type": "system"
+ },
+ "user": {
+ "effective": {
+ "group": {
+ "id": "1000"
+ },
+ "id": "1000"
+ },
+ "group": {
+ "id": "1000",
+ "name": "elastic"
+ },
+ "id": "1000",
+ "name": "elastic",
+ "saved": {
+ "group": {
+ "id": "1000"
+ },
+ "id": "1000"
+ }
+ }
+}
+```
+
+
diff --git a/docs/reference/auditbeat/auditbeat-dataset-system-socket.md b/docs/reference/auditbeat/auditbeat-dataset-system-socket.md
new file mode 100644
index 000000000000..8bb4566bcf2e
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-dataset-system-socket.md
@@ -0,0 +1,267 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-dataset-system-socket.html
+---
+
+# System socket dataset [auditbeat-dataset-system-socket]
+
+::::{warning}
+This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+::::
+
+
+This is the `socket` dataset of the system module. It allows you to monitor network traffic to and from running processes. Its main features are:
+
+* Supports TCP and UDP sockets over IPv4 and IPv6.
+* Outputs per-flow bytes and packets counters.
+* Enriches the flows with [process](ecs://reference/ecs-process.md) and [user](ecs://reference/ecs-user.md) information.
+* Provides information similar to Packetbeat’s flow monitoring with reduced CPU and memory usage.
+* Works on stock kernels without the need for custom modules, external libraries, or development headers.
+* Correlates IP addresses with DNS requests.
+
+This dataset does not analyze application-layer protocols nor provide any of the other advanced features present in Packetbeat. In particular, it does not:
+
+* Monitor network traffic whose destination is not a local process, as is the case with traffic forwarding.
+* Monitor layer 2 traffic, ICMP or raw sockets.
+
+
+## Implementation [_implementation_2]
+
+It is implemented for Linux only and currently supports x86 (32 and 64 bit) architectures.
+
+The dataset uses [KProbe-based event tracing](https://www.kernel.org/doc/Documentation/trace/kprobetrace.txt) to monitor TCP and UDP sockets over IPv4 and IPv6, providing flow monitoring that includes byte and packet counters, as well as the local process and user involved in the flow. It does so by plugging into the TCP/IP stack to generate custom tracing events, avoiding the need to copy network traffic to user space.
+
+By not relying on periodic polling, this approach enables the dataset to perform near real-time monitoring of the system without the risk of missing short-lived connections or processes.
+
+
+## Requirements [_requirements]
+
+Features used by the `socket` dataset require a minimum Linux kernel version of 3.12 (vanilla). However, some distributions have backported those features to older kernels. The following (non-exhaustive) table lists the distributions under which the dataset is known to work:
+
+| Distribution | kernel version | Works? |
+| --- | --- | --- |
+| CentOS 6.5 | 2.6.32-431.el6 | NO[[1]](#anchor-1) |
+| CentOS 6.9 | 2.6.32-696.30.1.el6 | ✓ |
+| CentOS 7.6 | 3.10.0-957.1.3.el7 | ✓ |
+| RHEL 8 | 4.18.0-80.rhel8 | ✓ |
+| Debian 8 | 3.16.0-6 | ✓ |
+| Debian 9 | 4.9.0-8 | ✓ |
+| Debian 10 | 4.19.0-5 | ✓ |
+| SLES 12 | 4.4.73-5 | ✓ |
+| Ubuntu 12.04 | 3.2.0-126 | NO[[1]](#anchor-1) |
+| Ubuntu 14.04.6 | 3.13.0-170 | ✓ |
+| Ubuntu 16.04.3 | 4.4.0-97 | ✓ |
+| AWS Linux 2 | 4.14.138-114.102 | ✓ |
+
+$$$anchor-1$$$
+[[1]](#anchor-1): These systems lack [PERF_EVENT_IOC_ID ioctl.](https://lore.kernel.org/patchwork/patch/399251/) Support might be added in a future release.
+
+The dataset needs `CAP_SYS_ADMIN` and `CAP_NET_ADMIN` in order to work.
+
+
+### Kernel configuration [_kernel_configuration]
+
+A kernel built with the following configuration options enabled is required:
+
+* `CONFIG_KPROBE_EVENTS`: Enables the KProbes subsystem.
+* `CONFIG_DEBUG_FS`: For kernels lacking `tracefs` support (<4.1).
+* `CONFIG_IPV6`: IPv6 support in the kernel is needed even if disabled with `socket.enable_ipv6: false`.
+
+These settings are enabled by default in most distributions.
+
+The following configuration settings can prevent the dataset from starting:
+
+* `/sys/kernel/debug/kprobes/enabled` must be 1.
+* `/proc/sys/net/ipv6/conf/lo/disable_ipv6` (IPv6 enabled in loopback device) is required when running with IPv6 enabled.
+
+
+### Running on Docker [_running_on_docker]
+
+The dataset can monitor the Docker host when running inside a container. However, it needs to run in a `privileged` container with `CAP_NET_ADMIN`. The Docker container running Auditbeat needs access to the host’s tracefs or debugfs directory. This is achieved by bind-mounting `/sys`.
+
+
+## Configuration [_configuration_2]
+
+The following options are available for the `socket` dataset:
+
+* `socket.tracefs_path` (default: none)
+
+Must point to the mount-point of `tracefs` or the `tracing` directory inside `debugfs`. If this option is not specified, Auditbeat will look for the default locations: `/sys/kernel/tracing` and `/sys/kernel/debug/tracing`. If not found, it will attempt to mount `tracefs` and `debugfs` at their default locations.
+
+* `socket.enable_ipv6` (default: unset)
+
+Determines whether IPv6 must be monitored. When unset (default), IPv6 support is automatically detected. Even when IPv6 is disabled, in order to run the dataset you still need a kernel with IPv6 support (the `ipv6` module must be loaded if compiled as a module).
+
+* `socket.flow_inactive_timeout` (default: 30s)
+
+Determines how long a flow has to be inactive to be considered closed.
+
+* `socket.flow_termination_timeout` (default: 5s)
+
+Determines how long to wait after a socket has been closed for out of order packets. With TCP, some packets can be received shortly after a socket is closed. If set too low, additional flows will be generated for those packets.
+
+* `socket.socket_inactive_timeout` (default: 1m)
+
+How long a socket can be inactive before it is evicted from the internal cache. A lower value reduces memory usage at the expense of some flows being reported as multiple partial flows.
+
+* `socket.perf_queue_size` (default: 4096)
+
+The number of tracing samples that can be queued for processing. A larger value uses more memory but reduces the chances of samples being lost when the system is under heavy load.
+
+* `socket.lost_queue_size` (default: 128)
+
+The number of lost samples notifications that can be queued.
+
+* `socket.ring_size_exponent` (default: 7)
+
+Controls the number of memory pages allocated for the per-CPU ring-buffer used to receive samples from the kernel. The actual amount of memory used is Number_of_CPUs × Page_Size (4 KiB) × 2^ring_size_exponent, that is, 0.5 MiB of RAM per CPU with the default value.
+
+* `socket.clock_max_drift` (default: 100ms)
+
+Defines the maximum difference between the kernel internal clock and the reference time used to timestamp events.
+
+* `socket.clock_sync_period` (default: 10s)
+
+Controls how often clock synchronization events are generated to measure drift between the kernel clock and the dataset’s reference clock.
+
+* `socket.guess_timeout` (default: 15s)
+
+The maximum time an individual guess is allowed to run.
+
+* `socket.dns.enabled` (default: true)
+
+Whether DNS traffic should be monitored to enrich network flows with DNS information.
+
+* `socket.dns.type` (default: af_packet)
+
+The method used to monitor DNS traffic. Currently, only `af_packet` is supported.
+
+* `socket.dns.af_packet.interface` (default: any)
+
+The network interface where DNS will be monitored.
+
+* `socket.dns.af_packet.snaplen` (default: 1024)
+
+Maximum number of bytes to copy for each captured packet.
+
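+The options above can be combined in the module configuration. For example, a sketch with a few of them set explicitly (the values shown are the documented defaults, repeated only for illustration):
+
+```yaml
+auditbeat.modules:
+- module: system
+  datasets:
+    - socket
+  # How long a flow may be idle before it is considered closed
+  socket.flow_inactive_timeout: 30s
+  # How long an idle socket stays in the internal cache
+  socket.socket_inactive_timeout: 1m
+  # Enrich flows with DNS information captured on any interface
+  socket.dns.enabled: true
+  socket.dns.af_packet.interface: any
+```
+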
+## Fields [_fields_7]
+
+For a description of each field in the dataset, see the [exported fields](/reference/auditbeat/exported-fields-system.md) section.
+
+Here is an example document generated by this dataset:
+
+```json
+{
+ "@timestamp":"2019-08-22T20:46:40.173Z",
+ "@metadata":{
+ "beat":"auditbeat",
+ "type":"_doc",
+ "version":"7.4.0"
+ },
+ "server":{
+ "ip":"151.101.66.217",
+ "port":80,
+ "packets":5,
+ "bytes":437
+ },
+ "user":{
+ "name":"vagrant",
+ "id":"1000"
+ },
+ "network":{
+ "packets":10,
+ "bytes":731,
+ "community_id":"1:jdjL1TkdpF1v1GM0+JxRRp+V7KI=",
+ "direction":"outbound",
+ "type":"ipv4",
+ "transport":"tcp"
+ },
+ "group":{
+ "id":"1000",
+ "name":"vagrant"
+ },
+ "client":{
+ "ip":"10.0.2.15",
+ "port":40192,
+ "packets":5,
+ "bytes":294
+ },
+ "event":{
+ "duration":30728600,
+ "module":"system",
+ "dataset":"socket",
+ "kind":"event",
+ "action":"network_flow",
+ "category":"network",
+ "start":"2019-08-22T20:46:35.001Z",
+ "end":"2019-08-22T20:46:35.032Z"
+ },
+ "ecs":{
+ "version":"1.0.1"
+ },
+ "host":{
+ "name":"stretch",
+ "containerized":false,
+ "hostname":"stretch",
+ "architecture":"x86_64",
+ "os":{
+ "name":"Debian GNU/Linux",
+ "kernel":"4.9.0-8-amd64",
+ "codename":"stretch",
+ "platform":"debian",
+ "version":"9 (stretch)",
+ "family":"debian"
+ },
+ "id":"b3531219b5b4449eadbec59d47945649"
+ },
+ "agent":{
+ "version":"7.4.0",
+ "type":"auditbeat",
+ "ephemeral_id":"f7b0ab1a-da9e-4525-9252-59ecb68139f8",
+ "hostname":"stretch",
+ "id":"88862e07-b13a-4166-b1ef-b3e55b4a0cf2"
+ },
+ "process":{
+ "pid":4970,
+ "name":"curl",
+ "args":[
+ "curl",
+ "http://elastic.co/",
+ "-o",
+ "/dev/null"
+ ],
+ "executable":"/usr/bin/curl",
+ "created":"2019-08-22T20:46:34.928Z"
+ },
+ "system":{
+ "audit":{
+ "socket":{
+ "kernel_sock_address":"0xffff8de29d337000",
+ "internal_version":"1.0.3",
+ "uid":1000,
+ "gid":1000,
+ "euid":1000,
+ "egid":1000
+ }
+ }
+ },
+ "destination":{
+ "ip":"151.101.66.217",
+ "port":80,
+ "packets":5,
+ "bytes":437
+ },
+ "source":{
+ "port":40192,
+ "packets":5,
+ "bytes":294,
+ "ip":"10.0.2.15"
+ },
+ "flow":{
+ "final":true,
+ "complete":true
+ },
+ "service":{
+ "type":"system"
+ }
+}
+```
+
+
diff --git a/docs/reference/auditbeat/auditbeat-dataset-system-user.md b/docs/reference/auditbeat/auditbeat-dataset-system-user.md
new file mode 100644
index 000000000000..f509d11d0cdc
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-dataset-system-user.md
@@ -0,0 +1,75 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-dataset-system-user.html
+---
+
+# System user dataset [auditbeat-dataset-system-user]
+
+::::{warning}
+This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+::::
+
+
+This is the `user` dataset of the system module.
+
+It is implemented for Linux only.
+
+
+### Example dashboard [_example_dashboard_6]
+
+The dataset comes with a sample dashboard:
+
+:::{image} images/auditbeat-system-user-dashboard.png
+:alt: Auditbeat System User Dashboard
+:class: screenshot
+:::
+
+## Fields [_fields_8]
+
+For a description of each field in the dataset, see the [exported fields](/reference/auditbeat/exported-fields-system.md) section.
+
+Here is an example document generated by this dataset:
+
+```json
+{
+ "@timestamp": "2017-10-12T08:05:34.853Z",
+ "event": {
+ "action": "user_added",
+ "dataset": "user",
+ "kind": "event",
+ "module": "system"
+ },
+ "message": "New user elastic (UID: 1001, Groups: elastic,docker)",
+ "service": {
+ "type": "system"
+ },
+ "system": {
+ "audit": {
+ "user": {
+ "dir": "/home/elastic",
+ "gid": "1001",
+ "group": [
+ {
+ "gid": "1001",
+ "name": "elastic"
+ },
+ {
+ "gid": "1002",
+ "name": "docker"
+ }
+ ],
+ "name": "elastic",
+ "shell": "/bin/bash",
+ "uid": "1001"
+ }
+ }
+ },
+ "user": {
+ "entity_id": "FgDfgeDptvvfdX+L",
+ "id": "1001",
+ "name": "elastic"
+ }
+}
+```
+
+
diff --git a/docs/reference/auditbeat/auditbeat-geoip.md b/docs/reference/auditbeat/auditbeat-geoip.md
new file mode 100644
index 000000000000..f9789f920159
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-geoip.md
@@ -0,0 +1,206 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-geoip.html
+---
+
+# Enrich events with geoIP information [auditbeat-geoip]
+
+You can use Auditbeat along with the [GeoIP Processor](elasticsearch://reference/ingestion-tools/enrich-processor/geoip-processor.md) in {{es}} to export geographic location information based on IP addresses. Then you can use this information to visualize the location of IP addresses on a map in {{kib}}.
+
+The `geoip` processor adds information about the geographical location of IP addresses, based on data from the MaxMind GeoLite2 City Database. Because the processor uses a geoIP database that’s installed on {{es}}, you don’t need to install a geoIP database on the machines running Auditbeat.
+
+::::{note}
+If your use case involves using {{ls}}, you can use the [GeoIP filter](logstash://reference/plugins-filters-geoip.md) available in {{ls}} instead of using the `geoip` processor. However, using the `geoip` processor is the simplest approach when you don’t require the additional processing power of {{ls}}.
+::::
+
+
+
+## Configure the `geoip` processor [auditbeat-configuring-geoip]
+
+To configure Auditbeat and the `geoip` processor:
+
+1. Define an ingest pipeline that uses one or more `geoip` processors to add location information to the event. For example, you can use the Console in {{kib}} to create the following pipeline:
+
+ ```console
+ PUT _ingest/pipeline/geoip-info
+ {
+ "description": "Add geoip info",
+ "processors": [
+ {
+ "geoip": {
+ "field": "client.ip",
+ "target_field": "client.geo",
+ "ignore_missing": true
+ }
+ },
+ {
+ "geoip": {
+ "database_file": "GeoLite2-ASN.mmdb",
+ "field": "client.ip",
+ "target_field": "client.as",
+ "properties": [
+ "asn",
+ "organization_name"
+ ],
+ "ignore_missing": true
+ }
+ },
+ {
+ "geoip": {
+ "field": "source.ip",
+ "target_field": "source.geo",
+ "ignore_missing": true
+ }
+ },
+ {
+ "geoip": {
+ "database_file": "GeoLite2-ASN.mmdb",
+ "field": "source.ip",
+ "target_field": "source.as",
+ "properties": [
+ "asn",
+ "organization_name"
+ ],
+ "ignore_missing": true
+ }
+ },
+ {
+ "geoip": {
+ "field": "destination.ip",
+ "target_field": "destination.geo",
+ "ignore_missing": true
+ }
+ },
+ {
+ "geoip": {
+ "database_file": "GeoLite2-ASN.mmdb",
+ "field": "destination.ip",
+ "target_field": "destination.as",
+ "properties": [
+ "asn",
+ "organization_name"
+ ],
+ "ignore_missing": true
+ }
+ },
+ {
+ "geoip": {
+ "field": "server.ip",
+ "target_field": "server.geo",
+ "ignore_missing": true
+ }
+ },
+ {
+ "geoip": {
+ "database_file": "GeoLite2-ASN.mmdb",
+ "field": "server.ip",
+ "target_field": "server.as",
+ "properties": [
+ "asn",
+ "organization_name"
+ ],
+ "ignore_missing": true
+ }
+ },
+ {
+ "geoip": {
+ "field": "host.ip",
+ "target_field": "host.geo",
+ "ignore_missing": true
+ }
+ },
+ {
+ "rename": {
+ "field": "server.as.asn",
+ "target_field": "server.as.number",
+ "ignore_missing": true
+ }
+ },
+ {
+ "rename": {
+ "field": "server.as.organization_name",
+ "target_field": "server.as.organization.name",
+ "ignore_missing": true
+ }
+ },
+ {
+ "rename": {
+ "field": "client.as.asn",
+ "target_field": "client.as.number",
+ "ignore_missing": true
+ }
+ },
+ {
+ "rename": {
+ "field": "client.as.organization_name",
+ "target_field": "client.as.organization.name",
+ "ignore_missing": true
+ }
+ },
+ {
+ "rename": {
+ "field": "source.as.asn",
+ "target_field": "source.as.number",
+ "ignore_missing": true
+ }
+ },
+ {
+ "rename": {
+ "field": "source.as.organization_name",
+ "target_field": "source.as.organization.name",
+ "ignore_missing": true
+ }
+ },
+ {
+ "rename": {
+ "field": "destination.as.asn",
+ "target_field": "destination.as.number",
+ "ignore_missing": true
+ }
+ },
+ {
+ "rename": {
+ "field": "destination.as.organization_name",
+ "target_field": "destination.as.organization.name",
+ "ignore_missing": true
+ }
+ }
+ ]
+ }
+ ```
+
+ In this example, the pipeline ID is `geoip-info`. `field` specifies the field that contains the IP address to use for the geographical lookup, and `target_field` is the field that will hold the geographical information. `"ignore_missing": true` configures the pipeline to continue processing when it encounters an event that doesn’t have the specified field.
+
+ See [GeoIP Processor](elasticsearch://reference/ingestion-tools/enrich-processor/geoip-processor.md) for more options.
+
+ To learn more about adding host information to an event, see [add_host_metadata](/reference/auditbeat/add-host-metadata.md).
+
+2. In the Auditbeat config file, configure the {{es}} output to use the pipeline. Specify the pipeline ID in the `pipeline` option under `output.elasticsearch`. For example:
+
+ ```yaml
+ output.elasticsearch:
+ hosts: ["localhost:9200"]
+ pipeline: geoip-info
+ ```
+
+3. Run Auditbeat. Remember to use `sudo` if the config file is owned by root.
+
+ ```sh
+ ./auditbeat -e
+ ```
+
+ If the lookups succeed, the events are enriched with `geo_point` fields, such as `client.geo.location` and `host.geo.location`, that you can use to populate visualizations in {{kib}}.
+
+
+If you add a field that’s not already defined as a `geo_point` in the index template, add a mapping so the field gets indexed correctly.
+
+
+## Visualize locations [auditbeat-visualizing-location]
+
+To visualize the location of IP addresses, you can create a new [coordinate map](docs-content://explore-analyze/visualize/maps.md) in {{kib}} and select the location field, for example `client.geo.location` or `host.geo.location`, as the Geohash.
+
+:::{image} images/coordinate-map.png
+:alt: Coordinate map in {kib}
+:class: screenshot
+:::
+
diff --git a/docs/reference/auditbeat/auditbeat-installation-configuration.md b/docs/reference/auditbeat/auditbeat-installation-configuration.md
new file mode 100644
index 000000000000..84822a639fb7
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-installation-configuration.md
@@ -0,0 +1,345 @@
+---
+navigation_title: "Quick start"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-installation-configuration.html
+---
+
+# Auditbeat quick start: installation and configuration [auditbeat-installation-configuration]
+
+
+This guide describes how to get started quickly with audit data collection. You’ll learn how to:
+
+* install Auditbeat on each system you want to monitor
+* specify the location of your audit data
+* parse log data into fields and send it to {{es}}
+* visualize the log data in {{kib}}
+
+:::{image} images/auditbeat-auditd-dashboard.png
+:alt: Auditbeat Auditd dashboard
+:class: screenshot
+:::
+
+
+## Before you begin [_before_you_begin]
+
+You need {{es}} for storing and searching your data, and {{kib}} for visualizing and managing it.
+
+:::::::{tab-set}
+
+::::::{tab-item} Elasticsearch Service
+To get started quickly, spin up a deployment of our [hosted {{ess}}](https://www.elastic.co/cloud/elasticsearch-service). The {{ess}} is available on AWS, GCP, and Azure. [Try it out for free](https://cloud.elastic.co/registration?page=docs&placement=docs-body).
+::::::
+
+::::::{tab-item} Self-managed
+To install and run {{es}} and {{kib}}, see [Installing the {{stack}}](docs-content://deploy-manage/deploy/self-managed/deploy-cluster.md).
+::::::
+
+:::::::
+
+## Step 1: Install Auditbeat [install]
+
+Install Auditbeat on all the servers you want to monitor.
+
+To download and install Auditbeat, use the commands that work with your system:
+
+:::::::{tab-set}
+
+::::::{tab-item} DEB
+Version 9.0.0-beta1 of Auditbeat has not yet been released.
+::::::
+
+::::::{tab-item} RPM
+Version 9.0.0-beta1 of Auditbeat has not yet been released.
+::::::
+
+::::::{tab-item} MacOS
+Version 9.0.0-beta1 of Auditbeat has not yet been released.
+::::::
+
+::::::{tab-item} Linux
+Version 9.0.0-beta1 of Auditbeat has not yet been released.
+::::::
+
+::::::{tab-item} Windows
+Version 9.0.0-beta1 of Auditbeat has not yet been released.
+::::::
+
+:::::::
+The commands shown are for AMD platforms, but ARM packages are also available. Refer to the [download page](https://www.elastic.co/downloads/beats/auditbeat) for the full list of available packages.
+
+
+### Other installation options [other-installation-options]
+
+* [APT or YUM](/reference/auditbeat/setup-repositories.md)
+* [Download page](https://www.elastic.co/downloads/beats/auditbeat)
+* [Docker](/reference/auditbeat/running-on-docker.md)
+* [Kubernetes](/reference/auditbeat/running-on-kubernetes.md)
+
+
+## Step 2: Connect to the {{stack}} [set-connection]
+
+Connections to {{es}} and {{kib}} are required to set up Auditbeat.
+
+Set the connection information in `auditbeat.yml`. To locate this configuration file, see [Directory layout](/reference/auditbeat/directory-layout.md).
+
+:::::::{tab-set}
+
+::::::{tab-item} Elasticsearch Service
+Specify the [cloud.id](/reference/auditbeat/configure-cloud-id.md) of your {{ess}}, and set [cloud.auth](/reference/auditbeat/configure-cloud-id.md) to a user who is authorized to set up Auditbeat. For example:
+
+```yaml
+cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw=="
+cloud.auth: "auditbeat_setup:{pwd}" <1>
+```
+
+1. This example shows a hard-coded password, but you should store sensitive values in the [secrets keystore](/reference/auditbeat/keystore.md).
+::::::
+
+::::::{tab-item} Self-managed
+1. Set the host and port where Auditbeat can find the {{es}} installation, and set the username and password of a user who is authorized to set up Auditbeat. For example:
+
+ ```yaml
+ output.elasticsearch:
+ hosts: ["https://myEShost:9200"]
+ username: "auditbeat_internal"
+ password: "{pwd}" <1>
+ ssl:
+ enabled: true
+ ca_trusted_fingerprint: "b9a10bbe64ee9826abeda6546fc988c8bf798b41957c33d05db736716513dc9c" <2>
+ ```
+
+ 1. This example shows a hard-coded password, but you should store sensitive values in the [secrets keystore](/reference/auditbeat/keystore.md).
+    2. This example shows a hard-coded fingerprint, but you should store sensitive values in the [secrets keystore](/reference/auditbeat/keystore.md). The fingerprint is a HEX encoded SHA-256 of a CA certificate. When you start {{es}} for the first time, security features such as network encryption (TLS) for {{es}} are enabled by default. If you are using the self-signed certificate generated by {{es}} when it is started for the first time, you will need to add its fingerprint here. The fingerprint is printed in the {{es}} start up logs, or you can refer to [connect clients to {{es}} documentation](docs-content://deploy-manage/security/security-certificates-keys.md#_connect_clients_to_es_5) for other options on retrieving it. If you are providing your own SSL certificate to {{es}}, refer to [Auditbeat documentation on how to setup SSL](/reference/auditbeat/configuration-ssl.md#ssl-client-config).
+
+2. If you plan to use our pre-built {{kib}} dashboards, configure the {{kib}} endpoint. Skip this step if {{kib}} is running on the same host as {{es}}.
+
+ ```yaml
+ setup.kibana:
+ host: "mykibanahost:5601" <1>
+ username: "my_kibana_user" <2> <3>
+ password: "{pwd}"
+ ```
+
+ 1. The hostname and port of the machine where {{kib}} is running, for example, `mykibanahost:5601`. If you specify a path after the port number, include the scheme and port: `http://mykibanahost:5601/path`.
+ 2. The `username` and `password` settings for {{kib}} are optional. If you don’t specify credentials for {{kib}}, Auditbeat uses the `username` and `password` specified for the {{es}} output.
+ 3. To use the pre-built {{kib}} dashboards, this user must be authorized to view dashboards or have the `kibana_admin` [built-in role](elasticsearch://reference/elasticsearch/roles.md).
+::::::
+
+:::::::
+To learn more about required roles and privileges, see [*Grant users access to secured resources*](/reference/auditbeat/feature-roles.md).
+
+::::{note}
+You can send data to other [outputs](/reference/auditbeat/configuring-output.md), such as {{ls}}, but that requires additional configuration and setup.
+::::
+
+
+
+## Step 3: Configure data collection modules [enable-modules]
+
+Auditbeat uses [modules](/reference/auditbeat/auditbeat-modules.md) to collect audit information.
+
+By default, Auditbeat uses a configuration that’s tailored to the operating system where Auditbeat is running.
+
+To use a different configuration, change the module settings in `auditbeat.yml`.
+
+The following example shows the `file_integrity` module configured to generate events whenever a file in one of the specified paths changes on disk:
+
+```yaml
+auditbeat.modules:
+
+- module: file_integrity
+ paths:
+ - /bin
+ - /usr/bin
+ - /sbin
+ - /usr/sbin
+ - /etc
+```
+
+::::{tip}
+To test your configuration file, change to the directory where the Auditbeat binary is installed, and run Auditbeat in the foreground with the following options specified: `./auditbeat test config -e`. Make sure your config files are in the path expected by Auditbeat (see [Directory layout](/reference/auditbeat/directory-layout.md)), or use the `-c` flag to specify the path to the config file.
+::::
+
+
+For more information about configuring Auditbeat, also see:
+
+* [Configure Auditbeat](/reference/auditbeat/configuring-howto-auditbeat.md)
+* [Config file format](/reference/libbeat/config-file-format.md)
+* [`auditbeat.reference.yml`](/reference/auditbeat/auditbeat-reference-yml.md): This reference configuration file shows all non-deprecated options. You’ll find it in the same location as `auditbeat.yml`.
+
+
+## Step 4: Set up assets [setup-assets]
+
+Auditbeat comes with predefined assets for parsing, indexing, and visualizing your data. To load these assets:
+
+1. Make sure the user specified in `auditbeat.yml` is [authorized to set up Auditbeat](/reference/auditbeat/privileges-to-setup-beats.md).
+2. From the installation directory, run:
+
+ :::::::{tab-set}
+
+::::::{tab-item} DEB
+```sh
+ auditbeat setup -e
+ ```
+::::::
+
+::::::{tab-item} RPM
+```sh
+ auditbeat setup -e
+ ```
+::::::
+
+::::::{tab-item} MacOS
+```sh
+ ./auditbeat setup -e
+ ```
+::::::
+
+::::::{tab-item} Linux
+```sh
+ ./auditbeat setup -e
+ ```
+::::::
+
+::::::{tab-item} Windows
+```sh
+ PS > .\auditbeat.exe setup -e
+ ```
+::::::
+
+:::::::
+
+## Step 5: Start Auditbeat [start]
+
+To start Auditbeat, run:
+
+:::::::{tab-set}
+
+::::::{tab-item} DEB
+```sh
+sudo service auditbeat start
+```
+
+::::{note}
+If you use an `init.d` script to start Auditbeat, you can’t specify command line flags (see [Command reference](/reference/auditbeat/command-line-options.md)). To specify flags, start Auditbeat in the foreground.
+::::
+
+
+Also see [Auditbeat and systemd](/reference/auditbeat/running-with-systemd.md).
+::::::
+
+::::::{tab-item} RPM
+```sh
+sudo service auditbeat start
+```
+
+::::{note}
+If you use an `init.d` script to start Auditbeat, you can’t specify command line flags (see [Command reference](/reference/auditbeat/command-line-options.md)). To specify flags, start Auditbeat in the foreground.
+::::
+
+
+Also see [Auditbeat and systemd](/reference/auditbeat/running-with-systemd.md).
+::::::
+
+::::::{tab-item} MacOS
+```sh
+sudo chown root auditbeat.yml <1>
+sudo ./auditbeat -e
+```
+
+1. You’ll be running Auditbeat as root, so you need to change ownership of the configuration file, or run Auditbeat with `--strict.perms=false` specified. See [Config File Ownership and Permissions](/reference/libbeat/config-file-permissions.md).
+::::::
+
+::::::{tab-item} Linux
+```sh
+sudo chown root auditbeat.yml <1>
+sudo ./auditbeat -e
+```
+
+1. You’ll be running Auditbeat as root, so you need to change ownership of the configuration file, or run Auditbeat with `--strict.perms=false` specified. See [Config File Ownership and Permissions](/reference/libbeat/config-file-permissions.md).
+::::::
+
+::::::{tab-item} Windows
+```sh
+PS C:\Program Files\auditbeat> Start-Service auditbeat
+```
+
+By default, Windows log files are stored in `C:\ProgramData\auditbeat\Logs`.
+::::::
+
+:::::::
+Auditbeat should begin streaming events to {{es}}.
+
+If you see a warning about too many open files, you need to increase the `ulimit`. See the [FAQ](/reference/auditbeat/ulimit.md) for more details.
+
+
+## Step 6: View your data in {{kib}} [view-data]
+
+To make it easier for you to start auditing the activities of users and processes on your system, Auditbeat comes with pre-built {{kib}} dashboards and UIs for visualizing your data.
+
+To open the dashboards:
+
+1. Launch {{kib}}:
+
+    * If you are using {{ecloud}}: [log in](https://cloud.elastic.co/) to your {{ecloud}} account, then navigate to the {{kib}} endpoint in your deployment.
+    * If you are running a self-managed deployment: point your browser to [http://localhost:5601](http://localhost:5601), replacing `localhost` with the name of the {{kib}} host.
+
+2. In the side navigation, click **Discover**. To see Auditbeat data, make sure the predefined `auditbeat-*` data view is selected.
+
+ ::::{tip}
+ If you don’t see data in {{kib}}, try changing the time filter to a larger range. By default, {{kib}} shows the last 15 minutes.
+ ::::
+
+3. In the side navigation, click **Dashboard**, then select the dashboard that you want to open.
+
+The dashboards are provided as examples. We recommend that you [customize](docs-content://explore-analyze/dashboards.md) them to meet your needs.
+
+
+## What’s next? [_whats_next]
+
+Now that you have audit data streaming into {{es}}, learn how to unify your logs, metrics, uptime, and application performance data.
+
+1. Ingest data from other sources by installing and configuring other Elastic {{beats}}:
+
+ | Elastic {{beats}} | To capture |
+ | --- | --- |
+ | [{{metricbeat}}](/reference/metricbeat/metricbeat-installation-configuration.md) | Infrastructure metrics |
+ | [{{filebeat}}](/reference/filebeat/filebeat-installation-configuration.md) | Logs |
+ | [{{winlogbeat}}](/reference/winlogbeat/winlogbeat-installation-configuration.md) | Windows event logs |
+ | [{{heartbeat}}](/reference/heartbeat/heartbeat-installation-configuration.md) | Uptime information |
+ | [APM](docs-content://solutions/observability/apps/application-performance-monitoring-apm.md) | Application performance metrics |
+
+2. Use the Observability apps in {{kib}} to search across all your data:
+
+ | Elastic apps | Use to |
+ | --- | --- |
+ | [{{metrics-app}}](docs-content://solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md) | Explore metrics about systems and services across your ecosystem |
+ | [{{logs-app}}](docs-content://solutions/observability/logs/explore-logs.md) | Tail related log data in real time |
+ | [{{uptime-app}}](docs-content://solutions/observability/apps/synthetic-monitoring.md#monitoring-uptime) | Monitor availability issues across your apps and services |
+ | [APM app](docs-content://solutions/observability/apps/overviews.md) | Monitor application performance |
+ | [{{siem-app}}](docs-content://solutions/security.md) | Analyze security events |
+
+
diff --git a/docs/reference/auditbeat/auditbeat-module-auditd.md b/docs/reference/auditbeat/auditbeat-module-auditd.md
new file mode 100644
index 000000000000..b71154b756ae
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-module-auditd.md
@@ -0,0 +1,274 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-module-auditd.html
+---
+
+# Auditd Module [auditbeat-module-auditd]
+
+The `auditd` module receives audit events from the Linux Audit Framework, which is part of the Linux kernel.
+
+This module is available only for Linux.
+
+
+## How it works [_how_it_works]
+
+This module establishes a subscription to the kernel to receive audit events as they occur. Unlike most other modules, the `period` configuration option is unused because this module is not implemented using polling.
+
+The Linux Audit Framework can send multiple messages for a single auditable event. For example, a `rename` syscall causes the kernel to send eight separate messages. Each message describes a different aspect of the activity that is occurring (the syscall itself, file paths, current working directory, process title). This module will combine all of the data from each of the messages into a single event.
+
+Messages for one event can be interleaved with messages from another event. This module will buffer the messages in order to combine related messages into a single event even if they arrive interleaved or out of order.
+
+
+## Useful commands [_useful_commands]
+
+When running Auditbeat with the `auditd` module enabled, you might find that other monitoring tools interfere with Auditbeat.
+
+For example, you might encounter errors if another process, such as `auditd`, is registered to receive data from the Linux Audit Framework. You can use these commands to see if the `auditd` service is running and stop it:
+
+* See if `auditd` is running:
+
+ ```shell
+ service auditd status
+ ```
+
+* Stop the `auditd` service:
+
+ ```shell
+ service auditd stop
+ ```
+
+* Disable `auditd` from starting on boot:
+
+ ```shell
+ chkconfig auditd off
+ ```
+
+
+To save CPU usage and disk space, you can use this command to stop `journald` from listening to audit messages:
+
+```shell
+systemctl mask systemd-journald-audit.socket
+```
+
+
+## Inspect the kernel audit system status [_inspect_the_kernel_audit_system_status]
+
+Auditbeat provides useful commands to query the state of the audit system in the Linux kernel.
+
+* See the list of installed audit rules:
+
+ ```shell
+ auditbeat show auditd-rules
+ ```
+
+ Prints the list of loaded rules, similar to `auditctl -l`:
+
+ ```shell
+ -a never,exit -S all -F pid=26253
+ -a always,exit -F arch=b32 -S all -F key=32bit-abi
+ -a always,exit -F arch=b64 -S execve,execveat -F key=exec
+ -a always,exit -F arch=b64 -S connect,accept,bind -F key=external-access
+ -w /etc/group -p wa -k identity
+ -w /etc/passwd -p wa -k identity
+ -w /etc/gshadow -p wa -k identity
+ -a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EACCES -F key=access
+ -a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EPERM -F key=access
+ ```
+
+* See the status of the audit system:
+
+ ```shell
+ auditbeat show auditd-status
+ ```
+
+ Prints the status of the kernel audit system, similar to `auditctl -s`:
+
+ ```shell
+ enabled 1
+ failure 0
+ pid 0
+ rate_limit 0
+ backlog_limit 8192
+ lost 14407
+ backlog 0
+ backlog_wait_time 0
+ features 0xf
+ ```
+
+
+
+## Configuration options [_configuration_options_17]
+
+This module has some configuration options for tuning its behavior. The following example shows all configuration options with their default values.
+
+```yaml
+- module: auditd
+ resolve_ids: true
+ failure_mode: silent
+ backlog_limit: 8192
+ rate_limit: 0
+ include_raw_message: false
+ include_warnings: false
+ backpressure_strategy: auto
+ immutable: false
+```
+
+This module also supports the [standard configuration options](#module-standard-options-auditd) described later.
+
+**`socket_type`**
+: This optional setting controls the type of socket that Auditbeat uses to receive events from the kernel. The two options are `unicast` and `multicast`.
+
+    `unicast` should be used when Auditbeat is the primary userspace daemon for receiving audit events and managing the rules. Only a single process can receive audit events through the "unicast" connection, so any other daemons should be stopped (e.g. stop `auditd`).
+
+    `multicast` can be used in kernel versions 3.16 and newer. By using `multicast`, Auditbeat receives an audit event broadcast that is not exclusive to a single process. This is ideal for situations where `auditd` is running and managing the rules.
+
+    By default, Auditbeat will use `multicast` if the kernel version is 3.16 or newer and no rules have been defined. Otherwise, `unicast` will be used.
+
+
+**`immutable`**
+: This boolean setting sets the audit config as immutable (`-e 2`). This option can only be used with `socket_type: unicast`, since Auditbeat needs to manage the rules in order to set it.
+
+    It is important to note that with this setting enabled, if Auditbeat is stopped and resumed, events will continue to be processed but the configuration won’t be updated until the system is restarted entirely.
+
+
+**`resolve_ids`**
+: This boolean setting enables the resolution of UIDs and GIDs to their associated names. The default value is true.
+
+**`failure_mode`**
+: This determines the kernel’s behavior on critical failures, such as errors sending events to Auditbeat, an exceeded backlog limit, an out-of-memory condition in the kernel, or an exceeded rate limit. The options are `silent`, `log`, or `panic`. `silent` makes the kernel ignore the errors, `log` makes the kernel write the audit messages using `printk` so they show up in the system’s syslog, and `panic` causes the kernel to panic to prevent use of the machine. Auditbeat’s default is `silent`.
+
+**`backlog_limit`**
+: This controls the maximum number of audit messages that will be buffered by the kernel.
+
+**`rate_limit`**
+: This sets a rate limit on the number of messages/sec delivered by the kernel. The default is 0, which disables rate limiting. Changing this value to anything other than zero can cause messages to be lost. The preferred approach to reduce the messaging rate is to be more selective in the audit ruleset.
+
+**`include_raw_message`**
+: This boolean setting causes Auditbeat to include each of the raw messages that contributed to the event in the document as a field called `event.original`. The default value is false. This setting is primarily used for development and debugging purposes.
+
+**`include_warnings`**
+: This boolean setting causes Auditbeat to include as warnings any issues that were encountered while parsing the raw messages. The messages are written to the `error.message` field. The default value is false. When this setting is enabled the raw messages will be included in the event regardless of the `include_raw_message` config setting. This setting is primarily used for development and debugging purposes.
+
+**`audit_rules`**
+: A string containing the audit rules that should be installed to the kernel. There should be one rule per line. Comments can be embedded in the string using `#` as a prefix. The format for rules is the same used by the Linux `auditctl` utility. Auditbeat supports adding file watches (`-w`) and syscall rules (`-a` or `-A`). For more information, see [Audit rules](#audit-rules).
+
+**`audit_rule_files`**
+: A list of files to load audit rules from. These files are loaded after the rules declared in `audit_rules` are loaded. Wildcards are supported and will expand in lexicographical order. The format is the same as that of the `audit_rules` field.
+
+**`ignore_errors`**
+: This setting allows errors during rule loading and parsing to be ignored, but logged as warnings.
+
+**`backpressure_strategy`**
+: Specifies the strategy that Auditbeat uses to prevent backpressure from propagating to the kernel and impacting audited processes.
+
+ The possible values are:
+
+ * `auto` (default): Auditbeat uses the `kernel` strategy, if supported, or falls back to the `userspace` strategy.
+ * `kernel`: Auditbeat sets the `backlog_wait_time` in the kernel’s audit framework to 0. This causes events to be discarded in the kernel if the audit backlog queue fills to capacity. Requires a 3.14 kernel or newer.
+ * `userspace`: Auditbeat drops events when there is backpressure from the publishing pipeline. If no `rate_limit` is set, Auditbeat sets a rate limit of 5000. Users should test their setup and adjust the `rate_limit` option accordingly.
+ * `both`: Auditbeat uses the `kernel` and `userspace` strategies at the same time.
+ * `none`: No backpressure mitigation measures are enabled.
+
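+As an illustration of how these options fit together, here is a minimal sketch of an `auditd` module entry for a host where Auditbeat is the only audit daemon. The values are examples rather than recommendations:
+
+```yaml
+auditbeat.modules:
+- module: auditd
+  # Auditbeat is the only consumer of audit events, so use the exclusive
+  # socket. The `immutable` option requires `socket_type: unicast`.
+  socket_type: unicast
+  immutable: true
+  # Drop events in userspace if the publishing pipeline falls behind,
+  # with an explicit cap instead of the implicit default of 5000.
+  backpressure_strategy: userspace
+  rate_limit: 2000
+  audit_rules: |
+    -w /etc/passwd -p wa -k identity
+```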
+
+
+### Standard configuration options [module-standard-options-auditd]
+
+You can specify the following options for any Auditbeat module.
+
+**`module`**
+: The name of the module to run.
+
+**`enabled`**
+: A Boolean value that specifies whether the module is enabled.
+
+**`fields`**
+: A dictionary of fields that will be sent with the dataset event. This setting is optional.
+
+**`tags`**
+: A list of tags that will be sent with the dataset event. This setting is optional.
+
+**`processors`**
+: A list of processors to apply to the data generated by the dataset.
+
+ See [Processors](/reference/auditbeat/filtering-enhancing-data.md) for information about specifying processors in your config.
+
+
+**`index`**
+: If present, this formatted string overrides the index for events from this module (for elasticsearch outputs), or sets the `raw_index` field of the event’s metadata (for other outputs). This string can only refer to the agent name and version and the event timestamp; for access to dynamic fields, use `output.elasticsearch.index` or a processor.
+
+ Example value: `"%{[agent.name]}-myindex-%{+yyyy.MM.dd}"` might expand to `"auditbeat-myindex-2019.12.13"`.
+
+
+**`keep_null`**
+: If this option is set to true, fields with `null` values will be published in the output document. By default, `keep_null` is set to `false`.
+
+**`service.name`**
+: A name given by the user to the service the data is collected from. It can be used for example to identify information collected from nodes of different clusters with the same `service.type`.
+
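+For example, the standard options can be combined on any module entry. The following is a minimal sketch; the tag, field, and processor values are placeholders:
+
+```yaml
+auditbeat.modules:
+- module: auditd
+  audit_rules: |
+    -w /etc/passwd -p wa -k identity
+  # Standard options available to every module:
+  enabled: true
+  tags: ["identity-monitoring"]        # placeholder tag
+  fields:                              # placeholder custom field
+    env: staging
+  keep_null: false
+  index: "%{[agent.name]}-myindex-%{+yyyy.MM.dd}"
+  processors:
+    - drop_fields:
+        fields: ["process.args"]       # illustrative field to drop
+```
+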
+
+## Audit rules [audit-rules]
+
+The audit rules are where you configure the activities that are audited. These rules are configured as either syscalls or files that should be monitored. For example, you can track all `connect` syscalls or file system writes to `/etc/passwd`.
+
+Auditing a large number of syscalls can place a heavy load on the system, so consider carefully the rules you define and try to apply filters in the rules themselves to be as selective as possible.
+
+The kernel evaluates the rules in the order in which they were defined, so place the most active rules first in order to speed up evaluation.
+
+You can assign keys to each rule for better identification of the rule that triggered an event and easier filtering later in Elasticsearch.
+
+Defining any audit rules in the config causes Auditbeat to purge all existing audit rules prior to adding the rules specified in the config. Therefore it is unnecessary and unsupported to include a `-D` (delete all) rule.
+
+```yaml
+auditbeat.modules:
+- module: auditd
+ audit_rules: |
+ # Things that affect identity.
+ -w /etc/group -p wa -k identity
+ -w /etc/passwd -p wa -k identity
+ -w /etc/gshadow -p wa -k identity
+ -w /etc/shadow -p wa -k identity
+
+ # Unauthorized access attempts to files (unsuccessful).
+ -a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -F key=access
+ -a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -F key=access
+ -a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -F key=access
+ -a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -F key=access
+```
+
+
+## Example configuration [_example_configuration]
+
+The Auditd module supports the common configuration options that are described under [configuring Auditbeat](/reference/auditbeat/configuration-auditbeat.md). Here is an example configuration:
+
+```yaml
+auditbeat.modules:
+- module: auditd
+ # Load audit rules from separate files. Same format as audit.rules(7).
+ audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
+ audit_rules: |
+ ## Define audit rules here.
+ ## Create file watches (-w) or syscall audits (-a or -A). Uncomment these
+ ## examples or add your own rules.
+
+ ## If you are on a 64 bit platform, everything should be running
+ ## in 64 bit mode. This rule will detect any use of the 32 bit syscalls
+ ## because this might be a sign of someone exploiting a hole in the 32
+ ## bit API.
+ #-a always,exit -F arch=b32 -S all -F key=32bit-abi
+
+ ## Executions.
+ #-a always,exit -F arch=b64 -S execve,execveat -k exec
+
+ ## External access (warning: these can be expensive to audit).
+ #-a always,exit -F arch=b64 -S accept,bind,connect -F key=external-access
+
+ ## Identity changes.
+ #-w /etc/group -p wa -k identity
+ #-w /etc/passwd -p wa -k identity
+ #-w /etc/gshadow -p wa -k identity
+
+ ## Unauthorized access attempts.
+ #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
+ #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
+```
+
diff --git a/docs/reference/auditbeat/auditbeat-module-file_integrity.md b/docs/reference/auditbeat/auditbeat-module-file_integrity.md
new file mode 100644
index 000000000000..77009058d5c2
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-module-file_integrity.md
@@ -0,0 +1,137 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-module-file_integrity.html
+---
+
+# File Integrity Module [auditbeat-module-file_integrity]
+
+The `file_integrity` module sends events when a file is changed (created, updated, or deleted) on disk. The events contain file metadata and hashes.
+
+The module is implemented for Linux, macOS (Darwin), and Windows.
+
+
+## How it works [_how_it_works_2]
+
+This module uses features of the operating system to monitor file changes in real time. When the module starts, it creates a subscription with the OS to receive notifications of changes to the specified files or directories. Upon receiving notification of a change, the module reads the file’s metadata and then computes a hash of the file’s contents.
+
+At startup this module will perform an initial scan of the configured files and directories to generate baseline data for the monitored paths and detect changes since the last time it was run. It uses locally persisted data in order to send events only for new or modified files.
+
+The operating system features that power this module are as follows.
+
+* Linux - Multiple backends are supported: `auto`, `fsnotify`, `kprobes`, `ebpf`. By default, `fsnotify` is used, and therefore the kernel must have inotify support. Inotify was initially merged into the 2.6.13 Linux kernel. The eBPF backend uses modern eBPF features and supports 5.10.16+ kernels. The kprobes backend uses tracefs and supports 3.10+ kernels. The fsnotify backend doesn’t have the ability to associate user data with file events. The preferred backend can be selected by specifying the `backend` config option. Since eBPF and kprobes are in technical preview, `auto` will default to `fsnotify`.
+* macOS (Darwin) - Uses the `FSEvents` API, present since macOS 10.5. This API coalesces multiple changes to a file into a single event. Auditbeat translates these coalesced changes into a meaningful sequence of actions. However, in rare situations the reported events may have a different ordering than what actually happened.
+* Windows - `ReadDirectoryChangesW` is used.
+
+The file integrity module should not be used to monitor paths on network file systems.
+
+
+## Configuration options [_configuration_options_18]
+
+This module has some configuration options for tuning its behavior. The following example shows all configuration options with their default values for Linux.
+
+```yaml
+- module: file_integrity
+ paths:
+ - /bin
+ - /usr/bin
+ - /sbin
+ - /usr/sbin
+ - /etc
+ exclude_files:
+ - '(?i)\.sw[nop]$'
+ - '~$'
+ - '/\.git($|/)'
+ include_files: []
+ scan_at_start: true
+ scan_rate_per_sec: 50 MiB
+ max_file_size: 100 MiB
+ hash_types: [sha1]
+ recursive: false
+```
+
+This module also supports the [standard configuration options](#module-standard-options-file_integrity) described later.
+
+**`paths`**
+: A list of paths (directories or files) to watch. Globs are not supported. The specified paths should exist when the metricset is started. Paths should be absolute, although the file integrity module will attempt to resolve relative path events to their absolute file path. Symbolic links will be resolved on module start, and the link target will be watched if link resolution is successful. Changes to the symbolic link after module start will not change the watch target. If the link does not resolve to a valid target, the symbolic link itself will be watched; if the symlink target becomes valid after module startup, this will not be picked up by the file system watches.
+
+**`exclude_files`**
+: A list of regular expressions used to filter out events for unwanted files. The expressions are matched against the full path of every file and directory. When used in conjunction with `include_files`, a file path must match `include_files` and must not match `exclude_files` to be selected. By default, no files are excluded. See [*Regular expression support*](/reference/auditbeat/regexp-support.md) for a list of supported regexp patterns. It is recommended to wrap regular expressions in single quotation marks to avoid issues with YAML escaping rules.
+
+**`include_files`**
+: A list of regular expressions used to specify which files to select. When configured, only files matching the pattern will be monitored. The expressions are matched against the full path of every file and directory. When used in conjunction with `exclude_files`, a file path must match `include_files` and must not match `exclude_files` to be selected. By default, all files are selected. See [*Regular expression support*](/reference/auditbeat/regexp-support.md) for a list of supported regexp patterns. It is recommended to wrap regular expressions in single quotation marks to avoid issues with YAML escaping rules.
+
+**`scan_at_start`**
+: A boolean value that controls whether Auditbeat scans over the configured file paths at startup and sends events for the files that have been modified since the last time Auditbeat was running. The default value is true.
+
+ This feature depends on data stored locally in `path.data` in order to determine if a file has changed. The first time Auditbeat runs it will send an event for each file it encounters.
+
+
+**`scan_rate_per_sec`**
+: When `scan_at_start` is enabled this sets an average read rate defined in bytes per second for the initial scan. This throttles the amount of CPU and I/O that Auditbeat consumes at startup. The default value is "50 MiB". Setting the value to "0" disables throttling. For convenience units can be specified as a suffix to the value. The supported units are `b` (default), `kib`, `kb`, `mib`, `mb`, `gib`, `gb`, `tib`, `tb`, `pib`, `pb`, `eib`, and `eb`.
+
+**`max_file_size`**
+: The maximum size of a file in bytes for which Auditbeat will compute hashes and run file parsers. Files larger than this size will not be hashed or analysed by configured file parsers. The default value is 100 MiB. For convenience, units can be specified as a suffix to the value. The supported units are `b` (default), `kib`, `kb`, `mib`, `mb`, `gib`, `gb`, `tib`, `tb`, `pib`, `pb`, `eib`, and `eb`.
+
+**`hash_types`**
+: A list of hash types to compute when the file changes. The supported hash types are `blake2b_256`, `blake2b_384`, `blake2b_512`, `md5`, `sha1`, `sha224`, `sha256`, `sha384`, `sha512`, `sha512_224`, `sha512_256`, `sha3_224`, `sha3_256`, `sha3_384`, `sha3_512`, and `xxh64`. The default value is `sha1`.
+
+**`file_parsers`**
+: A list of `file_integrity` fields under `file` that will be populated by file format parsers. The available fields that can be analysed are listed in the `auditbeat.reference.yml` file. File parsers are run on all files within the `max_file_size` limit in the configured paths during a scan or when a file event involves the file. Files that are not targets of the specific file parser are only sniffed to examine whether analysis should proceed. This will usually only involve reading a small number of bytes.
+
+**`recursive`**
+: By default, the watches set to the paths specified in `paths` are not recursive. This means that only changes to the contents of these directories are watched. If `recursive` is set to `true`, the `file_integrity` module will watch for changes in these directories and all their subdirectories.
+
+**`backend`**
+: (**Linux only**) Select the backend which will be used to source events. Valid values: `auto`, `fsnotify`, `kprobes`, `ebpf`. Default: `fsnotify`.
+
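+To show how the filtering and hashing options combine, here is a minimal sketch. The path, patterns, and values are illustrative only (the `include_files` pattern is borrowed from the commented example in `auditbeat.reference.yml`):
+
+```yaml
+auditbeat.modules:
+- module: file_integrity
+  paths:
+    - /etc
+  # A file is selected only if it matches include_files and does not match
+  # exclude_files. Single quotes avoid YAML escaping issues.
+  include_files:
+    - '/\.ssh($|/)'
+  exclude_files:
+    - '~$'
+  # Hash changed files with two algorithms, skip anything over 50 MiB,
+  # and also watch subdirectories of the configured paths.
+  hash_types: [sha256, xxh64]
+  max_file_size: 50 MiB
+  recursive: true
+```
+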
+
+### Standard configuration options [module-standard-options-file_integrity]
+
+You can specify the following options for any Auditbeat module.
+
+**`module`**
+: The name of the module to run.
+
+**`enabled`**
+: A Boolean value that specifies whether the module is enabled.
+
+**`fields`**
+: A dictionary of fields that will be sent with the dataset event. This setting is optional.
+
+**`tags`**
+: A list of tags that will be sent with the dataset event. This setting is optional.
+
+**`processors`**
+: A list of processors to apply to the data generated by the dataset.
+
+ See [Processors](/reference/auditbeat/filtering-enhancing-data.md) for information about specifying processors in your config.
+
+
+**`index`**
+: If present, this formatted string overrides the index for events from this module (for elasticsearch outputs), or sets the `raw_index` field of the event’s metadata (for other outputs). This string can only refer to the agent name and version and the event timestamp; for access to dynamic fields, use `output.elasticsearch.index` or a processor.
+
+ Example value: `"%{[agent.name]}-myindex-%{+yyyy.MM.dd}"` might expand to `"auditbeat-myindex-2019.12.13"`.
+
+
+**`keep_null`**
+: If this option is set to true, fields with `null` values will be published in the output document. By default, `keep_null` is set to `false`.
+
+**`service.name`**
+: A name given by the user to the service the data is collected from. It can be used for example to identify information collected from nodes of different clusters with the same `service.type`.
+
+
+## Example configuration [_example_configuration_2]
+
+The File Integrity module supports the common configuration options that are described under [configuring Auditbeat](/reference/auditbeat/configuration-auditbeat.md). Here is an example configuration:
+
+```yaml
+auditbeat.modules:
+- module: file_integrity
+ paths:
+ - /bin
+ - /usr/bin
+ - /sbin
+ - /usr/sbin
+ - /etc
+```
+
diff --git a/docs/reference/auditbeat/auditbeat-module-system.md b/docs/reference/auditbeat/auditbeat-module-system.md
new file mode 100644
index 000000000000..0240d427d794
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-module-system.md
@@ -0,0 +1,211 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-module-system.html
+---
+
+# System Module [auditbeat-module-system]
+
+::::{warning}
+This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+::::
+
+
+The `system` module collects various security related information about a system. All datasets send both periodic state information (e.g. all currently running processes) and real-time changes (e.g. when a new process starts or stops).
+
+The module is fully implemented for Linux on x86. Currently, the `socket` dataset is not available on ARM. Some datasets are also available for macOS (Darwin) and Windows.
+
+
+## How it works [_how_it_works_3]
+
+Each dataset sends two kinds of information: state and events.
+
+State information is sent periodically and (for some datasets) on startup. A state update will consist of one event per object that is currently active on the system (e.g. a process). All events belonging to the same state update will share the same UUID in `event.id`.
+
+The frequency of state updates can be controlled for all datasets using the `state.period` configuration option. Overrides are available per dataset. The default is `12h`.
+
+Event information is sent as the events occur (e.g. a process starts or stops). All datasets are currently using a poll model to retrieve their data. The frequency of these polls is controlled by the `period` configuration parameter.
+
+
+### Entity IDs [_entity_ids]
+
+This module populates `entity_id` fields to uniquely identify entities (users, packages, processes…) within a host. This requires Auditbeat to obtain a unique identifier for the host:
+
+* Windows: Uses the `HKLM\Software\Microsoft\Cryptography\MachineGuid` registry key.
+* macOS: Uses the value returned by the `gethostuuid(2)` system call.
+* Linux: Uses the content of one of the following files, created by either `systemd` or `dbus`:
+
+ * /etc/machine-id
+ * /var/lib/dbus/machine-id
+ * /var/db/dbus/machine-id
+
+
+::::{note}
+Under CentOS 6.x, it’s possible that none of the files above exist. In that case, running `dbus-uuidgen --ensure` (provided by the `dbus` package) will generate one for you.
+::::
+
+
+
+### Example dashboard [_example_dashboard]
+
+The module comes with a sample dashboard:
+
+:::{image} images/auditbeat-system-overview-dashboard.png
+:alt: Auditbeat System Overview Dashboard
+:class: screenshot
+:::
+
+
+## Configuration options [_configuration_options_19]
+
+This module has some configuration options for controlling its behavior. The following example shows all configuration options with their default values for Linux.
+
+::::{note}
+It is recommended to configure some datasets separately. See below for a sample suggested configuration.
+::::
+
+
+```yaml
+- module: system
+ datasets:
+ - host
+ - login
+ - package
+ - process
+ - socket
+ - user
+ period: 10s
+ state.period: 12h
+
+ socket.include_localhost: false
+
+ user.detect_password_changes: true
+```
+
+This module also supports the [standard configuration options](#module-standard-options-system) described later.
+
+**`state.period`**
+: The interval at which the datasets send full state information. This option can be overridden per dataset using `{{dataset}}.state.period`.
+
+**`user.detect_password_changes`**
+: If the `user` dataset is configured and this option is set to `true`, Auditbeat will read password information in `/etc/passwd` and `/etc/shadow` to detect password changes. A hash will be kept locally in the `beat.db` file to detect changes between Auditbeat restarts. The `beat.db` file should be readable only by the root user and treated similarly to the shadow file itself.
+
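+For example, the full-state interval can be set once for the module and then overridden for an individual dataset. The intervals below are illustrative, not recommendations:
+
+```yaml
+auditbeat.modules:
+- module: system
+  datasets:
+    - process
+    - user
+  period: 10s
+  # Send full state for all datasets every 12 hours...
+  state.period: 12h
+  # ...except the process dataset, which resends its full state hourly.
+  process.state.period: 1h
+  user.detect_password_changes: true
+```
+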
+
+### Standard configuration options [module-standard-options-system]
+
+You can specify the following options for any Auditbeat module.
+
+**`module`**
+: The name of the module to run.
+
+**`datasets`**
+: A list of datasets to execute.
+
+**`enabled`**
+: A Boolean value that specifies whether the module is enabled.
+
+**`period`**
+: The frequency at which the datasets check for changes. If a system is not reachable, Auditbeat returns an error for each period. This setting is required. For most datasets, especially `process` and `socket`, a shorter period is recommended.
+
+**`fields`**
+: A dictionary of fields that will be sent with the dataset event. This setting is optional.
+
+**`tags`**
+: A list of tags that will be sent with the dataset event. This setting is optional.
+
+**`processors`**
+: A list of processors to apply to the data generated by the dataset.
+
+ See [Processors](/reference/auditbeat/filtering-enhancing-data.md) for information about specifying processors in your config.
+
+
+**`index`**
+: If present, this formatted string overrides the index for events from this module (for elasticsearch outputs), or sets the `raw_index` field of the event’s metadata (for other outputs). This string can only refer to the agent name and version and the event timestamp; for access to dynamic fields, use `output.elasticsearch.index` or a processor.
+
+ Example value: `"%{[agent.name]}-myindex-%{+yyyy.MM.dd}"` might expand to `"auditbeat-myindex-2019.12.13"`.
+
+
+**`keep_null`**
+: If this option is set to true, fields with `null` values will be published in the output document. By default, `keep_null` is set to `false`.
+
+**`service.name`**
+: A name given by the user to the service the data is collected from. It can be used for example to identify information collected from nodes of different clusters with the same `service.type`.
+
+
+## Suggested configuration [_suggested_configuration]
+
+Processes and sockets can be short-lived, so the chance of missing an update increases if the polling interval is too large.
+
+On the other hand, host and user information is unlikely to change frequently, so a longer polling interval can be used.
+
+```yaml
+- module: system
+ datasets:
+ - host
+ - login
+ - package
+ - user
+ period: 1m
+
+ user.detect_password_changes: true
+
+- module: system
+ datasets:
+ - process
+ - socket
+ period: 1s
+```
+
+
+## Example configuration [_example_configuration_3]
+
+The System module supports the common configuration options that are described under [configuring Auditbeat](/reference/auditbeat/configuration-auditbeat.md). Here is an example configuration:
+
+```yaml
+auditbeat.modules:
+- module: system
+ datasets:
+ - package # Installed, updated, and removed packages
+
+ period: 2m # The frequency at which the datasets check for changes
+
+- module: system
+ datasets:
+ - host # General host information, e.g. uptime, IPs
+ - login # User logins, logouts, and system boots.
+ - process # Started and stopped processes
+ - socket # Opened and closed sockets
+ - user # User information
+
+ # How often datasets send state updates with the
+ # current state of the system (e.g. all currently
+ # running processes, all open sockets).
+ state.period: 12h
+
+ # Enabled by default. Auditbeat will read password fields in
+ # /etc/passwd and /etc/shadow and store a hash locally to
+ # detect any changes.
+ user.detect_password_changes: true
+
+ # File patterns of the login record files.
+ login.wtmp_file_pattern: /var/log/wtmp*
+ login.btmp_file_pattern: /var/log/btmp*
+```
+
+
+## Datasets [_datasets]
+
+The following datasets are available:
+
+* [host](/reference/auditbeat/auditbeat-dataset-system-host.md)
+* [login](/reference/auditbeat/auditbeat-dataset-system-login.md)
+* [package](/reference/auditbeat/auditbeat-dataset-system-package.md)
+* [process](/reference/auditbeat/auditbeat-dataset-system-process.md)
+* [socket](/reference/auditbeat/auditbeat-dataset-system-socket.md)
+* [user](/reference/auditbeat/auditbeat-dataset-system-user.md)
+
+
+
+
+
+
+
diff --git a/docs/reference/auditbeat/auditbeat-modules.md b/docs/reference/auditbeat/auditbeat-modules.md
new file mode 100644
index 000000000000..a4d87b1fb4f0
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-modules.md
@@ -0,0 +1,13 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-modules.html
+---
+
+# Modules [auditbeat-modules]
+
+This section contains detailed information about the data-collecting modules contained in Auditbeat. More details about each module can be found under the links below.
+
+* [Auditd](/reference/auditbeat/auditbeat-module-auditd.md)
+* [File Integrity](/reference/auditbeat/auditbeat-module-file_integrity.md)
+* [System](/reference/auditbeat/auditbeat-module-system.md)
+
diff --git a/docs/reference/auditbeat/auditbeat-overview.md b/docs/reference/auditbeat/auditbeat-overview.md
new file mode 100644
index 000000000000..7fb6f4754fa4
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-overview.md
@@ -0,0 +1,12 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-overview.html
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/index.html
+---
+
+# Auditbeat overview [auditbeat-overview]
+
+Auditbeat is a lightweight shipper that you can install on your servers to audit the activities of users and processes on your systems. For example, you can use Auditbeat to collect and centralize audit events from the Linux Audit Framework. You can also use Auditbeat to detect changes to critical files, like binaries and configuration files, and identify potential security policy violations.
+
+Auditbeat is an Elastic [Beat](https://www.elastic.co/beats). It’s based on the `libbeat` framework. For more information, see the [Beats Platform Reference](/reference/index.md).
+
diff --git a/docs/reference/auditbeat/auditbeat-reference-yml.md b/docs/reference/auditbeat/auditbeat-reference-yml.md
new file mode 100644
index 000000000000..b62983d5bcba
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-reference-yml.md
@@ -0,0 +1,1876 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-reference-yml.html
+---
+
+# auditbeat.reference.yml [auditbeat-reference-yml]
+
+The following reference file is available with your Auditbeat installation. It shows all non-deprecated Auditbeat options. You can copy from this file and paste configurations into the `auditbeat.yml` file to customize it.
+
+::::{tip}
+The reference file is located in the same directory as the `auditbeat.yml` file. To locate the file, see [Directory layout](/reference/auditbeat/directory-layout.md).
+::::
+
+
+The contents of the file are included here for your convenience.
+
+```yaml
+## Auditbeat Configuration #############################
+
+# This is a reference configuration file documenting all non-deprecated options
+# in comments. For a shorter configuration example that contains only the most
+# common options, please see auditbeat.yml in the same directory.
+#
+# You can find the full configuration reference here:
+# https://www.elastic.co/guide/en/beats/auditbeat/index.html
+
+# ============================== Config Reloading ==============================
+
+# Config reloading allows to dynamically load modules. Each file that is
+# monitored must contain one or multiple modules as a list.
+auditbeat.config.modules:
+
+ # Glob pattern for configuration reloading
+ path: ${path.config}/modules.d/*.yml
+
+ # Period on which files under path should be checked for changes
+ reload.period: 10s
+
+ # Set to true to enable config reloading
+ reload.enabled: false
+
+# Maximum amount of time to randomly delay the start of a dataset. Use 0 to
+# disable startup delay.
+auditbeat.max_start_delay: 10s
+
+# =========================== Modules configuration ============================
+auditbeat.modules:
+
+# The auditd module collects events from the audit framework in the Linux
+# kernel. You need to specify audit rules for the events that you want to audit.
+- module: auditd
+ resolve_ids: true
+ failure_mode: silent
+ backlog_limit: 8196
+ rate_limit: 0
+ include_raw_message: false
+ include_warnings: false
+
+ # Set to true to publish fields with null values in events.
+ #keep_null: false
+
+ # Load audit rules from separate files. Same format as audit.rules(7).
+ audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
+ audit_rules: |
+ ## Define audit rules here.
+ ## Create file watches (-w) or syscall audits (-a or -A). Uncomment these
+ ## examples or add your own rules.
+
+ ## If you are on a 64 bit platform, everything should be running
+ ## in 64 bit mode. This rule will detect any use of the 32 bit syscalls
+ ## because this might be a sign of someone exploiting a hole in the 32
+ ## bit API.
+ #-a always,exit -F arch=b32 -S all -F key=32bit-abi
+
+ ## Executions.
+ #-a always,exit -F arch=b64 -S execve,execveat -k exec
+
+ ## External access (warning: these can be expensive to audit).
+ #-a always,exit -F arch=b64 -S accept,bind,connect -F key=external-access
+
+ ## Identity changes.
+ #-w /etc/group -p wa -k identity
+ #-w /etc/passwd -p wa -k identity
+ #-w /etc/gshadow -p wa -k identity
+
+ ## Unauthorized access attempts.
+ #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
+ #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
+
+# The file integrity module sends events when files are changed (created,
+# updated, deleted). The events contain file metadata and hashes.
+- module: file_integrity
+ paths:
+ - /bin
+ - /usr/bin
+ - /sbin
+ - /usr/sbin
+ - /etc
+
+ # List of regular expressions to filter out notifications for unwanted files.
+ # Wrap in single quotes to workaround YAML escaping rules. By default no files
+ # are ignored.
+ exclude_files:
+ - '(?i)\.sw[nop]$'
+ - '~$'
+ - '/\.git($|/)'
+
+ # List of regular expressions used to explicitly include files. When configured,
+ # Auditbeat will ignore files unless they match a pattern.
+ #include_files:
+ #- '/\.ssh($|/)'
+ # Select the backend which will be used to source events.
+ # "fsnotify" doesn't have the ability to associate user data to file events.
+ # Valid values: auto, fsnotify, kprobes, ebpf.
+ # Default: fsnotify.
+ backend: fsnotify
+
+ # Scan over the configured file paths at startup and send events for new or
+ # modified files since the last time Auditbeat was running.
+ scan_at_start: true
+
+ # Average scan rate. This throttles the amount of CPU and I/O that Auditbeat
+ # consumes at startup while scanning. Default is "50 MiB".
+ scan_rate_per_sec: 50 MiB
+
+ # Limit on the size of files that will be hashed. Default is "100 MiB".
+ max_file_size: 100 MiB
+
+ # Hash types to compute when the file changes. Supported types are
+ # blake2b_256, blake2b_384, blake2b_512, md5, sha1, sha224, sha256, sha384,
+ # sha512, sha512_224, sha512_256, sha3_224, sha3_256, sha3_384, sha3_512, and xxh64.
+ # Default is sha1.
+ hash_types: [sha1]
+
+ # Detect changes to files included in subdirectories. Disabled by default.
+ recursive: false
+
+ # Set to true to publish fields with null values in events.
+ #keep_null: false
+
+ # Parse detailed information for the listed fields. Field paths in the list below
+ # that are a prefix of other field paths imply the longer field path. A set of
+ # fields may be specified using an RE2 regular expression quoted in //. For example
+ # /^file\.pe\./ will match all file.pe.* fields. Note that the expression is not
+ # implicitly anchored, so the empty expression will match all fields.
+ # file_parsers:
+ # - file.elf.sections
+ # - file.elf.sections.name
+ # - file.elf.sections.physical_size
+ # - file.elf.sections.virtual_size
+ # - file.elf.sections.entropy
+ # - file.elf.sections.var_entropy
+ # - file.elf.import_hash
+ # - file.elf.imports
+ # - file.elf.imports_names_entropy
+ # - file.elf.imports_names_var_entropy
+ # - file.elf.go_import_hash
+ # - file.elf.go_imports
+ # - file.elf.go_imports_names_entropy
+ # - file.elf.go_imports_names_var_entropy
+ # - file.elf.go_stripped
+ # - file.macho.sections
+ # - file.macho.sections.name
+ # - file.macho.sections.physical_size
+ # - file.macho.sections.virtual_size
+ # - file.macho.sections.entropy
+ # - file.macho.sections.var_entropy
+ # - file.macho.import_hash
+ # - file.macho.symhash
+ # - file.macho.imports
+ # - file.macho.imports_names_entropy
+ # - file.macho.imports_names_var_entropy
+ # - file.macho.go_import_hash
+ # - file.macho.go_imports
+ # - file.macho.go_imports_names_entropy
+ # - file.macho.go_imports_names_var_entropy
+ # - file.macho.go_stripped
+ # - file.pe.sections
+ # - file.pe.sections.name
+ # - file.pe.sections.physical_size
+ # - file.pe.sections.virtual_size
+ # - file.pe.sections.entropy
+ # - file.pe.sections.var_entropy
+ # - file.pe.import_hash
+ # - file.pe.imphash
+ # - file.pe.imports
+ # - file.pe.imports_names_entropy
+ # - file.pe.imports_names_var_entropy
+ # - file.pe.go_import_hash
+ # - file.pe.go_imports
+ # - file.pe.go_imports_names_entropy
+ # - file.pe.go_imports_names_var_entropy
+ # - file.pe.go_stripped
+
+
+
+# ================================== General ===================================
+
+# The name of the shipper that publishes the network data. It can be used to group
+# all the transactions sent by a single shipper in the web interface.
+# If this option is not defined, the hostname is used.
+#name:
+
+# The tags of the shipper are included in their field with each
+# transaction published. Tags make it easy to group servers by different
+# logical properties.
+#tags: ["service-X", "web-tier"]
+
+# Optional fields that you can specify to add additional information to the
+# output. Fields can be scalar values, arrays, dictionaries, or any nested
+# combination of these.
+#fields:
+# env: staging
+
+# If this option is set to true, the custom fields are stored as top-level
+# fields in the output document instead of being grouped under a field
+# sub-dictionary. Default is false.
+#fields_under_root: false
+
+# Configure the precision of all timestamps in Auditbeat.
+# Available options: millisecond, microsecond, nanosecond
+#timestamp.precision: millisecond
+
+# Internal queue configuration for buffering events to be published.
+# Queue settings may be overridden by performance presets in the
+# Elasticsearch output. To configure them manually use "preset: custom".
+#queue:
+ # Queue type by name (default 'mem')
+ # The memory queue will present all available events (up to the outputs
+ # bulk_max_size) to the output, the moment the output is ready to serve
+ # another batch of events.
+ #mem:
+ # Max number of events the queue can buffer.
+ #events: 3200
+
+ # Hints the minimum number of events stored in the queue,
+ # before providing a batch of events to the outputs.
+ # The default value is set to 2048.
+ # A value of 0 ensures events are immediately available
+ # to be sent to the outputs.
+ #flush.min_events: 1600
+
+ # Maximum duration after which events are available to the outputs,
+ # if the number of events stored in the queue is < `flush.min_events`.
+ #flush.timeout: 10s
+
+ # The disk queue stores incoming events on disk until the output is
+ # ready for them. This allows a higher event limit than the memory-only
+ # queue and lets pending events persist through a restart.
+ #disk:
+ # The directory path to store the queue's data.
+ #path: "${path.data}/diskqueue"
+
+ # The maximum space the queue should occupy on disk. Depending on
+ # input settings, events that exceed this limit are delayed or discarded.
+ #max_size: 10GB
+
+ # The maximum size of a single queue data file. Data in the queue is
+ # stored in smaller segments that are deleted after all their events
+ # have been processed.
+ #segment_size: 1GB
+
+ # The number of events to read from disk to memory while waiting for
+ # the output to request them.
+ #read_ahead: 512
+
+ # The number of events to accept from inputs while waiting for them
+ # to be written to disk. If event data arrives faster than it
+ # can be written to disk, this setting prevents it from overflowing
+ # main memory.
+ #write_ahead: 2048
+
+ # The duration to wait before retrying when the queue encounters a disk
+ # write error.
+ #retry_interval: 1s
+
+ # The maximum length of time to wait before retrying on a disk write
+ # error. If the queue encounters repeated errors, it will double the
+ # length of its retry interval each time, up to this maximum.
+ #max_retry_interval: 30s
+
+# Sets the maximum number of CPUs that can be executed simultaneously. The
+# default is the number of logical CPUs available in the system.
+#max_procs:
+
+# ================================= Processors =================================
+
+# Processors are used to reduce the number of fields in the exported event or to
+# enhance the event with external metadata. This section defines a list of
+# processors that are applied one by one and the first one receives the initial
+# event:
+#
+# event -> filter1 -> event1 -> filter2 ->event2 ...
+#
+# The supported processors are drop_fields, drop_event, include_fields,
+# decode_json_fields, and add_cloud_metadata.
+#
+# For example, you can use the following processors to keep the fields that
+# contain CPU load percentages, but remove the fields that contain CPU ticks
+# values:
+#
+#processors:
+# - include_fields:
+# fields: ["cpu"]
+# - drop_fields:
+# fields: ["cpu.user", "cpu.system"]
+#
+# The following example drops the events that have the HTTP response code 200:
+#
+#processors:
+# - drop_event:
+# when:
+# equals:
+# http.code: 200
+#
+# The following example renames the field a to b:
+#
+#processors:
+# - rename:
+# fields:
+# - from: "a"
+# to: "b"
+#
+# The following example tokenizes the string into fields:
+#
+#processors:
+# - dissect:
+# tokenizer: "%{key1} - %{key2}"
+# field: "message"
+# target_prefix: "dissect"
+#
+# The following example enriches each event with metadata from the cloud
+# provider about the host machine. It works on EC2, GCE, DigitalOcean,
+# Tencent Cloud, and Alibaba Cloud.
+#
+#processors:
+# - add_cloud_metadata: ~
+#
+# The following example enriches each event with the machine's local time zone
+# offset from UTC.
+#
+#processors:
+# - add_locale:
+# format: offset
+#
+# The following example enriches each event with docker metadata, it matches
+# given fields to an existing container id and adds info from that container:
+#
+#processors:
+# - add_docker_metadata:
+# host: "unix:///var/run/docker.sock"
+# match_fields: ["system.process.cgroup.id"]
+# match_pids: ["process.pid", "process.parent.pid"]
+# match_source: true
+# match_source_index: 4
+# match_short_id: false
+# cleanup_timeout: 60
+# labels.dedot: false
+# # To connect to Docker over TLS you must specify a client and CA certificate.
+# #ssl:
+# # certificate_authority: "/etc/pki/root/ca.pem"
+# # certificate: "/etc/pki/client/cert.pem"
+# # key: "/etc/pki/client/cert.key"
+#
+# The following example enriches each event with docker metadata, it matches
+# container id from log path available in `source` field (by default it expects
+# it to be /var/lib/docker/containers/*/*.log).
+#
+#processors:
+# - add_docker_metadata: ~
+#
+# The following example enriches each event with host metadata.
+#
+#processors:
+# - add_host_metadata: ~
+#
+# The following example enriches each event with process metadata using
+# process IDs included in the event.
+#
+#processors:
+# - add_process_metadata:
+# match_pids: ["system.process.ppid"]
+# target: system.process.parent
+#
+# The following example decodes fields containing JSON strings
+# and replaces the strings with valid JSON objects.
+#
+#processors:
+# - decode_json_fields:
+# fields: ["field1", "field2", ...]
+# process_array: false
+# max_depth: 1
+# target: ""
+# overwrite_keys: false
+#
+#processors:
+# - decompress_gzip_field:
+# from: "field1"
+# to: "field2"
+# ignore_missing: false
+# fail_on_error: true
+#
+# The following example copies the value of the message to message_copied
+#
+#processors:
+# - copy_fields:
+# fields:
+# - from: message
+# to: message_copied
+# fail_on_error: true
+# ignore_missing: false
+#
+# The following example truncates the value of the message to 1024 bytes
+#
+#processors:
+# - truncate_fields:
+# fields:
+# - message
+# max_bytes: 1024
+# fail_on_error: false
+# ignore_missing: true
+#
+# The following example preserves the raw message under event.original
+#
+#processors:
+# - copy_fields:
+# fields:
+# - from: message
+# to: event.original
+# fail_on_error: false
+# ignore_missing: true
+# - truncate_fields:
+# fields:
+# - event.original
+# max_bytes: 1024
+# fail_on_error: false
+# ignore_missing: true
+#
+# The following example URL-decodes the value of field1 to field2
+#
+#processors:
+# - urldecode:
+# fields:
+# - from: "field1"
+# to: "field2"
+# ignore_missing: false
+# fail_on_error: true
+
+# =============================== Elastic Cloud ================================
+
+# These settings simplify using Auditbeat with the Elastic Cloud (https://cloud.elastic.co/).
+
+# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
+# `setup.kibana.host` options.
+# You can find the `cloud.id` in the Elastic Cloud web UI.
+#cloud.id:
+
+# The cloud.auth setting overwrites the `output.elasticsearch.username` and
+# `output.elasticsearch.password` settings. The format is `:`.
+#cloud.auth:
+
+# ================================== Outputs ===================================
+
+# Configure what output to use when sending the data collected by the beat.
+
+# ---------------------------- Elasticsearch Output ----------------------------
+output.elasticsearch:
+ # Boolean flag to enable or disable the output module.
+ #enabled: true
+
+ # Array of hosts to connect to.
+ # Scheme and port can be left out and will be set to the default (http and 9200)
+  # In case you specify an additional path, the scheme is required: http://localhost:9200/path
+ # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
+ hosts: ["localhost:9200"]
+
+ # Performance presets configure other output fields to recommended values
+ # based on a performance priority.
+ # Options are "balanced", "throughput", "scale", "latency" and "custom".
+ # Default if unspecified: "custom"
+ preset: balanced
+
+ # Set gzip compression level. Set to 0 to disable compression.
+ # This field may conflict with performance presets. To set it
+ # manually use "preset: custom".
+ # The default is 1.
+ #compression_level: 1
+
+ # Configure escaping HTML symbols in strings.
+ #escape_html: false
+
+ # Protocol - either `http` (default) or `https`.
+ #protocol: "https"
+
+ # Authentication credentials - either API key or username/password.
+ #api_key: "id:api_key"
+ #username: "elastic"
+ #password: "changeme"
+
+ # Dictionary of HTTP parameters to pass within the URL with index operations.
+ #parameters:
+ #param1: value1
+ #param2: value2
+
+ # Number of workers per Elasticsearch host.
+ # This field may conflict with performance presets. To set it
+ # manually use "preset: custom".
+ #worker: 1
+
+ # If set to true and multiple hosts are configured, the output plugin load
+ # balances published events onto all Elasticsearch hosts. If set to false,
+ # the output plugin sends all events to only one host (determined at random)
+ # and will switch to another host if the currently selected one becomes
+ # unreachable. The default value is true.
+ #loadbalance: true
+
+ # Optional data stream or index name. The default is "auditbeat-%{[agent.version]}".
+ # In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly.
+ #index: "auditbeat-%{[agent.version]}"
+
+ # Optional ingest pipeline. By default, no pipeline will be used.
+ #pipeline: ""
+
+ # Optional HTTP path
+ #path: "/elasticsearch"
+
+ # Custom HTTP headers to add to each request
+ #headers:
+ # X-My-Header: Contents of the header
+
+ # Proxy server URL
+ #proxy_url: http://proxy:3128
+
+ # Whether to disable proxy settings for outgoing connections. If true, this
+ # takes precedence over both the proxy_url field and any environment settings
+ # (HTTP_PROXY, HTTPS_PROXY). The default is false.
+ #proxy_disable: false
+
+ # The number of times a particular Elasticsearch index operation is attempted. If
+ # the indexing operation doesn't succeed after this many retries, the events are
+ # dropped. The default is 3.
+ #max_retries: 3
+
+ # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
+ # This field may conflict with performance presets. To set it
+ # manually use "preset: custom".
+ # The default is 1600.
+ #bulk_max_size: 1600
+
+ # The number of seconds to wait before trying to reconnect to Elasticsearch
+ # after a network error. After waiting backoff.init seconds, the Beat
+ # tries to reconnect. If the attempt fails, the backoff timer is increased
+ # exponentially up to backoff.max. After a successful connection, the backoff
+ # timer is reset. The default is 1s.
+ #backoff.init: 1s
+
+ # The maximum number of seconds to wait before attempting to connect to
+ # Elasticsearch after a network error. The default is 60s.
+ #backoff.max: 60s
+
+ # The maximum amount of time an idle connection will remain idle
+ # before closing itself. Zero means use the default of 60s. The
+ # format is a Go language duration (example 60s is 60 seconds).
+ # This field may conflict with performance presets. To set it
+ # manually use "preset: custom".
+ # The default is 3s.
+ # idle_connection_timeout: 3s
+
+ # Configure HTTP request timeout before failing a request to Elasticsearch.
+ #timeout: 90
+
+ # Prevents auditbeat from connecting to older Elasticsearch versions when set to `false`
+ #allow_older_versions: true
+
+ # Use SSL settings for HTTPS.
+ #ssl.enabled: true
+
+ # Controls the verification of certificates. Valid values are:
+ # * full, which verifies that the provided certificate is signed by a trusted
+ # authority (CA) and also verifies that the server's hostname (or IP address)
+ # matches the names identified within the certificate.
+ # * strict, which verifies that the provided certificate is signed by a trusted
+ # authority (CA) and also verifies that the server's hostname (or IP address)
+ # matches the names identified within the certificate. If the Subject Alternative
+ # Name is empty, it returns an error.
+ # * certificate, which verifies that the provided certificate is signed by a
+ # trusted authority (CA), but does not perform any hostname verification.
+ # * none, which performs no verification of the server's certificate. This
+ # mode disables many of the security benefits of SSL/TLS and should only be used
+ # after very careful consideration. It is primarily intended as a temporary
+ # diagnostic mechanism when attempting to resolve TLS errors; its use in
+ # production environments is strongly discouraged.
+ # The default value is full.
+ #ssl.verification_mode: full
+
+ # List of supported/valid TLS versions. By default all TLS versions from 1.1
+ # up to 1.3 are enabled.
+ #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]
+
+ # List of root certificates for HTTPS server verifications
+ #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+ # Certificate for SSL client authentication
+ #ssl.certificate: "/etc/pki/client/cert.pem"
+
+ # Client certificate key
+ #ssl.key: "/etc/pki/client/cert.key"
+
+ # Optional passphrase for decrypting the certificate key.
+ #ssl.key_passphrase: ''
+
+ # Configure cipher suites to be used for SSL connections
+ #ssl.cipher_suites: []
+
+ # Configure curve types for ECDHE-based cipher suites
+ #ssl.curve_types: []
+
+ # Configure what types of renegotiation are supported. Valid options are
+ # never, once, and freely. Default is never.
+ #ssl.renegotiation: never
+
+ # Configure a pin that can be used to do extra validation of the verified certificate chain,
+ # this allow you to ensure that a specific certificate is used to validate the chain of trust.
+ #
+ # The pin is a base64 encoded string of the SHA-256 fingerprint.
+ #ssl.ca_sha256: ""
+
+ # A HEX-encoded root CA fingerprint. During the SSL handshake, if the
+ # fingerprint matches the root CA certificate, it will be added to
+ # the provided list of root CAs (`certificate_authorities`). If the
+ # list is empty or not defined, the matching certificate will be the
+ # only one in the list. Then the normal SSL validation happens.
+ #ssl.ca_trusted_fingerprint: ""
+
+
+ # Enables restarting auditbeat if any file listed by `key`,
+ # `certificate`, or `certificate_authorities` is modified.
+ # This feature IS NOT supported on Windows.
+ #ssl.restart_on_cert_change.enabled: false
+
+ # Period to scan for changes on CA certificate files
+ #ssl.restart_on_cert_change.period: 1m
+
+ # Enable Kerberos support. Kerberos is automatically enabled if any Kerberos setting is set.
+ #kerberos.enabled: true
+
+ # Authentication type to use with Kerberos. Available options: keytab, password.
+ #kerberos.auth_type: password
+
+ # Path to the keytab file. It is used when auth_type is set to keytab.
+ #kerberos.keytab: /etc/elastic.keytab
+
+ # Path to the Kerberos configuration.
+ #kerberos.config_path: /etc/krb5.conf
+
+ # Name of the Kerberos user.
+ #kerberos.username: elastic
+
+ # Password of the Kerberos user. It is used when auth_type is set to password.
+ #kerberos.password: changeme
+
+ # Kerberos realm.
+ #kerberos.realm: ELASTIC
+
+
+# ------------------------------ Logstash Output -------------------------------
+#output.logstash:
+ # Boolean flag to enable or disable the output module.
+ #enabled: true
+
+ # The Logstash hosts
+ #hosts: ["localhost:5044"]
+
+ # Number of workers per Logstash host.
+ #worker: 1
+
+ # Set gzip compression level.
+ #compression_level: 3
+
+ # Configure escaping HTML symbols in strings.
+ #escape_html: false
+
+ # Optional maximum time to live for a connection to Logstash, after which the
+ # connection will be re-established. A value of `0s` (the default) will
+ # disable this feature.
+ #
+ # Not yet supported for async connections (i.e. with the "pipelining" option set)
+ #ttl: 30s
+
+ # Optionally load-balance events between Logstash hosts. Default is false.
+ #loadbalance: false
+
+ # Number of batches to be sent asynchronously to Logstash while processing
+ # new batches.
+ #pipelining: 2
+
+ # If enabled only a subset of events in a batch of events is transferred per
+ # transaction. The number of events to be sent increases up to `bulk_max_size`
+ # if no error is encountered.
+ #slow_start: false
+
+ # The number of seconds to wait before trying to reconnect to Logstash
+ # after a network error. After waiting backoff.init seconds, the Beat
+ # tries to reconnect. If the attempt fails, the backoff timer is increased
+ # exponentially up to backoff.max. After a successful connection, the backoff
+ # timer is reset. The default is 1s.
+ #backoff.init: 1s
+
+ # The maximum number of seconds to wait before attempting to connect to
+ # Logstash after a network error. The default is 60s.
+ #backoff.max: 60s
+
+ # Optional index name. The default index name is set to auditbeat
+ # in all lowercase.
+ #index: 'auditbeat'
+
+ # SOCKS5 proxy server URL
+ #proxy_url: socks5://user:password@socks5-server:2233
+
+ # Resolve names locally when using a proxy server. Defaults to false.
+ #proxy_use_local_resolver: false
+
+ # Use SSL settings for HTTPS.
+ #ssl.enabled: true
+
+ # Controls the verification of certificates. Valid values are:
+ # * full, which verifies that the provided certificate is signed by a trusted
+ # authority (CA) and also verifies that the server's hostname (or IP address)
+ # matches the names identified within the certificate.
+ # * strict, which verifies that the provided certificate is signed by a trusted
+ # authority (CA) and also verifies that the server's hostname (or IP address)
+ # matches the names identified within the certificate. If the Subject Alternative
+ # Name is empty, it returns an error.
+ # * certificate, which verifies that the provided certificate is signed by a
+ # trusted authority (CA), but does not perform any hostname verification.
+ # * none, which performs no verification of the server's certificate. This
+ # mode disables many of the security benefits of SSL/TLS and should only be used
+ # after very careful consideration. It is primarily intended as a temporary
+ # diagnostic mechanism when attempting to resolve TLS errors; its use in
+ # production environments is strongly discouraged.
+ # The default value is full.
+ #ssl.verification_mode: full
+
+ # List of supported/valid TLS versions. By default all TLS versions from 1.1
+ # up to 1.3 are enabled.
+ #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]
+
+ # List of root certificates for HTTPS server verifications
+ #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+ # Certificate for SSL client authentication
+ #ssl.certificate: "/etc/pki/client/cert.pem"
+
+ # Client certificate key
+ #ssl.key: "/etc/pki/client/cert.key"
+
+ # Optional passphrase for decrypting the certificate key.
+ #ssl.key_passphrase: ''
+
+ # Configure cipher suites to be used for SSL connections
+ #ssl.cipher_suites: []
+
+ # Configure curve types for ECDHE-based cipher suites
+ #ssl.curve_types: []
+
+ # Configure what types of renegotiation are supported. Valid options are
+ # never, once, and freely. Default is never.
+ #ssl.renegotiation: never
+
+ # Configure a pin that can be used to do extra validation of the verified certificate chain;
+ # this allows you to ensure that a specific certificate is used to validate the chain of trust.
+ #
+ # The pin is a base64 encoded string of the SHA-256 fingerprint.
+ #ssl.ca_sha256: ""
+
+ # A HEX-encoded root CA fingerprint. During the SSL handshake, if the
+ # fingerprint matches the root CA certificate, it will be added to
+ # the provided list of root CAs (`certificate_authorities`). If the
+ # list is empty or not defined, the matching certificate will be the
+ # only one in the list. Then the normal SSL validation happens.
+ #ssl.ca_trusted_fingerprint: ""
+
+ # Enables restarting auditbeat if any file listed by `key`,
+ # `certificate`, or `certificate_authorities` is modified.
+ # This feature IS NOT supported on Windows.
+ #ssl.restart_on_cert_change.enabled: false
+
+ # Period to scan for changes on CA certificate files
+ #ssl.restart_on_cert_change.period: 1m
+
+ # The number of times to retry publishing an event after a publishing failure.
+ # After the specified number of retries, the events are typically dropped.
+ # Some Beats, such as Filebeat and Winlogbeat, ignore the max_retries setting
+ # and retry until all events are published. Set max_retries to a value less
+ # than 0 to retry until all events are published. The default is 3.
+ #max_retries: 3
+
+ # The maximum number of events to bulk in a single Logstash request. The
+ # default is 2048.
+ #bulk_max_size: 2048
+
+ # The number of seconds to wait for responses from the Logstash server before
+ # timing out. The default is 30s.
+ #timeout: 30s
+
+# -------------------------------- Kafka Output --------------------------------
+#output.kafka:
+ # Boolean flag to enable or disable the output module.
+ #enabled: true
+
+ # The list of Kafka broker addresses from which to fetch the cluster metadata.
+ # The cluster metadata contain the actual Kafka brokers events are published
+ # to.
+ #hosts: ["localhost:9092"]
+
+ # The Kafka topic used for produced events. The setting can be a format string
+ # using any event field. To set the topic from document type use `%{[type]}`.
+ #topic: beats
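+
+ # For example (illustrative only; `fields.log_topic` is an assumed custom
+ # field that you would add via the `fields` setting), the topic can be
+ # taken from an event field using the format string syntax described above:
+ #topic: '%{[fields.log_topic]}'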
+
+ # The Kafka event key setting. Use format string to create a unique event key.
+ # By default no event key will be generated.
+ #key: ''
+
+ # The Kafka event partitioning strategy. Default hashing strategy is `hash`
+ # using the `output.kafka.key` setting or randomly distributes events if
+ # `output.kafka.key` is not configured.
+ #partition.hash:
+ # If enabled, events will only be published to partitions with reachable
+ # leaders. Default is false.
+ #reachable_only: false
+
+ # Configure alternative event field names used to compute the hash value.
+ # If empty `output.kafka.key` setting will be used.
+ # Default value is empty list.
+ #hash: []
+
+ # Authentication details. Password is required if username is set.
+ #username: ''
+ #password: ''
+
+ # SASL authentication mechanism used. Can be one of PLAIN, SCRAM-SHA-256 or SCRAM-SHA-512.
+ # Defaults to PLAIN when `username` and `password` are configured.
+ #sasl.mechanism: ''
+
+ # Kafka version Auditbeat is assumed to run against. Defaults to "1.0.0".
+ #version: '1.0.0'
+
+ # Configure JSON encoding
+ #codec.json:
+ # Pretty-print JSON event
+ #pretty: false
+
+ # Configure escaping HTML symbols in strings.
+ #escape_html: false
+
+ # Metadata update configuration. Metadata contains leader information
+ # used to decide which broker to use when publishing.
+ #metadata:
+ # Max metadata request retry attempts when cluster is in middle of leader
+ # election. Defaults to 3 retries.
+ #retry.max: 3
+
+ # Wait time between retries during leader elections. Default is 250ms.
+ #retry.backoff: 250ms
+
+ # Refresh metadata interval. Defaults to every 10 minutes.
+ #refresh_frequency: 10m
+
+ # Strategy for fetching the topics metadata from the broker. Default is false.
+ #full: false
+
+ # The number of times to retry publishing an event after a publishing failure.
+ # After the specified number of retries, events are typically dropped.
+ # Some Beats, such as Filebeat, ignore the max_retries setting and retry until
+ # all events are published. Set max_retries to a value less than 0 to retry
+ # until all events are published. The default is 3.
+ #max_retries: 3
+
+ # The number of seconds to wait before trying to republish to Kafka
+ # after a network error. After waiting backoff.init seconds, the Beat
+ # tries to republish. If the attempt fails, the backoff timer is increased
+ # exponentially up to backoff.max. After a successful publish, the backoff
+ # timer is reset. The default is 1s.
+ #backoff.init: 1s
+
+ # The maximum number of seconds to wait before attempting to republish to
+ # Kafka after a network error. The default is 60s.
+ #backoff.max: 60s
+
+ # The maximum number of events to bulk in a single Kafka request. The default
+ # is 2048.
+ #bulk_max_size: 2048
+
+ # Duration to wait before sending bulk Kafka request. 0 is no delay. The default
+ # is 0.
+ #bulk_flush_frequency: 0s
+
+ # The number of seconds to wait for responses from the Kafka brokers before
+ # timing out. The default is 30s.
+ #timeout: 30s
+
+ # The maximum duration a broker will wait for number of required ACKs. The
+ # default is 10s.
+ #broker_timeout: 10s
+
+ # The number of messages buffered for each Kafka broker. The default is 256.
+ #channel_buffer_size: 256
+
+ # The keep-alive period for an active network connection. If 0s, keep-alives
+ # are disabled. The default is 0 seconds.
+ #keep_alive: 0
+
+ # Sets the output compression codec. Must be one of none, snappy and gzip. The
+ # default is gzip.
+ #compression: gzip
+
+ # Set the compression level. Currently only gzip provides a compression level
+ # between 0 and 9. The default value is chosen by the compression algorithm.
+ #compression_level: 4
+
+ # The maximum permitted size of JSON-encoded messages. Bigger messages will be
+ # dropped. The default value is 1000000 (bytes). This value should be equal to
+ # or less than the broker's message.max.bytes.
+ #max_message_bytes: 1000000
+
+ # The ACK reliability level required from broker. 0=no response, 1=wait for
+ # local commit, -1=wait for all replicas to commit. The default is 1. Note:
+ # If set to 0, no ACKs are returned by Kafka. Messages might be lost silently
+ # on error.
+ #required_acks: 1
+
+ # The configurable ClientID used for logging, debugging, and auditing
+ # purposes. The default is "beats".
+ #client_id: beats
+
+ # Use SSL settings for HTTPS.
+ #ssl.enabled: true
+
+ # Controls the verification of certificates. Valid values are:
+ # * full, which verifies that the provided certificate is signed by a trusted
+ # authority (CA) and also verifies that the server's hostname (or IP address)
+ # matches the names identified within the certificate.
+ # * strict, which verifies that the provided certificate is signed by a trusted
+ # authority (CA) and also verifies that the server's hostname (or IP address)
+ # matches the names identified within the certificate. If the Subject Alternative
+ # Name is empty, it returns an error.
+ # * certificate, which verifies that the provided certificate is signed by a
+ # trusted authority (CA), but does not perform any hostname verification.
+ # * none, which performs no verification of the server's certificate. This
+ # mode disables many of the security benefits of SSL/TLS and should only be used
+ # after very careful consideration. It is primarily intended as a temporary
+ # diagnostic mechanism when attempting to resolve TLS errors; its use in
+ # production environments is strongly discouraged.
+ # The default value is full.
+ #ssl.verification_mode: full
+
+ # List of supported/valid TLS versions. By default all TLS versions from 1.1
+ # up to 1.3 are enabled.
+ #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]
+
+ # List of root certificates for HTTPS server verifications
+ #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+ # Certificate for SSL client authentication
+ #ssl.certificate: "/etc/pki/client/cert.pem"
+
+ # Client certificate key
+ #ssl.key: "/etc/pki/client/cert.key"
+
+ # Optional passphrase for decrypting the certificate key.
+ #ssl.key_passphrase: ''
+
+ # Configure cipher suites to be used for SSL connections
+ #ssl.cipher_suites: []
+
+ # Configure curve types for ECDHE-based cipher suites
+ #ssl.curve_types: []
+
+ # Configure what types of renegotiation are supported. Valid options are
+ # never, once, and freely. Default is never.
+ #ssl.renegotiation: never
+
+ # Configure a pin that can be used to do extra validation of the verified certificate chain;
+ # this allows you to ensure that a specific certificate is used to validate the chain of trust.
+ #
+ # The pin is a base64 encoded string of the SHA-256 fingerprint.
+ #ssl.ca_sha256: ""
+
+ # A HEX-encoded root CA fingerprint. During the SSL handshake, if the
+ # fingerprint matches the root CA certificate, it will be added to
+ # the provided list of root CAs (`certificate_authorities`). If the
+ # list is empty or not defined, the matching certificate will be the
+ # only one in the list. Then the normal SSL validation happens.
+ #ssl.ca_trusted_fingerprint: ""
+
+ # Enables restarting auditbeat if any file listed by `key`,
+ # `certificate`, or `certificate_authorities` is modified.
+ # This feature IS NOT supported on Windows.
+ #ssl.restart_on_cert_change.enabled: false
+
+ # Period to scan for changes on CA certificate files
+ #ssl.restart_on_cert_change.period: 1m
+
+ # Enable Kerberos support. Kerberos is automatically enabled if any Kerberos setting is set.
+ #kerberos.enabled: true
+
+ # Authentication type to use with Kerberos. Available options: keytab, password.
+ #kerberos.auth_type: password
+
+ # Path to the keytab file. It is used when auth_type is set to keytab.
+ #kerberos.keytab: /etc/security/keytabs/kafka.keytab
+
+ # Path to the Kerberos configuration.
+ #kerberos.config_path: /etc/krb5.conf
+
+ # The service name. The service principal name is constructed from
+ # service_name/hostname@realm.
+ #kerberos.service_name: kafka
+
+ # Name of the Kerberos user.
+ #kerberos.username: elastic
+
+ # Password of the Kerberos user. It is used when auth_type is set to password.
+ #kerberos.password: changeme
+
+ # Kerberos realm.
+ #kerberos.realm: ELASTIC
+
+ # Enables Kerberos FAST authentication. This may
+ # conflict with certain Active Directory configurations.
+ #kerberos.enable_krb5_fast: false
+
+# -------------------------------- Redis Output --------------------------------
+#output.redis:
+ # Boolean flag to enable or disable the output module.
+ #enabled: true
+
+ # Configure JSON encoding
+ #codec.json:
+ # Pretty-print JSON event
+ #pretty: false
+
+ # Configure escaping HTML symbols in strings.
+ #escape_html: false
+
+ # The list of Redis servers to connect to. If load-balancing is enabled, the
+ # events are distributed to the servers in the list. If one server becomes
+ # unreachable, the events are distributed to the reachable servers only.
+ # The hosts setting supports redis and rediss urls with custom password like
+ # redis://:password@localhost:6379.
+ #hosts: ["localhost:6379"]
+
+ # The name of the Redis list or channel the events are published to. The
+ # default is auditbeat.
+ #key: auditbeat
+
+ # The password to authenticate to Redis with. The default is no authentication.
+ #password:
+
+ # The Redis database number where the events are published. The default is 0.
+ #db: 0
+
+ # The Redis data type to use for publishing events. If the data type is list,
+ # the Redis RPUSH command is used. If the data type is channel, the Redis
+ # PUBLISH command is used. The default value is list.
+ #datatype: list
+
+ # The number of workers to use for each host configured to publish events to
+ # Redis. Use this setting along with the loadbalance option. For example, if
+ # you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
+ # host).
+ #worker: 1
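+
+ # For example (illustrative host names and values), two hosts with three
+ # workers each, which starts 6 workers in total as described above:
+ #hosts: ["redis-1:6379", "redis-2:6379"]
+ #worker: 3
+ #loadbalance: true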
+
+ # If set to true and multiple hosts or workers are configured, the output
+ # plugin load balances published events onto all Redis hosts. If set to false,
+ # the output plugin sends all events to only one host (determined at random)
+ # and will switch to another host if the currently selected one becomes
+ # unreachable. The default value is true.
+ #loadbalance: true
+
+ # The Redis connection timeout in seconds. The default is 5 seconds.
+ #timeout: 5s
+
+ # The number of times to retry publishing an event after a publishing failure.
+ # After the specified number of retries, the events are typically dropped.
+ # Some Beats, such as Filebeat, ignore the max_retries setting and retry until
+ # all events are published. Set max_retries to a value less than 0 to retry
+ # until all events are published. The default is 3.
+ #max_retries: 3
+
+ # The number of seconds to wait before trying to reconnect to Redis
+ # after a network error. After waiting backoff.init seconds, the Beat
+ # tries to reconnect. If the attempt fails, the backoff timer is increased
+ # exponentially up to backoff.max. After a successful connection, the backoff
+ # timer is reset. The default is 1s.
+ #backoff.init: 1s
+
+ # The maximum number of seconds to wait before attempting to connect to
+ # Redis after a network error. The default is 60s.
+ #backoff.max: 60s
+
+ # The maximum number of events to bulk in a single Redis request or pipeline.
+ # The default is 2048.
+ #bulk_max_size: 2048
+
+ # The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The
+ # value must be a URL with a scheme of socks5://.
+ #proxy_url:
+
+ # This option determines whether Redis hostnames are resolved locally when
+ # using a proxy. The default value is false, which means that name resolution
+ # occurs on the proxy server.
+ #proxy_use_local_resolver: false
+
+ # Use SSL settings for HTTPS.
+ #ssl.enabled: true
+
+ # Controls the verification of certificates. Valid values are:
+ # * full, which verifies that the provided certificate is signed by a trusted
+ # authority (CA) and also verifies that the server's hostname (or IP address)
+ # matches the names identified within the certificate.
+ # * strict, which verifies that the provided certificate is signed by a trusted
+ # authority (CA) and also verifies that the server's hostname (or IP address)
+ # matches the names identified within the certificate. If the Subject Alternative
+ # Name is empty, it returns an error.
+ # * certificate, which verifies that the provided certificate is signed by a
+ # trusted authority (CA), but does not perform any hostname verification.
+ # * none, which performs no verification of the server's certificate. This
+ # mode disables many of the security benefits of SSL/TLS and should only be used
+ # after very careful consideration. It is primarily intended as a temporary
+ # diagnostic mechanism when attempting to resolve TLS errors; its use in
+ # production environments is strongly discouraged.
+ # The default value is full.
+ #ssl.verification_mode: full
+
+ # List of supported/valid TLS versions. By default all TLS versions from 1.1
+ # up to 1.3 are enabled.
+ #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]
+
+ # List of root certificates for HTTPS server verifications
+ #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+ # Certificate for SSL client authentication
+ #ssl.certificate: "/etc/pki/client/cert.pem"
+
+ # Client certificate key
+ #ssl.key: "/etc/pki/client/cert.key"
+
+ # Optional passphrase for decrypting the certificate key.
+ #ssl.key_passphrase: ''
+
+ # Configure cipher suites to be used for SSL connections
+ #ssl.cipher_suites: []
+
+ # Configure curve types for ECDHE-based cipher suites
+ #ssl.curve_types: []
+
+ # Configure what types of renegotiation are supported. Valid options are
+ # never, once, and freely. Default is never.
+ #ssl.renegotiation: never
+
+ # Configure a pin that can be used to do extra validation of the verified certificate chain;
+ # this allows you to ensure that a specific certificate is used to validate the chain of trust.
+ #
+ # The pin is a base64 encoded string of the SHA-256 fingerprint.
+ #ssl.ca_sha256: ""
+
+ # A HEX-encoded root CA fingerprint. During the SSL handshake, if the
+ # fingerprint matches the root CA certificate, it will be added to
+ # the provided list of root CAs (`certificate_authorities`). If the
+ # list is empty or not defined, the matching certificate will be the
+ # only one in the list. Then the normal SSL validation happens.
+ #ssl.ca_trusted_fingerprint: ""
+
+
+# -------------------------------- File Output ---------------------------------
+#output.file:
+ # Boolean flag to enable or disable the output module.
+ #enabled: true
+
+ # Configure JSON encoding
+ #codec.json:
+ # Pretty-print JSON event
+ #pretty: false
+
+ # Configure escaping HTML symbols in strings.
+ #escape_html: false
+
+ # Path to the directory where to save the generated files. The option is
+ # mandatory.
+ #path: "/tmp/auditbeat"
+
+ # Name of the generated files. The default is `auditbeat` and it generates
+ # files: `auditbeat-{datetime}.ndjson`, `auditbeat-{datetime}-1.ndjson`, etc.
+ #filename: auditbeat
+
+ # Maximum size in kilobytes of each file. When this size is reached, and on
+ # every Auditbeat restart, the files are rotated. The default value is 10240
+ # kB.
+ #rotate_every_kb: 10000
+
+ # Maximum number of files under path. When this number of files is reached,
+ # the oldest file is deleted and the rest are shifted from last to first. The
+ # default is 7 files.
+ #number_of_files: 7
+
+ # Permissions to use for file creation. The default is 0600.
+ #permissions: 0600
+
+ # Configure automatic file rotation on every startup. The default is true.
+ #rotate_on_startup: true
+
+# ------------------------------- Console Output -------------------------------
+#output.console:
+ # Boolean flag to enable or disable the output module.
+ #enabled: true
+
+ # Configure JSON encoding
+ #codec.json:
+ # Pretty-print JSON event
+ #pretty: false
+
+ # Configure escaping HTML symbols in strings.
+ #escape_html: false
+
+# =================================== Paths ====================================
+
+# The home path for the Auditbeat installation. This is the default base path
+# for all other path settings and for miscellaneous files that come with the
+# distribution (for example, the sample dashboards).
+# If not set by a CLI flag or in the configuration file, the default for the
+# home path is the location of the binary.
+#path.home:
+
+# The configuration path for the Auditbeat installation. This is the default
+# base path for configuration files, including the main YAML configuration file
+# and the Elasticsearch template file. If not set by a CLI flag or in the
+# configuration file, the default for the configuration path is the home path.
+#path.config: ${path.home}
+
+# The data path for the Auditbeat installation. This is the default base path
+# for all the files in which Auditbeat needs to store its data. If not set by a
+# CLI flag or in the configuration file, the default for the data path is a data
+# subdirectory inside the home path.
+#path.data: ${path.home}/data
+
+# The logs path for an Auditbeat installation. This is the default location for
+# the Beat's log files. If not set by a CLI flag or in the configuration file,
+# the default for the logs path is a logs subdirectory inside the home path.
+#path.logs: ${path.home}/logs
+
+# ================================== Keystore ==================================
+
+# Location of the Keystore containing the keys and their sensitive values.
+#keystore.path: "${path.config}/beats.keystore"
+
+# ================================= Dashboards =================================
+
+# These settings control loading the sample dashboards to the Kibana index. Loading
+# the dashboards is disabled by default and can be enabled either by setting the
+# options here or by using the `-setup` CLI flag or the `setup` command.
+#setup.dashboards.enabled: false
+
+# The directory from where to read the dashboards. The default is the `kibana`
+# folder in the home path.
+#setup.dashboards.directory: ${path.home}/kibana
+
+# The URL from where to download the dashboard archive. It is used instead of
+# the directory if it has a value.
+#setup.dashboards.url:
+
+# The file archive (zip file) from where to read the dashboards. It is used instead
+# of the directory when it has a value.
+#setup.dashboards.file:
+
+# In case the archive contains the dashboards from multiple Beats, this lets you
+# select which one to load. You can load all the dashboards in the archive by
+# setting this to the empty string.
+#setup.dashboards.beat: auditbeat
+
+# The name of the Kibana index to use for setting the configuration. Default is ".kibana"
+#setup.dashboards.kibana_index: .kibana
+
+# The Elasticsearch index name. This overwrites the index name defined in the
+# dashboards and index pattern. Example: testbeat-*
+#setup.dashboards.index:
+
+# Always use the Kibana API for loading the dashboards instead of autodetecting
+# how to install the dashboards by first querying Elasticsearch.
+#setup.dashboards.always_kibana: false
+
+# If true and Kibana is not reachable at the time when dashboards are loaded,
+# it will retry to reconnect to Kibana instead of exiting with an error.
+#setup.dashboards.retry.enabled: false
+
+# Duration interval between Kibana connection retries.
+#setup.dashboards.retry.interval: 1s
+
+# Maximum number of retries before exiting with an error, 0 for unlimited retrying.
+#setup.dashboards.retry.maximum: 0
+
+# ================================== Template ==================================
+
+# A template is used to set the mapping in Elasticsearch
+# By default template loading is enabled and the template is loaded.
+# These settings can be adjusted to load your own template or overwrite existing ones.
+
+# Set to false to disable template loading.
+#setup.template.enabled: true
+
+# Template name. By default the template name is "auditbeat-%{[agent.version]}"
+# The template name and pattern have to be set in case the Elasticsearch index pattern is modified.
+#setup.template.name: "auditbeat-%{[agent.version]}"
+
+# Template pattern. By default the template pattern is "auditbeat-%{[agent.version]}" to apply to the default index settings.
+# The template name and pattern have to be set in case the Elasticsearch index pattern is modified.
+#setup.template.pattern: "auditbeat-%{[agent.version]}"
+
+# Path to fields.yml file to generate the template
+#setup.template.fields: "${path.config}/fields.yml"
+
+# A list of fields to be added to the template and Kibana index pattern. Also
+# specify setup.template.overwrite: true to overwrite the existing template.
+#setup.template.append_fields:
+#- name: field_name
+# type: field_type
+
+# Enable JSON template loading. If this is enabled, the fields.yml is ignored.
+#setup.template.json.enabled: false
+
+# Path to the JSON template file
+#setup.template.json.path: "${path.config}/template.json"
+
+# Name under which the template is stored in Elasticsearch
+#setup.template.json.name: ""
+
+# Set this option if the JSON template is a data stream.
+#setup.template.json.data_stream: false
+
+# Overwrite existing template
+# Do not enable this option for more than one instance of auditbeat as it might
+# overload your Elasticsearch with too many update requests.
+#setup.template.overwrite: false
+
+# Elasticsearch template settings
+setup.template.settings:
+
+ # A dictionary of settings to place into the settings.index dictionary
+ # of the Elasticsearch template. For more details, please check
+ # https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html
+ #index:
+ #number_of_shards: 1
+ #codec: best_compression
+
+ # A dictionary of settings for the _source field. For more details, please check
+ # https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-source-field.html
+ #_source:
+ #enabled: false
+
+# ====================== Index Lifecycle Management (ILM) ======================
+
+# Configure index lifecycle management (ILM) to manage the backing indices
+# of your data streams.
+
+# Enable ILM support. Valid values are true, or false.
+#setup.ilm.enabled: true
+
+# Set the lifecycle policy name. The default policy name is
+# 'auditbeat'.
+#setup.ilm.policy_name: "mypolicy"
+
+# The path to a JSON file that contains a lifecycle policy configuration. Used
+# to load your own lifecycle policy.
+#setup.ilm.policy_file:
+
+# Disable the check for an existing lifecycle policy. The default is true.
+# If you disable this check, set setup.ilm.overwrite: true so the lifecycle
+# policy can be installed.
+#setup.ilm.check_exists: true
+
+# Overwrite the lifecycle policy at startup. The default is false.
+#setup.ilm.overwrite: false
+
+# ======================== Data Stream Lifecycle (DSL) =========================
+
+# Configure Data Stream Lifecycle to manage data streams while connected to Serverless elasticsearch.
+# These settings are mutually exclusive with ILM settings which are not supported in Serverless projects.
+
+# Enable DSL support. Valid values are true, or false.
+#setup.dsl.enabled: true
+
+# Set the lifecycle policy name or pattern. For DSL, this name must match the data stream that the lifecycle is for.
+# The default data stream pattern is "auditbeat-%{[agent.version]}".
+# The template string `%{[agent.version]}` will resolve to the current stack version.
+# The other possible template value is `%{[beat.name]}`.
+#setup.dsl.data_stream_pattern: "auditbeat-%{[agent.version]}"
+
+# The path to a JSON file that contains a lifecycle policy configuration. Used
+# to load your own lifecycle policy.
+# If no custom policy is specified, a default policy with a lifetime of 7 days will be created.
+#setup.dsl.policy_file:
+
+# Disable the check for an existing lifecycle policy. The default is true. If
+# you disable this check, set setup.dsl.overwrite: true so the lifecycle policy
+# can be installed.
+#setup.dsl.check_exists: true
+
+# Overwrite the lifecycle policy at startup. The default is false.
+#setup.dsl.overwrite: false
+
+# =================================== Kibana ===================================
+
+# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
+# This requires a Kibana endpoint configuration.
+setup.kibana:
+
+ # Kibana Host
+ # Scheme and port can be left out and will be set to the default (http and 5601)
+ # In case you specify an additional path, the scheme is required: http://localhost:5601/path
+ # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
+ #host: "localhost:5601"
+
+ # Optional protocol and basic auth credentials.
+ #protocol: "https"
+ #username: "elastic"
+ #password: "changeme"
+
+ # Optional HTTP path
+ #path: ""
+
+ # Optional Kibana space ID.
+ #space.id: ""
+
+ # Custom HTTP headers to add to each request
+ #headers:
+ # X-My-Header: Contents of the header
+
+ # Use SSL settings for HTTPS.
+ #ssl.enabled: true
+
+ # Controls the verification of certificates. Valid values are:
+ # * full, which verifies that the provided certificate is signed by a trusted
+ # authority (CA) and also verifies that the server's hostname (or IP address)
+ # matches the names identified within the certificate.
+ # * strict, which verifies that the provided certificate is signed by a trusted
+ # authority (CA) and also verifies that the server's hostname (or IP address)
+ # matches the names identified within the certificate. If the Subject Alternative
+ # Name is empty, it returns an error.
+ # * certificate, which verifies that the provided certificate is signed by a
+ # trusted authority (CA), but does not perform any hostname verification.
+ # * none, which performs no verification of the server's certificate. This
+ # mode disables many of the security benefits of SSL/TLS and should only be used
+ # after very careful consideration. It is primarily intended as a temporary
+ # diagnostic mechanism when attempting to resolve TLS errors; its use in
+ # production environments is strongly discouraged.
+ # The default value is full.
+ #ssl.verification_mode: full
+
+ # List of supported/valid TLS versions. By default all TLS versions from 1.1
+ # up to 1.3 are enabled.
+ #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]
+
+ # List of root certificates for HTTPS server verifications
+ #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+ # Certificate for SSL client authentication
+ #ssl.certificate: "/etc/pki/client/cert.pem"
+
+ # Client certificate key
+ #ssl.key: "/etc/pki/client/cert.key"
+
+ # Optional passphrase for decrypting the certificate key.
+ #ssl.key_passphrase: ''
+
+ # Configure cipher suites to be used for SSL connections
+ #ssl.cipher_suites: []
+
+ # Configure curve types for ECDHE-based cipher suites
+ #ssl.curve_types: []
+
+ # Configure what types of renegotiation are supported. Valid options are
+ # never, once, and freely. Default is never.
+ #ssl.renegotiation: never
+
+ # Configure a pin that can be used to do extra validation of the verified certificate chain;
+ # this allows you to ensure that a specific certificate is used to validate the chain of trust.
+ #
+ # The pin is a base64 encoded string of the SHA-256 fingerprint.
+ #ssl.ca_sha256: ""
+
+ # A HEX-encoded root CA fingerprint. During the SSL handshake, if the
+ # fingerprint matches the root CA certificate, it will be added to
+ # the provided list of root CAs (`certificate_authorities`). If the
+ # list is empty or not defined, the matching certificate will be the
+ # only one in the list. Then the normal SSL validation happens.
+ #ssl.ca_trusted_fingerprint: ""
+
+
+# ================================== Logging ===================================
+
+# There are four options for the log output: file, stderr, syslog, eventlog
+# The file output is the default.
+
+# Sets log level. The default log level is info.
+# Available log levels are: error, warning, info, debug
+#logging.level: info
+
+# Enable debug output for selected components. To enable all selectors use ["*"]
+# Other available selectors are "beat", "publisher", "service"
+# Multiple selectors can be chained.
+#logging.selectors: [ ]
+
+# Send all logging output to stderr. The default is false.
+#logging.to_stderr: false
+
+# Send all logging output to syslog. The default is false.
+#logging.to_syslog: false
+
+# Send all logging output to Windows Event Logs. The default is false.
+#logging.to_eventlog: false
+
+# If enabled, Auditbeat periodically logs its internal metrics that have changed
+# in the last period. For each metric that changed, the delta from the value at
+# the beginning of the period is logged. Also, the total values for
+# all non-zero internal metrics are logged on shutdown. The default is true.
+#logging.metrics.enabled: true
+
+# The period after which to log the internal metrics. The default is 30s.
+#logging.metrics.period: 30s
+
+# A list of metrics namespaces to report in the logs. Defaults to [stats].
+# `stats` contains general Beat metrics. `dataset` may be present in some
+# Beats and contains module or input metrics.
+#logging.metrics.namespaces: [stats]
+
+# Logging to rotating files. Set logging.to_files to false to disable logging to
+# files.
+logging.to_files: true
+logging.files:
+ # Configure the path where the logs are written. The default is the logs directory
+ # under the home path (the binary location).
+ #path: /var/log/auditbeat
+
+ # The name of the files where the logs are written to.
+ #name: auditbeat
+
+ # Configure log file size limit. If the limit is reached, the log file will be
+ # automatically rotated.
+ #rotateeverybytes: 10485760 # = 10MB
+
+ # Number of rotated log files to keep. The oldest files will be deleted first.
+ #keepfiles: 7
+
+ # The permissions mask to apply when rotating log files. The default value is 0600.
+ # Must be a valid Unix-style file permissions mask expressed in octal notation.
+ #permissions: 0600
+
+ # Enable log file rotation on time intervals in addition to the size-based rotation.
+ # Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h
+ # are boundary-aligned with minutes, hours, days, weeks, months, and years as
+ # reported by the local system clock. All other intervals are calculated from the
+ # Unix epoch. Defaults to disabled.
+ #interval: 0
+
+ # Rotate existing logs on startup rather than appending them to the existing
+ # file. Defaults to true.
+ # rotateonstartup: true
+
+#=============================== Events Logging ===============================
+# Some outputs will log raw events on errors, such as indexing errors in the
+# Elasticsearch output. To prevent logging raw events (which may contain
+# sensitive information) together with other log messages, a different log
+# file is used, containing only log entries with raw events. It uses the
+# same level, selectors, and all other configuration from the default
+# logger, but it has its own file configuration.
+#
+# Having a different log file for raw events also prevents event data
+# from drowning out the regular log files.
+#
+# IMPORTANT: No matter the default logger output configuration, raw events
+# will **always** be logged to a file configured by `logging.event_data.files`.
+
+# logging.event_data:
+# Logging to rotating files. Set logging.event_data.to_files to false to disable logging to
+# files.
+#logging.event_data.to_files: true
+#logging.event_data:
+ # Configure the path where the logs are written. The default is the logs directory
+ # under the home path (the binary location).
+ #path: /var/log/auditbeat
+
+ # The name of the files where the logs are written to.
+ #name: auditbeat-events-data
+
+ # Configure log file size limit. If the limit is reached, the log file will be
+ # automatically rotated.
+ #rotateeverybytes: 5242880 # = 5MB
+
+ # Number of rotated log files to keep. The oldest files will be deleted first.
+ #keepfiles: 2
+
+ # The permissions mask to apply when rotating log files. The default value is 0600.
+ # Must be a valid Unix-style file permissions mask expressed in octal notation.
+ #permissions: 0600
+
+ # Enable log file rotation on time intervals in addition to the size-based rotation.
+ # Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h
+ # are boundary-aligned with minutes, hours, days, weeks, months, and years as
+ # reported by the local system clock. All other intervals are calculated from the
+ # Unix epoch. Defaults to disabled.
+ #interval: 0
+
+ # Rotate existing logs on startup rather than appending them to the existing
+ # file. Defaults to false.
+ # rotateonstartup: false
+
+# ============================= X-Pack Monitoring ==============================
+# Auditbeat can export internal metrics to a central Elasticsearch monitoring
+# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
+# reporting is disabled by default.
+
+# Set to true to enable the monitoring reporter.
+#monitoring.enabled: false
+
+# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
+# Auditbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
+# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
+#monitoring.cluster_uuid:
+
+# Uncomment to send the metrics to Elasticsearch. Most settings from the
+# Elasticsearch output are accepted here as well.
+# Note that the settings should point to your Elasticsearch *monitoring* cluster.
+# Any setting that is not set is automatically inherited from the Elasticsearch
+# output configuration, so if you have the Elasticsearch output configured such
+# that it is pointing to your Elasticsearch monitoring cluster, you can simply
+# uncomment the following line.
+#monitoring.elasticsearch:
+
+ # Array of hosts to connect to.
+ # Scheme and port can be left out and will be set to the default (http and 9200)
+ # In case you specify an additional path, the scheme is required: http://localhost:9200/path
+ # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
+ #hosts: ["localhost:9200"]
+
+ # Set gzip compression level.
+ #compression_level: 0
+
+ # Protocol - either `http` (default) or `https`.
+ #protocol: "https"
+
+ # Authentication credentials - either API key or username/password.
+ #api_key: "id:api_key"
+ #username: "beats_system"
+ #password: "changeme"
+
+ # Dictionary of HTTP parameters to pass within the URL with index operations.
+ #parameters:
+ #param1: value1
+ #param2: value2
+
+ # Custom HTTP headers to add to each request
+ #headers:
+ # X-My-Header: Contents of the header
+
+ # Proxy server url
+ #proxy_url: http://proxy:3128
+
+ # The number of times a particular Elasticsearch index operation is attempted. If
+ # the indexing operation doesn't succeed after this many retries, the events are
+ # dropped. The default is 3.
+ #max_retries: 3
+
+ # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
+ # The default is 50.
+ #bulk_max_size: 50
+
+ # The number of seconds to wait before trying to reconnect to Elasticsearch
+ # after a network error. After waiting backoff.init seconds, the Beat
+ # tries to reconnect. If the attempt fails, the backoff timer is increased
+ # exponentially up to backoff.max. After a successful connection, the backoff
+ # timer is reset. The default is 1s.
+ #backoff.init: 1s
+
+ # The maximum number of seconds to wait before attempting to connect to
+ # Elasticsearch after a network error. The default is 60s.
+ #backoff.max: 60s
+
+ # Configure HTTP request timeout before failing a request to Elasticsearch.
+ #timeout: 90
+
+ # Use SSL settings for HTTPS.
+ #ssl.enabled: true
+
+ # Controls the verification of certificates. Valid values are:
+ # * full, which verifies that the provided certificate is signed by a trusted
+ # authority (CA) and also verifies that the server's hostname (or IP address)
+ # matches the names identified within the certificate.
+ # * strict, which verifies that the provided certificate is signed by a trusted
+ # authority (CA) and also verifies that the server's hostname (or IP address)
+ # matches the names identified within the certificate. If the Subject Alternative
+ # Name is empty, it returns an error.
+ # * certificate, which verifies that the provided certificate is signed by a
+ # trusted authority (CA), but does not perform any hostname verification.
+ # * none, which performs no verification of the server's certificate. This
+ # mode disables many of the security benefits of SSL/TLS and should only be used
+ # after very careful consideration. It is primarily intended as a temporary
+ # diagnostic mechanism when attempting to resolve TLS errors; its use in
+ # production environments is strongly discouraged.
+ # The default value is full.
+ #ssl.verification_mode: full
+
+ # List of supported/valid TLS versions. By default all TLS versions from 1.1
+ # up to 1.3 are enabled.
+ #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]
+
+ # List of root certificates for HTTPS server verifications
+ #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+ # Certificate for SSL client authentication
+ #ssl.certificate: "/etc/pki/client/cert.pem"
+
+ # Client certificate key
+ #ssl.key: "/etc/pki/client/cert.key"
+
+ # Optional passphrase for decrypting the certificate key.
+ #ssl.key_passphrase: ''
+
+ # Configure cipher suites to be used for SSL connections
+ #ssl.cipher_suites: []
+
+ # Configure curve types for ECDHE-based cipher suites
+ #ssl.curve_types: []
+
+ # Configure what types of renegotiation are supported. Valid options are
+ # never, once, and freely. Default is never.
+ #ssl.renegotiation: never
+
+ # Configure a pin that can be used to do extra validation of the verified certificate chain;
+ # this allows you to ensure that a specific certificate is used to validate the chain of trust.
+ #
+ # The pin is a base64 encoded string of the SHA-256 fingerprint.
+ #ssl.ca_sha256: ""
+
+ # A HEX-encoded root CA fingerprint. During the SSL handshake, if the
+ # fingerprint matches the root CA certificate, it will be added to
+ # the provided list of root CAs (`certificate_authorities`). If the
+ # list is empty or not defined, the matching certificate will be the
+ # only one in the list. Then the normal SSL validation happens.
+ #ssl.ca_trusted_fingerprint: ""
+
+ # Enable Kerberos support. Kerberos is automatically enabled if any Kerberos setting is set.
+ #kerberos.enabled: true
+
+ # Authentication type to use with Kerberos. Available options: keytab, password.
+ #kerberos.auth_type: password
+
+ # Path to the keytab file. It is used when auth_type is set to keytab.
+ #kerberos.keytab: /etc/elastic.keytab
+
+ # Path to the Kerberos configuration.
+ #kerberos.config_path: /etc/krb5.conf
+
+ # Name of the Kerberos user.
+ #kerberos.username: elastic
+
+ # Password of the Kerberos user. It is used when auth_type is set to password.
+ #kerberos.password: changeme
+
+ # Kerberos realm.
+ #kerberos.realm: ELASTIC
+
+ #metrics.period: 10s
+ #state.period: 1m
+
+# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts`
+# setting. You can find the value for this setting in the Elastic Cloud web UI.
+#monitoring.cloud.id:
+
+# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username`
+# and `monitoring.elasticsearch.password` settings. The format is `<user>:<pass>`.
+#monitoring.cloud.auth:
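+#
+# For example (illustrative credentials only), using the user:password format
+# described above:
+#monitoring.cloud.auth: "elastic:changeme"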
+
+# =============================== HTTP Endpoint ================================
+
+# Each beat can expose internal metrics through an HTTP endpoint. For security
+# reasons the endpoint is disabled by default. This feature is currently experimental.
+# Stats can be accessed through http://localhost:5066/stats. For pretty JSON output
+# append ?pretty to the URL.
+
+# Defines if the HTTP endpoint is enabled.
+#http.enabled: false
+
+# The HTTP endpoint will bind to this hostname, IP address, unix socket, or named pipe.
+# When using IP addresses, it is recommended to only use localhost.
+#http.host: localhost
+
+# Port on which the HTTP endpoint will bind. Default is 5066.
+#http.port: 5066
+
+# Define which user should be owning the named pipe.
+#http.named_pipe.user:
+
+# Define which permissions should be applied to the named pipe, use the Security
+# Descriptor Definition Language (SDDL) to define the permission. This option cannot be used with
+# `http.user`.
+#http.named_pipe.security_descriptor:
+
+# Defines if the HTTP pprof endpoints are enabled.
+# It is recommended that this is only enabled on localhost as these endpoints may leak data.
+#http.pprof.enabled: false
+
+# Controls the fraction of goroutine blocking events that are reported in the
+# blocking profile.
+#http.pprof.block_profile_rate: 0
+
+# Controls the fraction of memory allocations that are recorded and reported in
+# the memory profile.
+#http.pprof.mem_profile_rate: 524288
+
+# Controls the fraction of mutex contention events that are reported in the
+# mutex profile.
+#http.pprof.mutex_profile_rate: 0
+
+# ============================== Process Security ==============================
+
+# Enable or disable seccomp system call filtering on Linux. Default is enabled.
+#seccomp.enabled: true
+
+# ============================== Instrumentation ===============================
+
+# Instrumentation support for the auditbeat.
+#instrumentation:
+ # Set to true to enable instrumentation of auditbeat.
+ #enabled: false
+
+ # Environment in which auditbeat is running (e.g., staging, production, etc.)
+ #environment: ""
+
+ # APM Server hosts to report instrumentation results to.
+ #hosts:
+ # - http://localhost:8200
+
+ # API Key for the APM Server(s).
+ # If api_key is set then secret_token will be ignored.
+ #api_key:
+
+ # Secret token for the APM Server(s).
+ #secret_token:
+
+ # Enable profiling of the server, recording profile samples as events.
+ #
+ # This feature is experimental.
+ #profiling:
+ #cpu:
+ # Set to true to enable CPU profiling.
+ #enabled: false
+ #interval: 60s
+ #duration: 10s
+ #heap:
+ # Set to true to enable heap profiling.
+ #enabled: false
+ #interval: 60s
+
+# ================================= Migration ==================================
+
+# This allows enabling 6.7 migration aliases.
+#migration.6_to_7.enabled: false
+
+# =============================== Feature Flags ================================
+
+# Enable and configure feature flags.
+#features:
+# fqdn:
+# enabled: true
+```
+
diff --git a/docs/reference/auditbeat/auditbeat-starting.md b/docs/reference/auditbeat/auditbeat-starting.md
new file mode 100644
index 000000000000..2ff51fec383d
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-starting.md
@@ -0,0 +1,70 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-starting.html
+---
+
+# Start Auditbeat [auditbeat-starting]
+
+Before starting Auditbeat:
+
+* Follow the steps in [Quick start: installation and configuration](/reference/auditbeat/auditbeat-installation-configuration.md) to install, configure, and set up the Auditbeat environment.
+* Make sure {{kib}} and {{es}} are running.
+* Make sure the user specified in `auditbeat.yml` is [authorized to publish events](/reference/auditbeat/privileges-to-publish-events.md).
+
+To start Auditbeat, run:
+
+:::::::{tab-set}
+
+::::::{tab-item} DEB
+```sh
+sudo service auditbeat start
+```
+
+::::{note}
+If you use an `init.d` script to start Auditbeat, you can’t specify command line flags (see [Command reference](/reference/auditbeat/command-line-options.md)). To specify flags, start Auditbeat in the foreground.
+::::
+
+
+Also see [Auditbeat and systemd](/reference/auditbeat/running-with-systemd.md).
+::::::
+
+::::::{tab-item} RPM
+```sh
+sudo service auditbeat start
+```
+
+::::{note}
+If you use an `init.d` script to start Auditbeat, you can’t specify command line flags (see [Command reference](/reference/auditbeat/command-line-options.md)). To specify flags, start Auditbeat in the foreground.
+::::
+
+
+Also see [Auditbeat and systemd](/reference/auditbeat/running-with-systemd.md).
+::::::
+
+::::::{tab-item} MacOS
+```sh
+sudo chown root auditbeat.yml <1>
+sudo ./auditbeat -e
+```
+
+1. You’ll be running Auditbeat as root, so you need to change ownership of the configuration file, or run Auditbeat with `--strict.perms=false` specified. See [Config File Ownership and Permissions](/reference/libbeat/config-file-permissions.md).
+::::::
+
+::::::{tab-item} Linux
+```sh
+sudo chown root auditbeat.yml <1>
+sudo ./auditbeat -e
+```
+
+1. You’ll be running Auditbeat as root, so you need to change ownership of the configuration file, or run Auditbeat with `--strict.perms=false` specified. See [Config File Ownership and Permissions](/reference/libbeat/config-file-permissions.md).
+::::::
+
+::::::{tab-item} Windows
+```sh
+PS C:\Program Files\auditbeat> Start-Service auditbeat
+```
+
+By default, Windows log files are stored in `C:\ProgramData\auditbeat\Logs`.
+::::::
+
+:::::::
diff --git a/docs/reference/auditbeat/auditbeat-template.md b/docs/reference/auditbeat/auditbeat-template.md
new file mode 100644
index 000000000000..78188f074945
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-template.md
@@ -0,0 +1,228 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-template.html
+---
+
+# Load the Elasticsearch index template [auditbeat-template]
+
+{{es}} uses [index templates](docs-content://manage-data/data-store/templates.md) to define:
+
+* Settings that control the behavior of your data stream and backing indices. The settings include the lifecycle policy used to manage backing indices as they grow and age.
+* Mappings that determine how fields are analyzed. Each mapping sets the [{{es}} datatype](elasticsearch://reference/elasticsearch/mapping-reference/field-data-types.md) to use for a specific data field.
+
+The recommended index template file for Auditbeat is installed by the Auditbeat packages. If you accept the default configuration in the `auditbeat.yml` config file, Auditbeat loads the template automatically after successfully connecting to {{es}}. If the template already exists, it’s not overwritten unless you configure Auditbeat to do so.
+
+::::{note}
+A connection to {{es}} is required to load the index template. If the output is not {{es}} (or {{ess}}), you must [load the template manually](#load-template-manually).
+::::
+
+
+This page shows how to change the default template loading behavior to:
+
+* [Load your own index template](#load-custom-template)
+* [Overwrite an existing index template](#overwrite-template)
+* [Disable automatic index template loading](#disable-template-loading)
+* [Load the index template manually](#load-template-manually)
+
+For a full list of template setup options, see [Elasticsearch index template](/reference/auditbeat/configuration-template.md).
+
+
+## Load your own index template [load-custom-template]
+
+To load your own index template, set the following options:
+
+```yaml
+setup.template.name: "your_template_name"
+setup.template.fields: "path/to/fields.yml"
+```
+
+If the template already exists, it’s not overwritten unless you configure Auditbeat to do so.
+
+You can load templates for both data streams and indices.
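+
+If you modify the index pattern that events are written to, the template pattern typically has to be updated to match the custom name as well. A minimal sketch, using placeholder names and paths, that combines the options above with `setup.template.pattern`:
+
+```yaml
+setup.template.name: "your_template_name"
+setup.template.pattern: "your_template_name-*"
+setup.template.fields: "path/to/fields.yml"
+```
+
+If the custom template needs to replace one that is already loaded, also set `setup.template.overwrite: true`, as described in the next section.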
+
+
+## Overwrite an existing index template [overwrite-template]
+
+::::{warning}
+Do not enable this option for more than one instance of Auditbeat. If you start multiple instances at the same time, it can overload your {{es}} with too many template update requests.
+::::
+
+
+To overwrite a template that’s already loaded into {{es}}, set:
+
+```yaml
+setup.template.overwrite: true
+```
+
+
+## Disable automatic index template loading [disable-template-loading]
+
+You may want to disable automatic template loading if you’re using an output other than {{es}} and need to load the template manually. To disable automatic template loading, set:
+
+```yaml
+setup.template.enabled: false
+```
+
+If you disable automatic template loading, you must load the index template manually.
+
+
+## Load the index template manually [load-template-manually]
+
+To load the index template manually, run the [`setup`](/reference/auditbeat/command-line-options.md#setup-command) command. A connection to {{es}} is required. If another output is enabled, you need to temporarily disable that output and enable {{es}} by using the `-E` option. The examples here assume that Logstash output is enabled. You can omit the `-E` flags if {{es}} output is already enabled.
+
+If you are connecting to a secured {{es}} cluster, make sure you’ve configured credentials as described in the [Quick start: installation and configuration](/reference/auditbeat/auditbeat-installation-configuration.md).
+
+If the host running Auditbeat does not have direct connectivity to {{es}}, see [Load the index template manually (alternate method)](#load-template-manually-alternate).
+
+To load the template, use the appropriate command for your system.
+
+**deb and rpm:**
+
+```sh
+auditbeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
+```
+
+**mac:**
+
+```sh
+./auditbeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
+```
+
+**linux:**
+
+```sh
+./auditbeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
+```
+
+**docker:**
+
+```sh
+docker run --rm docker.elastic.co/beats/auditbeat:9.0.0-beta1 setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
+```
+
+**win:**
+
+Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**).
+
+From the PowerShell prompt, change to the directory where you installed Auditbeat, and run:
+
+```sh
+PS > .\auditbeat.exe setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
+```
+
+
+### Force Kibana to look at newest documents [force-kibana-new]
+
+If you’ve already used Auditbeat to index data into {{es}}, the index may contain old documents. After you load the index template, you can delete the old documents from `auditbeat-*` to force Kibana to look at the newest documents.
+
+Use this command:
+
+**deb and rpm:**
+
+```sh
+curl -XDELETE 'http://localhost:9200/auditbeat-*'
+```
+
+**mac:**
+
+```sh
+curl -XDELETE 'http://localhost:9200/auditbeat-*'
+```
+
+**linux:**
+
+```sh
+curl -XDELETE 'http://localhost:9200/auditbeat-*'
+```
+
+**win:**
+
+```sh
+PS > Invoke-RestMethod -Method Delete "http://localhost:9200/auditbeat-*"
+```
+
+This command deletes all indices that match the pattern `auditbeat-*`. Before running this command, make sure you want to delete all indices that match the pattern.
+
+
+## Load the index template manually (alternate method) [load-template-manually-alternate]
+
+If the host running Auditbeat does not have direct connectivity to {{es}}, you can export the index template to a file, move it to a machine that does have connectivity, and then install the template manually.
+
+To export the index template, run:
+
+**deb and rpm:**
+
+```sh
+auditbeat export template > auditbeat.template.json
+```
+
+**mac:**
+
+```sh
+./auditbeat export template > auditbeat.template.json
+```
+
+**linux:**
+
+```sh
+./auditbeat export template > auditbeat.template.json
+```
+
+**win:**
+
+```sh
+PS > .\auditbeat.exe export template --es.version 9.0.0-beta1 | Out-File -Encoding UTF8 auditbeat.template.json
+```
+
+To install the template, run:
+
+**deb and rpm:**
+
+```sh
+curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_index_template/auditbeat-9.0.0-beta1 -d@auditbeat.template.json
+```
+
+**mac:**
+
+```sh
+curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_index_template/auditbeat-9.0.0-beta1 -d@auditbeat.template.json
+```
+
+**linux:**
+
+```sh
+curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_index_template/auditbeat-9.0.0-beta1 -d@auditbeat.template.json
+```
+
+**win:**
+
+```sh
+PS > Invoke-RestMethod -Method Put -ContentType "application/json" -InFile auditbeat.template.json -Uri http://localhost:9200/_index_template/auditbeat-9.0.0-beta1
+```
+
+Once you have loaded the index template, create the data stream as well. If you do not, you must grant the publisher user the `manage` privilege on the `auditbeat-9.0.0-beta1` index.
+
+**deb and rpm:**
+
+```sh
+curl -XPUT http://localhost:9200/_data_stream/auditbeat-9.0.0-beta1
+```
+
+**mac:**
+
+```sh
+curl -XPUT http://localhost:9200/_data_stream/auditbeat-9.0.0-beta1
+```
+
+**linux:**
+
+```sh
+curl -XPUT http://localhost:9200/_data_stream/auditbeat-9.0.0-beta1
+```
+
+**win:**
+
+```sh
+PS > Invoke-RestMethod -Method Put -Uri http://localhost:9200/_data_stream/auditbeat-9.0.0-beta1
+```
+
diff --git a/docs/reference/auditbeat/auditbeat.md b/docs/reference/auditbeat/auditbeat.md
new file mode 100644
index 000000000000..0be0c0096c02
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat.md
@@ -0,0 +1,8 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/index.html
+---
+
+# Auditbeat
+
+Just a placeholder for a top index page.
diff --git a/docs/reference/auditbeat/bandwidth-throttling.md b/docs/reference/auditbeat/bandwidth-throttling.md
new file mode 100644
index 000000000000..8c8b2961fdc5
--- /dev/null
+++ b/docs/reference/auditbeat/bandwidth-throttling.md
@@ -0,0 +1,20 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/bandwidth-throttling.html
+---
+
+# Auditbeat uses too much bandwidth [bandwidth-throttling]
+
+If you need to limit bandwidth usage, we recommend that you configure the network stack on your OS to perform bandwidth throttling.
+
+For example, the following Linux commands cap the connection between Auditbeat and Logstash by setting a limit of 50 kbps on TCP connections over port 5044:
+
+```shell
+tc qdisc add dev $DEV root handle 1: htb
+tc class add dev $DEV parent 1:1 classid 1:10 htb rate 50kbps ceil 50kbps
+tc filter add dev $DEV parent 1:0 prio 1 protocol ip handle 10 fw flowid 1:10
+iptables -A OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10
+```
+
+Using OS tools to perform bandwidth throttling gives you better control over policies. For example, you can use OS tools to cap bandwidth during the day, but not at night. Or you can leave the bandwidth uncapped, but assign a low priority to the traffic.
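+
+One possible way to do this, sketched here with hypothetical script names, is to schedule the `tc`/`iptables` commands above with cron so the cap only applies during business hours:
+
+```shell
+# Hypothetical crontab entries: throttle-on.sh applies the tc/iptables rules
+# shown above; throttle-off.sh removes them (for example with
+# "tc qdisc del dev $DEV root" and the matching "iptables -D" rule).
+0 8  * * * /usr/local/sbin/throttle-on.sh
+0 18 * * * /usr/local/sbin/throttle-off.sh
+```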
+
diff --git a/docs/reference/auditbeat/beats-api-keys.md b/docs/reference/auditbeat/beats-api-keys.md
new file mode 100644
index 000000000000..2772c23eadd0
--- /dev/null
+++ b/docs/reference/auditbeat/beats-api-keys.md
@@ -0,0 +1,142 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/beats-api-keys.html
+---
+
+# Grant access using API keys [beats-api-keys]
+
+Instead of using usernames and passwords, you can use API keys to grant access to {{es}} resources. You can set API keys to expire at a certain time, and you can explicitly invalidate them. Any user with the `manage_api_key` or `manage_own_api_key` cluster privilege can create API keys.
+
+Auditbeat instances typically send both collected data and monitoring information to {{es}}. If you are sending both to the same cluster, you can use the same API key. For different clusters, you need to use an API key per cluster.
+
+::::{note}
+For security reasons, we recommend using a unique API key per Auditbeat instance. You can create as many API keys per user as necessary.
+::::
+
+
+::::{important}
+Review [*Grant users access to secured resources*](/reference/auditbeat/feature-roles.md) before creating API keys for Auditbeat.
+::::
+
+
+
+## Create an API key for publishing [beats-api-key-publish]
+
+To create an API key to use for writing data to {{es}}, use the [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key), for example:
+
+```console
+POST /_security/api_key
+{
+ "name": "auditbeat_host001", <1>
+ "role_descriptors": {
+ "auditbeat_writer": { <2>
+ "cluster": ["monitor", "read_ilm", "read_pipeline"],
+ "index": [
+ {
+ "names": ["auditbeat-*"],
+ "privileges": ["view_index_metadata", "create_doc", "auto_configure"]
+ }
+ ]
+ }
+ }
+}
+```
+
+1. Name of the API key
+2. Granted privileges, see [*Grant users access to secured resources*](/reference/auditbeat/feature-roles.md)
+
+
+::::{note}
+See [Create a *publishing* user](/reference/auditbeat/privileges-to-publish-events.md) for the list of privileges required to publish events.
+::::
+
+
+The return value will look something like this:
+
+```console-result
+{
+ "id":"TiNAGG4BaaMdaH1tRfuU", <1>
+ "name":"auditbeat_host001",
+ "api_key":"KnR6yE41RrSowb0kQ0HWoA" <2>
+}
+```
+
+1. Unique id for this API key
+2. Generated API key
+
+
+You can now use this API key in your `auditbeat.yml` configuration file like this:
+
+```yaml
+output.elasticsearch:
+ api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA <1>
+```
+
+1. Format is `id:api_key` (as returned by [Create API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key))
+
+
+
+## Create an API key for monitoring [beats-api-key-monitor]
+
+To create an API key to use for sending monitoring data to {{es}}, use the [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key), for example:
+
+```console
+POST /_security/api_key
+{
+ "name": "auditbeat_host001", <1>
+ "role_descriptors": {
+ "auditbeat_monitoring": { <2>
+ "cluster": ["monitor"],
+ "index": [
+ {
+ "names": [".monitoring-beats-*"],
+ "privileges": ["create_index", "create"]
+ }
+ ]
+ }
+ }
+}
+```
+
+1. Name of the API key
+2. Granted privileges, see [*Grant users access to secured resources*](/reference/auditbeat/feature-roles.md)
+
+
+::::{note}
+See [Create a *monitoring* user](/reference/auditbeat/privileges-to-publish-monitoring.md) for the list of privileges required to send monitoring data.
+::::
+
+
+The return value will look something like this:
+
+```console-result
+{
+ "id":"TiNAGG4BaaMdaH1tRfuU", <1>
+ "name":"auditbeat_host001",
+ "api_key":"KnR6yE41RrSowb0kQ0HWoA" <2>
+}
+```
+
+1. Unique id for this API key
+2. Generated API key
+
+
+You can now use this API key in your `auditbeat.yml` configuration file like this:
+
+```yaml
+monitoring.elasticsearch:
+ api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA <1>
+```
+
+1. Format is `id:api_key` (as returned by [Create API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key))
+
+
+
+## Learn more about API keys [learn-more-api-keys]
+
+See the {{es}} API key documentation for more information:
+
+* [Create API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key)
+* [Get API key information](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-api-key)
+* [Invalidate API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-invalidate-api-key)
+
diff --git a/docs/reference/auditbeat/change-index-name.md b/docs/reference/auditbeat/change-index-name.md
new file mode 100644
index 000000000000..468db6c9284c
--- /dev/null
+++ b/docs/reference/auditbeat/change-index-name.md
@@ -0,0 +1,23 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/change-index-name.html
+---
+
+# Change the index name [change-index-name]
+
+Auditbeat uses data streams named `auditbeat-9.0.0-beta1`. To use a different name, set the [`index`](/reference/auditbeat/elasticsearch-output.md#index-option-es) option in the {{es}} output. You also need to configure the `setup.template.name` and `setup.template.pattern` options to match the new name. For example:
+
+```sh
+output.elasticsearch.index: "customname-%{[agent.version]}"
+setup.template.name: "customname-%{[agent.version]}"
+setup.template.pattern: "customname-%{[agent.version]}"
+```
+
+If you’re using pre-built Kibana dashboards, also set the `setup.dashboards.index` option. For example:
+
+```yaml
+setup.dashboards.index: "customname-*"
+```
+
+For a full list of template setup options, see [Elasticsearch index template](/reference/auditbeat/configuration-template.md).
+
diff --git a/docs/reference/auditbeat/command-line-options.md b/docs/reference/auditbeat/command-line-options.md
new file mode 100644
index 000000000000..ae37d5282d90
--- /dev/null
+++ b/docs/reference/auditbeat/command-line-options.md
@@ -0,0 +1,362 @@
+---
+navigation_title: "Command reference"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/command-line-options.html
+---
+
+# Auditbeat command reference [command-line-options]
+
+
+Auditbeat provides a command-line interface for starting Auditbeat and performing common tasks, like testing configuration files and loading dashboards.
+
+The command-line also supports [global flags](#global-flags) for controlling global behaviors.
+
+::::{tip}
+Use `sudo` to run the following commands if:
+
+* the config file is owned by `root`, or
+* Auditbeat is configured to capture data that requires `root` access
+
+::::
+
+
+Some of the features described here require an Elastic license. For more information, see [https://www.elastic.co/subscriptions](https://www.elastic.co/subscriptions) and [License Management](docs-content://deploy-manage/license/manage-your-license-in-self-managed-cluster.md).
+
+| Commands | |
+| --- | --- |
+| [`export`](#export-command) | Exports the configuration, index template, ILM policy, or a dashboard to stdout. |
+| [`help`](#help-command) | Shows help for any command. |
+| [`keystore`](#keystore-command) | Manages the [secrets keystore](/reference/auditbeat/keystore.md). |
+| [`run`](#run-command) | Runs Auditbeat. This command is used by default if you start Auditbeat without specifying a command. |
+| [`setup`](#setup-command) | Sets up the initial environment, including the index template, ILM policy and write alias, and {{kib}} dashboards (when available). |
+| [`test`](#test-command) | Tests the configuration. |
+| [`version`](#version-command) | Shows information about the current version. |
+
+Also see [Global flags](#global-flags).
+
+## `export` command [export-command]
+
+Exports the configuration, index template, ILM policy, or a dashboard to stdout. You can use this command to quickly view your configuration, see the contents of the index template and the ILM policy, or export a dashboard from {{kib}}.
+
+**SYNOPSIS**
+
+```sh
+auditbeat export SUBCOMMAND [FLAGS]
+```
+
+**SUBCOMMANDS**
+
+**`config`**
+: Exports the current configuration to stdout. If you use the `-c` flag, this command exports the configuration that’s defined in the specified file.
+
+$$$dashboard-subcommand$$$**`dashboard`**
+: Exports a dashboard. You can use this option to store a dashboard on disk in a module and load it automatically. For example, to export the dashboard to a JSON file, run:
+
+ ```shell
+ auditbeat export dashboard --id="DASHBOARD_ID" > dashboard.json
+ ```
+
+ To find the `DASHBOARD_ID`, look at the URL for the dashboard in {{kib}}. By default, `export dashboard` writes the dashboard to stdout. The example shows how to write the dashboard to a JSON file so that you can import it later. The JSON file will contain the dashboard with all visualizations and searches. You must load the index pattern separately for Auditbeat.
+
+ To load the dashboard, copy the generated `dashboard.json` file into the `kibana/6/dashboard` directory of Auditbeat, and run `auditbeat setup --dashboards` to import the dashboard.
+
+ If {{kib}} is not running on `localhost:5601`, you must also adjust the Auditbeat configuration under `setup.kibana`.
+
+
+$$$template-subcommand$$$**`template`**
+: Exports the index template to stdout. You can specify the `--es.version` flag to further define what gets exported. You can also export the template to a file instead of `stdout` by specifying a directory with `--dir`.
+
+$$$ilm-policy-subcommand$$$
+
+**`ilm-policy`**
+: Exports the index lifecycle management policy to stdout. You can specify the `--es.version` flag, and use `--dir` to export the policy to a file rather than to `stdout`.
+
+**FLAGS**
+
+**`--es.version VERSION`**
+: When used with [`template`](#template-subcommand), exports an index template that is compatible with the specified version. When used with [`ilm-policy`](#ilm-policy-subcommand), exports the ILM policy if the specified ES version is enabled for ILM.
+
+**`-h, --help`**
+: Shows help for the `export` command.
+
+**`--dir DIRNAME`**
+: Defines the directory to which the template, pipelines, and ILM policy are exported as files instead of being printed to `stdout`.
+
+**`--id DASHBOARD_ID`**
+: When used with [`dashboard`](#dashboard-subcommand), specifies the dashboard ID.
+
+Also see [Global flags](#global-flags).
+
+**EXAMPLES**
+
+```sh
+auditbeat export config
+auditbeat export template --es.version 9.0.0-beta1
+auditbeat export dashboard --id="a7b35890-8baa-11e8-9676-ef67484126fb" > dashboard.json
+```
+
+
+## `help` command [help-command]
+
+Shows help for any command. If no command is specified, shows help for the `run` command.
+
+**SYNOPSIS**
+
+```sh
+auditbeat help COMMAND_NAME [FLAGS]
+```
+
+**`COMMAND_NAME`**
+: Specifies the name of the command to show help for.
+
+**FLAGS**
+
+**`-h, --help`**
+: Shows help for the `help` command.
+
+Also see [Global flags](#global-flags).
+
+**EXAMPLE**
+
+```sh
+auditbeat help export
+```
+
+
+## `keystore` command [keystore-command]
+
+Manages the [secrets keystore](/reference/auditbeat/keystore.md).
+
+**SYNOPSIS**
+
+```sh
+auditbeat keystore SUBCOMMAND [FLAGS]
+```
+
+**SUBCOMMANDS**
+
+**`add KEY`**
+: Adds the specified key to the keystore. Use the `--force` flag to overwrite an existing key. Use the `--stdin` flag to pass the value through `stdin`.
+
+**`create`**
+: Creates a keystore to hold secrets. Use the `--force` flag to overwrite the existing keystore.
+
+**`list`**
+: Lists the keys in the keystore.
+
+**`remove KEY`**
+: Removes the specified key from the keystore.
+
+**FLAGS**
+
+**`--force`**
+: Valid with the `add` and `create` subcommands. When used with `add`, overwrites the specified key. When used with `create`, overwrites the keystore.
+
+**`--stdin`**
+: When used with `add`, uses the stdin as the source of the key’s value.
+
+**`-h, --help`**
+: Shows help for the `keystore` command.
+
+Also see [Global flags](#global-flags).
+
+**EXAMPLES**
+
+```sh
+auditbeat keystore create
+auditbeat keystore add ES_PWD
+auditbeat keystore remove ES_PWD
+auditbeat keystore list
+```
+
+See [Secrets keystore](/reference/auditbeat/keystore.md) for more examples.
+
+
+## `run` command [run-command]
+
+Runs Auditbeat. This command is used by default if you start Auditbeat without specifying a command.
+
+**SYNOPSIS**
+
+```sh
+auditbeat run [FLAGS]
+```
+
+Or:
+
+```sh
+auditbeat [FLAGS]
+```
+
+**FLAGS**
+
+**`-N, --N`**
+: Disables publishing for testing purposes. This option disables all outputs except the [File output](/reference/auditbeat/file-output.md).
+
+**`--cpuprofile FILE`**
+: Writes CPU profile data to the specified file. This option is useful for troubleshooting Auditbeat.
+
+**`-h, --help`**
+: Shows help for the `run` command.
+
+**`--httpprof [HOST]:PORT`**
+: Starts an http server for profiling. This option is useful for troubleshooting and profiling Auditbeat.
+
+**`--memprofile FILE`**
+: Writes memory profile data to the specified output file. This option is useful for troubleshooting Auditbeat.
+
+**`--system.hostfs MOUNT_POINT`**
+: Specifies the mount point of the host’s filesystem for use in monitoring a host. This flag is deprecated; specify an alternate hostfs via the `hostfs` module config value instead.
+
+Also see [Global flags](#global-flags).
+
+**EXAMPLE**
+
+```sh
+auditbeat run -e
+```
+
+Or:
+
+```sh
+auditbeat -e
+```
+
+
+## `setup` command [setup-command]
+
+Sets up the initial environment, including the index template, ILM policy and write alias, and {{kib}} dashboards (when available).
+
+* The index template ensures that fields are mapped correctly in Elasticsearch. If index lifecycle management is enabled, it also ensures that the defined ILM policy and write alias are connected to the indices matching the index template. The ILM policy takes care of the lifecycle of an index: when to do a rollover, when to move an index from the hot phase to the next phase, and so on.
+* The {{kib}} dashboards make it easier for you to visualize Auditbeat data in {{kib}}.
+
+This command sets up the environment without actually running Auditbeat and ingesting data. Specify optional flags to set up a subset of assets.
+
+**SYNOPSIS**
+
+```sh
+auditbeat setup [FLAGS]
+```
+
+**FLAGS**
+
+**`--dashboards`**
+: Sets up the {{kib}} dashboards (when available). This option loads the dashboards from the Auditbeat package. For more options, such as loading customized dashboards, see [Importing Existing Beat Dashboards](http://www.elastic.co/guide/en/beats/devguide/master/import-dashboards.md) in the *Beats Developer Guide*.
+
+**`-h, --help`**
+: Shows help for the `setup` command.
+
+**`--index-management`**
+: Sets up components related to Elasticsearch index management including template, ILM policy, and write alias (if supported and configured).
+
+Also see [Global flags](#global-flags).
+
+**EXAMPLES**
+
+```sh
+auditbeat setup --dashboards
+auditbeat setup --index-management
+```
+
+
+## `test` command [test-command]
+
+Tests the configuration.
+
+**SYNOPSIS**
+
+```sh
+auditbeat test SUBCOMMAND [FLAGS]
+```
+
+**SUBCOMMANDS**
+
+**`config`**
+: Tests the configuration settings.
+
+**`output`**
+: Tests that Auditbeat can connect to the output by using the current settings.
+
+**FLAGS**
+
+**`-h, --help`**
+: Shows help for the `test` command.
+
+Also see [Global flags](#global-flags).
+
+**EXAMPLE**
+
+```sh
+auditbeat test config
+```
+
+
+## `version` command [version-command]
+
+Shows information about the current version.
+
+**SYNOPSIS**
+
+```sh
+auditbeat version [FLAGS]
+```
+
+**FLAGS**
+
+**`-h, --help`**
+: Shows help for the `version` command.
+
+Also see [Global flags](#global-flags).
+
+**EXAMPLE**
+
+```sh
+auditbeat version
+```
+
+
+## Global flags [global-flags]
+
+These global flags are available whenever you run Auditbeat.
+
+**`-E, --E "SETTING_NAME=VALUE"`**
+: Overrides a specific configuration setting. You can specify multiple overrides. For example:
+
+ ```sh
+ auditbeat -E "name=mybeat" -E "output.elasticsearch.hosts=['http://myhost:9200']"
+ ```
+
+ This setting is applied to the currently running Auditbeat process. The Auditbeat configuration file is not changed.
+
+
+**`-c, --c FILE`**
+: Specifies the configuration file to use for Auditbeat. The file you specify here is relative to `path.config`. If the `-c` flag is not specified, the default config file, `auditbeat.yml`, is used.
+
+**`-d, --d SELECTORS`**
+: Enables debugging for the specified selectors. For the selectors, you can specify a comma-separated list of components, or you can use `-d "*"` to enable debugging for all components. For example, `-d "publisher"` displays all the publisher-related messages.
+
+**`-e, --e`**
+: Logs to stderr and disables syslog/file output.
+
+**`--environment`**
+: For logging purposes, specifies the environment that Auditbeat is running in. This setting is used to select a default log output when no log output is configured. Supported values are: `systemd`, `container`, `macos_service`, and `windows_service`. If `systemd` or `container` is specified, Auditbeat will log to stdout and stderr by default.
+
+**`--path.config`**
+: Sets the path for configuration files. See the [Directory layout](/reference/auditbeat/directory-layout.md) section for details.
+
+**`--path.data`**
+: Sets the path for data files. See the [Directory layout](/reference/auditbeat/directory-layout.md) section for details.
+
+**`--path.home`**
+: Sets the path for miscellaneous files. See the [Directory layout](/reference/auditbeat/directory-layout.md) section for details.
+
+**`--path.logs`**
+: Sets the path for log files. See the [Directory layout](/reference/auditbeat/directory-layout.md) section for details.
+
+**`--strict.perms`**
+: Sets strict permission checking on configuration files. The default is `--strict.perms=true`. See [Config file ownership and permissions](/reference/libbeat/config-file-permissions.md) for more information.
+
+**`-v, --v`**
+: Logs INFO-level messages.
+
+
diff --git a/docs/reference/auditbeat/community-id.md b/docs/reference/auditbeat/community-id.md
new file mode 100644
index 000000000000..c5882878e85a
--- /dev/null
+++ b/docs/reference/auditbeat/community-id.md
@@ -0,0 +1,41 @@
+---
+navigation_title: "community_id"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/community-id.html
+---
+
+# Community ID Network Flow Hash [community-id]
+
+
+The `community_id` processor computes a network flow hash according to the [Community ID Flow Hash specification](https://github.com/corelight/community-id-spec).
+
+The flow hash is useful for correlating all network events related to a single flow. For example, you can filter on a community ID value and you might get back the Netflow records from multiple collectors and layer 7 protocol records from Packetbeat.
+
+By default, the processor is configured to read the flow parameters from the appropriate Elastic Common Schema (ECS) fields. If you are processing ECS data, no parameters are required.
+
+```yaml
+processors:
+ - community_id:
+```
+
+If the data does not conform to ECS, you can customize the field names that the processor reads from. You can also change the `target` field, which is where the computed hash is written.
+
+```yaml
+processors:
+ - community_id:
+ fields:
+ source_ip: my_source_ip
+ source_port: my_source_port
+ destination_ip: my_dest_ip
+ destination_port: my_dest_port
+ iana_number: my_iana_number
+ transport: my_transport
+ icmp_type: my_icmp_type
+ icmp_code: my_icmp_code
+ target: network.community_id
+```
+
+If the necessary fields are not present in the event, the processor silently continues without adding the target field.
+
+The processor also accepts an optional `seed` parameter that must be a 16-bit unsigned integer. This value gets incorporated into all generated hashes.
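+
+For example, the following minimal sketch sets a custom seed (the value shown is arbitrary):
+
+```yaml
+processors:
+  - community_id:
+      # any 16-bit unsigned integer; changes all generated hashes
+      seed: 123
+```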
+
diff --git a/docs/reference/auditbeat/configuration-auditbeat.md b/docs/reference/auditbeat/configuration-auditbeat.md
new file mode 100644
index 000000000000..0d13f0766834
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-auditbeat.md
@@ -0,0 +1,32 @@
+---
+navigation_title: "Modules"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-auditbeat.html
+---
+
+# Configure modules [configuration-auditbeat]
+
+
+To enable specific modules you add entries to the `auditbeat.modules` list in the `auditbeat.yml` config file. Each entry in the list begins with a dash (-) and is followed by settings for that module.
+
+The following example shows a configuration that runs the `auditd` and `file_integrity` modules.
+
+```yaml
+auditbeat.modules:
+
+- module: auditd
+ audit_rules: |
+ -w /etc/passwd -p wa -k identity
+ -a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
+
+- module: file_integrity
+ paths:
+ - /bin
+ - /usr/bin
+ - /sbin
+ - /usr/sbin
+ - /etc
+```
+
+The configuration details vary by module. See the [module documentation](/reference/auditbeat/auditbeat-modules.md) for more detail about configuring the available modules.
+
diff --git a/docs/reference/auditbeat/configuration-dashboards.md b/docs/reference/auditbeat/configuration-dashboards.md
new file mode 100644
index 000000000000..942260c97103
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-dashboards.md
@@ -0,0 +1,103 @@
+---
+navigation_title: "Kibana dashboards"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-dashboards.html
+---
+
+# Configure Kibana dashboard loading [configuration-dashboards]
+
+
+Auditbeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Auditbeat data in Kibana.
+
+To load the dashboards, you can either enable dashboard loading in the `setup.dashboards` section of the `auditbeat.yml` config file, or you can run the `setup` command. Dashboard loading is disabled by default.
+
+When dashboard loading is enabled, Auditbeat uses the Kibana API to load the sample dashboards. Dashboard loading is only attempted when Auditbeat starts up. If Kibana is not available at startup, Auditbeat will stop with an error.
+
+To enable dashboard loading, add the following setting to the config file:
+
+```yaml
+setup.dashboards.enabled: true
+```
+
+
+## Configuration options [_configuration_options_12]
+
+You can specify the following options in the `setup.dashboards` section of the `auditbeat.yml` config file:
+
+
+### `setup.dashboards.enabled` [_setup_dashboards_enabled]
+
+If this option is set to true, Auditbeat loads the sample Kibana dashboards from the local `kibana` directory in the home path of the Auditbeat installation.
+
+::::{note}
+Auditbeat loads dashboards on startup if either `enabled` is set to `true` or the `setup.dashboards` section is included in the configuration.
+::::
+
+
+::::{note}
+When dashboard loading is enabled, Auditbeat overwrites any existing dashboards that match the names of the dashboards you are loading. This happens every time Auditbeat starts.
+::::
+
+
+If no other options are set, the dashboards are loaded from the local `kibana` directory in the home path of the Auditbeat installation. To load dashboards from a different location, you can configure one of the following options: [`setup.dashboards.directory`](#directory-option), [`setup.dashboards.url`](#url-option), or [`setup.dashboards.file`](#file-option).
+
+
+### `setup.dashboards.directory` [directory-option]
+
+The directory that contains the dashboards to load. The default is the `kibana` folder in the home path.
+
+
+### `setup.dashboards.url` [url-option]
+
+The URL to use for downloading the dashboard archive. If this option is set, Auditbeat downloads the dashboard archive from the specified URL instead of using the local directory.
+
+
+### `setup.dashboards.file` [file-option]
+
+The file archive (zip file) that contains the dashboards to load. If this option is set, Auditbeat looks for a dashboard archive in the specified path instead of using the local directory.
+
+
+### `setup.dashboards.beat` [_setup_dashboards_beat]
+
+If the archive contains dashboards for multiple Beats, this setting lets you select the Beat for which you want to load dashboards. To load all the dashboards in the archive, set this option to an empty string. The default is `"auditbeat"`.
+
+
+### `setup.dashboards.kibana_index` [_setup_dashboards_kibana_index]
+
+The name of the Kibana index to use for setting the configuration. The default is `".kibana"`.
+
+
+### `setup.dashboards.index` [_setup_dashboards_index]
+
+The Elasticsearch index name. This setting overwrites the index name defined in the dashboards and index pattern. Example: `"testbeat-*"`
+
+::::{note}
+This setting only works for Kibana 6.0 and newer.
+::::
+
+
+
+### `setup.dashboards.always_kibana` [_setup_dashboards_always_kibana]
+
+Force loading of dashboards using the Kibana API without querying Elasticsearch for the version. The default is `false`.
+
+
+### `setup.dashboards.retry.enabled` [_setup_dashboards_retry_enabled]
+
+If this option is set to true, and Kibana is not reachable when dashboards are loaded, Auditbeat retries connecting to Kibana instead of exiting with an error. Disabled by default.
+
+
+### `setup.dashboards.retry.interval` [_setup_dashboards_retry_interval]
+
+Duration interval between Kibana connection retries. Defaults to 1 second.
+
+
+### `setup.dashboards.retry.maximum` [_setup_dashboards_retry_maximum]
+
+Maximum number of retries before exiting with an error. Set to 0 for unlimited retrying. Default is unlimited.
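+
+For example, a sketch (the values are illustrative) that keeps retrying every 5 seconds for up to 20 attempts before giving up:
+
+```yaml
+setup.dashboards.enabled: true
+setup.dashboards.retry.enabled: true
+setup.dashboards.retry.interval: 5s
+setup.dashboards.retry.maximum: 20
+```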
+
+
+### `setup.dashboards.string_replacements` [_setup_dashboards_string_replacements]
+
+A map of needle and replacement strings, used to replace the needle strings in dashboards and the contents of their references.
+
diff --git a/docs/reference/auditbeat/configuration-feature-flags.md b/docs/reference/auditbeat/configuration-feature-flags.md
new file mode 100644
index 000000000000..7a64f2939d68
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-feature-flags.md
@@ -0,0 +1,54 @@
+---
+navigation_title: "Feature flags"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-feature-flags.html
+---
+
+# Configure feature flags [configuration-feature-flags]
+
+
+The Feature Flags section of the `auditbeat.yml` config file contains settings in Auditbeat that are disabled by default. These may include experimental features, changes to behaviors within Auditbeat, or settings that could cause a breaking change. For example, a setting that changes information included in events might be inconsistent with the naming pattern expected in your configured Auditbeat output.
+
+To enable any of the settings listed on this page, change the associated `enabled` flag from `false` to `true`.
+
+```yaml
+features:
+ mysetting:
+ enabled: true
+```
+
+
+## Configuration options [_configuration_options_16]
+
+You can specify the following options in the `features` section of the `auditbeat.yml` config file:
+
+
+### `fqdn` [_fqdn]
+
+Contains configuration for the FQDN reporting feature. When this feature is enabled, the fully-qualified domain name for the host is reported in the `host.name` field in events produced by Auditbeat.
+
+::::{warning}
+This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
+::::
+
+
+For FQDN reporting to work as expected, the hostname of the current host must either:
+
+* Have a CNAME entry defined in DNS.
+* Have one of its corresponding IP addresses respond successfully to a reverse DNS lookup.
+
+If neither prerequisite is satisfied, `host.name` continues to report the hostname of the current host as if the FQDN feature flag were not enabled.
+
+Example configuration:
+
+```yaml
+features:
+ fqdn:
+ enabled: true
+```
+
+
+#### `enabled` [_enabled_10]
+
+Set to `true` to enable the FQDN reporting feature of Auditbeat. Defaults to `false`.
+
diff --git a/docs/reference/auditbeat/configuration-general-options.md b/docs/reference/auditbeat/configuration-general-options.md
new file mode 100644
index 000000000000..fca99fbe88e3
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-general-options.md
@@ -0,0 +1,88 @@
+---
+navigation_title: "General settings"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-general-options.html
+---
+
+# Configure general settings [configuration-general-options]
+
+
+You can specify settings in the `auditbeat.yml` config file to control the general behavior of Auditbeat.
+
+
+## General configuration options [configuration-general]
+
+
+These options are supported by all Elastic Beats. Because they are common options, they are not namespaced.
+
+Here is an example configuration:
+
+```yaml
+name: "my-shipper"
+tags: ["service-X", "web-tier"]
+```
+
+
+### `name` [_name]
+
+The name of the Beat. If this option is empty, the `hostname` of the server is used. The name is included as the `agent.name` field in each published transaction. You can use the name to group all transactions sent by a single Beat.
+
+Example:
+
+```yaml
+name: "my-shipper"
+```
+
+
+### `tags` [_tags]
+
+A list of tags that the Beat includes in the `tags` field of each published transaction. Tags make it easy to group servers by different logical properties. For example, if you have a cluster of web servers, you can add the "webservers" tag to the Beat on each server, and then use filters and queries in the Kibana web interface to get visualizations for the whole group of servers.
+
+Example:
+
+```yaml
+tags: ["my-service", "hardware", "test"]
+```
+
+
+### `fields` [libbeat-configuration-fields]
+
+Optional fields that you can specify to add additional information to the output. Fields can be scalar values, arrays, dictionaries, or any nested combination of these. By default, the fields that you specify here will be grouped under a `fields` sub-dictionary in the output document. To store the custom fields as top-level fields, set the `fields_under_root` option to true.
+
+Example:
+
+```yaml
+fields: {project: "myproject", instance-id: "574734885120952459"}
+```
+
+
+### `fields_under_root` [_fields_under_root]
+
+If this option is set to true, the custom [fields](#libbeat-configuration-fields) are stored as top-level fields in the output document instead of being grouped under a `fields` sub-dictionary. If the custom field names conflict with other field names, then the custom fields overwrite the other fields.
+
+Example:
+
+```yaml
+fields_under_root: true
+fields:
+ instance_id: i-10a64379
+ region: us-east-1
+```
+
+
+### `processors` [_processors]
+
+A list of processors to apply to the data generated by the beat.
+
+See [Processors](/reference/auditbeat/filtering-enhancing-data.md) for information about specifying processors in your config.
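+
+Example (a minimal sketch; the processor and field name are only illustrative):
+
+```yaml
+processors:
+  - drop_fields:
+      # remove a field you do not need from every event
+      fields: ["host.architecture"]
+```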
+
+
+### `max_procs` [_max_procs]
+
+Sets the maximum number of CPUs that can be executing simultaneously. The default is the number of logical CPUs available in the system.
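+
+Example (the value is illustrative):
+
+```yaml
+max_procs: 4
+```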
+
+
+### `timestamp.precision` [_timestamp_precision]
+
+Configures the precision of all timestamps. By default, it is set to `millisecond`. Available options: `millisecond`, `microsecond`, and `nanosecond`.
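+
+Example:
+
+```yaml
+timestamp.precision: microsecond
+```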
+
diff --git a/docs/reference/auditbeat/configuration-instrumentation.md b/docs/reference/auditbeat/configuration-instrumentation.md
new file mode 100644
index 000000000000..6b935e91ebc6
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-instrumentation.md
@@ -0,0 +1,87 @@
+---
+navigation_title: "Instrumentation"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-instrumentation.html
+---
+
+# Configure APM instrumentation [configuration-instrumentation]
+
+
+Libbeat uses the Elastic APM Go Agent to instrument its publishing pipeline. Currently, only the Elasticsearch output is instrumented. To gain insight into the performance of Auditbeat, you can enable this instrumentation and send trace data to the APM Integration.
+
+Example configuration with instrumentation enabled:
+
+```yaml
+instrumentation:
+ enabled: true
+ environment: production
+ hosts:
+ - "http://localhost:8200"
+ api_key: L5ER6FEvjkmlfalBealQ3f3fLqf03fazfOV
+```
+
+
+## Configuration options [_configuration_options_15]
+
+You can specify the following options in the `instrumentation` section of the `auditbeat.yml` config file:
+
+
+### `enabled` [_enabled_9]
+
+Set to `true` to enable instrumentation of Auditbeat. Defaults to `false`.
+
+
+### `environment` [_environment]
+
+Set the environment in which Auditbeat is running, for example, `staging`, `production`, `dev`, etc. Environments can be filtered in the [APM app](docs-content://solutions/observability/apps/overviews.md).
+
+
+### `hosts` [_hosts_3]
+
+The APM integration [host](docs-content://reference/ingestion-tools/observability/apm-settings.md) to report instrumentation data to. Defaults to `http://localhost:8200`.
+
+
+### `api_key` [_api_key_2]
+
+The [API Key](docs-content://reference/ingestion-tools/observability/apm-settings.md) used to secure communication with the APM Integration. If `api_key` is set then `secret_token` will be ignored.
+
+
+### `secret_token` [_secret_token]
+
+The [Secret token](docs-content://reference/ingestion-tools/observability/apm-settings.md) used to secure communication with the APM Integration.
+
+
+### `profiling.cpu.enabled` [_profiling_cpu_enabled]
+
+Set to `true` to enable CPU profiling, where profile samples are recorded as events.
+
+This feature is experimental.
+
+
+### `profiling.cpu.interval` [_profiling_cpu_interval]
+
+Configure the CPU profiling interval. Defaults to `60s`.
+
+This feature is experimental.
+
+
+### `profiling.cpu.duration` [_profiling_cpu_duration]
+
+Configure the CPU profiling duration. Defaults to `10s`.
+
+This feature is experimental.
+
+
+### `profiling.heap.enabled` [_profiling_heap_enabled]
+
+Set to `true` to enable heap profiling.
+
+This feature is experimental.
+
+
+### `profiling.heap.interval` [_profiling_heap_interval]
+
+Configure the heap profiling interval. Defaults to `60s`.
+
+This feature is experimental.
+
diff --git a/docs/reference/auditbeat/configuration-kerberos.md b/docs/reference/auditbeat/configuration-kerberos.md
new file mode 100644
index 000000000000..e3a5a183572e
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-kerberos.md
@@ -0,0 +1,90 @@
+---
+navigation_title: "Kerberos"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-kerberos.html
+---
+
+# Configure Kerberos [configuration-kerberos]
+
+
+You can specify Kerberos options with any output or input that supports Kerberos, like {{es}}.
+
+The following encryption types are supported:
+
+* aes128-cts-hmac-sha1-96
+* aes128-cts-hmac-sha256-128
+* aes256-cts-hmac-sha1-96
+* aes256-cts-hmac-sha384-192
+* des3-cbc-sha1-kd
+* rc4-hmac
+
+Example output config with Kerberos password based authentication:
+
+```yaml
+output.elasticsearch.hosts: ["http://my-elasticsearch.elastic.co:9200"]
+output.elasticsearch.kerberos.auth_type: password
+output.elasticsearch.kerberos.username: "elastic"
+output.elasticsearch.kerberos.password: "changeme"
+output.elasticsearch.kerberos.config_path: "/etc/krb5.conf"
+output.elasticsearch.kerberos.realm: "ELASTIC.CO"
+```
+
+The service principal name for the Elasticsearch instance is constructed from these options. Based on this configuration, it will be `HTTP/my-elasticsearch.elastic.co@ELASTIC.CO`.
+
+
+## Configuration options [_configuration_options_9]
+
+You can specify the following options in the `kerberos` section of the `auditbeat.yml` config file:
+
+
+### `enabled` [_enabled_8]
+
+The `enabled` setting can be used to disable the Kerberos configuration by setting it to `false`. The default value is `true`.
+
+::::{note}
+Kerberos settings are disabled if either `enabled` is set to `false` or the `kerberos` section is missing.
+::::
+
+
+
+### `auth_type` [_auth_type]
+
+There are two options to authenticate with Kerberos KDC: `password` and `keytab`.
+
+`password` expects the principal name and its password. When choosing `keytab`, you have to specify a principal name and a path to a keytab. The keytab must contain the keys of the selected principal. Otherwise, authentication will fail.
+
+
+### `config_path` [_config_path]
+
+You need to set the path to the `krb5.conf`, so Auditbeat can find the Kerberos KDC to retrieve a ticket.
+
+
+### `username` [_username_3]
+
+Name of the principal used to connect to the output.
+
+
+### `password` [_password_4]
+
+If you configured `password` for `auth_type`, you have to provide a password for the selected principal.
+
+
+### `keytab` [_keytab]
+
+If you configured `keytab` for `auth_type`, you have to provide the path to the keytab of the selected principal.
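+
+For example, a keytab-based variant of the earlier password example (the keytab path is only a placeholder):
+
+```yaml
+output.elasticsearch.hosts: ["http://my-elasticsearch.elastic.co:9200"]
+output.elasticsearch.kerberos.auth_type: keytab
+output.elasticsearch.kerberos.username: "elastic"
+output.elasticsearch.kerberos.keytab: "/etc/security/keytabs/elastic.keytab"
+output.elasticsearch.kerberos.config_path: "/etc/krb5.conf"
+output.elasticsearch.kerberos.realm: "ELASTIC.CO"
+```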
+
+
+### `service_name` [_service_name]
+
+This option can only be configured for Kafka. It is the name of the Kafka service, usually `kafka`.
+
+
+### `realm` [_realm]
+
+Name of the realm where the output resides.
+
+
+### `enable_krb5_fast` [_enable_krb5_fast]
+
+Enable Kerberos FAST authentication. This may conflict with some Active Directory installations. The default is `false`.
+
diff --git a/docs/reference/auditbeat/configuration-logging.md b/docs/reference/auditbeat/configuration-logging.md
new file mode 100644
index 000000000000..5ff9c17c26e9
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-logging.md
@@ -0,0 +1,253 @@
+---
+navigation_title: "Logging"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-logging.html
+---
+
+# Configure logging [configuration-logging]
+
+
+The `logging` section of the `auditbeat.yml` config file contains options for configuring the logging output. The logging system can write logs to the syslog or rotate log files. If logging is not explicitly configured, the file output is used.
+
+```yaml
+logging.level: info
+logging.to_files: true
+logging.files:
+ path: /var/log/auditbeat
+ name: auditbeat
+ keepfiles: 7
+ permissions: 0640
+```
+
+::::{tip}
+In addition to setting logging options in the config file, you can modify the logging output configuration from the command line. See [Command reference](/reference/auditbeat/command-line-options.md).
+::::
+
+
+::::{warning}
+When Auditbeat is running on a Linux system with systemd, it uses the `-e` command line option by default, which makes it write all the logging output to stderr so it can be captured by journald. Other outputs are disabled. See [Auditbeat and systemd](/reference/auditbeat/running-with-systemd.md) to learn more and to change this behavior.
+::::
+
+
+
+## Configuration options [_configuration_options_14]
+
+You can specify the following options in the `logging` section of the `auditbeat.yml` config file:
+
+
+### `logging.to_stderr` [_logging_to_stderr]
+
+When true, writes all logging output to standard error output. This is equivalent to using the `-e` command line option.
+
+
+### `logging.to_syslog` [_logging_to_syslog]
+
+When true, writes all logging output to the syslog.
+
+::::{note}
+This option is not supported on Windows.
+::::
+
+
+
+### `logging.to_eventlog` [_logging_to_eventlog]
+
+When true, writes all logging output to the Windows Event Log.
+
+
+### `logging.to_files` [_logging_to_files]
+
+When true, writes all logging output to files. The log files are automatically rotated when the log file size limit is reached.
+
+::::{note}
+Auditbeat only creates a log file if there is logging output. For example, if you set the log [`level`](#level) to `error` and there are no errors, there will be no log file in the directory specified for logs.
+::::
+
+
+
+### `logging.level` [level]
+
+Minimum log level. One of `debug`, `info`, `warning`, or `error`. The default log level is `info`.
+
+`debug`
+: Logs debug messages, including a detailed printout of all events flushed. Also logs informational messages, warnings, errors, and critical errors. When the log level is `debug`, you can specify a list of [`selectors`](#selectors) to display debug messages for specific components. If no selectors are specified, the `*` selector is used to display debug messages for all components.
+
+`info`
+: Logs informational messages, including the number of events that are published. Also logs any warnings, errors, or critical errors.
+
+`warning`
+: Logs warnings, errors, and critical errors.
+
+`error`
+: Logs errors and critical errors.
+
+
+### `logging.selectors` [selectors]
+
+The list of debugging-only selector tags used by different Auditbeat components. Use `*` to enable debug output for all components. Use `publisher` to display debug messages related to event publishing.
+
+::::{tip}
+The list of available selectors may change between releases, so avoid creating tests that depend on specific selectors.
+
+To see which selectors are available, run Auditbeat in debug mode (set `logging.level: debug` in the configuration). The selector name appears after the log level and is enclosed in brackets.
+
+::::
+
+
+To configure multiple selectors, use the following [YAML list syntax](/reference/libbeat/config-file-format.md):
+
+```yaml
+logging.selectors: [ harvester, input ]
+```
+
+To override selectors at the command line, use the `-d` global flag (`-d` also sets the debug log level). For more information, see [Command reference](/reference/auditbeat/command-line-options.md).
+
+
+### `logging.metrics.enabled` [_logging_metrics_enabled]
+
+By default, Auditbeat periodically logs its internal metrics that have changed in the last period. For each metric that changed, the delta from the value at the beginning of the period is logged. Also, the total values for all non-zero internal metrics are logged on shutdown. Set this to false to disable this behavior. The default is true.
+
+Here is an example log line:
+
+```shell
+2017-12-17T19:17:42.667-0500 INFO [metrics] log/log.go:110 Non-zero metrics in the last 30s: beat.info.uptime.ms=30004 beat.memstats.gc_next=5046416
+```
+
+Note that we currently offer no backwards compatibility guarantees for the internal metrics, and for this reason they are also not documented.
+
+
+### `logging.metrics.period` [_logging_metrics_period]
+
+The period after which to log the internal metrics. The default is 30s.
+
+
+### `logging.metrics.namespaces` [_logging_metrics_namespaces]
+
+A list of metrics namespaces to report in the logs. Defaults to `[stats]`. `stats` contains general Beat metrics. `dataset` and `inputs` may be present in some Beats and contain module or input metrics.
+
+
+### `logging.files.path` [_logging_files_path]
+
+The directory that log files are written to. The default is the logs path. See the [Directory layout](/reference/auditbeat/directory-layout.md) section for details.
+
+
+### `logging.files.name` [_logging_files_name]
+
+The name of the file that logs are written to. The default is *auditbeat*.
+
+
+### `logging.files.rotateeverybytes` [_logging_files_rotateeverybytes]
+
+The maximum size of a log file. If the limit is reached, a new log file is generated. The default size limit is 10485760 (10 MB).
+
+
+### `logging.files.keepfiles` [_logging_files_keepfiles]
+
+The number of most recent rotated log files to keep on disk. Older files are deleted during log rotation. The default value is 7. The `keepfiles` option has to be in the range of 2 to 1024 files.
+
+
+### `logging.files.permissions` [_logging_files_permissions]
+
+The permissions mask to apply when rotating log files. The default value is 0600. The `permissions` option must be a valid Unix-style file permissions mask expressed in octal notation. In Go, numbers in octal notation must start with *0*.
+
+The most permissive mask allowed is 0640. If a higher permissions mask is specified via this setting, it will be subject to an umask of 0027.
+
+This option is not supported on Windows.
+
+Examples:
+
+* 0640: give read and write access to the file owner, and read access to members of the group associated with the file.
+* 0600: give read and write access to the file owner, and no access to all others.
+
+
+### `logging.files.interval` [_logging_files_interval]
+
+Enable log file rotation on time intervals in addition to size-based rotation. Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h are boundary-aligned with minutes, hours, days, weeks, months, and years as reported by the local system clock. All other intervals are calculated from the unix epoch. Defaults to disabled.
+
+
+### `logging.files.rotateonstartup` [_logging_files_rotateonstartup]
+
+If the log file already exists on startup, immediately rotate it and start writing to a new file instead of appending to the existing one. Defaults to true.
+
+
+### `logging.files.redirect_stderr` [preview] [_logging_files_redirect_stderr]
+
+When true, diagnostic messages printed to Auditbeat’s standard error output will also be logged to the log file. This can be helpful in situations where Auditbeat terminates unexpectedly because an error has been detected by Go’s runtime but diagnostic information is not present in the log file. This feature is only available when logging to files (`logging.to_files` is true). Disabled by default.
+
+
+## Logging format [_logging_format]
+
+The logging format is generally the same for each logging output. The one exception is with the syslog output where the timestamp is not included in the message because syslog adds its own timestamp.
+
+Each log message consists of the following parts:
+
+* Timestamp in ISO8601 format
+* Level
+* Logger name contained in brackets (Optional)
+* File name and line number of the caller
+* Message
+* Structured data encoded in JSON (Optional)
+
+Below are some samples:
+
+`2017-12-17T18:54:16.241-0500 INFO logp/core_test.go:13 unnamed global logger`
+
+`2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:16 some message`
+
+`2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:19 some message {"x": 1}`
+
+
+## Configuration options for event_data logger [_configuration_options_for_event_data_logger]
+
+Some outputs log raw events on errors, such as indexing errors in the Elasticsearch output. To prevent logging raw events (which may contain sensitive information) together with other log messages, a separate log file is used only for log entries containing raw events. It uses the same level, selectors, and all other configuration from the default logger, but it has its own file configuration.
+
+Having a different log file for raw events also prevents event data from drowning out the regular log files.
+
+::::{important}
+No matter the default logger output configuration, raw events will **always** be logged to a file configured by `logging.event_data.files`.
+::::
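+
+A minimal sketch of an `event_data` file configuration, using values in line with the defaults described below:
+
+```yaml
+logging.event_data.files:
+  path: /var/log/auditbeat
+  name: auditbeat-events-data
+  keepfiles: 2
+  permissions: 0600
+```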
+
+
+
+### `logging.event_data.files.path` [_logging_event_data_files_path]
+
+The directory that log files are written to. The default is the logs path. See the [Directory layout](/reference/auditbeat/directory-layout.md) section for details.
+
+
+### `logging.event_data.files.name` [_logging_event_data_files_name]
+
+The name of the file that logs are written to. The default is *auditbeat*-events-data.
+
+
+### `logging.event_data.files.rotateeverybytes` [_logging_event_data_files_rotateeverybytes]
+
+The maximum size of a log file. If the limit is reached, a new log file is generated. The default size limit is 5242880 (5 MB).
+
+
+### `logging.event_data.files.keepfiles` [_logging_event_data_files_keepfiles]
+
+The number of most recent rotated log files to keep on disk. Older files are deleted during log rotation. The default value is 2. The `keepfiles` option has to be in the range of 2 to 1024 files.
+
+
+### `logging.event_data.files.permissions` [_logging_event_data_files_permissions]
+
+The permissions mask to apply when rotating log files. The default value is 0600. The `permissions` option must be a valid Unix-style file permissions mask expressed in octal notation. In Go, numbers in octal notation must start with *0*.
+
+The most permissive mask allowed is 0640. If a higher permissions mask is specified via this setting, it will be subject to an umask of 0027.
+
+This option is not supported on Windows.
+
+Examples:
+
+* 0640: give read and write access to the file owner, and read access to members of the group associated with the file.
+* 0600: give read and write access to the file owner, and no access to all others.
+
+
+### `logging.event_data.files.interval` [_logging_event_data_files_interval]
+
+Enable log file rotation on time intervals in addition to size-based rotation. Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h are boundary-aligned with minutes, hours, days, weeks, months, and years as reported by the local system clock. All other intervals are calculated from the unix epoch. Defaults to disabled.
+
+
+### `logging.event_data.files.rotateonstartup` [_logging_event_data_files_rotateonstartup]
+
+If the log file already exists on startup, immediately rotate it and start writing to a new file instead of appending to the existing one. Defaults to false.
diff --git a/docs/reference/auditbeat/configuration-monitor.md b/docs/reference/auditbeat/configuration-monitor.md
new file mode 100644
index 000000000000..5b595db68ae2
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-monitor.md
@@ -0,0 +1,113 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-monitor.html
+---
+
+# Settings for internal collection [configuration-monitor]
+
+Use the following settings to configure internal collection when you are not using {{metricbeat}} to collect monitoring data.
+
+You specify these settings in the X-Pack monitoring section of the `auditbeat.yml` config file:
+
+## `monitoring.enabled` [_monitoring_enabled]
+
+The `monitoring.enabled` config is a boolean setting to enable or disable {{monitoring}}. If set to `true`, monitoring is enabled.
+
+The default value is `false`.
+
+
+## `monitoring.elasticsearch` [_monitoring_elasticsearch]
+
+The {{es}} instances that you want to ship your Auditbeat metrics to. This configuration option contains the following fields:
+
+
+## `monitoring.cluster_uuid` [_monitoring_cluster_uuid]
+
+The `monitoring.cluster_uuid` config identifies the {{es}} cluster under which the monitoring data will appear in the Stack Monitoring UI.
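+
+For example, a sketch that combines these settings (the host, UUID, and credentials are placeholders):
+
+```yaml
+monitoring.enabled: true
+monitoring.cluster_uuid: PRODUCTION_ES_CLUSTER_UUID
+monitoring.elasticsearch:
+  hosts: ["https://monitoring-cluster.example.com:9200"]
+  username: beats_system
+  password: changeme
+```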
+
+### `api_key` [_api_key_3]
+
+The details of the API key to be used to send monitoring information to {{es}}. See [*Grant access using API keys*](/reference/auditbeat/beats-api-keys.md) for more information.
+
+
+### `bulk_max_size` [_bulk_max_size_5]
+
+The maximum number of metrics to bulk in a single {{es}} bulk API index request. The default is `50`. For more information, see [Elasticsearch](/reference/auditbeat/elasticsearch-output.md).
+
+
+### `backoff.init` [_backoff_init_4]
+
+The number of seconds to wait before trying to reconnect to Elasticsearch after a network error. After waiting `backoff.init` seconds, Auditbeat tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is 1s.
+
+
+### `backoff.max` [_backoff_max_4]
+
+The maximum number of seconds to wait before attempting to connect to Elasticsearch after a network error. The default is 60s.
+
+
+### `compression_level` [_compression_level_3]
+
+The gzip compression level. Setting this value to `0` disables compression. The compression level must be in the range of `1` (best speed) to `9` (best compression). The default value is `0`. Increasing the compression level reduces the network usage but increases the CPU usage.
+
+
+### `headers` [_headers_3]
+
+Custom HTTP headers to add to each request. For more information, see [Elasticsearch](/reference/auditbeat/elasticsearch-output.md).
+
+
+### `hosts` [_hosts_4]
+
+The list of {{es}} nodes to connect to. Monitoring metrics are distributed to these nodes in round robin order. For more information, see [Elasticsearch](/reference/auditbeat/elasticsearch-output.md).
+
+
+### `max_retries` [_max_retries_5]
+
+The number of times to retry sending the monitoring metrics after a failure. After the specified number of retries, the metrics are typically dropped. The default value is `3`. For more information, see [Elasticsearch](/reference/auditbeat/elasticsearch-output.md).
+
+
+### `parameters` [_parameters_2]
+
+A dictionary of HTTP parameters to pass within the URL for index operations.
+
+
+### `password` [_password_6]
+
+The password that Auditbeat uses to authenticate with the {{es}} instances for shipping monitoring data.
+
+
+### `metrics.period` [_metrics_period]
+
+The time interval (in seconds) when metrics are sent to the {{es}} cluster. A new snapshot of Auditbeat metrics is generated and scheduled for publishing each period. The default value is `10s`.
+
+
+### `state.period` [_state_period]
+
+The time interval (in seconds) when state information is sent to the {{es}} cluster. A new snapshot of Auditbeat state is generated and scheduled for publishing each period. The default value is `60s`.
+
+
+### `protocol` [_protocol]
+
+The name of the protocol to use when connecting to the {{es}} cluster. The options are: `http` or `https`. The default is `http`. If you specify a URL for `hosts`, however, the value of protocol is overridden by the scheme you specify in the URL.
+
+
+### `proxy_url` [_proxy_url_4]
+
+The URL of the proxy to use when connecting to the {{es}} cluster. For more information, see [Elasticsearch](/reference/auditbeat/elasticsearch-output.md).
+
+
+### `timeout` [_timeout_5]
+
+The HTTP request timeout in seconds for the {{es}} request. The default is `90`.
+
+
+### `ssl` [_ssl_5]
+
+Configuration options for Transport Layer Security (TLS) or Secure Sockets Layer (SSL) parameters like the certificate authority (CA) to use for HTTPS-based connections. If the `ssl` section is missing, the host CAs are used for HTTPS connections to {{es}}. For more information, see [SSL](/reference/auditbeat/configuration-ssl.md).
+
+
+### `username` [_username_4]
+
+The user ID that Auditbeat uses to authenticate with the {{es}} instances for shipping monitoring data.
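+
+Putting these options together, a minimal internal-collection configuration might look like the sketch below. The host, credentials, and cluster UUID are placeholders, not values from this documentation:
+
+```yaml
+monitoring.enabled: true
+monitoring.cluster_uuid: "PRODUCTION-ES-CLUSTER-UUID"   # placeholder
+monitoring.elasticsearch:
+  hosts: ["https://monitoring-es.example.com:9200"]     # placeholder host
+  username: "beats_system"                              # placeholder credentials
+  password: "YOUR_PASSWORD"
+  metrics.period: 10s
+  state.period: 60s
+```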
+
+
+
diff --git a/docs/reference/auditbeat/configuration-output-codec.md b/docs/reference/auditbeat/configuration-output-codec.md
new file mode 100644
index 000000000000..fe4682d9aaef
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-output-codec.md
@@ -0,0 +1,32 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-output-codec.html
+---
+
+# Change the output codec [configuration-output-codec]
+
+For outputs that do not require a specific encoding, you can change the encoding by using the codec configuration. You can specify either the `json` or `format` codec. By default the `json` codec is used.
+
+**`json.pretty`**: If `pretty` is set to true, events will be nicely formatted. The default is false.
+
+**`json.escape_html`**: If `escape_html` is set to true, HTML symbols will be escaped in strings. The default is false.
+
+Example configuration that uses the `json` codec with pretty printing enabled to write events to the console:
+
+```yaml
+output.console:
+ codec.json:
+ pretty: true
+ escape_html: false
+```
+
+**`format.string`**: Configurable format string used to create a custom formatted message.
+
+Example configuration that uses the `format` codec to print the event’s timestamp and message field to the console:
+
+```yaml
+output.console:
+ codec.format:
+ string: '%{[@timestamp]} %{[message]}'
+```
+
diff --git a/docs/reference/auditbeat/configuration-path.md b/docs/reference/auditbeat/configuration-path.md
new file mode 100644
index 000000000000..a541aff4af20
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-path.md
@@ -0,0 +1,78 @@
+---
+navigation_title: "Project paths"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-path.html
+---
+
+# Configure project paths [configuration-path]
+
+
+The `path` section of the `auditbeat.yml` config file contains configuration options that define where Auditbeat looks for its files. For example, Auditbeat looks for the Elasticsearch template file in the configuration path and writes log files in the logs path.
+
+Please see the [Directory layout](/reference/auditbeat/directory-layout.md) section for more details.
+
+Here is an example configuration:
+
+```yaml
+path.home: /usr/share/beat
+path.config: /etc/beat
+path.data: /var/lib/beat
+path.logs: /var/log/
+```
+
+Note that it is possible to override these options by using command line flags.
+
+
+## Configuration options [_configuration_options]
+
+You can specify the following options in the `path` section of the `auditbeat.yml` config file:
+
+
+### `home` [_home]
+
+The home path for the Auditbeat installation. This is the default base path for all other path settings and for miscellaneous files that come with the distribution (for example, the sample dashboards). If not set by a CLI flag or in the configuration file, the default for the home path is the location of the Auditbeat binary.
+
+Example:
+
+```yaml
+path.home: /usr/share/beats
+```
+
+
+### `config` [_config]
+
+The configuration path for the Auditbeat installation. This is the default base path for configuration files, including the main YAML configuration file and the Elasticsearch template file. If not set by a CLI flag or in the configuration file, the default for the configuration path is the home path.
+
+Example:
+
+```yaml
+path.config: /usr/share/beats/config
+```
+
+
+### `data` [_data]
+
+The data path for the Auditbeat installation. This is the default base path for all the files in which Auditbeat needs to store its data. If not set by a CLI flag or in the configuration file, the default for the data path is a `data` subdirectory inside the home path.
+
+Example:
+
+```yaml
+path.data: /var/lib/beats
+```
+
+::::{tip}
+When running multiple Auditbeat instances on the same host, make sure they each have a distinct `path.data` value.
+::::
+
+
+
+### `logs` [_logs]
+
+The logs path for an Auditbeat installation. This is the default location for Auditbeat’s log files. If not set by a CLI flag or in the configuration file, the default for the logs path is a `logs` subdirectory inside the home path.
+
+Example:
+
+```yaml
+path.logs: /var/log/beats
+```
+
diff --git a/docs/reference/auditbeat/configuration-ssl.md b/docs/reference/auditbeat/configuration-ssl.md
new file mode 100644
index 000000000000..519ce4cac46f
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-ssl.md
@@ -0,0 +1,486 @@
+---
+navigation_title: "SSL"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-ssl.html
+---
+
+# Configure SSL [configuration-ssl]
+
+
+You can specify SSL options when you configure:
+
+* [outputs](/reference/auditbeat/configuring-output.md) that support SSL
+* the [Kibana endpoint](/reference/auditbeat/setup-kibana-endpoint.md)
+
+Example output config with SSL enabled:
+
+```yaml
+output.elasticsearch.hosts: ["https://192.168.1.42:9200"]
+output.elasticsearch.ssl.certificate_authorities: ["/etc/client/ca.pem"]
+output.elasticsearch.ssl.certificate: "/etc/client/cert.pem"
+output.elasticsearch.ssl.key: "/etc/client/cert.key"
+```
+
+Also see [*Secure communication with Logstash*](/reference/auditbeat/configuring-ssl-logstash.md).
+
+Example Kibana endpoint config with SSL enabled:
+
+```yaml
+setup.kibana.host: "https://192.0.2.255:5601"
+setup.kibana.ssl.enabled: true
+setup.kibana.ssl.certificate_authorities: ["/etc/client/ca.pem"]
+setup.kibana.ssl.certificate: "/etc/client/cert.pem"
+setup.kibana.ssl.key: "/etc/client/cert.key"
+```
+
+There are a number of SSL configuration options available to you:
+
+* [Common configuration options](#ssl-common-config)
+* [Client configuration options](#ssl-client-config)
+* [Server configuration options](#ssl-server-config)
+
+
+## Common configuration options [ssl-common-config]
+
+Common SSL configuration options can be used in both client and server configurations. You can specify the following options in the `ssl` section of each subsystem that supports SSL.
+
+
+### `enabled` [enabled]
+
+To disable SSL configuration, set the value to `false`. The default value is `true`.
+
+::::{note}
+SSL settings are disabled if either `enabled` is set to `false` or the `ssl` section is missing.
+
+::::
+
+
+
+### `supported_protocols` [supported-protocols]
+
+The list of allowed SSL/TLS versions. If the SSL/TLS server negotiates a protocol version that is not configured, the connection is dropped during or after the handshake. The allowed protocol versions are `TLSv1.1`, `TLSv1.2`, and `TLSv1.3`.
+
+The default value is `[TLSv1.2, TLSv1.3]`.
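+
+For example, to accept only TLS 1.2 connections (a sketch; see the cipher suite note below for the side effect this has on the default TLS 1.3 ciphers):
+
+```yaml
+ssl.supported_protocols: [TLSv1.2]
+```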
+
+
+### `cipher_suites` [cipher-suites]
+
+The list of cipher suites to use. The first entry has the highest priority. If this option is omitted, the Go crypto library’s [default suites](https://golang.org/pkg/crypto/tls/) are used (recommended).
+
+Note that if TLS 1.3 is enabled (which is true by default), then the default TLS 1.3 cipher suites are always included, because Go’s standard library adds them to all connections. In order to exclude the default TLS 1.3 ciphers, TLS 1.3 must also be disabled, e.g. with the setting `ssl.supported_protocols = [TLSv1.2]`.
+
+The following cipher suites are available:
+
+| Cipher | Notes |
+| --- | --- |
+| ECDHE-ECDSA-AES-128-CBC-SHA | |
+| ECDHE-ECDSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default. |
+| ECDHE-ECDSA-AES-128-GCM-SHA256 | TLS 1.2 only. |
+| ECDHE-ECDSA-AES-256-CBC-SHA | |
+| ECDHE-ECDSA-AES-256-GCM-SHA384 | TLS 1.2 only. |
+| ECDHE-ECDSA-CHACHA20-POLY1305 | TLS 1.2 only. |
+| ECDHE-ECDSA-RC4-128-SHA | Disabled by default. RC4 not recommended. |
+| ECDHE-RSA-3DES-CBC3-SHA | |
+| ECDHE-RSA-AES-128-CBC-SHA | |
+| ECDHE-RSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default. |
+| ECDHE-RSA-AES-128-GCM-SHA256 | TLS 1.2 only. |
+| ECDHE-RSA-AES-256-CBC-SHA | |
+| ECDHE-RSA-AES-256-GCM-SHA384 | TLS 1.2 only. |
+| ECDHE-RSA-CHACHA20-POLY1205 | TLS 1.2 only. |
+| ECDHE-RSA-RC4-128-SHA | Disabled by default. RC4 not recommended. |
+| RSA-3DES-CBC3-SHA | |
+| RSA-AES-128-CBC-SHA | |
+| RSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default. |
+| RSA-AES-128-GCM-SHA256 | TLS 1.2 only. |
+| RSA-AES-256-CBC-SHA | |
+| RSA-AES-256-GCM-SHA384 | TLS 1.2 only. |
+| RSA-RC4-128-SHA | Disabled by default. RC4 not recommended. |
+
+Here is a list of acronyms used in defining the cipher suites:
+
+* 3DES: Cipher suites using triple DES
+* AES-128/256: Cipher suites using AES with 128/256-bit keys.
+* CBC: Cipher using Cipher Block Chaining as block cipher mode.
+* ECDHE: Cipher suites using Elliptic Curve Diffie-Hellman (DH) ephemeral key exchange.
+* ECDSA: Cipher suites using Elliptic Curve Digital Signature Algorithm for authentication.
+* GCM: Galois/Counter mode is used for symmetric key cryptography.
+* RC4: Cipher suites using RC4.
+* RSA: Cipher suites using RSA.
+* SHA, SHA256, SHA384: Cipher suites using SHA-1, SHA-256 or SHA-384.
+
+
+### `curve_types` [curve-types]
+
+The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange).
+
+The following elliptic curve types are available:
+
+* P-256
+* P-384
+* P-521
+* X25519
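+
+A sketch that restricts key exchange to two of these curves:
+
+```yaml
+ssl.curve_types: [P-256, X25519]
+```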
+
+
+### `ca_sha256` [ca-sha256]
+
+This configures a certificate pin that you can use to ensure that a specific certificate is part of the verified chain.
+
+The pin is a base64 encoded string of the SHA-256 of the certificate.
+
+::::{note}
+This check is not a replacement for the normal SSL validation, but it adds additional validation. If this option is used with `verification_mode` set to `none`, the check will always fail because it will not receive any verified chains.
+::::
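+
+One way to compute the pin on a Unix-like system is to hash the DER-encoded certificate and base64-encode the result; the certificate path below is a placeholder:
+
+```sh
+# Base64-encoded SHA-256 of the DER form of the CA certificate
+openssl x509 -in ca.crt -outform DER | openssl dgst -sha256 -binary | openssl base64
+```
+
+The resulting string is the value to configure as the pin, for example:
+
+```yaml
+ssl.ca_sha256: "<BASE64_SHA256_OF_CA_CERT>"   # placeholder value
+```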
+
+
+
+## Client configuration options [ssl-client-config]
+
+You can specify the following options in the `ssl` section of each subsystem that supports SSL.
+
+
+### `certificate_authorities` [client-certificate-authorities]
+
+The list of root certificates used for server verification. If `certificate_authorities` is empty or not set, the system keystore is used. If `certificate_authorities` is self-signed, the host system needs to trust that CA cert as well.
+
+By default you can specify a list of files that `auditbeat` will read, but you can also embed a certificate directly in the `YAML` configuration:
+
+```yaml
+certificate_authorities:
+ - |
+ -----BEGIN CERTIFICATE-----
+ MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF
+ ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2
+ MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB
+ BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n
+ fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl
+ 94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t
+ /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP
+ PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41
+ CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O
+ BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux
+ 8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D
+ 874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw
+ 3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA
+ H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu
+ 8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0
+ yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk
+ sxSmbIUfc2SGJGCJD4I=
+ -----END CERTIFICATE-----
+```
+
+
+### `certificate: "/etc/client/cert.pem"` [client-certificate]
+
+The path to the certificate for SSL client authentication. This option is only required if `client_authentication` is specified. If the certificate is not specified, client authentication is not available, and the connection might fail if the server requests client authentication. If the SSL server does not require client authentication, the certificate will be loaded, but not requested or used by the server.
+
+When this option is configured, the [`key`](#client-key) option is also required. The certificate option supports embedding of the certificate:
+
+```yaml
+certificate: |
+ -----BEGIN CERTIFICATE-----
+ MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF
+ ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2
+ MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB
+ BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n
+ fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl
+ 94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t
+ /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP
+ PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41
+ CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O
+ BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux
+ 8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D
+ 874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw
+ 3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA
+ H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu
+ 8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0
+ yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk
+ sxSmbIUfc2SGJGCJD4I=
+ -----END CERTIFICATE-----
+```
+
+
+### `key: "/etc/client/cert.key"` [client-key]
+
+The client certificate key used for client authentication. It is only required if `client_authentication` is configured. The key option supports embedding of the private key:
+
+```yaml
+key: |
+ -----BEGIN PRIVATE KEY-----
+ MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDXHufGPycpCOfI
+ sjl6cRn8NP4DLxdIVEAHFK0jMRDup32UQOPW+DleEsFpgN9/ebi9ngdjQfMvKnUP
+ Zrl1HTwVhOJfazGeoJn7vdDeQebhJfeDXHwX2DiotXyUPYu1ioU45UZDAoAZFj5F
+ KJLwWRUbfEbRe8yO+wUhKKxxkApPbfw+wUtBicn1RIX7W1nBRABt1UXKDIRe5FM2
+ MKfqhEqK4hUWC3g1r+vGTrxu3qFpzz7L2UrRFRIpo7yuTUhEhEGvcVsiTppTil4Z
+ HcprXFHf5158elEwhYJ5IM0nU1leNQiOgemifbLwkyNkLqCKth8V/4sezr1tYblZ
+ nMh1cclBAgMBAAECggEBAKdP5jyOicqknoG9/G564RcDsDyRt64NuO7I6hBg7SZx
+ Jn7UKWDdFuFP/RYtoabn6QOxkVVlydp5Typ3Xu7zmfOyss479Q/HIXxmmbkD0Kp0
+ eRm2KN3y0b6FySsS40KDRjKGQCuGGlNotW3crMw6vOvvsLTlcKgUHF054UVCHoK/
+ Piz7igkDU7NjvJeha53vXL4hIjb10UtJNaGPxIyFLYRZdRPyyBJX7Yt3w8dgz8WM
+ epOPu0dq3bUrY3WQXcxKZo6sQjE1h7kdl4TNji5jaFlvD01Y8LnyG0oThOzf0tve
+ Gaw+kuy17gTGZGMIfGVcdeb+SlioXMAAfOps+mNIwTECgYEA/gTO8W0hgYpOQJzn
+ BpWkic3LAoBXWNpvsQkkC3uba8Fcps7iiEzotXGfwYcb5Ewf5O3Lrz1EwLj7GTW8
+ VNhB3gb7bGOvuwI/6vYk2/dwo84bwW9qRWP5hqPhNZ2AWl8kxmZgHns6WTTxpkRU
+ zrfZ5eUrBDWjRU2R8uppgRImsxMCgYEA2MxuL/C/Ko0d7XsSX1kM4JHJiGpQDvb5
+ GUrlKjP/qVyUysNF92B9xAZZHxxfPWpdfGGBynhw7X6s+YeIoxTzFPZVV9hlkpAA
+ 5igma0n8ZpZEqzttjVdpOQZK8o/Oni/Q2S10WGftQOOGw5Is8+LY30XnLvHBJhO7
+ TKMurJ4KCNsCgYAe5TDSVmaj3dGEtFC5EUxQ4nHVnQyCpxa8npL+vor5wSvmsfUF
+ hO0s3GQE4sz2qHecnXuPldEd66HGwC1m2GKygYDk/v7prO1fQ47aHi9aDQB9N3Li
+ e7Vmtdn3bm+lDjtn0h3Qt0YygWj+wwLZnazn9EaWHXv9OuEMfYxVgYKpdwKBgEze
+ Zy8+WDm5IWRjn8cI5wT1DBT/RPWZYgcyxABrwXmGZwdhp3wnzU/kxFLAl5BKF22T
+ kRZ+D+RVZvVutebE9c937BiilJkb0AXLNJwT9pdVLnHcN2LHHHronUhV7vetkop+
+ kGMMLlY0lkLfoGq1AxpfSbIea9KZam6o6VKxEnPDAoGAFDCJm+ZtsJK9nE5GEMav
+ NHy+PwkYsHhbrPl4dgStTNXLenJLIJ+Ke0Pcld4ZPfYdSyu/Tv4rNswZBNpNsW9K
+ 0NwJlyMBfayoPNcJKXrH/csJY7hbKviAHr1eYy9/8OL0dHf85FV+9uY5YndLcsDc
+ nygO9KTJuUiBrLr0AHEnqko=
+ -----END PRIVATE KEY-----
+```
+
+
+### `key_passphrase` [client-key-passphrase]
+
+The passphrase used to decrypt an encrypted key stored in the configured `key` file.
+
+
+### `verification_mode` [client-verification-mode]
+
+Controls the verification of server certificates. Valid values are:
+
+`full`
+: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate.
+
+`strict`
+: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate. If the Subject Alternative Name is empty, it returns an error.
+
+`certificate`
+: Verifies that the provided certificate is signed by a trusted authority (CA), but does not perform any hostname verification.
+
+`none`
+: Performs *no verification* of the server’s certificate. This mode disables many of the security benefits of SSL/TLS and should only be used after cautious consideration. It is primarily intended as a temporary diagnostic mechanism when attempting to resolve TLS errors; its use in production environments is strongly discouraged.
+
+ The default value is `full`.
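+
+As a sketch, relaxing hostname checking for the {{es}} output while still validating the certificate chain (a deliberate trade-off, not a recommended default):
+
+```yaml
+output.elasticsearch.ssl.verification_mode: certificate
+```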
+
+
+
+### `ca_trusted_fingerprint` [ca_trusted_fingerprint]
+
+The hex-encoded SHA-256 fingerprint of a CA certificate. If this certificate is present in the chain during the handshake, it will be added to the `certificate_authorities` list and the handshake will continue normally.
+
+To get the fingerprint from a CA certificate on a Unix-like system, you can use the following command, where `ca.crt` is the certificate.
+
+```sh
+openssl x509 -fingerprint -sha256 -noout -in ./ca.crt | awk --field-separator="=" '{print $2}' | sed 's/://g'
+```
+
+
+## Server configuration options [ssl-server-config]
+
+You can specify the following options in the `ssl` section of each subsystem that supports SSL.
+
+
+### `certificate_authorities` [server-certificate-authorities]
+
+The list of root certificates used for client verification. This option is only required if `client_authentication` is configured. If `certificate_authorities` is empty or not set, and `client_authentication` is configured, the system keystore is used.
+
+If `certificate_authorities` is self-signed, the host system needs to trust that CA cert as well. By default you can specify a list of files that `auditbeat` will read, but you can also embed a certificate directly in the `YAML` configuration:
+
+```yaml
+certificate_authorities:
+ - |
+ -----BEGIN CERTIFICATE-----
+ MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF
+ ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2
+ MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB
+ BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n
+ fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl
+ 94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t
+ /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP
+ PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41
+ CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O
+ BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux
+ 8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D
+ 874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw
+ 3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA
+ H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu
+ 8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0
+ yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk
+ sxSmbIUfc2SGJGCJD4I=
+ -----END CERTIFICATE-----
+```
+
+
+### `certificate: "/etc/server/cert.pem"` [server-certificate]
+
+The end-entity (leaf) certificate that the server uses to identify itself. If the certificate is signed by a certificate authority (CA), then it should include intermediate CA certificates, sorted from leaf to root. For servers, a `certificate` and [`key`](#server-key) must be specified.
+
+The certificate option supports embedding of the PEM certificate content. This example contains the leaf certificate followed by issuer’s certificate.
+
+```yaml
+certificate: |
+ -----BEGIN CERTIFICATE-----
+ MIIF2jCCA8KgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBlMQswCQYDVQQGEwJVUzEW
+ MBQGA1UEBxMNU2FuIEZyYW5jaXNjbzEcMBoGA1UECRMTV2VzdCBFbCBDYW1pbm8g
+ UmVhbDEOMAwGA1UEERMFOTQwNDAxEDAOBgNVBAoTB0VsYXN0aWMwHhcNMjMxMDMw
+ MTkyMzU4WhcNMjMxMDMxMTkyMzU4WjB2MQswCQYDVQQGEwJVUzEWMBQGA1UEBxMN
+ U2FuIEZyYW5jaXNjbzEcMBoGA1UECRMTV2VzdCBFbCBDYW1pbm8gUmVhbDEOMAwG
+ A1UEERMFOTQwNDAxEDAOBgNVBAoTB0VsYXN0aWMxDzANBgNVBAMTBnNlcnZlcjCC
+ AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALW37cart7l0KE3LCStFbiGm
+ Rr/QSkuPv+Y+SXFT4zXrMFP3mOfUCVsR4lugv+jmql9qjbwR9jKsgKXA1kSvNXSZ
+ lLYWRcNnQ+QzwKxJf/jy246nSfqb2FKvVMs580lDwKHHxn/FSpHV93O4Goy5cLfF
+ ACE7BSdJdxl5DVAMmmkzd6gBGgN8dQIbcyJYuIZYQt44PqSYh/BomTyOXKrmvX4y
+ t7/pF+ldJjWZq/6SfCq6WE0jSrpI1P/42Qd9h5Tsnl6qsUGA2Tz5ZqKz2cyxaIlK
+ wL9tYDionfFIl+jZcxkGPF2a14O1TycCI0B/z+0VL+HR/8fKAB0NdP+QRLaPWOrn
+ DvraAO+bVKC6VrQyUYNUOwtd2gMUqm6Hzrf4s3wjP754eSJkvnSoSAB6l7ZmJKe5
+ Pz5oDDOVPwKHv/MrhsCSMNFeXSEO+rq9TtYEAFQI5rFGHlURga8kA1T1pirHyEtS
+ 2o8GUSPSHVulaPdFnHg4xfTexfRYLCqya75ISJuY2/+2GblCie/re1GFitZCZ46/
+ xiQQDOjgL96soDVZ+cTtMpXanslgDapTts9LPIJTd9FUJCY1omISGiSjABRuTlCV
+ 8054ja4BKVahSd5BqqtVkWyV64SCut6kce2ndwBkyFvlZ6cteLCW7KtzYvba4XBb
+ YIAs+H+9e/bZUVhws5mFAgMBAAGjgYMwgYAwDgYDVR0PAQH/BAQDAgeAMB0GA1Ud
+ JQQWMBQGCCsGAQUFBwMCBggrBgEFBQcDATAOBgNVHQ4EBwQFAQIDBAUwPwYDVR0R
+ BDgwNoIJbG9jYWxob3N0ghFiZWF0cy5leGFtcGxlLmNvbYcEfwAAAYcQAAAAAAAA
+ AAAAAAAAAAAAATANBgkqhkiG9w0BAQsFAAOCAgEAldSZOUi+OUR46ERQuINl1oED
+ mjNsQ9FNP/RDu8mPJaNb5v2sAbcpuZb9YdnScT+d+n0+LMd5uz2g67Qr73QCpXwL
+ 9YJIs56i7qMTKXlVvRQrvF9P/zP3sm5Zfd2I/x+8oXgEeYsxAWipJ8RsbnN1dtu8
+ C4l+P0E58jjrjom11W90RiHYaT0SI2PPBTTRhYLz0HayThPZDMdFnIQqVxUYbQD5
+ ybWu77hnsvC/g2C8/N2LAdQGJJ67owMa5T3YRneiaSvvOf3I45oeLE+olGAPdrSq
+ 5Sp0G7fcAKMRPxcwYeD7V5lfYMtb+RzECpYAHT8zHKLZl6/34q2k8P8EWEpAsD80
+ +zSbCkdvNiU5lU90rV8E2baTKCg871k4O8sT48eUyDps6ZUCfT1dgefXeyOTV5bY
+ 864Zo6bWJhAJ7Qa2d4HJkqPzSbqsosHVobojgkOcMqkStLHd8sgtCoFmJMflbp7E
+ ghawl/RVFEkL9+TWy9fR8sJWRx13P8CUP6AL9kVmcU2c3gMNpvQfIii9QOnQrRsi
+ yZj9FKl+ZM49I6RQ6dY5JVgWtpVm/+GBVuy1Aj91JEjw7r1jAeir5K9LAXG8kEN9
+ irndx1SK2MMTY79lGHFGQRv3vnQGI0Wzjtn31YJ7qIFNJ1WWbAZLR9FBtzmMeXM6
+ puoJ9UYvfIcHUGPdZGU=
+ -----END CERTIFICATE-----
+ -----BEGIN CERTIFICATE-----
+ MIIFpjCCA46gAwIBAgIBATANBgkqhkiG9w0BAQsFADBlMQswCQYDVQQGEwJVUzEW
+ MBQGA1UEBxMNU2FuIEZyYW5jaXNjbzEcMBoGA1UECRMTV2VzdCBFbCBDYW1pbm8g
+ UmVhbDEOMAwGA1UEERMFOTQwNDAxEDAOBgNVBAoTB0VsYXN0aWMwHhcNMjMxMDMw
+ MTkyMzU2WhcNMjMxMDMxMTkyMzU2WjBlMQswCQYDVQQGEwJVUzEWMBQGA1UEBxMN
+ U2FuIEZyYW5jaXNjbzEcMBoGA1UECRMTV2VzdCBFbCBDYW1pbm8gUmVhbDEOMAwG
+ A1UEERMFOTQwNDAxEDAOBgNVBAoTB0VsYXN0aWMwggIiMA0GCSqGSIb3DQEBAQUA
+ A4ICDwAwggIKAoICAQDQP3hJt4jTIo+tBXB/R4RuBTvv6OOago9joxlNDm0abseJ
+ ehE0V8FDi0SSpa7ZiqwCGq/deu5OIWVNpFCLHeH5YBriNmB7oPkNRCleu50JsUrG
+ RjSTtBIJcu/CVpD7Q5XMbhbhYcPArrxrSreo3ox8a+2X7b8nA1xPgIcWqSCgs9iV
+ lwKHaQWNTUXYwwZG7b9WG4EJaki6t1+1QbDDJU0oWrZNg23wQEBvEVRDQs7kadvm
+ 9YtZLPULlSyV4Rk3yNW8dPXHjcz2wp3PBPIWIQe9mzYU608307TkUMVN2EEOImxl
+ Wm1RtXYvvVb1LiY0C2lYbN3jLZQzffK5RsS87ocqTQM+HvDBv/PupHDvW08wietu
+ RtRbdx/2cN0GLmOHnkWKx+GlYDZfAtIj958fTKl2hHyNqJ1pE7vksSYBwBxMFQem
+ eSGzw5pO53kmPcZO203YQ2qoJd7z1aLf7eAOqDn5zwlYNc00bZ6DwTZsyptGv9sZ
+ zcZuovppPgCN4f1I9ja/NPKep+sVKfQqR5HuOFOPFcr6oOioESJSgIvXXF9RhCVh
+ UMeZKWWSCNm1ea4h6q8OJdQfM7XXkXm+dEyF0TogC00CidZWuYMZcgXND5p/1Di5
+ PkCKPUMllCoK0oaTfFioNW7qtNbDGQrW+spwDa4kjJNKYtDD0jjPgFMgSzQ2MwID
+ AQABo2EwXzAOBgNVHQ8BAf8EBAMCAoQwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsG
+ AQUFBwMBMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFImOXc9Tv+mgn9jOsPig
+ 9vlAUTa+MA0GCSqGSIb3DQEBCwUAA4ICAQBZ9tqU88Nmgf+vDgkKMKLmLMaRCRlV
+ HcYrm7WoWLX+q6VSbmvf5eD5OrzzbAnnp16iXap8ivsAEFTo8XWh/bjl7G/2jetR
+ xZD2WHtzmAg3s4SVsEHIyFUF1ERwnjO2ndHjoIsx8ktUk1aNrmgPI6s07fkULDm+
+ 2aXyBSZ9/oimZM/s3IqYJecxwE+yyS+FiS6mSDCCVIyQXdtVAbFHegyiBYv8EbwF
+ Xz70QiqQtxotGlfts/3uN1s+xnEoWz5E6S5DQn4xQh0xiKSXPizMXou9xKzypeSW
+ qtNdwtg62jKWDaVriBfrvoCnyjjCIjmcTcvA2VLmeZShyTuIucd0lkg2NKIGeM7I
+ o33hmdiKaop1fVtj8zqXvCRa3ecmlvcxPKX0otVFORFNOfaPjH/CjW0CnP0LByGK
+ YW19w0ncJZa9cc1SlNL28lnBhW+i1+ViR02wtjabH9XO+mtxuaEPDZ1hLhhjktqI
+ Y2oFUso4C5xiTU/hrH8+cFv0dn/+zyQoLfJEQbUX9biFeytt7T4Yynwhdy7jryqH
+ fdy/QM26YnsE8D7l4mv99z+zII0IRGnQOuLTuNAIyGJUf69hCDubZFDeHV/IB9hU
+ 6GA6lBpsJlTDgfJLbtKuAHxdn1DO+uGg0GxgwggH6Vh9x9yQK2E6BaepJisL/zNB
+ RQQmEyTn1hn/eA==
+ -----END CERTIFICATE-----
+```
+
+
+### `key: "/etc/server/cert.key"` [server-key]
+
+The server certificate key used for authentication is required. The key option supports embedding of the private key:
+
+```yaml
+key: |
+ -----BEGIN PRIVATE KEY-----
+ MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDXHufGPycpCOfI
+ sjl6cRn8NP4DLxdIVEAHFK0jMRDup32UQOPW+DleEsFpgN9/ebi9ngdjQfMvKnUP
+ Zrl1HTwVhOJfazGeoJn7vdDeQebhJfeDXHwX2DiotXyUPYu1ioU45UZDAoAZFj5F
+ KJLwWRUbfEbRe8yO+wUhKKxxkApPbfw+wUtBicn1RIX7W1nBRABt1UXKDIRe5FM2
+ MKfqhEqK4hUWC3g1r+vGTrxu3qFpzz7L2UrRFRIpo7yuTUhEhEGvcVsiTppTil4Z
+ HcprXFHf5158elEwhYJ5IM0nU1leNQiOgemifbLwkyNkLqCKth8V/4sezr1tYblZ
+ nMh1cclBAgMBAAECggEBAKdP5jyOicqknoG9/G564RcDsDyRt64NuO7I6hBg7SZx
+ Jn7UKWDdFuFP/RYtoabn6QOxkVVlydp5Typ3Xu7zmfOyss479Q/HIXxmmbkD0Kp0
+ eRm2KN3y0b6FySsS40KDRjKGQCuGGlNotW3crMw6vOvvsLTlcKgUHF054UVCHoK/
+ Piz7igkDU7NjvJeha53vXL4hIjb10UtJNaGPxIyFLYRZdRPyyBJX7Yt3w8dgz8WM
+ epOPu0dq3bUrY3WQXcxKZo6sQjE1h7kdl4TNji5jaFlvD01Y8LnyG0oThOzf0tve
+ Gaw+kuy17gTGZGMIfGVcdeb+SlioXMAAfOps+mNIwTECgYEA/gTO8W0hgYpOQJzn
+ BpWkic3LAoBXWNpvsQkkC3uba8Fcps7iiEzotXGfwYcb5Ewf5O3Lrz1EwLj7GTW8
+ VNhB3gb7bGOvuwI/6vYk2/dwo84bwW9qRWP5hqPhNZ2AWl8kxmZgHns6WTTxpkRU
+ zrfZ5eUrBDWjRU2R8uppgRImsxMCgYEA2MxuL/C/Ko0d7XsSX1kM4JHJiGpQDvb5
+ GUrlKjP/qVyUysNF92B9xAZZHxxfPWpdfGGBynhw7X6s+YeIoxTzFPZVV9hlkpAA
+ 5igma0n8ZpZEqzttjVdpOQZK8o/Oni/Q2S10WGftQOOGw5Is8+LY30XnLvHBJhO7
+ TKMurJ4KCNsCgYAe5TDSVmaj3dGEtFC5EUxQ4nHVnQyCpxa8npL+vor5wSvmsfUF
+ hO0s3GQE4sz2qHecnXuPldEd66HGwC1m2GKygYDk/v7prO1fQ47aHi9aDQB9N3Li
+ e7Vmtdn3bm+lDjtn0h3Qt0YygWj+wwLZnazn9EaWHXv9OuEMfYxVgYKpdwKBgEze
+ Zy8+WDm5IWRjn8cI5wT1DBT/RPWZYgcyxABrwXmGZwdhp3wnzU/kxFLAl5BKF22T
+ kRZ+D+RVZvVutebE9c937BiilJkb0AXLNJwT9pdVLnHcN2LHHHronUhV7vetkop+
+ kGMMLlY0lkLfoGq1AxpfSbIea9KZam6o6VKxEnPDAoGAFDCJm+ZtsJK9nE5GEMav
+ NHy+PwkYsHhbrPl4dgStTNXLenJLIJ+Ke0Pcld4ZPfYdSyu/Tv4rNswZBNpNsW9K
+ 0NwJlyMBfayoPNcJKXrH/csJY7hbKviAHr1eYy9/8OL0dHf85FV+9uY5YndLcsDc
+ nygO9KTJuUiBrLr0AHEnqko=
+ -----END PRIVATE KEY-----
+```
+
+
+### `key_passphrase` [server-key-passphrase]
+
+The passphrase is used to decrypt an encrypted key stored in the configured `key` file.
+
+
+### `verification_mode` [server-verification-mode]
+
+Controls the verification of client certificates. Valid values are:
+
+`full`
+: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate.
+
+`strict`
+: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate. If the Subject Alternative Name is empty, it returns an error.
+
+`certificate`
+: Verifies that the provided certificate is signed by a trusted authority (CA), but does not perform any hostname verification.
+
+`none`
+: Performs *no verification* of the client’s certificate. This mode disables many of the security benefits of SSL/TLS and should only be used after cautious consideration. It is primarily intended as a temporary diagnostic mechanism when attempting to resolve TLS errors; its use in production environments is strongly discouraged.
+
+ The default value is `full`.
+
+
+
+### `renegotiation` [server-renegotiation]
+
+This configures what types of TLS renegotiation are supported. The valid options are:
+
+`never`
+: Disables renegotiation.
+
+`once`
+: Allows a remote server to request renegotiation once per connection.
+
+`freely`
+: Allows a remote server to request renegotiation repeatedly.
+
+ The default value is `never`.
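+
+A sketch that allows a single renegotiation per connection:
+
+```yaml
+ssl.renegotiation: once
+```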
+
+
+
+### `restart_on_cert_change.enabled` [exit_on_cert_change_enabled]
+
+If set to `true`, Auditbeat will restart if any file listed by `key`, `certificate`, or `certificate_authorities` is modified.
+
+::::{note}
+This feature is NOT supported on Windows. The default value is `false`.
+::::
+
+
+::::{note}
+This feature requires the `execve` system call to be enabled. If you have a custom seccomp policy in place, make sure to allow for `execve`.
+::::
+
+
+
+### `restart_on_cert_change.period` [restart_on_cert_change_period]
+
+Specifies how often the files are checked for changes. Do not set the period to less than 1s because the modification time of files is often stored in seconds. Setting the period to less than 1s will result in a validation error and Auditbeat will not start. The default value is 1m.
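+
+A sketch enabling certificate-change restarts with a slower check period (assuming the options live under the same `ssl` section as the files they watch):
+
+```yaml
+ssl.restart_on_cert_change.enabled: true
+ssl.restart_on_cert_change.period: 5m
+```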
+
diff --git a/docs/reference/auditbeat/configuration-template.md b/docs/reference/auditbeat/configuration-template.md
new file mode 100644
index 000000000000..e06b29197b15
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-template.md
@@ -0,0 +1,112 @@
+---
+navigation_title: "Elasticsearch index template"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-template.html
+---
+
+# Configure Elasticsearch index template loading [configuration-template]
+
+
+The `setup.template` section of the `auditbeat.yml` config file specifies the [index template](docs-content://manage-data/data-store/templates.md) to use for setting mappings in Elasticsearch. If template loading is enabled (the default), Auditbeat loads the index template automatically after successfully connecting to Elasticsearch.
+
+::::{note}
+A connection to Elasticsearch is required to load the index template. If the configured output is not Elasticsearch (or {{ess}}), you must [load the template manually](/reference/auditbeat/auditbeat-template.md#load-template-manually).
+::::
+
+
+You can adjust the following settings to load your own template or overwrite an existing one.
+
+**`setup.template.enabled`**
+: Set to false to disable template loading. If this is set to false, you must [load the template manually](/reference/auditbeat/auditbeat-template.md#load-template-manually).
+
+**`setup.template.name`**
+: The name of the template. The default is `auditbeat`. The Auditbeat version is always appended to the given name, so the final name is `auditbeat-%{[agent.version]}`.
+
+**`setup.template.pattern`**
+: The template pattern to apply to the default index settings. The default pattern is `auditbeat`. The Auditbeat version is always included in the pattern, so the final pattern is `auditbeat-%{[agent.version]}`.
+
+ Example:
+
+ ```yaml
+ setup.template.name: "auditbeat"
+ setup.template.pattern: "auditbeat"
+ ```
+
+
+**`setup.template.fields`**
+: The path to the YAML file describing the fields. The default is `fields.yml`. If a relative path is set, it is considered relative to the config path. See the [Directory layout](/reference/auditbeat/directory-layout.md) section for details.
+
+**`setup.template.overwrite`**
+: A boolean that specifies whether to overwrite the existing template. The default is false. Do not enable this option if you start more than one instance of Auditbeat at the same time. It can overload {{es}} by sending too many template update requests.
+
+**`setup.template.settings`**
+: A dictionary of settings to place into the `settings.index` dictionary of the Elasticsearch template. For more details about the available Elasticsearch mapping options, please see the Elasticsearch [mapping reference](docs-content://manage-data/data-store/mapping.md).
+
+ Example:
+
+ ```yaml
+ setup.template.name: "auditbeat"
+ setup.template.fields: "fields.yml"
+ setup.template.overwrite: false
+ setup.template.settings:
+ index.number_of_shards: 1
+ index.number_of_replicas: 1
+ ```
+
+
+**`setup.template.settings._source`**
+: A dictionary of settings for the `_source` field. For the available settings, please see the Elasticsearch [reference](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md).
+
+ Example:
+
+ ```yaml
+ setup.template.name: "auditbeat"
+ setup.template.fields: "fields.yml"
+ setup.template.overwrite: false
+ setup.template.settings:
+ _source.enabled: false
+ ```
+
+
+**`setup.template.append_fields`**
+: A list of fields to be added to the template and {{kib}} index pattern. This setting adds new fields. It does not overwrite or change existing fields.
+
+ This setting is useful when your data contains fields that Auditbeat doesn’t know about in advance.
+
+ If `append_fields` is specified along with `overwrite: true`, Auditbeat overwrites the existing template and applies the new template when creating new indices. Existing indices are not affected. If you’re running multiple instances of Auditbeat with different `append_fields` settings, the last one writing the template takes precedence.
+
+ Any changes to this setting also affect the {{kib}} index pattern.
+
+ Example config:
+
+ ```yaml
+ setup.template.overwrite: true
+ setup.template.append_fields:
+ - name: test.name
+ type: keyword
+ - name: test.hostname
+ type: long
+ ```
+
+
+**`setup.template.json.enabled`**
+: Set to `true` to load a JSON-based template file. Specify the path to your {{es}} index template file and set the name of the template.
+
+ ```yaml
+ setup.template.json.enabled: true
+ setup.template.json.path: "template.json"
+ setup.template.json.name: "template-name"
+ setup.template.json.data_stream: false
+ ```
+
+
+::::{note}
+If the JSON template is used, the `fields.yml` is skipped for the template generation.
+::::
+
+
+::::{note}
+If the JSON template is a data stream, set `setup.template.json.data_stream`.
+::::
+
+
diff --git a/docs/reference/auditbeat/configure-cloud-id.md b/docs/reference/auditbeat/configure-cloud-id.md
new file mode 100644
index 000000000000..7e6e27b99092
--- /dev/null
+++ b/docs/reference/auditbeat/configure-cloud-id.md
@@ -0,0 +1,34 @@
+---
+navigation_title: "{{ess}}"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configure-cloud-id.html
+---
+
+# Configure the output for {{ess}} on {{ecloud}} [configure-cloud-id]
+
+
+Auditbeat comes with two settings that simplify the output configuration when used together with [{{ess}}](https://www.elastic.co/cloud/elasticsearch-service?page=docs&placement=docs-body). When defined, these settings overwrite settings from other parts of the configuration.
+
+Example:
+
+```yaml
+cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw=="
+cloud.auth: "elastic:{pwd}"
+```
+
+These settings can also be specified at the command line, like this:
+
+```sh
+auditbeat -e -E cloud.id="" -E cloud.auth=""
+```
+
+## `cloud.id` [_cloud_id]
+
+The Cloud ID, which can be found in the {{ess}} web console, is used by Auditbeat to resolve the {{es}} and {{kib}} URLs. This setting overwrites the `output.elasticsearch.hosts` and `setup.kibana.host` settings. For more on locating and configuring the Cloud ID, see [Configure Beats and Logstash with Cloud ID](docs-content://deploy-manage/deploy/cloud-enterprise/find-cloud-id.md).
+
+
+## `cloud.auth` [_cloud_auth]
+
+When specified, the `cloud.auth` overwrites the `output.elasticsearch.username` and `output.elasticsearch.password` settings. Because the Kibana settings inherit the username and password from the {{es}} output, this can also be used to set the `setup.kibana.username` and `setup.kibana.password` options.
+
+
diff --git a/docs/reference/auditbeat/configuring-howto-auditbeat.md b/docs/reference/auditbeat/configuring-howto-auditbeat.md
new file mode 100644
index 000000000000..56bf5870a0a4
--- /dev/null
+++ b/docs/reference/auditbeat/configuring-howto-auditbeat.md
@@ -0,0 +1,46 @@
+---
+navigation_title: "Configure"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuring-howto-auditbeat.html
+---
+
+# Configure Auditbeat [configuring-howto-auditbeat]
+
+
+::::{tip}
+To get started quickly, read [Quick start: installation and configuration](/reference/auditbeat/auditbeat-installation-configuration.md).
+::::
+
+
+To configure Auditbeat, edit the configuration file. The default configuration file is called `auditbeat.yml`. The location of the file varies by platform. To locate the file, see [Directory layout](/reference/auditbeat/directory-layout.md).
+
+There’s also a full example configuration file called `auditbeat.reference.yml` that shows all non-deprecated options.
+
+::::{tip}
+See the [Config File Format](/reference/libbeat/config-file-format.md) for more about the structure of the config file.
+::::
+
+
+The following topics describe how to configure Auditbeat:
+
+* [Modules](/reference/auditbeat/configuration-auditbeat.md)
+* [General settings](/reference/auditbeat/configuration-general-options.md)
+* [Project paths](/reference/auditbeat/configuration-path.md)
+* [Config file reloading](/reference/auditbeat/auditbeat-configuration-reloading.md)
+* [Output](/reference/auditbeat/configuring-output.md)
+* [SSL](/reference/auditbeat/configuration-ssl.md)
+* [Index lifecycle management (ILM)](/reference/auditbeat/ilm.md)
+* [Elasticsearch index template](/reference/auditbeat/configuration-template.md)
+* [{{kib}} endpoint](/reference/auditbeat/setup-kibana-endpoint.md)
+* [Kibana dashboards](/reference/auditbeat/configuration-dashboards.md)
+* [Processors](/reference/auditbeat/filtering-enhancing-data.md)
+* [Internal queue](/reference/auditbeat/configuring-internal-queue.md)
+* [Logging](/reference/auditbeat/configuration-logging.md)
+* [HTTP endpoint](/reference/auditbeat/http-endpoint.md)
+* [*Regular expression support*](/reference/auditbeat/regexp-support.md)
+* [Instrumentation](/reference/auditbeat/configuration-instrumentation.md)
+* [Feature flags](/reference/auditbeat/configuration-feature-flags.md)
+* [*auditbeat.reference.yml*](/reference/auditbeat/auditbeat-reference-yml.md)
+
+After changing configuration settings, you need to restart Auditbeat to pick up the changes.
+
diff --git a/docs/reference/auditbeat/configuring-ingest-node.md b/docs/reference/auditbeat/configuring-ingest-node.md
new file mode 100644
index 000000000000..178b83895baf
--- /dev/null
+++ b/docs/reference/auditbeat/configuring-ingest-node.md
@@ -0,0 +1,50 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuring-ingest-node.html
+---
+
+# Parse data using an ingest pipeline [configuring-ingest-node]
+
+When you use {{es}} for output, you can configure Auditbeat to use an [ingest pipeline](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md) to pre-process documents before the actual indexing takes place in {{es}}. An ingest pipeline is a convenient processing option when you want to do some extra processing on your data, but you do not require the full power of {{ls}}. For example, you can create an ingest pipeline in {{es}} that consists of one processor that removes a field in a document followed by another processor that renames a field.
+
+After defining the pipeline in {{es}}, you simply configure Auditbeat to use the pipeline. To configure Auditbeat, you specify the pipeline ID in the `pipeline` option under `output.elasticsearch` in the `auditbeat.yml` file:
+
+```yaml
+output.elasticsearch:
+ hosts: ["localhost:9200"]
+ pipeline: my_pipeline_id
+```
+
+For example, let’s say that you’ve defined the following pipeline in a file named `pipeline.json`:
+
+```json
+{
+ "description": "Test pipeline",
+ "processors": [
+ {
+ "lowercase": {
+ "field": "agent.name"
+ }
+ }
+ ]
+}
+```
+
+To add the pipeline in {{es}}, you would run:
+
+```shell
+curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/test-pipeline' -d@pipeline.json
+```
+
+Then in the `auditbeat.yml` file, you would specify:
+
+```yaml
+output.elasticsearch:
+ hosts: ["localhost:9200"]
+ pipeline: "test-pipeline"
+```
+
+When you run Auditbeat, the value of `agent.name` is converted to lowercase before indexing.
+
+For more information about defining a pre-processing pipeline, see the [ingest pipeline](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md) documentation.
+
diff --git a/docs/reference/auditbeat/configuring-internal-queue.md b/docs/reference/auditbeat/configuring-internal-queue.md
new file mode 100644
index 000000000000..3331c1b2a1b8
--- /dev/null
+++ b/docs/reference/auditbeat/configuring-internal-queue.md
@@ -0,0 +1,144 @@
+---
+navigation_title: "Internal queue"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuring-internal-queue.html
+---
+
+# Configure the internal queue [configuring-internal-queue]
+
+
+Auditbeat uses an internal queue to store events before publishing them. The queue is responsible for buffering and combining events into batches that can be consumed by the outputs. The outputs will use bulk operations to send a batch of events in one transaction.
+
+You can configure the type and behavior of the internal queue by setting options in the `queue` section of the `auditbeat.yml` config file or by setting options in the `queue` section of the output. Only one queue type can be configured.
+
+This sample configuration sets the memory queue to buffer up to 4096 events:
+
+```yaml
+queue.mem:
+ events: 4096
+```
+
+
+## Configure the memory queue [configuration-internal-queue-memory]
+
+The memory queue keeps all events in memory.
+
+The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new events can be inserted into the memory queue. Only after the signal from the output will the queue free up space for more events to be accepted.
+
+The memory queue is controlled by the parameters `flush.min_events` and `flush.timeout`. `flush.min_events` gives a limit on the number of events that can be included in a single batch, and `flush.timeout` specifies how long the queue should wait to completely fill an event request. If the output supports a `bulk_max_size` parameter, the maximum batch size will be the smaller of `bulk_max_size` and `flush.min_events`.
+
+`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`.
+
+In synchronous mode, an event request is always filled as soon as events are available, even if there are not enough events to fill the requested batch. This is useful when latency must be minimized. To use synchronous mode, set `flush.timeout` to 0.
+
+For backwards compatibility, synchronous mode can also be activated by setting `flush.min_events` to 0 or 1. In this case, batch size will be capped at 1/2 the queue capacity.
+
+In asynchronous mode, an event request will wait up to the specified timeout to try and fill the requested batch completely. If the timeout expires, the queue returns a partial batch with all available events. To use asynchronous mode, set `flush.timeout` to a positive duration, e.g. `5s`.
+
+This sample configuration forwards events to the output when there are enough events to fill the output’s request (usually controlled by `bulk_max_size`, and limited to at most 512 events by `flush.min_events`), or when events have been waiting for 5s without filling the requested size:
+
+```yaml
+queue.mem:
+ events: 4096
+ flush.min_events: 512
+ flush.timeout: 5s
+```
+
+
+## Configuration options [_configuration_options_13]
+
+You can specify the following options in the `queue.mem` section of the `auditbeat.yml` config file:
+
+
+#### `events` [queue-mem-events-option]
+
+Number of events the queue can store.
+
+The default value is 3200 events.
+
+
+#### `flush.min_events` [queue-mem-flush-min-events-option]
+
+If greater than 1, specifies the maximum number of events per batch. In this case the output must wait for the queue to accumulate the requested number of events or for `flush.timeout` to expire before publishing.
+
+If 0 or 1, sets the maximum number of events per batch to half the queue size, and sets the queue to synchronous mode (equivalent to `flush.timeout` of 0).
+
+The default value is 1600.
+
+
+#### `flush.timeout` [queue-mem-flush-timeout-option]
+
+Maximum wait time for event requests from the output to be fulfilled. If set to 0s, events are returned immediately.
+
+The default value is 10s.
+
+
+## Configure the disk queue [configuration-internal-queue-disk]
+
+The disk queue stores pending events on the disk rather than main memory. This allows Beats to queue a larger number of events than is possible with the memory queue, and to save events when a Beat or device is restarted. This increased reliability comes with a performance tradeoff, as every incoming event must be written and read from the device’s disk. However, for setups where the disk is not the main bottleneck, the disk queue gives a simple and relatively low-overhead way to add a layer of robustness to incoming event data.
+
+To enable the disk queue with default settings, specify a maximum size:
+
+```yaml
+queue.disk:
+ max_size: 10GB
+```
+
+The queue will use up to the specified maximum size on disk. It will only use as much space as required. For example, if the queue is only storing 1GB of events, then it will only occupy 1GB on disk no matter how high the maximum is. Queue data is deleted from disk after it has been successfully sent to the output.
+
+
+### Configuration options [configuration-internal-queue-disk-reference]
+
+You can specify the following options in the `queue.disk` section of the `auditbeat.yml` config file:
+
+
+#### `path` [_path]
+
+The path to the directory where the disk queue should store its data files. The directory is created on startup if it doesn’t exist.
+
+The default value is `"${path.data}/diskqueue"`.
+
+
+#### `max_size` (required) [_max_size_required]
+
+The maximum size the queue should use on disk. Events that exceed this maximum will either pause their input or be discarded, depending on the input’s configuration.
+
+A value of `0` means that no maximum size is enforced, and the queue can grow up to the amount of free space on the disk. This value should be used with caution, as completely filling a system’s main disk can make it inoperable. It is best to use this setting only with a dedicated data or backup partition that will not interfere with Auditbeat or the rest of the host system.
+
+The default value is `10GB`.
+
+
+#### `segment_size` [_segment_size]
+
+Data added to the queue is stored in segment files. Each segment contains some number of events waiting to be sent to the outputs, and is deleted when all its events are sent. By default, segment size is limited to 1/10 of the maximum queue size. Using a smaller size means that the queue will use more data files, but they will be deleted more quickly after use. Using a larger size means some data will take longer to delete, but the queue will use fewer auxiliary files. It is usually fine to leave this value unchanged.
+
+The default value is `max_size / 10`.
+
+
+#### `read_ahead` [_read_ahead]
+
+The number of events that should be read from disk into memory while waiting for an output to request them. If you find outputs are slowing down because they can’t read as many events at a time, adjusting this setting upward may help, at the cost of higher memory usage.
+
+The default value is `512`.
+
+
+#### `write_ahead` [_write_ahead]
+
+The number of events the queue should accept and store in memory while waiting for them to be written to disk. If you find the queue’s memory use is too high because events are waiting too long to be written to disk, adjusting this setting downward may help, at the cost of reduced event throughput. On the other hand, if inputs are waiting or discarding events because they are being produced faster than the disk can handle, adjusting this setting upward may help, at the cost of higher memory usage.
+
+The default value is `2048`.
+
+
+#### `retry_interval` [_retry_interval]
+
+Some disk errors may block operation of the queue, for example a permission error writing to the data directory, or a disk full error while writing an event. In this case, the queue reports the error and retries after pausing for the time specified in `retry_interval`.
+
+The default value is `1s` (one second).
+
+
+#### `max_retry_interval` [_max_retry_interval]
+
+When there are multiple consecutive errors writing to the disk, the queue increases the retry interval by factors of 2 up to a maximum of `max_retry_interval`. Increase this value if you are concerned about logging too many errors or overloading the host system if the target disk becomes unavailable for an extended time.
+
+The default value is `30s` (thirty seconds).
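+
+A sketch that combines several of these options; the path restates the default location and the remaining values simply echo the documented defaults:
+
+```yaml
+queue.disk:
+  path: "${path.data}/diskqueue"   # default location, shown for clarity
+  max_size: 10GB
+  segment_size: 1GB                # 1/10 of max_size
+  read_ahead: 512
+  write_ahead: 2048
+  retry_interval: 1s
+  max_retry_interval: 30s
+```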
+
diff --git a/docs/reference/auditbeat/configuring-output.md b/docs/reference/auditbeat/configuring-output.md
new file mode 100644
index 000000000000..362fd8e7dc78
--- /dev/null
+++ b/docs/reference/auditbeat/configuring-output.md
@@ -0,0 +1,31 @@
+---
+navigation_title: "Output"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuring-output.html
+---
+
+# Configure the output [configuring-output]
+
+
+You configure Auditbeat to write to a specific output by setting options in the Outputs section of the `auditbeat.yml` config file. Only a single output may be defined.
+
+The following topics describe how to configure each supported output. If you’ve secured the {{stack}}, also read [Secure](/reference/auditbeat/securing-auditbeat.md) for more about security-related configuration options.
+
+* [{{ess}}](/reference/auditbeat/configure-cloud-id.md)
+* [Elasticsearch](/reference/auditbeat/elasticsearch-output.md)
+* [Logstash](/reference/auditbeat/logstash-output.md)
+* [Kafka](/reference/auditbeat/kafka-output.md)
+* [Redis](/reference/auditbeat/redis-output.md)
+* [File](/reference/auditbeat/file-output.md)
+* [Console](/reference/auditbeat/console-output.md)
+* [Discard](/reference/auditbeat/discard-output.md)
+
+
+
+
+
+
+
+
+
+
diff --git a/docs/reference/auditbeat/configuring-ssl-logstash.md b/docs/reference/auditbeat/configuring-ssl-logstash.md
new file mode 100644
index 000000000000..70f884d2e0f5
--- /dev/null
+++ b/docs/reference/auditbeat/configuring-ssl-logstash.md
@@ -0,0 +1,118 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/configuring-ssl-logstash.html
+---
+
+# Secure communication with Logstash [configuring-ssl-logstash]
+
+You can use SSL mutual authentication to secure connections between Auditbeat and Logstash. This ensures that Auditbeat sends encrypted data to trusted Logstash servers only, and that the Logstash server receives data from trusted Auditbeat clients only.
+
+To use SSL mutual authentication:
+
+1. Create a certificate authority (CA) and use it to sign the certificates that you plan to use for Auditbeat and Logstash. Creating a correct SSL/TLS infrastructure is outside the scope of this document. There are many online resources available that describe how to create certificates.
+
+ ::::{tip}
+ If you are using {{security-features}}, you can use the [elasticsearch-certutil tool](elasticsearch://reference/elasticsearch/command-line-tools/certutil.md) to generate certificates.
+ ::::
+
+2. Configure Auditbeat to use SSL. In the `auditbeat.yml` config file, specify the following settings under `ssl`:
+
+ * `certificate_authorities`: Configures Auditbeat to trust any certificates signed by the specified CA. If `certificate_authorities` is empty or not set, the trusted certificate authorities of the host system are used.
+ * `certificate` and `key`: Specifies the certificate and key that Auditbeat uses to authenticate with Logstash.
+
+ For example:
+
+ ```yaml
+ output.logstash:
+ hosts: ["logs.mycompany.com:5044"]
+ ssl.certificate_authorities: ["/etc/ca.crt"]
+ ssl.certificate: "/etc/client.crt"
+ ssl.key: "/etc/client.key"
+ ```
+
+ For more information about these configuration options, see [SSL](/reference/auditbeat/configuration-ssl.md).
+
+3. Configure Logstash to use SSL. In the Logstash config file, specify the following settings for the [Beats input plugin for Logstash](logstash://reference/plugins-inputs-beats.md):
+
+ * `ssl`: When set to true, enables Logstash to use SSL/TLS.
+ * `ssl_certificate_authorities`: Configures Logstash to trust any certificates signed by the specified CA.
+ * `ssl_certificate` and `ssl_key`: Specify the certificate and key that Logstash uses to authenticate with the client.
+ * `ssl_verify_mode`: Specifies whether the Logstash server verifies the client certificate against the CA. You need to specify either `peer` or `force_peer` to make the server ask for the certificate and validate it. If you specify `force_peer`, and Auditbeat doesn’t provide a certificate, the Logstash connection will be closed. If you choose not to use [certutil](elasticsearch://reference/elasticsearch/command-line-tools/certutil.md), the certificates that you obtain must allow for both `clientAuth` and `serverAuth` if the extended key usage extension is present.
+
+ For example:
+
+ ```json
+ input {
+ beats {
+ port => 5044
+ ssl => true
+ ssl_certificate_authorities => ["/etc/ca.crt"]
+ ssl_certificate => "/etc/server.crt"
+ ssl_key => "/etc/server.key"
+ ssl_verify_mode => "force_peer"
+ }
+ }
+ ```
+
+ For more information about these options, see the [documentation for the Beats input plugin](logstash://reference/plugins-inputs-beats.md).
+
+
+
+## Validate the Logstash server’s certificate [testing-ssl-logstash]
+
+Before running Auditbeat, you should validate the Logstash server’s certificate. You can use `curl` to validate the certificate even though the protocol used to communicate with Logstash is not based on HTTP. For example:
+
+```shell
+curl -v --cacert ca.crt https://logs.mycompany.com:5044
+```
+
+If the test is successful, you’ll receive an empty response error:
+
+```shell
+* Rebuilt URL to: https://logs.mycompany.com:5044/
+* Trying 192.168.99.100...
+* Connected to logs.mycompany.com (192.168.99.100) port 5044 (#0)
+* TLS 1.2 connection using TLS_DHE_RSA_WITH_AES_256_CBC_SHA
+* Server certificate: logs.mycompany.com
+* Server certificate: mycompany.com
+> GET / HTTP/1.1
+> Host: logs.mycompany.com:5044
+> User-Agent: curl/7.43.0
+> Accept: */*
+>
+* Empty reply from server
+* Connection #0 to host logs.mycompany.com left intact
+curl: (52) Empty reply from server
+```
+
+The following example uses the IP address rather than the hostname to validate the certificate:
+
+```shell
+curl -v --cacert ca.crt https://192.168.99.100:5044
+```
+
+Validation for this test fails because the certificate is not valid for the specified IP address. It’s only valid for `logs.mycompany.com`, the hostname that appears in the Subject field of the certificate.
+
+```shell
+* Rebuilt URL to: https://192.168.99.100:5044/
+* Trying 192.168.99.100...
+* Connected to 192.168.99.100 (192.168.99.100) port 5044 (#0)
+* WARNING: using IP address, SNI is being disabled by the OS.
+* SSL: certificate verification failed (result: 5)
+* Closing connection 0
+curl: (51) SSL: certificate verification failed (result: 5)
+```
+
+See the [troubleshooting docs](/reference/auditbeat/ssl-client-fails.md) for info about resolving this issue.
+
+
+## Test the Auditbeat to Logstash connection [_test_the_auditbeat_to_logstash_connection]
+
+If you have Auditbeat running as a service, first stop the service. Then test your setup by running Auditbeat in the foreground so you can quickly see any errors that occur:
+
+```sh
+auditbeat -c auditbeat.yml -e -v
+```
+
+Any errors will be printed to the console. See the [troubleshooting docs](/reference/auditbeat/ssl-client-fails.md) for info about resolving common errors.
+
diff --git a/docs/reference/auditbeat/connection-problem.md b/docs/reference/auditbeat/connection-problem.md
new file mode 100644
index 000000000000..d52751bd2302
--- /dev/null
+++ b/docs/reference/auditbeat/connection-problem.md
@@ -0,0 +1,20 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/connection-problem.html
+---
+
+# Logstash connection doesn’t work [connection-problem]
+
+You may have configured {{ls}} or Auditbeat incorrectly. To resolve the issue:
+
+* Make sure that {{ls}} is running and you can connect to it. First, try to ping the {{ls}} host to verify that you can reach it from the host running Auditbeat. Then use either `nc` or `telnet` to make sure that the port is available. For example:
+
+ ```shell
+ ping <hostname or IP>
+ telnet <hostname or IP> 5044
+ ```
+
+* Verify that the config file for Auditbeat specifies the correct port where {{ls}} is running.
+* Make sure that the {{es}} output is commented out in the config file and the {{ls}} output is uncommented.
+* Confirm that the most recent [Beats input plugin for {{ls}}](logstash://reference/plugins-inputs-beats.md) is installed and configured. Note that Beats will not connect to the Lumberjack input plugin. To learn how to install and update plugins, see [Working with plugins](logstash://reference/working-with-plugins.md).
+
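+If you are unsure whether the plugin is installed or current, you can check and update it from the {{ls}} home directory. This is a minimal sketch; the path to the `logstash-plugin` script depends on how {{ls}} was installed.
+
+```shell
+bin/logstash-plugin list --verbose logstash-input-beats   # show the installed plugin and its version
+bin/logstash-plugin update logstash-input-beats           # update the plugin to the latest version
+```
+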
diff --git a/docs/reference/auditbeat/console-output.md b/docs/reference/auditbeat/console-output.md
new file mode 100644
index 000000000000..bd1c028cb6fa
--- /dev/null
+++ b/docs/reference/auditbeat/console-output.md
@@ -0,0 +1,67 @@
+---
+navigation_title: "Console"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/console-output.html
+---
+
+# Configure the Console output [console-output]
+
+
+The Console output writes events in JSON format to stdout.
+
+::::{warning}
+The Console output should be used only for debugging issues as it can produce a large amount of logging data.
+::::
+
+
+To use this output, edit the Auditbeat configuration file to disable the {{es}} output by commenting it out, and enable the console output by adding `output.console`.
+
+Example configuration:
+
+```yaml
+output.console:
+ pretty: true
+```
+
+## Configuration options [_configuration_options_7]
+
+You can specify the following `output.console` options in the `auditbeat.yml` config file:
+
+### `enabled` [_enabled_6]
+
+The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled.
+
+The default value is `true`.
+
+
+### `pretty` [_pretty]
+
+If `pretty` is set to true, events written to stdout will be nicely formatted. The default is false.
+
+
+### `codec` [_codec_4]
+
+Output codec configuration. If the `codec` section is missing, events are JSON-encoded using the `pretty` option.
+
+See [Change the output codec](/reference/auditbeat/configuration-output-codec.md) for more information.
+
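+For example, the following sketch replaces the default JSON encoding with a format string codec that prints only the timestamp and message of each event. The referenced fields are assumptions and depend on what your events contain.
+
+```yaml
+output.console:
+  codec.format:
+    string: '%{[@timestamp]} %{[message]}'
+```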
+
+### `bulk_max_size` [_bulk_max_size_4]
+
+The maximum number of events to buffer internally during publishing. The default is 2048.
+
+Specifying a larger batch size may add some latency and buffering during publishing. However, for Console output, this setting does not affect how events are published.
+
+Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.
+
+
+### `queue` [_queue_6]
+
+Configuration options for internal queue.
+
+See [Internal queue](/reference/auditbeat/configuring-internal-queue.md) for more information.
+
+Note: `queue` options can be set under `auditbeat.yml` or the `output` section, but not both.
+
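+For example, the following sketch configures a larger in-memory queue directly under the output. The value is illustrative, not a recommendation.
+
+```yaml
+output.console:
+  pretty: true
+  queue:
+    mem:
+      events: 4096   # buffer up to 4096 events in memory before publishing
+```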
+
+
diff --git a/docs/reference/auditbeat/contributing-to-beats.md b/docs/reference/auditbeat/contributing-to-beats.md
new file mode 100644
index 000000000000..79b7d64c0734
--- /dev/null
+++ b/docs/reference/auditbeat/contributing-to-beats.md
@@ -0,0 +1,13 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/contributing-to-beats.html
+---
+
+# Contribute to Beats [contributing-to-beats]
+
+The Beats are open source and we love to receive contributions from our community — you!
+
+There are many ways to contribute, from writing tutorials or blog posts and improving the documentation to submitting bug reports and feature requests, or writing code that implements a whole new protocol, module, or Beat.
+
+The [Beats Developer Guide](http://www.elastic.co/guide/en/beats/devguide/master/index.md) is your one-stop shop for everything related to developing code for the Beats project.
+
diff --git a/docs/reference/auditbeat/convert.md b/docs/reference/auditbeat/convert.md
new file mode 100644
index 000000000000..cc42e44ba519
--- /dev/null
+++ b/docs/reference/auditbeat/convert.md
@@ -0,0 +1,42 @@
+---
+navigation_title: "convert"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/convert.html
+---
+
+# Convert [convert]
+
+
+The `convert` processor converts a field in the event to a different type, such as converting a string to an integer.
+
+The supported types include: `integer`, `long`, `float`, `double`, `string`, `boolean`, and `ip`.
+
+The `ip` type is effectively an alias for `string`, but with an added validation that the value is an IPv4 or IPv6 address.
+
+```yaml
+processors:
+ - convert:
+ fields:
+ - {from: "src_ip", to: "source.ip", type: "ip"}
+ - {from: "src_port", to: "source.port", type: "integer"}
+ ignore_missing: true
+ fail_on_error: false
+```
+
+The `convert` processor has the following configuration settings:
+
+`fields`
+: (Required) This is the list of fields to convert. At least one item must be contained in the list. Each item in the list must have a `from` key that specifies the source field. The `to` key is optional and specifies where to assign the converted value. If `to` is omitted then the `from` field is updated in-place. The `type` key specifies the data type to convert the value to. If `type` is omitted then the processor copies or renames the field without any type conversion.
+
+`ignore_missing`
+: (Optional) If `true`, the processor continues to the next field when the `from` key is not found in the event. If `false`, the processor returns an error and does not process the remaining fields. Default is `false`.
+
+`fail_on_error`
+: (Optional) If `false`, type conversion failures are ignored and the processor continues to the next field. Default is `true`.
+
+`tag`
+: (Optional) An identifier for this processor. Useful for debugging.
+
+`mode`
+: (Optional) When both `from` and `to` are defined for a field then `mode` controls whether to `copy` or `rename` the field when the type conversion is successful. Default is `copy`.
+
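+For example, the following sketch renames a field while converting it, so the original field is removed after a successful conversion. The field names are illustrative.
+
+```yaml
+processors:
+  - convert:
+      fields:
+        - {from: "http.status", to: "http.response.status_code", type: "integer"}
+      mode: rename          # remove the original field once the conversion succeeds
+      ignore_missing: true
+      fail_on_error: false
+```
+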
diff --git a/docs/reference/auditbeat/copy-fields.md b/docs/reference/auditbeat/copy-fields.md
new file mode 100644
index 000000000000..0cb36da4318b
--- /dev/null
+++ b/docs/reference/auditbeat/copy-fields.md
@@ -0,0 +1,45 @@
+---
+navigation_title: "copy_fields"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/copy-fields.html
+---
+
+# Copy fields [copy-fields]
+
+
+The `copy_fields` processor takes the value of a field and copies it to a new field.
+
+You cannot use this processor to replace an existing field. If the target field already exists, you must [drop](/reference/auditbeat/drop-fields.md) or [rename](/reference/auditbeat/rename-fields.md) the field before using `copy_fields`.
+
+`fields`
+: List of `from` and `to` pairs that specify the source and target fields. The `@metadata.` prefix is supported for both `from` and `to`, so values can be copied to and from the event metadata as well as the event fields.
+
+`fail_on_error`
+: (Optional) If set to `true` and an error occurs, the changes are reverted and the original is returned. If set to `false`, processing continues if an error occurs. Default is `true`.
+
+`ignore_missing`
+: (Optional) Indicates whether to ignore events that lack the source field. The default is `false`, which will fail processing of an event if a field is missing.
+
+For example, this configuration:
+
+```yaml
+processors:
+ - copy_fields:
+ fields:
+ - from: message
+ to: event.original
+ fail_on_error: false
+ ignore_missing: true
+```
+
+Copies the original `message` field to `event.original`:
+
+```json
+{
+ "message": "my-interesting-message",
+ "event": {
+ "original": "my-interesting-message"
+ }
+}
+```
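+
+Because the `@metadata.` prefix is supported, a value can also be copied into the event metadata. The following sketch stores a copy of `event.id` under `@metadata`; the field names are illustrative.
+
+```yaml
+processors:
+  - copy_fields:
+      fields:
+        - from: event.id
+          to: "@metadata.event_id"   # metadata is available to outputs but is not indexed with the event
+      fail_on_error: false
+      ignore_missing: true
+```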
+
diff --git a/docs/reference/auditbeat/could-not-locate-index-pattern.md b/docs/reference/auditbeat/could-not-locate-index-pattern.md
new file mode 100644
index 000000000000..d5aea2e11c9e
--- /dev/null
+++ b/docs/reference/auditbeat/could-not-locate-index-pattern.md
@@ -0,0 +1,20 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/could-not-locate-index-pattern.html
+---
+
+# Dashboard could not locate the index-pattern [could-not-locate-index-pattern]
+
+Typically Auditbeat sets up the index pattern automatically when it loads the index template. However, if for some reason Auditbeat loads the index template, but the index pattern does not get created correctly, you’ll see a "could not locate that index-pattern" error. To resolve this problem:
+
+1. Try running the `setup` command again. For example: `./auditbeat setup`.
+2. If that doesn’t work, go to the Management app in {{kib}}, and under **Index Patterns**, look for the pattern.
+
+ 1. If the pattern doesn’t exist, create it manually.
+
+ * Set the **Time filter field name** to `@timestamp`.
+ * Set the **Custom index pattern ID** advanced option. For example, if your custom index name is `auditbeat-customname`, set the custom index pattern ID to `auditbeat-customname-*`.
+
+
+For more information, see [Creating an index pattern](docs-content://explore-analyze/find-and-organize/data-views.md) in the {{kib}} docs.
+
diff --git a/docs/reference/auditbeat/decode-base64-field.md b/docs/reference/auditbeat/decode-base64-field.md
new file mode 100644
index 000000000000..e1cd807859a3
--- /dev/null
+++ b/docs/reference/auditbeat/decode-base64-field.md
@@ -0,0 +1,35 @@
+---
+navigation_title: "decode_base64_field"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/decode-base64-field.html
+---
+
+# Decode Base64 fields [decode-base64-field]
+
+
+The `decode_base64_field` processor specifies a field to Base64-decode. The `field` key contains a `from: old-key` and a `to: new-key` pair, where `from` is the source field and `to` is the name of the target field.
+
+To overwrite an existing field, first rename the target field or drop it with the `drop_fields` processor, and then decode into it (see the second example below).
+
+```yaml
+processors:
+ - decode_base64_field:
+ field:
+ from: "field1"
+ to: "field2"
+ ignore_missing: false
+ fail_on_error: true
+```
+
+In the example above, `field1` is Base64-decoded and the result is written to `field2`.
+
+The `decode_base64_field` processor has the following configuration settings:
+
+`ignore_missing`
+: (Optional) If set to `true`, no error is logged when the field to be decoded is missing. Default is `false`.
+
+`fail_on_error`
+: (Optional) If set to `true`, decoding stops when an error occurs and the original event is returned. If set to `false`, decoding continues even if an error occurs. Default is `true`.
+
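+For example, to decode into a field that may already exist, clear the target first as noted above. This is a minimal sketch; the field names are illustrative.
+
+```yaml
+processors:
+  - drop_fields:
+      fields: ["field2"]        # make room for the decoded value
+      ignore_missing: true
+  - decode_base64_field:
+      field:
+        from: "field1"
+        to: "field2"
+      ignore_missing: false
+      fail_on_error: true
+```
+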
+See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions.
+
diff --git a/docs/reference/auditbeat/decode-duration.md b/docs/reference/auditbeat/decode-duration.md
new file mode 100644
index 000000000000..b153ba498e29
--- /dev/null
+++ b/docs/reference/auditbeat/decode-duration.md
@@ -0,0 +1,25 @@
+---
+navigation_title: "decode_duration"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/decode-duration.html
+---
+
+# Decode duration [decode-duration]
+
+
+The `decode_duration` processor decodes a Go-style duration string into a specific `format`.
+
+For more information about the Go `time.Duration` string style, refer to the [Go documentation](https://pkg.go.dev/time#Duration).
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `field` | yes | | Which field of the event to decode as `time.Duration` |
+| `format` | yes | `milliseconds` | Supported formats: `milliseconds`/`seconds`/`minutes`/`hours` |
+
+```yaml
+processors:
+ - decode_duration:
+ field: "app.rpc.cost"
+ format: "milliseconds"
+```
+
diff --git a/docs/reference/auditbeat/decode-json-fields.md b/docs/reference/auditbeat/decode-json-fields.md
new file mode 100644
index 000000000000..6a5e3aeba1c5
--- /dev/null
+++ b/docs/reference/auditbeat/decode-json-fields.md
@@ -0,0 +1,48 @@
+---
+navigation_title: "decode_json_fields"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/decode-json-fields.html
+---
+
+# Decode JSON fields [decode-json-fields]
+
+
+The `decode_json_fields` processor decodes fields containing JSON strings and replaces the strings with valid JSON objects.
+
+```yaml
+processors:
+ - decode_json_fields:
+ fields: ["field1", "field2", ...]
+ process_array: false
+ max_depth: 1
+ target: ""
+ overwrite_keys: false
+ add_error_key: true
+```
+
+The `decode_json_fields` processor has the following configuration settings:
+
+`fields`
+: The fields containing JSON strings to decode.
+
+`process_array`
+: (Optional) A Boolean value that specifies whether to process arrays. The default is `false`.
+
+`max_depth`
+: (Optional) The maximum parsing depth. A value of `1` will decode the JSON objects in fields indicated in `fields`, a value of `2` will also decode the objects embedded in the fields of these parsed documents. The default is `1`.
+
+`target`
+: (Optional) The field under which the decoded JSON will be written. By default, the decoded JSON object replaces the string field from which it was read. To merge the decoded JSON fields into the root of the event, specify `target` with an empty string (`target: ""`). Note that the `null` value (`target:`) is treated as if the field was not set.
+
+`overwrite_keys`
+: (Optional) A Boolean value that specifies whether existing keys in the event are overwritten by keys from the decoded JSON object. The default value is `false`.
+
+`expand_keys`
+: (Optional) A Boolean value that specifies whether keys in the decoded JSON should be recursively de-dotted and expanded into a hierarchical object structure. For example, `{"a.b.c": 123}` would be expanded into `{"a":{"b":{"c":123}}}`.
+
+`add_error_key`
+: (Optional) If set to `true` and an error occurs while decoding JSON keys, the `error` field is added to the event with the error message. If set to `false`, no error is added to the event. The default value is `false`.
+
+`document_id`
+: (Optional) JSON key that’s used as the document ID. If configured, the field will be removed from the original JSON document and stored in `@metadata._id`.
+
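+For example, the following sketch decodes a JSON string in the `message` field, merges the decoded keys into the root of the event, and uses a `log_id` key from the JSON as the document ID. The field and key names are assumptions.
+
+```yaml
+processors:
+  - decode_json_fields:
+      fields: ["message"]
+      target: ""               # merge decoded keys into the root of the event
+      overwrite_keys: true
+      add_error_key: true
+      document_id: "log_id"    # removed from the JSON and stored in @metadata._id
+```
+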
diff --git a/docs/reference/auditbeat/decode-xml-wineventlog.md b/docs/reference/auditbeat/decode-xml-wineventlog.md
new file mode 100644
index 000000000000..6de117485e6f
--- /dev/null
+++ b/docs/reference/auditbeat/decode-xml-wineventlog.md
@@ -0,0 +1,162 @@
+---
+navigation_title: "decode_xml_wineventlog"
+mapped_pages:
+ - https://www.elastic.co/guide/en/beats/auditbeat/current/decode-xml-wineventlog.html
+---
+
+# Decode XML Wineventlog [decode-xml-wineventlog]
+
+
+::::{warning}
+This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
+::::
+
+
+The `decode_xml_wineventlog` processor decodes Windows Event Log data in XML format that is stored under the `field` key. It outputs the result into the `target_field`.
+
+The output fields will be the same as the [winlogbeat winlog fields](/reference/winlogbeat/exported-fields-winlog.md#_winlog).
+
+The supported configuration options are:
+
+`field`
+: (Required) Source field containing the XML. Defaults to `message`.
+
+`target_field`
+: (Required) The field under which the decoded XML will be written. To merge the decoded XML fields into the root of the event specify `target_field` with an empty string (`target_field: ""`). The default value is `winlog`.
+
+`overwrite_keys`
+: (Optional) A boolean that specifies whether keys that already exist in the event are overwritten by keys from the decoded XML object. The default value is `true`.
+
+`map_ecs_fields`
+: (Optional) A boolean that specifies whether to map additional ECS fields when possible. Note that ECS field keys are placed outside of `target_field`. The default value is `true`.
+
+`ignore_missing`
+: (Optional) If `true` the processor will not return an error when a specified field does not exist. Defaults to `false`.
+
+`ignore_failure`
+: (Optional) Ignore all errors produced by the processor. Defaults to `false`.
+
+`language`
+: (Optional) The language ID the events will be rendered in. The language will be forced regardless of the system language. Forwarded events will ignore this setting. A complete list of language IDs can be found [here](https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-lcid/a9eac961-e77d-41a6-90a5-ce1a8b0cdb9c). It defaults to `0`, which indicates to use the system language.
+
+Example:
+
+```yaml
+processors:
+ - decode_xml_wineventlog:
+ field: event.original
+ target_field: winlog
+```
+
+```json
+{
+ "event": {
+ "original": "4672001254800x802000000000000011303SecurityvagrantS-1-5-18SYSTEMNT AUTHORITY0x3e7SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeSpecial privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeInformationSpecial LogonInfoSecurityMicrosoft Windows security auditing.Audit Success"
+ }
+}
+```
+
+Will produce the following output:
+
+```json
+{
+ "event": {
+ "original": "4672001254800x802000000000000011303SecurityvagrantS-1-5-18SYSTEMNT AUTHORITY0x3e7SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeSpecial privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeInformationSpecial LogonInfoSecurityMicrosoft Windows security auditing.Audit Success",
+ "action": "Special Logon",
+ "code": "4672",
+ "kind": "event",
+ "outcome": "success",
+ "provider": "Microsoft-Windows-Security-Auditing",
+ },
+ "host": {
+ "name": "vagrant",
+ },
+ "log": {
+ "level": "information",
+ },
+ "winlog": {
+ "channel": "Security",
+ "outcome": "success",
+ "activity_id": "{ffb23523-1f32-0000-c335-b2ff321fd701}",
+ "level": "information",
+ "event_id": 4672,
+ "provider_name": "Microsoft-Windows-Security-Auditing",
+ "record_id": 11303,
+ "computer_name": "vagrant",
+ "keywords_raw": 9232379236109516800,
+ "opcode": "Info",
+ "provider_guid": "{54849625-5478-4994-a5ba-3e3b0328c30d}",
+ "event_data": {
+ "SubjectUserSid": "S-1-5-18",
+ "SubjectUserName": "SYSTEM",
+ "SubjectDomainName": "NT AUTHORITY",
+ "SubjectLogonId": "0x3e7",
+ "PrivilegeList": "SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilege"
+ },
+ "task": "Special Logon",
+ "keywords": [
+ "Audit Success"
+ ],
+ "message": "Special privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilege",
+ "process": {
+ "pid": 652,
+ "thread": {
+ "id": 4660
+ }
+ }
+ }
+}
+```
+
+See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions.
+
+The field mappings are as follows:
+
+| Event Field | Source XML Element | Notes |
+| --- | --- | --- |
+| `winlog.channel` | `` | |
+| `winlog.event_id` | `` | |
+| `winlog.provider_name` | `` | `Name` attribute |
+| `winlog.record_id` | `` | |
+| `winlog.task` | `` | |
+| `winlog.computer_name` | `` | |
+| `winlog.keywords` | `` | list of each `Keyword` |
+| `winlog.opcodes` | `` | |
+| `winlog.provider_guid` | `` | `Guid` attribute |
+| `winlog.version` | `` | |
+| `winlog.time_created` | `` | `SystemTime` attribute |
+| `winlog.outcome` | `` | "success" if bit 0x20000000000000 is set, "failure" if 0x10000000000000 is set |
+| `winlog.level` | `` | converted to lowercase |
+| `winlog.message` | `` | line endings removed |
+| `winlog.user.identifier` | `` | |
+| `winlog.user.domain` | `` | |
+| `winlog.user.name` | `` | |
+| `winlog.user.type` | `` | converted from integer to String |
+| `winlog.event_data` | `` | map where `Name` attribute in Data element is key, and value is the value of the Data element |
+| `winlog.user_data` | `` | map where `Name` attribute in Data element is key, and value is the value of the Data element |
+| `winlog.activity_id` | `` | |
+| `winlog.related_activity_id` | `` | |
+| `winlog.kernel_time` | `` | |
+| `winlog.process.pid` | `` | |
+| `winlog.process.thread.id` | `` | |
+| `winlog.processor_id` | `` | |
+| `winlog.processor_time` | `` | |
+| `winlog.session_id` | `` | |
+| `winlog.user_time` | `` | |
+| `winlog.error.code` | `` | |
+
+If `map_ecs_fields` is enabled then the following field mappings are also performed:
+
+| Event Field | Source XML or other field | Notes |
+| --- | --- | --- |
+| `event.code` | `winlog.event_id` | |
+| `event.kind` | `"event"` | |
+| `event.provider` | `