diff --git a/auditbeat/docs/auditbeat-filtering.asciidoc b/auditbeat/docs/auditbeat-filtering.asciidoc deleted file mode 100644 index 6919965ac540..000000000000 --- a/auditbeat/docs/auditbeat-filtering.asciidoc +++ /dev/null @@ -1,10 +0,0 @@ -[[filtering-and-enhancing-data]] -== Filter and enhance data with processors - -++++ -Processors -++++ - -include::{libbeat-dir}/processors.asciidoc[] - -include::{libbeat-dir}/processors-using.asciidoc[] diff --git a/auditbeat/docs/auditbeat-general-options.asciidoc b/auditbeat/docs/auditbeat-general-options.asciidoc deleted file mode 100644 index 7aec17cd6095..000000000000 --- a/auditbeat/docs/auditbeat-general-options.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[configuration-general-options]] -== Configure general settings - -++++ -General settings -++++ - -You can specify settings in the +{beatname_lc}.yml+ config file to control the -general behavior of {beatname_uc}. - -include::{libbeat-dir}/generalconfig.asciidoc[] diff --git a/auditbeat/docs/auditbeat-modules-config.asciidoc b/auditbeat/docs/auditbeat-modules-config.asciidoc deleted file mode 100644 index 2071f156b922..000000000000 --- a/auditbeat/docs/auditbeat-modules-config.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ -[id="configuration-{beatname_lc}"] -== Configure modules - -++++ -Modules -++++ - -To enable specific modules you add entries to the `auditbeat.modules` list in -the +{beatname_lc}.yml+ config file. Each entry in the list begins with a dash -(-) and is followed by settings for that module. - -The following example shows a configuration that runs the `auditd` and -`file_integrity` modules. - -[source,yaml] ----- -auditbeat.modules: - -- module: auditd - audit_rules: | - -w /etc/passwd -p wa -k identity - -a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access - -- module: file_integrity - paths: - - /bin - - /usr/bin - - /sbin - - /usr/sbin - - /etc ----- - -The configuration details vary by module. See the -<<{beatname_lc}-modules,module documentation>> for more detail about configuring -the available modules. diff --git a/auditbeat/docs/auditbeat-options.asciidoc b/auditbeat/docs/auditbeat-options.asciidoc deleted file mode 100644 index 8233f79cee1e..000000000000 --- a/auditbeat/docs/auditbeat-options.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Auditbeat modules. Make sure you keep the -//// descriptions generic enough to work for all modules. To include -//// this file, use: -//// -//// include::{docdir}/auditbeat-options.asciidoc[] -//// -////////////////////////////////////////////////////////////////////////// - -[id="module-standard-options-{modulename}"] -[float] -==== Standard configuration options - -You can specify the following options for any {beatname_uc} module. - -*`module`*:: The name of the module to run. - -ifeval::["{modulename}"=="system"] -*`datasets`*:: A list of datasets to execute. -endif::[] - -*`enabled`*:: A Boolean value that specifies whether the module is enabled. - -ifeval::["{modulename}"=="system"] -*`period`*:: The frequency at which the datasets check for changes. If a system -is not reachable, {beatname_uc} returns an error for each period. This setting -is required. For most datasets, especially `process` and `socket`, a shorter -period is recommended. -endif::[] - -*`fields`*:: A dictionary of fields that will be sent with the dataset event. This setting -is optional. 
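A minimal sketch combining the `fields` option above with the `tags`, `processors`, and `keep_null` options described below (the custom field, tag, and processor condition are illustrative, not defaults):

[source,yaml]
----
auditbeat.modules:
- module: file_integrity
  paths:
    - /bin
    - /usr/bin
  fields:
    env: production           # illustrative custom field
  tags: ["file-integrity"]
  keep_null: false
  processors:
    - drop_event:
        when:
          equals:
            file.extension: "swp"  # illustrative condition
----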
- -*`tags`*:: A list of tags that will be sent with the dataset event. This setting is -optional. - -*`processors`*:: A list of processors to apply to the data generated by the dataset. -+ -See <<filtering-and-enhancing-data>> for information about specifying -processors in your config. - -*`index`*:: If present, this formatted string overrides the index for events from this -module (for elasticsearch outputs), or sets the `raw_index` field of the event's -metadata (for other outputs). This string can only refer to the agent name and -version and the event timestamp; for access to dynamic fields, use -`output.elasticsearch.index` or a processor. -+ -Example value: `"%{[agent.name]}-myindex-%{+yyyy.MM.dd}"` might -expand to +"{beatname_lc}-myindex-2019.12.13"+. - -*`keep_null`*:: If this option is set to true, fields with `null` values will be published in -the output document. By default, `keep_null` is set to `false`. - -*`service.name`*:: A name given by the user to the service the data is collected from. It can be -used, for example, to identify information collected from nodes of different -clusters with the same `service.type`. diff --git a/auditbeat/docs/configuring-howto.asciidoc b/auditbeat/docs/configuring-howto.asciidoc deleted file mode 100644 index a2de4ee5ed61..000000000000 --- a/auditbeat/docs/configuring-howto.asciidoc +++ /dev/null @@ -1,70 +0,0 @@ -[id="configuring-howto-{beatname_lc}"] -= Configure {beatname_uc} - -[partintro] --- -++++ -Configure -++++ - -include::{libbeat-dir}/shared/configuring-intro.asciidoc[] - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <<{beatname_lc}-reference-yml>> - -After changing configuration settings, you need to restart {beatname_uc} to -pick up the changes. - --- - -include::./auditbeat-modules-config.asciidoc[] - -include::./auditbeat-general-options.asciidoc[] - -include::{libbeat-dir}/shared-path-config.asciidoc[] - -include::./reload-configuration.asciidoc[] - -include::{libbeat-dir}/outputconfig.asciidoc[] - -ifndef::no_kerberos[] -include::{libbeat-dir}/shared-kerberos-config.asciidoc[] -endif::[] - -include::{libbeat-dir}/shared-ssl-config.asciidoc[] - -include::{libbeat-dir}/shared-ilm.asciidoc[] - -include::{libbeat-dir}/setup-config.asciidoc[] - -include::./auditbeat-filtering.asciidoc[] - -include::{libbeat-dir}/queueconfig.asciidoc[] - -include::{libbeat-dir}/loggingconfig.asciidoc[] - -include::{libbeat-dir}/http-endpoint.asciidoc[] - -include::{libbeat-dir}/regexp.asciidoc[] - -include::{libbeat-dir}/shared-instrumentation.asciidoc[] - -include::{libbeat-dir}/shared-feature-flags.asciidoc[] - -include::{libbeat-dir}/reference-yml.asciidoc[] diff --git a/auditbeat/docs/faq-ulimit.asciidoc b/auditbeat/docs/faq-ulimit.asciidoc deleted file mode 100644 index e234d1c9d958..000000000000 --- a/auditbeat/docs/faq-ulimit.asciidoc +++ /dev/null @@ -1,28 +0,0 @@ -[[ulimit]] -=== {beatname_uc} fails to watch folders because too many files are open - -Because of the way file monitoring is implemented on macOS, you may see a -warning similar to the following: - -[source,shell] ----- -eventreader_fsnotify.go:42: WARN [audit.file] Failed to watch /usr/bin: too many -open files (check the max number of open files allowed with 'ulimit -a') ----- - -To resolve this issue, run {beatname_uc} with the `ulimit` set to a larger -value, for example: - -["source","sh",subs="attributes"] ----- -sudo sh -c 'ulimit -n 8192 && ./{beatname_lc} -e' ----- - -Or: - -["source","sh",subs="attributes"] ----- -sudo su -ulimit -n 8192 -./{beatname_lc} -e ----
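Before raising the limit, it can help to confirm which limits are actually in effect. A small sketch (assuming the stock macOS `launchctl` utility and the shell's `ulimit` built-in; not part of the deleted page above):

[source,sh]
----
# Show the soft and hard open-file limits enforced by launchd (macOS).
launchctl limit maxfiles
# Show the open-file limit of the current shell.
ulimit -n
----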
diff --git a/auditbeat/docs/faq.asciidoc b/auditbeat/docs/faq.asciidoc deleted file mode 100644 index d0f4fbe8235e..000000000000 --- a/auditbeat/docs/faq.asciidoc +++ /dev/null @@ -1,12 +0,0 @@ -[[faq]] -== Common problems - -This section describes common problems you might encounter with -{beatname_uc}. Also check out the -https://discuss.elastic.co/c/beats/{beatname_lc}[{beatname_uc} discussion forum]. - -include::./faq-ulimit.asciidoc[] - -include::{libbeat-dir}/faq-limit-bandwidth.asciidoc[] - -include::{libbeat-dir}/shared-faq.asciidoc[] diff --git a/auditbeat/docs/fields.asciidoc b/auditbeat/docs/fields.asciidoc deleted file mode 100644 index 9eee5f008fc1..000000000000 --- a/auditbeat/docs/fields.asciidoc +++ /dev/null @@ -1,19467 +0,0 @@ - -//// -This file is generated! See _meta/fields.yml and scripts/generate_fields_docs.py -//// - -:edit_url: - -[[exported-fields]] -= Exported fields - -[partintro] - --- -This document describes the fields that are exported by Auditbeat. They are -grouped in the following categories: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - --- -[[exported-fields-auditd]] -== Auditd fields - -These are the fields generated by the auditd module. - - - -*`user.auid`*:: -+ --- -type: alias - -alias to: user.audit.id - --- - -*`user.uid`*:: -+ --- -type: alias - -alias to: user.id - --- - -*`user.fsuid`*:: -+ --- -type: alias - -alias to: user.filesystem.id - --- - -*`user.suid`*:: -+ --- -type: alias - -alias to: user.saved.id - --- - -*`user.gid`*:: -+ --- -type: alias - -alias to: user.group.id - --- - -*`user.sgid`*:: -+ --- -type: alias - -alias to: user.saved.group.id - --- - -*`user.fsgid`*:: -+ --- -type: alias - -alias to: user.filesystem.group.id - --- - -[float] -=== name_map - -If `resolve_ids` is set to true in the configuration, then `name_map` will contain a mapping of uid field names to the resolved name (e.g. auid -> root). - - - -*`user.name_map.auid`*:: -+ --- -type: alias - -alias to: user.audit.name - --- - -*`user.name_map.uid`*:: -+ --- -type: alias - -alias to: user.name - --- - -*`user.name_map.fsuid`*:: -+ --- -type: alias - -alias to: user.filesystem.name - --- - -*`user.name_map.suid`*:: -+ --- -type: alias - -alias to: user.saved.name - --- - -*`user.name_map.gid`*:: -+ --- -type: alias - -alias to: user.group.name - --- - -*`user.name_map.sgid`*:: -+ --- -type: alias - -alias to: user.saved.group.name - --- - -*`user.name_map.fsgid`*:: -+ --- -type: alias - -alias to: user.filesystem.group.name - --- - -[float] -=== selinux - -The SELinux identity of the actor. - - -*`user.selinux.user`*:: -+ --- -account submitted for authentication - -type: keyword - --- - -*`user.selinux.role`*:: -+ --- -user's SELinux role - -type: keyword - --- - -*`user.selinux.domain`*:: -+ --- -The actor's SELinux domain or type. - -type: keyword - --- - -*`user.selinux.level`*:: -+ --- -The actor's SELinux level. - -type: keyword - -example: s0 - --- - -*`user.selinux.category`*:: -+ --- -The actor's SELinux category or compartments. - -type: keyword - --- - -[float] -=== process - -Process attributes. - - -*`process.cwd`*:: -+ --- -The current working directory. - -type: alias - -alias to: process.working_directory - --- - -[float] -=== source - -Source that triggered the event. - - -*`source.path`*:: -+ --- -This is the path associated with a unix socket. - -type: keyword - --- - -[float] -=== destination - -Destination address that triggered the event.
- - -*`destination.path`*:: -+ --- -This is the path associated with a unix socket. - -type: keyword - --- - - -*`auditd.message_type`*:: -+ --- -The audit message type (e.g. syscall or apparmor_denied). - - -type: keyword - -example: syscall - --- - -*`auditd.sequence`*:: -+ --- -The sequence number of the event as assigned by the kernel. Sequence numbers are stored as a uint32 in the kernel and can roll over. - - -type: long - --- - -*`auditd.session`*:: -+ --- -The session ID assigned to a login. All events related to a login session will have the same value. - - -type: keyword - --- - -*`auditd.result`*:: -+ --- -The result of the audited operation (success/fail). - -type: keyword - -example: success or fail - --- - - -[float] -=== actor - -The actor is the user that triggered the audit event. - - -*`auditd.summary.actor.primary`*:: -+ --- -The primary identity of the actor. This is the actor's original login ID. It will not change even if the user changes to another account. - - -type: keyword - --- - -*`auditd.summary.actor.secondary`*:: -+ --- -The secondary identity of the actor. This is typically the same as the primary, except when the user has used `su`. - -type: keyword - --- - -[float] -=== object - -This is the thing or object being acted upon in the event. - - - -*`auditd.summary.object.type`*:: -+ --- -A description of what the "thing" is (e.g. file, socket, user-session). - - -type: keyword - --- - -*`auditd.summary.object.primary`*:: -+ --- - - -type: keyword - --- - -*`auditd.summary.object.secondary`*:: -+ --- - - -type: keyword - --- - -*`auditd.summary.how`*:: -+ --- -This describes how the action was performed. Usually this is the exe or command that was being executed that triggered the event. - - -type: keyword - --- - -[float] -=== paths - -List of paths associated with the event. - - -*`auditd.paths.inode`*:: -+ --- -inode number - -type: keyword - --- - -*`auditd.paths.dev`*:: -+ --- -device name as found in /dev - -type: keyword - --- - -*`auditd.paths.obj_user`*:: -+ --- - - -type: keyword - --- - -*`auditd.paths.obj_role`*:: -+ --- - - -type: keyword - --- - -*`auditd.paths.obj_domain`*:: -+ --- - - -type: keyword - --- - -*`auditd.paths.obj_level`*:: -+ --- - - -type: keyword - --- - -*`auditd.paths.objtype`*:: -+ --- - - -type: keyword - --- - -*`auditd.paths.ouid`*:: -+ --- -file owner user ID - -type: keyword - --- - -*`auditd.paths.rdev`*:: -+ --- -the device identifier (special files only) - -type: keyword - --- - -*`auditd.paths.nametype`*:: -+ --- -kind of file operation being referenced - -type: keyword - --- - -*`auditd.paths.ogid`*:: -+ --- -file owner group ID - -type: keyword - --- - -*`auditd.paths.item`*:: -+ --- -which item is being recorded - -type: keyword - --- - -*`auditd.paths.mode`*:: -+ --- -mode flags on a file - -type: keyword - --- - -*`auditd.paths.name`*:: -+ --- -file name in avcs - -type: keyword - --- - -[float] -=== data - -The data from the audit messages.
- - -*`auditd.data.action`*:: -+ --- -netfilter packet disposition - -type: keyword - --- - -*`auditd.data.minor`*:: -+ --- -device minor number - -type: keyword - --- - -*`auditd.data.acct`*:: -+ --- -a user's account name - -type: keyword - --- - -*`auditd.data.addr`*:: -+ --- -the remote address that the user is connecting from - -type: keyword - --- - -*`auditd.data.cipher`*:: -+ --- -name of crypto cipher selected - -type: keyword - --- - -*`auditd.data.id`*:: -+ --- -during account changes - -type: keyword - --- - -*`auditd.data.entries`*:: -+ --- -number of entries in the netfilter table - -type: keyword - --- - -*`auditd.data.kind`*:: -+ --- -server or client in crypto operation - -type: keyword - --- - -*`auditd.data.ksize`*:: -+ --- -key size for crypto operation - -type: keyword - --- - -*`auditd.data.spid`*:: -+ --- -sent process ID - -type: keyword - --- - -*`auditd.data.arch`*:: -+ --- -the elf architecture flags - -type: keyword - --- - -*`auditd.data.argc`*:: -+ --- -the number of arguments to an execve syscall - -type: keyword - --- - -*`auditd.data.major`*:: -+ --- -device major number - -type: keyword - --- - -*`auditd.data.unit`*:: -+ --- -systemd unit - -type: keyword - --- - -*`auditd.data.table`*:: -+ --- -netfilter table name - -type: keyword - --- - -*`auditd.data.terminal`*:: -+ --- -terminal name the user is running programs on - -type: keyword - --- - -*`auditd.data.grantors`*:: -+ --- -pam modules approving the action - -type: keyword - --- - -*`auditd.data.direction`*:: -+ --- -direction of crypto operation - -type: keyword - --- - -*`auditd.data.op`*:: -+ --- -the operation being performed that is audited - -type: keyword - --- - -*`auditd.data.tty`*:: -+ --- -tty device the user is running programs on - -type: keyword - --- - -*`auditd.data.syscall`*:: -+ --- -syscall number in effect when the event occurred - -type: keyword - --- - -*`auditd.data.data`*:: -+ --- -TTY text - -type: keyword - --- - -*`auditd.data.family`*:: -+ --- -netfilter protocol - -type: keyword - --- - -*`auditd.data.mac`*:: -+ --- -crypto MAC algorithm selected - -type: keyword - --- - -*`auditd.data.pfs`*:: -+ --- -perfect forward secrecy method - -type: keyword - --- - -*`auditd.data.items`*:: -+ --- -the number of path records in the event - -type: keyword - --- - -*`auditd.data.a0`*:: -+ --- - - -type: keyword - --- - -*`auditd.data.a1`*:: -+ --- - - -type: keyword - --- - -*`auditd.data.a2`*:: -+ --- - - -type: keyword - --- - -*`auditd.data.a3`*:: -+ --- - - -type: keyword - --- - -*`auditd.data.hostname`*:: -+ --- -the hostname that the user is connecting from - -type: keyword - --- - -*`auditd.data.lport`*:: -+ --- -local network port - -type: keyword - --- - -*`auditd.data.rport`*:: -+ --- -remote port number - -type: keyword - --- - -*`auditd.data.exit`*:: -+ --- -syscall exit code - -type: keyword - --- - -*`auditd.data.fp`*:: -+ --- -crypto key finger print - -type: keyword - --- - -*`auditd.data.laddr`*:: -+ --- -local network address - -type: keyword - --- - -*`auditd.data.sport`*:: -+ --- -local port number - -type: keyword - --- - -*`auditd.data.capability`*:: -+ --- -posix capabilities - -type: keyword - --- - -*`auditd.data.nargs`*:: -+ --- -the number of arguments to a socket call - -type: keyword - --- - -*`auditd.data.new-enabled`*:: -+ --- -new TTY audit enabled setting - -type: keyword - --- - -*`auditd.data.audit_backlog_limit`*:: -+ --- -audit system's backlog queue size - -type: keyword - --- - -*`auditd.data.dir`*:: -+ --- -directory name - -type: keyword -
--- - -*`auditd.data.cap_pe`*:: -+ --- -process effective capability map - -type: keyword - --- - -*`auditd.data.model`*:: -+ --- -security model being used for virt - -type: keyword - --- - -*`auditd.data.new_pp`*:: -+ --- -new process permitted capability map - -type: keyword - --- - -*`auditd.data.old-enabled`*:: -+ --- -present TTY audit enabled setting - -type: keyword - --- - -*`auditd.data.oauid`*:: -+ --- -object's login user ID - -type: keyword - --- - -*`auditd.data.old`*:: -+ --- -old value - -type: keyword - --- - -*`auditd.data.banners`*:: -+ --- -banners used on printed page - -type: keyword - --- - -*`auditd.data.feature`*:: -+ --- -kernel feature being changed - -type: keyword - --- - -*`auditd.data.vm-ctx`*:: -+ --- -the vm's context string - -type: keyword - --- - -*`auditd.data.opid`*:: -+ --- -object's process ID - -type: keyword - --- - -*`auditd.data.seperms`*:: -+ --- -SELinux permissions being used - -type: keyword - --- - -*`auditd.data.seresult`*:: -+ --- -SELinux AVC decision granted/denied - -type: keyword - --- - -*`auditd.data.new-rng`*:: -+ --- -device name of rng being added from a vm - -type: keyword - --- - -*`auditd.data.old-net`*:: -+ --- -present MAC address assigned to vm - -type: keyword - --- - -*`auditd.data.sigev_signo`*:: -+ --- -signal number - -type: keyword - --- - -*`auditd.data.ino`*:: -+ --- -inode number - -type: keyword - --- - -*`auditd.data.old_enforcing`*:: -+ --- -old MAC enforcement status - -type: keyword - --- - -*`auditd.data.old-vcpu`*:: -+ --- -present number of CPU cores - -type: keyword - --- - -*`auditd.data.range`*:: -+ --- -user's SE Linux range - -type: keyword - --- - -*`auditd.data.res`*:: -+ --- -result of the audited operation (success/fail) - -type: keyword - --- - -*`auditd.data.added`*:: -+ --- -number of new files detected - -type: keyword - --- - -*`auditd.data.fam`*:: -+ --- -socket address family - -type: keyword - --- - -*`auditd.data.nlnk-pid`*:: -+ --- -pid of netlink packet sender - -type: keyword - --- - -*`auditd.data.subj`*:: -+ --- -lspp subject's context string - -type: keyword - --- - -*`auditd.data.a[0-3]`*:: -+ --- -the arguments to a syscall - -type: keyword - --- - -*`auditd.data.cgroup`*:: -+ --- -path to cgroup in sysfs - -type: keyword - --- - -*`auditd.data.kernel`*:: -+ --- -kernel's version number - -type: keyword - --- - -*`auditd.data.ocomm`*:: -+ --- -object's command line name - -type: keyword - --- - -*`auditd.data.new-net`*:: -+ --- -MAC address being assigned to vm - -type: keyword - --- - -*`auditd.data.permissive`*:: -+ --- -SELinux is in permissive mode - -type: keyword - --- - -*`auditd.data.class`*:: -+ --- -resource class assigned to vm - -type: keyword - --- - -*`auditd.data.compat`*:: -+ --- -is_compat_task result - -type: keyword - --- - -*`auditd.data.fi`*:: -+ --- -file assigned inherited capability map - -type: keyword - --- - -*`auditd.data.changed`*:: -+ --- -number of changed files - -type: keyword - --- - -*`auditd.data.msg`*:: -+ --- -the payload of the audit record - -type: keyword - --- - -*`auditd.data.dport`*:: -+ --- -remote port number - -type: keyword - --- - -*`auditd.data.new-seuser`*:: -+ --- -new SELinux user - -type: keyword - --- - -*`auditd.data.invalid_context`*:: -+ --- -SELinux context - -type: keyword - --- - -*`auditd.data.dmac`*:: -+ --- -remote MAC address - -type: keyword - --- - -*`auditd.data.ipx-net`*:: -+ --- -IPX network number - -type: keyword - --- - -*`auditd.data.iuid`*:: -+ --- -ipc object's user ID - -type: keyword - --- -
-*`auditd.data.macproto`*:: -+ --- -ethernet packet type ID field - -type: keyword - --- - -*`auditd.data.obj`*:: -+ --- -lspp object context string - -type: keyword - --- - -*`auditd.data.ipid`*:: -+ --- -IP datagram fragment identifier - -type: keyword - --- - -*`auditd.data.new-fs`*:: -+ --- -file system being added to vm - -type: keyword - --- - -*`auditd.data.vm-pid`*:: -+ --- -vm's process ID - -type: keyword - --- - -*`auditd.data.cap_pi`*:: -+ --- -process inherited capability map - -type: keyword - --- - -*`auditd.data.old-auid`*:: -+ --- -previous auid value - -type: keyword - --- - -*`auditd.data.oses`*:: -+ --- -object's session ID - -type: keyword - --- - -*`auditd.data.fd`*:: -+ --- -file descriptor number - -type: keyword - --- - -*`auditd.data.igid`*:: -+ --- -ipc object's group ID - -type: keyword - --- - -*`auditd.data.new-disk`*:: -+ --- -disk being added to vm - -type: keyword - --- - -*`auditd.data.parent`*:: -+ --- -the inode number of the parent file - -type: keyword - --- - -*`auditd.data.len`*:: -+ --- -length - -type: keyword - --- - -*`auditd.data.oflag`*:: -+ --- -open syscall flags - -type: keyword - --- - -*`auditd.data.uuid`*:: -+ --- -a UUID - -type: keyword - --- - -*`auditd.data.code`*:: -+ --- -seccomp action code - -type: keyword - --- - -*`auditd.data.nlnk-grp`*:: -+ --- -netlink group number - -type: keyword - --- - -*`auditd.data.cap_fp`*:: -+ --- -file permitted capability map - -type: keyword - --- - -*`auditd.data.new-mem`*:: -+ --- -new amount of memory in KB - -type: keyword - --- - -*`auditd.data.seperm`*:: -+ --- -SELinux permission being decided on - -type: keyword - --- - -*`auditd.data.enforcing`*:: -+ --- -new MAC enforcement status - -type: keyword - --- - -*`auditd.data.new-chardev`*:: -+ --- -new character device being assigned to vm - -type: keyword - --- - -*`auditd.data.old-rng`*:: -+ --- -device name of rng being removed from a vm - -type: keyword - --- - -*`auditd.data.outif`*:: -+ --- -out interface number - -type: keyword - --- - -*`auditd.data.cmd`*:: -+ --- -command being executed - -type: keyword - --- - -*`auditd.data.hook`*:: -+ --- -netfilter hook that packet came from - -type: keyword - --- - -*`auditd.data.new-level`*:: -+ --- -new run level - -type: keyword - --- - -*`auditd.data.sauid`*:: -+ --- -sent login user ID - -type: keyword - --- - -*`auditd.data.sig`*:: -+ --- -signal number - -type: keyword - --- - -*`auditd.data.audit_backlog_wait_time`*:: -+ --- -audit system's backlog wait time - -type: keyword - --- - -*`auditd.data.printer`*:: -+ --- -printer name - -type: keyword - --- - -*`auditd.data.old-mem`*:: -+ --- -present amount of memory in KB - -type: keyword - --- - -*`auditd.data.perm`*:: -+ --- -the file permission being used - -type: keyword - --- - -*`auditd.data.old_pi`*:: -+ --- -old process inherited capability map - -type: keyword - --- - -*`auditd.data.state`*:: -+ --- -audit daemon configuration resulting state - -type: keyword - --- - -*`auditd.data.format`*:: -+ --- -audit log's format - -type: keyword - --- - -*`auditd.data.new_gid`*:: -+ --- -new group ID being assigned - -type: keyword - --- - -*`auditd.data.tcontext`*:: -+ --- -the target's or object's context string - -type: keyword - --- - -*`auditd.data.maj`*:: -+ --- -device major number - -type: keyword - --- - -*`auditd.data.watch`*:: -+ --- -file name in a watch record - -type: keyword - --- - -*`auditd.data.device`*:: -+ --- -device name - -type: keyword - --- - -*`auditd.data.grp`*:: -+ --- -group name - -type: keyword - --- - 
-*`auditd.data.bool`*:: -+ --- -name of SELinux boolean - -type: keyword - --- - -*`auditd.data.icmp_type`*:: -+ --- -type of icmp message - -type: keyword - --- - -*`auditd.data.new_lock`*:: -+ --- -new value of feature lock - -type: keyword - --- - -*`auditd.data.old_prom`*:: -+ --- -network promiscuity flag - -type: keyword - --- - -*`auditd.data.acl`*:: -+ --- -access mode of resource assigned to vm - -type: keyword - --- - -*`auditd.data.ip`*:: -+ --- -network address of a printer - -type: keyword - --- - -*`auditd.data.new_pi`*:: -+ --- -new process inherited capability map - -type: keyword - --- - -*`auditd.data.default-context`*:: -+ --- -default MAC context - -type: keyword - --- - -*`auditd.data.inode_gid`*:: -+ --- -group ID of the inode's owner - -type: keyword - --- - -*`auditd.data.new-log_passwd`*:: -+ --- -new value for TTY password logging - -type: keyword - --- - -*`auditd.data.new_pe`*:: -+ --- -new process effective capability map - -type: keyword - --- - -*`auditd.data.selected-context`*:: -+ --- -new MAC context assigned to session - -type: keyword - --- - -*`auditd.data.cap_fver`*:: -+ --- -file system capabilities version number - -type: keyword - --- - -*`auditd.data.file`*:: -+ --- -file name - -type: keyword - --- - -*`auditd.data.net`*:: -+ --- -network MAC address - -type: keyword - --- - -*`auditd.data.virt`*:: -+ --- -kind of virtualization being referenced - -type: keyword - --- - -*`auditd.data.cap_pp`*:: -+ --- -process permitted capability map - -type: keyword - --- - -*`auditd.data.old-range`*:: -+ --- -present SELinux range - -type: keyword - --- - -*`auditd.data.resrc`*:: -+ --- -resource being assigned - -type: keyword - --- - -*`auditd.data.new-range`*:: -+ --- -new SELinux range - -type: keyword - --- - -*`auditd.data.obj_gid`*:: -+ --- -group ID of object - -type: keyword - --- - -*`auditd.data.proto`*:: -+ --- -network protocol - -type: keyword - --- - -*`auditd.data.old-disk`*:: -+ --- -disk being removed from vm - -type: keyword - --- - -*`auditd.data.audit_failure`*:: -+ --- -audit system's failure mode - -type: keyword - --- - -*`auditd.data.inif`*:: -+ --- -in interface number - -type: keyword - --- - -*`auditd.data.vm`*:: -+ --- -virtual machine name - -type: keyword - --- - -*`auditd.data.flags`*:: -+ --- -mmap syscall flags - -type: keyword - --- - -*`auditd.data.nlnk-fam`*:: -+ --- -netlink protocol number - -type: keyword - --- - -*`auditd.data.old-fs`*:: -+ --- -file system being removed from vm - -type: keyword - --- - -*`auditd.data.old-ses`*:: -+ --- -previous ses value - -type: keyword - --- - -*`auditd.data.seqno`*:: -+ --- -sequence number - -type: keyword - --- - -*`auditd.data.fver`*:: -+ --- -file system capabilities version number - -type: keyword - --- - -*`auditd.data.qbytes`*:: -+ --- -ipc objects quantity of bytes - -type: keyword - --- - -*`auditd.data.seuser`*:: -+ --- -user's SE Linux user acct - -type: keyword - --- - -*`auditd.data.cap_fe`*:: -+ --- -file assigned effective capability map - -type: keyword - --- - -*`auditd.data.new-vcpu`*:: -+ --- -new number of CPU cores - -type: keyword - --- - -*`auditd.data.old-level`*:: -+ --- -old run level - -type: keyword - --- - -*`auditd.data.old_pp`*:: -+ --- -old process permitted capability map - -type: keyword - --- - -*`auditd.data.daddr`*:: -+ --- -remote IP address - -type: keyword - --- - -*`auditd.data.old-role`*:: -+ --- -present SELinux role - -type: keyword - --- - -*`auditd.data.ioctlcmd`*:: -+ --- -The request argument to the ioctl syscall - -type: keyword - 
--- - -*`auditd.data.smac`*:: -+ --- -local MAC address - -type: keyword - --- - -*`auditd.data.apparmor`*:: -+ --- -apparmor event information - -type: keyword - --- - -*`auditd.data.fe`*:: -+ --- -file assigned effective capability map - -type: keyword - --- - -*`auditd.data.perm_mask`*:: -+ --- -file permission mask that triggered a watch event - -type: keyword - --- - -*`auditd.data.ses`*:: -+ --- -login session ID - -type: keyword - --- - -*`auditd.data.cap_fi`*:: -+ --- -file inherited capability map - -type: keyword - --- - -*`auditd.data.obj_uid`*:: -+ --- -user ID of object - -type: keyword - --- - -*`auditd.data.reason`*:: -+ --- -text string denoting a reason for the action - -type: keyword - --- - -*`auditd.data.list`*:: -+ --- -the audit system's filter list number - -type: keyword - --- - -*`auditd.data.old_lock`*:: -+ --- -present value of feature lock - -type: keyword - --- - -*`auditd.data.bus`*:: -+ --- -name of subsystem bus a vm resource belongs to - -type: keyword - --- - -*`auditd.data.old_pe`*:: -+ --- -old process effective capability map - -type: keyword - --- - -*`auditd.data.new-role`*:: -+ --- -new SELinux role - -type: keyword - --- - -*`auditd.data.prom`*:: -+ --- -network promiscuity flag - -type: keyword - --- - -*`auditd.data.uri`*:: -+ --- -URI pointing to a printer - -type: keyword - --- - -*`auditd.data.audit_enabled`*:: -+ --- -audit system's enable/disable status - -type: keyword - --- - -*`auditd.data.old-log_passwd`*:: -+ --- -present value for TTY password logging - -type: keyword - --- - -*`auditd.data.old-seuser`*:: -+ --- -present SELinux user - -type: keyword - --- - -*`auditd.data.per`*:: -+ --- -linux personality - -type: keyword - --- - -*`auditd.data.scontext`*:: -+ --- -the subject's context string - -type: keyword - --- - -*`auditd.data.tclass`*:: -+ --- -target's object classification - -type: keyword - --- - -*`auditd.data.ver`*:: -+ --- -audit daemon's version number - -type: keyword - --- - -*`auditd.data.new`*:: -+ --- -value being set in feature - -type: keyword - --- - -*`auditd.data.val`*:: -+ --- -generic value associated with the operation - -type: keyword - --- - -*`auditd.data.img-ctx`*:: -+ --- -the vm's disk image context string - -type: keyword - --- - -*`auditd.data.old-chardev`*:: -+ --- -present character device assigned to vm - -type: keyword - --- - -*`auditd.data.old_val`*:: -+ --- -current value of SELinux boolean - -type: keyword - --- - -*`auditd.data.success`*:: -+ --- -whether the syscall was successful or not - -type: keyword - --- - -*`auditd.data.inode_uid`*:: -+ --- -user ID of the inode's owner - -type: keyword - --- - -*`auditd.data.removed`*:: -+ --- -number of deleted files - -type: keyword - --- - - -*`auditd.data.socket.port`*:: -+ --- -The port number. - -type: keyword - --- - -*`auditd.data.socket.saddr`*:: -+ --- -The raw socket address structure. - -type: keyword - --- - -*`auditd.data.socket.addr`*:: -+ --- -The remote address. - -type: keyword - --- - -*`auditd.data.socket.family`*:: -+ --- -The socket family (unix, ipv4, ipv6, netlink). - -type: keyword - -example: unix - --- - -*`auditd.data.socket.path`*:: -+ --- -This is the path associated with a unix socket. - -type: keyword - --- - -*`auditd.messages`*:: -+ --- -An ordered list of the raw messages received from the kernel that were used to construct this document. This field is present if an error occurred processing the data or if `include_raw_message` is set in the config. - - -type: alias - -alias to: event.original - ---
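A minimal sketch of the auditd module settings that control the fields in this section (`resolve_ids` and `include_raw_message` are referenced above; `include_warnings`, which relates to `auditd.warnings` described next, is an assumption based on the module docs):

[source,yaml]
----
auditbeat.modules:
- module: auditd
  resolve_ids: true          # populate the user.name_map.* aliases
  include_raw_message: true  # keep the raw kernel messages in event.original
  include_warnings: true     # assumed option name; debug use only
----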
- -*`auditd.warnings`*:: -+ --- -The warnings generated by the Beat during the construction of the event. These are disabled by default and are used for development and debug purposes only. - - -type: alias - -alias to: error.message - --- - -[float] -=== geoip - -The geoip fields are defined as a convenience in case you decide to enrich the data using a geoip filter in Logstash or an Elasticsearch geoip ingest processor. - - - -*`geoip.continent_name`*:: -+ --- -The name of the continent. - - -type: keyword - --- - -*`geoip.city_name`*:: -+ --- -The name of the city. - - -type: keyword - --- - -*`geoip.region_name`*:: -+ --- -The name of the region. - - -type: keyword - --- - -*`geoip.country_iso_code`*:: -+ --- -Country ISO code. - - -type: keyword - --- - -*`geoip.location`*:: -+ --- -The longitude and latitude. - - -type: geo_point - --- - -[[exported-fields-beat-common]] -== Beat fields - -Contains common beat fields available in all event types. - - - -*`agent.hostname`*:: -+ --- -Deprecated - use agent.name or agent.id to identify an agent. - - -type: alias - -alias to: agent.name - --- - -*`beat.timezone`*:: -+ --- -type: alias - -alias to: event.timezone - --- - -*`fields`*:: -+ --- -Contains user configurable fields. - - -type: object - --- - -*`beat.name`*:: -+ --- -type: alias - -alias to: host.name - --- - -*`beat.hostname`*:: -+ --- -type: alias - -alias to: agent.name - --- - -*`timeseries.instance`*:: -+ --- -Time series instance id - -type: keyword - --- - -[[exported-fields-cloud]] -== Cloud provider metadata fields - -Metadata from cloud providers added by the add_cloud_metadata processor. - - - -*`cloud.image.id`*:: -+ --- -Image ID for the cloud instance. - - -example: ami-abcd1234 - --- - -*`meta.cloud.provider`*:: -+ --- -type: alias - -alias to: cloud.provider - --- - -*`meta.cloud.instance_id`*:: -+ --- -type: alias - -alias to: cloud.instance.id - --- - -*`meta.cloud.instance_name`*:: -+ --- -type: alias - -alias to: cloud.instance.name - --- - -*`meta.cloud.machine_type`*:: -+ --- -type: alias - -alias to: cloud.machine.type - --- - -*`meta.cloud.availability_zone`*:: -+ --- -type: alias - -alias to: cloud.availability_zone - --- - -*`meta.cloud.project_id`*:: -+ --- -type: alias - -alias to: cloud.project.id - --- - -*`meta.cloud.region`*:: -+ --- -type: alias - -alias to: cloud.region - --- - -[[exported-fields-common]] -== Common fields - -Contains common fields available in all event types. - - - -[float] -=== file - -File attributes. - - -*`file.setuid`*:: -+ --- -Set if the file has the `setuid` bit set. Omitted otherwise. - -type: boolean - -example: True - --- - -*`file.setgid`*:: -+ --- -Set if the file has the `setgid` bit set. Omitted otherwise. - -type: boolean - -example: True - --- - -*`file.origin`*:: -+ --- -An array of strings describing a possible external origin for this file. For example, the URL it was downloaded from. Only supported in macOS, via the kMDItemWhereFroms attribute. Omitted if origin information is not available. - - -type: keyword - --- - -*`file.origin.text`*:: -+ --- -This is an analyzed field that is useful for full text search on the origin data. - - -type: text - --- - -[float] -=== selinux - -The SELinux identity of the file. - - -*`file.selinux.user`*:: -+ --- -The owner of the object. - -type: keyword - --- - -*`file.selinux.role`*:: -+ --- -The object's SELinux role. - -type: keyword - --- - -*`file.selinux.domain`*:: -+ --- -The object's SELinux domain or type.
- -type: keyword - --- - -*`file.selinux.level`*:: -+ --- -The object's SELinux level. - -type: keyword - -example: s0 - --- - -[float] -=== user - -User information. - - -[float] -=== audit - -Audit user information. - - -*`user.audit.id`*:: -+ --- -Audit user ID. - -type: keyword - --- - -*`user.audit.name`*:: -+ --- -Audit user name. - -type: keyword - --- - -[float] -=== filesystem - -Filesystem user information. - - -*`user.filesystem.id`*:: -+ --- -Filesystem user ID. - -type: keyword - --- - -*`user.filesystem.name`*:: -+ --- -Filesystem user name. - -type: keyword - --- - -[float] -=== group - -Filesystem group information. - - -*`user.filesystem.group.id`*:: -+ --- -Filesystem group ID. - -type: keyword - --- - -*`user.filesystem.group.name`*:: -+ --- -Filesystem group name. - -type: keyword - --- - -[float] -=== saved - -Saved user information. - - -*`user.saved.id`*:: -+ --- -Saved user ID. - -type: keyword - --- - -*`user.saved.name`*:: -+ --- -Saved user name. - -type: keyword - --- - -[float] -=== group - -Saved group information. - - -*`user.saved.group.id`*:: -+ --- -Saved group ID. - -type: keyword - --- - -*`user.saved.group.name`*:: -+ --- -Saved group name. - -type: keyword - --- - -[[exported-fields-docker-processor]] -== Docker fields - -Docker stats collected from Docker. - - - - -*`docker.container.id`*:: -+ --- -type: alias - -alias to: container.id - --- - -*`docker.container.image`*:: -+ --- -type: alias - -alias to: container.image.name - --- - -*`docker.container.name`*:: -+ --- -type: alias - -alias to: container.name - --- - -*`docker.container.labels`*:: -+ --- -Image labels. - - -type: object - --- - -[[exported-fields-ecs]] -== ECS fields - - -This section defines Elastic Common Schema (ECS) fields—a common set of fields -to be used when storing event data in {es}. - -This is an exhaustive list, and fields listed here are not necessarily used by {beatname_uc}. -The goal of ECS is to enable and encourage users of {es} to normalize their event data, -so that they can better analyze, visualize, and correlate the data represented in their events. - -See the {ecs-ref}[ECS reference] for more information. - -*`@timestamp`*:: -+ --- -Date/time when the event originated. -This is the date/time extracted from the event, typically representing when the event was generated by the source. -If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. -Required field for all events. - -type: date - -example: 2016-05-23T08:05:34.853Z - -required: True - --- - -*`labels`*:: -+ --- -Custom key/value pairs. -Can be used to add meta information to events. Should not contain nested objects. All values are stored as keyword. -Example: `docker` and `k8s` labels. - -type: object - -example: {"application": "foo-bar", "env": "production"} - --- - -*`message`*:: -+ --- -For log events the message field contains the log message, optimized for viewing in a log viewer. -For structured logs without an original message field, other fields can be concatenated to form a human-readable summary of the event. -If multiple messages exist, they can be combined into one message. - -type: match_only_text - -example: Hello World - --- - -*`tags`*:: -+ --- -List of keywords used to tag each event. 
- -type: keyword - -example: ["production", "env2"] - --- - -[float] -=== agent - -The agent fields contain the data about the software entity, if any, that collects, detects, or observes events on a host, or takes measurements on a host. -Examples include Beats. Agents may also run on observers. ECS agent.* fields shall be populated with details of the agent running on the host or observer where the event happened or the measurement was taken. - - -*`agent.build.original`*:: -+ --- -Extended build information for the agent. -This field is intended to contain any build information that a data source may provide, no specific formatting is required. - -type: keyword - -example: metricbeat version 7.6.0 (amd64), libbeat 7.6.0 [6a23e8f8f30f5001ba344e4e54d8d9cb82cb107c built 2020-02-05 23:10:10 +0000 UTC] - --- - -*`agent.ephemeral_id`*:: -+ --- -Ephemeral identifier of this agent (if one exists). -This id normally changes across restarts, but `agent.id` does not. - -type: keyword - -example: 8a4f500f - --- - -*`agent.id`*:: -+ --- -Unique identifier of this agent (if one exists). -Example: For Beats this would be beat.id. - -type: keyword - -example: 8a4f500d - --- - -*`agent.name`*:: -+ --- -Custom name of the agent. -This is a name that can be given to an agent. This can be helpful if, for example, two Filebeat instances are running on the same host but a human-readable separation is needed to show which Filebeat instance the data is coming from. -If no name is given, the name is often left empty. - -type: keyword - -example: foo - --- - -*`agent.type`*:: -+ --- -Type of the agent. -The agent type always stays the same and should be given by the agent used. In case of Filebeat the agent would always be Filebeat also if two Filebeat instances are run on the same machine. - -type: keyword - -example: filebeat - --- - -*`agent.version`*:: -+ --- -Version of the agent. - -type: keyword - -example: 6.0.0-rc2 - --- - -[float] -=== as - -An autonomous system (AS) is a collection of connected Internet Protocol (IP) routing prefixes under the control of one or more network operators on behalf of a single administrative entity or domain that presents a common, clearly defined routing policy to the internet. - - -*`as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - --- - -*`as.organization.name`*:: -+ --- -Organization name. - -type: keyword - -example: Google LLC - --- - -*`as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -[float] -=== client - -A client is defined as the initiator of a network connection for events regarding sessions, connections, or bidirectional flow records. -For TCP events, the client is the initiator of the TCP connection that sends the SYN packet(s). For other protocols, the client is generally the initiator or requestor in the network transaction. Some systems use the term "originator" to refer to the client in TCP connections. The client fields describe details about the system acting as the client in the network event. Client fields are usually populated in conjunction with server fields. Client fields are generally not populated for packet-level events. -Client / server representations can add semantic context to an exchange, which is helpful to visualize the data in certain situations. If your context falls in that category, you should still ensure that source and destination are filled appropriately.
- - -*`client.address`*:: -+ --- -Some event client addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. -Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - --- - -*`client.as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - --- - -*`client.as.organization.name`*:: -+ --- -Organization name. - -type: keyword - -example: Google LLC - --- - -*`client.as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -*`client.bytes`*:: -+ --- -Bytes sent from the client to the server. - -type: long - -example: 184 - -format: bytes - --- - -*`client.domain`*:: -+ --- -The domain name of the client system. -This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment. - -type: keyword - -example: foo.example.com - --- - -*`client.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - --- - -*`client.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`client.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - --- - -*`client.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - --- - -*`client.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - --- - -*`client.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - --- - -*`client.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - --- - -*`client.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`client.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - --- - -*`client.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - --- - -*`client.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`client.ip`*:: -+ --- -IP address of the client (IPv4 or IPv6). - -type: ip - --- - -*`client.mac`*:: -+ --- -MAC address of the client. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: 00-00-5E-00-53-23 - --- - -*`client.nat.ip`*:: -+ --- -Translated IP of source based NAT sessions (e.g. internal client to internet). -Typically connections traversing load balancers, firewalls, or routers. - -type: ip - --- - -*`client.nat.port`*:: -+ --- -Translated port of source based NAT sessions (e.g. internal client to internet). -Typically connections traversing load balancers, firewalls, or routers. 
- -type: long - -format: string - --- - -*`client.packets`*:: -+ --- -Packets sent from the client to the server. - -type: long - -example: 12 - --- - -*`client.port`*:: -+ --- -Port of the client. - -type: long - -format: string - --- - -*`client.registered_domain`*:: -+ --- -The highest registered client domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - --- - -*`client.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`client.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - --- - -*`client.user.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`client.user.email`*:: -+ --- -User email address. - -type: keyword - --- - -*`client.user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - --- - -*`client.user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`client.user.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`client.user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - --- - -*`client.user.group.name`*:: -+ --- -Name of the group. - -type: keyword - --- - -*`client.user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - --- - -*`client.user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - --- - -*`client.user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - --- - -*`client.user.name.text`*:: -+ --- -type: match_only_text - --- - -*`client.user.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - --- - -[float] -=== cloud - -Fields related to the cloud or infrastructure the events are coming from. - - -*`cloud.account.id`*:: -+ --- -The cloud account or organization id used to identify different entities in a multi-tenant environment.
-Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. - -type: keyword - -example: 666777888999 - --- - -*`cloud.account.name`*:: -+ --- -The cloud account name or alias used to identify different entities in a multi-tenant environment. -Examples: AWS account name, Google Cloud ORG display name. - -type: keyword - -example: elastic-dev - --- - -*`cloud.availability_zone`*:: -+ --- -Availability zone in which this host, resource, or service is located. - -type: keyword - -example: us-east-1c - --- - -*`cloud.instance.id`*:: -+ --- -Instance ID of the host machine. - -type: keyword - -example: i-1234567890abcdef0 - --- - -*`cloud.instance.name`*:: -+ --- -Instance name of the host machine. - -type: keyword - --- - -*`cloud.machine.type`*:: -+ --- -Machine type of the host machine. - -type: keyword - -example: t2.medium - --- - -*`cloud.origin.account.id`*:: -+ --- -The cloud account or organization id used to identify different entities in a multi-tenant environment. -Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. - -type: keyword - -example: 666777888999 - --- - -*`cloud.origin.account.name`*:: -+ --- -The cloud account name or alias used to identify different entities in a multi-tenant environment. -Examples: AWS account name, Google Cloud ORG display name. - -type: keyword - -example: elastic-dev - --- - -*`cloud.origin.availability_zone`*:: -+ --- -Availability zone in which this host, resource, or service is located. - -type: keyword - -example: us-east-1c - --- - -*`cloud.origin.instance.id`*:: -+ --- -Instance ID of the host machine. - -type: keyword - -example: i-1234567890abcdef0 - --- - -*`cloud.origin.instance.name`*:: -+ --- -Instance name of the host machine. - -type: keyword - --- - -*`cloud.origin.machine.type`*:: -+ --- -Machine type of the host machine. - -type: keyword - -example: t2.medium - --- - -*`cloud.origin.project.id`*:: -+ --- -The cloud project identifier. -Examples: Google Cloud Project id, Azure Project id. - -type: keyword - -example: my-project - --- - -*`cloud.origin.project.name`*:: -+ --- -The cloud project name. -Examples: Google Cloud Project name, Azure Project name. - -type: keyword - -example: my project - --- - -*`cloud.origin.provider`*:: -+ --- -Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. - -type: keyword - -example: aws - --- - -*`cloud.origin.region`*:: -+ --- -Region in which this host, resource, or service is located. - -type: keyword - -example: us-east-1 - --- - -*`cloud.origin.service.name`*:: -+ --- -The cloud service name is intended to distinguish services running on different platforms within a provider, eg AWS EC2 vs Lambda, GCP GCE vs App Engine, Azure VM vs App Server. -Examples: app engine, app service, cloud run, fargate, lambda. - -type: keyword - -example: lambda - --- - -*`cloud.project.id`*:: -+ --- -The cloud project identifier. -Examples: Google Cloud Project id, Azure Project id. - -type: keyword - -example: my-project - --- - -*`cloud.project.name`*:: -+ --- -The cloud project name. -Examples: Google Cloud Project name, Azure Project name. - -type: keyword - -example: my project - --- - -*`cloud.provider`*:: -+ --- -Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. - -type: keyword - -example: aws - --- - -*`cloud.region`*:: -+ --- -Region in which this host, resource, or service is located. 
- -type: keyword - -example: us-east-1 - --- - -*`cloud.service.name`*:: -+ --- -The cloud service name is intended to distinguish services running on different platforms within a provider, eg AWS EC2 vs Lambda, GCP GCE vs App Engine, Azure VM vs App Server. -Examples: app engine, app service, cloud run, fargate, lambda. - -type: keyword - -example: lambda - --- - -*`cloud.target.account.id`*:: -+ --- -The cloud account or organization id used to identify different entities in a multi-tenant environment. -Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. - -type: keyword - -example: 666777888999 - --- - -*`cloud.target.account.name`*:: -+ --- -The cloud account name or alias used to identify different entities in a multi-tenant environment. -Examples: AWS account name, Google Cloud ORG display name. - -type: keyword - -example: elastic-dev - --- - -*`cloud.target.availability_zone`*:: -+ --- -Availability zone in which this host, resource, or service is located. - -type: keyword - -example: us-east-1c - --- - -*`cloud.target.instance.id`*:: -+ --- -Instance ID of the host machine. - -type: keyword - -example: i-1234567890abcdef0 - --- - -*`cloud.target.instance.name`*:: -+ --- -Instance name of the host machine. - -type: keyword - --- - -*`cloud.target.machine.type`*:: -+ --- -Machine type of the host machine. - -type: keyword - -example: t2.medium - --- - -*`cloud.target.project.id`*:: -+ --- -The cloud project identifier. -Examples: Google Cloud Project id, Azure Project id. - -type: keyword - -example: my-project - --- - -*`cloud.target.project.name`*:: -+ --- -The cloud project name. -Examples: Google Cloud Project name, Azure Project name. - -type: keyword - -example: my project - --- - -*`cloud.target.provider`*:: -+ --- -Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. - -type: keyword - -example: aws - --- - -*`cloud.target.region`*:: -+ --- -Region in which this host, resource, or service is located. - -type: keyword - -example: us-east-1 - --- - -*`cloud.target.service.name`*:: -+ --- -The cloud service name is intended to distinguish services running on different platforms within a provider, eg AWS EC2 vs Lambda, GCP GCE vs App Engine, Azure VM vs App Server. -Examples: app engine, app service, cloud run, fargate, lambda. - -type: keyword - -example: lambda - --- - -[float] -=== code_signature - -These fields contain information about binary code signatures. - - -*`code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - --- - -*`code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. 
- -type: keyword - -example: ERROR_UNTRUSTED_ROOT - --- - -*`code_signature.subject_name`*:: -+ --- -Subject name of the code signer. - -type: keyword - -example: Microsoft Corporation - --- - -*`code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - --- - -*`code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - --- - -[float] -=== container - -Container fields are used for meta information about the specific container that is the source of information. These fields help correlate data based on containers from any runtime. - - -*`container.cpu.usage`*:: -+ --- -Percent CPU used, normalized by the number of CPU cores. It ranges from 0 to 1. Scaling factor: 1000. - -type: scaled_float - --- - -*`container.disk.read.bytes`*:: -+ --- -The total number of bytes (gauge) read successfully (aggregated from all disks) since the last metric collection. - -type: long - --- - -*`container.disk.write.bytes`*:: -+ --- -The total number of bytes (gauge) written successfully (aggregated from all disks) since the last metric collection. - -type: long - --- - -*`container.id`*:: -+ --- -Unique container id. - -type: keyword - --- - -*`container.image.name`*:: -+ --- -Name of the image the container was built on. - -type: keyword - --- - -*`container.image.tag`*:: -+ --- -Container image tags. - -type: keyword - --- - -*`container.labels`*:: -+ --- -Image labels. - -type: object - --- - -*`container.memory.usage`*:: -+ --- -Memory usage percentage. It ranges from 0 to 1. Scaling factor: 1000. - -type: scaled_float - --- - -*`container.name`*:: -+ --- -Container name. - -type: keyword - --- - -*`container.network.egress.bytes`*:: -+ --- -The number of bytes (gauge) sent out on all network interfaces by the container since the last metric collection. - -type: long - --- - -*`container.network.ingress.bytes`*:: -+ --- -The number of bytes received (gauge) on all network interfaces by the container since the last metric collection. - -type: long - --- - -*`container.runtime`*:: -+ --- -Runtime managing this container. - -type: keyword - -example: docker - --- - -[float] -=== data_stream - -The data_stream fields take part in defining the new data stream naming scheme. -In the new data stream naming scheme the values of the data stream fields combine to form the name of the actual data stream in the following manner: `{data_stream.type}-{data_stream.dataset}-{data_stream.namespace}`. This means the fields can only contain characters that are valid as part of names of data streams. More details about this can be found in this https://www.elastic.co/blog/an-introduction-to-the-elastic-data-stream-naming-scheme[blog post]. -An Elasticsearch data stream consists of one or more backing indices, and a data stream name forms part of the backing indices names. Due to this convention, data streams must also follow index naming restrictions. For example, data stream names cannot include `\`, `/`, `*`, `?`, `"`, `<`, `>`, `|`, ` ` (space character), `,`, or `#`. Please see the Elasticsearch reference for additional https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html#indices-create-api-path-params[restrictions]. - - -*`data_stream.dataset`*:: -+ --- -The field can contain anything that makes sense to signify the source of the data. -Examples include `nginx.access`, `prometheus`, `endpoint` etc. For data streams that otherwise fit, but that do not have a dataset set, we use the value "generic" for the dataset value. `event.dataset` should have the same value as `data_stream.dataset`. -Beyond the Elasticsearch data stream naming criteria noted above, the `dataset` value has additional restrictions: - * Must not contain `-` - * No longer than 100 characters - -type: constant_keyword - -example: nginx.access - --- - -*`data_stream.namespace`*:: -+ --- -A user defined namespace. Namespaces are useful to allow grouping of data. -Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with `default`. If no value is used, it falls back to `default`. -Beyond the Elasticsearch index naming criteria noted above, `namespace` value has the additional restrictions: - * Must not contain `-` - * No longer than 100 characters - -type: constant_keyword - -example: production - --- - -*`data_stream.type`*:: -+ --- -An overarching type for the data stream. -Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future. - -type: constant_keyword - -example: logs - ---
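Combining the example values documented for these three fields, a concrete data stream name would look like this (illustrative only):

[source,yaml]
----
data_stream.type: logs
data_stream.dataset: nginx.access
data_stream.namespace: production
# resulting data stream name: logs-nginx.access-production
----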
Due to this convention, data streams must also follow index naming restrictions. For example, data stream names cannot include `\`, `/`, `*`, `?`, `"`, `<`, `>`, `|`, ` ` (space character), `,`, or `#`. Please see the Elasticsearch reference for additional https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html#indices-create-api-path-params[restrictions]. - - -*`data_stream.dataset`*:: -+ --- -The field can contain anything that makes sense to signify the source of the data. -Examples include `nginx.access`, `prometheus`, `endpoint` etc. For data streams that otherwise fit, but that do not have dataset set we use the value "generic" for the dataset value. `event.dataset` should have the same value as `data_stream.dataset`. -Beyond the Elasticsearch data stream naming criteria noted above, the `dataset` value has additional restrictions: - * Must not contain `-` - * No longer than 100 characters - -type: constant_keyword - -example: nginx.access - --- - -*`data_stream.namespace`*:: -+ --- -A user defined namespace. Namespaces are useful to allow grouping of data. -Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with `default`. If no value is used, it falls back to `default`. -Beyond the Elasticsearch index naming criteria noted above, `namespace` value has the additional restrictions: - * Must not contain `-` - * No longer than 100 characters - -type: constant_keyword - -example: production - --- - -*`data_stream.type`*:: -+ --- -An overarching type for the data stream. -Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future. - -type: constant_keyword - -example: logs - --- - -[float] -=== destination - -Destination fields capture details about the receiver of a network exchange/packet. These fields are populated from a network event, packet, or other event containing details of a network transaction. -Destination fields are usually populated in conjunction with source fields. The source and destination fields are considered the baseline and should always be filled if an event contains source and destination details from a network transaction. If the event also contains identification of the client and server roles, then the client and server fields should also be populated. - - -*`destination.address`*:: -+ --- -Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. -Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - --- - -*`destination.as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - --- - -*`destination.as.organization.name`*:: -+ --- -Organization name. - -type: keyword - -example: Google LLC - --- - -*`destination.as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -*`destination.bytes`*:: -+ --- -Bytes sent from the destination to the source. - -type: long - -example: 184 - -format: bytes - --- - -*`destination.domain`*:: -+ --- -The domain name of the destination system. -This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment. 
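Following the `destination.address` guidance above, a collector might duplicate a raw address into `.ip` or `.domain` roughly like this (a hedged Python sketch):

[source,python]
----
import ipaddress

def set_destination(event, raw_address):
    """Keep the raw address, then duplicate it to .ip or .domain."""
    event["destination.address"] = raw_address
    try:
        ipaddress.ip_address(raw_address)
        event["destination.ip"] = raw_address
    except ValueError:  # not an IP literal, treat it as a domain
        event["destination.domain"] = raw_address
    return event

set_destination({}, "foo.example.com")  # populates destination.domain
set_destination({}, "10.1.2.3")         # populates destination.ip
----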
- -type: keyword - -example: foo.example.com - --- - -*`destination.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - --- - -*`destination.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`destination.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - --- - -*`destination.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - --- - -*`destination.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - --- - -*`destination.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - --- - -*`destination.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - --- - -*`destination.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`destination.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - --- - -*`destination.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - --- - -*`destination.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`destination.ip`*:: -+ --- -IP address of the destination (IPv4 or IPv6). - -type: ip - --- - -*`destination.mac`*:: -+ --- -MAC address of the destination. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: 00-00-5E-00-53-23 - --- - -*`destination.nat.ip`*:: -+ --- -Translated ip of destination based NAT sessions (e.g. internet to private DMZ) -Typically used with load balancers, firewalls, or routers. - -type: ip - --- - -*`destination.nat.port`*:: -+ --- -Port the source session is translated to by NAT Device. -Typically used with load balancers, firewalls, or routers. - -type: long - -format: string - --- - -*`destination.packets`*:: -+ --- -Packets sent from the destination to the source. - -type: long - -example: 12 - --- - -*`destination.port`*:: -+ --- -Port of the destination. - -type: long - -format: string - --- - -*`destination.registered_domain`*:: -+ --- -The highest registered destination domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - --- - -*`destination.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. 
In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`destination.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - --- - -*`destination.user.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`destination.user.email`*:: -+ --- -User email address. - -type: keyword - --- - -*`destination.user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - --- - -*`destination.user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`destination.user.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`destination.user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - --- - -*`destination.user.group.name`*:: -+ --- -Name of the group. - -type: keyword - --- - -*`destination.user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - --- - -*`destination.user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - --- - -*`destination.user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - --- - -*`destination.user.name.text`*:: -+ --- -type: match_only_text - --- - -*`destination.user.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - --- - -[float] -=== dll - -These fields contain information about code libraries dynamically loaded into processes. - -Many operating systems refer to "shared code libraries" with different names, but this field set refers to all of the following: -* Dynamic-link library (`.dll`) commonly used on Windows -* Shared Object (`.so`) commonly used on Unix-like operating systems -* Dynamic library (`.dylib`) commonly used on macOS - - -*`dll.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`dll.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - --- - -*`dll.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only.
- -type: keyword - -example: com.apple.xpc.proxy - --- - -*`dll.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - --- - -*`dll.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - --- - -*`dll.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`dll.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`dll.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - --- - -*`dll.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - --- - -*`dll.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - --- - -*`dll.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - --- - -*`dll.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - --- - -*`dll.hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - --- - -*`dll.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`dll.name`*:: -+ --- -Name of the library. -This generally maps to the name of the file on disk. - -type: keyword - -example: kernel32.dll - --- - -*`dll.path`*:: -+ --- -Full file path of the library. - -type: keyword - -example: C:\Windows\System32\kernel32.dll - --- - -*`dll.pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - --- - -*`dll.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - --- - -*`dll.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - --- - -*`dll.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - --- - -*`dll.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - --- - -*`dll.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - --- - -*`dll.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - --- - -[float] -=== dns - -Fields describing DNS queries and answers. 
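Before the individual field definitions below, a hedged sketch of how a fully resolved query might be laid out (Python dict, all values illustrative):

[source,python]
----
dns_answer_event = {
    "dns.type": "answer",
    "dns.question.name": "www.example.com",
    "dns.question.type": "A",
    "dns.answers": [
        {"name": "www.example.com", "type": "A", "class": "IN",
         "ttl": 180, "data": "10.10.10.10"},
    ],
    "dns.resolved_ip": ["10.10.10.10"],  # every IP seen in answers[].data
    "dns.response_code": "NOERROR",
}
----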
-DNS events should either represent a single DNS query prior to getting answers (`dns.type:query`) or they should represent a full exchange and contain the query details as well as all of the answers that were provided for this query (`dns.type:answer`). - - -*`dns.answers`*:: -+ --- -An array containing an object for each answer section returned by the server. -The main keys that should be present in these objects are defined by ECS. Records that have more information may contain more keys than what ECS defines. -Not all DNS data sources give all details about DNS answers. At minimum, answer objects must contain the `data` key. If more information is available, map as much of it to ECS as possible, and add any additional fields to the answer objects as custom fields. - -type: object - --- - -*`dns.answers.class`*:: -+ --- -The class of DNS data contained in this resource record. - -type: keyword - -example: IN - --- - -*`dns.answers.data`*:: -+ --- -The data describing the resource. -The meaning of this data depends on the type and class of the resource record. - -type: keyword - -example: 10.10.10.10 - --- - -*`dns.answers.name`*:: -+ --- -The domain name to which this resource record pertains. -If a chain of CNAME is being resolved, each answer's `name` should be the one that corresponds with the answer's `data`. It should not simply be the original `question.name` repeated. - -type: keyword - -example: www.example.com - --- - -*`dns.answers.ttl`*:: -+ --- -The time interval in seconds that this resource record may be cached before it should be discarded. Zero values mean that the data should not be cached. - -type: long - -example: 180 - --- - -*`dns.answers.type`*:: -+ --- -The type of data contained in this resource record. - -type: keyword - -example: CNAME - --- - -*`dns.header_flags`*:: -+ --- -Array of 2 letter DNS header flags. -Expected values are: AA, TC, RD, RA, AD, CD, DO. - -type: keyword - -example: ["RD", "RA"] - --- - -*`dns.id`*:: -+ --- -The DNS packet identifier assigned by the program that generated the query. The identifier is copied to the response. - -type: keyword - -example: 62111 - --- - -*`dns.op_code`*:: -+ --- -The DNS operation code that specifies the kind of query in the message. This value is set by the originator of a query and copied into the response. - -type: keyword - -example: QUERY - --- - -*`dns.question.class`*:: -+ --- -The class of records being queried. - -type: keyword - -example: IN - --- - -*`dns.question.name`*:: -+ --- -The name being queried. -If the name field contains non-printable characters (below 32 or above 126), those characters should be represented as escaped base 10 integers (\DDD). Back slashes and quotes should be escaped. Tabs, carriage returns, and line feeds should be converted to \t, \r, and \n respectively. - -type: keyword - -example: www.example.com - --- - -*`dns.question.registered_domain`*:: -+ --- -The highest registered domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - --- - -*`dns.question.subdomain`*:: -+ --- -The subdomain is all of the labels under the registered_domain. 
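This split can be derived with a public suffix list, as in this hedged sketch that assumes the third-party `tldextract` package:

[source,python]
----
import tldextract  # third-party; backed by the public suffix list

ext = tldextract.extract("sub2.sub1.example.com")
question = {
    "dns.question.subdomain": ext.subdomain,                         # "sub2.sub1"
    "dns.question.registered_domain": f"{ext.domain}.{ext.suffix}",  # "example.com"
    "dns.question.top_level_domain": ext.suffix,                     # "com"
}
----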
-If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: www - --- - -*`dns.question.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - --- - -*`dns.question.type`*:: -+ --- -The type of record being queried. - -type: keyword - -example: AAAA - --- - -*`dns.resolved_ip`*:: -+ --- -Array containing all IPs seen in `answers.data`. -The `answers` array can be difficult to use, because of the variety of data formats it can contain. Extracting all IP addresses seen in there to `dns.resolved_ip` makes it possible to index them as IP addresses, and makes them easier to visualize and query for. - -type: ip - -example: ["10.10.10.10", "10.10.10.11"] - --- - -*`dns.response_code`*:: -+ --- -The DNS response code. - -type: keyword - -example: NOERROR - --- - -*`dns.type`*:: -+ --- -The type of DNS event captured, query or answer. -If your source of DNS events only gives you DNS queries, you should only create dns events of type `dns.type:query`. -If your source of DNS events gives you answers as well, you should create one event per query (optionally as soon as the query is seen). And a second event containing all query details as well as an array of answers. - -type: keyword - -example: answer - --- - -[float] -=== ecs - -Meta-information specific to ECS. - - -*`ecs.version`*:: -+ --- -ECS version this event conforms to. `ecs.version` is a required field and must exist in all events. -When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events. - -type: keyword - -example: 1.0.0 - -required: True - --- - -[float] -=== elf - -These fields contain Linux Executable Linkable Format (ELF) metadata. - - -*`elf.architecture`*:: -+ --- -Machine architecture of the ELF file. - -type: keyword - -example: x86-64 - --- - -*`elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`elf.header.data`*:: -+ --- -Data table of the ELF header. - -type: keyword - --- - -*`elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`elf.header.type`*:: -+ --- -Header type of the ELF file. 
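Several of the header fields above can be read directly from the first bytes of a binary; a hedged Python sketch (the mapped labels are illustrative, not prescribed vocabulary):

[source,python]
----
import struct

def elf_header_fields(path):
    """Parse a few ELF identification fields; a sketch, not a full reader."""
    with open(path, "rb") as f:
        ident = f.read(16)                      # e_ident
        if ident[:4] != b"\x7fELF":
            raise ValueError("not an ELF file")
        endian = "<" if ident[5] == 1 else ">"  # EI_DATA: 1 = little endian
        e_type, e_machine = struct.unpack(endian + "HH", f.read(4))
    return {
        "elf.header.class": {1: "elf32", 2: "elf64"}.get(ident[4]),
        "elf.header.data": {1: "little endian", 2: "big endian"}.get(ident[5]),
        "elf.header.os_abi": str(ident[7]),
        "elf.header.abi_version": str(ident[8]),
        "elf.header.type": {1: "REL", 2: "EXEC", 3: "DYN", 4: "CORE"}.get(e_type),
        "elf.architecture": {3: "x86", 62: "x86-64", 183: "aarch64"}.get(e_machine),
    }
----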
- -type: keyword - --- - -*`elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`elf.sections.flags`*:: -+ --- -ELF Section List flags. - -type: keyword - --- - -*`elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. - -type: nested - --- - -*`elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`elf.segments.type`*:: -+ --- -ELF object segment type. - -type: keyword - --- - -*`elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -[float] -=== error - -These fields can represent errors of any kind. -Use them for errors that happen while fetching events or in cases where the event itself contains an error. - - -*`error.code`*:: -+ --- -Error code describing the error. - -type: keyword - --- - -*`error.id`*:: -+ --- -Unique identifier for the error. - -type: keyword - --- - -*`error.message`*:: -+ --- -Error message. - -type: match_only_text - --- - -*`error.stack_trace`*:: -+ --- -The stack trace of this error in plain text. - -type: wildcard - --- - -*`error.stack_trace.text`*:: -+ --- -type: match_only_text - --- - -*`error.type`*:: -+ --- -The type of the error, for example the class name of the exception. - -type: keyword - -example: java.lang.NullPointerException - --- - -[float] -=== event - -The event fields are used for context information about the log or metric event itself. -A log is defined as an event containing details of something that happened. Log events must include the time at which the thing happened. Examples of log events include a process starting on a host, a network packet being sent from a source to a destination, or a network connection between a client and a server being initiated or closed. A metric is defined as an event containing one or more numerical measurements and the time at which the measurement was taken. Examples of metric events include memory pressure measured on a host and device temperature. See the `event.kind` definition in this section for additional details about metric and state events. - - -*`event.action`*:: -+ --- -The action captured by the event. 
-This describes the information in the event. It is more specific than `event.category`. Examples are `group-add`, `process-started`, `file-created`. The value is normally defined by the implementer. - -type: keyword - -example: user-password-change - --- - -*`event.agent_id_status`*:: -+ --- -Agents are normally responsible for populating the `agent.id` field value. If the system receiving events is capable of validating the value based on authentication information for the client then this field can be used to reflect the outcome of that validation. -For example if the agent's connection is authenticated with mTLS and the client cert contains the ID of the agent to which the cert was issued then the `agent.id` value in events can be checked against the certificate. If the values match then `event.agent_id_status: verified` is added to the event, otherwise one of the other allowed values should be used. -If no validation is performed then the field should be omitted. -The allowed values are: -`verified` - The `agent.id` field value matches expected value obtained from auth metadata. -`mismatch` - The `agent.id` field value does not match the expected value obtained from auth metadata. -`missing` - There was no `agent.id` field in the event to validate. -`auth_metadata_missing` - There was no auth metadata or it was missing information about the agent ID. - -type: keyword - -example: verified - --- - -*`event.category`*:: -+ --- -This is one of four ECS Categorization Fields, and indicates the second level in the ECS category hierarchy. -`event.category` represents the "big buckets" of ECS categories. For example, filtering on `event.category:process` yields all events relating to process activity. This field is closely related to `event.type`, which is used as a subcategory. -This field is an array. This will allow proper categorization of some events that fall in multiple categories. - -type: keyword - -example: authentication - --- - -*`event.code`*:: -+ --- -Identification code for this event, if one exists. -Some event sources use event codes to identify messages unambiguously, regardless of message language or wording adjustments over time. An example of this is the Windows Event ID. - -type: keyword - -example: 4648 - --- - -*`event.created`*:: -+ --- -event.created contains the date/time when the event was first read by an agent, or by your pipeline. -This field is distinct from @timestamp in that @timestamp typically contain the time extracted from the original event. -In most situations, these two timestamps will be slightly different. The difference can be used to calculate the delay between your source generating an event, and the time when your agent first processed it. This can be used to monitor your agent's or pipeline's ability to keep up with your event source. -In case the two timestamps are identical, @timestamp should be used. - -type: date - -example: 2016-05-23T08:05:34.857Z - --- - -*`event.dataset`*:: -+ --- -Name of the dataset. -If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. -It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name. - -type: keyword - -example: apache.access - --- - -*`event.duration`*:: -+ --- -Duration of the event in nanoseconds. -If event.start and event.end are known this value should be the difference between the end and start time. 
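For example (a minimal Python sketch, values illustrative):

[source,python]
----
from datetime import datetime, timezone

start = datetime(2024, 5, 1, 8, 0, 0, tzinfo=timezone.utc)        # event.start
end = datetime(2024, 5, 1, 8, 0, 2, 500000, tzinfo=timezone.utc)  # event.end

event = {
    "event.start": start.isoformat(),
    "event.end": end.isoformat(),
    # nanoseconds between end and start
    "event.duration": int((end - start).total_seconds() * 1_000_000_000),
}
print(event["event.duration"])  # 2500000000
----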
- -type: long - -format: duration - --- - -*`event.end`*:: -+ --- -event.end contains the date when the event ended or when the activity was last observed. - -type: date - --- - -*`event.hash`*:: -+ --- -Hash (perhaps logstash fingerprint) of raw field to be able to demonstrate log integrity. - -type: keyword - -example: 123456789012345678901234567890ABCD - --- - -*`event.id`*:: -+ --- -Unique ID to describe the event. - -type: keyword - -example: 8a4f500d - --- - -*`event.ingested`*:: -+ --- -Timestamp when an event arrived in the central data store. -This is different from `@timestamp`, which is when the event originally occurred. It's also different from `event.created`, which is meant to capture the first time an agent saw the event. -In normal conditions, assuming no tampering, the timestamps should chronologically look like this: `@timestamp` < `event.created` < `event.ingested`. - -type: date - -example: 2016-05-23T08:05:35.101Z - --- - -*`event.kind`*:: -+ --- -This is one of four ECS Categorization Fields, and indicates the highest level in the ECS category hierarchy. -`event.kind` gives high-level information about what type of information the event contains, without being specific to the contents of the event. For example, values of this field distinguish alert events from metric events. -The value of this field can be used to inform how these kinds of events should be handled. They may warrant different retention or different access control, and it may also help in understanding whether the data is coming in at a regular interval or not. - -type: keyword - -example: alert - --- - -*`event.module`*:: -+ --- -Name of the module this data is coming from. -If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), `event.module` should contain the name of this module. - -type: keyword - -example: apache - --- - -*`event.original`*:: -+ --- -Raw text message of entire event. Used to demonstrate log integrity or where the full log message (before splitting it up in multiple parts) may be required, e.g. for reindex. -This field is not indexed and doc_values are disabled. It cannot be searched, but it can be retrieved from `_source`. If users wish to override this and index this field, please see `Field data types` in the `Elasticsearch Reference`. - -type: keyword - -example: Sep 19 08:26:10 host CEF:0|Security| threatmanager|1.0|100| worm successfully stopped|10|src=10.0.0.1 dst=2.1.2.2spt=1232 - -Field is not indexed. - --- - -*`event.outcome`*:: -+ --- -This is one of four ECS Categorization Fields, and indicates the lowest level in the ECS category hierarchy. -`event.outcome` simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. -Note that when a single transaction is described in multiple events, each event may populate different values of `event.outcome`, according to their perspective. -Also note that in the case of a compound event (a single event that contains multiple logical events), this field should be populated with the value that best captures the overall success or failure from the perspective of the event producer. -Further note that not all events will have an associated outcome. For example, this field is generally not populated for metric events, events with `event.type:info`, or any events for which an outcome does not make logical sense. - -type: keyword - -example: success - --- - -*`event.provider`*:: -+ --- -Source of the event.
-Event transports such as Syslog or the Windows Event Log typically mention the source of an event. It can be the name of the software that generated the event (e.g. Sysmon, httpd), or of a subsystem of the operating system (kernel, Microsoft-Windows-Security-Auditing). - -type: keyword - -example: kernel - --- - -*`event.reason`*:: -+ --- -Reason why this event happened, according to the source. -This describes the why of a particular action or outcome captured in the event. Where `event.action` captures the action from the event, `event.reason` describes why that action was taken. For example, a web proxy with an `event.action` which denied the request may also populate `event.reason` with the reason why (e.g. `blocked site`). - -type: keyword - -example: Terminated an unexpected process - --- - -*`event.reference`*:: -+ --- -Reference URL linking to additional information about this event. -This URL links to a static definition of this event. Alert events, indicated by `event.kind:alert`, are a common use case for this field. - -type: keyword - -example: https://system.example.com/event/#0001234 - --- - -*`event.risk_score`*:: -+ --- -Risk score or priority of the event (e.g. security solutions). Use your system's original value here. - -type: float - --- - -*`event.risk_score_norm`*:: -+ --- -Normalized risk score or priority of the event, on a scale of 0 to 100. -This is mainly useful if you use more than one system that assigns risk scores, and you want to see a normalized value across all systems. - -type: float - --- - -*`event.sequence`*:: -+ --- -Sequence number of the event. -The sequence number is a value published by some event sources, to make the exact ordering of events unambiguous, regardless of the timestamp precision. - -type: long - -format: string - --- - -*`event.severity`*:: -+ --- -The numeric severity of the event according to your event source. -What the different severity values mean can be different between sources and use cases. It's up to the implementer to make sure severities are consistent across events from the same source. -The Syslog severity belongs in `log.syslog.severity.code`. `event.severity` is meant to represent the severity according to the event source (e.g. firewall, IDS). If the event source does not publish its own severity, you may optionally copy the `log.syslog.severity.code` to `event.severity`. - -type: long - -example: 7 - -format: string - --- - -*`event.start`*:: -+ --- -event.start contains the date when the event started or when the activity was first observed. - -type: date - --- - -*`event.timezone`*:: -+ --- -This field should be populated when the event's timestamp does not include timezone information already (e.g. default Syslog timestamps). It's optional otherwise. -Acceptable timezone formats are: a canonical ID (e.g. "Europe/Amsterdam"), abbreviated (e.g. "EST") or an HH:mm differential (e.g. "-05:00"). - -type: keyword - --- - -*`event.type`*:: -+ --- -This is one of four ECS Categorization Fields, and indicates the third level in the ECS category hierarchy. -`event.type` represents a categorization "sub-bucket" that, when used along with the `event.category` field values, enables filtering events down to a level appropriate for single visualization. -This field is an array. This will allow proper categorization of some events that fall in multiple event types. - -type: keyword - --- - -*`event.url`*:: -+ --- -URL linking to an external system to continue investigation of this event. 
-This URL links to another system where in-depth investigation of the specific occurrence of this event can take place. Alert events, indicated by `event.kind:alert`, are a common use case for this field. - -type: keyword - -example: https://mysystem.example.com/alert/5271dedb-f5b0-4218-87f0-4ac4870a38fe - --- - -[float] -=== faas - -The user fields describe information about the function as a service that is relevant to the event. - - -*`faas.coldstart`*:: -+ --- -Boolean value indicating a cold start of a function. - -type: boolean - --- - -*`faas.execution`*:: -+ --- -The execution ID of the current function execution. - -type: keyword - -example: af9d5aa4-a685-4c5f-a22b-444f80b3cc28 - --- - -*`faas.trigger`*:: -+ --- -Details about the function trigger. - -type: nested - --- - -*`faas.trigger.request_id`*:: -+ --- -The ID of the trigger request , message, event, etc. - -type: keyword - -example: 123456789 - --- - -*`faas.trigger.type`*:: -+ --- -The trigger for the function execution. -Expected values are: - * http - * pubsub - * datasource - * timer - * other - -type: keyword - -example: http - --- - -[float] -=== file - -A file is defined as a set of information that has been created on, or has existed on a filesystem. -File objects can be associated with host events, network events, and/or file events (e.g., those produced by File Integrity Monitoring [FIM] products or services). File fields provide details about the affected file associated with the event or metric. - - -*`file.accessed`*:: -+ --- -Last time the file was accessed. -Note that not all filesystems keep track of access time. - -type: date - --- - -*`file.attributes`*:: -+ --- -Array of file attributes. -Attributes names will vary by platform. Here's a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write. - -type: keyword - -example: ["readonly", "system"] - --- - -*`file.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`file.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - --- - -*`file.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`file.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - --- - -*`file.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - --- - -*`file.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`file.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. 
- -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`file.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - --- - -*`file.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - --- - -*`file.created`*:: -+ --- -File creation time. -Note that not all filesystems store the creation time. - -type: date - --- - -*`file.ctime`*:: -+ --- -Last time the file attributes or metadata changed. -Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file. - -type: date - --- - -*`file.device`*:: -+ --- -Device that is the source of the file. - -type: keyword - -example: sda - --- - -*`file.directory`*:: -+ --- -Directory where the file is located. It should include the drive letter, when appropriate. - -type: keyword - -example: /home/alice - --- - -*`file.drive_letter`*:: -+ --- -Drive letter where the file is located. This field is only relevant on Windows. -The value should be uppercase, and not include the colon. - -type: keyword - -example: C - --- - -*`file.elf.architecture`*:: -+ --- -Machine architecture of the ELF file. - -type: keyword - -example: x86-64 - --- - -*`file.elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`file.elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`file.elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`file.elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`file.elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`file.elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`file.elf.header.data`*:: -+ --- -Data table of the ELF header. - -type: keyword - --- - -*`file.elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`file.elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`file.elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`file.elf.header.type`*:: -+ --- -Header type of the ELF file. - -type: keyword - --- - -*`file.elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`file.elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`file.elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`file.elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`file.elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`file.elf.sections.flags`*:: -+ --- -ELF Section List flags. 
- -type: keyword - --- - -*`file.elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`file.elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`file.elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`file.elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`file.elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`file.elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`file.elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. - -type: nested - --- - -*`file.elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`file.elf.segments.type`*:: -+ --- -ELF object segment type. - -type: keyword - --- - -*`file.elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`file.elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -*`file.extension`*:: -+ --- -File extension, excluding the leading dot. -Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). - -type: keyword - -example: png - --- - -*`file.fork_name`*:: -+ --- -A fork is additional data associated with a filesystem object. -On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist. -On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name. - -type: keyword - -example: Zone.Identifer - --- - -*`file.gid`*:: -+ --- -Primary group ID (GID) of the file. - -type: keyword - -example: 1001 - --- - -*`file.group`*:: -+ --- -Primary group name of the file. - -type: keyword - -example: alice - --- - -*`file.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - --- - -*`file.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - --- - -*`file.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - --- - -*`file.hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - --- - -*`file.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`file.inode`*:: -+ --- -Inode representing the file in the filesystem. - -type: keyword - -example: 256383 - --- - -*`file.mime_type`*:: -+ --- -MIME type should identify the format of the file or stream of bytes using https://www.iana.org/assignments/media-types/media-types.xhtml[IANA official types], where possible. When more than one type is applicable, the most specific type should be used. - -type: keyword - --- - -*`file.mode`*:: -+ --- -Mode of the file in octal representation. - -type: keyword - -example: 0640 - --- - -*`file.mtime`*:: -+ --- -Last time the file content was modified. 
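This and the neighboring timestamp fields typically come straight from `stat()`; a hedged Python sketch with POSIX semantics:

[source,python]
----
import os
import stat
from datetime import datetime, timezone

def stat_fields(path):
    """Populate a few file.* fields from stat(); a sketch, POSIX semantics."""
    st = os.stat(path)
    iso = lambda ts: datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
    return {
        "file.mtime": iso(st.st_mtime),     # last content modification
        "file.ctime": iso(st.st_ctime),     # last metadata change on POSIX
        "file.accessed": iso(st.st_atime),  # if the filesystem tracks atime
        "file.size": st.st_size,
        "file.inode": str(st.st_ino),
        "file.mode": format(stat.S_IMODE(st.st_mode), "04o"),  # e.g. "0640"
    }
----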
- -type: date - --- - -*`file.name`*:: -+ --- -Name of the file including the extension, without the directory. - -type: keyword - -example: example.png - --- - -*`file.owner`*:: -+ --- -File owner's username. - -type: keyword - -example: alice - --- - -*`file.path`*:: -+ --- -Full path to the file, including the file name. It should include the drive letter, when appropriate. - -type: keyword - -example: /home/alice/example.png - --- - -*`file.path.text`*:: -+ --- -type: match_only_text - --- - -*`file.pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - --- - -*`file.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - --- - -*`file.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - --- - -*`file.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - --- - -*`file.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - --- - -*`file.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - --- - -*`file.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - --- - -*`file.size`*:: -+ --- -File size in bytes. -Only relevant when `file.type` is "file". - -type: long - -example: 16384 - --- - -*`file.target_path`*:: -+ --- -Target path for symlinks. - -type: keyword - --- - -*`file.target_path.text`*:: -+ --- -type: match_only_text - --- - -*`file.type`*:: -+ --- -File type (file, dir, or symlink). - -type: keyword - -example: file - --- - -*`file.uid`*:: -+ --- -The user ID (UID) or security identifier (SID) of the file owner. - -type: keyword - -example: 1001 - --- - -*`file.x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. - -type: keyword - -example: *.elastic.co - --- - -*`file.x509.issuer.common_name`*:: -+ --- -List of common name (CN) of issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - --- - -*`file.x509.issuer.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - --- - -*`file.x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - --- - -*`file.x509.issuer.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: Mountain View - --- - -*`file.x509.issuer.organization`*:: -+ --- -List of organizations (O) of issuing certificate authority. - -type: keyword - -example: Example Inc - --- - -*`file.x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of issuing certificate authority. 
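The issuer and validity fields can be lifted from a parsed certificate; a hedged sketch assuming the third-party `cryptography` package:

[source,python]
----
from cryptography import x509
from cryptography.x509.oid import NameOID

def issuer_fields(pem_bytes):
    """Extract a few file.x509.* values from a PEM certificate; a sketch."""
    cert = x509.load_pem_x509_certificate(pem_bytes)

    def values(name, oid):
        return [attr.value for attr in name.get_attributes_for_oid(oid)]

    return {
        "file.x509.issuer.common_name": values(cert.issuer, NameOID.COMMON_NAME),
        "file.x509.issuer.organization": values(cert.issuer, NameOID.ORGANIZATION_NAME),
        "file.x509.issuer.organizational_unit": values(
            cert.issuer, NameOID.ORGANIZATIONAL_UNIT_NAME),
        "file.x509.not_before": cert.not_valid_before.isoformat(),
        "file.x509.not_after": cert.not_valid_after.isoformat(),
        "file.x509.serial_number": format(cert.serial_number, "X"),  # uppercase hex, no colons
    }
----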
- -type: keyword - -example: www.example.com - --- - -*`file.x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`file.x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - --- - -*`file.x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. - -type: date - -example: 2019-08-16 01:40:25+00:00 - --- - -*`file.x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - --- - -*`file.x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - --- - -*`file.x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -Field is not indexed. - --- - -*`file.x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - --- - -*`file.x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. - -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - --- - -*`file.x509.signature_algorithm`*:: -+ --- -Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - --- - -*`file.x509.subject.common_name`*:: -+ --- -List of common names (CN) of subject. - -type: keyword - -example: shared.global.example.net - --- - -*`file.x509.subject.country`*:: -+ --- -List of country (C) code - -type: keyword - -example: US - --- - -*`file.x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity. - -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - --- - -*`file.x509.subject.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: San Francisco - --- - -*`file.x509.subject.organization`*:: -+ --- -List of organizations (O) of subject. - -type: keyword - -example: Example, Inc. - --- - -*`file.x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of subject. - -type: keyword - --- - -*`file.x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`file.x509.version_number`*:: -+ --- -Version of x509 format. - -type: keyword - -example: 3 - --- - -[float] -=== geo - -Geo fields can carry data about a specific location related to an event. -This geolocation information can be derived from techniques such as Geo IP, or be user-supplied. - - -*`geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - --- - -*`geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - --- - -*`geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - --- - -*`geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - --- - -*`geo.location`*:: -+ --- -Longitude and latitude. 
- -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - --- - -*`geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - --- - -*`geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - --- - -*`geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - --- - -*`geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -[float] -=== group - -The group fields are meant to represent groups that are relevant to the event. - - -*`group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - --- - -*`group.name`*:: -+ --- -Name of the group. - -type: keyword - --- - -[float] -=== hash - -The hash fields represent different bitwise hash algorithms and their values. -Field names for common hashes (e.g. MD5, SHA1) are predefined. Add fields for other hashes by lowercasing the hash algorithm name and using underscore separators as appropriate (snake case, e.g. sha3_512). -Note that this fieldset is used for common hashes that may be computed over a range of generic bytes. Entity-specific hashes such as ja3 or imphash are placed in the fieldsets to which they relate (tls and pe, respectively). - - -*`hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - --- - -*`hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - --- - -*`hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - --- - -*`hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - --- - -*`hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -[float] -=== host - -A host is defined as a general computing instance. -ECS host.* fields should be populated with details about the host on which the event happened, or from which the measurement was taken. Host types include hardware, virtual machines, Docker containers, and Kubernetes nodes. - - -*`host.architecture`*:: -+ --- -Operating system architecture. - -type: keyword - -example: x86_64 - --- - -*`host.cpu.usage`*:: -+ --- -Percent CPU used which is normalized by the number of CPU cores and it ranges from 0 to 1. -Scaling factor: 1000. -For example: For a two core host, this value should be the average of the two cores, between 0 and 1. - -type: scaled_float - --- - -*`host.disk.read.bytes`*:: -+ --- -The total number of bytes (gauge) read successfully (aggregated from all disks) since the last metric collection. - -type: long - --- - -*`host.disk.write.bytes`*:: -+ --- -The total number of bytes (gauge) written successfully (aggregated from all disks) since the last metric collection. - -type: long - --- - -*`host.domain`*:: -+ --- -Name of the domain of which the host is a member. -For example, on Windows this could be the host's Active Directory domain or NetBIOS domain name. For Linux this could be the domain of the host's LDAP provider. 
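One possible derivation on a Unix-like host (a hedged Python sketch that assumes the FQDN carries the domain):

[source,python]
----
import socket

hostname = socket.gethostname()
fqdn = socket.getfqdn()
host = {
    "host.hostname": hostname,
    # everything after the first label of the FQDN, when present
    "host.domain": fqdn.split(".", 1)[1] if "." in fqdn else None,
}
----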
- -type: keyword - -example: CONTOSO - --- - -*`host.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - --- - -*`host.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`host.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - --- - -*`host.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - --- - -*`host.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - --- - -*`host.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - --- - -*`host.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - --- - -*`host.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`host.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - --- - -*`host.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - --- - -*`host.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`host.hostname`*:: -+ --- -Hostname of the host. -It normally contains what the `hostname` command returns on the host machine. - -type: keyword - --- - -*`host.id`*:: -+ --- -Unique host id. -As hostname is not always unique, use values that are meaningful in your environment. -Example: The current usage of `beat.name`. - -type: keyword - --- - -*`host.ip`*:: -+ --- -Host ip addresses. - -type: ip - --- - -*`host.mac`*:: -+ --- -Host MAC addresses. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: ["00-00-5E-00-53-23", "00-00-5E-00-53-24"] - --- - -*`host.name`*:: -+ --- -Name of the host. -It can contain what `hostname` returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use. - -type: keyword - --- - -*`host.network.egress.bytes`*:: -+ --- -The number of bytes (gauge) sent out on all network interfaces by the host since the last metric collection. - -type: long - --- - -*`host.network.egress.packets`*:: -+ --- -The number of packets (gauge) sent out on all network interfaces by the host since the last metric collection. - -type: long - --- - -*`host.network.ingress.bytes`*:: -+ --- -The number of bytes received (gauge) on all network interfaces by the host since the last metric collection. - -type: long - --- - -*`host.network.ingress.packets`*:: -+ --- -The number of packets (gauge) received on all network interfaces by the host since the last metric collection. - -type: long - --- - -*`host.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - -type: keyword - -example: debian - --- - -*`host.os.full`*:: -+ --- -Operating system name, including the version or code name. 
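The host.os.* values can be approximated with Python's standard `platform` module; a hedged sketch (the `os.type` mapping is illustrative):

[source,python]
----
import platform

system = platform.system()  # e.g. "Linux", "Darwin", "Windows"
os_fields = {
    "host.os.name": system,
    "host.os.kernel": platform.release(),
    "host.os.full": f"{system} {platform.release()}",
    "host.os.type": {"Linux": "linux", "Darwin": "macos",
                     "Windows": "windows"}.get(system),
}
----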
- -type: keyword - -example: Mac OS Mojave - --- - -*`host.os.full.text`*:: -+ --- -type: match_only_text - --- - -*`host.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - -type: keyword - -example: 4.4.0-112-generic - --- - -*`host.os.name`*:: -+ --- -Operating system name, without the version. - -type: keyword - -example: Mac OS X - --- - -*`host.os.name.text`*:: -+ --- -type: match_only_text - --- - -*`host.os.platform`*:: -+ --- -Operating system platform (such as centos, ubuntu, windows). - -type: keyword - -example: darwin - --- - -*`host.os.type`*:: -+ --- -Use the `os.type` field to categorize the operating system into one of the broad commercial families. -One of the following values should be used (lowercase): linux, macos, unix, windows. -If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. - -type: keyword - -example: macos - --- - -*`host.os.version`*:: -+ --- -Operating system version as a raw string. - -type: keyword - -example: 10.14.1 - --- - -*`host.type`*:: -+ --- -Type of host. -For Cloud providers this can be the machine type like `t2.medium`. If vm, this could be the container, for example, or other information meaningful in your environment. - -type: keyword - --- - -*`host.uptime`*:: -+ --- -Seconds the host has been up. - -type: long - -example: 1325 - --- - -[float] -=== http - -Fields related to HTTP activity. Use the `url` field set to store the url of the request. - - -*`http.request.body.bytes`*:: -+ --- -Size in bytes of the request body. - -type: long - -example: 887 - -format: bytes - --- - -*`http.request.body.content`*:: -+ --- -The full HTTP request body. - -type: wildcard - -example: Hello world - --- - -*`http.request.body.content.text`*:: -+ --- -type: match_only_text - --- - -*`http.request.bytes`*:: -+ --- -Total size in bytes of the request (body and headers). - -type: long - -example: 1437 - -format: bytes - --- - -*`http.request.id`*:: -+ --- -A unique identifier for each HTTP request to correlate logs between clients and servers in transactions. -The id may be contained in a non-standard HTTP header, such as `X-Request-ID` or `X-Correlation-ID`. - -type: keyword - -example: 123e4567-e89b-12d3-a456-426614174000 - --- - -*`http.request.method`*:: -+ --- -HTTP request method. -The value should retain its casing from the original event. For example, `GET`, `get`, and `GeT` are all considered valid values for this field. - -type: keyword - -example: POST - --- - -*`http.request.mime_type`*:: -+ --- -Mime type of the body of the request. -This value must only be populated based on the content of the request body, not on the `Content-Type` header. Comparing the mime type of a request with the request's Content-Type header can be helpful in detecting threats or misconfigured clients. - -type: keyword - -example: image/gif - --- - -*`http.request.referrer`*:: -+ --- -Referrer for this HTTP request. - -type: keyword - -example: https://blog.example.com/ - --- - -*`http.response.body.bytes`*:: -+ --- -Size in bytes of the response body. - -type: long - -example: 887 - -format: bytes - --- - -*`http.response.body.content`*:: -+ --- -The full HTTP response body. - -type: wildcard - -example: Hello world - --- - -*`http.response.body.content.text`*:: -+ --- -type: match_only_text - --- - -*`http.response.bytes`*:: -+ --- -Total size in bytes of the response (body and headers).
- -type: long - -example: 1437 - -format: bytes - --- - -*`http.response.mime_type`*:: -+ --- -Mime type of the body of the response. -This value must only be populated based on the content of the response body, not on the `Content-Type` header. Comparing the mime type of a response with the response's Content-Type header can be helpful in detecting misconfigured servers. - -type: keyword - -example: image/gif - --- - -*`http.response.status_code`*:: -+ --- -HTTP response status code. - -type: long - -example: 404 - -format: string - --- - -*`http.version`*:: -+ --- -HTTP version. - -type: keyword - -example: 1.1 - --- - -[float] -=== interface - -The interface fields are used to record ingress and egress interface information when reported by an observer (e.g. firewall, router, load balancer) in the context of the observer handling a network connection. In the case of a single observer interface (e.g. network sensor on a span port) only the observer.ingress information should be populated. - - -*`interface.alias`*:: -+ --- -Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming. - -type: keyword - -example: outside - --- - -*`interface.id`*:: -+ --- -Interface ID as reported by an observer (typically SNMP interface ID). - -type: keyword - -example: 10 - --- - -*`interface.name`*:: -+ --- -Interface name as reported by the system. - -type: keyword - -example: eth0 - --- - -[float] -=== log - -Details about the event's logging mechanism or logging transport. -The log.* fields are typically populated with details about the logging mechanism used to create and/or transport the event. For example, syslog details belong under `log.syslog.*`. -The details specific to your event source are typically not logged under `log.*`, but rather in `event.*` or in other ECS fields. - - -*`log.file.path`*:: -+ --- -Full path to the log file this event came from, including the file name. It should include the drive letter, when appropriate. -If the event wasn't read from a log file, do not populate this field. - -type: keyword - -example: /var/log/fun-times.log - --- - -*`log.level`*:: -+ --- -Original log level of the log event. -If the source of the event provides a log level or textual severity, this is the one that goes in `log.level`. If your source doesn't specify one, you may put your event transport's severity here (e.g. Syslog severity). -Some examples are `warn`, `err`, `i`, `informational`. - -type: keyword - -example: error - --- - -*`log.logger`*:: -+ --- -The name of the logger inside an application. This is usually the name of the class which initialized the logger, or can be a custom name. - -type: keyword - -example: org.elasticsearch.bootstrap.Bootstrap - --- - -*`log.origin.file.line`*:: -+ --- -The line number of the file containing the source code which originated the log event. - -type: long - -example: 42 - --- - -*`log.origin.file.name`*:: -+ --- -The name of the file containing the source code which originated the log event. -Note that this field is not meant to capture the log file. The correct field to capture the log file is `log.file.path`. - -type: keyword - -example: Bootstrap.java - --- - -*`log.origin.function`*:: -+ --- -The name of the function or method which originated the log event. - -type: keyword - -example: init - --- - -*`log.syslog`*:: -+ --- -The Syslog metadata of the event, if the event was transmitted via Syslog. Please see RFCs 5424 or 3164. 
- -type: object - --- - -*`log.syslog.facility.code`*:: -+ --- -The Syslog numeric facility of the log event, if available. -According to RFCs 5424 and 3164, this value should be an integer between 0 and 23. - -type: long - -example: 23 - -format: string - --- - -*`log.syslog.facility.name`*:: -+ --- -The Syslog text-based facility of the log event, if available. - -type: keyword - -example: local7 - --- - -*`log.syslog.priority`*:: -+ --- -Syslog numeric priority of the event, if available. -According to RFCs 5424 and 3164, the priority is 8 * facility + severity. This number is therefore expected to contain a value between 0 and 191. - -type: long - -example: 135 - -format: string - --- - -*`log.syslog.severity.code`*:: -+ --- -The Syslog numeric severity of the log event, if available. -If the event source publishing via Syslog provides a different numeric severity value (e.g. firewall, IDS), your source's numeric severity should go to `event.severity`. If the event source does not specify a distinct severity, you can optionally copy the Syslog severity to `event.severity`. - -type: long - -example: 3 - --- - -*`log.syslog.severity.name`*:: -+ --- -The Syslog text-based severity of the log event, if available. -If the event source publishing via Syslog provides a different severity value (e.g. firewall, IDS), your source's text severity should go to `log.level`. If the event source does not specify a distinct severity, you can optionally copy the Syslog severity to `log.level`. - -type: keyword - -example: Error - ---
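Because the priority is defined as `8 * facility + severity`, the facility and severity codes can be recovered from a raw priority value with integer division and modulo. A short worked example using the documented sample value:

[source,python]
----
# Syslog PRI arithmetic per RFCs 5424 and 3164: priority = 8 * facility + severity.
priority = 135                 # the example value shown above

facility = priority // 8       # 16 -> log.syslog.facility.code (local0)
severity = priority % 8        # 7  -> log.syslog.severity.code (debug)
assert 8 * facility + severity == priority
----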
- -[float] -=== network - -The network is defined as the communication path over which a host or network event happens. -The network.* fields should be populated with details about the network activity associated with an event. - - -*`network.application`*:: -+ --- -When a specific application or service is identified from network connection details (source/dest IPs, ports, certificates, or wire format), this field captures the application's or service's name. -For example, the original event identifies the network connection being from a specific web service in a `https` network connection, like `facebook` or `twitter`. -The field value must be normalized to lowercase for querying. - -type: keyword - -example: aim - --- - -*`network.bytes`*:: -+ --- -Total bytes transferred in both directions. -If `source.bytes` and `destination.bytes` are known, `network.bytes` is their sum. - -type: long - -example: 368 - -format: bytes - --- - -*`network.community_id`*:: -+ --- -A hash of source and destination IPs and ports, as well as the protocol used in a communication. This is a tool-agnostic standard to identify flows. -Learn more at https://github.com/corelight/community-id-spec. - -type: keyword - -example: 1:hO+sN4H+MG5MY/8hIrXPqc4ZQz0= - --- - -*`network.direction`*:: -+ --- -Direction of the network traffic. -Recommended values are: - * ingress - * egress - * inbound - * outbound - * internal - * external - * unknown - -When mapping events from a host-based monitoring context, populate this field from the host's point of view, using the values "ingress" or "egress". -When mapping events from a network or perimeter-based monitoring context, populate this field from the point of view of the network perimeter, using the values "inbound", "outbound", "internal" or "external". -Note that "internal" is not crossing perimeter boundaries, and is meant to describe communication between two hosts within the perimeter. -Note also that "external" is meant to describe traffic between two hosts that are external to the perimeter. This could for example be useful for ISPs or VPN service providers. - -type: keyword - -example: inbound - --- - -*`network.forwarded_ip`*:: -+ --- -Host IP address when the source IP address is the proxy. - -type: ip - -example: 192.1.1.2 - --- - -*`network.iana_number`*:: -+ --- -IANA Protocol Number (https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml). Standardized list of protocols. This aligns well with NetFlow and sFlow related logs which use the IANA Protocol Number. - -type: keyword - -example: 6 - --- - -*`network.inner`*:: -+ --- -Network.inner fields are added in addition to network.vlan fields to describe the innermost VLAN when q-in-q VLAN tagging is present. Allowed fields include vlan.id and vlan.name. Inner vlan fields are typically used when sending traffic with multiple 802.1q encapsulations to a network sensor (e.g. Zeek, Wireshark). - -type: object - --- - -*`network.inner.vlan.id`*:: -+ --- -VLAN ID as reported by the observer. - -type: keyword - -example: 10 - --- - -*`network.inner.vlan.name`*:: -+ --- -Optional VLAN name as reported by the observer. - -type: keyword - -example: outside - --- - -*`network.name`*:: -+ --- -Name given by operators to sections of their network. - -type: keyword - -example: Guest Wifi - --- - -*`network.packets`*:: -+ --- -Total packets transferred in both directions. -If `source.packets` and `destination.packets` are known, `network.packets` is their sum. - -type: long - -example: 24 - --- - -*`network.protocol`*:: -+ --- -In the OSI Model this would be the Application Layer protocol. For example, `http`, `dns`, or `ssh`. -The field value must be normalized to lowercase for querying. - -type: keyword - -example: http - --- - -*`network.transport`*:: -+ --- -Same as network.iana_number, but instead using the keyword name of the transport layer (udp, tcp, ipv6-icmp, etc.) -The field value must be normalized to lowercase for querying. - -type: keyword - -example: tcp - --- - -*`network.type`*:: -+ --- -In the OSI Model this would be the Network Layer: ipv4, ipv6, ipsec, pim, etc. -The field value must be normalized to lowercase for querying. - -type: keyword - -example: ipv4 - --- - -*`network.vlan.id`*:: -+ --- -VLAN ID as reported by the observer. - -type: keyword - -example: 10 - --- - -*`network.vlan.name`*:: -+ --- -Optional VLAN name as reported by the observer. - -type: keyword - -example: outside - --- - -[float] -=== observer - -An observer is defined as a special network, security, or application device used to detect, observe, or create network, security, or application-related events and metrics. -This could be a custom hardware appliance or a server that has been configured to run special network, security, or application software. Examples include firewalls, web proxies, intrusion detection/prevention systems, network monitoring sensors, web application firewalls, data loss prevention systems, and APM servers. The observer.* fields shall be populated with details of the system, if any, that detects, observes and/or creates a network, security, or application event or metric. Message queues and ETL components used in processing events or metrics are not considered observers in ECS. - - -*`observer.egress`*:: -+ --- -Observer.egress holds information like interface number and name, vlan, and zone information to classify egress traffic.
Single armed monitoring such as a network sensor on a span port should only use observer.ingress to categorize traffic. - -type: object - --- - -*`observer.egress.interface.alias`*:: -+ --- -Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming. - -type: keyword - -example: outside - --- - -*`observer.egress.interface.id`*:: -+ --- -Interface ID as reported by an observer (typically SNMP interface ID). - -type: keyword - -example: 10 - --- - -*`observer.egress.interface.name`*:: -+ --- -Interface name as reported by the system. - -type: keyword - -example: eth0 - --- - -*`observer.egress.vlan.id`*:: -+ --- -VLAN ID as reported by the observer. - -type: keyword - -example: 10 - --- - -*`observer.egress.vlan.name`*:: -+ --- -Optional VLAN name as reported by the observer. - -type: keyword - -example: outside - --- - -*`observer.egress.zone`*:: -+ --- -Network zone of outbound traffic as reported by the observer to categorize the destination area of egress traffic, e.g. Internal, External, DMZ, HR, Legal, etc. - -type: keyword - -example: Public_Internet - --- - -*`observer.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - --- - -*`observer.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`observer.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - --- - -*`observer.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - --- - -*`observer.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - --- - -*`observer.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - --- - -*`observer.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - --- - -*`observer.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`observer.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - --- - -*`observer.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - --- - -*`observer.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`observer.hostname`*:: -+ --- -Hostname of the observer. - -type: keyword - --- - -*`observer.ingress`*:: -+ --- -Observer.ingress holds information like interface number and name, vlan, and zone information to classify ingress traffic. Single armed monitoring such as a network sensor on a span port should only use observer.ingress to categorize traffic. - -type: object - --- - -*`observer.ingress.interface.alias`*:: -+ --- -Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming. - -type: keyword - -example: outside - --- - -*`observer.ingress.interface.id`*:: -+ --- -Interface ID as reported by an observer (typically SNMP interface ID). 
- -type: keyword - -example: 10 - --- - -*`observer.ingress.interface.name`*:: -+ --- -Interface name as reported by the system. - -type: keyword - -example: eth0 - --- - -*`observer.ingress.vlan.id`*:: -+ --- -VLAN ID as reported by the observer. - -type: keyword - -example: 10 - --- - -*`observer.ingress.vlan.name`*:: -+ --- -Optional VLAN name as reported by the observer. - -type: keyword - -example: outside - --- - -*`observer.ingress.zone`*:: -+ --- -Network zone of incoming traffic as reported by the observer to categorize the source area of ingress traffic, e.g. Internal, External, DMZ, HR, Legal, etc. - -type: keyword - -example: DMZ - --- - -*`observer.ip`*:: -+ --- -IP addresses of the observer. - -type: ip - --- - -*`observer.mac`*:: -+ --- -MAC addresses of the observer. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: ["00-00-5E-00-53-23", "00-00-5E-00-53-24"] - --- - -*`observer.name`*:: -+ --- -Custom name of the observer. -This is a name that can be given to an observer. This can be helpful for example if multiple firewalls of the same model are used in an organization. -If no custom name is needed, the field can be left empty. - -type: keyword - -example: 1_proxySG - --- - -*`observer.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - -type: keyword - -example: debian - --- - -*`observer.os.full`*:: -+ --- -Operating system name, including the version or code name. - -type: keyword - -example: Mac OS Mojave - --- - -*`observer.os.full.text`*:: -+ --- -type: match_only_text - --- - -*`observer.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - -type: keyword - -example: 4.4.0-112-generic - --- - -*`observer.os.name`*:: -+ --- -Operating system name, without the version. - -type: keyword - -example: Mac OS X - --- - -*`observer.os.name.text`*:: -+ --- -type: match_only_text - --- - -*`observer.os.platform`*:: -+ --- -Operating system platform (such as centos, ubuntu, windows). - -type: keyword - -example: darwin - --- - -*`observer.os.type`*:: -+ --- -Use the `os.type` field to categorize the operating system into one of the broad commercial families. -One of the following values should be used (lowercase): linux, macos, unix, windows. -If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. - -type: keyword - -example: macos - --- - -*`observer.os.version`*:: -+ --- -Operating system version as a raw string. - -type: keyword - -example: 10.14.1 - --- - -*`observer.product`*:: -+ --- -The product name of the observer. - -type: keyword - -example: s200 - --- - -*`observer.serial_number`*:: -+ --- -Observer serial number. - -type: keyword - --- - -*`observer.type`*:: -+ --- -The type of the observer the data is coming from. -There is no predefined list of observer types. Some examples are `forwarder`, `firewall`, `ids`, `ips`, `proxy`, `poller`, `sensor`, `APM server`. - -type: keyword - -example: firewall - --- - -*`observer.vendor`*:: -+ --- -Vendor name of the observer. - -type: keyword - -example: Symantec - --- - -*`observer.version`*:: -+ --- -Observer version. - -type: keyword - --- - -[float] -=== orchestrator - -Fields that describe the resources which container orchestrators manage or act upon.
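Before the individual field definitions, a hypothetical Kubernetes scheduling event may help show how these fields fit together; every value below is invented for illustration:

[source,python]
----
# Invented orchestrator.* values for a Kubernetes pod event (illustration only).
event = {
    "orchestrator.type": "kubernetes",
    "orchestrator.api_version": "v1beta1",
    "orchestrator.namespace": "kube-system",
    "orchestrator.cluster.name": "prod-cluster-1",   # invented cluster name
    "orchestrator.resource.type": "pod",
    "orchestrator.resource.name": "test-pod-cdcws",
}
----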
- - -*`orchestrator.api_version`*:: -+ --- -API version being used to carry out the action. - -type: keyword - -example: v1beta1 - --- - -*`orchestrator.cluster.name`*:: -+ --- -Name of the cluster. - -type: keyword - --- - -*`orchestrator.cluster.url`*:: -+ --- -URL of the API used to manage the cluster. - -type: keyword - --- - -*`orchestrator.cluster.version`*:: -+ --- -The version of the cluster. - -type: keyword - --- - -*`orchestrator.namespace`*:: -+ --- -Namespace in which the action is taking place. - -type: keyword - -example: kube-system - --- - -*`orchestrator.organization`*:: -+ --- -Organization affected by the event (for multi-tenant orchestrator setups). - -type: keyword - -example: elastic - --- - -*`orchestrator.resource.name`*:: -+ --- -Name of the resource being acted upon. - -type: keyword - -example: test-pod-cdcws - --- - -*`orchestrator.resource.type`*:: -+ --- -Type of resource being acted upon. - -type: keyword - -example: service - --- - -*`orchestrator.type`*:: -+ --- -Orchestrator cluster type (e.g. kubernetes, nomad or cloudfoundry). - -type: keyword - -example: kubernetes - --- - -[float] -=== organization - -The organization fields enrich data with information about the company or entity the data is associated with. -These fields help you arrange or filter data stored in an index by one or multiple organizations. - - -*`organization.id`*:: -+ --- -Unique identifier for the organization. - -type: keyword - --- - -*`organization.name`*:: -+ --- -Organization name. - -type: keyword - --- - -*`organization.name.text`*:: -+ --- -type: match_only_text - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - -*`os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - -type: keyword - -example: debian - --- - -*`os.full`*:: -+ --- -Operating system name, including the version or code name. - -type: keyword - -example: Mac OS Mojave - --- - -*`os.full.text`*:: -+ --- -type: match_only_text - --- - -*`os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - -type: keyword - -example: 4.4.0-112-generic - --- - -*`os.name`*:: -+ --- -Operating system name, without the version. - -type: keyword - -example: Mac OS X - --- - -*`os.name.text`*:: -+ --- -type: match_only_text - --- - -*`os.platform`*:: -+ --- -Operating system platform (such as centos, ubuntu, windows). - -type: keyword - -example: darwin - --- - -*`os.type`*:: -+ --- -Use the `os.type` field to categorize the operating system into one of the broad commercial families. -One of the following values should be used (lowercase): linux, macos, unix, windows. -If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. - -type: keyword - -example: macos - --- - -*`os.version`*:: -+ --- -Operating system version as a raw string. - -type: keyword - -example: 10.14.1 - ---
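Because `os.type` is a closed vocabulary while `os.platform` is free-form, ingest pipelines often derive the former from the latter. A minimal sketch of such a lookup; the mapping table is illustrative, not exhaustive:

[source,python]
----
# Illustrative normalization of a free-form os.platform value into the
# closed os.type vocabulary (linux, macos, unix, windows). Unknown
# platforms are left unset, as the field description advises.
PLATFORM_TO_TYPE = {
    "centos": "linux",
    "debian": "linux",
    "ubuntu": "linux",
    "darwin": "macos",
    "windows": "windows",
    "freebsd": "unix",
}

def derive_os_type(platform):
    return PLATFORM_TO_TYPE.get(platform.lower())

assert derive_os_type("darwin") == "macos"
assert derive_os_type("haiku") is None  # not in the list: leave os.type empty
----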
- -[float] -=== package - -These fields contain information about an installed software package. It contains general information about a package, such as name, version or size. It also contains installation details, such as time or location. - - -*`package.architecture`*:: -+ --- -Package architecture. - -type: keyword - -example: x86_64 - --- - -*`package.build_version`*:: -+ --- -Additional information about the build version of the installed package. -For example, use the commit SHA of a non-released package. - -type: keyword - -example: 36f4f7e89dd61b0988b12ee000b98966867710cd - --- - -*`package.checksum`*:: -+ --- -Checksum of the installed package for verification. - -type: keyword - -example: 68b329da9893e34099c7d8ad5cb9c940 - --- - -*`package.description`*:: -+ --- -Description of the package. - -type: keyword - -example: Open source programming language to build simple/reliable/efficient software. - --- - -*`package.install_scope`*:: -+ --- -Indicates how the package was installed, e.g. user-local, global. - -type: keyword - -example: global - --- - -*`package.installed`*:: -+ --- -Time when package was installed. - -type: date - --- - -*`package.license`*:: -+ --- -License under which the package was released. -Use a short name, e.g. the license identifier from SPDX License List where possible (https://spdx.org/licenses/). - -type: keyword - -example: Apache License 2.0 - --- - -*`package.name`*:: -+ --- -Package name. - -type: keyword - -example: go - --- - -*`package.path`*:: -+ --- -Path where the package is installed. - -type: keyword - -example: /usr/local/Cellar/go/1.12.9/ - --- - -*`package.reference`*:: -+ --- -Home page or reference URL of the software in this package, if available. - -type: keyword - -example: https://golang.org - --- - -*`package.size`*:: -+ --- -Package size in bytes. - -type: long - -example: 62231 - -format: string - --- - -*`package.type`*:: -+ --- -Type of package. -This should contain the package file type, rather than the package manager name. Examples: rpm, dpkg, brew, npm, gem, nupkg, jar. - -type: keyword - -example: rpm - --- - -*`package.version`*:: -+ --- -Package version. - -type: keyword - -example: 1.12.9 - --- - -[float] -=== pe - -These fields contain Windows Portable Executable (PE) metadata. - - -*`pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - --- - -*`pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - --- - -*`pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - --- - -*`pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - --- - -*`pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - --- - -*`pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - --- - -*`pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - --- - -[float] -=== process - -These fields contain information about a process. -These fields can help you correlate metrics information with a process id/name from a log message. The `process.pid` often stays in the metric itself and is copied to the global field for correlation. - - -*`process.args`*:: -+ --- -Array of process arguments, starting with the absolute path to the executable. -May be filtered to protect sensitive information.
- -type: keyword - -example: ["/usr/bin/ssh", "-l", "user", "10.0.0.16"] - --- - -*`process.args_count`*:: -+ --- -Length of the process.args array. -This field can be useful for querying or performing bucket analysis on how many arguments were provided to start a process. More arguments may be an indication of suspicious activity. - -type: long - -example: 4 - --- - -*`process.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`process.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - --- - -*`process.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`process.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - --- - -*`process.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - --- - -*`process.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`process.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`process.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - --- - -*`process.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - --- - -*`process.command_line`*:: -+ --- -Full command line that started the process, including the absolute path to the executable, and all arguments. -Some arguments may be filtered to protect sensitive information. - -type: wildcard - -example: /usr/bin/ssh -l user 10.0.0.16 - --- - -*`process.command_line.text`*:: -+ --- -type: match_only_text - --- - -*`process.elf.architecture`*:: -+ --- -Machine architecture of the ELF file. - -type: keyword - -example: x86-64 - --- - -*`process.elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`process.elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`process.elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`process.elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`process.elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). 
- -type: keyword - --- - -*`process.elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`process.elf.header.data`*:: -+ --- -Data table of the ELF header. - -type: keyword - --- - -*`process.elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`process.elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`process.elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`process.elf.header.type`*:: -+ --- -Header type of the ELF file. - -type: keyword - --- - -*`process.elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`process.elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`process.elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`process.elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`process.elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`process.elf.sections.flags`*:: -+ --- -ELF Section List flags. - -type: keyword - --- - -*`process.elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`process.elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`process.elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`process.elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`process.elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`process.elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`process.elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. - -type: nested - --- - -*`process.elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`process.elf.segments.type`*:: -+ --- -ELF object segment type. - -type: keyword - --- - -*`process.elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`process.elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -*`process.end`*:: -+ --- -The time the process ended. - -type: date - -example: 2016-05-23T08:05:34.853Z - --- - -*`process.entity_id`*:: -+ --- -Unique identifier for the process. -The implementation of this is specified by the data source, but some examples of what could be used here are a process-generated UUID, Sysmon Process GUIDs, or a hash of some uniquely identifying components of a process. -Constructing a globally unique identifier is a common practice to mitigate PID reuse as well as to identify a specific process over time, across multiple monitored hosts. - -type: keyword - -example: c2c455d9f99375d - --- - -*`process.executable`*:: -+ --- -Absolute path to the process executable. 
- -type: keyword - -example: /usr/bin/ssh - --- - -*`process.executable.text`*:: -+ --- -type: match_only_text - --- - -*`process.exit_code`*:: -+ --- -The exit code of the process, if this is a termination event. -The field should be absent if there is no exit code for the event (e.g. process start). - -type: long - -example: 137 - --- - -*`process.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - --- - -*`process.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - --- - -*`process.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - --- - -*`process.hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - --- - -*`process.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`process.name`*:: -+ --- -Process name. -Sometimes called program name or similar. - -type: keyword - -example: ssh - --- - -*`process.name.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.args`*:: -+ --- -Array of process arguments, starting with the absolute path to the executable. -May be filtered to protect sensitive information. - -type: keyword - -example: ["/usr/bin/ssh", "-l", "user", "10.0.0.16"] - --- - -*`process.parent.args_count`*:: -+ --- -Length of the process.args array. -This field can be useful for querying or performing bucket analysis on how many arguments were provided to start a process. More arguments may be an indication of suspicious activity. - -type: long - -example: 4 - --- - -*`process.parent.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`process.parent.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - --- - -*`process.parent.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`process.parent.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - --- - -*`process.parent.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - --- - -*`process.parent.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`process.parent.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`process.parent.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - --- - -*`process.parent.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. 
- -type: boolean - -example: true - --- - -*`process.parent.command_line`*:: -+ --- -Full command line that started the process, including the absolute path to the executable, and all arguments. -Some arguments may be filtered to protect sensitive information. - -type: wildcard - -example: /usr/bin/ssh -l user 10.0.0.16 - --- - -*`process.parent.command_line.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.elf.architecture`*:: -+ --- -Machine architecture of the ELF file. - -type: keyword - -example: x86-64 - --- - -*`process.parent.elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`process.parent.elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`process.parent.elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`process.parent.elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`process.parent.elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`process.parent.elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`process.parent.elf.header.data`*:: -+ --- -Data table of the ELF header. - -type: keyword - --- - -*`process.parent.elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`process.parent.elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`process.parent.elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`process.parent.elf.header.type`*:: -+ --- -Header type of the ELF file. - -type: keyword - --- - -*`process.parent.elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`process.parent.elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`process.parent.elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`process.parent.elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`process.parent.elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`process.parent.elf.sections.flags`*:: -+ --- -ELF Section List flags. - -type: keyword - --- - -*`process.parent.elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`process.parent.elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`process.parent.elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`process.parent.elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`process.parent.elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`process.parent.elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`process.parent.elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. 
-The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. - -type: nested - --- - -*`process.parent.elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`process.parent.elf.segments.type`*:: -+ --- -ELF object segment type. - -type: keyword - --- - -*`process.parent.elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`process.parent.elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -*`process.parent.end`*:: -+ --- -The time the process ended. - -type: date - -example: 2016-05-23T08:05:34.853Z - --- - -*`process.parent.entity_id`*:: -+ --- -Unique identifier for the process. -The implementation of this is specified by the data source, but some examples of what could be used here are a process-generated UUID, Sysmon Process GUIDs, or a hash of some uniquely identifying components of a process. -Constructing a globally unique identifier is a common practice to mitigate PID reuse as well as to identify a specific process over time, across multiple monitored hosts. - -type: keyword - -example: c2c455d9f99375d - --- - -*`process.parent.executable`*:: -+ --- -Absolute path to the process executable. - -type: keyword - -example: /usr/bin/ssh - --- - -*`process.parent.executable.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.exit_code`*:: -+ --- -The exit code of the process, if this is a termination event. -The field should be absent if there is no exit code for the event (e.g. process start). - -type: long - -example: 137 - --- - -*`process.parent.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - --- - -*`process.parent.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - --- - -*`process.parent.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - --- - -*`process.parent.hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - --- - -*`process.parent.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`process.parent.name`*:: -+ --- -Process name. -Sometimes called program name or similar. - -type: keyword - -example: ssh - --- - -*`process.parent.name.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - --- - -*`process.parent.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - --- - -*`process.parent.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - --- - -*`process.parent.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - --- - -*`process.parent.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - --- - -*`process.parent.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - --- - -*`process.parent.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. 
- -type: keyword - -example: Microsoft® Windows® Operating System - --- - -*`process.parent.pgid`*:: -+ --- -Identifier of the group of processes the process belongs to. - -type: long - -format: string - --- - -*`process.parent.pid`*:: -+ --- -Process id. - -type: long - -example: 4242 - -format: string - --- - -*`process.parent.start`*:: -+ --- -The time the process started. - -type: date - -example: 2016-05-23T08:05:34.853Z - --- - -*`process.parent.thread.id`*:: -+ --- -Thread ID. - -type: long - -example: 4242 - -format: string - --- - -*`process.parent.thread.name`*:: -+ --- -Thread name. - -type: keyword - -example: thread-0 - --- - -*`process.parent.title`*:: -+ --- -Process title. -The proctitle, sometimes the same as process name. Can also be different: for example a browser setting its title to the web page currently opened. - -type: keyword - --- - -*`process.parent.title.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.uptime`*:: -+ --- -Seconds the process has been up. - -type: long - -example: 1325 - --- - -*`process.parent.working_directory`*:: -+ --- -The working directory of the process. - -type: keyword - -example: /home/alice - --- - -*`process.parent.working_directory.text`*:: -+ --- -type: match_only_text - --- - -*`process.pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - --- - -*`process.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - --- - -*`process.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - --- - -*`process.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - --- - -*`process.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - --- - -*`process.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - --- - -*`process.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - --- - -*`process.pgid`*:: -+ --- -Identifier of the group of processes the process belongs to. - -type: long - -format: string - --- - -*`process.pid`*:: -+ --- -Process id. - -type: long - -example: 4242 - -format: string - --- - -*`process.start`*:: -+ --- -The time the process started. - -type: date - -example: 2016-05-23T08:05:34.853Z - --- - -*`process.thread.id`*:: -+ --- -Thread ID. - -type: long - -example: 4242 - -format: string - --- - -*`process.thread.name`*:: -+ --- -Thread name. - -type: keyword - -example: thread-0 - --- - -*`process.title`*:: -+ --- -Process title. -The proctitle, sometimes the same as process name. Can also be different: for example a browser setting its title to the web page currently opened. - -type: keyword - --- - -*`process.title.text`*:: -+ --- -type: match_only_text - --- - -*`process.uptime`*:: -+ --- -Seconds the process has been up.
- -type: long - -example: 1325 - --- - -*`process.working_directory`*:: -+ --- -The working directory of the process. - -type: keyword - -example: /home/alice - --- - -*`process.working_directory.text`*:: -+ --- -type: match_only_text - --- - -[float] -=== registry - -Fields related to Windows Registry operations. - - -*`registry.data.bytes`*:: -+ --- -Original bytes written with base64 encoding. -For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values. - -type: keyword - -example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA= - --- - -*`registry.data.strings`*:: -+ --- -Content when writing string types. -Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of string with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g. `"1"`). - -type: wildcard - -example: ["C:\rta\red_ttp\bin\myapp.exe"] - --- - -*`registry.data.type`*:: -+ --- -Standard registry type for encoding contents. - -type: keyword - -example: REG_SZ - --- - -*`registry.hive`*:: -+ --- -Abbreviated name for the hive. - -type: keyword - -example: HKLM - --- - -*`registry.key`*:: -+ --- -Hive-relative path of keys. - -type: keyword - -example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe - --- - -*`registry.path`*:: -+ --- -Full path, including hive, key and value. - -type: keyword - -example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger - --- - -*`registry.value`*:: -+ --- -Name of the value written. - -type: keyword - -example: Debugger - --- - -[float] -=== related - -This field set is meant to facilitate pivoting around a piece of data. -Some pieces of information can be seen in many places in an ECS event. To facilitate searching for them, store an array of all seen values to their corresponding field in `related.`. -A concrete example is IP addresses, which can be under host, observer, source, destination, client, server, and network.forwarded_ip. If you append all IPs to `related.ip`, you can then search for a given IP trivially, no matter where it appeared, by querying `related.ip:192.0.2.15`. - - -*`related.hash`*:: -+ --- -All the hashes seen on your event. Populating this field, then using it to search for hashes can help in situations where you're unsure what the hash algorithm is (and therefore which key name to search). - -type: keyword - --- - -*`related.hosts`*:: -+ --- -All hostnames or other host identifiers seen on your event. Example identifiers include FQDNs, domain names, workstation names, or aliases. - -type: keyword - --- - -*`related.ip`*:: -+ --- -All of the IPs seen on your event. - -type: ip - --- - -*`related.user`*:: -+ --- -All the user names or other user identifiers seen on the event. - -type: keyword - ---
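The pivot described above is straightforward to implement: collect every IP-bearing field the event carries into one array. A sketch, with an invented event and a non-exhaustive field list:

[source,python]
----
# Sketch of filling related.ip so that `related.ip:192.0.2.15` matches the
# event no matter which role the address played. The event and the field
# list are invented for illustration.
IP_FIELDS = ("host.ip", "source.ip", "destination.ip", "network.forwarded_ip")

def related_ip(event):
    seen = set()
    for field in IP_FIELDS:
        value = event.get(field)
        if value is None:
            continue
        seen.update(value if isinstance(value, list) else [value])
    return sorted(seen)

event = {
    "source.ip": "192.0.2.15",
    "destination.ip": "198.51.100.7",
    "host.ip": ["192.0.2.15", "10.0.0.5"],
}
print(related_ip(event))  # ['10.0.0.5', '192.0.2.15', '198.51.100.7']
----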
- -[float] -=== rule - -Rule fields are used to capture the specifics of any observer or agent rules that generate alerts or other notable events. -Examples of data sources that would populate the rule fields include: network admission control platforms, network or host IDS/IPS, network firewalls, web application firewalls, url filters, endpoint detection and response (EDR) systems, etc. - - -*`rule.author`*:: -+ --- -Name, organization, or pseudonym of the author or authors who created the rule used to generate this event. - -type: keyword - -example: ["Star-Lord"] - --- - -*`rule.category`*:: -+ --- -A categorization value keyword used by the entity using the rule for detection of this event. - -type: keyword - -example: Attempted Information Leak - --- - -*`rule.description`*:: -+ --- -The description of the rule generating the event. - -type: keyword - -example: Block requests to public DNS over HTTPS / TLS protocols - --- - -*`rule.id`*:: -+ --- -A rule ID that is unique within the scope of an agent, observer, or other entity using the rule for detection of this event. - -type: keyword - -example: 101 - --- - -*`rule.license`*:: -+ --- -Name of the license under which the rule used to generate this event is made available. - -type: keyword - -example: Apache 2.0 - --- - -*`rule.name`*:: -+ --- -The name of the rule or signature generating the event. - -type: keyword - -example: BLOCK_DNS_over_TLS - --- - -*`rule.reference`*:: -+ --- -Reference URL to additional information about the rule used to generate this event. -The URL can point to the vendor's documentation about the rule. If that's not available, it can also be a link to a more general page describing this type of alert. - -type: keyword - -example: https://en.wikipedia.org/wiki/DNS_over_TLS - --- - -*`rule.ruleset`*:: -+ --- -Name of the ruleset, policy, group, or parent category in which the rule used to generate this event is a member. - -type: keyword - -example: Standard_Protocol_Filters - --- - -*`rule.uuid`*:: -+ --- -A rule ID that is unique within the scope of a set or group of agents, observers, or other entities using the rule for detection of this event. - -type: keyword - -example: 1100110011 - --- - -*`rule.version`*:: -+ --- -The version / revision of the rule being used for analysis. - -type: keyword - -example: 1.1 - --- - -[float] -=== server - -A Server is defined as the responder in a network connection for events regarding sessions, connections, or bidirectional flow records. -For TCP events, the server is the receiver of the initial SYN packet(s) of the TCP connection. For other protocols, the server is generally the responder in the network transaction. Some systems actually use the term "responder" to refer to the server in TCP connections. The server fields describe details about the system acting as the server in the network event. Server fields are usually populated in conjunction with client fields. Server fields are generally not populated for packet-level events. -Client / server representations can add semantic context to an exchange, which is helpful to visualize the data in certain situations. If your context falls in that category, you should still ensure that source and destination are filled appropriately. - - -*`server.address`*:: -+ --- -Some event server addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. -Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - --- - -*`server.as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - --- - -*`server.as.organization.name`*:: -+ --- -Organization name.
- -type: keyword - -example: Google LLC - --- - -*`server.as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -*`server.bytes`*:: -+ --- -Bytes sent from the server to the client. - -type: long - -example: 184 - -format: bytes - --- - -*`server.domain`*:: -+ --- -The domain name of the server system. -This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment. - -type: keyword - -example: foo.example.com - --- - -*`server.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - --- - -*`server.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`server.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - --- - -*`server.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - --- - -*`server.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - --- - -*`server.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - --- - -*`server.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - --- - -*`server.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`server.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - --- - -*`server.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - --- - -*`server.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`server.ip`*:: -+ --- -IP address of the server (IPv4 or IPv6). - -type: ip - --- - -*`server.mac`*:: -+ --- -MAC address of the server. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: 00-00-5E-00-53-23 - --- - -*`server.nat.ip`*:: -+ --- -Translated ip of destination based NAT sessions (e.g. internet to private DMZ) -Typically used with load balancers, firewalls, or routers. - -type: ip - --- - -*`server.nat.port`*:: -+ --- -Translated port of destination based NAT sessions (e.g. internet to private DMZ) -Typically used with load balancers, firewalls, or routers. - -type: long - -format: string - --- - -*`server.packets`*:: -+ --- -Packets sent from the server to the client. - -type: long - -example: 12 - --- - -*`server.port`*:: -+ --- -Port of the server. - -type: long - -format: string - --- - -*`server.registered_domain`*:: -+ --- -The highest registered server domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). 
-Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
-
-type: keyword
-
-example: example.com
-
---
-
-*`server.subdomain`*::
-+
---
-The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain.
-For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
-
-type: keyword
-
-example: east
-
---
-
-*`server.top_level_domain`*::
-+
---
-The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
-
-type: keyword
-
-example: co.uk
-
---
-
-*`server.user.domain`*::
-+
---
-Name of the directory the user is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`server.user.email`*::
-+
---
-User email address.
-
-type: keyword
-
---
-
-*`server.user.full_name`*::
-+
---
-User's full name, if available.
-
-type: keyword
-
-example: Albert Einstein
-
---
-
-*`server.user.full_name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`server.user.group.domain`*::
-+
---
-Name of the directory the group is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`server.user.group.id`*::
-+
---
-Unique identifier for the group on the system/platform.
-
-type: keyword
-
---
-
-*`server.user.group.name`*::
-+
---
-Name of the group.
-
-type: keyword
-
---
-
-*`server.user.hash`*::
-+
---
-Unique user hash to correlate information for a user in anonymized form.
-Useful if `user.id` or `user.name` contain confidential information and cannot be used.
-
-type: keyword
-
---
-
-*`server.user.id`*::
-+
---
-Unique identifier of the user.
-
-type: keyword
-
-example: S-1-5-21-202424912787-2692429404-2351956786-1000
-
---
-
-*`server.user.name`*::
-+
---
-Short name or login of the user.
-
-type: keyword
-
-example: a.einstein
-
---
-
-*`server.user.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`server.user.roles`*::
-+
---
-Array of user roles at the time of the event.
-
-type: keyword
-
-example: ["kibana_admin", "reporting_user"]
-
---
-
-[float]
-=== service
-
-The service fields describe the service for or from which the data was collected.
-These fields help you find and correlate logs for a specific service and version.
-
-
-*`service.address`*::
-+
---
-Address where data about this service was collected from.
-This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets).
-
-type: keyword
-
-example: 172.26.0.2:5432
-
---
-
-*`service.environment`*::
-+
---
-Identifies the environment where the service is running.
-If the same service runs in different environments (production, staging, QA, development, etc.), the environment can identify other instances of the same service. Can also group services and applications from the same environment.
-
-type: keyword
-
-example: production
-
---
-
-*`service.ephemeral_id`*::
-+
---
-Ephemeral identifier of this service (if one exists).
-This id normally changes across restarts, but `service.id` does not.
-
-type: keyword
-
-example: 8a4f500f
-
---
-
-*`service.id`*::
-+
---
-Unique identifier of the running service. If the service is composed of many nodes, the `service.id` should be the same for all nodes.
-This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event.
-Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead.
-
-type: keyword
-
-example: d37e5ebfe0ae6c4972dbe9f0174a1637bb8247f6
-
---
-
-*`service.name`*::
-+
---
-Name of the service data is collected from.
-The name of the service is normally user given. This allows for distributed services that run on multiple hosts to correlate the related instances based on the name.
-In the case of Elasticsearch the `service.name` could contain the cluster name. For Beats the `service.name` is by default a copy of the `service.type` field if no name is specified.
-
-type: keyword
-
-example: elasticsearch-metrics
-
---
-
-*`service.node.name`*::
-+
---
-Name of a service node.
-This allows for two nodes of the same service running on the same host to be differentiated. Therefore, `service.node.name` should typically be unique across nodes of a given service.
-In the case of Elasticsearch, the `service.node.name` could contain the unique node name within the Elasticsearch cluster. In cases where the service doesn't have the concept of a node name, the host name or container name can be used to distinguish running instances that make up this service. If those do not provide uniqueness (e.g. multiple instances of the service running on the same host), the node name can be manually set.
-
-type: keyword
-
-example: instance-0000000016
-
---
-
-*`service.origin.address`*::
-+
---
-Address where data about this service was collected from.
-This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets).
-
-type: keyword
-
-example: 172.26.0.2:5432
-
---
-
-*`service.origin.environment`*::
-+
---
-Identifies the environment where the service is running.
-If the same service runs in different environments (production, staging, QA, development, etc.), the environment can identify other instances of the same service. Can also group services and applications from the same environment.
-
-type: keyword
-
-example: production
-
---
-
-*`service.origin.ephemeral_id`*::
-+
---
-Ephemeral identifier of this service (if one exists).
-This id normally changes across restarts, but `service.id` does not.
-
-type: keyword
-
-example: 8a4f500f
-
---
-
-*`service.origin.id`*::
-+
---
-Unique identifier of the running service. If the service is composed of many nodes, the `service.id` should be the same for all nodes.
-This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event.
-Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead.
-
-type: keyword
-
-example: d37e5ebfe0ae6c4972dbe9f0174a1637bb8247f6
-
---
-
-*`service.origin.name`*::
-+
---
-Name of the service data is collected from.
-The name of the service is normally user given.
-This allows for distributed services that run on multiple hosts to correlate the related instances based on the name.
-In the case of Elasticsearch the `service.name` could contain the cluster name. For Beats the `service.name` is by default a copy of the `service.type` field if no name is specified.
-
-type: keyword
-
-example: elasticsearch-metrics
-
---
-
-*`service.origin.node.name`*::
-+
---
-Name of a service node.
-This allows for two nodes of the same service running on the same host to be differentiated. Therefore, `service.node.name` should typically be unique across nodes of a given service.
-In the case of Elasticsearch, the `service.node.name` could contain the unique node name within the Elasticsearch cluster. In cases where the service doesn't have the concept of a node name, the host name or container name can be used to distinguish running instances that make up this service. If those do not provide uniqueness (e.g. multiple instances of the service running on the same host), the node name can be manually set.
-
-type: keyword
-
-example: instance-0000000016
-
---
-
-*`service.origin.state`*::
-+
---
-Current state of the service.
-
-type: keyword
-
---
-
-*`service.origin.type`*::
-+
---
-The type of the service data is collected from.
-The type can be used to group and correlate logs and metrics from one service type.
-Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`.
-
-type: keyword
-
-example: elasticsearch
-
---
-
-*`service.origin.version`*::
-+
---
-Version of the service the data was collected from.
-This allows you to look at a data set only for a specific version of a service.
-
-type: keyword
-
-example: 3.2.4
-
---
-
-*`service.state`*::
-+
---
-Current state of the service.
-
-type: keyword
-
---
-
-*`service.target.address`*::
-+
---
-Address where data about this service was collected from.
-This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets).
-
-type: keyword
-
-example: 172.26.0.2:5432
-
---
-
-*`service.target.environment`*::
-+
---
-Identifies the environment where the service is running.
-If the same service runs in different environments (production, staging, QA, development, etc.), the environment can identify other instances of the same service. Can also group services and applications from the same environment.
-
-type: keyword
-
-example: production
-
---
-
-*`service.target.ephemeral_id`*::
-+
---
-Ephemeral identifier of this service (if one exists).
-This id normally changes across restarts, but `service.id` does not.
-
-type: keyword
-
-example: 8a4f500f
-
---
-
-*`service.target.id`*::
-+
---
-Unique identifier of the running service. If the service is composed of many nodes, the `service.id` should be the same for all nodes.
-This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event.
-Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead.
-
-type: keyword
-
-example: d37e5ebfe0ae6c4972dbe9f0174a1637bb8247f6
-
---
-
-*`service.target.name`*::
-+
---
-Name of the service data is collected from.
-The name of the service is normally user given. This allows for distributed services that run on multiple hosts to correlate the related instances based on the name.
-In the case of Elasticsearch the `service.name` could contain the cluster name.
-For Beats the `service.name` is by default a copy of the `service.type` field if no name is specified.
-
-type: keyword
-
-example: elasticsearch-metrics
-
---
-
-*`service.target.node.name`*::
-+
---
-Name of a service node.
-This allows for two nodes of the same service running on the same host to be differentiated. Therefore, `service.node.name` should typically be unique across nodes of a given service.
-In the case of Elasticsearch, the `service.node.name` could contain the unique node name within the Elasticsearch cluster. In cases where the service doesn't have the concept of a node name, the host name or container name can be used to distinguish running instances that make up this service. If those do not provide uniqueness (e.g. multiple instances of the service running on the same host), the node name can be manually set.
-
-type: keyword
-
-example: instance-0000000016
-
---
-
-*`service.target.state`*::
-+
---
-Current state of the service.
-
-type: keyword
-
---
-
-*`service.target.type`*::
-+
---
-The type of the service data is collected from.
-The type can be used to group and correlate logs and metrics from one service type.
-Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`.
-
-type: keyword
-
-example: elasticsearch
-
---
-
-*`service.target.version`*::
-+
---
-Version of the service the data was collected from.
-This allows you to look at a data set only for a specific version of a service.
-
-type: keyword
-
-example: 3.2.4
-
---
-
-*`service.type`*::
-+
---
-The type of the service data is collected from.
-The type can be used to group and correlate logs and metrics from one service type.
-Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`.
-
-type: keyword
-
-example: elasticsearch
-
---
-
-*`service.version`*::
-+
---
-Version of the service the data was collected from.
-This allows you to look at a data set only for a specific version of a service.
-
-type: keyword
-
-example: 3.2.4
-
---
-
-[float]
-=== source
-
-Source fields capture details about the sender of a network exchange/packet. These fields are populated from a network event, packet, or other event containing details of a network transaction.
-Source fields are usually populated in conjunction with destination fields. The source and destination fields are considered the baseline and should always be filled if an event contains source and destination details from a network transaction. If the event also contains identification of the client and server roles, then the client and server fields should also be populated.
-
-
-*`source.address`*::
-+
---
-Some event source addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field.
-Then it should be duplicated to `.ip` or `.domain`, depending on which one it is.
-
-type: keyword
-
---
-
-*`source.as.number`*::
-+
---
-Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet.
-
-type: long
-
-example: 15169
-
---
-
-*`source.as.organization.name`*::
-+
---
-Organization name.
-
-type: keyword
-
-example: Google LLC
-
---
-
-*`source.as.organization.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`source.bytes`*::
-+
---
-Bytes sent from the source to the destination.
-
-type: long
-
-example: 184
-
-format: bytes
-
---
-
-*`source.domain`*::
-+
---
-The domain name of the source system.
-This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment.
-
-type: keyword
-
-example: foo.example.com
-
---
-
-*`source.geo.city_name`*::
-+
---
-City name.
-
-type: keyword
-
-example: Montreal
-
---
-
-*`source.geo.continent_code`*::
-+
---
-Two-letter code representing the continent's name.
-
-type: keyword
-
-example: NA
-
---
-
-*`source.geo.continent_name`*::
-+
---
-Name of the continent.
-
-type: keyword
-
-example: North America
-
---
-
-*`source.geo.country_iso_code`*::
-+
---
-Country ISO code.
-
-type: keyword
-
-example: CA
-
---
-
-*`source.geo.country_name`*::
-+
---
-Country name.
-
-type: keyword
-
-example: Canada
-
---
-
-*`source.geo.location`*::
-+
---
-Longitude and latitude.
-
-type: geo_point
-
-example: { "lon": -73.614830, "lat": 45.505918 }
-
---
-
-*`source.geo.name`*::
-+
---
-User-defined description of a location, at the level of granularity they care about.
-Could be the name of their data centers, the floor number, if this describes a local physical entity, city names.
-Not typically used in automated geolocation.
-
-type: keyword
-
-example: boston-dc
-
---
-
-*`source.geo.postal_code`*::
-+
---
-Postal code associated with the location.
-Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
-
-type: keyword
-
-example: 94040
-
---
-
-*`source.geo.region_iso_code`*::
-+
---
-Region ISO code.
-
-type: keyword
-
-example: CA-QC
-
---
-
-*`source.geo.region_name`*::
-+
---
-Region name.
-
-type: keyword
-
-example: Quebec
-
---
-
-*`source.geo.timezone`*::
-+
---
-The time zone of the location, such as an IANA time zone name.
-
-type: keyword
-
-example: America/Argentina/Buenos_Aires
-
---
-
-*`source.ip`*::
-+
---
-IP address of the source (IPv4 or IPv6).
-
-type: ip
-
---
-
-*`source.mac`*::
-+
---
-MAC address of the source.
-The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen.
-
-type: keyword
-
-example: 00-00-5E-00-53-23
-
---
-
-*`source.nat.ip`*::
-+
---
-Translated IP of source-based NAT sessions (e.g. internal client to internet).
-Typically used for connections traversing load balancers, firewalls, or routers.
-
-type: ip
-
---
-
-*`source.nat.port`*::
-+
---
-Translated port of source-based NAT sessions (e.g. internal client to internet).
-Typically used with load balancers, firewalls, or routers.
-
-type: long
-
-format: string
-
---
-
-*`source.packets`*::
-+
---
-Packets sent from the source to the destination.
-
-type: long
-
-example: 12
-
---
-
-*`source.port`*::
-+
---
-Port of the source.
-
-type: long
-
-format: string
-
---
-
-*`source.registered_domain`*::
-+
---
-The highest registered source domain, stripped of the subdomain.
-For example, the registered domain for "foo.example.com" is "example.com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
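-
-As a hedged illustration (not part of ECS), the third-party `tldextract` Python package applies the public suffix list and covers this field along with the related `source.subdomain` and `source.top_level_domain` fields below:
-
-[source,python]
-----
-# Assumes the third-party `tldextract` package, which bundles the
-# public suffix list (pip install tldextract).
-import tldextract
-
-ext = tldextract.extract("www.east.mydomain.co.uk")
-print(ext.registered_domain)  # "mydomain.co.uk"
-print(ext.suffix)             # "co.uk", the effective TLD
-print(ext.subdomain)          # "www.east"; ECS subdomain excludes the
-                              # host name, so "east" after stripping "www"
-----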
-
-type: keyword
-
-example: example.com
-
---
-
-*`source.subdomain`*::
-+
---
-The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain.
-For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
-
-type: keyword
-
-example: east
-
---
-
-*`source.top_level_domain`*::
-+
---
-The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
-
-type: keyword
-
-example: co.uk
-
---
-
-*`source.user.domain`*::
-+
---
-Name of the directory the user is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`source.user.email`*::
-+
---
-User email address.
-
-type: keyword
-
---
-
-*`source.user.full_name`*::
-+
---
-User's full name, if available.
-
-type: keyword
-
-example: Albert Einstein
-
---
-
-*`source.user.full_name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`source.user.group.domain`*::
-+
---
-Name of the directory the group is a member of.
-For example, an LDAP or Active Directory domain name.
-
-type: keyword
-
---
-
-*`source.user.group.id`*::
-+
---
-Unique identifier for the group on the system/platform.
-
-type: keyword
-
---
-
-*`source.user.group.name`*::
-+
---
-Name of the group.
-
-type: keyword
-
---
-
-*`source.user.hash`*::
-+
---
-Unique user hash to correlate information for a user in anonymized form.
-Useful if `user.id` or `user.name` contain confidential information and cannot be used.
-
-type: keyword
-
---
-
-*`source.user.id`*::
-+
---
-Unique identifier of the user.
-
-type: keyword
-
-example: S-1-5-21-202424912787-2692429404-2351956786-1000
-
---
-
-*`source.user.name`*::
-+
---
-Short name or login of the user.
-
-type: keyword
-
-example: a.einstein
-
---
-
-*`source.user.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`source.user.roles`*::
-+
---
-Array of user roles at the time of the event.
-
-type: keyword
-
-example: ["kibana_admin", "reporting_user"]
-
---
-
-[float]
-=== threat
-
-Fields to classify events and alerts according to a threat taxonomy such as the MITRE ATT&CK® framework.
-These fields are for users to classify alerts from all of their sources (e.g. IDS, NGFW, etc.) within a common taxonomy. The threat.tactic.* fields are meant to capture the high-level category of the threat (e.g. "impact"). The threat.technique.* fields are meant to capture which kind of approach is used by this detected threat, to accomplish the goal (e.g. "endpoint denial of service").
-
-
-*`threat.enrichments`*::
-+
---
-A list of associated indicator objects enriching the event, and the context of that association/enrichment.
-
-type: nested
-
---
-
-*`threat.enrichments.indicator`*::
-+
---
-Object containing associated indicators enriching the event.
-
-type: object
-
---
-
-*`threat.enrichments.indicator.as.number`*::
-+
---
-Unique number allocated to the autonomous system.
-The autonomous system number (ASN) uniquely identifies each network on the Internet.
-
-type: long
-
-example: 15169
-
---
-
-*`threat.enrichments.indicator.as.organization.name`*::
-+
---
-Organization name.
-
-type: keyword
-
-example: Google LLC
-
---
-
-*`threat.enrichments.indicator.as.organization.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.enrichments.indicator.confidence`*::
-+
---
-Identifies the vendor-neutral confidence rating using the None/Low/Medium/High scale defined in Appendix A of the STIX 2.1 framework. Vendor-specific confidence scales may be added as custom fields.
-Expected values are:
- * Not Specified
- * None
- * Low
- * Medium
- * High
-
-type: keyword
-
-example: Medium
-
---
-
-*`threat.enrichments.indicator.description`*::
-+
---
-Describes the type of action conducted by the threat.
-
-type: keyword
-
-example: IP x.x.x.x was observed delivering the Angler EK.
-
---
-
-*`threat.enrichments.indicator.email.address`*::
-+
---
-Identifies a threat indicator as an email address (irrespective of direction).
-
-type: keyword
-
-example: phish@example.com
-
---
-
-*`threat.enrichments.indicator.file.accessed`*::
-+
---
-Last time the file was accessed.
-Note that not all filesystems keep track of access time.
-
-type: date
-
---
-
-*`threat.enrichments.indicator.file.attributes`*::
-+
---
-Array of file attributes.
-Attribute names will vary by platform. Here's a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write.
-
-type: keyword
-
-example: ["readonly", "system"]
-
---
-
-*`threat.enrichments.indicator.file.code_signature.digest_algorithm`*::
-+
---
-The hashing algorithm used to sign the process.
-This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm.
-
-type: keyword
-
-example: sha256
-
---
-
-*`threat.enrichments.indicator.file.code_signature.exists`*::
-+
---
-Boolean to capture if a signature is present.
-
-type: boolean
-
-example: true
-
---
-
-*`threat.enrichments.indicator.file.code_signature.signing_id`*::
-+
---
-The identifier used to sign the process.
-This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: com.apple.xpc.proxy
-
---
-
-*`threat.enrichments.indicator.file.code_signature.status`*::
-+
---
-Additional information about the certificate status.
-This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked.
-
-type: keyword
-
-example: ERROR_UNTRUSTED_ROOT
-
---
-
-*`threat.enrichments.indicator.file.code_signature.subject_name`*::
-+
---
-Subject name of the code signer.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`threat.enrichments.indicator.file.code_signature.team_id`*::
-+
---
-The team identifier used to sign the process.
-This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: EQHXZ8M8AV
-
---
-
-*`threat.enrichments.indicator.file.code_signature.timestamp`*::
-+
---
-Date and time when the code signature was generated and signed.
-
-type: date
-
-example: 2021-01-01T12:10:30Z
-
---
-
-*`threat.enrichments.indicator.file.code_signature.trusted`*::
-+
---
-Stores the trust status of the certificate chain.
-
-Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status.
-
-type: boolean
-
-example: true
-
---
-
-*`threat.enrichments.indicator.file.code_signature.valid`*::
-+
---
-Boolean to capture if the digital signature is verified against the binary content.
-Leave unpopulated if a certificate was unchecked.
-
-type: boolean
-
-example: true
-
---
-
-*`threat.enrichments.indicator.file.created`*::
-+
---
-File creation time.
-Note that not all filesystems store the creation time.
-
-type: date
-
---
-
-*`threat.enrichments.indicator.file.ctime`*::
-+
---
-Last time the file attributes or metadata changed.
-Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file.
-
-type: date
-
---
-
-*`threat.enrichments.indicator.file.device`*::
-+
---
-Device that is the source of the file.
-
-type: keyword
-
-example: sda
-
---
-
-*`threat.enrichments.indicator.file.directory`*::
-+
---
-Directory where the file is located. It should include the drive letter, when appropriate.
-
-type: keyword
-
-example: /home/alice
-
---
-
-*`threat.enrichments.indicator.file.drive_letter`*::
-+
---
-Drive letter where the file is located. This field is only relevant on Windows.
-The value should be uppercase, and not include the colon.
-
-type: keyword
-
-example: C
-
---
-
-*`threat.enrichments.indicator.file.elf.architecture`*::
-+
---
-Machine architecture of the ELF file.
-
-type: keyword
-
-example: x86-64
-
---
-
-*`threat.enrichments.indicator.file.elf.byte_order`*::
-+
---
-Byte sequence of ELF file.
-
-type: keyword
-
-example: Little Endian
-
---
-
-*`threat.enrichments.indicator.file.elf.cpu_type`*::
-+
---
-CPU type of the ELF file.
-
-type: keyword
-
-example: Intel
-
---
-
-*`threat.enrichments.indicator.file.elf.creation_date`*::
-+
---
-Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators.
-
-type: date
-
---
-
-*`threat.enrichments.indicator.file.elf.exports`*::
-+
---
-List of exported element names and types.
-
-type: flattened
-
---
-
-*`threat.enrichments.indicator.file.elf.header.abi_version`*::
-+
---
-Version of the ELF Application Binary Interface (ABI).
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.header.class`*::
-+
---
-Header class of the ELF file.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.header.data`*::
-+
---
-Data table of the ELF header.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.header.entrypoint`*::
-+
---
-Header entrypoint of the ELF file.
-
-type: long
-
-format: string
-
---
-
-*`threat.enrichments.indicator.file.elf.header.object_version`*::
-+
---
-"0x1" for original ELF files.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.header.os_abi`*::
-+
---
-Application Binary Interface (ABI) of the Linux OS.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.header.type`*::
-+
---
-Header type of the ELF file.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.header.version`*::
-+
---
-Version of the ELF header.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.imports`*::
-+
---
-List of imported element names and types.
-
-type: flattened
-
---
-
-*`threat.enrichments.indicator.file.elf.sections`*::
-+
---
-An array containing an object for each section of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`.
-
-type: nested
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.chi2`*::
-+
---
-Chi-square probability distribution of the section.
-
-type: long
-
-format: number
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.entropy`*::
-+
---
-Shannon entropy calculation from the section.
-
-type: long
-
-format: number
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.flags`*::
-+
---
-ELF Section List flags.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.name`*::
-+
---
-ELF Section List name.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.physical_offset`*::
-+
---
-ELF Section List offset.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.physical_size`*::
-+
---
-ELF Section List physical size.
-
-type: long
-
-format: bytes
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.type`*::
-+
---
-ELF Section List type.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.virtual_address`*::
-+
---
-ELF Section List virtual address.
-
-type: long
-
-format: string
-
---
-
-*`threat.enrichments.indicator.file.elf.sections.virtual_size`*::
-+
---
-ELF Section List virtual size.
-
-type: long
-
-format: string
-
---
-
-*`threat.enrichments.indicator.file.elf.segments`*::
-+
---
-An array containing an object for each segment of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`.
-
-type: nested
-
---
-
-*`threat.enrichments.indicator.file.elf.segments.sections`*::
-+
---
-ELF object segment sections.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.segments.type`*::
-+
---
-ELF object segment type.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.shared_libraries`*::
-+
---
-List of shared libraries used by this ELF object.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.elf.telfhash`*::
-+
---
-telfhash symbol hash for ELF file.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.extension`*::
-+
---
-File extension, excluding the leading dot.
-Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz").
-
-type: keyword
-
-example: png
-
---
-
-*`threat.enrichments.indicator.file.fork_name`*::
-+
---
-A fork is additional data associated with a filesystem object.
-On macOS, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist.
-On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name.
-
-type: keyword
-
-example: Zone.Identifier
-
---
-
-*`threat.enrichments.indicator.file.gid`*::
-+
---
-Primary group ID (GID) of the file.
-
-type: keyword
-
-example: 1001
-
---
-
-*`threat.enrichments.indicator.file.group`*::
-+
---
-Primary group name of the file.
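-
-For illustration only (not part of ECS), a hypothetical Python sketch of splitting the ADS form described for `fork_name` above into the related `file.*` values:
-
-[source,python]
-----
-import ntpath  # Windows path semantics, usable on any platform
-
-# Hypothetical example path; the split mirrors the fork_name
-# description above.
-full_path = r"C:\path\to\filename.extension:some_fork_name"
-directory, tail = ntpath.split(full_path)
-name, _, fork_name = tail.partition(":")
-extension = name.rsplit(".", 1)[-1] if "." in name else ""
-
-event = {
-    "file.path": full_path,       # full path keeps the fork name
-    "file.directory": directory,  # C:\path\to
-    "file.name": name,            # filename.extension
-    "file.extension": extension,  # extension
-    "file.fork_name": fork_name,  # some_fork_name
-}
-----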
-
-type: keyword
-
-example: alice
-
---
-
-*`threat.enrichments.indicator.file.hash.md5`*::
-+
---
-MD5 hash.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.hash.sha1`*::
-+
---
-SHA1 hash.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.hash.sha256`*::
-+
---
-SHA256 hash.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.hash.sha512`*::
-+
---
-SHA512 hash.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.hash.ssdeep`*::
-+
---
-SSDEEP hash.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.inode`*::
-+
---
-Inode representing the file in the filesystem.
-
-type: keyword
-
-example: 256383
-
---
-
-*`threat.enrichments.indicator.file.mime_type`*::
-+
---
-MIME type should identify the format of the file or stream of bytes using https://www.iana.org/assignments/media-types/media-types.xhtml[IANA official types], where possible. When more than one type is applicable, the most specific type should be used.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.mode`*::
-+
---
-Mode of the file in octal representation.
-
-type: keyword
-
-example: 0640
-
---
-
-*`threat.enrichments.indicator.file.mtime`*::
-+
---
-Last time the file content was modified.
-
-type: date
-
---
-
-*`threat.enrichments.indicator.file.name`*::
-+
---
-Name of the file including the extension, without the directory.
-
-type: keyword
-
-example: example.png
-
---
-
-*`threat.enrichments.indicator.file.owner`*::
-+
---
-File owner's username.
-
-type: keyword
-
-example: alice
-
---
-
-*`threat.enrichments.indicator.file.path`*::
-+
---
-Full path to the file, including the file name. It should include the drive letter, when appropriate.
-
-type: keyword
-
-example: /home/alice/example.png
-
---
-
-*`threat.enrichments.indicator.file.path.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.enrichments.indicator.file.pe.architecture`*::
-+
---
-CPU architecture target for the file.
-
-type: keyword
-
-example: x64
-
---
-
-*`threat.enrichments.indicator.file.pe.company`*::
-+
---
-Internal company name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`threat.enrichments.indicator.file.pe.description`*::
-+
---
-Internal description of the file, provided at compile-time.
-
-type: keyword
-
-example: Paint
-
---
-
-*`threat.enrichments.indicator.file.pe.file_version`*::
-+
---
-Internal version of the file, provided at compile-time.
-
-type: keyword
-
-example: 6.3.9600.17415
-
---
-
-*`threat.enrichments.indicator.file.pe.imphash`*::
-+
---
-A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html.
-
-type: keyword
-
-example: 0c6803c4e922103c4dca5963aad36ddf
-
---
-
-*`threat.enrichments.indicator.file.pe.original_file_name`*::
-+
---
-Internal name of the file, provided at compile-time.
-
-type: keyword
-
-example: MSPAINT.EXE
-
---
-
-*`threat.enrichments.indicator.file.pe.product`*::
-+
---
-Internal product name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft® Windows® Operating System
-
---
-
-*`threat.enrichments.indicator.file.size`*::
-+
---
-File size in bytes.
-Only relevant when `file.type` is "file".
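-
-For illustration only (not part of ECS), a minimal Python sketch deriving the `file.hash.*` values above and `file.size` for a local file:
-
-[source,python]
-----
-import hashlib
-import os
-
-path = "/home/alice/example.png"  # hypothetical input file
-
-# Stream the file once and feed all four standard-library digests.
-digests = {n: hashlib.new(n) for n in ("md5", "sha1", "sha256", "sha512")}
-with open(path, "rb") as f:
-    for chunk in iter(lambda: f.read(65536), b""):
-        for d in digests.values():
-            d.update(chunk)
-
-event = {f"file.hash.{n}": d.hexdigest() for n, d in digests.items()}
-event["file.size"] = os.path.getsize(path)
-# file.hash.ssdeep and pe.imphash need third-party libraries (for example
-# the `ssdeep` bindings and `pefile`); that is an assumption, not ECS.
-----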
-
-type: long
-
-example: 16384
-
---
-
-*`threat.enrichments.indicator.file.target_path`*::
-+
---
-Target path for symlinks.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.target_path.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.enrichments.indicator.file.type`*::
-+
---
-File type (file, dir, or symlink).
-
-type: keyword
-
-example: file
-
---
-
-*`threat.enrichments.indicator.file.uid`*::
-+
---
-The user ID (UID) or security identifier (SID) of the file owner.
-
-type: keyword
-
-example: 1001
-
---
-
-*`threat.enrichments.indicator.file.x509.alternative_names`*::
-+
---
-List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
-
-type: keyword
-
-example: *.elastic.co
-
---
-
-*`threat.enrichments.indicator.file.x509.issuer.common_name`*::
-+
---
-List of common names (CN) of the issuing certificate authority.
-
-type: keyword
-
-example: Example SHA2 High Assurance Server CA
-
---
-
-*`threat.enrichments.indicator.file.x509.issuer.country`*::
-+
---
-List of country (C) codes.
-
-type: keyword
-
-example: US
-
---
-
-*`threat.enrichments.indicator.file.x509.issuer.distinguished_name`*::
-+
---
-Distinguished name (DN) of the issuing certificate authority.
-
-type: keyword
-
-example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
-
---
-
-*`threat.enrichments.indicator.file.x509.issuer.locality`*::
-+
---
-List of locality names (L).
-
-type: keyword
-
-example: Mountain View
-
---
-
-*`threat.enrichments.indicator.file.x509.issuer.organization`*::
-+
---
-List of organizations (O) of the issuing certificate authority.
-
-type: keyword
-
-example: Example Inc
-
---
-
-*`threat.enrichments.indicator.file.x509.issuer.organizational_unit`*::
-+
---
-List of organizational units (OU) of the issuing certificate authority.
-
-type: keyword
-
-example: www.example.com
-
---
-
-*`threat.enrichments.indicator.file.x509.issuer.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P).
-
-type: keyword
-
-example: California
-
---
-
-*`threat.enrichments.indicator.file.x509.not_after`*::
-+
---
-Time at which the certificate is no longer considered valid.
-
-type: date
-
-example: 2020-07-16 03:15:39+00:00
-
---
-
-*`threat.enrichments.indicator.file.x509.not_before`*::
-+
---
-Time at which the certificate is first considered valid.
-
-type: date
-
-example: 2019-08-16 01:40:25+00:00
-
---
-
-*`threat.enrichments.indicator.file.x509.public_key_algorithm`*::
-+
---
-Algorithm used to generate the public key.
-
-type: keyword
-
-example: RSA
-
---
-
-*`threat.enrichments.indicator.file.x509.public_key_curve`*::
-+
---
-The curve used by the elliptic curve public key algorithm. This is algorithm specific.
-
-type: keyword
-
-example: nistp521
-
---
-
-*`threat.enrichments.indicator.file.x509.public_key_exponent`*::
-+
---
-Exponent used to derive the public key. This is algorithm specific.
-
-type: long
-
-example: 65537
-
-Field is not indexed.
-
---
-
-*`threat.enrichments.indicator.file.x509.public_key_size`*::
-+
---
-The size of the public key space in bits.
-
-type: long
-
-example: 2048
-
---
-
-*`threat.enrichments.indicator.file.x509.serial_number`*::
-+
---
-Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and in uppercase characters.
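-
-As a hedged illustration (not part of ECS), normalizing an OpenSSL-style serial into the recommended form:
-
-[source,python]
-----
-# Hypothetical input: OpenSSL prints serials as colon-separated hex.
-serial = "55:fb:b9:c7:de:bf:09:80:9d:12:cc:aa"
-normalized = serial.replace(":", "").upper()
-print(normalized)  # 55FBB9C7DEBF09809D12CCAA
-----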
-
-type: keyword
-
-example: 55FBB9C7DEBF09809D12CCAA
-
---
-
-*`threat.enrichments.indicator.file.x509.signature_algorithm`*::
-+
---
-Identifier for certificate signature algorithm. We recommend using names found in the Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353.
-
-type: keyword
-
-example: SHA256-RSA
-
---
-
-*`threat.enrichments.indicator.file.x509.subject.common_name`*::
-+
---
-List of common names (CN) of the subject.
-
-type: keyword
-
-example: shared.global.example.net
-
---
-
-*`threat.enrichments.indicator.file.x509.subject.country`*::
-+
---
-List of country (C) codes.
-
-type: keyword
-
-example: US
-
---
-
-*`threat.enrichments.indicator.file.x509.subject.distinguished_name`*::
-+
---
-Distinguished name (DN) of the certificate subject entity.
-
-type: keyword
-
-example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
-
---
-
-*`threat.enrichments.indicator.file.x509.subject.locality`*::
-+
---
-List of locality names (L).
-
-type: keyword
-
-example: San Francisco
-
---
-
-*`threat.enrichments.indicator.file.x509.subject.organization`*::
-+
---
-List of organizations (O) of the subject.
-
-type: keyword
-
-example: Example, Inc.
-
---
-
-*`threat.enrichments.indicator.file.x509.subject.organizational_unit`*::
-+
---
-List of organizational units (OU) of the subject.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.file.x509.subject.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P).
-
-type: keyword
-
-example: California
-
---
-
-*`threat.enrichments.indicator.file.x509.version_number`*::
-+
---
-Version of x509 format.
-
-type: keyword
-
-example: 3
-
---
-
-*`threat.enrichments.indicator.first_seen`*::
-+
---
-The date and time when the intelligence source first reported sighting this indicator.
-
-type: date
-
-example: 2020-11-05T17:25:47.000Z
-
---
-
-*`threat.enrichments.indicator.geo.city_name`*::
-+
---
-City name.
-
-type: keyword
-
-example: Montreal
-
---
-
-*`threat.enrichments.indicator.geo.continent_code`*::
-+
---
-Two-letter code representing the continent's name.
-
-type: keyword
-
-example: NA
-
---
-
-*`threat.enrichments.indicator.geo.continent_name`*::
-+
---
-Name of the continent.
-
-type: keyword
-
-example: North America
-
---
-
-*`threat.enrichments.indicator.geo.country_iso_code`*::
-+
---
-Country ISO code.
-
-type: keyword
-
-example: CA
-
---
-
-*`threat.enrichments.indicator.geo.country_name`*::
-+
---
-Country name.
-
-type: keyword
-
-example: Canada
-
---
-
-*`threat.enrichments.indicator.geo.location`*::
-+
---
-Longitude and latitude.
-
-type: geo_point
-
-example: { "lon": -73.614830, "lat": 45.505918 }
-
---
-
-*`threat.enrichments.indicator.geo.name`*::
-+
---
-User-defined description of a location, at the level of granularity they care about.
-Could be the name of their data centers, the floor number, if this describes a local physical entity, city names.
-Not typically used in automated geolocation.
-
-type: keyword
-
-example: boston-dc
-
---
-
-*`threat.enrichments.indicator.geo.postal_code`*::
-+
---
-Postal code associated with the location.
-Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
-
-type: keyword
-
-example: 94040
-
---
-
-*`threat.enrichments.indicator.geo.region_iso_code`*::
-+
---
-Region ISO code.
-
-type: keyword
-
-example: CA-QC
-
---
-
-*`threat.enrichments.indicator.geo.region_name`*::
-+
---
-Region name.
-
-type: keyword
-
-example: Quebec
-
---
-
-*`threat.enrichments.indicator.geo.timezone`*::
-+
---
-The time zone of the location, such as an IANA time zone name.
-
-type: keyword
-
-example: America/Argentina/Buenos_Aires
-
---
-
-*`threat.enrichments.indicator.ip`*::
-+
---
-Identifies a threat indicator as an IP address (irrespective of direction).
-
-type: ip
-
-example: 1.2.3.4
-
---
-
-*`threat.enrichments.indicator.last_seen`*::
-+
---
-The date and time when the intelligence source last reported sighting this indicator.
-
-type: date
-
-example: 2020-11-05T17:25:47.000Z
-
---
-
-*`threat.enrichments.indicator.marking.tlp`*::
-+
---
-Traffic Light Protocol sharing markings. Recommended values are:
- * WHITE
- * GREEN
- * AMBER
- * RED
-
-type: keyword
-
-example: WHITE
-
---
-
-*`threat.enrichments.indicator.modified_at`*::
-+
---
-The date and time when the intelligence source last modified information for this indicator.
-
-type: date
-
-example: 2020-11-05T17:25:47.000Z
-
---
-
-*`threat.enrichments.indicator.port`*::
-+
---
-Identifies a threat indicator as a port number (irrespective of direction).
-
-type: long
-
-example: 443
-
---
-
-*`threat.enrichments.indicator.provider`*::
-+
---
-The name of the indicator's provider.
-
-type: keyword
-
-example: lrz_urlhaus
-
---
-
-*`threat.enrichments.indicator.reference`*::
-+
---
-Reference URL linking to additional information about this indicator.
-
-type: keyword
-
-example: https://system.example.com/indicator/0001234
-
---
-
-*`threat.enrichments.indicator.registry.data.bytes`*::
-+
---
-Original bytes written with base64 encoding.
-For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed to by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values.
-
-type: keyword
-
-example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA=
-
---
-
-*`threat.enrichments.indicator.registry.data.strings`*::
-+
---
-Content when writing string types.
-Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of strings with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g. `"1"`).
-
-type: wildcard
-
-example: ["C:\rta\red_ttp\bin\myapp.exe"]
-
---
-
-*`threat.enrichments.indicator.registry.data.type`*::
-+
---
-Standard registry type for encoding contents.
-
-type: keyword
-
-example: REG_SZ
-
---
-
-*`threat.enrichments.indicator.registry.hive`*::
-+
---
-Abbreviated name for the hive.
-
-type: keyword
-
-example: HKLM
-
---
-
-*`threat.enrichments.indicator.registry.key`*::
-+
---
-Hive-relative path of keys.
-
-type: keyword
-
-example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe
-
---
-
-*`threat.enrichments.indicator.registry.path`*::
-+
---
-Full path, including hive, key, and value.
-
-type: keyword
-
-example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger
-
---
-
-*`threat.enrichments.indicator.registry.value`*::
-+
---
-Name of the value written.
-
-type: keyword
-
-example: Debugger
-
---
-
-*`threat.enrichments.indicator.scanner_stats`*::
-+
---
-Count of AV/EDR vendors that successfully detected the malicious file or URL.
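-
-For illustration only (not part of ECS), decoding the `registry.data.bytes` example above back into `registry.data.strings`:
-
-[source,python]
-----
-import base64
-
-# A REG_MULTI_SZ value is UTF-16LE text: NUL-terminated strings plus a
-# trailing empty string. The input is the example value shown above.
-raw = base64.b64decode("ZQBuAC0AVQBTAAAAZQBuAAAAAAA=")
-strings = [s for s in raw.decode("utf-16-le").split("\x00") if s]
-print(strings)  # ['en-US', 'en']
-----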
-
-type: long
-
-example: 4
-
---
-
-*`threat.enrichments.indicator.sightings`*::
-+
---
-Number of times this indicator was observed conducting threat activity.
-
-type: long
-
-example: 20
-
---
-
-*`threat.enrichments.indicator.type`*::
-+
---
-Type of indicator as represented by Cyber Observable in STIX 2.0. Recommended values:
- * autonomous-system
- * artifact
- * directory
- * domain-name
- * email-addr
- * file
- * ipv4-addr
- * ipv6-addr
- * mac-addr
- * mutex
- * port
- * process
- * software
- * url
- * user-account
- * windows-registry-key
- * x509-certificate
-
-type: keyword
-
-example: ipv4-addr
-
---
-
-*`threat.enrichments.indicator.url.domain`*::
-+
---
-Domain of the URL, such as "www.elastic.co".
-In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field.
-If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field.
-
-type: keyword
-
-example: www.elastic.co
-
---
-
-*`threat.enrichments.indicator.url.extension`*::
-+
---
-The field contains the file extension from the original request URL, excluding the leading dot.
-The file extension is only set if it exists, as not every URL has a file extension.
-The leading period must not be included. For example, the value must be "png", not ".png".
-Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz").
-
-type: keyword
-
-example: png
-
---
-
-*`threat.enrichments.indicator.url.fragment`*::
-+
---
-Portion of the URL after the `#`, such as "top".
-The `#` is not part of the fragment.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.url.full`*::
-+
---
-If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source.
-
-type: wildcard
-
-example: https://www.elastic.co:443/search?q=elasticsearch#top
-
---
-
-*`threat.enrichments.indicator.url.full.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.enrichments.indicator.url.original`*::
-+
---
-Unmodified original URL as seen in the event source.
-Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path.
-This field is meant to represent the URL as it was observed, complete or not.
-
-type: wildcard
-
-example: https://www.elastic.co:443/search?q=elasticsearch#top or /search?q=elasticsearch
-
---
-
-*`threat.enrichments.indicator.url.original.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.enrichments.indicator.url.password`*::
-+
---
-Password of the request.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.url.path`*::
-+
---
-Path of the request, such as "/search".
-
-type: wildcard
-
---
-
-*`threat.enrichments.indicator.url.port`*::
-+
---
-Port of the request, such as 443.
-
-type: long
-
-example: 443
-
-format: string
-
---
-
-*`threat.enrichments.indicator.url.query`*::
-+
---
-The query field describes the query string of the request, such as "q=elasticsearch".
-The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases.
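-
-For illustration only (not part of ECS), a minimal Python sketch splitting the example URL above into the `url.*` fields in this section:
-
-[source,python]
-----
-from urllib.parse import urlsplit
-
-original = "https://www.elastic.co:443/search?q=elasticsearch#top"
-parts = urlsplit(original)
-url_fields = {
-    "url.original": original,        # unmodified input
-    "url.scheme": parts.scheme,      # "https"
-    "url.domain": parts.hostname,    # "www.elastic.co"
-    "url.port": parts.port,          # 443
-    "url.path": parts.path,          # "/search"
-    "url.query": parts.query,        # "q=elasticsearch"
-    "url.fragment": parts.fragment,  # "top"
-}
-----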
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.url.registered_domain`*::
-+
---
-The highest registered URL domain, stripped of the subdomain.
-For example, the registered domain for "foo.example.com" is "example.com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
-
-type: keyword
-
-example: example.com
-
---
-
-*`threat.enrichments.indicator.url.scheme`*::
-+
---
-Scheme of the request, such as "https".
-Note: The `:` is not part of the scheme.
-
-type: keyword
-
-example: https
-
---
-
-*`threat.enrichments.indicator.url.subdomain`*::
-+
---
-The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain.
-For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
-
-type: keyword
-
-example: east
-
---
-
-*`threat.enrichments.indicator.url.top_level_domain`*::
-+
---
-The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
-
-type: keyword
-
-example: co.uk
-
---
-
-*`threat.enrichments.indicator.url.username`*::
-+
---
-Username of the request.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.x509.alternative_names`*::
-+
---
-List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
-
-type: keyword
-
-example: *.elastic.co
-
---
-
-*`threat.enrichments.indicator.x509.issuer.common_name`*::
-+
---
-List of common names (CN) of the issuing certificate authority.
-
-type: keyword
-
-example: Example SHA2 High Assurance Server CA
-
---
-
-*`threat.enrichments.indicator.x509.issuer.country`*::
-+
---
-List of country (C) codes.
-
-type: keyword
-
-example: US
-
---
-
-*`threat.enrichments.indicator.x509.issuer.distinguished_name`*::
-+
---
-Distinguished name (DN) of the issuing certificate authority.
-
-type: keyword
-
-example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
-
---
-
-*`threat.enrichments.indicator.x509.issuer.locality`*::
-+
---
-List of locality names (L).
-
-type: keyword
-
-example: Mountain View
-
---
-
-*`threat.enrichments.indicator.x509.issuer.organization`*::
-+
---
-List of organizations (O) of the issuing certificate authority.
-
-type: keyword
-
-example: Example Inc
-
---
-
-*`threat.enrichments.indicator.x509.issuer.organizational_unit`*::
-+
---
-List of organizational units (OU) of the issuing certificate authority.
-
-type: keyword
-
-example: www.example.com
-
---
-
-*`threat.enrichments.indicator.x509.issuer.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P).
-
-type: keyword
-
-example: California
-
---
-
-*`threat.enrichments.indicator.x509.not_after`*::
-+
---
-Time at which the certificate is no longer considered valid.
-
-type: date
-
-example: 2020-07-16 03:15:39+00:00
-
---
-
-*`threat.enrichments.indicator.x509.not_before`*::
-+
---
-Time at which the certificate is first considered valid.
-
-type: date
-
-example: 2019-08-16 01:40:25+00:00
-
---
-
-*`threat.enrichments.indicator.x509.public_key_algorithm`*::
-+
---
-Algorithm used to generate the public key.
-
-type: keyword
-
-example: RSA
-
---
-
-*`threat.enrichments.indicator.x509.public_key_curve`*::
-+
---
-The curve used by the elliptic curve public key algorithm. This is algorithm specific.
-
-type: keyword
-
-example: nistp521
-
---
-
-*`threat.enrichments.indicator.x509.public_key_exponent`*::
-+
---
-Exponent used to derive the public key. This is algorithm specific.
-
-type: long
-
-example: 65537
-
-Field is not indexed.
-
---
-
-*`threat.enrichments.indicator.x509.public_key_size`*::
-+
---
-The size of the public key space in bits.
-
-type: long
-
-example: 2048
-
---
-
-*`threat.enrichments.indicator.x509.serial_number`*::
-+
---
-Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and in uppercase characters.
-
-type: keyword
-
-example: 55FBB9C7DEBF09809D12CCAA
-
---
-
-*`threat.enrichments.indicator.x509.signature_algorithm`*::
-+
---
-Identifier for certificate signature algorithm. We recommend using names found in the Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353.
-
-type: keyword
-
-example: SHA256-RSA
-
---
-
-*`threat.enrichments.indicator.x509.subject.common_name`*::
-+
---
-List of common names (CN) of the subject.
-
-type: keyword
-
-example: shared.global.example.net
-
---
-
-*`threat.enrichments.indicator.x509.subject.country`*::
-+
---
-List of country (C) codes.
-
-type: keyword
-
-example: US
-
---
-
-*`threat.enrichments.indicator.x509.subject.distinguished_name`*::
-+
---
-Distinguished name (DN) of the certificate subject entity.
-
-type: keyword
-
-example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
-
---
-
-*`threat.enrichments.indicator.x509.subject.locality`*::
-+
---
-List of locality names (L).
-
-type: keyword
-
-example: San Francisco
-
---
-
-*`threat.enrichments.indicator.x509.subject.organization`*::
-+
---
-List of organizations (O) of the subject.
-
-type: keyword
-
-example: Example, Inc.
-
---
-
-*`threat.enrichments.indicator.x509.subject.organizational_unit`*::
-+
---
-List of organizational units (OU) of the subject.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.x509.subject.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P).
-
-type: keyword
-
-example: California
-
---
-
-*`threat.enrichments.indicator.x509.version_number`*::
-+
---
-Version of x509 format.
-
-type: keyword
-
-example: 3
-
---
-
-*`threat.enrichments.matched.atomic`*::
-+
---
-Identifies the atomic indicator value that matched a local environment endpoint or network event.
-
-type: keyword
-
-example: bad-domain.com
-
---
-
-*`threat.enrichments.matched.field`*::
-+
---
-Identifies the field of the atomic indicator that matched a local environment endpoint or network event.
-
-type: keyword
-
-example: file.hash.sha256
-
---
-
-*`threat.enrichments.matched.id`*::
-+
---
-Identifies the _id of the indicator document enriching the event.
-
-type: keyword
-
-example: ff93aee5-86a1-4a61-b0e6-0cdc313d01b5
-
---
-
-*`threat.enrichments.matched.index`*::
-+
---
-Identifies the _index of the indicator document enriching the event.
-
-type: keyword
-
-example: filebeat-8.0.0-2021.05.23-000011
-
---
-
-*`threat.enrichments.matched.type`*::
-+
---
-Identifies the type of match that caused the event to be enriched with the given indicator.
-
-type: keyword
-
-example: indicator_match_rule
-
---
-
-*`threat.framework`*::
-+
---
-Name of the threat framework used to further categorize and classify the tactic and technique of the reported threat. Framework classification can be provided by detecting systems, evaluated at ingest time, or retrospectively tagged to events.
-
-type: keyword
-
-example: MITRE ATT&CK
-
---
-
-*`threat.group.alias`*::
-+
---
-The alias(es) of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group alias(es).
-
-type: keyword
-
-example: [ "Magecart Group 6" ]
-
---
-
-*`threat.group.id`*::
-+
---
-The id of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group id.
-
-type: keyword
-
-example: G0037
-
---
-
-*`threat.group.name`*::
-+
---
-The name of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group name.
-
-type: keyword
-
-example: FIN6
-
---
-
-*`threat.group.reference`*::
-+
---
-The reference URL of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group reference URL.
-
-type: keyword
-
-example: https://attack.mitre.org/groups/G0037/
-
---
-
-*`threat.indicator.as.number`*::
-+
---
-Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet.
-
-type: long
-
-example: 15169
-
---
-
-*`threat.indicator.as.organization.name`*::
-+
---
-Organization name.
-
-type: keyword
-
-example: Google LLC
-
---
-
-*`threat.indicator.as.organization.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.indicator.confidence`*::
-+
---
-Identifies the vendor-neutral confidence rating using the None/Low/Medium/High scale defined in Appendix A of the STIX 2.1 framework. Vendor-specific confidence scales may be added as custom fields.
-Expected values are:
- * Not Specified
- * None
- * Low
- * Medium
- * High
-
-type: keyword
-
-example: Medium
-
---
-
-*`threat.indicator.description`*::
-+
---
-Describes the type of action conducted by the threat.
-
-type: keyword
-
-example: IP x.x.x.x was observed delivering the Angler EK.
-
---
-
-*`threat.indicator.email.address`*::
-+
---
-Identifies a threat indicator as an email address (irrespective of direction).
-
-type: keyword
-
-example: phish@example.com
-
---
-
-*`threat.indicator.file.accessed`*::
-+
---
-Last time the file was accessed.
-Note that not all filesystems keep track of access time.
-
-type: date
-
---
-
-*`threat.indicator.file.attributes`*::
-+
---
-Array of file attributes.
-Attribute names will vary by platform.
Here's a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write. - -type: keyword - -example: ["readonly", "system"] - --- - -*`threat.indicator.file.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`threat.indicator.file.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - --- - -*`threat.indicator.file.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`threat.indicator.file.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - --- - -*`threat.indicator.file.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - --- - -*`threat.indicator.file.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`threat.indicator.file.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`threat.indicator.file.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - --- - -*`threat.indicator.file.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - --- - -*`threat.indicator.file.created`*:: -+ --- -File creation time. -Note that not all filesystems store the creation time. - -type: date - --- - -*`threat.indicator.file.ctime`*:: -+ --- -Last time the file attributes or metadata changed. -Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file. - -type: date - --- - -*`threat.indicator.file.device`*:: -+ --- -Device that is the source of the file. - -type: keyword - -example: sda - --- - -*`threat.indicator.file.directory`*:: -+ --- -Directory where the file is located. It should include the drive letter, when appropriate. - -type: keyword - -example: /home/alice - --- - -*`threat.indicator.file.drive_letter`*:: -+ --- -Drive letter where the file is located. This field is only relevant on Windows. -The value should be uppercase, and not include the colon. - -type: keyword - -example: C - --- - -*`threat.indicator.file.elf.architecture`*:: -+ --- -Machine architecture of the ELF file. 
- -type: keyword - -example: x86-64 - --- - -*`threat.indicator.file.elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`threat.indicator.file.elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`threat.indicator.file.elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`threat.indicator.file.elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`threat.indicator.file.elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`threat.indicator.file.elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`threat.indicator.file.elf.header.data`*:: -+ --- -Data table of the ELF header. - -type: keyword - --- - -*`threat.indicator.file.elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`threat.indicator.file.elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`threat.indicator.file.elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`threat.indicator.file.elf.header.type`*:: -+ --- -Header type of the ELF file. - -type: keyword - --- - -*`threat.indicator.file.elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`threat.indicator.file.elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`threat.indicator.file.elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`threat.indicator.file.elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`threat.indicator.file.elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`threat.indicator.file.elf.sections.flags`*:: -+ --- -ELF Section List flags. - -type: keyword - --- - -*`threat.indicator.file.elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`threat.indicator.file.elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`threat.indicator.file.elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`threat.indicator.file.elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`threat.indicator.file.elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`threat.indicator.file.elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`threat.indicator.file.elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. - -type: nested - --- - -*`threat.indicator.file.elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`threat.indicator.file.elf.segments.type`*:: -+ --- -ELF object segment type. 
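- -type: keyword - ---

The `elf.sections.chi2` and `elf.sections.entropy` values above are statistics computed over each section's raw bytes. As a rough, hedged sketch only (producers may round or scale the result differently, since the fields are mapped as `long`), a Shannon entropy calculation in Go might look like this:

[source,go]
----
package main

import (
	"fmt"
	"math"
)

// shannonEntropy returns the Shannon entropy of data in bits per byte,
// from 0.0 (constant data) up to 8.0 (uniformly random data).
func shannonEntropy(data []byte) float64 {
	if len(data) == 0 {
		return 0
	}
	var counts [256]int
	for _, b := range data {
		counts[b]++
	}
	entropy, n := 0.0, float64(len(data))
	for _, c := range counts {
		if c == 0 {
			continue
		}
		p := float64(c) / n
		entropy -= p * math.Log2(p)
	}
	return entropy
}

func main() {
	fmt.Printf("%.2f\n", shannonEntropy([]byte("hello world"))) // low entropy for plain text
}
----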
- -*`threat.indicator.file.elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`threat.indicator.file.elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -*`threat.indicator.file.extension`*:: -+ --- -File extension, excluding the leading dot. -Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). - -type: keyword - -example: png - --- - -*`threat.indicator.file.fork_name`*:: -+ --- -A fork is additional data associated with a filesystem object. -On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist. -On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name. - -type: keyword - -example: Zone.Identifier - --- - -*`threat.indicator.file.gid`*:: -+ --- -Primary group ID (GID) of the file. - -type: keyword - -example: 1001 - --- - -*`threat.indicator.file.group`*:: -+ --- -Primary group name of the file. - -type: keyword - -example: alice - --- - -*`threat.indicator.file.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - --- - -*`threat.indicator.file.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - --- - -*`threat.indicator.file.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - --- - -*`threat.indicator.file.hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - --- - -*`threat.indicator.file.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`threat.indicator.file.inode`*:: -+ --- -Inode representing the file in the filesystem. - -type: keyword - -example: 256383 - --- - -*`threat.indicator.file.mime_type`*:: -+ --- -MIME type should identify the format of the file or stream of bytes using https://www.iana.org/assignments/media-types/media-types.xhtml[IANA official types], where possible. When more than one type is applicable, the most specific type should be used. - -type: keyword - --- - -*`threat.indicator.file.mode`*:: -+ --- -Mode of the file in octal representation. - -type: keyword - -example: 0640 - --- - -*`threat.indicator.file.mtime`*:: -+ --- -Last time the file content was modified. - -type: date - --- - -*`threat.indicator.file.name`*:: -+ --- -Name of the file including the extension, without the directory. - -type: keyword - -example: example.png - --- - -*`threat.indicator.file.owner`*:: -+ --- -File owner's username. - -type: keyword - -example: alice - --- - -*`threat.indicator.file.path`*:: -+ --- -Full path to the file, including the file name. It should include the drive letter, when appropriate. - -type: keyword - -example: /home/alice/example.png - --- - -*`threat.indicator.file.path.text`*:: -+ --- -type: match_only_text - --- - -*`threat.indicator.file.pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - --- - -*`threat.indicator.file.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time.
- -type: keyword - -example: Microsoft Corporation - --- - -*`threat.indicator.file.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - --- - -*`threat.indicator.file.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - --- - -*`threat.indicator.file.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - --- - -*`threat.indicator.file.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - --- - -*`threat.indicator.file.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - --- - -*`threat.indicator.file.size`*:: -+ --- -File size in bytes. -Only relevant when `file.type` is "file". - -type: long - -example: 16384 - --- - -*`threat.indicator.file.target_path`*:: -+ --- -Target path for symlinks. - -type: keyword - --- - -*`threat.indicator.file.target_path.text`*:: -+ --- -type: match_only_text - --- - -*`threat.indicator.file.type`*:: -+ --- -File type (file, dir, or symlink). - -type: keyword - -example: file - --- - -*`threat.indicator.file.uid`*:: -+ --- -The user ID (UID) or security identifier (SID) of the file owner. - -type: keyword - -example: 1001 - --- - -*`threat.indicator.file.x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. - -type: keyword - -example: *.elastic.co - --- - -*`threat.indicator.file.x509.issuer.common_name`*:: -+ --- -List of common name (CN) of issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - --- - -*`threat.indicator.file.x509.issuer.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - --- - -*`threat.indicator.file.x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - --- - -*`threat.indicator.file.x509.issuer.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: Mountain View - --- - -*`threat.indicator.file.x509.issuer.organization`*:: -+ --- -List of organizations (O) of issuing certificate authority. - -type: keyword - -example: Example Inc - --- - -*`threat.indicator.file.x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of issuing certificate authority. - -type: keyword - -example: www.example.com - --- - -*`threat.indicator.file.x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`threat.indicator.file.x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. 
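- -type: date - -example: 2020-07-16 03:15:39+00:00 - ---

Most of the `x509.*` values in this section fall out of standard certificate parsing. A minimal Go sketch using only the standard `crypto/x509` package, showing one plausible way to derive several of them, including the colon-free uppercase form recommended for `x509.serial_number`; the DER input and the exact field naming are assumptions of the example, not something these definitions mandate:

[source,go]
----
package main

import (
	"crypto/x509"
	"fmt"
	"strings"
	"time"
)

// x509Fields derives a few of the x509.* values from a DER-encoded certificate.
func x509Fields(der []byte) (map[string]string, error) {
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		return nil, err
	}
	return map[string]string{
		// Uppercase hex without colons, per the serial_number guidance.
		"x509.serial_number": strings.ToUpper(cert.SerialNumber.Text(16)),
		// Go's String() forms match the recommended algorithm names, e.g. "SHA256-RSA".
		"x509.signature_algorithm":        cert.SignatureAlgorithm.String(),
		"x509.public_key_algorithm":       cert.PublicKeyAlgorithm.String(),
		"x509.issuer.distinguished_name":  cert.Issuer.String(),
		"x509.subject.distinguished_name": cert.Subject.String(),
		"x509.not_before":                 cert.NotBefore.UTC().Format(time.RFC3339),
		"x509.not_after":                  cert.NotAfter.UTC().Format(time.RFC3339),
	}, nil
}

func main() {
	fmt.Println("pass DER bytes from your own capture to x509Fields")
}
----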
- -*`threat.indicator.file.x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. - -type: date - -example: 2019-08-16 01:40:25+00:00 - --- - -*`threat.indicator.file.x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - --- - -*`threat.indicator.file.x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - --- - -*`threat.indicator.file.x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -Field is not indexed. - --- - -*`threat.indicator.file.x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - --- - -*`threat.indicator.file.x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. - -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - --- - -*`threat.indicator.file.x509.signature_algorithm`*:: -+ --- -Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - --- - -*`threat.indicator.file.x509.subject.common_name`*:: -+ --- -List of common names (CN) of subject. - -type: keyword - -example: shared.global.example.net - --- - -*`threat.indicator.file.x509.subject.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - --- - -*`threat.indicator.file.x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity. - -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - --- - -*`threat.indicator.file.x509.subject.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: San Francisco - --- - -*`threat.indicator.file.x509.subject.organization`*:: -+ --- -List of organizations (O) of subject. - -type: keyword - -example: Example, Inc. - --- - -*`threat.indicator.file.x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of subject. - -type: keyword - --- - -*`threat.indicator.file.x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`threat.indicator.file.x509.version_number`*:: -+ --- -Version of x509 format. - -type: keyword - -example: 3 - --- - -*`threat.indicator.first_seen`*:: -+ --- -The date and time when intelligence source first reported sighting this indicator. - -type: date - -example: 2020-11-05T17:25:47.000Z - --- - -*`threat.indicator.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - --- - -*`threat.indicator.geo.continent_code`*:: -+ --- -Two-letter code representing the continent's name. - -type: keyword - -example: NA - --- - -*`threat.indicator.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - --- - -*`threat.indicator.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - --- - -*`threat.indicator.geo.country_name`*:: -+ --- -Country name.
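- -type: keyword - -example: Canada - ---

The `geo.*` values in this group are typically produced by a geo-IP lookup rather than shipped by the indicator source itself. Purely as a hedged illustration (the third-party `github.com/oschwald/geoip2-golang` reader and a local GeoLite2 database are assumptions of this sketch, not implied by the field definitions):

[source,go]
----
package main

import (
	"fmt"
	"log"
	"net"

	geoip2 "github.com/oschwald/geoip2-golang"
)

func main() {
	// The database path is an assumption of this sketch.
	db, err := geoip2.Open("GeoLite2-City.mmdb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rec, err := db.City(net.ParseIP("81.2.69.142"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("geo.city_name:", rec.City.Names["en"])
	fmt.Println("geo.country_iso_code:", rec.Country.IsoCode)
	fmt.Println("geo.continent_code:", rec.Continent.Code)
	fmt.Println("geo.location:", rec.Location.Latitude, rec.Location.Longitude)
	fmt.Println("geo.timezone:", rec.Location.TimeZone)
}
----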
- -*`threat.indicator.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - --- - -*`threat.indicator.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - --- - -*`threat.indicator.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`threat.indicator.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - --- - -*`threat.indicator.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - --- - -*`threat.indicator.geo.timezone`*:: -+ --- -The time zone of the location, such as the IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`threat.indicator.ip`*:: -+ --- -Identifies a threat indicator as an IP address (irrespective of direction). - -type: ip - -example: 1.2.3.4 - --- - -*`threat.indicator.last_seen`*:: -+ --- -The date and time when intelligence source last reported sighting this indicator. - -type: date - -example: 2020-11-05T17:25:47.000Z - --- - -*`threat.indicator.marking.tlp`*:: -+ --- -Traffic Light Protocol sharing markings. -Recommended values are: - * WHITE - * GREEN - * AMBER - * RED - -type: keyword - -example: WHITE - --- - -*`threat.indicator.modified_at`*:: -+ --- -The date and time when intelligence source last modified information for this indicator. - -type: date - -example: 2020-11-05T17:25:47.000Z - --- - -*`threat.indicator.port`*:: -+ --- -Identifies a threat indicator as a port number (irrespective of direction). - -type: long - -example: 443 - --- - -*`threat.indicator.provider`*:: -+ --- -The name of the indicator's provider. - -type: keyword - -example: lrz_urlhaus - --- - -*`threat.indicator.reference`*:: -+ --- -Reference URL linking to additional information about this indicator. - -type: keyword - -example: https://system.example.com/indicator/0001234 - --- - -*`threat.indicator.registry.data.bytes`*:: -+ --- -Original bytes written with base64 encoding. -For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values. - -type: keyword - -example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA= - --- - -*`threat.indicator.registry.data.strings`*:: -+ --- -Content when writing string types. -Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of string with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g. `"1"`). - -type: wildcard - -example: ["C:\rta\red_ttp\bin\myapp.exe"] - --- - -*`threat.indicator.registry.data.type`*:: -+ --- -Standard registry type for encoding contents. - -type: keyword - -example: REG_SZ - --- - -*`threat.indicator.registry.hive`*:: -+ --- -Abbreviated name for the hive.
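- -type: keyword - -example: HKLM - ---

The registry fields that follow are related by construction: `registry.path` is the full value path, `registry.hive` the abbreviated hive, `registry.key` the hive-relative key, and `registry.value` the value name. A hypothetical Go helper (illustrative only, not part of any shipped module) that splits a full path into those parts:

[source,go]
----
package main

import (
	"fmt"
	"strings"
)

// splitRegistryPath decomposes a full registry value path, e.g.
// HKLM\SOFTWARE\...\winword.exe\Debugger, into hive, key, and value name.
func splitRegistryPath(path string) (hive, key, value string) {
	parts := strings.Split(path, `\`)
	if len(parts) < 2 {
		return path, "", ""
	}
	hive = parts[0]                                // e.g. "HKLM"
	value = parts[len(parts)-1]                    // e.g. "Debugger"
	key = strings.Join(parts[1:len(parts)-1], `\`) // hive-relative key
	return hive, key, value
}

func main() {
	h, k, v := splitRegistryPath(`HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger`)
	fmt.Println(h, "|", k, "|", v)
}
----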
- -*`threat.indicator.registry.key`*:: -+ --- -Hive-relative path of keys. - -type: keyword - -example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe - --- - -*`threat.indicator.registry.path`*:: -+ --- -Full path, including hive, key, and value. - -type: keyword - -example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger - --- - -*`threat.indicator.registry.value`*:: -+ --- -Name of the value written. - -type: keyword - -example: Debugger - --- - -*`threat.indicator.scanner_stats`*:: -+ --- -Count of AV/EDR vendors that successfully detected malicious file or URL. - -type: long - -example: 4 - --- - -*`threat.indicator.sightings`*:: -+ --- -Number of times this indicator was observed conducting threat activity. - -type: long - -example: 20 - --- - -*`threat.indicator.type`*:: -+ --- -Type of indicator as represented by Cyber Observable in STIX 2.0. -Recommended values: - * autonomous-system - * artifact - * directory - * domain-name - * email-addr - * file - * ipv4-addr - * ipv6-addr - * mac-addr - * mutex - * port - * process - * software - * url - * user-account - * windows-registry-key - * x509-certificate - -type: keyword - -example: ipv4-addr - --- - -*`threat.indicator.url.domain`*:: -+ --- -Domain of the url, such as "www.elastic.co". -In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field. -If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field. - -type: keyword - -example: www.elastic.co - --- - -*`threat.indicator.url.extension`*:: -+ --- -The field contains the file extension from the original request url, excluding the leading dot. -The file extension is only set if it exists, as not every url has a file extension. -The leading period must not be included. For example, the value must be "png", not ".png". -Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). - -type: keyword - -example: png - --- - -*`threat.indicator.url.fragment`*:: -+ --- -Portion of the url after the `#`, such as "top". -The `#` is not part of the fragment. - -type: keyword - --- - -*`threat.indicator.url.full`*:: -+ --- -If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source. - -type: wildcard - -example: https://www.elastic.co:443/search?q=elasticsearch#top - --- - -*`threat.indicator.url.full.text`*:: -+ --- -type: match_only_text - --- - -*`threat.indicator.url.original`*:: -+ --- -Unmodified original url as seen in the event source. -Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path. -This field is meant to represent the URL as it was observed, complete or not. - -type: wildcard - -example: https://www.elastic.co:443/search?q=elasticsearch#top or /search?q=elasticsearch - --- - -*`threat.indicator.url.original.text`*:: -+ --- -type: match_only_text - --- - -*`threat.indicator.url.password`*:: -+ --- -Password of the request. - -type: keyword - --- - -*`threat.indicator.url.path`*:: -+ --- -Path of the request, such as "/search". - -type: wildcard - --- - -*`threat.indicator.url.port`*:: -+ --- -Port of the request, such as 443.
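- -type: long - -example: 443 - -format: string - ---

Several of the URL fields below (`url.registered_domain`, `url.subdomain`, `url.top_level_domain`) depend on the public suffix list rather than simple label counting, as their descriptions note. One way to compute them, assuming the `golang.org/x/net/publicsuffix` package (an assumption of this sketch, not a requirement of the schema):

[source,go]
----
package main

import (
	"fmt"
	"log"
	"strings"

	"golang.org/x/net/publicsuffix"
)

func main() {
	domain := "sub2.sub1.example.com"

	// Effective TLD (public suffix), e.g. "com" or "co.uk".
	etld, _ := publicsuffix.PublicSuffix(domain)

	// Registered domain = eTLD plus one label, e.g. "example.com".
	registered, err := publicsuffix.EffectiveTLDPlusOne(domain)
	if err != nil {
		log.Fatal(err)
	}

	// Subdomain = whatever remains to the left, without a trailing period.
	sub := strings.TrimSuffix(strings.TrimSuffix(domain, registered), ".")

	fmt.Println("url.top_level_domain:", etld)        // com
	fmt.Println("url.registered_domain:", registered) // example.com
	fmt.Println("url.subdomain:", sub)                // sub2.sub1
}
----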
- -*`threat.indicator.url.query`*:: -+ --- -The query field describes the query string of the request, such as "q=elasticsearch". -The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases. - -type: keyword - --- - -*`threat.indicator.url.registered_domain`*:: -+ --- -The highest registered url domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - --- - -*`threat.indicator.url.scheme`*:: -+ --- -Scheme of the request, such as "https". -Note: The `:` is not part of the scheme. - -type: keyword - -example: https - --- - -*`threat.indicator.url.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`threat.indicator.url.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - --- - -*`threat.indicator.url.username`*:: -+ --- -Username of the request. - -type: keyword - --- - -*`threat.indicator.x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. - -type: keyword - -example: *.elastic.co - --- - -*`threat.indicator.x509.issuer.common_name`*:: -+ --- -List of common name (CN) of issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - --- - -*`threat.indicator.x509.issuer.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - --- - -*`threat.indicator.x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - --- - -*`threat.indicator.x509.issuer.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: Mountain View - --- - -*`threat.indicator.x509.issuer.organization`*:: -+ --- -List of organizations (O) of issuing certificate authority. - -type: keyword - -example: Example Inc - --- - -*`threat.indicator.x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of issuing certificate authority.
- -type: keyword - -example: www.example.com - --- - -*`threat.indicator.x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`threat.indicator.x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - --- - -*`threat.indicator.x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. - -type: date - -example: 2019-08-16 01:40:25+00:00 - --- - -*`threat.indicator.x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - --- - -*`threat.indicator.x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - --- - -*`threat.indicator.x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -Field is not indexed. - --- - -*`threat.indicator.x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - --- - -*`threat.indicator.x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. - -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - --- - -*`threat.indicator.x509.signature_algorithm`*:: -+ --- -Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - --- - -*`threat.indicator.x509.subject.common_name`*:: -+ --- -List of common names (CN) of subject. - -type: keyword - -example: shared.global.example.net - --- - -*`threat.indicator.x509.subject.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - --- - -*`threat.indicator.x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity. - -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - --- - -*`threat.indicator.x509.subject.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: San Francisco - --- - -*`threat.indicator.x509.subject.organization`*:: -+ --- -List of organizations (O) of subject. - -type: keyword - -example: Example, Inc. - --- - -*`threat.indicator.x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of subject. - -type: keyword - --- - -*`threat.indicator.x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`threat.indicator.x509.version_number`*:: -+ --- -Version of x509 format. - -type: keyword - -example: 3 - --- - -*`threat.software.alias`*:: -+ --- -The alias(es) of the software for a set of related intrusion activity that are tracked by a common name in the security community. -While not required, you can use a MITRE ATT&CK® associated software description. - -type: keyword - -example: [ "X-Agent" ] - --- - -*`threat.software.id`*:: -+ --- -The id of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. -While not required, you can use a MITRE ATT&CK® software id.
- -type: keyword - -example: S0552 - --- - -*`threat.software.name`*:: -+ --- -The name of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. -While not required, you can use a MITRE ATT&CK® software name. - -type: keyword - -example: AdFind - --- - -*`threat.software.platforms`*:: -+ --- -The platforms of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. -Recommended values: - * AWS - * Azure - * Azure AD - * GCP - * Linux - * macOS - * Network - * Office 365 - * SaaS - * Windows - -While not required, you can use MITRE ATT&CK® software platforms. - -type: keyword - -example: [ "Windows" ] - --- - -*`threat.software.reference`*:: -+ --- -The reference URL of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. -While not required, you can use a MITRE ATT&CK® software reference URL. - -type: keyword - -example: https://attack.mitre.org/software/S0552/ - --- - -*`threat.software.type`*:: -+ --- -The type of software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. -Recommended values: - * Malware - * Tool - -While not required, you can use a MITRE ATT&CK® software type. - -type: keyword - -example: Tool - --- - -*`threat.tactic.id`*:: -+ --- -The id of tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example. (ex. https://attack.mitre.org/tactics/TA0002/) - -type: keyword - -example: TA0002 - --- - -*`threat.tactic.name`*:: -+ --- -Name of the type of tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example. (ex. https://attack.mitre.org/tactics/TA0002/) - -type: keyword - -example: Execution - --- - -*`threat.tactic.reference`*:: -+ --- -The reference url of tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example. (ex. https://attack.mitre.org/tactics/TA0002/) - -type: keyword - -example: https://attack.mitre.org/tactics/TA0002/ - --- - -*`threat.technique.id`*:: -+ --- -The id of technique used by this threat. You can use a MITRE ATT&CK® technique, for example. (ex. https://attack.mitre.org/techniques/T1059/) - -type: keyword - -example: T1059 - --- - -*`threat.technique.name`*:: -+ --- -The name of technique used by this threat. You can use a MITRE ATT&CK® technique, for example. (ex. https://attack.mitre.org/techniques/T1059/) - -type: keyword - -example: Command and Scripting Interpreter - --- - -*`threat.technique.name.text`*:: -+ --- -type: match_only_text - --- - -*`threat.technique.reference`*:: -+ --- -The reference url of technique used by this threat. You can use a MITRE ATT&CK® technique, for example. (ex. https://attack.mitre.org/techniques/T1059/) - -type: keyword - -example: https://attack.mitre.org/techniques/T1059/ - --- - -*`threat.technique.subtechnique.id`*:: -+ --- -The full id of subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example. (ex. https://attack.mitre.org/techniques/T1059/001/) - -type: keyword - -example: T1059.001 - --- - -*`threat.technique.subtechnique.name`*:: -+ --- -The name of subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example. (ex. https://attack.mitre.org/techniques/T1059/001/) - -type: keyword - -example: PowerShell - --- - -*`threat.technique.subtechnique.name.text`*:: -+ --- -type: match_only_text - --- - -*`threat.technique.subtechnique.reference`*:: -+ --- -The reference url of subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example. (ex. 
https://attack.mitre.org/techniques/T1059/001/) - -type: keyword - -example: https://attack.mitre.org/techniques/T1059/001/ - --- - -[float] -=== tls - -Fields related to a TLS connection. These fields focus on the TLS protocol itself and intentionally avoid in-depth analysis of the related x.509 certificate files. - - -*`tls.cipher`*:: -+ --- -String indicating the cipher used during the current connection. - -type: keyword - -example: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 - --- - -*`tls.client.certificate`*:: -+ --- -PEM-encoded stand-alone certificate offered by the client. This is usually mutually-exclusive of `client.certificate_chain` since this value also exists in that list. - -type: keyword - -example: MII... - --- - -*`tls.client.certificate_chain`*:: -+ --- -Array of PEM-encoded certificates that make up the certificate chain offered by the client. This is usually mutually-exclusive of `client.certificate` since that value should be the first certificate in the chain. - -type: keyword - -example: ["MII...", "MII..."] - --- - -*`tls.client.hash.md5`*:: -+ --- -Certificate fingerprint using the MD5 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 0F76C7F2C55BFD7D8E8B8F4BFBF0C9EC - --- - -*`tls.client.hash.sha1`*:: -+ --- -Certificate fingerprint using the SHA1 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 9E393D93138888D288266C2D915214D1D1CCEB2A - --- - -*`tls.client.hash.sha256`*:: -+ --- -Certificate fingerprint using the SHA256 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 0687F666A054EF17A08E2F2162EAB4CBC0D265E1D7875BE74BF3C712CA92DAF0 - --- - -*`tls.client.issuer`*:: -+ --- -Distinguished name of subject of the issuer of the x.509 certificate presented by the client. - -type: keyword - -example: CN=Example Root CA, OU=Infrastructure Team, DC=example, DC=com - --- - -*`tls.client.ja3`*:: -+ --- -A hash that identifies clients based on how they perform an SSL/TLS handshake. - -type: keyword - -example: d4e5b18d6b55c71272893221c96ba240 - --- - -*`tls.client.not_after`*:: -+ --- -Date/Time indicating when client certificate is no longer considered valid. - -type: date - -example: 2021-01-01T00:00:00.000Z - --- - -*`tls.client.not_before`*:: -+ --- -Date/Time indicating when client certificate is first considered valid. - -type: date - -example: 1970-01-01T00:00:00.000Z - --- - -*`tls.client.server_name`*:: -+ --- -Also called an SNI, this tells the server the hostname to which the client is attempting to connect. When this value is available, it should get copied to `destination.domain`. - -type: keyword - -example: www.elastic.co - --- - -*`tls.client.subject`*:: -+ --- -Distinguished name of subject of the x.509 certificate presented by the client. - -type: keyword - -example: CN=myclient, OU=Documentation Team, DC=example, DC=com - --- - -*`tls.client.supported_ciphers`*:: -+ --- -Array of ciphers offered by the client during the client hello.
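- -type: keyword - -example: ["TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "..."] - ---

The `tls.client.hash.*` values above (and the matching `tls.server.hash.*` values later in this section) are digests of the DER-encoded certificate rendered as uppercase hex. A minimal Go sketch of that convention; the DER bytes are assumed to come from your own capture:

[source,go]
----
package main

import (
	"crypto/md5"
	"crypto/sha1"
	"crypto/sha256"
	"fmt"
)

// certFingerprints renders the MD5, SHA1, and SHA256 digests of a
// DER-encoded certificate as uppercase hex, per the tls.*.hash.* descriptions.
func certFingerprints(der []byte) (md5Hex, sha1Hex, sha256Hex string) {
	m, s1, s256 := md5.Sum(der), sha1.Sum(der), sha256.Sum(der)
	return fmt.Sprintf("%X", m[:]), fmt.Sprintf("%X", s1[:]), fmt.Sprintf("%X", s256[:])
}

func main() {
	m, s1, s256 := certFingerprints([]byte("placeholder, not a real certificate"))
	fmt.Println(m, s1, s256)
}
----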
- -*`tls.client.x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. - -type: keyword - -example: *.elastic.co - --- - -*`tls.client.x509.issuer.common_name`*:: -+ --- -List of common name (CN) of issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - --- - -*`tls.client.x509.issuer.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - --- - -*`tls.client.x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - --- - -*`tls.client.x509.issuer.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: Mountain View - --- - -*`tls.client.x509.issuer.organization`*:: -+ --- -List of organizations (O) of issuing certificate authority. - -type: keyword - -example: Example Inc - --- - -*`tls.client.x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of issuing certificate authority. - -type: keyword - -example: www.example.com - --- - -*`tls.client.x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`tls.client.x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - --- - -*`tls.client.x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. - -type: date - -example: 2019-08-16 01:40:25+00:00 - --- - -*`tls.client.x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - --- - -*`tls.client.x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - --- - -*`tls.client.x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -Field is not indexed. - --- - -*`tls.client.x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - --- - -*`tls.client.x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. - -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - --- - -*`tls.client.x509.signature_algorithm`*:: -+ --- -Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - --- - -*`tls.client.x509.subject.common_name`*:: -+ --- -List of common names (CN) of subject. - -type: keyword - -example: shared.global.example.net - --- - -*`tls.client.x509.subject.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - --- - -*`tls.client.x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity.
- -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - --- - -*`tls.client.x509.subject.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: San Francisco - --- - -*`tls.client.x509.subject.organization`*:: -+ --- -List of organizations (O) of subject. - -type: keyword - -example: Example, Inc. - --- - -*`tls.client.x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of subject. - -type: keyword - --- - -*`tls.client.x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`tls.client.x509.version_number`*:: -+ --- -Version of x509 format. - -type: keyword - -example: 3 - --- - -*`tls.curve`*:: -+ --- -String indicating the curve used for the given cipher, when applicable. - -type: keyword - -example: secp256r1 - --- - -*`tls.established`*:: -+ --- -Boolean flag indicating if the TLS negotiation was successful and transitioned to an encrypted tunnel. - -type: boolean - --- - -*`tls.next_protocol`*:: -+ --- -String indicating the protocol being tunneled. Per the values in the IANA registry (https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids), this string should be lower case. - -type: keyword - -example: http/1.1 - --- - -*`tls.resumed`*:: -+ --- -Boolean flag indicating if this TLS connection was resumed from an existing TLS negotiation. - -type: boolean - --- - -*`tls.server.certificate`*:: -+ --- -PEM-encoded stand-alone certificate offered by the server. This is usually mutually-exclusive of `server.certificate_chain` since this value also exists in that list. - -type: keyword - -example: MII... - --- - -*`tls.server.certificate_chain`*:: -+ --- -Array of PEM-encoded certificates that make up the certificate chain offered by the server. This is usually mutually-exclusive of `server.certificate` since that value should be the first certificate in the chain. - -type: keyword - -example: ["MII...", "MII..."] - --- - -*`tls.server.hash.md5`*:: -+ --- -Certificate fingerprint using the MD5 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 0F76C7F2C55BFD7D8E8B8F4BFBF0C9EC - --- - -*`tls.server.hash.sha1`*:: -+ --- -Certificate fingerprint using the SHA1 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 9E393D93138888D288266C2D915214D1D1CCEB2A - --- - -*`tls.server.hash.sha256`*:: -+ --- -Certificate fingerprint using the SHA256 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 0687F666A054EF17A08E2F2162EAB4CBC0D265E1D7875BE74BF3C712CA92DAF0 - --- - -*`tls.server.issuer`*:: -+ --- -Subject of the issuer of the x.509 certificate presented by the server. - -type: keyword - -example: CN=Example Root CA, OU=Infrastructure Team, DC=example, DC=com - --- - -*`tls.server.ja3s`*:: -+ --- -A hash that identifies servers based on how they perform an SSL/TLS handshake. - -type: keyword - -example: 394441ab65754e2207b1e1b457b3641d - --- - -*`tls.server.not_after`*:: -+ --- -Timestamp indicating when server certificate is no longer considered valid. 
- -type: date - -example: 2021-01-01T00:00:00.000Z - --- - -*`tls.server.not_before`*:: -+ --- -Timestamp indicating when server certificate is first considered valid. - -type: date - -example: 1970-01-01T00:00:00.000Z - --- - -*`tls.server.subject`*:: -+ --- -Subject of the x.509 certificate presented by the server. - -type: keyword - -example: CN=www.example.com, OU=Infrastructure Team, DC=example, DC=com - --- - -*`tls.server.x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. - -type: keyword - -example: *.elastic.co - --- - -*`tls.server.x509.issuer.common_name`*:: -+ --- -List of common name (CN) of issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - --- - -*`tls.server.x509.issuer.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - --- - -*`tls.server.x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - --- - -*`tls.server.x509.issuer.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: Mountain View - --- - -*`tls.server.x509.issuer.organization`*:: -+ --- -List of organizations (O) of issuing certificate authority. - -type: keyword - -example: Example Inc - --- - -*`tls.server.x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of issuing certificate authority. - -type: keyword - -example: www.example.com - --- - -*`tls.server.x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`tls.server.x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - --- - -*`tls.server.x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. - -type: date - -example: 2019-08-16 01:40:25+00:00 - --- - -*`tls.server.x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - --- - -*`tls.server.x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - --- - -*`tls.server.x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -Field is not indexed. - --- - -*`tls.server.x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - --- - -*`tls.server.x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. - -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - --- - -*`tls.server.x509.signature_algorithm`*:: -+ --- -Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - --- - -*`tls.server.x509.subject.common_name`*:: -+ --- -List of common names (CN) of subject. 
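- -type: keyword - -example: shared.global.example.net - ---

The `tls.version` and `tls.version_protocol` fields defined a little further below split an original protocol string such as "TLSv1.2" into a numeric part and a normalized lowercase protocol name. A hypothetical helper illustrating that split:

[source,go]
----
package main

import (
	"fmt"
	"strings"
)

// splitTLSVersion turns an original protocol string such as "TLSv1.2"
// into a normalized protocol name ("tls") and a numeric version ("1.2").
func splitTLSVersion(original string) (protocol, version string) {
	i := strings.IndexAny(original, "0123456789")
	if i < 0 {
		return strings.ToLower(original), ""
	}
	protocol = strings.ToLower(strings.TrimRight(original[:i], "v ")) // drop the "v" separator
	version = original[i:]
	return protocol, version
}

func main() {
	p, v := splitTLSVersion("TLSv1.2")
	fmt.Println(p, v) // tls 1.2
}
----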
- -*`tls.server.x509.subject.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - --- - -*`tls.server.x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity. - -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - --- - -*`tls.server.x509.subject.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: San Francisco - --- - -*`tls.server.x509.subject.organization`*:: -+ --- -List of organizations (O) of subject. - -type: keyword - -example: Example, Inc. - --- - -*`tls.server.x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of subject. - -type: keyword - --- - -*`tls.server.x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`tls.server.x509.version_number`*:: -+ --- -Version of x509 format. - -type: keyword - -example: 3 - --- - -*`tls.version`*:: -+ --- -Numeric part of the version parsed from the original string. - -type: keyword - -example: 1.2 - --- - -*`tls.version_protocol`*:: -+ --- -Normalized lowercase protocol name parsed from original string. - -type: keyword - -example: tls - --- - -*`span.id`*:: -+ --- -Unique identifier of the span within the scope of its trace. -A span represents an operation within a transaction, such as a request to another service, or a database query. - -type: keyword - -example: 3ff9a8981b7ccd5a - --- - -*`trace.id`*:: -+ --- -Unique identifier of the trace. -A trace groups multiple events like transactions that belong together. For example, a user request handled by multiple inter-connected services. - -type: keyword - -example: 4bf92f3577b34da6a3ce929d0e0e4736 - --- - -*`transaction.id`*:: -+ --- -Unique identifier of the transaction within the scope of its trace. -A transaction is the highest level of work measured within a service, such as a request to a server. - -type: keyword - -example: 00f067aa0ba902b7 - --- - -[float] -=== url - -URL fields provide support for complete or partial URLs, and support breaking them down into scheme, domain, path, and so on. - - -*`url.domain`*:: -+ --- -Domain of the url, such as "www.elastic.co". -In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field. -If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field. - -type: keyword - -example: www.elastic.co - --- - -*`url.extension`*:: -+ --- -The field contains the file extension from the original request url, excluding the leading dot. -The file extension is only set if it exists, as not every url has a file extension. -The leading period must not be included. For example, the value must be "png", not ".png". -Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). - -type: keyword - -example: png - --- - -*`url.fragment`*:: -+ --- -Portion of the url after the `#`, such as "top". -The `#` is not part of the fragment. - -type: keyword - --- - -*`url.full`*:: -+ --- -If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source.
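- -type: wildcard - -example: https://www.elastic.co:443/search?q=elasticsearch#top - ---

Most of the component fields in this section map directly onto a standard URL parse. A rough Go sketch using the standard `net/url` package (how any given producer actually populates these fields may differ):

[source,go]
----
package main

import (
	"fmt"
	"log"
	"net/url"
)

func main() {
	u, err := url.Parse("https://user:secret@www.elastic.co:443/search?q=elasticsearch#top")
	if err != nil {
		log.Fatal(err)
	}
	pass, _ := u.User.Password()
	fmt.Println("url.scheme:", u.Scheme)     // https (no trailing ":")
	fmt.Println("url.domain:", u.Hostname()) // www.elastic.co
	fmt.Println("url.port:", u.Port())       // 443
	fmt.Println("url.path:", u.Path)         // /search
	fmt.Println("url.query:", u.RawQuery)    // q=elasticsearch (no "?")
	fmt.Println("url.fragment:", u.Fragment) // top (no "#")
	fmt.Println("url.username:", u.User.Username())
	fmt.Println("url.password:", pass)
}
----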
- -*`url.full.text`*:: -+ --- -type: match_only_text - --- - -*`url.original`*:: -+ --- -Unmodified original url as seen in the event source. -Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path. -This field is meant to represent the URL as it was observed, complete or not. - -type: wildcard - -example: https://www.elastic.co:443/search?q=elasticsearch#top or /search?q=elasticsearch - --- - -*`url.original.text`*:: -+ --- -type: match_only_text - --- - -*`url.password`*:: -+ --- -Password of the request. - -type: keyword - --- - -*`url.path`*:: -+ --- -Path of the request, such as "/search". - -type: wildcard - --- - -*`url.port`*:: -+ --- -Port of the request, such as 443. - -type: long - -example: 443 - -format: string - --- - -*`url.query`*:: -+ --- -The query field describes the query string of the request, such as "q=elasticsearch". -The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases. - -type: keyword - --- - -*`url.registered_domain`*:: -+ --- -The highest registered url domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - --- - -*`url.scheme`*:: -+ --- -Scheme of the request, such as "https". -Note: The `:` is not part of the scheme. - -type: keyword - -example: https - --- - -*`url.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`url.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - --- - -*`url.username`*:: -+ --- -Username of the request. - -type: keyword - --- - -[float] -=== user - -The user fields describe information about the user that is relevant to the event. -Fields can have one entry or multiple entries. If a user has more than one id, provide an array that includes all of them. - - -*`user.changes.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.changes.email`*:: -+ --- -User email address.
- -type: keyword - --- - -*`user.changes.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - --- - -*`user.changes.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`user.changes.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.changes.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - --- - -*`user.changes.group.name`*:: -+ --- -Name of the group. - -type: keyword - --- - -*`user.changes.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - --- - -*`user.changes.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - --- - -*`user.changes.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - --- - -*`user.changes.name.text`*:: -+ --- -type: match_only_text - --- - -*`user.changes.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - --- - -*`user.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.effective.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.effective.email`*:: -+ --- -User email address. - -type: keyword - --- - -*`user.effective.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - --- - -*`user.effective.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`user.effective.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.effective.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - --- - -*`user.effective.group.name`*:: -+ --- -Name of the group. - -type: keyword - --- - -*`user.effective.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - --- - -*`user.effective.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - --- - -*`user.effective.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - --- - -*`user.effective.name.text`*:: -+ --- -type: match_only_text - --- - -*`user.effective.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - --- - -*`user.email`*:: -+ --- -User email address. - -type: keyword - --- - -*`user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - --- - -*`user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`user.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - --- - -*`user.group.name`*:: -+ --- -Name of the group. 
- -type: keyword - --- - -*`user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - --- - -*`user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - --- - -*`user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - --- - -*`user.name.text`*:: -+ --- -type: match_only_text - --- - -*`user.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - --- - -*`user.target.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.target.email`*:: -+ --- -User email address. - -type: keyword - --- - -*`user.target.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - --- - -*`user.target.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`user.target.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.target.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - --- - -*`user.target.group.name`*:: -+ --- -Name of the group. - -type: keyword - --- - -*`user.target.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - --- - -*`user.target.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - --- - -*`user.target.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - --- - -*`user.target.name.text`*:: -+ --- -type: match_only_text - --- - -*`user.target.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - --- - -[float] -=== user_agent - -The user_agent fields normally come from a browser request. -They often show up in web service logs coming from the parsed user agent string. - - -*`user_agent.device.name`*:: -+ --- -Name of the device. - -type: keyword - -example: iPhone - --- - -*`user_agent.name`*:: -+ --- -Name of the user agent. - -type: keyword - -example: Safari - --- - -*`user_agent.original`*:: -+ --- -Unparsed user_agent string. - -type: keyword - -example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1 - --- - -*`user_agent.original.text`*:: -+ --- -type: match_only_text - --- - -*`user_agent.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - -type: keyword - -example: debian - --- - -*`user_agent.os.full`*:: -+ --- -Operating system name, including the version or code name. - -type: keyword - -example: Mac OS Mojave - --- - -*`user_agent.os.full.text`*:: -+ --- -type: match_only_text - --- - -*`user_agent.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - -type: keyword - -example: 4.4.0-112-generic - --- - -*`user_agent.os.name`*:: -+ --- -Operating system name, without the version. 
- -type: keyword - -example: Mac OS X - --- - -*`user_agent.os.name.text`*:: -+ --- -type: match_only_text - --- - -*`user_agent.os.platform`*:: -+ --- -Operating system platform (such as centos, ubuntu, windows). - -type: keyword - -example: darwin - --- - -*`user_agent.os.type`*:: -+ --- -Use the `os.type` field to categorize the operating system into one of the broad commercial families. -One of the following values should be used (lowercase): linux, macos, unix, windows. -If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS to propose its addition. - -type: keyword - -example: macos - --- - -*`user_agent.os.version`*:: -+ --- -Operating system version as a raw string. - -type: keyword - -example: 10.14.1 - --- - -*`user_agent.version`*:: -+ --- -Version of the user agent. - -type: keyword - -example: 12.0 - --- - -[float] -=== vlan - -The VLAN fields are used to identify 802.1q tag(s) of a packet, as well as ingress and egress VLAN associations of an observer in relation to a specific packet or connection. -Network.vlan fields are used to record a single VLAN tag, or the outer tag in the case of q-in-q encapsulations, for a packet or connection as observed, typically provided by a network sensor (e.g. Zeek, Wireshark) passively reporting on traffic. -Network.inner VLAN fields are used to report inner q-in-q 802.1q tags (multiple 802.1q encapsulations) as observed, typically provided by a network sensor (e.g. Zeek, Wireshark) passively reporting on traffic. Network.inner VLAN fields should only be used in addition to network.vlan fields to indicate q-in-q tagging. -Observer.ingress and observer.egress VLAN values are used to record observer specific information when observer events contain discrete ingress and egress VLAN information, typically provided by firewalls, routers, or load balancers. - - -*`vlan.id`*:: -+ --- -VLAN ID as reported by the observer. - -type: keyword - -example: 10 - --- - -*`vlan.name`*:: -+ --- -Optional VLAN name as reported by the observer. - -type: keyword - -example: outside - --- - -[float] -=== vulnerability - -The vulnerability fields describe information about a vulnerability that is relevant to an event. - - -*`vulnerability.category`*:: -+ --- -The type of system or architecture that the vulnerability affects. These may be platform-specific (for example, Debian or SUSE) or general (for example, Database or Firewall). For example (https://qualysguard.qualys.com/qwebhelp/fo_portal/knowledgebase/vulnerability_categories.htm[Qualys vulnerability categories]) -This field must be an array. - -type: keyword - -example: ["Firewall"] - --- - -*`vulnerability.classification`*:: -+ --- -The classification of the vulnerability scoring system. For example (https://www.first.org/cvss/) - -type: keyword - -example: CVSS - --- - -*`vulnerability.description`*:: -+ --- -The description of the vulnerability that provides additional context of the vulnerability. For example (https://cve.mitre.org/about/faqs.html#cve_entry_descriptions_created[Common Vulnerabilities and Exposures CVE description]) - -type: keyword - -example: In macOS before 2.12.6, there is a vulnerability in the RPC... - --- - -*`vulnerability.description.text`*:: -+ --- -type: match_only_text - --- - -*`vulnerability.enumeration`*:: -+ --- -The type of identifier used for this vulnerability.
For example (https://cve.mitre.org/about/) - -type: keyword - -example: CVE - --- - -*`vulnerability.id`*:: -+ --- -The identification (ID) is the number portion of a vulnerability entry. It includes a unique identification number for the vulnerability. For example (https://cve.mitre.org/about/faqs.html#what_is_cve_id[Common Vulnerabilities and Exposures CVE ID]) - -type: keyword - -example: CVE-2019-00001 - --- - -*`vulnerability.reference`*:: -+ --- -A resource that provides additional information, context, and mitigations for the identified vulnerability. - -type: keyword - -example: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6111 - --- - -*`vulnerability.report_id`*:: -+ --- -The report or scan identification number. - -type: keyword - -example: 20191018.0001 - --- - -*`vulnerability.scanner.vendor`*:: -+ --- -The name of the vulnerability scanner vendor. - -type: keyword - -example: Tenable - --- - -*`vulnerability.score.base`*:: -+ --- -Scores can range from 0.0 to 10.0, with 10.0 being the most severe. -Base scores cover an assessment for exploitability metrics (attack vector, complexity, privileges, and user interaction), impact metrics (confidentiality, integrity, and availability), and scope. For example (https://www.first.org/cvss/specification-document) - -type: float - -example: 5.5 - --- - -*`vulnerability.score.environmental`*:: -+ --- -Scores can range from 0.0 to 10.0, with 10.0 being the most severe. -Environmental scores cover an assessment for any modified Base metrics, confidentiality, integrity, and availability requirements. For example (https://www.first.org/cvss/specification-document) - -type: float - -example: 5.5 - --- - -*`vulnerability.score.temporal`*:: -+ --- -Scores can range from 0.0 to 10.0, with 10.0 being the most severe. -Temporal scores cover an assessment for code maturity, remediation level, and confidence. For example (https://www.first.org/cvss/specification-document) - -type: float - --- - -*`vulnerability.score.version`*:: -+ --- -The National Vulnerability Database (NVD) provides qualitative severity rankings of "Low", "Medium", and "High" for CVSS v2.0 base score ranges in addition to the severity ratings for CVSS v3.0 as they are defined in the CVSS v3.0 specification. -CVSS is owned and managed by FIRST.Org, Inc. (FIRST), a US-based non-profit organization, whose mission is to help computer security incident response teams across the world. For example (https://nvd.nist.gov/vuln-metrics/cvss) - -type: keyword - -example: 2.0 - --- - -*`vulnerability.severity`*:: -+ --- -The severity of the vulnerability can help with metrics and internal prioritization regarding remediation. For example (https://nvd.nist.gov/vuln-metrics/cvss) - -type: keyword - -example: Critical - --- - -[float] -=== x509 - -This implements the common core fields for x509 certificates. This information is likely logged with TLS sessions, digital signatures found in executable binaries, S/MIME information in email bodies, or analysis of files on disk. -When the certificate relates to a file, use the fields at `file.x509`. When hashes of the DER-encoded certificate are available, the `hash` data set should be populated as well (e.g. `file.hash.sha256`). -Events that contain certificate information about network connections should use the x509 fields under the relevant TLS fields: `tls.server.x509` and/or `tls.client.x509`. - - -*`x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN).
Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. - -type: keyword - -example: *.elastic.co - --- - -*`x509.issuer.common_name`*:: -+ --- -List of common names (CN) of the issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - --- - -*`x509.issuer.country`*:: -+ --- -List of country (C) codes. - -type: keyword - -example: US - --- - -*`x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - --- - -*`x509.issuer.locality`*:: -+ --- -List of locality names (L). - -type: keyword - -example: Mountain View - --- - -*`x509.issuer.organization`*:: -+ --- -List of organizations (O) of the issuing certificate authority. - -type: keyword - -example: Example Inc - --- - -*`x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of the issuing certificate authority. - -type: keyword - -example: www.example.com - --- - -*`x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P). - -type: keyword - -example: California - --- - -*`x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - --- - -*`x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. - -type: date - -example: 2019-08-16 01:40:25+00:00 - --- - -*`x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - --- - -*`x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - --- - -*`x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -Field is not indexed. - --- - -*`x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - --- - -*`x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. - -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - --- - -*`x509.signature_algorithm`*:: -+ --- -Identifier for the certificate signature algorithm. We recommend using the names found in the Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - --- - -*`x509.subject.common_name`*:: -+ --- -List of common names (CN) of the subject. - -type: keyword - -example: shared.global.example.net - --- - -*`x509.subject.country`*:: -+ --- -List of country (C) codes. - -type: keyword - -example: US - --- - -*`x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity. - -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - --- - -*`x509.subject.locality`*:: -+ --- -List of locality names (L). - -type: keyword - -example: San Francisco - --- - -*`x509.subject.organization`*:: -+ --- -List of organizations (O) of the subject. - -type: keyword - -example: Example, Inc. - --- - -*`x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of the subject.
- -type: keyword - --- - -*`x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P). - -type: keyword - -example: California - --- - -*`x509.version_number`*:: -+ --- -Version of the x509 format. - -type: keyword - -example: 3 - --- - -[[exported-fields-file_integrity]] -== File Integrity fields - -These are the fields generated by the file_integrity module. - - -[float] -=== file - -File attributes. - - -[float] -=== elf - -These fields contain Linux Executable Linkable Format (ELF) metadata. - - -*`file.elf.go_imports`*:: -+ --- -List of imported Go language element names and types. - -type: flattened - --- - -*`file.elf.go_imports_names_entropy`*:: -+ --- -Shannon entropy calculation from the list of Go imports. - -type: long - -format: number - --- - -*`file.elf.go_imports_names_var_entropy`*:: -+ --- -Variance for Shannon entropy calculation from the list of Go imports. - -type: long - -format: number - --- - -*`file.elf.go_import_hash`*:: -+ --- -A hash of the Go language imports in an ELF file excluding standard library imports. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -The algorithm used to calculate the Go symbol hash and a reference implementation are available at https://github.com/elastic/toutoumomoma. - -type: keyword - -example: 10bddcb4cee42080f76c88d9ff964491 - --- - -*`file.elf.go_stripped`*:: -+ --- -Set to true if the file is a Go executable that has had its symbols stripped or obfuscated, and false if it is an unobfuscated Go executable. - -type: boolean - --- - -*`file.elf.imports_names_entropy`*:: -+ --- -Shannon entropy calculation from the list of imported element names and types. - -type: long - -format: number - --- - -*`file.elf.imports_names_var_entropy`*:: -+ --- -Variance for Shannon entropy calculation from the list of imported element names and types. - -type: long - -format: number - --- - -*`file.elf.import_hash`*:: -+ --- -A hash of the imports in an ELF file. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -This is an ELF implementation of the Windows PE imphash. - -type: keyword - -example: d41d8cd98f00b204e9800998ecf8427e - --- - -*`file.elf.sections.var_entropy`*:: -+ --- -Variance for Shannon entropy calculation from the section. - -type: long - -format: number - --- - -[float] -=== macho - -These fields contain Mach object file Format (Mach-O) metadata. - - -*`file.macho.go_imports`*:: -+ --- -List of imported Go language element names and types. - -type: flattened - --- - -*`file.macho.go_imports_names_entropy`*:: -+ --- -Shannon entropy calculation from the list of Go imports. - -type: long - -format: number - --- - -*`file.macho.go_imports_names_var_entropy`*:: -+ --- -Variance for Shannon entropy calculation from the list of Go imports. - -type: long - -format: number - --- - -*`file.macho.go_import_hash`*:: -+ --- -A hash of the Go language imports in a Mach-O file excluding standard library imports. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -The algorithm used to calculate the Go symbol hash and a reference implementation are available at https://github.com/elastic/toutoumomoma.
- -type: keyword - -example: 10bddcb4cee42080f76c88d9ff964491 - --- - -*`file.macho.go_stripped`*:: -+ --- -Set to true if the file is a Go executable that has had its symbols stripped or obfuscated, and false if it is an unobfuscated Go executable. - -type: boolean - --- - -*`file.macho.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`file.macho.imports_names_entropy`*:: -+ --- -Shannon entropy calculation from the list of imported element names and types. - -type: long - -format: number - --- - -*`file.macho.imports_names_var_entropy`*:: -+ --- -Variance for Shannon entropy calculation from the list of imported element names and types. - -type: long - -format: number - --- - -*`file.macho.import_hash`*:: -+ --- -A hash of the imports in a Mach-O file. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -This is a synonym for symhash. - -type: keyword - -example: d3ccf195b62a9279c3c19af1080497ec - --- - -*`file.macho.sections`*:: -+ --- -An array containing an object for each section of the Mach-O file. -The keys that should be present in these objects are defined by sub-fields underneath `macho.sections.*`. - -type: nested - --- - -*`file.macho.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`file.macho.sections.var_entropy`*:: -+ --- -Variance for Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`file.macho.sections.name`*:: -+ --- -Mach-O Section List name. - -type: keyword - --- - -*`file.macho.sections.physical_size`*:: -+ --- -Mach-O Section List physical size. - -type: long - -format: string - --- - -*`file.macho.sections.virtual_size`*:: -+ --- -Mach-O Section List virtual size. - -type: long - -format: string - --- - -*`file.macho.symhash`*:: -+ --- -A hash of the imports in a Mach-O file. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. - -type: keyword - -example: d3ccf195b62a9279c3c19af1080497ec - --- - -[float] -=== pe - -These fields contain Windows Portable Executable (PE) metadata. - - -*`file.pe.go_imports`*:: -+ --- -List of imported Go language element names and types. - -type: flattened - --- - -*`file.pe.go_imports_names_entropy`*:: -+ --- -Shannon entropy calculation from the list of Go imports. - -type: long - -format: number - --- - -*`file.pe.go_imports_names_var_entropy`*:: -+ --- -Variance for Shannon entropy calculation from the list of Go imports. - -type: long - -format: number - --- - -*`file.pe.go_import_hash`*:: -+ --- -A hash of the Go language imports in a PE file excluding standard library imports. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -The algorithm used to calculate the Go symbol hash and a reference implementation are available at https://github.com/elastic/toutoumomoma. - -type: keyword - -example: 10bddcb4cee42080f76c88d9ff964491 - --- - -*`file.pe.go_stripped`*:: -+ --- -Set to true if the file is a Go executable that has had its symbols stripped or obfuscated, and false if it is an unobfuscated Go executable. - -type: boolean - --- - -*`file.pe.imports`*:: -+ --- -List of imported element names and types.
- -type: flattened - --- - -*`file.pe.imports_names_entropy`*:: -+ --- -Shannon entropy calculation from the list of imported element names and types. - -type: long - -format: number - --- - -*`file.pe.imports_names_var_entropy`*:: -+ --- -Variance for Shannon entropy calculation from the list of imported element names and types. - -type: long - -format: number - --- - -*`file.pe.import_hash`*:: -+ --- -A hash of the imports in a PE file. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -This is a synonym for imphash. - -type: keyword - --- - -*`file.pe.sections`*:: -+ --- -An array containing an object for each section of the PE file. -The keys that should be present in these objects are defined by sub-fields underneath `pe.sections.*`. - -type: nested - --- - -*`file.pe.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`file.pe.sections.var_entropy`*:: -+ --- -Variance for Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`file.pe.sections.name`*:: -+ --- -PE Section List name. - -type: keyword - --- - -*`file.pe.sections.physical_size`*:: -+ --- -PE Section List physical size. - -type: long - -format: string - --- - -*`file.pe.sections.virtual_size`*:: -+ --- -PE Section List virtual size. - -type: long - -format: string - --- - -[float] -=== hash - -Hashes of the file. The keys are algorithm names and the values are the hex encoded digest values. - - - -*`hash.blake2b_256`*:: -+ --- -BLAKE2b-256 hash of the file. - -type: keyword - --- - -*`hash.blake2b_384`*:: -+ --- -BLAKE2b-384 hash of the file. - -type: keyword - --- - -*`hash.blake2b_512`*:: -+ --- -BLAKE2b-512 hash of the file. - -type: keyword - --- - -*`hash.md5`*:: -+ --- -MD5 hash of the file. - -type: keyword - --- - -*`hash.sha1`*:: -+ --- -SHA1 hash of the file. - -type: keyword - --- - -*`hash.sha224`*:: -+ --- -SHA224 hash of the file. - -type: keyword - --- - -*`hash.sha256`*:: -+ --- -SHA256 hash of the file. - -type: keyword - --- - -*`hash.sha384`*:: -+ --- -SHA384 hash of the file. - -type: keyword - --- - -*`hash.sha3_224`*:: -+ --- -SHA3_224 hash of the file. - -type: keyword - --- - -*`hash.sha3_256`*:: -+ --- -SHA3_256 hash of the file. - -type: keyword - --- - -*`hash.sha3_384`*:: -+ --- -SHA3_384 hash of the file. - -type: keyword - --- - -*`hash.sha3_512`*:: -+ --- -SHA3_512 hash of the file. - -type: keyword - --- - -*`hash.sha512`*:: -+ --- -SHA512 hash of the file. - -type: keyword - --- - -*`hash.sha512_224`*:: -+ --- -SHA512/224 hash of the file. - -type: keyword - --- - -*`hash.sha512_256`*:: -+ --- -SHA512/256 hash of the file. - -type: keyword - --- - -*`hash.xxh64`*:: -+ --- -XX64 hash of the file. - -type: keyword - --- - -[[exported-fields-host-processor]] -== Host fields - -Info collected for the host machine. - - - - -*`host.containerized`*:: -+ --- -If the host is a container. - - -type: boolean - --- - -*`host.os.build`*:: -+ --- -OS build information. - - -type: keyword - -example: 18D109 - --- - -*`host.os.codename`*:: -+ --- -OS codename, if any. - - -type: keyword - -example: stretch - --- - -[[exported-fields-jolokia-autodiscover]] -== Jolokia Discovery autodiscover provider fields - -Metadata from Jolokia Discovery added by the jolokia provider. - - - -*`jolokia.agent.version`*:: -+ --- -Version number of the jolokia agent.
- - -type: keyword - --- - -*`jolokia.agent.id`*:: -+ --- -Each agent has a unique id, which can either be provided during startup of the agent in the form of a configuration parameter or be autodetected. If autodetected, the id has several parts: the IP, the process id, the hashcode of the agent, and its type. - - -type: keyword - --- - -*`jolokia.server.product`*:: -+ --- -The container product if detected. - - -type: keyword - --- - -*`jolokia.server.version`*:: -+ --- -The container's version (if detected). - - -type: keyword - --- - -*`jolokia.server.vendor`*:: -+ --- -The vendor of the container the agent is running in. - - -type: keyword - --- - -*`jolokia.url`*:: -+ --- -The URL through which this agent can be contacted. - - -type: keyword - --- - -*`jolokia.secured`*:: -+ --- -Whether the agent was configured for authentication or not. - - -type: boolean - --- - -[[exported-fields-kubernetes-processor]] -== Kubernetes fields - -Kubernetes metadata added by the kubernetes processor. - - - - -*`kubernetes.pod.name`*:: -+ --- -Kubernetes pod name - - -type: keyword - --- - -*`kubernetes.pod.uid`*:: -+ --- -Kubernetes Pod UID - - -type: keyword - --- - -*`kubernetes.pod.ip`*:: -+ --- -Kubernetes Pod IP - - -type: ip - --- - -*`kubernetes.namespace`*:: -+ --- -Kubernetes namespace - - -type: keyword - --- - -*`kubernetes.node.name`*:: -+ --- -Kubernetes node name - - -type: keyword - --- - -*`kubernetes.node.hostname`*:: -+ --- -Kubernetes hostname as reported by the node’s kernel - - -type: keyword - --- - -*`kubernetes.labels.*`*:: -+ --- -Kubernetes labels map - - -type: object - --- - -*`kubernetes.annotations.*`*:: -+ --- -Kubernetes annotations map - - -type: object - --- - -*`kubernetes.selectors.*`*:: -+ --- -Kubernetes selectors map - - -type: object - --- - -*`kubernetes.replicaset.name`*:: -+ --- -Kubernetes replicaset name - - -type: keyword - --- - -*`kubernetes.deployment.name`*:: -+ --- -Kubernetes deployment name - - -type: keyword - --- - -*`kubernetes.statefulset.name`*:: -+ --- -Kubernetes statefulset name - - -type: keyword - --- - -*`kubernetes.container.name`*:: -+ --- -Kubernetes container name (different from the name reported by the runtime) - - -type: keyword - --- - -[[exported-fields-process]] -== Process fields - -Process metadata fields. - - - - -*`process.exe`*:: -+ --- -type: alias - -alias to: process.executable - --- - -[float] -=== owner - -Process owner information. - - -*`process.owner.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - --- - -*`process.owner.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: albert - --- - -*`process.owner.name.text`*:: -+ --- -type: text - --- - -[[exported-fields-system]] -== System fields - -These are the fields generated by the system module. - - - - -*`event.origin`*:: -+ --- -Origin of the event. This can be a file path (e.g. `/var/log/log.1`), or the name of the system component that supplied the data (e.g. `netlink`). - - -type: keyword - --- - - -*`user.entity_id`*:: -+ --- -ID uniquely identifying the user on a host. It is computed as a SHA-256 hash of the host ID, user ID, and user name. - - -type: keyword - --- - -*`user.terminal`*:: -+ --- -Terminal of the user. - - -type: keyword - --- - - -*`process.thread.capabilities.effective`*:: -+ --- -This is the set of capabilities used by the kernel to perform permission checks for the thread.
- -type: keyword - -example: ["CAP_BPF", "CAP_SYS_ADMIN"] - --- - -*`process.thread.capabilities.permitted`*:: -+ --- -This is a limiting superset for the effective capabilities that the thread may assume. - -type: keyword - -example: ["CAP_BPF", "CAP_SYS_ADMIN"] - --- - -[float] -=== hash - -Hashes of the executable. The keys are algorithm names and the values are the hex encoded digest values. - - - -*`process.hash.blake2b_256`*:: -+ --- -BLAKE2b-256 hash of the executable. - -type: keyword - --- - -*`process.hash.blake2b_384`*:: -+ --- -BLAKE2b-384 hash of the executable. - -type: keyword - --- - -*`process.hash.blake2b_512`*:: -+ --- -BLAKE2b-512 hash of the executable. - -type: keyword - --- - -*`process.hash.sha224`*:: -+ --- -SHA224 hash of the executable. - -type: keyword - --- - -*`process.hash.sha384`*:: -+ --- -SHA384 hash of the executable. - -type: keyword - --- - -*`process.hash.sha3_224`*:: -+ --- -SHA3_224 hash of the executable. - -type: keyword - --- - -*`process.hash.sha3_256`*:: -+ --- -SHA3_256 hash of the executable. - -type: keyword - --- - -*`process.hash.sha3_384`*:: -+ --- -SHA3_384 hash of the executable. - -type: keyword - --- - -*`process.hash.sha3_512`*:: -+ --- -SHA3_512 hash of the executable. - -type: keyword - --- - -*`process.hash.sha512_224`*:: -+ --- -SHA512/224 hash of the executable. - -type: keyword - --- - -*`process.hash.sha512_256`*:: -+ --- -SHA512/256 hash of the executable. - -type: keyword - --- - -*`process.hash.xxh64`*:: -+ --- -XX64 hash of the executable. - -type: keyword - --- - -[float] -=== system.audit - - - - -[float] -=== host - -`host` contains general host information. - - - -*`system.audit.host.uptime`*:: -+ --- -Uptime in nanoseconds. - - -type: long - -format: duration - --- - -*`system.audit.host.boottime`*:: -+ --- -Boot time. - - -type: date - --- - -*`system.audit.host.containerized`*:: -+ --- -Set if host is a container. - - -type: boolean - --- - -*`system.audit.host.timezone.name`*:: -+ --- -Name of the timezone of the host, e.g. BST. - - -type: keyword - --- - -*`system.audit.host.timezone.offset.sec`*:: -+ --- -Timezone offset in seconds. - - -type: long - --- - -*`system.audit.host.hostname`*:: -+ --- -Hostname. - - -type: keyword - --- - -*`system.audit.host.id`*:: -+ --- -Host ID. - - -type: keyword - --- - -*`system.audit.host.architecture`*:: -+ --- -Host architecture (e.g. x86_64). - - -type: keyword - --- - -*`system.audit.host.mac`*:: -+ --- -MAC addresses. - - -type: keyword - --- - -*`system.audit.host.ip`*:: -+ --- -IP addresses. - - -type: ip - --- - -[float] -=== os - -`os` contains information about the operating system. - - - -*`system.audit.host.os.codename`*:: -+ --- -OS codename, if any (e.g. stretch). - - -type: keyword - --- - -*`system.audit.host.os.platform`*:: -+ --- -OS platform (e.g. centos, ubuntu, windows). - - -type: keyword - --- - -*`system.audit.host.os.name`*:: -+ --- -OS name (e.g. Mac OS X). - - -type: keyword - --- - -*`system.audit.host.os.family`*:: -+ --- -OS family (e.g. redhat, debian, freebsd, windows). - - -type: keyword - --- - -*`system.audit.host.os.version`*:: -+ --- -OS version. - - -type: keyword - --- - -*`system.audit.host.os.kernel`*:: -+ --- -The operating system's kernel version. - - -type: keyword - --- - -*`system.audit.host.os.type`*:: -+ --- -OS type (see ECS os.type). - - -type: keyword - --- - -[float] -=== package - -`package` contains information about an installed or removed package. 
- - - -*`system.audit.package.entity_id`*:: -+ --- -ID uniquely identifying the package. It is computed as a SHA-256 hash of the - host ID, package name, and package version. - - -type: keyword - --- - -*`system.audit.package.name`*:: -+ --- -Package name. - - -type: keyword - --- - -*`system.audit.package.version`*:: -+ --- -Package version. - - -type: keyword - --- - -*`system.audit.package.release`*:: -+ --- -Package release. - - -type: keyword - --- - -*`system.audit.package.arch`*:: -+ --- -Package architecture. - - -type: keyword - --- - -*`system.audit.package.license`*:: -+ --- -Package license. - - -type: keyword - --- - -*`system.audit.package.installtime`*:: -+ --- -Package install time. - - -type: date - --- - -*`system.audit.package.size`*:: -+ --- -Package size. - - -type: long - --- - -*`system.audit.package.summary`*:: -+ --- -Package summary. - - --- - -*`system.audit.package.url`*:: -+ --- -Package URL. - - -type: keyword - --- - -[float] -=== user - -`user` contains information about the users on a system. - - - -*`system.audit.user.name`*:: -+ --- -User name. - - -type: keyword - --- - -*`system.audit.user.uid`*:: -+ --- -User ID. - - -type: keyword - --- - -*`system.audit.user.gid`*:: -+ --- -Group ID. - - -type: keyword - --- - -*`system.audit.user.dir`*:: -+ --- -User's home directory. - - -type: keyword - --- - -*`system.audit.user.shell`*:: -+ --- -Program to run at login. - - -type: keyword - --- - -*`system.audit.user.user_information`*:: -+ --- -General user information. On Linux, this is the gecos field. - - -type: keyword - --- - -*`system.audit.user.group`*:: -+ --- -`group` contains information about any groups the user is part of (beyond the user's primary group). - - -type: object - --- - -[float] -=== password - -`password` contains information about a user's password (not the password itself). - - - -*`system.audit.user.password.type`*:: -+ --- -A user's password type. Possible values are `shadow_password` (the password hash is in the shadow file), `password_disabled`, `no_password` (this is dangerous as anyone can log in), and `crypt_password` (when the password field in /etc/passwd seems to contain an encrypted password). - - -type: keyword - --- - -*`system.audit.user.password.last_changed`*:: -+ --- -The day the user's password was last changed. - - -type: date - --- - -:edit_url!: \ No newline at end of file diff --git a/auditbeat/docs/getting-started.asciidoc b/auditbeat/docs/getting-started.asciidoc deleted file mode 100644 index 0e7cb1d38da8..000000000000 --- a/auditbeat/docs/getting-started.asciidoc +++ /dev/null @@ -1,151 +0,0 @@ -[id="{beatname_lc}-installation-configuration"] -== {beatname_uc} quick start: installation and configuration - -++++ -Quick start: installation and configuration -++++ - -This guide describes how to get started quickly with audit data collection. -You'll learn how to: - -* install {beatname_uc} on each system you want to monitor -* specify the location of your audit data -* parse log data into fields and send it to {es} -* visualize the log data in {kib} - -[role="screenshot"] -image::./images/auditbeat-auditd-dashboard.png[{beatname_uc} Auditd dashboard] - -[float] -=== Before you begin - -You need {es} for storing and searching your data, and {kib} for visualizing and -managing it. - -include::{libbeat-dir}/tab-widgets/spinup-stack-widget.asciidoc[] - -[float] -[[install]] -=== Step 1: Install {beatname_uc} - -Install {beatname_uc} on all the servers you want to monitor. 
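For example, on a Debian-based AMD64 host the install is typically a download followed by `dpkg`. This is a minimal sketch that assumes the standard Elastic artifacts download URL and the `{version}` attribute; the platform widget referenced below covers every supported package type:

["source","sh",subs="attributes"]
----
curl -L -O https://artifacts.elastic.co/downloads/beats/{beatname_lc}/{beatname_lc}-{version}-amd64.deb
sudo dpkg -i {beatname_lc}-{version}-amd64.deb
----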
- -To download and install {beatname_uc}, use the commands that work with your -system: - -include::{libbeat-dir}/tab-widgets/install-widget.asciidoc[] - -The commands shown are for AMD platforms, but ARM packages are also available. -Refer to the https://www.elastic.co/downloads/beats/{beatname_lc}[download page] -for the full list of available packages. - -[float] -[[other-installation-options]] -==== Other installation options - -* <> -* https://www.elastic.co/downloads/beats/{beatname_lc}[Download page] -* <> -* <> - -[float] -[[set-connection]] -=== Step 2: Connect to the {stack} - -include::{libbeat-dir}/shared/connecting-to-es.asciidoc[] - -[float] -[[enable-modules]] -=== Step 3: Configure data collection modules - -{beatname_uc} uses <> to collect audit information. - -By default, {beatname_uc} uses a configuration that's tailored to the operating -system where {beatname_uc} is running. - -To use a different configuration, change the module settings in -+{beatname_lc}.yml+. - -The following example shows the `file_integrity` module configured to generate -events whenever a file in one of the specified paths changes on disk: - -["source","sh",subs="attributes"] -------------------------------------- -auditbeat.modules: - -- module: file_integrity - paths: - - /bin - - /usr/bin - - /sbin - - /usr/sbin - - /etc -------------------------------------- - - -include::{libbeat-dir}/shared/config-check.asciidoc[] - -[float] -[[setup-assets]] -=== Step 4: Set up assets - -{beatname_uc} comes with predefined assets for parsing, indexing, and -visualizing your data. To load these assets: - -. Make sure the user specified in +{beatname_lc}.yml+ is -<>. - -. From the installation directory, run: -+ --- -include::{libbeat-dir}/tab-widgets/setup-widget.asciidoc[] --- -+ -`-e` is optional and sends output to standard error instead of the configured log output. - -This step loads the recommended {ref}/index-templates.html[index template] for writing to {es} -and deploys the sample dashboards for visualizing the data in {kib}. - -[TIP] -===== -A connection to {es} (or {ess}) is required to set up the initial -environment. If you're using a different output, such as {ls}, see -<> and <>. -===== - -[float] -[[start]] -=== Step 5: Start {beatname_uc} - -Before starting {beatname_uc}, modify the user credentials in -+{beatname_lc}.yml+ and specify a user who is -<>. - -To start {beatname_uc}, run: - -// tag::start-step[] -include::{libbeat-dir}/tab-widgets/start-widget.asciidoc[] -// end::start-step[] - -{beatname_uc} should begin streaming events to {es}. - -If you see a warning about too many open files, you need to increase the -`ulimit`. See the <> for more details. - -[float] -[[view-data]] -=== Step 6: View your data in {kib} - -To make it easier for you to start auditing the activities of users and -processes on your system, {beatname_uc} comes with pre-built {kib} dashboards -and UIs for visualizing your data. - -include::{libbeat-dir}/shared/opendashboards.asciidoc[tag=open-dashboards] - -[float] -=== What's next? - -Now that you have audit data streaming into {es}, learn how to unify your logs, -metrics, uptime, and application performance data. 
- -include::{libbeat-dir}/shared/obs-apps.asciidoc[] diff --git a/auditbeat/docs/howto/howto.asciidoc b/auditbeat/docs/howto/howto.asciidoc deleted file mode 100644 index 0c0334f29021..000000000000 --- a/auditbeat/docs/howto/howto.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ -[[howto-guides]] -= How to guides - -[partintro] --- -Learn how to perform common {beatname_uc} configuration tasks. - -* <<{beatname_lc}-template>> -* <> -* <> -* <<{beatname_lc}-geoip>> -* <> -* <> -* <> -* <> - - --- - -include::{libbeat-dir}/howto/load-index-templates.asciidoc[] - -include::{libbeat-dir}/howto/change-index-name.asciidoc[] - -include::{libbeat-dir}/howto/load-dashboards.asciidoc[] - -include::{libbeat-dir}/shared-geoip.asciidoc[] - -include::{libbeat-dir}/shared-config-ingest.asciidoc[] - -:standalone: -include::{libbeat-dir}/shared-env-vars.asciidoc[] -:standalone!: - -:standalone: -include::{libbeat-dir}/yaml.asciidoc[] -:standalone!: - - - diff --git a/auditbeat/docs/images/auditbeat-kernel-executions-dashboard.png b/auditbeat/docs/images/auditbeat-kernel-executions-dashboard.png deleted file mode 100644 index 855bbc5eb37e..000000000000 Binary files a/auditbeat/docs/images/auditbeat-kernel-executions-dashboard.png and /dev/null differ diff --git a/auditbeat/docs/images/auditbeat-kernel-overview-dashboard.png b/auditbeat/docs/images/auditbeat-kernel-overview-dashboard.png deleted file mode 100644 index 2f08cdcddbef..000000000000 Binary files a/auditbeat/docs/images/auditbeat-kernel-overview-dashboard.png and /dev/null differ diff --git a/auditbeat/docs/images/auditbeat-kernel-sockets-dashboard.png b/auditbeat/docs/images/auditbeat-kernel-sockets-dashboard.png deleted file mode 100644 index 156c3f38f526..000000000000 Binary files a/auditbeat/docs/images/auditbeat-kernel-sockets-dashboard.png and /dev/null differ diff --git a/auditbeat/docs/index.asciidoc b/auditbeat/docs/index.asciidoc deleted file mode 100644 index bf2db3607ce7..000000000000 --- a/auditbeat/docs/index.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ -= Auditbeat Reference - -:libbeat-dir: {docdir}/../../libbeat/docs - -include::{libbeat-dir}/version.asciidoc[] - -include::{asciidoc-dir}/../../shared/versions/stack/{source_branch}.asciidoc[] - -include::{asciidoc-dir}/../../shared/attributes.asciidoc[] - -:beatname_lc: auditbeat -:beatname_uc: Auditbeat -:beatname_pkg: {beatname_lc} -:github_repo_name: beats -:discuss_forum: beats/{beatname_lc} -:beat_default_index_prefix: {beatname_lc} -:deb_os: -:rpm_os: -:mac_os: -:docker_platform: -:win_os: -:linux_os: -:no_cache_processor: -:no_decode_cef_processor: -:no_decode_csv_fields_processor: -:no_parse_aws_vpc_flow_log_processor: -:no_script_processor: -:no_timestamp_processor: - -include::{libbeat-dir}/shared-beats-attributes.asciidoc[] - -include::./overview.asciidoc[] - -include::./getting-started.asciidoc[] - -include::./setting-up-running.asciidoc[] - -include::./upgrading.asciidoc[] - -include::./configuring-howto.asciidoc[] - -include::{docdir}/howto/howto.asciidoc[] - -include::./modules.asciidoc[] - -include::./fields.asciidoc[] - -include::{libbeat-dir}/monitoring/monitoring-beats.asciidoc[] - -include::{libbeat-dir}/shared-securing-beat.asciidoc[] - -include::./troubleshooting.asciidoc[] - -include::./faq.asciidoc[] - -include::{libbeat-dir}/contributing-to-beats.asciidoc[] - - diff --git a/auditbeat/docs/modules.asciidoc b/auditbeat/docs/modules.asciidoc deleted file mode 100644 index d94daa75bad1..000000000000 --- a/auditbeat/docs/modules.asciidoc +++ /dev/null @@ -1,10 +0,0 @@ 
-[id="{beatname_lc}-modules"] -= Modules - -[partintro] --- -This section contains detailed information about the metric collecting modules -contained in {beatname_uc}. More details about each module can be found under -the links below. - -include::modules_list.asciidoc[] diff --git a/auditbeat/docs/modules/auditd.asciidoc b/auditbeat/docs/modules/auditd.asciidoc deleted file mode 100644 index 0361dc56097e..000000000000 --- a/auditbeat/docs/modules/auditd.asciidoc +++ /dev/null @@ -1,327 +0,0 @@ -//// -This file is generated! See scripts/docs_collector.py -//// - -:modulename: auditd - -[id="{beatname_lc}-module-auditd"] -== Auditd Module - -The `auditd` module receives audit events from the Linux Audit Framework that -is a part of the Linux kernel. - -This module is available only for Linux. - -[float] -=== How it works - -This module establishes a subscription to the kernel to receive the events -as they occur. So unlike most other modules, the `period` configuration -option is unused because it is not implemented using polling. - -The Linux Audit Framework can send multiple messages for a single auditable -event. For example, a `rename` syscall causes the kernel to send eight separate -messages. Each message describes a different aspect of the activity that is -occurring (the syscall itself, file paths, current working directory, process -title). This module will combine all of the data from each of the messages -into a single event. - -Messages for one event can be interleaved with messages from another event. This -module will buffer the messages in order to combine related messages into a -single event even if they arrive interleaved or out of order. - -[float] -=== Useful commands - -When running {beatname_uc} with the `auditd` module enabled, you might find -that other monitoring tools interfere with {beatname_uc}. - -For example, you might encounter errors if another process, such as `auditd`, is -registered to receive data from the Linux Audit Framework. You can use these -commands to see if the `auditd` service is running and stop it: - -* See if `auditd` is running: -+ -[source,shell] ------ -service auditd status ------ - -* Stop the `auditd` service: -+ -[source,shell] ------ -service auditd stop ------ - -* Disable `auditd` from starting on boot: -+ -[source,shell] ------ -chkconfig auditd off ------ - -To save CPU usage and disk space, you can use this command to stop `journald` -from listening to audit messages: - -[source,shell] ------ -systemctl mask systemd-journald-audit.socket ------ - -[float] -=== Inspect the kernel audit system status - -{beatname_uc} provides useful commands to query the state of the audit system -in the Linux kernel. 
- -* See the list of installed audit rules: -+ -[source,shell] ------ -auditbeat show auditd-rules ------ -+ -Prints the list of loaded rules, similar to `auditctl -l`: -+ -[source,shell] ------ --a never,exit -S all -F pid=26253 --a always,exit -F arch=b32 -S all -F key=32bit-abi --a always,exit -F arch=b64 -S execve,execveat -F key=exec --a always,exit -F arch=b64 -S connect,accept,bind -F key=external-access --w /etc/group -p wa -k identity --w /etc/passwd -p wa -k identity --w /etc/gshadow -p wa -k identity --a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EACCES -F key=access --a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EPERM -F key=access ------ - -* See the status of the audit system: -+ -[source,shell] ------ -auditbeat show auditd-status ------ -+ -Prints the status of the kernel audit system, similar to `auditctl -s`: -+ -[source,shell] ------ -enabled 1 -failure 0 -pid 0 -rate_limit 0 -backlog_limit 8192 -lost 14407 -backlog 0 -backlog_wait_time 0 -features 0xf ------ - -[float] -=== Configuration options - -This module has some configuration options for tuning its behavior. The -following example shows all configuration options with their default values. - -[source,yaml] ----- -- module: auditd - resolve_ids: true - failure_mode: silent - backlog_limit: 8192 - rate_limit: 0 - include_raw_message: false - include_warnings: false - backpressure_strategy: auto - immutable: false ----- - -This module also supports the -<> -described later. - -*`socket_type`*:: This optional setting controls the type of -socket that {beatname_uc} uses to receive events from the kernel. The two -options are `unicast` and `multicast`. -+ -`unicast` should be used when {beatname_uc} is the primary userspace daemon for -receiving audit events and managing the rules. Only a single process can receive -audit events through the "unicast" connection, so any other daemons should be -stopped (e.g. stop `auditd`). -+ -`multicast` can be used in kernel versions 3.16 and newer. By using `multicast` -{beatname_uc} will receive an audit event broadcast that is not exclusive to -a single process. This is ideal for situations where `auditd` is running and -managing the rules. -+ -By default {beatname_uc} will use `multicast` if the kernel version is 3.16 or -newer and no rules have been defined. Otherwise `unicast` will be used. - -*`immutable`*:: This boolean setting sets the audit config as immutable (`-e 2`). -This option can only be used with `socket_type: unicast`, since {beatname_uc} -needs to manage the rules to be able to set it. -+ -It is important to note that with this setting enabled, if {beatname_uc} is -stopped and resumed, events will continue to be processed but the -configuration won't be updated until the system is restarted entirely. - -*`resolve_ids`*:: This boolean setting enables the resolution of UIDs and -GIDs to their associated names. The default value is true. - -*`failure_mode`*:: This determines the kernel's behavior on critical -failures, such as errors sending events to {beatname_uc}, the backlog limit being -exceeded, the kernel running out of memory, or the rate limit being exceeded. The -options are `silent`, `log`, or `panic`. `silent` makes the kernel -ignore the errors, `log` makes the kernel write the audit messages using -`printk` so they show up in the system's syslog, and `panic` causes the kernel to -panic to prevent use of the machine. {beatname_uc}'s default is `silent`.
- -*`backlog_limit`*:: This controls the maximum number of audit messages -that will be buffered by the kernel. - -*`rate_limit`*:: This sets a rate limit on the number of messages/sec -delivered by the kernel. The default is 0, which disables rate limiting. -Changing this value to anything other than zero can cause messages to be lost. -The preferred approach to reduce the messaging rate is to be more selective in the -audit ruleset. - -*`include_raw_message`*:: This boolean setting causes {beatname_uc} to -include each of the raw messages that contributed to the event in the document -as a field called `event.original`. The default value is false. This setting is -primarily used for development and debugging purposes. - -*`include_warnings`*:: This boolean setting causes {beatname_uc} to -include as warnings any issues that were encountered while parsing the raw -messages. The messages are written to the `error.message` field. The default -value is false. When this setting is enabled, the raw messages will be included -in the event regardless of the `include_raw_message` config setting. This -setting is primarily used for development and debugging purposes. - -*`audit_rules`*:: A string containing the audit rules that should be -installed to the kernel. There should be one rule per line. Comments can be -embedded in the string using `#` as a prefix. The format for rules is the same -used by the Linux `auditctl` utility. {beatname_uc} supports adding file watches -(`-w`) and syscall rules (`-a` or `-A`). For more information, see -<>. - -*`audit_rule_files`*:: A list of files to load audit rules from. These files are -loaded after the rules declared in `audit_rules` are loaded. Wildcards are -supported and will expand in lexicographical order. The format is the same as -that of the `audit_rules` field. - -*`ignore_errors`*:: This setting allows errors during rule loading and parsing -to be ignored, but logged as warnings. - -*`backpressure_strategy`*:: Specifies the strategy that {beatname_uc} uses to -prevent backpressure from propagating to the kernel and impacting audited -processes. -+ --- -The possible values are: - -- `auto` (default): {beatname_uc} uses the `kernel` strategy, if supported, or -falls back to the `userspace` strategy. -- `kernel`: {beatname_uc} sets the `backlog_wait_time` in the kernel's -audit framework to 0. This causes events to be discarded in the kernel if -the audit backlog queue fills to capacity. Requires a 3.14 kernel or -newer. -- `userspace`: {beatname_uc} drops events when there is backpressure -from the publishing pipeline. If no `rate_limit` is set, {beatname_uc} sets a rate -limit of 5000. Users should test their setup and adjust the `rate_limit` -option accordingly. -- `both`: {beatname_uc} uses the `kernel` and `userspace` strategies at the same -time. -- `none`: No backpressure mitigation measures are enabled. --- - -include::{docdir}/auditbeat-options.asciidoc[] - -[float] -[[audit-rules]] -=== Audit rules - -The audit rules are where you configure the activities that are audited. These -rules are configured as either syscalls or files that should be monitored. For -example, you can track all `connect` syscalls or file system writes to -`/etc/passwd`. - -Auditing a large number of syscalls can place a heavy load on the system, so -consider carefully the rules you define and try to apply filters in the rules -themselves to be as selective as possible.
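For instance, constraining a syscall rule by architecture, result, and audit UID keeps the event rate far below that of an unfiltered rule. A minimal sketch using the same `auditctl` rule syntax shown in the examples in this document (the rule keys are arbitrary labels):

[source,sh]
----
## Broad: audits every open/openat call on the system (expensive).
#-a always,exit -F arch=b64 -S open,openat -F key=all-opens

## Selective: only failed opens (EACCES) from real login sessions.
-a always,exit -F arch=b64 -S open,openat -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -F key=access-denied
----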
- -The kernel evaluates the rules in the order in which they were defined so place -the most active rules first in order to speed up evaluation. - -You can assign keys to each rule for better identification of the rule that -triggered an event and easier filtering later in Elasticsearch. - -Defining any audit rules in the config causes {beatname_uc} to purge all -existing audit rules prior to adding the rules specified in the config. -Therefore it is unnecessary and unsupported to include a `-D` (delete all) rule. - -["source","sh",subs="attributes"] ----- -{beatname_lc}.modules: -- module: auditd - audit_rules: | - # Things that affect identity. - -w /etc/group -p wa -k identity - -w /etc/passwd -p wa -k identity - -w /etc/gshadow -p wa -k identity - -w /etc/shadow -p wa -k identity - - # Unauthorized access attempts to files (unsuccessful). - -a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -F key=access - -a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -F key=access - -a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -F key=access - -a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -F key=access ----- - - -[float] -=== Example configuration - -The Auditd module supports the common configuration options that are -described under <>. Here -is an example configuration: - -[source,yaml] ----- -auditbeat.modules: -- module: auditd - # Load audit rules from separate files. Same format as audit.rules(7). - audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ] - audit_rules: | - ## Define audit rules here. - ## Create file watches (-w) or syscall audits (-a or -A). Uncomment these - ## examples or add your own rules. - - ## If you are on a 64 bit platform, everything should be running - ## in 64 bit mode. This rule will detect any use of the 32 bit syscalls - ## because this might be a sign of someone exploiting a hole in the 32 - ## bit API. - #-a always,exit -F arch=b32 -S all -F key=32bit-abi - - ## Executions. - #-a always,exit -F arch=b64 -S execve,execveat -k exec - - ## External access (warning: these can be expensive to audit). - #-a always,exit -F arch=b64 -S accept,bind,connect -F key=external-access - - ## Identity changes. - #-w /etc/group -p wa -k identity - #-w /etc/passwd -p wa -k identity - #-w /etc/gshadow -p wa -k identity - - ## Unauthorized access attempts. - #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access - #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access - - ----- - - -:modulename!: - diff --git a/auditbeat/docs/modules/file_integrity.asciidoc b/auditbeat/docs/modules/file_integrity.asciidoc deleted file mode 100644 index 872ba5189255..000000000000 --- a/auditbeat/docs/modules/file_integrity.asciidoc +++ /dev/null @@ -1,183 +0,0 @@ -//// -This file is generated! See scripts/docs_collector.py -//// - -:modulename: file_integrity - -[id="{beatname_lc}-module-file_integrity"] -== File Integrity Module - -The `file_integrity` module sends events when a file is changed (created, -updated, or deleted) on disk. The events contain file metadata and hashes. 
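As a quick orientation before the details that follow, a minimal configuration sketch that watches a single directory looks like this (`/etc` is just an illustrative choice; any paths work):

[source,yaml]
----
auditbeat.modules:
- module: file_integrity
  paths:
    - /etc
----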
- -The module is implemented for Linux, macOS (Darwin), and Windows. - -[float] -=== How it works - -This module uses features of the operating system to monitor file changes in -realtime. When the module starts, it creates a subscription with the OS to -receive notifications of changes to the specified files or directories. Upon -receiving notification of a change, the module will read the file's metadata -and then compute a hash of the file's contents. - -At startup, this module will perform an initial scan of the configured files -and directories to generate baseline data for the monitored paths and detect -changes since the last time it was run. It uses locally persisted data in order -to only send events for new or modified files. - -The operating system features that power this functionality are as follows. - -* Linux - Multiple backends are supported: `auto`, `fsnotify`, `kprobes`, `ebpf`. -By default, `fsnotify` is used, and therefore the kernel must have inotify support. -Inotify was initially merged into the 2.6.13 Linux kernel. -The eBPF backend uses modern eBPF features and supports 5.10.16+ kernels. -The `kprobes` backend uses tracefs and supports 3.10+ kernels. -The `fsnotify` backend doesn't have the ability to associate user data with file events. -The preferred backend can be selected by specifying the `backend` config option. -Since the eBPF and kprobes backends are in technical preview, `auto` will default to `fsnotify`. -* macOS (Darwin) - Uses the `FSEvents` API, present since macOS 10.5. This API -coalesces multiple changes to a file into a single event. {beatname_uc} translates -these coalesced changes into a meaningful sequence of actions. However, -in rare situations the reported events may have a different ordering than what -actually happened. -* Windows - `ReadDirectoryChangesW` is used. - -The file integrity module should not be used to monitor paths on network file -systems. - -[float] -=== Configuration options - -This module has some configuration options for tuning its behavior. The -following example shows all configuration options with their default values for -Linux. - -[source,yaml] ----- -- module: file_integrity - paths: - - /bin - - /usr/bin - - /sbin - - /usr/sbin - - /etc - recursive: false - exclude_files: - - '(?i)\.sw[nop]$' - - '~$' - - '/\.git($|/)' - include_files: [] - scan_at_start: true - scan_rate_per_sec: 50 MiB - max_file_size: 100 MiB - hash_types: [sha1] ----- - -This module also supports the -<> -described later. - -*`paths`*:: A list of paths (directories or files) to watch. Globs are -not supported. The specified paths should exist when the metricset is started. -Paths should be absolute, although the file integrity module will attempt to -resolve relative path events to their absolute file path. Symbolic links will -be resolved on module start and the link target will be watched if link resolution -is successful. Changes to the symbolic link after module start will not change -the watch target. If the link does not resolve to a valid target, the symbolic -link itself will be watched; if the symlink target becomes valid after module -start up this will not be picked up by the file system watches. - -*`recursive`*:: By default, the watches set to the paths specified in -`paths` are not recursive. This means that only changes to the contents -of these directories are watched. If `recursive` is set to `true`, the -`file_integrity` module will watch for changes on these directories and all -their subdirectories.
-
-*`exclude_files`*:: A list of regular expressions used to filter out events
-for unwanted files. The expressions are matched against the full path of every
-file and directory. When used in conjunction with `include_files`, a file path
-must match `include_files` and must not match `exclude_files` to be selected.
-By default, no files are excluded. See <>
-for a list of supported regexp patterns. It is recommended to wrap regular
-expressions in single quotation marks to avoid issues with YAML escaping
-rules.
-If `recursive` is set to true, subdirectories can also be excluded here by
-specifying them.
-
-*`include_files`*:: A list of regular expressions used to specify which files to
-select. When configured, only files matching the pattern will be monitored.
-The expressions are matched against the full path of every file and directory.
-When used in conjunction with `exclude_files`, a file path
-must match `include_files` and must not match `exclude_files` to be selected.
-By default, all files are selected. See <>
-for a list of supported regexp patterns. It is recommended to wrap regular
-expressions in single quotation marks to avoid issues with YAML escaping
-rules.
-
-*`scan_at_start`*:: A boolean value that controls if {beatname_uc} scans
-over the configured file paths at startup and sends events for the files
-that have been modified since the last time {beatname_uc} was running. The
-default value is true.
-+
-This feature depends on data stored locally in `path.data` in order to determine
-if a file has changed. The first time {beatname_uc} runs, it will send an event
-for each file it encounters.
-
-*`scan_rate_per_sec`*:: When `scan_at_start` is enabled, this sets an
-average read rate defined in bytes per second for the initial scan. This
-throttles the amount of CPU and I/O that {beatname_uc} consumes at startup.
-The default value is "50 MiB". Setting the value to "0" disables throttling.
-For convenience, units can be specified as a suffix to the value. The supported
-units are `b` (default), `kib`, `kb`, `mib`, `mb`, `gib`, `gb`, `tib`, `tb`,
-`pib`, `pb`, `eib`, and `eb`.
-
-*`max_file_size`*:: The maximum size of a file in bytes for which
-{beatname_uc} will compute hashes and run file parsers. Files larger than this
-size will not be hashed or analysed by configured file parsers. The default
-value is 100 MiB. For convenience, units can be specified as a suffix to the
-value. The supported units are `b` (default), `kib`, `kb`, `mib`, `mb`, `gib`,
-`gb`, `tib`, `tb`, `pib`, `pb`, `eib`, and `eb`.
-
-*`hash_types`*:: A list of hash types to compute when the file changes.
-The supported hash types are `blake2b_256`, `blake2b_384`, `blake2b_512`, `md5`,
-`sha1`, `sha224`, `sha256`, `sha384`, `sha512`, `sha512_224`, `sha512_256`,
-`sha3_224`, `sha3_256`, `sha3_384`, `sha3_512`, and `xxh64`. The default value is `sha1`.
-
-*`file_parsers`*:: A list of `file_integrity` fields under `file` that will be
-populated by file format parsers. The available fields that can be analysed
-are listed in the auditbeat.reference.yml file. File parsers are run on all
-files within the `max_file_size` limit in the configured paths during a scan or
-when a file event involves the file. Files that are not targets of the specific
-file parser are only sniffed to examine whether analysis should proceed. This will
-usually only involve reading a small number of bytes.
-
-*`backend`*:: (*Linux only*) Select the backend which will be used to
-source events.
-Valid values: `auto`, `fsnotify`, `kprobes`, `ebpf`. Default: `fsnotify`.
-
-include::{docdir}/auditbeat-options.asciidoc[]
-
-
-[float]
-=== Example configuration
-
-The File Integrity module supports the common configuration options that are
-described under <>. Here
-is an example configuration:
-
-[source,yaml]
-----
-auditbeat.modules:
-- module: file_integrity
-  paths:
-  - /bin
-  - /usr/bin
-  - /sbin
-  - /usr/sbin
-  - /etc
-
-----
-
-
-:modulename!:
-
diff --git a/auditbeat/docs/modules_list.asciidoc b/auditbeat/docs/modules_list.asciidoc
deleted file mode 100644
index ed367bac1d09..000000000000
--- a/auditbeat/docs/modules_list.asciidoc
+++ /dev/null
@@ -1,14 +0,0 @@
-////
-This file is generated! See scripts/docs_collector.py
-////
-
- * <<{beatname_lc}-module-auditd,Auditd>>
- * <<{beatname_lc}-module-file_integrity,File Integrity>>
- * <<{beatname_lc}-module-system,System>>
-
-
---
-
-include::./modules/auditd.asciidoc[]
-include::./modules/file_integrity.asciidoc[]
-include::../../x-pack/auditbeat/docs/modules/system.asciidoc[]
diff --git a/auditbeat/docs/overview.asciidoc b/auditbeat/docs/overview.asciidoc
deleted file mode 100644
index 547638ff509f..000000000000
--- a/auditbeat/docs/overview.asciidoc
+++ /dev/null
@@ -1,11 +0,0 @@
-[id="{beatname_lc}-overview"]
-== {beatname_uc} overview
-
-{beatname_uc} is a lightweight shipper that you can install on your servers to
-audit the activities of users and processes on your systems. For example, you
-can use {beatname_uc} to collect and centralize audit events from the Linux
-Audit Framework. You can also use {beatname_uc} to detect changes to critical
-files, like binaries and configuration files, and identify potential security
-policy violations.
-
-include::{libbeat-dir}/shared-libbeat-description.asciidoc[]
diff --git a/auditbeat/docs/reload-configuration.asciidoc b/auditbeat/docs/reload-configuration.asciidoc
deleted file mode 100644
index dab510164d89..000000000000
--- a/auditbeat/docs/reload-configuration.asciidoc
+++ /dev/null
@@ -1,51 +0,0 @@
-[id="{beatname_lc}-configuration-reloading"]
-== Reload the configuration dynamically
-
-++++
-Config file reloading
-++++
-
-beta[]
-
-You can configure {beatname_uc} to dynamically reload configuration files when
-there are changes. To do this, you specify a path
-(https://golang.org/pkg/path/filepath/#Glob[glob]) to watch for module
-configuration changes. When the files found by the glob change, new modules are
-started/stopped according to changes in the configuration files.
-
-To enable dynamic config reloading, you specify the `path` and `reload` options
-in the main +{beatname_lc}.yml+ config file. For example:
-
-["source","yaml"]
------------------------------------------------------------------------------
-auditbeat.config.modules:
-  path: ${path.config}/conf.d/*.yml
-  reload.enabled: true
-  reload.period: 10s
------------------------------------------------------------------------------
-
-*`path`*:: A glob that defines the files to check for changes.
-
-*`reload.enabled`*:: When set to `true`, enables dynamic config reload.
-
-*`reload.period`*:: Specifies how often the files are checked for changes. Do not
-set the `period` to less than 1s because the modification time of files is often
-stored in seconds. Setting the `period` to less than 1s will result in
-unnecessary overhead.
-
-Each file found by the glob must contain a list of one or more module
-definitions.
-For example:
-
-[source,yaml]
-----------------------------------------------------------------------------
-- module: file_integrity
-  paths:
-  - /www/wordpress
-  - /www/wordpress/wp-admin
-  - /www/wordpress/wp-content
-  - /www/wordpress/wp-includes
-----------------------------------------------------------------------------
-
-NOTE: On systems with POSIX file permissions, all Beats configuration files are
-subject to ownership and file permission checks. If you encounter config loading
-errors related to file ownership, see
-{beats-ref}/config-file-permissions.html[Config file ownership and permission].
diff --git a/auditbeat/docs/running-on-docker.asciidoc b/auditbeat/docs/running-on-docker.asciidoc
deleted file mode 100644
index dee50fa254a3..000000000000
--- a/auditbeat/docs/running-on-docker.asciidoc
+++ /dev/null
@@ -1,14 +0,0 @@
-include::{libbeat-dir}/shared-docker.asciidoc[]
-
-==== Special requirements
-
-Under Docker, {beatname_uc} runs as a non-root user, but requires some privileged
-capabilities to operate correctly. Ensure that the +AUDIT_CONTROL+ and +AUDIT_READ+
-capabilities are available to the container.
-
-It is also essential to run {beatname_uc} in the host PID namespace.
-
-["source","sh",subs="attributes"]
-----
-docker run --cap-add=AUDIT_CONTROL --cap-add=AUDIT_READ --user=root --pid=host {dockerimage}
-----
diff --git a/auditbeat/docs/running-on-kubernetes.asciidoc b/auditbeat/docs/running-on-kubernetes.asciidoc
deleted file mode 100644
index f5f4f0f4715e..000000000000
--- a/auditbeat/docs/running-on-kubernetes.asciidoc
+++ /dev/null
@@ -1,101 +0,0 @@
-[[running-on-kubernetes]]
-=== Running {beatname_uc} on Kubernetes
-
-{beatname_uc} <> can be used on Kubernetes to
-check file integrity.
-
-TIP: Running {ecloud} on Kubernetes? See {eck-ref}/k8s-beat.html[Run {beats} on ECK].
-
-ifeval::["{release-state}"=="unreleased"]
-
-However, version {version} of {beatname_uc} has not yet been
-released, so no Docker image is currently available for this version.
-
-endif::[]
-
-
-[float]
-==== Kubernetes deploy manifests
-
-By deploying {beatname_uc} as a https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/[DaemonSet],
-we ensure we get a running instance on each node of the cluster.
-
-Everything is deployed under the `kube-system` namespace; you can change that by
-updating the YAML file.
-
-To get the manifests, run:
-
-["source", "sh", subs="attributes"]
------------------------------------------------
-curl -L -O https://raw.githubusercontent.com/elastic/beats/{branch}/deploy/kubernetes/{beatname_lc}-kubernetes.yaml
------------------------------------------------
-
-[WARNING]
-=======================================
-If you are using Kubernetes 1.7 or earlier: {beatname_uc} uses a hostPath volume to persist internal data; it's located
-under /var/lib/{beatname_lc}-data. The manifest uses folder autocreation (`DirectoryOrCreate`), which was introduced in
-Kubernetes 1.8. You will need to remove `type: DirectoryOrCreate` from the manifest and create the host folder yourself.
-
-=======================================
-
-[float]
-==== Settings
-
-Some parameters are exposed in the manifest to configure the logs destination.
-By default they point to an existing Elasticsearch deployment, if one is present,
-but you may want to change that behavior, so edit the YAML file and modify them:
-
-["source", "yaml", subs="attributes"]
-----------------------------------------------
-- name: ELASTICSEARCH_HOST
-  value: elasticsearch
-- name: ELASTICSEARCH_PORT
-  value: "9200"
-- name: ELASTICSEARCH_USERNAME
-  value: elastic
-- name: ELASTICSEARCH_PASSWORD
-  value: changeme
-----------------------------------------------
-
-[float]
-===== Running {beatname_uc} on control plane nodes
-
-Kubernetes control plane nodes can use https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/[taints]
-to limit the workloads that can run on them. To run {beatname_uc} on control plane
-nodes, you may need to update the DaemonSet spec to include proper tolerations:
-
-[source,yaml]
-----------------------------------------------
-spec:
- tolerations:
- - key: node-role.kubernetes.io/control-plane
-   effect: NoSchedule
-----------------------------------------------
-
-[float]
-==== Deploy
-
-To deploy {beatname_uc} to Kubernetes, run:
-
-["source", "sh", subs="attributes"]
-----------------------------------------------
-kubectl create -f {beatname_lc}-kubernetes.yaml
-----------------------------------------------
-
-Then you should be able to check the status by running:
-
-["source", "sh", subs="attributes"]
-----------------------------------------------
-$ kubectl --namespace=kube-system get ds/{beatname_lc}
-
-NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
-{beatname_lc}   32        32        0       32           0           <none>          1m
-----------------------------------------------
-
-[WARNING]
-=======================================
-{beatname_uc} is able to monitor the file integrity of files in pods. To do
-that, the directories with the container root file systems have to be
-mounted as volumes in the {beatname_uc} container. For example, containers
-executed with containerd have their root file systems under `/run/containerd`.
-The https://raw.githubusercontent.com/elastic/beats/{branch}/deploy/kubernetes/{beatname_lc}-kubernetes.yaml[reference manifest] contains an example of this.
-=======================================
diff --git a/auditbeat/docs/setting-up-running.asciidoc b/auditbeat/docs/setting-up-running.asciidoc
deleted file mode 100644
index 4e2bd8265f90..000000000000
--- a/auditbeat/docs/setting-up-running.asciidoc
+++ /dev/null
@@ -1,58 +0,0 @@
-/////
-// NOTE:
-// Each beat has its own setup overview to allow for the addition of content
-// that is unique to each beat.
-/////
-
-[[setting-up-and-running]]
-== Set up and run {beatname_uc}
-
-++++
-Set up and run
-++++
-
-Before reading this section, see
-<<{beatname_lc}-installation-configuration>> for basic
-installation instructions to get you started.
-
-This section includes additional information on how to install, set up, and run
-{beatname_uc}, including:
-
-* <>
-
-* <>
-
-* <>
-
-* <>
-
-* <>
-
-* <>
-
-* <>
-
-* <<{beatname_lc}-starting>>
-
-* <>
-
-
-//MAINTAINERS: If you add a new file to this section, make sure you update the bulleted list ^^ too.
- -include::{libbeat-dir}/shared-directory-layout.asciidoc[] - -include::{libbeat-dir}/keystore.asciidoc[] - -include::{libbeat-dir}/command-reference.asciidoc[] - -include::{libbeat-dir}/repositories.asciidoc[] - -include::./running-on-docker.asciidoc[] - -include::./running-on-kubernetes.asciidoc[] - -include::{libbeat-dir}/shared-systemd.asciidoc[] - -include::{libbeat-dir}/shared/start-beat.asciidoc[] - -include::{libbeat-dir}/shared/shutdown.asciidoc[] diff --git a/auditbeat/docs/troubleshooting.asciidoc b/auditbeat/docs/troubleshooting.asciidoc deleted file mode 100644 index 19eb279272b4..000000000000 --- a/auditbeat/docs/troubleshooting.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ -[[troubleshooting]] -= Troubleshoot - -[partintro] --- -If you have issues installing or running {beatname_uc}, read the -following tips: - -* <> -* <> -* <> -* <> - -//sets block macro for getting-help.asciidoc included in next section - --- - -[[getting-help]] -== Get Help - -include::{libbeat-dir}/getting-help.asciidoc[] - -//sets block macro for debugging.asciidoc included in next section - -[id="enable-{beatname_lc}-debugging"] -== Debug - -include::{libbeat-dir}/debugging.asciidoc[] - -//sets block macro for metrics-in-logs.asciidoc included in next section - -[id="understand-{beatname_lc}-logs"] -[role="xpack"] -== Understand metrics in {beatname_uc} logs - -++++ -Understand logged metrics -++++ - -include::{libbeat-dir}/metrics-in-logs.asciidoc[] - diff --git a/auditbeat/docs/upgrading.asciidoc b/auditbeat/docs/upgrading.asciidoc deleted file mode 100644 index 132cb1db8434..000000000000 --- a/auditbeat/docs/upgrading.asciidoc +++ /dev/null @@ -1,7 +0,0 @@ -[[upgrading-auditbeat]] -== Upgrade Auditbeat - -For information about upgrading to a new version, see: - -* {beats-ref}/breaking-changes.html[Breaking Changes] -* {beats-ref}/upgrading.html[Upgrade] diff --git a/docs/devguide/contributing.asciidoc b/docs/devguide/contributing.asciidoc deleted file mode 100644 index 0637052b96c7..000000000000 --- a/docs/devguide/contributing.asciidoc +++ /dev/null @@ -1,245 +0,0 @@ -[[beats-contributing]] -== Contributing to Beats - -If you have a bugfix or new feature that you would like to contribute, please -start by opening a topic on the https://discuss.elastic.co/c/beats[forums]. -It may be that somebody is already working on it, or that there are particular -issues that you should know about before implementing the change. - -We enjoy working with contributors to get their code accepted. There are many -approaches to fixing a problem and it is important to find the best approach -before writing too much code. After committing your code, check out the -https://www.elastic.co/community/contributor[Elastic Contributor Program] -where you can earn points and rewards for your contributions. - -The process for contributing to any of the Elastic repositories is similar. - -[float] -[[contribution-steps]] -=== Contribution Steps - -. Please make sure you have signed our -https://www.elastic.co/contributor-agreement/[Contributor License Agreement]. We -are not asking you to assign copyright to us, but to give us the right to -distribute your code without restriction. We ask this of all contributors in -order to assure our users of the origin and continuing existence of the code. -You only need to sign the CLA once. - -. Send a pull request! Push your changes to your fork of the repository and -https://help.github.com/articles/using-pull-requests[submit a pull request] using our -<>. New PRs go to the main branch. 
-The Beats core team will backport your PR if necessary.
-
-
-In the pull request, describe what your changes do and mention
-any bugs/issues related to the pull request. Please also add a changelog entry to
-https://github.com/elastic/beats/blob/main/CHANGELOG.next.asciidoc[CHANGELOG.next.asciidoc].
-
-[float]
-[[setting-up-dev-environment]]
-=== Setting Up Your Dev Environment
-
-The Beats are Go programs, so install the {go-version} version of
-http://golang.org/[Go], which is the version used for Beats development.
-
-After https://golang.org/doc/install[installing Go], set the
-https://golang.org/doc/code.html#GOPATH[GOPATH] environment variable to point to
-your workspace location, and make sure `$GOPATH/bin` is in your PATH.
-
-NOTE: One reliable way to install the proper Go version to work with Beats is to use the
-https://github.com/andrewkroh/gvm[GVM] Go version manager. An example for Mac users would be:
-
-[source,shell,subs=attributes+]
-----------------------------------------------------------------------
-gvm use {go-version}
-eval $(gvm {go-version})
-----------------------------------------------------------------------
-
-Then you can clone the Beats git repository:
-
-[source,shell]
-----------------------------------------------------------------------
-mkdir -p ${GOPATH}/src/github.com/elastic
-git clone https://github.com/elastic/beats ${GOPATH}/src/github.com/elastic/beats
-----------------------------------------------------------------------
-
-NOTE: If you have multiple go paths, use `${GOPATH%%:*}` instead of `${GOPATH}`.
-
-Beats developers primarily use https://github.com/magefile/mage[Mage] for development.
-You can install mage using a make target:
-
-[source,shell]
--------------------------------------------------------------------------------
-make mage
--------------------------------------------------------------------------------
-
-Then you can compile a particular Beat by using Mage. For example, for Filebeat:
-
-[source,shell]
--------------------------------------------------------------------------------
-cd beats/filebeat
-mage build
--------------------------------------------------------------------------------
-
-You can list all available mage targets with:
-
-[source,shell]
--------------------------------------------------------------------------------
-mage -l
--------------------------------------------------------------------------------
-
-Some of the Beats might have extra development requirements, in which case
-you'll find a CONTRIBUTING.md file in the Beat directory.
-
-We use an http://editorconfig.org/[EditorConfig] file in the beats repository
-to standardise how different editors handle whitespace, line endings, and other
-coding styles in our files. Most popular editors have a
-http://editorconfig.org/#download[plugin] for EditorConfig and we strongly
-recommend that you install it.
-
-[float]
-[[update-scripts]]
-=== Update scripts
-
-The Beats use a variety of scripts based on Python, make, and mage to generate
-configuration files and documentation. Make sure to use the version of Python listed
-in the https://github.com/elastic/beats/blob/main/.python-version[.python-version] file.
-
-The primary command for updating generated files is:
-
-[source,shell]
--------------------------------------------------------------------------------
-make update
--------------------------------------------------------------------------------
-Each Beat has its own `update` target (for both `make` and `mage`), as well as a master `update` in the repository root.
-If a PR adds or removes a dependency, run `make update` in the root `beats` directory.
-
-Another command properly formats go source files and adds a copyright header:
-
-[source,shell]
--------------------------------------------------------------------------------
-make fmt
--------------------------------------------------------------------------------
-
-Both of these commands should be run before submitting a PR. You can view all
-the available make targets with `make help`.
-
-These commands have the following dependencies:
-
-* Python >= {python}
-* Python https://docs.python.org/3/library/venv.html[venv module]
-* https://github.com/magefile/mage[Mage]
-
-The Python venv module is included in the standard library in Python 3. On Debian/Ubuntu
-systems, you also need to install the `python3-venv` package, which includes
-additional support scripts:
-
-[source,shell]
--------------------------------------------------------------------------------
-sudo apt-get install python3-venv
--------------------------------------------------------------------------------
-
-[float]
-[[build-target-env-vars]]
-=== Selecting Build Targets
-
-Beats is built using the `make release` target. By default, make will select from a limited number of preset build targets:
-
-- darwin/amd64
-- darwin/arm64
-- linux/amd64
-- windows/amd64
-
-You can change build targets using the `PLATFORMS` environment variable. Targets set with the `PLATFORMS` variable can either be a GOOS value, or a GOOS/arch pair.
-For example, `linux` and `linux/amd64` are both valid targets. You can select multiple targets, and the `PLATFORMS` list is space delimited; for example, `darwin windows` will build on all supported darwin and windows architectures.
-In addition, you can add or remove from the list of build targets by prepending `+` or `-` to a given target. For example: `+bsd` or `-darwin`.
-
-You can find the complete list of supported build targets with `go tool dist list`.
-
-[float]
-[[running-linter]]
-=== Linting
-
-Beats uses https://golangci-lint.run/[golangci-lint]. You can run the pre-configured linter against your change:
-
-[source,shell]
--------------------------------------------------------------------------------
-mage llc
--------------------------------------------------------------------------------
-
-`llc` stands for `Lint Last Change`, which includes all the Go files that were changed in either the last commit (if you're on the `main` branch) or in a difference between your feature branch and the `main` branch.
-
-It's expected that sometimes a contributor will be asked to fix linter issues unrelated to their contribution, since the linter was introduced later than changes in some of the files.
-
-You can also run the linter against an individual package, for example the filebeat command package:
-
-[source,shell]
--------------------------------------------------------------------------------
-golangci-lint run ./filebeat/cmd/...
--------------------------------------------------------------------------------
-
-[float]
-[[running-testsuite]]
-=== Testing
-
-You can run the whole testsuite with the following command:
-
-[source,shell]
--------------------------------------------------------------------------------
-make testsuite
--------------------------------------------------------------------------------
-
-Running the testsuite has the following requirements:
-
-* Python >= {python}
-* Docker >= {docker}
-* Docker-compose >= {docker-compose}
-
-For more details, refer to the <> guide.
-
-[float]
-[[documentation]]
-=== Documentation
-
-The main documentation for each Beat is located under `/docs` and is
-based on https://docs.asciidoctor.org/asciidoc/latest/[AsciiDoc]. The Beats
-documentation also makes extensive use of conditionals and content reuse to
-ensure consistency and accuracy. Before contributing to the documentation, read
-the following resources:
-
-* https://github.com/elastic/docs/blob/master/README.asciidoc[Docs HOWTO]
-* <>
-
-[float]
-[[dependencies]]
-=== Dependencies
-
-To create Beats, we rely on Go libraries and other
-external tools.
-
-[float]
-==== Other dependencies
-
-Besides Go libraries, we are using development tools to generate parsers for inputs and processors.
-
-The following packages are required to run `go generate`:
-
-[float]
-===== Auditbeat
-
-* FlatBuffers >= 1.9
-
-[float]
-===== Filebeat
-
-* Graphviz >= 2.43.0
-* Ragel >= 6.10
-
-
-[float]
-[[changelog]]
-=== Changelog
-
-To keep up to date with changes to the official Beats, community developers can
-follow the developer changelog
-https://github.com/elastic/beats/blob/main/CHANGELOG-developer.next.asciidoc[here].
-
diff --git a/docs/devguide/create-metricset.asciidoc b/docs/devguide/create-metricset.asciidoc
deleted file mode 100644
index 2c2d798086b1..000000000000
--- a/docs/devguide/create-metricset.asciidoc
+++ /dev/null
@@ -1,320 +0,0 @@
-[[creating-metricsets]]
-=== Creating a Metricset
-
-include::generator-support-note.asciidoc[tag=metricset-generator]
-
-A metricset is the part of a Metricbeat module that fetches and structures the
-data from the remote service. Each module can have multiple metricsets. In this guide, you learn how to create your own metricset.
-
-When creating a metricset for the first time, it generally helps to look at the
-implementation of existing metricsets for inspiration.
-
-To create a new metricset:
-
-. Run the following command inside the `metricbeat` directory:
-+
-[source,bash]
-----
-make create-metricset
-----
-+
-You need Python to run this command. You'll then be prompted to enter a module and metricset name. Remember that a module represents the service you want to retrieve metrics from (like Redis) and a metricset is a specific set of grouped metrics (like `info` on Redis). Only use characters `[a-z]`
-and, if required, underscores (`_`). No other characters are allowed.
-+
-When you run `make create-metricset`, it creates all the basic files for your metricset, along with the required module
-files if the module does not already exist. See <> for more details about the module files.
-+
-NOTE: We use `{metricset}`, `{module}`, and `{beat}` in this guide as placeholders. You need to replace these with
-the actual names of your metricset, module, and beat.
-+
-The metricset that you created is already a functioning metricset and can be compiled.
-+
-. Compile your new metricset by running the following commands:
-+
-[source,bash]
-----
-mage update
-mage build
-----
-+
-The first command, `mage update`, updates all generated files with the most recent files, data, and meta information from the metricset. The second command,
-`mage build`, compiles your source code and provides you with a binary called metricbeat in the same folder. You can run the
-binary in debug mode with the following command:
-+
-[source,bash]
-----
-./metricbeat -e -d "*"
-----
-
-After running the mage commands, you'll find the metricset, along with its generated files, under `module/{module}/{metricset}`. This directory
-contains the following files:
-
-* `\{metricset}.go`
-* `_meta/docs.asciidoc`
-* `_meta/data.json`
-* `_meta/fields.yml`
-
-Let's look at the files in more detail next.
-
-[float]
-==== \{metricset}.go File
-
-The first file is `{metricset}.go`. It contains the logic for fetching data from the service and converting it for sending to the output.
-
-The generated file looks like this:
-
-https://github.com/elastic/beats/blob/main/metricbeat/scripts/module/metricset/metricset.go.tmpl
-
-[source,go]
-----
-include::../../metricbeat/scripts/module/metricset/metricset.go.tmpl[]
-----
-
-The `package` clause and `import` declaration are part of the base structure of each Go file. You should only
-modify this part of the file if your implementation requires more imports.
-
-[float]
-===== Initialisation
-
-The init method registers the metricset with the central registry. In Go the `init()` function is called
-before the execution of all other code. This means the module will be automatically registered with the global registry.
-
-The `New` method, which is passed to `MustAddMetricSet`, will be called after the setup of the module and before starting to fetch data. You normally don't need to change this part of the file.
-
-[source,go]
-----
-func init() {
-	mb.Registry.MustAddMetricSet("{module}", "{metricset}", New)
-}
-----
-
-[float]
-===== Definition
-
-The MetricSet type defines all fields of the metricset. At a minimum, it must be composed of the `mb.BaseMetricSet` fields,
-but it can be extended with additional entries. These variables can be used to persist data or configuration between
-multiple fetch calls.
-
-You can add more fields to the MetricSet type, as you can see in the following example where the `username` and `password` string fields are added:
-
-[source,go]
-----
-type MetricSet struct {
-	mb.BaseMetricSet
-	username string
-	password string
-}
-----
-
-
-[float]
-===== Creation
-
-The `New` function creates a new instance of the MetricSet. The setup process
-of the MetricSet is also part of `New`. This method will be called before `Fetch`
-is called the first time.
-
-The `New` function also sets up the configuration by processing additional
-configuration entries, if needed.
-
-[source,go]
-----
-
-func New(base mb.BaseMetricSet) (mb.MetricSet, error) {
-
-	config := struct{}{}
-
-	if err := base.Module().UnpackConfig(&config); err != nil {
-		return nil, err
-	}
-
-	return &MetricSet{
-		BaseMetricSet: base,
-	}, nil
-}
-----
-
-[float]
-===== Fetching
-
-The `Fetch` method is the central part of the metricset. `Fetch` is called every
-time new data is retrieved. If more than one host is defined, `Fetch` is
-called once for each host. The frequency of calling `Fetch` is based on the `period`
-defined in the configuration file.
-
-`Fetch` must publish the event using the `mb.ReporterV2.Event` method.
-If an error
-happens, `Fetch` can return an error or, if `Event` is being called in a loop,
-publish it using the `mb.ReporterV2.Error` method. This means
-that Metricbeat always sends an event, even on failure. You must make sure that the
-error message helps to identify the actual error.
-
-The following example shows a metricset `Fetch` method with a counter that is
-incremented for each `Fetch` call:
-
-[source,go]
-----
-func (m *MetricSet) Fetch(report mb.ReporterV2) error {
-
-	report.Event(mb.Event{
-		MetricSetFields: common.MapStr{
-			"counter": m.counter,
-		},
-	})
-	m.counter++
-
-	return nil
-}
-----
-
-The JSON output derived from the reported event will be identical to the naming and
-structure you use in `common.MapStr`. For more details about `MapStr` and its functions, see the
-https://godoc.org/github.com/elastic/beats/libbeat/common#MapStr[MapStr API docs].
-
-
-[float]
-===== Multi Fetching
-
-`Event` can be called multiple times inside of the `Fetch` method for metricsets that might expose multiple events.
-`Event` returns a bool that indicates if the metricset is already closed and no further events can be processed,
-in which case `Fetch` should return immediately. If there is an error while processing one of many events,
-it can be published using the `mb.ReporterV2.Error` method, as opposed to returning an error value.
-
-[float]
-===== Parsing and Normalizing Fields
-
-In Metricbeat we aim to normalize the metric names from all metricsets to
-respect a common <>. This
-makes it easy for users to find and interpret metrics. To simplify parsing,
-converting, renaming, and restructuring of the object read from the monitored
-system to the Metricbeat format, we have created the
-https://godoc.org/github.com/elastic/beats/libbeat/common/schema[schema] package
-that allows you to declaratively define transformations.
-
-For example, assuming this input object:
-
-[source,go]
-----
-input := map[string]interface{}{
-	"testString":    "hello",
-	"testInt":       "42",
-	"testBool":      "true",
-	"testFloat":     "42.1",
-	"testObjString": "hello, object",
-}
-----
-
-And the requirement to transform it into this one:
-
-[source,go]
-----
-common.MapStr{
-	"test_string": "hello",
-	"test_int":    int64(42),
-	"test_bool":   true,
-	"test_float":  42.1,
-	"test_obj": common.MapStr{
-		"test_obj_string": "hello, object",
-	},
-}
-----
-
-You can use the schema package to transform the data, and optionally mark some fields in a schema as required or not. For example:
-
-[source,go]
-----
-import (
-	s "github.com/elastic/beats/libbeat/common/schema"
-	c "github.com/elastic/beats/libbeat/common/schema/mapstrstr"
-)
-
-var (
-	schema = s.Schema{
-		"test_string": c.Str("testString", s.Required), <1>
-		"test_int":    c.Int("testInt"), <2>
-		"test_bool":   c.Bool("testBool", s.Optional), <3>
-		"test_float":  c.Float("testFloat"),
-		"test_obj": s.Object{
-			"test_obj_string": c.Str("testObjString", s.IgnoreAllErrors), <4>
-		},
-	}
-)
-
-func eventMapping(input map[string]interface{}) (common.MapStr, error) {
-	return schema.Apply(input) <5>
-}
-----
-<1> Marks a field as required.
-<2> If a field has no schema option set, it is equivalent to `Required`.
-<3> Marks the field as optional.
-<4> Ignores any value conversion errors.
-<5> By default, `Apply` will fail and return an error if any required field is missing. Using the optional second argument, you can specify how `Apply` handles different fields of the schema. The possible values are:
-- `AllRequired` is the default behavior. Returns an error if any required field is
-missing, including fields that are required because no schema option is set.
-- `FailOnRequired` will fail if a field explicitly marked as `required` is missing.
-- `NotFoundKeys(cb func([]string))` takes a callback function that will be called with a list of missing keys, allowing for finer-grained error handling.
-
-In the above example, note that it is possible to create the schema object once
-and apply it to all events. You can also use `ApplyTo` to add additional data to an existing `MapStr` object:
-[source,go]
-----
-
-var (
-	schema = s.Schema{
-		"test_string": c.Str("testString"),
-		"test_int":    c.Int("testInt"),
-		"test_bool":   c.Bool("testBool"),
-		"test_float":  c.Float("testFloat"),
-		"test_obj": s.Object{
-			"test_obj_string": c.Str("testObjString"),
-		},
-	}
-
-	additionalSchema = s.Schema{
-		"second_string": c.Str("secondString"),
-		"second_int":    c.Int("secondInt"),
-	}
-)
-
-	data, err := schema.Apply(input)
-	if err != nil {
-		return err
-	}
-
-	if m.parseMoreData {
-		_, err := additionalSchema.ApplyTo(data, input)
-		if len(err) > 0 { <1>
-			return err.Err()
-		}
-	}
-
-----
-<1> `ApplyTo` returns a raw MultiError object, making it suitable for finer-grained error handling.
-
-
-[float]
-==== Configuration File
-
-The configuration file for a metricset is handled by the module. If there are
-multiple metricsets in one module, make sure you add all metricsets to the configuration.
-For example:
-
-[source,yaml]
-----
-metricbeat:
-  modules:
-    - module: {module-name}
-      metricsets: ["{metricset1}", "{metricset2}"]
-----
-
-NOTE: Make sure that you run `make collect` after updating the config file
-so that your changes are also applied to the global configuration file and the docs.
-
-For more details about the Metricbeat configuration file, see the topic about
-{metricbeat-ref}/configuration-metricbeat.html[Modules] in the Metricbeat
-documentation.
-
-
-[float]
-==== What to Do Next
-
-This topic provides basic steps for creating a metricset. For more details about metricsets
-and how to extend your metricset further, see <>.
-
diff --git a/docs/devguide/create-module.asciidoc b/docs/devguide/create-module.asciidoc
deleted file mode 100644
index 002ec717364b..000000000000
--- a/docs/devguide/create-module.asciidoc
+++ /dev/null
@@ -1,185 +0,0 @@
-[[creating-metricbeat-module]]
-=== Creating a Metricbeat Module
-
-Metricbeat modules are used to group multiple metricsets together and to implement shared functionality
-of the metricsets. In most cases, no implementation of the module is needed and the default module
-implementation is automatically picked.
-
-It's important to complete the configuration and documentation files for a module. When you create a new
-metricset by running `make create-metricset`, default versions of these files are generated in the `_meta` directory.
-
-[float]
-==== Module Files
-
-* `config.yml` and `config.reference.yml`
-* `docs.asciidoc`
-* `fields.yml`
-
-After updating any of these files, make sure you run `make update` in your beat directory so all generated
-files are updated.
-
-
-[float]
-===== config.yml and config.reference.yml
-
-The `config.yml` file contains the basic configuration options and looks like this:
-
-[source,yaml]
-----
-include::../../metricbeat/scripts/module/config.yml[]
-----
-
-It contains the module name, your metricset, and the default period.
-If you have multiple
-metricsets in your module, make sure that you extend the `metricsets` array:
-
-[source,yaml]
-----
-  metricsets: ["{metricset1}", "{metricset2}"]
-----
-
-The `config.reference.yml` file is optional and by default has the same content as the `config.yml`. It is used
-to add and document more advanced configuration options that should not be part of the minimal
-config file shipped by default.
-
-[float]
-===== docs.asciidoc
-
-The `docs.asciidoc` file contains the documentation about your module. During generation of the
-documentation, the default config file will be appended to the docs. Use this file to describe your
-module in more detail and to document specific configuration options.
-
-[source,asciidoc]
-----
-include::../../metricbeat/scripts/module/docs.asciidoc[]
-----
-
-[float]
-===== fields.yml
-
-The `fields.yml` file contains the top level structure for the fields in your metricset. It's used in combination with
-the `fields.yml` file in each metricset to generate the template and documentation for the fields.
-
-The default file looks like this:
-
-[source,yaml]
-----
-include::../../metricbeat/scripts/module/fields.yml[]
-----
-
-Make sure that you update at least the description of the module.
-
-
-[float]
-==== Testing
-
-It's a common pattern to use a `testing.go` file in the module package to share some testing functionality among
-the metricsets. This file does not have `_test.go` in the name because otherwise it would not be compiled for sub packages.
-
-To see an example of the `testing.go` file, look at the https://github.com/elastic/beats/tree/{branch}/metricbeat/module/mysql[mysql module].
-
-[float]
-===== Test a Metricbeat module manually
-
-To test a Metricbeat module manually, follow the steps below.
-
-First, we have to build the Docker image that is available for the module. The Dockerfile is located inside a `_meta` folder within each module folder. As an example, let's take the MySQL module.
-
-These steps assume you have checked out the Beats repository from GitHub and are inside the `beats` directory. First, we have to enter the `_meta` folder mentioned above and build the Docker image called `metricbeat-mysql`:
-
-[source,bash]
-----
-$ cd metricbeat/module/mysql/_meta/
-$ docker build -t metricbeat-mysql .
-...
-Removing intermediate container 0e58cfb7b197
- ---> 9492074840ea
-Step 5/5 : COPY test.cnf /etc/mysql/conf.d/test.cnf
- ---> 002969e1d810
-Successfully built 002969e1d810
-Successfully tagged metricbeat-mysql:latest
-----
-
-Before we run the container we have just created, we also need to know which port to expose. The port is listed in the `metricbeat/{module}/_meta/env` file:
-
-[source,bash]
-----
-$ cat env
-MYSQL_DSN=root:test@tcp(mysql:3306)/
-MYSQL_HOST=mysql
-MYSQL_PORT=3306
-----
-
-As we can see, the port is 3306. We now have all the information to start our MySQL service locally:
-
-[source,bash]
-----
-$ docker run -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret metricbeat-mysql
-----
-
-This starts the container, and you can now use it for testing the MySQL module.
-
-To run Metricbeat with the module, we first need to build the binary and enable the module. This assumes you are back in the `beats` folder:
-
-[source,bash]
-----
-$ cd metricbeat
-$ mage build
-$ ./metricbeat modules enable mysql
-----
-
-This will enable the module and rename the file `metricbeat/modules.d/mysql.yml.disabled` to `metricbeat/modules.d/mysql.yml`.
-According to our {metricbeat-ref}/metricbeat-module-mysql.html[documentation], we should specify a username and password to use MySQL. It's always a good idea to take a look at the docs; there you'll also see that a pre-built dashboard is available. So, tweaking the config a bit, this is how it looks:
-
-[source,yaml]
-----
-$ cat modules.d/mysql.yml
-
-# Module: mysql
-# Docs: https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-mysql.html
-
-- module: mysql
-  metricsets:
-    - status
-  # - galera_status
-  period: 10s
-
-  # Host DSN should be defined as "user:pass@tcp(127.0.0.1:3306)/"
-  # or "unix(/var/lib/mysql/mysql.sock)/",
-  # or another DSN format supported by .
-  # The username and password can either be set in the DSN or using the username
-  # and password config options. Those specified in the DSN take precedence.
-  hosts: ["tcp(127.0.0.1:3306)/"]
-
-  # Username of hosts. Empty by default.
-  username: root
-
-  # Password of hosts. Empty by default.
-  password: secret
-----
-
-It's now sending data to your local Elasticsearch instance. If you need to modify the mysql config, adjust `modules.d/mysql.yml` and restart Metricbeat.
-
-
-
-[float]
-===== Run Environment tests for one module
-
-All the environments are set up with Docker. `make integration-tests-environment` and `make system-tests-environment` can be used to run tests for all modules. If you are developing a module, it is convenient to run the tests for only that module and to run them directly on your machine.
-
-First, you need to start the environment for the module you want to test and expose its port to your local machine. To do this, you can run the following command inside the metricbeat directory:
-
-[source,bash]
-----
-MODULE=apache PORT=80 make run-module
-----
-
-Note: The apache module with port 80 is taken here as an example. You must put the name and port for your own module here.
-
-This will start the environment, and you must wait until the service is completely started. After that, you can run the tests that require an environment:
-
-[source,bash]
-----
-MODULE=apache make test-module
-----
-
-This will run the integration and system tests connecting to the environment in your docker container.
diff --git a/docs/devguide/documentation.asciidoc b/docs/devguide/documentation.asciidoc
deleted file mode 100644
index 82e12a2721bb..000000000000
--- a/docs/devguide/documentation.asciidoc
+++ /dev/null
@@ -1,114 +0,0 @@
-[[contributing-docs]]
-=== Contributing to the docs
-
-The Beats documentation follows the tagging guidelines described in the
-https://github.com/elastic/docs/blob/master/README.asciidoc[Docs HOWTO]. However,
-it extends these capabilities in a couple of ways:
-
-* The documentation makes extensive use of
-https://docs.asciidoctor.org/asciidoc/latest/directives/conditionals/[AsciiDoc conditionals]
-to provide content that is reused across multiple books (see the short sketch
-after this list). This means that there
-might not be a single source file for each published HTML page. Some files are
-shared across multiple books, either as complete pages or snippets. For more
-details, refer to <>.
-
-* The documentation includes some files that are generated from YAML source or
-pieced together from content that lives in `_meta` directories under the code
-(for example, the module and exported fields documentation). For more details,
-refer to <>.
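-
-As a rough illustration of the first point, a conditional block in the docs
-source looks like the following (the attribute test here is only a sketch;
-shared files key off attributes such as `beatname_lc`, which each book sets):
-
-[source,asciidoc]
-----
-ifeval::["{beatname_lc}"=="auditbeat"]
-This paragraph is only included when the shared file is built
-as part of the Auditbeat book.
-endif::[]
-----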
- -[float] -[[where-to-find-files]] -==== Where to find the Beats docs source - -Because the Beats documentation makes use of shared content, doc generation -scripts, and componentization, the source files are located in several places: - -|=== -| Documentation | Location of source files - -| Main docs for the Beat, including index files -| `/docs` - -| Shared docs and Beats Platform Reference -| `libbeat/docs` - -| Processor docs -| `docs` folders under processors in `libbeat/processors/`, -`x-pack//processors/`, and `x-pack/libbeat/processors/` - -| Output docs -| `docs` folders under outputs in `libbeat/outputs/` - -| Module docs -| `_meta` folders under modules and datasets in `libbeat/module/`, -`/module/`, and `x-pack//module/` -|=== - -The https://github.com/elastic/docs/blob/master/conf.yaml[conf.yaml] file in the -`docs` repo shows all the resources used to build each book. This file is used -to drive the classic docs build and is the source of truth for file locations. - -TIP: If you can't find the source for a page you want to update, go to the -published page at www.elastic.co and click the Edit link to navigate to the -source. - -The Beats documentation build also has dependencies on the following files in -the https://github.com/elastic/docs[docs] repo: - -* `shared/versions/stack/.asciidoc` -* `shared/attributes.asciidoc` - -[float] -[[generated-docs]] -==== Generated docs - -After updating `docs.asciidoc` files in `_meta` directories, you must run the -doc collector scripts to regenerate the docs. - -Make sure you -<> and use -the correct Go version. The Go version is listed in the `version.asciidoc` file -for the branch you want to update. - -To run the docs collector scripts, change to the beats directory and run: - -`make update` - -WARNING: The `make update` command overwrites files in the `docs` directories -**without warning**. If you accidentally update a generated file and run -`make update`, your changes will be overwritten. - -To format your files, you might also need to run this command: - -`make fmt` - -The make command calls the following scripts to generate the docs: - -https://github.com/elastic/beats/blob/main/auditbeat/scripts/docs_collector.py[auditbeat/scripts/docs_collector.py] -generates: - -* `auditbeat/docs/modules_list.asciidoc` -* `auditbeat/docs/modules/*.asciidoc` - -https://github.com/elastic/beats/blob/main/filebeat/scripts/docs_collector.py[filebeat/scripts/docs_collector.py] -generates: - -* `filebeat/docs/modules_list.asciidoc` -* `filebeat/docs/modules/*.asciidoc` - -https://github.com/elastic/beats/blob/main/metricbeat/scripts/mage/docs_collector.go[metricbeat/scripts/mage/docs_collector.go] -generates: - -* `metricbeat/docs/modules_list.asciidoc` -* `metricbeat/docs/modules/*.asciidoc` - -https://github.com/elastic/beats/blob/main/libbeat/scripts/generate_fields_docs.py[libbeat/scripts/generate_fields_docs.py] -generates - -* `auditbeat/docs/fields.asciidoc` -* `filebeat/docs/fields.asciidoc` -* `heartbeat/docs/fields.asciidoc` -* `metricbeat/docs/fields.asciidoc` -* `packetbeat/docs/fields.asciidoc` -* `winlogbeat/docs/fields.asciidoc` diff --git a/docs/devguide/event-conventions.asciidoc b/docs/devguide/event-conventions.asciidoc deleted file mode 100644 index 3d2c09513272..000000000000 --- a/docs/devguide/event-conventions.asciidoc +++ /dev/null @@ -1,75 +0,0 @@ -[[event-conventions]] -=== Naming Conventions - -When creating events, use the following conventions for field names and abbreviations. 
-
-[[field-names]]
-==== Field Names
-
-Use the following naming conventions for field names:
-
-- All fields must be lower case.
-- Use snake case (underscores) for combining words.
-- Group related fields into subdocuments by using dot (.) notation. Groups typically have common prefixes. For example, if you have fields called `CPULoad` and `CPUSystem` in a service, you would convert
-them into `cpu.load` and `cpu.system` in the event.
-- Avoid repeating the namespace in field names. If a word or abbreviation appears in the namespace, it's not needed in the field name. For example, instead of `cpu.cpu_load`, use `cpu.load`.
-- Use <> when the metric matches one of the known units.
-- Use <> and avoid using abbreviations that aren't commonly known.
-- Organise the documents from general to specific to allow for namespacing. The type, such as `.pct`, should always be last. For example, `system.core.user.pct`.
-- If two fields are the same, but with different units, remove the less granular one. For example, include `timeout.sec`, but don't include `timeout.min`. If a less granular value is required, you can calculate it later.
-- If a field name matches the namespace used for nested fields, add `.value` to the field name. For example, instead of:
-+
-[source,yaml]
-----------
-workers
-workers.busy
-workers.idle
-----------
-+
-Use:
-+
-[source,yaml]
-----------
-workers.value
-workers.busy
-workers.idle
-----------
-- Do not use dots (.) in individual field names. Dots are reserved for grouping related fields into subdocuments.
-- Use singular and plural names properly to reflect the field content. For example, use `requests_per_sec` rather than `request_per_sec`.
-
-[[units]]
-==== Units
-
-These are well-known suffixes to represent units of stored values; use them as a dotted suffix when
-possible. For example, `system.memory.used.bytes` or `system.diskio.read.count`:
-
-[options="header"]
-|=======================
-|Suffix |Units
-|count |item count
-|pct |percentage
-|day |days
-|sec |seconds
-|ms |milliseconds
-|us |microseconds
-|ns |nanoseconds
-|bytes |bytes
-|mb |megabytes
-|=======================
-
-
-[[abbreviations]]
-==== Standardised Names
-
-Here is a list of standardised names and units that are used across all Beats:
-
-[options="header"]
-|=======================
-|Use... |Instead of...
-|avg |average
-|connection |conn
-|max |maximum
-|min |minimum
-|request |req
-|msg |message
-|=======================
diff --git a/docs/devguide/faq.asciidoc b/docs/devguide/faq.asciidoc
deleted file mode 100644
index 2f37bf0553dd..000000000000
--- a/docs/devguide/faq.asciidoc
+++ /dev/null
@@ -1,21 +0,0 @@
-[[dev-faq]]
-=== Metricbeat Developer FAQ
-
-This is a list of common questions that come up when creating a metricset, along with the potential answers.
-
-[float]
-==== Metricset is not compiled
-
-Are you compiling your Beat, but the newly created metricset is not being compiled?
-
-Make sure that the paths to your module and metricset are added as import paths either in your `main.go`
-file or your `include/list.go` file. You can do this manually or by running `make imports`.
-
-[float]
-==== Metricset is not started
-
-Is the metricset compiled, but not started when starting Metricbeat?
-
-After creating your metricset, make sure you run `make collect`. This command adds the configuration
-of your metricset to the default configuration. If the metricset still doesn't start, check your
-default configuration file to see if the metricset is listed there.
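-
-As a rough sketch of what to look for (the names in curly braces are the
-placeholders used throughout this guide, and the period shown is arbitrary),
-the entry in the default configuration resembles:
-
-[source,yaml]
-----
-metricbeat.modules:
-- module: {module}
-  metricsets: ["{metricset}"]
-  period: 10s
-----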
diff --git a/docs/devguide/fields-yml.asciidoc b/docs/devguide/fields-yml.asciidoc deleted file mode 100644 index 87197fc2fe91..000000000000 --- a/docs/devguide/fields-yml.asciidoc +++ /dev/null @@ -1,163 +0,0 @@ -[[event-fields-yml]] -=== Defining field mappings - -You must define the fields used by your Beat, along with their mapping details, -in `_meta/fields.yml`. After editing this file, run `make update`. - -Define the field mappings in the `fields` array: - -[source,yaml] ----------------------------------------------------------------------- -- key: mybeat - title: mybeat - description: These are the fields used by mybeat. - fields: - - name: last_name <1> - type: keyword <2> - required: true <3> - description: > <4> - The last name. - - name: first_name - type: keyword - required: true - description: > - The first name. - - name: comment - type: text - required: false - description: > - Comment made by the user. ----------------------------------------------------------------------- - -<1> `name`: The field name -<2> `type`: The field type. The value of `type` can be any datatype {ref}/mapping-types.html[available in {es}]. If no value is specified, the default type is `keyword`. -<3> `required`: Whether or not a field value is required -<4> `description`: Some information about the field contents - -==== Mapping parameters - -You can specify other mapping parameters for each field. See the -{ref}/mapping-params.html[{es} Reference] for more details about each -parameter. - -[horizontal] -`format`:: Specify a custom date format used by the field. -`multi_fields`:: For `text` or `keyword` fields, use `multi_fields` to define -multi-field mappings. -`enabled`:: Whether or not the field is enabled. -`analyzer`:: Which analyzer to use when indexing. -`search_analyzer`:: Which analyzer to use when searching. -`norms`:: Applies to `text` and `keyword` fields. Default is `false`. -`dynamic`:: Dynamic field control. Can be one of `true` (default), `false`, or -`strict`. -`index`:: Whether or not the field should be indexed. -`doc_values`:: Whether or not the field should have doc values generated. -`copy_to`:: Which field to copy the field value into. -`ignore_above`:: {es} ignores (does not index) strings that are longer than the -specified value. When this property value is missing or `0`, the `libbeat` -default value of `1024` characters is used. If the value is `-1`, the {es} -default value is used. - -For example, you can use the `copy_to` mapping parameter to copy the -`last_name` and `first_name` fields into the `full_name` field at index time: - -[source,yaml] ----------------------------------------------------------------------- -- key: mybeat - title: mybeat - description: These are the fields used by mybeat. - fields: - - name: last_name - type: text - required: true - copy_to: full_name <1> - description: > - The last name. - - name: first_name - type: text - required: true - copy_to: full_name <2> - description: > - The first name. - - name: full_name - type: text - required: false - description: > - The last_name and first_name combined into one field for easy searchability. ----------------------------------------------------------------------- -<1> Copy the value of `last_name` into `full_name` -<2> Copy the value of `first_name` into `full_name` - -There are also some {kib}-specific properties, not detailed here. These are: -`analyzed`, `count`, `searchable`, `aggregatable`, and `script`. 
-{kib} parameters can also be described using `pattern`, `input_format`,
-`output_format`, `output_precision`, `label_template`, `url_template`, and
-`open_link_in_current_tab`.
-
-==== Defining text multi-fields
-
-There are various options that you can apply when using text fields. You can
-define a simple text field using the default analyzer without any other options,
-as in the example shown earlier.
-
-To keep the original keyword value when using `text` mappings, for instance to
-use in aggregations or ordering, you can use a multi-field mapping:
-
-[source,yaml]
-----------------------------------------------------------------------
-- key: mybeat
-  title: mybeat
-  description: These are the fields used by mybeat.
-  fields:
-    - name: city
-      type: text
-      multi_fields: <1>
-        - name: keyword <2>
-          type: keyword <3>
-----------------------------------------------------------------------
-<1> `multi_fields`: Define the `multi_fields` mapping parameter.
-<2> `name`: This is a conventional name for a multi-field. It can be anything (`raw` is another common option) but the convention is to use `keyword`.
-<3> `type`: Specify the `keyword` type to use the field in aggregations or to order documents.
-
-For more information, see the {ref}/multi-fields.html[{es} documentation about
-multi-fields].
-
-==== Defining a text analyzer in-line
-
-It is possible to define a new text analyzer or search analyzer in-line with
-the field definition in the field's mapping parameters.
-
-For example, you can define a new text analyzer that does not break hyphenated names:
-
-[source,yaml]
-----------------------------------------------------------------------
-- key: mybeat
-  title: mybeat
-  description: These are the fields used by mybeat.
-  fields:
-    - name: last_name
-      type: text
-      required: true
-      description: >
-        The last name.
-      analyzer:
-        mybeat_hyphenated_name: <1>
-          type: pattern <2>
-          pattern: "[\\W&&[^-]]+" <3>
-      search_analyzer:
-        mybeat_hyphenated_name: <4>
-          type: pattern
-          pattern: "[\\W&&[^-]]+"
-----------------------------------------------------------------------
-<1> Use a newly defined text analyzer
-<2> Define the custom analyzer type
-<3> Specify the analyzer behaviour
-<4> Use the same analyzer for the search
-
-The names of custom analyzers that are defined in-line may not be reused for a different
-text analyzer. If a text analyzer name is reused, it is checked against the existing
-instances of that analyzer for a match. It is recommended that the analyzer name be prefixed with the
-beat name to avoid name clashes.
-
-For more information, see {ref}/analysis-custom-analyzer.html[{es} documentation about
-defining custom text analyzers].
diff --git a/docs/devguide/generator-support-note.asciidoc b/docs/devguide/generator-support-note.asciidoc
deleted file mode 100644
index 25579798ed28..000000000000
--- a/docs/devguide/generator-support-note.asciidoc
+++ /dev/null
@@ -1,13 +0,0 @@
-// tag::metricset-generator[]
-IMPORTANT: Elastic provides no warranty or support for the code used to generate
-metricsets. The generator is mainly offered as guidance for developers who want
-to create their own data shippers.
-
-// end::metricset-generator[]
-
-// tag::filebeat-generator[]
-IMPORTANT: Elastic provides no warranty or support for the code used to generate
-modules and filesets. The generator is mainly offered as guidance for developers
-who want to create their own data shippers.
- -// end::filebeat-generator[] \ No newline at end of file diff --git a/docs/devguide/images/beat_overview.png b/docs/devguide/images/beat_overview.png deleted file mode 100644 index 55621249ec6a..000000000000 Binary files a/docs/devguide/images/beat_overview.png and /dev/null differ diff --git a/docs/devguide/index.asciidoc b/docs/devguide/index.asciidoc deleted file mode 100644 index 3f554ee45540..000000000000 --- a/docs/devguide/index.asciidoc +++ /dev/null @@ -1,42 +0,0 @@ -[[beats-reference]] -= Beats Developer Guide - -:libbeat-dir: {docdir}/../../libbeat/docs - -include::{libbeat-dir}/version.asciidoc[] - -include::{asciidoc-dir}/../../shared/versions/stack/{source_branch}.asciidoc[] - -:dev-guide: true -:beatname_lc: beatname -:beatname_uc: a Beat - -include::{asciidoc-dir}/../../shared/attributes.asciidoc[] - -include::{libbeat-dir}/shared-beats-attributes.asciidoc[] - -include::./pull-request-guidelines.asciidoc[] - -include::./contributing.asciidoc[] - -include::./documentation.asciidoc[] - -include::./testing.asciidoc[] - -include::{libbeat-dir}/communitybeats.asciidoc[] - -include::./fields-yml.asciidoc[] - -include::./event-conventions.asciidoc[] - -include::./python.asciidoc[] - -include::./newdashboards.asciidoc[] - -include::./new_protocol.asciidoc[] - -include::./metricbeat-devguide.asciidoc[] - -include::./modules-dev-guide.asciidoc[] - -include::./migrate-dashboards.asciidoc[] diff --git a/docs/devguide/metricbeat-devguide.asciidoc b/docs/devguide/metricbeat-devguide.asciidoc deleted file mode 100644 index 265bef2b8dd6..000000000000 --- a/docs/devguide/metricbeat-devguide.asciidoc +++ /dev/null @@ -1,61 +0,0 @@ - -[[metricbeat-developer-guide]] -== Extending Metricbeat - -Metricbeat periodically interrogates other services to fetch key metrics -information. As a developer, you can use Metricbeat in two different ways: - -* Extend Metricbeat directly -* Create your own Beat and use Metricbeat as a library - -We recommend that you start by creating your own Beat to keep the development of your own module or metricset -independent of Metricbeat. At a later stage, if you decide to add a module to Metricbeat, you can reuse -the code without making additional changes. - -The following topics describe how to contribute to Metricbeat by adding metricsets, modules, and new Beats based on Metricbeat: - -* <> -* <> -* <> -* <> -* <> - -If you would like to contribute to Metricbeat or the Beats project, also see -<>. - -[[metricbeat-dev-overview]] -=== Overview - -Metricbeat consists of modules and metricsets. A Metricbeat module is typically -named after the service the metrics are fetched from, such as redis, -mysql, and so on. Each module can contain multiple metricsets. A metricset represents -multiple metrics that are normally retrieved with one request from the remote -system. For example, the Redis `info` metricset retrieves info that you get when you -run the Redis `INFO` command, and the MySQL `status` metricset retrieves -info that you get when you issue the MySQL `SHOW GLOBAL STATUS` query. - -[float] -==== Module and Metricset Requirements - -To guarantee the best user experience, it's important to us that only high quality -modules are part of Metricbeat.
The modules and metricsets that are contributed -must meet the following requirements: - -* Complete `fields.yml` file to generate docs and Elasticsearch templates -* Documentation files -* Integration tests -* 80% test coverage (unit, integration, and system tests combined) - -Metricbeat allows you to build a wide variety of modules and metricsets on top of it. -For a module to be accepted, it should focus on fetching service metrics -directly from the service itself and not via a third-party tool. The goal is to -have as few movable parts as possible and for Metricbeat to run as close as -possible to the service that it needs to monitor. - -include::./create-metricset.asciidoc[] - -include::./metricset-details.asciidoc[] - -include::./create-module.asciidoc[] - -include::./faq.asciidoc[] diff --git a/docs/devguide/metricset-details.asciidoc b/docs/devguide/metricset-details.asciidoc deleted file mode 100644 index acb7209d0e82..000000000000 --- a/docs/devguide/metricset-details.asciidoc +++ /dev/null @@ -1,326 +0,0 @@ -[[metricset-details]] -=== Metricset Details - -This topic provides additional details about creating metricsets. - -[float] -=== Adding Special Configuration Options - -Each metricset can have its own configuration variables defined. To make use of -these variables, you must extend the `New` method. For example, let's assume that -you want to add a `password` config option to the metricset. You would extend -`beat.yml` in the following way: - -[source,yaml] ----- -metricbeat.modules: -- module: {module} - metricsets: ["{metricset}"] - password: "test1234" ----- - -To read in the new `password` config option, you need to modify the `New` method. First, you define a config -struct that contains the value types to be read. You can set default values, as needed. Then you pass the config to -the `UnpackConfig` method for loading the configuration. - -Your implementation should look something like this: - -[source,go] ----- -type MetricSet struct { - mb.BaseMetricSet - password string -} - -func New(base mb.BaseMetricSet) (mb.MetricSet, error) { - - // Unpack additional configuration options. - config := struct { - Password string `config:"password"` - }{ - Password: "", - } - err := base.Module().UnpackConfig(&config) - if err != nil { - return nil, err - } - - return &MetricSet{ - BaseMetricSet: base, - password: config.Password, - }, nil -} ----- - - -[float] -==== Timeout Connections to Services - -Each time the `Fetch` method is called, it makes a request to the service, so it's -important to handle the connections correctly. We recommend that you set up the -connections in the `New` method and persist them in the `MetricSet` object. This allows -connections to be reused. - -One very important point is that connections must respect the timeout variable: -`base.Module().Config().Timeout`. If the timeout elapses before the request completes, -the request must be ended and an error must be returned to make sure the next request -can be started on time. By default, the timeout is set to the period, so one request ends -before a new request is made. - -If a request must be ended or has an error, make sure that you return a useful error -message. This error message is also sent to Elasticsearch, making it possible to not -only fetch metrics from the service, but also report potential problems or errors with -the metricset.
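For illustration, here is a minimal sketch of a `Fetch` implementation that derives all of its deadlines from the module timeout, assuming a metricset that talks to its service over plain TCP. For brevity it dials in `Fetch` rather than persisting the connection from `New` as recommended above, and the request logic and event mapping are elided:

[source,go]
----
import (
	"fmt"
	"net"
	"time"

	"github.com/elastic/beats/metricbeat/mb"
)

// Fetch bounds both the dial and the request/response exchange with the
// module timeout so that a slow service cannot stall the next fetch.
func (m *MetricSet) Fetch(report mb.ReporterV2) error {
	timeout := m.Module().Config().Timeout

	conn, err := net.DialTimeout("tcp", m.Host(), timeout)
	if err != nil {
		return fmt.Errorf("connecting to %v: %w", m.Host(), err)
	}
	defer conn.Close()

	// The deadline applies to all reads and writes on the connection,
	// not only to the initial dial.
	if err := conn.SetDeadline(time.Now().Add(timeout)); err != nil {
		return err
	}

	// ... issue the request, parse the response, and report the event ...
	return nil
}
----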
- - -[float] -==== Data Transformation - -If the data transformation that has to happen in the `Fetch` method is -extensive, we recommend that you create a second file called `data.go` in the same package -as the metricset. The `data.go` file should contain a function called `eventMapping(...)`. -A separate file is not required, but is currently a best practice because it isolates the -functionality of the metricset and `Fetch` method from the data mapping. - - - -[float] -==== fields.yml - -You can find up to three different types of files named `fields.yml` in the beats repository for each metricbeat module: - -* `metricbeat/fields.yml`: Contains the definitions to create the Elasticsearch template, the Kibana index pattern configuration and the exported fields documentation for metricsets. To make sure the Elasticsearch template is correct, it's important to keep this file up-to-date with all the changes. Generally, you shouldn't touch this file manually because it's generated by some commands in the build environment. -* `metricbeat/module/{module}/_meta/fields.yml`: Contains the general top level structure for all metricsets in a module. -Normally you only need to modify the description in this file. Here is an example for the `fields.yml` file from the MySQL module. -+ -[source,yaml] ----- -include::../../metricbeat/module/mysql/_meta/fields.yml[] ----- -+ -* `metricbeat/module/{module}/{metricset}/_meta/fields.yml`: Contains all field definitions retrieved by the metricset. -As field types, each field must have a core data type -{ref}/mapping-types.html#_core_datatypes[supported by Elasticsearch]. Here's a very basic example that shows one group from the MySQL `status` metricset: -+ -[source,yaml] ----- -- name: status - type: group - description: > - `status` contains the metrics that were obtained by the status SQL query. - fields: - - name: aborted - type: group - description: Aborted status fields. - fields: - - name: clients - type: integer - description: > - The number of connections that were aborted because the client died without closing the connection properly. - - - name: connects - type: integer - description: > - The number of failed attempts to connect to the MySQL server. ----- -+ - -// TODO: Add link to general fields.yml developer guide - -[float] -==== Testing - -It's important to also add tests for your metricset. There are three different types of tests that you need for testing a Beat: - -* unit tests -* integration tests -* system tests - -We recommend that you use all three when you create a metricset. Unit tests are -written in Go and have no dependencies. Integration tests are also written -in Go but require the service from which the module collects metrics to also be running. -System tests for Metricbeat also require the service to be running in most cases and are -written in Python {python_major_version} based on our small Python test framework. -We use https://docs.python.org/3/library/venv.html[venv] to deal with Python dependencies. -You can simply run the command `make python-env` and then `. build/python-env/bin/activate`. - -You should use a combination of the three test types to test your metricsets because -each method has advantages and disadvantages. To get started with your own tests, it's best -to look at the existing tests. You'll find the unit and integration tests -in the `_test.go` files under existing modules and metricsets. -Integration tests usually take the form of `TestFetch` and `TestData`. -The system tests are under `tests/systems`.
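To prepare the Python environment used by the system tests, the two commands mentioned above are enough, run from the Metricbeat directory:

[source,shell]
----
make python-env
. build/python-env/bin/activate
----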
- - -[float] -===== Adding a Test Environment - -Integration and system tests need an environment that's running the service. You -can create this environment by using Docker and a docker-compose file. If you add a -module that requires a service, you must add the service to the virtual environment. -To do this, you: - -* Update the `docker-compose.yml` file with your environment -* Update the `docker-entrypoint.sh` script - -The `docker-compose.yml` file is at the root of Metricbeat. Most services have -existing Docker modules and can be added as simply as Redis: - -[source,yaml] ----- -redis: - image: redis:3.2.3 ----- - -To allow the Beat to access your service, make sure that you define the environment -variables in the docker-compose file and add the link to the container: - -[source,yaml] ----- -beat: - links: - - redis - environment: - - REDIS_HOST=redis - - REDIS_PORT=6379 ----- - -To make sure the service is running before the tests are started, modify the -`docker-entrypoint.sh` script to add a check that verifies your service is -running. For example, the check for Redis looks like this: - -[source,shell] ----- -waitFor ${REDIS_HOST} ${REDIS_PORT} Redis ----- - -The environment considers your service to be available as soon as it receives a response from -the given address and port. - -[float] -===== Adding the standard metricset integration tests - -There are normally two integration tests that are part of every metricset: `TestFetch` and `TestData`. -Both tests will start up a new instance of your metricset and fetch an event. In order to start a metricset, you need to create a configuration object: - -[source,go] ----- -func getConfig() map[string]interface{} { - return map[string]interface{}{ - "module": "{module}", - "metricsets": []string{"{metricset}"}, - "hosts": []string{GetEnvHost() + ":" + GetEnvPort()}, <1> - } -} - -func GetEnvHost() string { <2> - host := os.Getenv("{module}_HOST") - if len(host) == 0 { - host = "127.0.0.1" - } - return host -} - -func GetEnvPort() string { <2> - port := os.Getenv("{module}_PORT") - - if len(port) == 0 { - port = "1234" - } - return port -} - ----- -<1> Add any additional config options your metricset needs here. -<2> The endpoint used by the metricset needs to be configurable for manual and automated testing. -Environment variables should be defined in the module under `_meta/env` and included in the `docker-compose.yml` file. - -The `TestFetch` integration test will return a single event from your metricset, which you can use to test the validity of the data. -`TestData` will (re)generate the `_meta/data.json` file that documents the data reported by the metricset. - -[source,go] ----- -import ( - "os" - "testing" - - "github.com/stretchr/testify/assert" - - "github.com/elastic/beats/libbeat/tests/compose" - mbtest "github.com/elastic/beats/metricbeat/mb/testing" -) - -func TestFetch(t *testing.T) { - compose.EnsureUp(t, "{module}") <1> - - f := mbtest.NewReportingMetricSetV2Error(t, getConfig()) - - events, errs := mbtest.ReportingFetchV2Error(f) - if len(errs) > 0 { - t.Fatalf("Expected 0 errors, had %d. %v\n", len(errs), errs) - } - - assert.NotEmpty(t, events) <2> - -} - -func TestData(t *testing.T) { - - f := mbtest.NewReportingMetricSetV2Error(t, getConfig()) - - err := mbtest.WriteEventsReporterV2Error(f, t, "") <3> - if !assert.NoError(t, err) { - t.FailNow() - } -} ----- -<1> Use this to start the docker service associated with your metricset. -<2> Add any further validity checks to verify the metricset is working.
-<3> `WriteEventsReporterV2Error` will take the first valid event from the metricset and write it to `_meta/data.json` - -[float] -===== Running the Tests - -To run all the tests, run `make testsuite`. To only run unit tests, run -`mage unitTest`, or for integration tests `mage integTest`. -Be aware that a running Docker environment is needed for integration and system -tests. - -To run `TestData` and generate the `data.json` file, run -`go test -tags=integration -data -run TestData` in the directory where your test is located. - -To run the integration tests for a single module, set the `MODULE` environment -variable to the name of the directory of the module. For example, you can run the -following command to run the integration tests for the `apache` module: - -[source,shell] ----- -MODULE=apache mage integTest ----- - - -[float] -=== Documentation - -Each module must be documented. The documentation is based on asciidoc and is in -the file `module/{module}/_meta/docs.asciidoc` for the module and in `module/{module}/{metricset}/_meta/docs.asciidoc` - for the metricset. Basic documentation with the config file and an example output is automatically - generated. Use these files to document specific configuration options or usage examples. - - - - -//// -TODO: The following parts should be added as soon as the content exists or the implementation is completed. - -[float] -== Field naming -https://github.com/elastic/beats/blob/main/metricbeat/module/doc.go - -[float] -== Dashboards - -Dashboards are an important part of each metricset. Data gets much more useful -when visualized. To create dashboards for the metricset, follow the guide here -(link to dashboard guide). -//// diff --git a/docs/devguide/migrate-dashboards.asciidoc b/docs/devguide/migrate-dashboards.asciidoc deleted file mode 100644 index 453b065c90e2..000000000000 --- a/docs/devguide/migrate-dashboards.asciidoc +++ /dev/null @@ -1,98 +0,0 @@ -== Migrating dashboards from Kibana 5.x to 6.x - -This section helps community Beats developers migrate their Kibana 5.x dashboards to 6.x dashboards. - -In Kibana 5.x, the saved dashboards consist of multiple JSON files, one for each dashboard, search, visualization -and index-pattern. To import a dashboard in Kibana, you need to load not only the JSON file containing the dashboard, but -also all its dependencies (searches, visualizations). - -Starting with Kibana 6.0, the dashboards are loaded by default via the Kibana API. In this case, the saved dashboard -consists of a single JSON file that includes not only the dashboard content, but also all its dependencies. - -As the format of the dashboards and index-pattern for Kibana 5.x is different from the ones for Kibana 6.x, they are placed in different -directories. Depending on the Kibana version, the 5.x or 6.x dashboards are loaded. - -The Kibana 5.x dashboards are placed under the 5.x directory that contains the following directories: -- search -- visualization -- dashboard -- index-pattern - -The Kibana 6.x dashboards and later are placed under the default directory that contains the following directories: -- dashboard -- index-pattern - -NOTE: Please make sure the 5.x and default directories are created before running the following commands. - -To migrate your Kibana 5.x dashboards to Kibana 6.0 and above, you can import the dashboards into Kibana 5.6 and then -export them using the Beats 6.0 version. - -* Start Kibana 5.6 -* Import Kibana 5.x dashboards using the Beats 6.0 version.
- -Before importing the dashboards, make sure you run `make update` in the Beat directory, which updates the `_meta/kibana` directory. It generates the index-pattern from -the `fields.yml` file, and places it under the `5.x/index-pattern` and `default/index-pattern` directories. For Metricbeat, Filebeat, and Auditbeat, -it collects the dashboards from all the modules to the `_meta/kibana` directory. - -[source,shell] ------------------ -make update ------------------ - -Then load all the Beat's dashboards. For example, to load the Metricbeat rabbitmq dashboards together with the Metricbeat index-pattern into Kibana 5.6, -using the Kibana API: - -[source,shell] ------------------ -make update -./metricbeat setup -E setup.dashboards.directory=_meta/kibana ------------------ - -* Export the dashboards using the Beats 6.0 version. - -You can export the dashboards via the Kibana API by using the -https://github.com/elastic/beats/blob/main/dev-tools/cmd/dashboards/export_dashboards.go[export_dashboards.go] -application. - -For example, to export the Metricbeat rabbitmq dashboard: - -[source,shell] ------------------ -cd beats/metricbeat -go run ../dev-tools/cmd/dashboards/export_dashboards.go -dashboards Metricbeat-Rabbitmq -output -module/rabbitmq/_meta/kibana/default/Metricbeat-Rabbitmq.json <1> ------------------ -<1> `Metricbeat-Rabbitmq` is the ID of the dashboard that you want to export. - -Note: You can get the dashboard ID from the URL of the dashboard in Kibana. Depending on the Kibana version in which the -dashboard was created, the ID consists of a name or random characters that can be separated by `-`. - -This command creates a single JSON file (Metricbeat-Rabbitmq.json) that contains the dashboard and all the dependencies like searches and -visualizations. The name of the output file has the format: -.json. - -Starting with Beats 6.0.0, you can create a `yml` file for each module or for the entire Beat with all the dashboards. -Below is an example of the `module.yml` file for the system module in Metricbeat. - -[source,yaml] ----------------- -dashboards: - - id: Metricbeat-system-overview <1> - file: Metricbeat-system-overview.json <2> - - - id: 79ffd6e0-faa0-11e6-947f-177f697178b8 - file: Metricbeat-host-overview.json - - - id: CPU-slash-Memory-per-container - file: Metricbeat-docker-overview.json ----------------- -<1> Dashboard ID. -<2> The JSON file where the dashboard is saved on disk. - -Using the yml file, you can export all the dashboards for a single module or for the entire Beat using a single command: - -[source,shell] ----- -cd metricbeat/module/system -go run ../../../dev-tools/cmd/dashboards/export_dashboards.go -yml module.yml ----- - diff --git a/docs/devguide/modules-dev-guide.asciidoc b/docs/devguide/modules-dev-guide.asciidoc deleted file mode 100644 index 7e5178cd651c..000000000000 --- a/docs/devguide/modules-dev-guide.asciidoc +++ /dev/null @@ -1,530 +0,0 @@ -[[filebeat-modules-devguide]] -== Creating a New Filebeat Module - -include::generator-support-note.asciidoc[tag=filebeat-generator] - -This guide will walk you through creating a new Filebeat module. - -All Filebeat modules currently live in the main -https://github.com/elastic/beats[Beats] repository. To clone the repository and -build Filebeat (which you will need for testing), please follow the general -instructions in <>. - -[float] -=== Overview - -Each Filebeat module is composed of one or more "filesets".
We usually create a -module for each service that we support (`nginx` for Nginx, `mysql` for MySQL, -and so on) and a fileset for each type of log that the service creates. For -example, the Nginx module has `access` and `error` filesets. You can contribute -a new module (with at least one fileset), or a new fileset for an existing -module. - -NOTE: In this guide we use `{module}` and `{fileset}` as placeholders for the -module and fileset names. You need to replace these with the actual names you -entered when you created the module and fileset. Only use characters `[a-z]` and, if required, underscores (`_`). No other characters are allowed. - -[float] -=== Creating a new module - -Run the following command in the `filebeat` folder: - -[source,bash] ----- -make create-module MODULE={module} ----- - -After running the `make create-module` command, you'll find the module, -along with its generated files, under `module/{module}`. This -directory contains the following files: - -[source,bash] ----- -module/{module} -├── module.yml -└── _meta -    └── docs.asciidoc -    └── fields.yml -    └── kibana ----- - -Let's look at these files one by one. - -[float] -==== module.yml - -This file contains a list of all the dashboards available for the module and is used by the `export_dashboards.go` script to export dashboards. -Each dashboard is defined by an ID and the name of the JSON file where the dashboard is saved locally. -When a new fileset is generated, this file is automatically updated with "default" dashboard settings for the fileset. -Please ensure that these settings are correct. - -[float] -==== _meta/docs.asciidoc - -This file contains module-specific documentation. You should include information -about which versions of the service were tested and the variables that are -defined in each fileset. - -[float] -==== _meta/fields.yml - -The module level `fields.yml` contains descriptions for the module-level fields. -Please review and update the title and the descriptions in this file. The title -is used as a title in the docs, so it's best to capitalize it. - -[float] -==== _meta/kibana - -This folder contains the sample Kibana dashboards for this module. To create -them, you can build them visually in Kibana and then export them with `export_dashboards`. - -The tool will export all of the dashboard dependencies (visualizations, -saved searches) automatically. - -You can see various ways of using `export_dashboards` at <>. -The recommended way to export them is to list your dashboards in your module's -`module.yml` file: - -[source,yaml] ----- -dashboards: -- id: 69f5ae20-eb02-11e7-8f04-beef1daadb05 - file: mymodule-overview.json -- id: c0a7ce90-cafe-4242-8647-534bb4c21040 - file: mymodule-errors.json ----- - -Then run `export_dashboards` like this: - -[source,shell] ----- -$ cd dev-tools/cmd/dashboards -$ make # if export_dashboard is not built yet -$ ./export_dashboards --yml '../../../filebeat/module/{module}/module.yml' ----- - -New Filebeat modules might not be compatible with Kibana 5.x. To export dashboards -that are compatible with 5.x, run the following command inside the developer -virtual environment: - -[source,shell] ----- -$ cd filebeat -$ make python-env -$ cd module/{module}/ -$ python ../../../dev-tools/export_5x_dashboards.py --regex {module} --dir _meta/kibana/5.x ----- - -Where the `--regex` parameter should match the dashboard you want to export. - -Please note that dashboards exported from Kibana 5.x are not compatible with Kibana 6.x.
- -You can find more details about the process of creating and exporting the Kibana -dashboards by reading {beatsdevguide}/new-dashboards.html[this guide]. - -[float] -=== Creating a new fileset - -Run the following command in the `filebeat` folder: - -[source,bash] ----- -make create-fileset MODULE={module} FILESET={fileset} ----- - -After running the `make create-fileset` command, you'll find the fileset, -along with its generated files, under `module/{module}/{fileset}`. This -directory contains the following files: - -[source,bash] ----- -module/{module}/{fileset} -├── manifest.yml -├── config -│   └── {fileset}.yml -├── ingest -│   └── pipeline.json -├── _meta -│   └── fields.yml -│   └── kibana -│    └── default -└── test ----- - -Let's look at these files one by one. - -[float] -==== manifest.yml - -The `manifest.yml` is the control file for the module, where variables are -defined and the other files are referenced. It is a YAML file, but in many -places in the file, you can use built-in or defined variables by using the -`{{.variable}}` syntax. - -The `var` section of the file defines the fileset variables and their default -values. The module variables can be referenced in other configuration files, -and their value can be overridden at runtime by the Filebeat configuration. - -As the fileset creator, you can use any names for the variables you define. Each -variable must have a default value. So in its simplest form, this is how you -can define a new variable: - -[source,yaml] ----- -var: - - name: pipeline - default: with_plugins ----- - -Most filesets should have a `paths` variable defined, which sets the default -paths where the log files are located: - -[source,yaml] ----- -var: - - name: paths - default: - - /example/test.log* - os.darwin: - - /usr/local/example/test.log* - - /example/test.log* - os.windows: - - c:/programdata/example/logs/test.log* ----- - -There's quite a lot going on in this file, so let's break it down: - -* The name of the variable is `paths` and the default value is an array with one - element: `"/example/test.log*"`. -* Note that variable values don't have to be strings. - They can also be numbers, objects, or as shown in this example, arrays. -* We will use the `paths` variable to set the input `paths` - setting, so "glob" values can be used here. -* Besides the `default` value, the file defines values for particular - operating systems: a default for darwin/OS X/macOS systems and a default for - Windows systems. These are introduced via the `os.darwin` and `os.windows` - keywords. The values under these keys become the default for the variable, if - Filebeat is executed on the respective OS. - -Besides the variable definition, the `manifest.yml` file also contains -references to the ingest pipeline and input configuration to use (see next -sections): - -[source,yaml] ----- -ingest_pipeline: ingest/pipeline.json -input: config/testfileset.yml ----- - -These should point to the respective files from the fileset. - -Note that when evaluating the contents of these files, the variables are -expanded, which enables you to select one file or the other depending on the -value of a variable. For example: - -[source,yaml] ----- -ingest_pipeline: ingest/{{.pipeline}}.json ----- - -This example selects the ingest pipeline file based on the value of the -`pipeline` variable. For the `pipeline` variable shown earlier, the path would -resolve to `ingest/with_plugins.json` (assuming the variable value isn't -overridden at runtime).
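Putting the pieces together, a minimal `manifest.yml` sketch that combines the `paths` and `pipeline` variables from the examples above with the file references might look like this:

[source,yaml]
----
var:
  - name: paths
    default:
      - /example/test.log*
  - name: pipeline
    default: with_plugins

ingest_pipeline: ingest/{{.pipeline}}.json
input: config/testfileset.yml
----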
- -In 6.6 and later, you can specify multiple ingest pipelines. - -[source,yaml] ----- -ingest_pipeline: - - ingest/main.json - - ingest/plain_logs.json - - ingest/json_logs.json ----- - -When multiple ingest pipelines are specified, the first one in the list is -considered to be the entry point pipeline. - -One reason for using multiple pipelines might be to send all logs harvested -by this fileset to the entry point pipeline and have it delegate different parts of -the processing to other pipelines. You can read details about setting -this up in <>. - -[float] -==== config/*.yml - -The `config/` folder contains template files that generate Filebeat input -configurations. The Filebeat inputs are primarily responsible for tailing -files, filtering, and multi-line stitching, so that's what you configure in the -template files. - -A typical example looks like this: - -[source,yaml] ----- -type: log -paths: -{{ range $i, $path := .paths }} - - {{$path}} -{{ end }} -exclude_files: [".gz$"] ----- - -You'll find this example in the template file that gets generated automatically -when you run `make create-fileset`. In this example, the `paths` variable is -used to construct the `paths` list for the input `paths` option. - -Any template files that you add to the `config/` folder need to generate a valid -Filebeat input configuration in YAML format. The options accepted by the -input configuration are documented in the -{filebeat-ref}/configuration-filebeat-options.html[Filebeat Inputs] section of -the Filebeat documentation. - -The template files use the templating language defined by the -https://golang.org/pkg/text/template/[Go standard library]. - -Here is another example that also configures multiline stitching: - -[source,yaml] ----- -type: log -paths: -{{ range $i, $path := .paths }} - - {{$path}} -{{ end }} -exclude_files: [".gz$"] -multiline: - pattern: "^# User@Host: " - negate: true - match: after ----- - -Although you can add multiple configuration files under the `config/` folder, -only the file indicated by the `manifest.yml` file will be loaded. You can use -variables to dynamically switch between configurations. - -[float] -==== ingest/*.json - -The `ingest/` folder contains {es} {ref}/ingest.html[ingest pipeline] -configurations. Ingest pipelines are responsible for parsing the log lines and -doing other manipulations on the data. - -The files in this folder are JSON or YAML documents representing -{ref}/pipeline.html[pipeline definitions]. Just like with the `config/` -folder, you can define multiple pipelines, but a single one is loaded at runtime -based on the information from `manifest.yml`. - -The generator creates a JSON object similar to this one: - -[source,json] ----- -{ - "description": "Pipeline for parsing {module} {fileset} logs", - "processors": [ - ], - "on_failure" : [{ - "set" : { - "field" : "error.message", - "value" : "{{ _ingest.on_failure_message }}" - } - }] -} ----- - -Alternatively, you can use YAML formatted pipelines, which use a simpler syntax: - -[source,yaml] ----- -description: "Pipeline for parsing {module} {fileset} logs" -processors: -on_failure: - - set: - field: error.message - value: "{{ _ingest.on_failure_message }}" ----- - -From here, you would typically add processors to the `processors` array to do -the actual parsing. For information about available ingest processors, see the -{ref}/processors.html[processor reference documentation].
In -particular, you will likely find the -{ref}/grok-processor.html[grok processor] to be useful for parsing. -Here is an example for parsing the Nginx access logs. - -[source,json] ----- -{ - "grok": { - "field": "message", - "patterns":[ - "%{IPORHOST:nginx.access.remote_ip} - %{DATA:nginx.access.user_name} \\[%{HTTPDATE:nginx.access.time}\\] \"%{WORD:nginx.access.method} %{DATA:nginx.access.url} HTTP/%{NUMBER:nginx.access.http_version}\" %{NUMBER:nginx.access.response_code} %{NUMBER:nginx.access.body_sent.bytes} \"%{DATA:nginx.access.referrer}\" \"%{DATA:nginx.access.agent}\"" - ], - "ignore_missing": true - } -} ----- - -Note that you should follow the convention of prefixing field names with the -module and fileset name: `{module}.{fileset}.field`, e.g. -`nginx.access.remote_ip`. Also, please review our <>. - -[[ingest-json-entry-point-pipeline]] -In 6.6 and later, ingest pipelines can use the -{ref}/conditionals-with-multiple-pipelines.html[`pipeline` processor] to delegate -parts of the processing to other pipelines. - -This can be useful if you want a fileset to ingest the same _logical_ information -presented in different formats, e.g. csv vs. json versions of the same log files. -Imagine an entry point ingest pipeline that detects the format of a log entry and then conditionally -delegates further processing of that log entry, depending on the format, to another -pipeline. - -["source","json",subs="callouts"] ----- -{ - "processors": [ - { - "grok": { - "field": "message", - "patterns": [ - "^%{CHAR:first_char}" - ], - "pattern_definitions": { - "CHAR": "." - } - } - }, - { - "pipeline": { - "if": "ctx.first_char == '{'", - "name": "{< IngestPipeline "json-log-processing-pipeline" >}" <1> - } - }, - { - "pipeline": { - "if": "ctx.first_char != '{'", - "name": "{< IngestPipeline "plain-log-processing-pipeline" >}" - } - } - ] -} ----- -<1> Use the `IngestPipeline` template function to resolve the name. This function converts the -specified name into the fully qualified pipeline ID that is stored in Elasticsearch. - -In order for the above pipeline to work, Filebeat must load the entry point pipeline -as well as any sub-pipelines into Elasticsearch. You can tell Filebeat to do -so by specifying all the necessary pipelines for the fileset in its `manifest.yml` -file. The first pipeline in the list is considered to be the entry point pipeline. - -[source,yaml] ----- -ingest_pipeline: - - ingest/main.json - - ingest/plain_logs.yml - - ingest/json_logs.json ----- - -While developing the pipeline definition, we recommend making use of the -{ref}/simulate-pipeline-api.html[Simulate Pipeline API] for testing -and quick iteration. - -By default, Filebeat does not update ingest pipelines that are already loaded. If you -want to force updating your pipeline during development, use the -`./filebeat setup --pipelines` command. This uploads pipelines even if they -are already available on the node.
In most cases, you should add the fields at the fileset -level. - -After `pipeline.json` is created, it is possible to generate a base `fields.yml`. - -[source,bash] ----- -make create-fields MODULE={module} FILESET={fileset} ----- - -Please always check the generated file and make sure the fields are correct. -You must add field documentation manually. - -If the fields are correct, it is time to generate documentation, configuration, -and Kibana index patterns. - -[source,bash] ----- -make update ----- - -[float] -==== test - -In the `test/` directory, you should place sample log files generated by the -service. We have integration tests, automatically executed by CI, that will run -Filebeat on each of the log files under the `test/` folder and check that there -are no parsing errors and that all fields are documented. - -In addition, assuming you have a `test.log` file, you can add a -`test.log-expected.json` file in the same directory that contains the expected -documents as they are found via an Elasticsearch search. In this case, the -integration tests will automatically check that the result is the same on each -run. - -In order to test the filesets with the sample logs and/or generate the expected output, run the tests -locally for a specific module, using the following procedure in the Filebeat directory: - -. Start an Elasticsearch instance locally. For example, using Docker: -+ -[source,bash] ----- -docker run \ - --name elasticsearch \ - -p 9200:9200 -p 9300:9300 \ - -e "xpack.security.http.ssl.enabled=false" -e "ELASTIC_PASSWORD=changeme" \ - -e "discovery.type=single-node" \ - --pull always --rm --detach \ - docker.elastic.co/elasticsearch/elasticsearch:master-SNAPSHOT ----- -. Create an "admin" user on that Elasticsearch instance: -+ -[source,bash] ----- -curl -u elastic:changeme \ - http://localhost:9200/_security/user/admin \ - -X POST -H 'Content-Type: application/json' \ - -d '{"password": "changeme", "roles": ["superuser"]}' ----- -. Create the testing binary: `make filebeat.test` -. Update fields yaml: `make update` -. Create python env: `make python-env` -. Source python env: `source ./build/python-env/bin/activate` -. Run a test, for example to check nginx access log parsing: -+ -[source,bash] ----- -INTEGRATION_TESTS=1 BEAT_STRICT_PERMS=false ES_PASS=changeme \ -TESTING_FILEBEAT_MODULES=nginx \ -pytest tests/system/test_modules.py -v --full-trace ----- -. Add and remove option env vars as required. Here are some useful ones: -* `TESTING_FILEBEAT_ALLOW_OLDER`: if set to 1, allow connecting to older versions of Elasticsearch -* `TESTING_FILEBEAT_MODULES`: comma separated list of modules to test. -* `TESTING_FILEBEAT_FILESETS`: comma separated list of filesets to test. -* `TESTING_FILEBEAT_FILEPATTERN`: glob pattern for log files within the fileset to test. -* `GENERATE`: if set to 1, the expected documents will be generated (see the example after this section). - -The filebeat logs are written to the `build` directory. It may be useful to tail them in another terminal using `tail -F build/system-tests/run/test_modules.Test.*/output.log`. - -For example, if there's a syntax error in an ingest pipeline, the test will probably just hang. The filebeat log output will contain the error message from Elasticsearch.
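For example, to regenerate the expected documents while testing only the nginx `access` fileset, you might combine the variables above like this (a sketch; adjust the module and fileset names to your case):

[source,bash]
----
INTEGRATION_TESTS=1 BEAT_STRICT_PERMS=false ES_PASS=changeme \
TESTING_FILEBEAT_MODULES=nginx TESTING_FILEBEAT_FILESETS=access \
GENERATE=1 \
pytest tests/system/test_modules.py -v --full-trace
----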
diff --git a/docs/devguide/new_protocol.asciidoc b/docs/devguide/new_protocol.asciidoc deleted file mode 100644 index defd50c0bc3f..000000000000 --- a/docs/devguide/new_protocol.asciidoc +++ /dev/null @@ -1,101 +0,0 @@ -[[new-protocol]] -== Adding a New Protocol to Packetbeat - -The following topics describe how to add a new protocol to Packetbeat: - -* <> -* <> -* <> - -[[getting-ready-new-protocol]] -=== Getting Ready - -Packetbeat is written in http://golang.org/[Go], so having Go installed and knowing the basics are prerequisites for understanding this guide. But don't worry if you aren't a Go expert. Go is a relatively new language, and very few people are experts in it. In fact, several people learned Go by contributing to Packetbeat and libbeat, including the original Packetbeat authors. - -You will also need a good understanding of the wire protocol that you want to -add support for. For standard protocols or protocols used in open source -projects, you can usually find detailed specifications and example source code. -Wireshark is a very useful tool for understanding the inner workings of the -protocols it supports. - -In some cases you can even make use of existing libraries for doing the actual -parsing and decoding of the protocol. If the particular protocol has a Go -implementation with a liberal enough license, you might be able to use it to -parse and decode individual messages instead of writing your own parser. - -Before starting, please also read the <>. - -[float] -==== Cloning and Compiling - -After you have https://golang.org/doc/install[installed Go] and set up the -https://golang.org/doc/code.html#GOPATH[GOPATH] environment variable to point to -your preferred workspace location, you can clone Packetbeat with the -following commands: - -[source,shell] ----------------------------------------------------------------------- -$ mkdir -p ${GOPATH}/src/github.com/elastic -$ cd ${GOPATH}/src/github.com/elastic -$ git clone https://github.com/elastic/beats.git ----------------------------------------------------------------------- - -Note: If you have multiple Go paths, use `${GOPATH%%:*}` instead of `${GOPATH}`. - -Then you can compile it with: - -[source,shell] ----------------------------------------------------------------------- -$ cd beats -$ make ----------------------------------------------------------------------- - -Note that the location where you clone is important. If you prefer working -outside of the `GOPATH` environment, you can clone to another directory and only -create a symlink to the `$GOPATH/src/github.com/elastic/` directory. - -[float] -=== Forking and Branching - -We recommend the following workflow for contributing to Packetbeat: - -* Fork Beats in GitHub to your own account - -* In the `$GOPATH/src/github.com/elastic/beats` folder, add your fork - as a new remote.
For example (replace `tsg` with your GitHub account): - -[source,shell] ----------------------------------------------------------------------- -$ git remote add tsg git@github.com:tsg/beats.git ----------------------------------------------------------------------- - -* Create a new branch for your work: - -[source,shell] ----------------------------------------------------------------------- -$ git checkout -b cool_new_protocol ----------------------------------------------------------------------- - -* Commit as often as you like, and then push to your private fork with: - -[source,shell] ----------------------------------------------------------------------- -$ git push --set-upstream tsg cool_new_protocol ----------------------------------------------------------------------- - -* When you are ready to submit your PR, simply do so from the GitHub web - interface. Feel free to submit your PR early. You can still add commits to - the branch after creating the PR. Submitting the PR early gives us more time to - provide feedback and perhaps help you with it. - -[[protocol-modules]] -=== Protocol Modules - -We are working on updating this section. While you're waiting for updates, you -might want to try out the TCP protocol generator at -https://github.com/elastic/beats/tree/master/packetbeat/scripts/tcp-protocol. - -[[protocol-testing]] -=== Testing - -We are working on updating this section. diff --git a/docs/devguide/newdashboards.asciidoc b/docs/devguide/newdashboards.asciidoc deleted file mode 100644 index 9e540abb025a..000000000000 --- a/docs/devguide/newdashboards.asciidoc +++ /dev/null @@ -1,389 +0,0 @@ -[[new-dashboards]] -== Creating New Kibana Dashboards for a Beat or a Beat module - -++++ -Creating New Kibana Dashboards -++++ - - -When contributing to Beats development, you may want to add new dashboards or -customize the existing ones. To get started, you can -<> that come with the official -Beats and use them as a starting point for your own dashboards. When you're done -making changes to the dashboards in Kibana, you can use the `export_dashboards` -script to <>, along with all -dependencies, to a local directory. - -To make sure the dashboards are compatible with the latest version of Kibana and Elasticsearch, we -recommend that you use the virtual environment under -https://github.com/elastic/beats/tree/master/testing/environments[beats/testing/environments] to import, create, and -export the Kibana dashboards. - -The following topics provide more detail about importing and working with Beats dashboards: - -* <> -* <> -* <> -* <> -* <> -* <> - -[[import-dashboards]] -=== Importing Existing Beat Dashboards - -The official Beats come with Kibana dashboards, and starting with 6.0.0, they -are part of every Beat package. - -You can use the Beat executable to import all the dashboards and the index pattern for a Beat, including the dependencies such as visualizations and searches. - -To import the dashboards, run the `setup` command. - - -[source,shell] -------------------------- -./metricbeat setup -------------------------- - -The `setup` phase loads several dependencies, such as: - -- Index mapping template in Elasticsearch -- Kibana dashboards -- Ingest pipelines -- ILM policy - -The dependencies vary depending on the Beat you're setting up. - -For more details about the `setup` command, see the command-line help. 
For example: - -[source,shell] ----- -./metricbeat help setup - -This command does initial setup of the environment: - - * Index mapping template in Elasticsearch to ensure fields are mapped. - * Kibana dashboards (where available). - * ML jobs (where available). - * Ingest pipelines (where available). - * ILM policy (for Elasticsearch 6.5 and newer). - -Usage: - metricbeat setup [flags] - -Flags: - --dashboards Setup dashboards - -h, --help help for setup - --index-management Setup all components related to Elasticsearch index management, including template, ilm policy and rollover alias - --pipelines Setup Ingest pipelines ----- - -The flags are useful when you don't want to load everything. For example, to -import only the dashboards, use the `--dashboards` flag: - -[source,shell] ----- -./metricbeat setup --dashboards ----- - -Starting with Beats 6.0.0, the dashboards are no longer loaded directly into Elasticsearch. Instead, they are imported directly into Kibana. -Thus, if your Kibana instance is not listening on localhost, or you enabled -{xpack} for Kibana, you need to either configure the Kibana endpoint in -the config for the Beat, or pass the Kibana host and credentials as -arguments to the `setup` command. For example: - -[source,shell] ----- -./metricbeat setup -E setup.kibana.host=192.168.3.206:5601 -E setup.kibana.username=elastic -E setup.kibana.password=secret ----- - -By default, the `setup` command imports the dashboards from the `kibana` -directory, which is available in the Beat package. - -NOTE: The format of the saved dashboards is not compatible between Kibana 5.x and 6.x. Thus, the Kibana 5.x dashboards are available in -the `5.x` directory, and the Kibana 6.0 and newer dashboards are in the `default` directory. - -If you are using customized dashboards, you can import them: - -- from a local directory: -+ -[source,shell] ----------------------------------------------------------------------- -./metricbeat setup -E setup.dashboards.directory=kibana ----------------------------------------------------------------------- - -- from a local zip archive: -+ -[source,shell] ----------------------------------------------------------------------- -./metricbeat setup -E setup.dashboards.file=metricbeat-dashboards-6.0.zip ----------------------------------------------------------------------- - -- from a zip archive available online: -+ -[source,shell] ----------------------------------------------------------------------- -./metricbeat setup -E setup.dashboards.url=path/to/url ----------------------------------------------------------------------- -+ - -See <> for a description of the `setup.dashboards` configuration options. - - -[[import-dashboards-for-development]] -==== Import Dashboards for Development - -You can make use of the Magefile from the Beat GitHub repository to import the -dashboards. If Kibana is running on localhost, then you can run the following command -from the root of the Beat: - -[source,shell] --------------------------------- -mage dashboards --------------------------------- - -[[import-dashboard-options]] -==== Kibana dashboards configuration - -The configuration file (`*.reference.yml`) of each Beat contains the `setup.dashboards` section for configuring from where to get the Kibana dashboards, as well as the name of the index pattern. -Each of these configuration options can be overwritten with the command line options by using the `-E` flag.
- - -*`setup.dashboards.directory=`*:: -Local directory that contains the saved dashboards and their dependencies. -The default value is the `kibana` directory available in the Beat package. - -*`setup.dashboards.file=`*:: -Local zip archive with the dashboards. The archive can contain Kibana dashboards for a single Beat or for multiple Beats. The dashboards of each Beat are placed under a separate directory with the name of the Beat. - -*`setup.dashboards.url=`*:: -Zip archive with the dashboards, available online. The archive can contain Kibana dashboards for a single Beat or for -multiple Beats. The dashboards for each Beat are placed under a separate directory with the name of the Beat. - -*`setup.dashboards.index`*:: -You should only use this option if you want to change the index pattern name that's used by default. For example, if the -default is `metricbeat-*`, you can change it to `custombeat-*`. - - -[[build-dashboards]] -=== Building Your Own Beat Dashboards - -NOTE: If you want to modify a dashboard that comes with a Beat, it's better to modify a copy of the dashboard because the Beat overwrites the dashboards during the setup phase in order to have the latest version. To duplicate a dashboard, just use the `Clone` button from the top of the page. - - -Before building your own dashboards or customizing the existing ones, you need to load: - -* the Beat index pattern, which specifies how Kibana should display the Beat fields -* the Beat dashboards that you want to customize - -For the Elastic Beats, the index pattern is available in the Beat package under -`kibana/*/index-pattern`. The index-pattern is automatically generated from the `fields.yml` file, available in the Beat package. For more details -check the <> section. - -All Beats dashboards, visualizations and saved searches must follow common naming conventions: - -* Dashboard names have the prefix `[BeatName Module]`, e.g. `[Filebeat Nginx] Access logs` -* Visualizations and searches have the suffix `[BeatName Module]`, e.g. `Top processes [Filebeat Nginx]` - -NOTE: You can set a custom name (skip the suffix) for a visualization placed on a dashboard. The original visualization will -stay intact. - -The naming convention rules can be verified with the tool `mage check`. The command fails if it detects: - -* empty description on a dashboard -* unexpected dashboard title format (missing prefix `[BeatName ModuleName]`) -* unexpected visualization title format (missing suffix `[BeatName Module]`) - -After creating your own dashboards in Kibana, you can <> to a local -directory, and then <> in order to be able to share the dashboards with the community. - -[[generate-index-pattern]] -=== Generating the Beat Index Pattern - -The index-pattern defines the format of each field, and it's used by Kibana to know how to display the field. -If you change the fields exported by the Beat, you need to generate a new index pattern for your Beat. Otherwise, you can just use the index pattern available under the `kibana/*/index-pattern` directory. - -The Beat index pattern is generated from the `fields.yml`, which contains all -the fields exported by the Beat. For each field, besides the `type`, you can configure the -`format` field. The format informs Kibana about how to display a certain field. A good example is `percentage` or `bytes` -to display fields as `50%` or `5MB`.
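For instance, a hypothetical byte-count field could set `format: bytes` so that Kibana renders the raw value in a human-readable unit (the field name here is illustrative):

[source,yaml]
----
- name: memory.used.bytes
  type: long
  format: bytes
  description: >
    Memory in use, rendered by Kibana as a value such as 5MB.
----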
- -To generate the index pattern from the `fields.yml`, you need to run the following command in the Beat repository: - -[source,shell] ---------------- -make update ---------------- - -[[export-dashboards]] -=== Exporting New and Modified Beat Dashboards - -To export all the dashboards for any Elastic Beat or any community Beat, including any new or modified dashboards and all dependencies such as -visualizations and searches, you can use the Go script `export_dashboards.go` from -https://github.com/elastic/beats/tree/master/dev-tools/cmd/dashboards[dev-tools]. -See the dev-tools https://github.com/elastic/beats/tree/master/dev-tools/README.md[readme] for more info. - -Alternatively, if the scripts above are not available, you can use your Beat binary to export Kibana 6.0 dashboards or later. - -==== Exporting from Kibana 6.0 to 7.14 - -The `dev-tools/cmd/export_dashboards.go` script helps you export your customized Kibana dashboards until the v7.14.x release. -You might need to export a single dashboard or all the dashboards available for a module or Beat. - -It is also possible to use a Beat binary to export. - -==== Exporting from Kibana 7.15 or newer - -From 7.15, your Beats version must be the same as your Kibana version -to make sure the export API required is available. - -===== Migrate legacy dashboards made with Kibana 7.14 or older - -After you update your Kibana instance to at least 7.15, you have to -export your dashboards again with either the `export_dashboards.go` tool or -with your Beat. - -===== Export a single Kibana dashboard - -To export a single dashboard for a module, you can use the following command inside a Beat with modules: - -[source,shell] ---------------- -MODULE=redis ID=AV4REOpp5NkDleZmzKkE mage exportDashboard ---------------- - -[source,shell] ---------------- -./filebeat export dashboard --id 7fea2930-478e-11e7-b1f0-cb29bac6bf8b --folder module/redis ---------------- - -This generates an appropriate folder under module/redis for the dashboard, separating assets into dashboards, searches, visualizations, etc. -Each exported file is JSON, and its name is the ID of the asset. - -NOTE: The dashboard ID is available in the dashboard URL. For example, if the dashboard URL is -`app/kibana#/dashboard/AV4REOpp5NkDleZmzKkE?_g=()&_a=(description:'Overview%2...`, the dashboard ID is `AV4REOpp5NkDleZmzKkE`. - -===== Export all module/Beat dashboards - -Each module should contain a `module.yml` file with a list of all the dashboards available for the module. For the Beats that don't have support for modules (e.g. Packetbeat), -there is a `dashboards.yml` file that defines all the Packetbeat dashboards. - -Below is an example of the `module.yml` file for the system module in Metricbeat: - -[source,shell] ---------------- -dashboards: -- id: Metricbeat-system-overview - file: Metricbeat-system-overview.ndjson - -- id: 79ffd6e0-faa0-11e6-947f-177f697178b8 - file: Metricbeat-host-overview.ndjson - -- id: CPU-slash-Memory-per-container - file: Metricbeat-containers-overview.ndjson ---------------- - - -Each dashboard is defined by an `id` and the name of the ndjson `file` where the dashboard is saved locally.
- -By passing the yml file to the `export_dashboards.go` script or to the Beat, you can export all the dashboards defined: - -[source,shell] ------------------- -go run dev-tools/cmd/dashboards/export_dashboards.go --yml filebeat/module/system/module.yml --folder dashboards ------------------- - -[source,shell] ------------------- -./filebeat export dashboard --yml filebeat/module/system/module.yml ------------------- - - -===== Export dashboards from a Kibana Space - -If you are using the Kibana Spaces feature and want to export dashboards from a specific Space, pass the Space ID to the `export_dashboards.go` script: - -[source,shell] ------------------- -go run dev-tools/cmd/dashboards/export_dashboards.go -space-id my-space [other-options] ------------------- - -If you run a Beat's `export dashboard` command instead, you need to set the Space ID in `setup.kibana.space.id`. - - -==== Exporting Kibana 5.x dashboards - -To export only some Kibana dashboards for an Elastic Beat or community Beat, you can simply pass a regular expression to -the `export_dashboards.py` script to match the selected Kibana dashboards. - -Before running the `export_dashboards.py` script for the first time, you -need to create an environment that contains all the required Python packages. - -[source,shell] ------------------------- -make python-env ------------------------- - -For example, to export all Kibana dashboards that start with the **Packetbeat** name: - -[source,shell] ----------------------------------------------------------------------- -python ../dev-tools/cmd/dashboards/export_dashboards.py --regex Packetbeat* ----------------------------------------------------------------------- - -To see all the available options, read the descriptions below or run: - -[source,shell] ----------------------------------------------------------------------- -python ../dev-tools/cmd/dashboards/export_dashboards.py -h ----------------------------------------------------------------------- - -*`--url`*:: -The Elasticsearch URL. The default value is http://localhost:9200. - -*`--regex`*:: -Regular expression to match all the Kibana dashboards to be exported. This argument is required. - -*`--kibana`*:: -The Elasticsearch index pattern where Kibana saves its configuration. The default value is `.kibana`. - -*`--dir`*:: -The output directory where the dashboards and all dependencies will be saved. The default value is `output`. - -The output directory has the following structure: - -[source,shell] --------------- -output/ - index-pattern/ - dashboard/ - visualization/ - search/ --------------- - -[[archive-dashboards]] -=== Archiving Your Beat Dashboards - -The Kibana dashboards for the Elastic Beats are saved under the `kibana` directory. To create a zip archive with the -dashboards, including visualizations, searches, and the index pattern, you can run the following command in the Beat -repository: - -[source,shell] --------------- -make package-dashboards --------------- - -The Makefile is part of libbeat, which means that community Beats contributors can use the commands shown here to -archive dashboards. The dashboards must be available under the `kibana` directory. - -Another option would be to create a repository only with the dashboards, and use the GitHub release functionality to -create a zip archive. - -Share the Kibana dashboards archive with the community, so other users can use your cool Kibana visualizations!
-
-
-
-[[share-beat-dashboards]]
-=== Sharing Your Beat Dashboards
-
-When you're done with your own Beat dashboards, how about letting everyone know? You can create a topic on the https://discuss.elastic.co/c/beats[Beats
-forum], and provide the link to the zip archive together with a short description.
diff --git a/docs/devguide/pull-request-guidelines.asciidoc b/docs/devguide/pull-request-guidelines.asciidoc
deleted file mode 100644
index 113c8aa5d53a..000000000000
--- a/docs/devguide/pull-request-guidelines.asciidoc
+++ /dev/null
@@ -1,18 +0,0 @@
-[[pr-review]]
-== Pull request review guidelines
-
-Every change made to Beats must be held to a high standard. While the responsibility for quality in a pull request ultimately lies with the author, Beats team members have the responsibility as reviewers to verify that quality during their review process. Where this document is unclear or inappropriate, let common sense and consensus override it.
-
-[float]
-=== Code Style
-
-Everyone's got an opinion on style. To avoid spending time on this issue, we rely almost exclusively on `go fmt` and https://houndci.com/[hound] to police style. If neither of these tools complains, the code is almost certainly fine. There may be exceptions to this, but they should be extremely rare. Only override the judgement of these tools in the most unusual of situations.
-
-[float]
-=== Flaky Tests
-
-As software projects grow, so does the complexity of their test cases, and with that the probability of some tests becoming 'flaky'. It is everyone's responsibility to handle flaky tests. If you notice a pull request build failing for a reason that is unrelated to the pushed code, follow the procedure below:
-
-1. Create an issue using the "Flaky Test" GitHub issue template with the "Flaky Test" label attached.
-2. Create a PR to mute or fix the flaky test.
-3. Merge that PR and rebase off of it before continuing with the normal PR process for your original PR.
diff --git a/docs/devguide/python.asciidoc b/docs/devguide/python.asciidoc
deleted file mode 100644
index 8f86e81fcc39..000000000000
--- a/docs/devguide/python.asciidoc
+++ /dev/null
@@ -1,90 +0,0 @@
-[[python-beats]]
-=== Python in Beats
-
-Python is used in Beats development; it is the language used to implement
-system tests and some other tools. Python dependencies are managed with
-virtual environments, supported by
-https://docs.python.org/3/library/venv.html[venv].
-
-Beats development requires Python >= {python}.
-
-[[installing-python]]
-==== Installing Python and venv
-
-Python comes preinstalled on many operating systems. If it is not installed on
-your system, you can follow the instructions available at https://www.python.org/downloads/
-
-On Ubuntu/Debian systems, Python 3 can be installed with:
-
-["source","sh"]
-----
-sudo apt-get install python3 python3-venv
----- 
-
-There are packages for specific minor versions, so, for example, if you want
-to use Python 3.7, you can install it with the following command:
-
-["source","sh"]
-----
-sudo apt-get install python3.7 python3.7-venv
----- 
-
-It is recommended to use Python >= {python}.
-
-[[python-virtual-environments]]
-==== Working with virtual environments
-
-All `make` and `mage` targets manage their own virtual environments in a transparent
-way, so for the most common operations required when contributing to Beats,
-nothing special needs to be done.
-
-Virtual environments used by `make` can be found in most Beats directories under
-`build/python-env`. They are created by targets that need them, or can be
-created explicitly by running `make python-env`. The ones used by `mage` are
-created when required under `build/ve`.
-
-There are some environment variables that can be used to customize the creation
-of these virtual environments:
-
-* `PYTHON_EXE`: Python executable to be used in the virtual environment. It has
-  to exist in the path.
-* `PYTHON_ENV`: Path to the virtual environment to use. If it doesn't exist, it
-  is created by `make` or `mage` targets when needed.
-
-Virtual environments can also be used without `make` or `mage`; this is common,
-for example, when running individual system tests with `pytest`. There are two
-ways to run commands from the virtual environment:
-
-* "Activating" the virtual environment in your current terminal by running
-  `source ./build/python-env/bin/activate`. The virtual environment can be
-  deactivated by running `deactivate`.
-* Directly running commands from the virtual environment path. For example,
-  `pytest` can be executed as `./build/python-env/bin/pytest`.
-
-To recreate a virtual environment, remove its directory. All virtual
-environments are also removed with `make clean`.
-
-[[python-older-versions]]
-==== Working with older versions
-
-Older versions of Beats were not compatible with Python 3. If you need to
-work temporarily on one of these versions of Beats, and you don't want to remove
-your current virtual environments, you can use environment variables to run
-commands in a temporary virtual environment.
-
-For example, you can run `make update` with Python 2.7 with the following
-command:
-
-["source","sh"]
------
-PYTHON_EXE=python2.7 PYTHON_ENV=/tmp/venv2 make update
------ 
-
-If you need to run tests, you can also create a virtual environment and then
-activate it to run commands from there:
-
-["source","sh"]
------
-PYTHON_EXE=python2.7 PYTHON_ENV=/tmp/venv2 make python-env
-source /tmp/venv2/bin/activate
-...
----- -
diff --git a/docs/devguide/terraform.asciidoc b/docs/devguide/terraform.asciidoc
deleted file mode 100644
index 0cdd0198f214..000000000000
--- a/docs/devguide/terraform.asciidoc
+++ /dev/null
@@ -1,81 +0,0 @@
-[[terraform-beats]]
-== Terraform in Beats
-
-Terraform is used to provision scenarios for integration testing of some cloud
-features. Features implementing integration tests that require the presence of
-cloud resources should have their own Terraform configuration. This configuration
-can be used when developing locally to create (and destroy) the resources that
-allow these features to be tested.
-
-Tests requiring access to cloud providers should be disabled by default with the
-use of build tags.
-
-[[installing-terraform]]
-=== Installing Terraform
-
-Terraform is available at https://www.terraform.io/downloads.html
-
-Download it and place it in some directory in your PATH.
-
-`terraform` is the main command for Terraform and the only one that is usually
-needed to manage configurations. Terraform will also download other plugins that
-implement the specific functionality for each provider. These plugins are
-automatically managed and stored in the working copy. If you want to share the
-plugins between multiple working copies, you can manually install them in the
-user plugins directory, located at `~/.terraform.d/plugins`,
-or `%APPDATA%\terraform.d\plugins` on Windows.
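-
-For example, a minimal sketch of seeding the user plugins directory from an
-existing working copy (the provider name, version, and platform path are
-hypothetical):
-
-[source,shell]
-----
-mkdir -p ~/.terraform.d/plugins
-# Copy a provider binary that a previous `terraform init` downloaded into
-# this working copy (example path only).
-cp .terraform/plugins/linux_amd64/terraform-provider-aws_v2.70.0_x4 ~/.terraform.d/plugins/
-----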
-
-Plugins are available at https://registry.terraform.io/
-
-[[using-terraform]]
-=== Using Terraform
-
-The most important commands when using Terraform are:
-
-* `terraform init` to do some initial checks and install the required plugins.
-* `terraform apply` to create the resources defined in the configuration.
-* `terraform destroy` to destroy resources previously created.
-
-Cloud providers usually require credentials. They can be provided with the usual
-methods supported by each provider, using environment variables and/or
-credential files.
-
-Terraform stores the last known state of the resources managed by a
-configuration in a `terraform.tfstate` file. It is important to keep this file,
-as it is used as input by `terraform destroy`. This file is created in the same
-directory where `terraform apply` is executed.
-
-Please take a look at the Terraform documentation for more details: https://www.terraform.io/intro/index.html
-
-[[terraform-configurations]]
-=== Terraform configuration guidelines
-
-The main purpose of Terraform in Beats is to create and destroy cloud resources
-required by integration tests. For these configurations, there are some things to
-take into account:
-
-* Apply should work without additional inputs or files. The only input will be
-  what is required by specific providers, using environment variables or
-  credential files.
-* You must be able to apply the same configuration multiple times in the same
-  account. This allows multiple builds to use the same configuration,
-  but with different instances of the resources. Some resources are already
-  created with unique identifiers (such as EC2 instances); others have to be
-  explicitly created with unique names (e.g. S3 buckets). For these cases, random
-  suffixes can be added to identifiers.
-* Destroy must work without additional input, and should be able to destroy all
-  the resources created by the configuration. Some resources need
-  specific flags to be destroyed by `terraform destroy`. For example, S3 buckets
-  need a flag to force emptying the bucket before deleting it, and RDS instances
-  need a flag to disable snapshots on deletion.
-
-[[terraform-in-ci]]
-=== Terraform in CI
-
-Integration tests that need the presence of certain resources to work can be
-executed in CI if they provide a Terraform configuration to start these
-resources. These tests are disabled by default in CI.
-
-Terraform states are archived as artifacts of builds; this allows manually
-destroying resources created by builds that were not able to do a proper cleanup.
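-
-For example, a minimal sketch of such a manual cleanup, assuming you
-downloaded the archived `terraform.tfstate` into the directory of the
-configuration that created the resources:
-
-[source,shell]
-----
-# Provide the provider credentials as usual (environment variables or
-# credential files), then let Terraform destroy what the build left behind.
-terraform init
-terraform destroy
-----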
diff --git a/docs/devguide/testing.asciidoc b/docs/devguide/testing.asciidoc
deleted file mode 100644
index 07f2ae21025c..000000000000
--- a/docs/devguide/testing.asciidoc
+++ /dev/null
@@ -1,110 +0,0 @@
-[[testing]]
-=== Testing
-
-Beats has various sets of tests. This guide should help you understand how the different test suites work, how they are used, and how new tests are added.
-
-In general there are two major test suites:
-
-* Tests written in Go
-* Tests written in Python
-
-The tests written in Go use the https://golang.org/pkg/testing/[Go Testing
-package]. The tests written in Python depend on https://docs.pytest.org/en/latest/[pytest] and require a compiled and executable binary from the Go code. The Python tests run a Beat with a specific config and parameters and check either that the output is as expected or that the correct things show up in the logs.
-
-For both of the above test suites, so-called integration tests exist.
-Integration tests in Beats are tests which require an external system like Elasticsearch to test if the integration with this service works as expected. Beats provides Docker containers and docker-compose files in its test suite to start these environments, but a developer can also run the required services locally.
-
-==== Running Go Tests
-
-The Go tests can be executed in each Go package by running `go test .`. This will execute all tests which don't require an external service to be running. To run all non-integration tests for a Beat, run `mage unitTest`.
-
-All Go tests are in the same package as the tested code itself and have the suffix `_test` in the file name. Most of the tests are in the same package as the rest of the code. Some of the tests which should be separate from the rest of the code, or should not use private variables, go under `{packagename}_test`.
-
-===== Running Go Integration Tests
-
-Integration tests are labelled with the `//go:build integration` build tag and use the `_integration_test.go` suffix.
-
-To run the integration tests, use the `mage goIntegTest` target, which will start the required services using https://docs.docker.com/compose/[docker-compose] and run all integration tests.
-
-It is also possible to run module-specific integration tests. For example, to run Kafka-only tests, use `MODULE=kafka mage integTest -v`.
-
-It is possible to start the `docker-compose` services manually to allow selecting which specific tests should be run. An example follows for Filebeat:
-
-[source,bash]
----- 
-cd filebeat
-# Pull and build the containers. Only needs to be done once unless you change the containers.
-mage docker:composeBuild
-# Bring up all containers, wait until they are healthy, and put them in the background.
-mage docker:composeUp
-# Run all integration tests.
-go test ./filebeat/... -tags integration
-# Stop all started containers.
-mage docker:composeDown
----- 
-
-===== Generate sample events
-
-Go tests support generating sample events to be used as fixtures.
-
-This generation can be performed by running `go test --data`. This functionality is supported by Packetbeat and Metricbeat.
-
-In Metricbeat, run the command from within a module like this: `go test --tags integration,azure --data --run "TestData"`. Make sure to add the relevant tags (`integration` is common; then add module- and metricset-specific tags).
-
-A note about tags: the `--data` flag is a custom flag added by the Metricbeat and Packetbeat frameworks. It will not be present if tags do not match, as the relevant code will not be run and is silently skipped (without the tag, the test file is ignored by the Go compiler, so the framework doesn't load). This may happen if there are different tags in the build tags of the metricset under test (e.g. the GCP billing metricset requires the `billing` tag too).
-
-==== Running System (integration) Tests (Python and Go)
-
-The system tests are defined in the `tests/system` (for legacy Python tests) and `tests/integration` (for Go tests) directories. They require a testing binary to be available and the Python environment to be set up.
-
-To create the testing binary, run `mage buildSystemTestBinary`. This will create the test binary in the beat directory. To set up the Python testing environment, run `mage pythonVirtualEnv`, which will create a virtual environment with all test dependencies and print its location. To activate it, the instructions depend on your operating system. See the https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#activating-a-virtual-environment[virtualenv documentation].
-
-To run the system and integration tests, use the `mage pythonIntegTest` target, which will start the required services using https://docs.docker.com/compose/[docker-compose] and run all integration tests. Similar to the Go integration tests, the individual steps can be done manually to allow selecting which tests should be run:
-
-[source,bash]
---- -
-# Create and activate the system test virtual environment (assumes a Unix system).
-source $(mage pythonVirtualEnv)/bin/activate
-
-# Pull and build the containers. Only needs to be done once unless you change the containers.
-mage docker:composeBuild
-
-# Bring up all containers, wait until they are healthy, and put them in the background.
-mage docker:composeUp
-
-# Run all system and integration tests.
-INTEGRATION_TESTS=1 pytest ./tests/system
-
-# Stop all started containers.
-mage docker:composeDown
---- -
-
-Filebeat's module Python tests have additional documentation, found in the <> guide.
-
-==== Test commands
-
-To list all mage commands, run `mage -l`. A quick summary of the available test Make commands is:
-
-* `unit`: Go tests
-* `unit-tests`: Go tests with coverage reports
-* `integration-tests`: Go tests with services in local docker
-* `integration-tests-environment`: Go tests inside docker with services in docker
-* `fast-system-tests`: Python tests
-* `system-tests`: Python tests with coverage report
-* `INTEGRATION_TESTS=1 system-tests`: Python tests with local services
-* `system-tests-environment`: Python tests inside docker with services in docker
-* `testsuite`: Runs the complete test suite in the docker environment
-* `test`: Runs the testsuite without the environment
-
-There are two experimental test commands:
-
-* `benchmark-tests`: Runs Go tests with the `-bench` flag
-* `load-tests`: Runs system tests with the `LOAD_TESTS=1` flag
-
-
-==== Coverage report
-
-If the tests were run to create test coverage, the coverage report files can be found under `build/docs`. To create a more human-readable file out of a `.cov` file, use `make coverage-report`. It creates a `.html` file for each report, and a `full.html` as a summary of all reports together, in the `build/coverage` directory.
-
-==== Race detection
-
-All tests can be run with the Go race detector enabled by setting the environment variable `RACE_DETECTOR=1`. This applies to tests in Go and Python. For Python, the test binary has to be recompiled when the flag is changed. Having the race detector enabled will slow down the tests.
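-
-For example, a minimal sketch of race-enabled runs, assuming the variable is
-read by the targets mentioned above:
-
-[source,shell]
-----
-# Run the Go unit tests with the race detector enabled.
-RACE_DETECTOR=1 mage unitTest
-
-# Rebuild the system test binary with the flag set before running the
-# Python tests, since the binary is recompiled when the flag changes.
-RACE_DETECTOR=1 mage buildSystemTestBinary
-----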
diff --git a/docs/docset.yml b/docs/docset.yml new file mode 100644 index 000000000000..48aef0b01f44 --- /dev/null +++ b/docs/docset.yml @@ -0,0 +1,491 @@ +project: 'Beats docs' +cross_links: + - docs-content + - ecs + - elasticsearch + - integration-docs + - logstash +toc: + - toc: reference + - toc: release-notes + - toc: extend +subs: + ref: "https://www.elastic.co/guide/en/elasticsearch/reference/current" + ref-bare: "https://www.elastic.co/guide/en/elasticsearch/reference" + ref-8x: "https://www.elastic.co/guide/en/elasticsearch/reference/8.1" + ref-80: "https://www.elastic.co/guide/en/elasticsearch/reference/8.0" + ref-7x: "https://www.elastic.co/guide/en/elasticsearch/reference/7.17" + ref-70: "https://www.elastic.co/guide/en/elasticsearch/reference/7.0" + ref-60: "https://www.elastic.co/guide/en/elasticsearch/reference/6.0" + ref-64: "https://www.elastic.co/guide/en/elasticsearch/reference/6.4" + xpack-ref: "https://www.elastic.co/guide/en/x-pack/6.2" + logstash-ref: "https://www.elastic.co/guide/en/logstash/current" + kibana-ref: "https://www.elastic.co/guide/en/kibana/current" + kibana-ref-all: "https://www.elastic.co/guide/en/kibana" + beats-ref-root: "https://www.elastic.co/guide/en/beats" + beats-ref: "https://www.elastic.co/guide/en/beats/libbeat/current" + beats-ref-60: "https://www.elastic.co/guide/en/beats/libbeat/6.0" + beats-ref-63: "https://www.elastic.co/guide/en/beats/libbeat/6.3" + beats-devguide: "https://www.elastic.co/guide/en/beats/devguide/current" + auditbeat-ref: "https://www.elastic.co/guide/en/beats/auditbeat/current" + packetbeat-ref: "https://www.elastic.co/guide/en/beats/packetbeat/current" + metricbeat-ref: "https://www.elastic.co/guide/en/beats/metricbeat/current" + filebeat-ref: "https://www.elastic.co/guide/en/beats/filebeat/current" + functionbeat-ref: "https://www.elastic.co/guide/en/beats/functionbeat/current" + winlogbeat-ref: "https://www.elastic.co/guide/en/beats/winlogbeat/current" + heartbeat-ref: "https://www.elastic.co/guide/en/beats/heartbeat/current" + journalbeat-ref: "https://www.elastic.co/guide/en/beats/journalbeat/current" + ingest-guide: "https://www.elastic.co/guide/en/ingest/current" + fleet-guide: "https://www.elastic.co/guide/en/fleet/current" + apm-guide-ref: "https://www.elastic.co/guide/en/apm/guide/current" + apm-guide-7x: "https://www.elastic.co/guide/en/apm/guide/7.17" + apm-app-ref: "https://www.elastic.co/guide/en/kibana/current" + apm-agents-ref: "https://www.elastic.co/guide/en/apm/agent" + apm-android-ref: "https://www.elastic.co/guide/en/apm/agent/android/current" + apm-py-ref: "https://www.elastic.co/guide/en/apm/agent/python/current" + apm-py-ref-3x: "https://www.elastic.co/guide/en/apm/agent/python/3.x" + apm-node-ref-index: "https://www.elastic.co/guide/en/apm/agent/nodejs" + apm-node-ref: "https://www.elastic.co/guide/en/apm/agent/nodejs/current" + apm-node-ref-1x: "https://www.elastic.co/guide/en/apm/agent/nodejs/1.x" + apm-rum-ref: "https://www.elastic.co/guide/en/apm/agent/rum-js/current" + apm-ruby-ref: "https://www.elastic.co/guide/en/apm/agent/ruby/current" + apm-java-ref: "https://www.elastic.co/guide/en/apm/agent/java/current" + apm-go-ref: "https://www.elastic.co/guide/en/apm/agent/go/current" + apm-dotnet-ref: "https://www.elastic.co/guide/en/apm/agent/dotnet/current" + apm-php-ref: "https://www.elastic.co/guide/en/apm/agent/php/current" + apm-ios-ref: "https://www.elastic.co/guide/en/apm/agent/swift/current" + apm-lambda-ref: "https://www.elastic.co/guide/en/apm/lambda/current" + apm-attacher-ref: 
"https://www.elastic.co/guide/en/apm/attacher/current" + docker-logging-ref: "https://www.elastic.co/guide/en/beats/loggingplugin/current" + esf-ref: "https://www.elastic.co/guide/en/esf/current" + kinesis-firehose-ref: "https://www.elastic.co/guide/en/kinesis/{{kinesis_version}}" + estc-welcome-current: "https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current" + estc-welcome: "https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current" + estc-welcome-all: "https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions" + hadoop-ref: "https://www.elastic.co/guide/en/elasticsearch/hadoop/current" + stack-ref: "https://www.elastic.co/guide/en/elastic-stack/current" + stack-ref-67: "https://www.elastic.co/guide/en/elastic-stack/6.7" + stack-ref-68: "https://www.elastic.co/guide/en/elastic-stack/6.8" + stack-ref-70: "https://www.elastic.co/guide/en/elastic-stack/7.0" + stack-ref-80: "https://www.elastic.co/guide/en/elastic-stack/8.0" + stack-ov: "https://www.elastic.co/guide/en/elastic-stack-overview/current" + stack-gs: "https://www.elastic.co/guide/en/elastic-stack-get-started/current" + stack-gs-current: "https://www.elastic.co/guide/en/elastic-stack-get-started/current" + javaclient: "https://www.elastic.co/guide/en/elasticsearch/client/java-api/current" + java-api-client: "https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current" + java-rest: "https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current" + jsclient: "https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current" + jsclient-current: "https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current" + es-ruby-client: "https://www.elastic.co/guide/en/elasticsearch/client/ruby-api/current" + es-dotnet-client: "https://www.elastic.co/guide/en/elasticsearch/client/net-api/current" + es-php-client: "https://www.elastic.co/guide/en/elasticsearch/client/php-api/current" + es-python-client: "https://www.elastic.co/guide/en/elasticsearch/client/python-api/current" + defguide: "https://www.elastic.co/guide/en/elasticsearch/guide/2.x" + painless: "https://www.elastic.co/guide/en/elasticsearch/painless/current" + plugins: "https://www.elastic.co/guide/en/elasticsearch/plugins/current" + plugins-8x: "https://www.elastic.co/guide/en/elasticsearch/plugins/8.1" + plugins-7x: "https://www.elastic.co/guide/en/elasticsearch/plugins/7.17" + plugins-6x: "https://www.elastic.co/guide/en/elasticsearch/plugins/6.8" + glossary: "https://www.elastic.co/guide/en/elastic-stack-glossary/current" + upgrade_guide: "https://www.elastic.co/products/upgrade_guide" + blog-ref: "https://www.elastic.co/blog/" + curator-ref: "https://www.elastic.co/guide/en/elasticsearch/client/curator/current" + curator-ref-current: "https://www.elastic.co/guide/en/elasticsearch/client/curator/current" + metrics-ref: "https://www.elastic.co/guide/en/metrics/current" + metrics-guide: "https://www.elastic.co/guide/en/metrics/guide/current" + logs-ref: "https://www.elastic.co/guide/en/logs/current" + logs-guide: "https://www.elastic.co/guide/en/logs/guide/current" + uptime-guide: "https://www.elastic.co/guide/en/uptime/current" + observability-guide: "https://www.elastic.co/guide/en/observability/current" + observability-guide-all: "https://www.elastic.co/guide/en/observability" + siem-guide: "https://www.elastic.co/guide/en/siem/guide/current" + security-guide: "https://www.elastic.co/guide/en/security/current" + 
security-guide-all: "https://www.elastic.co/guide/en/security"
+  endpoint-guide: "https://www.elastic.co/guide/en/endpoint/current"
+  sql-odbc: "https://www.elastic.co/guide/en/elasticsearch/sql-odbc/current"
+  ecs-ref: "https://www.elastic.co/guide/en/ecs/current"
+  ecs-logging-ref: "https://www.elastic.co/guide/en/ecs-logging/overview/current"
+  ecs-logging-go-logrus-ref: "https://www.elastic.co/guide/en/ecs-logging/go-logrus/current"
+  ecs-logging-go-zap-ref: "https://www.elastic.co/guide/en/ecs-logging/go-zap/current"
+  ecs-logging-go-zerolog-ref: "https://www.elastic.co/guide/en/ecs-logging/go-zerolog/current"
+  ecs-logging-java-ref: "https://www.elastic.co/guide/en/ecs-logging/java/current"
+  ecs-logging-dotnet-ref: "https://www.elastic.co/guide/en/ecs-logging/dotnet/current"
+  ecs-logging-nodejs-ref: "https://www.elastic.co/guide/en/ecs-logging/nodejs/current"
+  ecs-logging-php-ref: "https://www.elastic.co/guide/en/ecs-logging/php/current"
+  ecs-logging-python-ref: "https://www.elastic.co/guide/en/ecs-logging/python/current"
+  ecs-logging-ruby-ref: "https://www.elastic.co/guide/en/ecs-logging/ruby/current"
+  ml-docs: "https://www.elastic.co/guide/en/machine-learning/current"
+  eland-docs: "https://www.elastic.co/guide/en/elasticsearch/client/eland/current"
+  eql-ref: "https://eql.readthedocs.io/en/latest/query-guide"
+  extendtrial: "https://www.elastic.co/trialextension"
+  wikipedia: "https://en.wikipedia.org/wiki"
+  forum: "https://discuss.elastic.co/"
+  xpack-forum: "https://discuss.elastic.co/c/50-x-pack"
+  security-forum: "https://discuss.elastic.co/c/x-pack/shield"
+  watcher-forum: "https://discuss.elastic.co/c/x-pack/watcher"
+  monitoring-forum: "https://discuss.elastic.co/c/x-pack/marvel"
+  graph-forum: "https://discuss.elastic.co/c/x-pack/graph"
+  apm-forum: "https://discuss.elastic.co/c/apm"
+  enterprise-search-ref: "https://www.elastic.co/guide/en/enterprise-search/current"
+  app-search-ref: "https://www.elastic.co/guide/en/app-search/current"
+  workplace-search-ref: "https://www.elastic.co/guide/en/workplace-search/current"
+  enterprise-search-node-ref: "https://www.elastic.co/guide/en/enterprise-search-clients/enterprise-search-node/current"
+  enterprise-search-php-ref: "https://www.elastic.co/guide/en/enterprise-search-clients/php/current"
+  enterprise-search-python-ref: "https://www.elastic.co/guide/en/enterprise-search-clients/python/current"
+  enterprise-search-ruby-ref: "https://www.elastic.co/guide/en/enterprise-search-clients/ruby/current"
+  elastic-maps-service: "https://maps.elastic.co"
+  integrations-docs: "https://docs.elastic.co/en/integrations"
+  integrations-devguide: "https://www.elastic.co/guide/en/integrations-developer/current"
+  time-units: "https://www.elastic.co/guide/en/elasticsearch/reference/current/api-conventions.html#time-units"
+  byte-units: "https://www.elastic.co/guide/en/elasticsearch/reference/current/api-conventions.html#byte-units"
+  apm-py-ref-v: "https://www.elastic.co/guide/en/apm/agent/python/current"
+  apm-node-ref-v: "https://www.elastic.co/guide/en/apm/agent/nodejs/current"
+  apm-rum-ref-v: "https://www.elastic.co/guide/en/apm/agent/rum-js/current"
+  apm-ruby-ref-v: "https://www.elastic.co/guide/en/apm/agent/ruby/current"
+  apm-java-ref-v: "https://www.elastic.co/guide/en/apm/agent/java/current"
+  apm-go-ref-v: "https://www.elastic.co/guide/en/apm/agent/go/current"
+  apm-ios-ref-v: "https://www.elastic.co/guide/en/apm/agent/swift/current"
+  apm-dotnet-ref-v: "https://www.elastic.co/guide/en/apm/agent/dotnet/current"
+  apm-php-ref-v:
"https://www.elastic.co/guide/en/apm/agent/php/current" + ecloud: "Elastic Cloud" + esf: "Elastic Serverless Forwarder" + ess: "Elasticsearch Service" + ece: "Elastic Cloud Enterprise" + eck: "Elastic Cloud on Kubernetes" + serverless-full: "Elastic Cloud Serverless" + serverless-short: "Serverless" + es-serverless: "Elasticsearch Serverless" + es3: "Elasticsearch Serverless" + obs-serverless: "Elastic Observability Serverless" + sec-serverless: "Elastic Security Serverless" + serverless-docs: "https://docs.elastic.co/serverless" + cloud: "https://www.elastic.co/guide/en/cloud/current" + ess-utm-params: "?page=docs&placement=docs-body" + ess-baymax: "?page=docs&placement=docs-body" + ess-trial: "https://cloud.elastic.co/registration?page=docs&placement=docs-body" + ess-product: "https://www.elastic.co/cloud/elasticsearch-service?page=docs&placement=docs-body" + ess-console: "https://cloud.elastic.co?page=docs&placement=docs-body" + ess-console-name: "Elasticsearch Service Console" + ess-deployments: "https://cloud.elastic.co/deployments?page=docs&placement=docs-body" + ece-ref: "https://www.elastic.co/guide/en/cloud-enterprise/current" + eck-ref: "https://www.elastic.co/guide/en/cloud-on-k8s/current" + ess-leadin: "You can run Elasticsearch on your own hardware or use our hosted Elasticsearch Service that is available on AWS, GCP, and Azure. https://cloud.elastic.co/registration{ess-utm-params}[Try the Elasticsearch Service for free]." + ess-leadin-short: "Our hosted Elasticsearch Service is available on AWS, GCP, and Azure, and you can https://cloud.elastic.co/registration{ess-utm-params}[try it for free]." + ess-icon: "image:https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg[link=\"https://cloud.elastic.co/registration{ess-utm-params}\", title=\"Supported on Elasticsearch Service\"]" + ece-icon: "image:https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud_ece.svg[link=\"https://cloud.elastic.co/registration{ess-utm-params}\", title=\"Supported on Elastic Cloud Enterprise\"]" + cloud-only: "This feature is designed for indirect use by https://cloud.elastic.co/registration{ess-utm-params}[Elasticsearch Service], https://www.elastic.co/guide/en/cloud-enterprise/{ece-version-link}[Elastic Cloud Enterprise], and https://www.elastic.co/guide/en/cloud-on-k8s/current[Elastic Cloud on Kubernetes]. Direct use is not supported." + ess-setting-change: "image:https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg[link=\"{ess-trial}\", title=\"Supported on {ess}\"] indicates a change to a supported https://www.elastic.co/guide/en/cloud/current/ec-add-user-settings.html[user setting] for Elasticsearch Service." + ess-skip-section: "If you use Elasticsearch Service, skip this section. Elasticsearch Service handles these changes for you." + api-cloud: "https://www.elastic.co/docs/api/doc/cloud" + api-ece: "https://www.elastic.co/docs/api/doc/cloud-enterprise" + api-kibana-serverless: "https://www.elastic.co/docs/api/doc/serverless" + es-feature-flag: "This feature is in development and not yet available for use. This documentation is provided for informational purposes only." 
+ es-ref-dir: "'{{elasticsearch-root}}/docs/reference'" + apm-app: "APM app" + uptime-app: "Uptime app" + synthetics-app: "Synthetics app" + logs-app: "Logs app" + metrics-app: "Metrics app" + infrastructure-app: "Infrastructure app" + siem-app: "SIEM app" + security-app: "Elastic Security app" + ml-app: "Machine Learning" + dev-tools-app: "Dev Tools" + ingest-manager-app: "Ingest Manager" + stack-manage-app: "Stack Management" + stack-monitor-app: "Stack Monitoring" + alerts-ui: "Alerts and Actions" + rules-ui: "Rules" + rac-ui: "Rules and Connectors" + connectors-ui: "Connectors" + connectors-feature: "Actions and Connectors" + stack-rules-feature: "Stack Rules" + user-experience: "User Experience" + ems: "Elastic Maps Service" + ems-init: "EMS" + hosted-ems: "Elastic Maps Server" + ipm-app: "Index Pattern Management" + ingest-pipelines: "ingest pipelines" + ingest-pipelines-app: "Ingest Pipelines" + ingest-pipelines-cap: "Ingest pipelines" + ls-pipelines: "Logstash pipelines" + ls-pipelines-app: "Logstash Pipelines" + maint-windows: "maintenance windows" + maint-windows-app: "Maintenance Windows" + maint-windows-cap: "Maintenance windows" + custom-roles-app: "Custom Roles" + data-source: "data view" + data-sources: "data views" + data-source-caps: "Data View" + data-sources-caps: "Data Views" + data-source-cap: "Data view" + data-sources-cap: "Data views" + project-settings: "Project settings" + manage-app: "Management" + index-manage-app: "Index Management" + data-views-app: "Data Views" + rules-app: "Rules" + saved-objects-app: "Saved Objects" + tags-app: "Tags" + api-keys-app: "API keys" + transforms-app: "Transforms" + connectors-app: "Connectors" + files-app: "Files" + reports-app: "Reports" + maps-app: "Maps" + alerts-app: "Alerts" + crawler: "Enterprise Search web crawler" + ents: "Enterprise Search" + app-search-crawler: "App Search web crawler" + agent: "Elastic Agent" + agents: "Elastic Agents" + fleet: "Fleet" + fleet-server: "Fleet Server" + integrations-server: "Integrations Server" + ingest-manager: "Ingest Manager" + ingest-management: "ingest management" + package-manager: "Elastic Package Manager" + integrations: "Integrations" + package-registry: "Elastic Package Registry" + artifact-registry: "Elastic Artifact Registry" + aws: "AWS" + stack: "Elastic Stack" + xpack: "X-Pack" + es: "Elasticsearch" + kib: "Kibana" + esms: "Elastic Stack Monitoring Service" + esms-init: "ESMS" + ls: "Logstash" + beats: "Beats" + auditbeat: "Auditbeat" + filebeat: "Filebeat" + heartbeat: "Heartbeat" + metricbeat: "Metricbeat" + packetbeat: "Packetbeat" + winlogbeat: "Winlogbeat" + functionbeat: "Functionbeat" + journalbeat: "Journalbeat" + es-sql: "Elasticsearch SQL" + esql: "ES|QL" + elastic-agent: "Elastic Agent" + k8s: "Kubernetes" + log-driver-long: "Elastic Logging Plugin for Docker" + security: "X-Pack security" + security-features: "security features" + operator-feature: "operator privileges feature" + es-security-features: "Elasticsearch security features" + stack-security-features: "Elastic Stack security features" + endpoint-sec: "Endpoint Security" + endpoint-cloud-sec: "Endpoint and Cloud Security" + elastic-defend: "Elastic Defend" + elastic-sec: "Elastic Security" + elastic-endpoint: "Elastic Endpoint" + swimlane: "Swimlane" + sn: "ServiceNow" + sn-itsm: "ServiceNow ITSM" + sn-itom: "ServiceNow ITOM" + sn-sir: "ServiceNow SecOps" + jira: "Jira" + ibm-r: "IBM Resilient" + webhook: "Webhook" + webhook-cm: "Webhook - Case Management" + opsgenie: "Opsgenie" + bedrock: "Amazon 
Bedrock" + gemini: "Google Gemini" + hive: "TheHive" + monitoring: "X-Pack monitoring" + monitor-features: "monitoring features" + stack-monitor-features: "Elastic Stack monitoring features" + watcher: "Watcher" + alert-features: "alerting features" + reporting: "X-Pack reporting" + report-features: "reporting features" + graph: "X-Pack graph" + graph-features: "graph analytics features" + searchprofiler: "Search Profiler" + xpackml: "X-Pack machine learning" + ml: "machine learning" + ml-cap: "Machine learning" + ml-init: "ML" + ml-features: "machine learning features" + stack-ml-features: "Elastic Stack machine learning features" + ccr: "cross-cluster replication" + ccr-cap: "Cross-cluster replication" + ccr-init: "CCR" + ccs: "cross-cluster search" + ccs-cap: "Cross-cluster search" + ccs-init: "CCS" + ilm: "index lifecycle management" + ilm-cap: "Index lifecycle management" + ilm-init: "ILM" + dlm: "data lifecycle management" + dlm-cap: "Data lifecycle management" + dlm-init: "DLM" + search-snap: "searchable snapshot" + search-snaps: "searchable snapshots" + search-snaps-cap: "Searchable snapshots" + slm: "snapshot lifecycle management" + slm-cap: "Snapshot lifecycle management" + slm-init: "SLM" + rollup-features: "data rollup features" + ipm: "index pattern management" + ipm-cap: "Index pattern" + rollup: "rollup" + rollup-cap: "Rollup" + rollups: "rollups" + rollups-cap: "Rollups" + rollup-job: "rollup job" + rollup-jobs: "rollup jobs" + rollup-jobs-cap: "Rollup jobs" + dfeed: "datafeed" + dfeeds: "datafeeds" + dfeed-cap: "Datafeed" + dfeeds-cap: "Datafeeds" + ml-jobs: "machine learning jobs" + ml-jobs-cap: "Machine learning jobs" + anomaly-detect: "anomaly detection" + anomaly-detect-cap: "Anomaly detection" + anomaly-job: "anomaly detection job" + anomaly-jobs: "anomaly detection jobs" + anomaly-jobs-cap: "Anomaly detection jobs" + dataframe: "data frame" + dataframes: "data frames" + dataframe-cap: "Data frame" + dataframes-cap: "Data frames" + watcher-transform: "payload transform" + watcher-transforms: "payload transforms" + watcher-transform-cap: "Payload transform" + watcher-transforms-cap: "Payload transforms" + transform: "transform" + transforms: "transforms" + transform-cap: "Transform" + transforms-cap: "Transforms" + dataframe-transform: "transform" + dataframe-transform-cap: "Transform" + dataframe-transforms: "transforms" + dataframe-transforms-cap: "Transforms" + dfanalytics-cap: "Data frame analytics" + dfanalytics: "data frame analytics" + dataframe-analytics-config: "'{dataframe} analytics config'" + dfanalytics-job: "'{dataframe} analytics job'" + dfanalytics-jobs: "'{dataframe} analytics jobs'" + dfanalytics-jobs-cap: "'{dataframe-cap} analytics jobs'" + cdataframe: "continuous data frame" + cdataframes: "continuous data frames" + cdataframe-cap: "Continuous data frame" + cdataframes-cap: "Continuous data frames" + cdataframe-transform: "continuous transform" + cdataframe-transforms: "continuous transforms" + cdataframe-transforms-cap: "Continuous transforms" + ctransform: "continuous transform" + ctransform-cap: "Continuous transform" + ctransforms: "continuous transforms" + ctransforms-cap: "Continuous transforms" + oldetection: "outlier detection" + oldetection-cap: "Outlier detection" + olscore: "outlier score" + olscores: "outlier scores" + fiscore: "feature influence score" + evaluatedf-api: "evaluate {dataframe} analytics API" + evaluatedf-api-cap: "Evaluate {dataframe} analytics API" + binarysc: "binary soft classification" + binarysc-cap: "Binary soft 
classification" + regression: "regression" + regression-cap: "Regression" + reganalysis: "regression analysis" + reganalysis-cap: "Regression analysis" + depvar: "dependent variable" + feature-var: "feature variable" + feature-vars: "feature variables" + feature-vars-cap: "Feature variables" + classification: "classification" + classification-cap: "Classification" + classanalysis: "classification analysis" + classanalysis-cap: "Classification analysis" + infer-cap: "Inference" + infer: "inference" + lang-ident-cap: "Language identification" + lang-ident: "language identification" + data-viz: "Data Visualizer" + file-data-viz: "File Data Visualizer" + feat-imp: "feature importance" + feat-imp-cap: "Feature importance" + nlp: "natural language processing" + nlp-cap: "Natural language processing" + apm-agent: "APM agent" + apm-go-agent: "Elastic APM Go agent" + apm-go-agents: "Elastic APM Go agents" + apm-ios-agent: "Elastic APM iOS agent" + apm-ios-agents: "Elastic APM iOS agents" + apm-java-agent: "Elastic APM Java agent" + apm-java-agents: "Elastic APM Java agents" + apm-dotnet-agent: "Elastic APM .NET agent" + apm-dotnet-agents: "Elastic APM .NET agents" + apm-node-agent: "Elastic APM Node.js agent" + apm-node-agents: "Elastic APM Node.js agents" + apm-php-agent: "Elastic APM PHP agent" + apm-php-agents: "Elastic APM PHP agents" + apm-py-agent: "Elastic APM Python agent" + apm-py-agents: "Elastic APM Python agents" + apm-ruby-agent: "Elastic APM Ruby agent" + apm-ruby-agents: "Elastic APM Ruby agents" + apm-rum-agent: "Elastic APM Real User Monitoring (RUM) JavaScript agent" + apm-rum-agents: "Elastic APM RUM JavaScript agents" + apm-lambda-ext: "Elastic APM AWS Lambda extension" + project-monitors: "project monitors" + project-monitors-cap: "Project monitors" + private-location: "Private Location" + private-locations: "Private Locations" + pwd: "YOUR_PASSWORD" + esh: "ES-Hadoop" + default-dist: "default distribution" + oss-dist: "OSS-only distribution" + observability: "Observability" + api-request-title: "Request" + api-prereq-title: "Prerequisites" + api-description-title: "Description" + api-path-parms-title: "Path parameters" + api-query-parms-title: "Query parameters" + api-request-body-title: "Request body" + api-response-codes-title: "Response codes" + api-response-body-title: "Response body" + api-example-title: "Example" + api-examples-title: "Examples" + api-definitions-title: "Properties" + multi-arg: "†footnoteref:[multi-arg,This parameter accepts multiple arguments.]" + multi-arg-ref: "†footnoteref:[multi-arg]" + yes-icon: "image:https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png[Yes,20,15]" + no-icon: "image:https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png[No,20,15]" + es-repo: "https://github.com/elastic/elasticsearch/" + es-issue: "https://github.com/elastic/elasticsearch/issues/" + es-pull: "https://github.com/elastic/elasticsearch/pull/" + es-commit: "https://github.com/elastic/elasticsearch/commit/" + kib-repo: "https://github.com/elastic/kibana/" + kib-issue: "https://github.com/elastic/kibana/issues/" + kibana-issue: "'{kib-repo}issues/'" + kib-pull: "https://github.com/elastic/kibana/pull/" + kibana-pull: "'{kib-repo}pull/'" + kib-commit: "https://github.com/elastic/kibana/commit/" + ml-repo: "https://github.com/elastic/ml-cpp/" + ml-issue: "https://github.com/elastic/ml-cpp/issues/" + ml-pull: "https://github.com/elastic/ml-cpp/pull/" + ml-commit: "https://github.com/elastic/ml-cpp/commit/" + apm-repo: "https://github.com/elastic/apm-server/" + 
apm-issue: "https://github.com/elastic/apm-server/issues/"
+  apm-pull: "https://github.com/elastic/apm-server/pull/"
+  kibana-blob: "https://github.com/elastic/kibana/blob/current/"
+  apm-get-started-ref: "https://www.elastic.co/guide/en/apm/get-started/current"
+  apm-server-ref: "https://www.elastic.co/guide/en/apm/server/current"
+  apm-server-ref-v: "https://www.elastic.co/guide/en/apm/server/current"
+  apm-server-ref-m: "https://www.elastic.co/guide/en/apm/server/master"
+  apm-server-ref-62: "https://www.elastic.co/guide/en/apm/server/6.2"
+  apm-server-ref-64: "https://www.elastic.co/guide/en/apm/server/6.4"
+  apm-server-ref-70: "https://www.elastic.co/guide/en/apm/server/7.0"
+  apm-overview-ref-v: "https://www.elastic.co/guide/en/apm/get-started/current"
+  apm-overview-ref-70: "https://www.elastic.co/guide/en/apm/get-started/7.0"
+  apm-overview-ref-m: "https://www.elastic.co/guide/en/apm/get-started/master"
+  infra-guide: "https://www.elastic.co/guide/en/infrastructure/guide/current"
+  a-data-source: "a data view"
+  icon-bug: "pass:[]"
+  icon-checkInCircleFilled: "pass:[]"
+  icon-warningFilled: "pass:[]"
diff --git a/docs/extend/_migrating_dashboards_from_kibana_5_x_to_6_x.md b/docs/extend/_migrating_dashboards_from_kibana_5_x_to_6_x.md
new file mode 100644
index 000000000000..fdc6a7b93fe2
--- /dev/null
+++ b/docs/extend/_migrating_dashboards_from_kibana_5_x_to_6_x.md
@@ -0,0 +1,84 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/_migrating_dashboards_from_kibana_5_x_to_6_x.html
+---
+
+# Migrating dashboards from Kibana 5.x to 6.x [_migrating_dashboards_from_kibana_5_x_to_6_x]
+
+This section is useful for community Beats developers who need to migrate their Kibana 5.x dashboards to 6.x.
+
+In Kibana 5.x, the saved dashboards consist of multiple JSON files, one for each dashboard, search, visualization, and index-pattern. To import a dashboard into Kibana, you need to load not only the JSON file containing the dashboard, but also all its dependencies (searches, visualizations).
+
+Starting with Kibana 6.0, the dashboards are loaded by default via the Kibana API. In this case, the saved dashboard consists of a single JSON file that includes not only the dashboard content, but also all its dependencies.
+
+As the format of the dashboards and index-pattern for Kibana 5.x is different from the one for Kibana 6.x, they are placed in different directories. Depending on the Kibana version, the 5.x or 6.x dashboards are loaded.
+
+The Kibana 5.x dashboards are placed under the `5.x` directory, which contains the following directories:
+
+- search
+- visualization
+- dashboard
+- index-pattern
+
+The Kibana 6.x dashboards and later are placed under the `default` directory, which contains the following directories:
+
+- dashboard
+- index-pattern
+
+::::{note}
+Make sure the `5.x` and `default` directories are created before running the following commands.
+::::
+
+To migrate your Kibana 5.x dashboards to Kibana 6.0 and above, import the dashboards into Kibana 5.6 and then export them using the Beats 6.0 version:
+
+* Start Kibana 5.6.
+* Import the Kibana 5.x dashboards using the Beats 6.0 version.
+
+Before importing the dashboards, make sure you run `make update` in the Beat directory, which updates the `_meta/kibana` directory. It generates the index-pattern from the `fields.yml` file and places it under the `5.x/index-pattern` and `default/index-pattern` directories. In the case of Metricbeat, Filebeat, and Auditbeat, it collects the dashboards from all the modules into the `_meta/kibana` directory.
+
+```shell
+make update
+```
+
+Then load all the Beat’s dashboards. For example, to load the Metricbeat rabbitmq dashboards together with the Metricbeat index-pattern into Kibana 5.6, using the Kibana API:
+
+```shell
+make update
+./metricbeat setup -E setup.dashboards.directory=_meta/kibana
+```
+
+* Export the dashboards using the Beats 6.0 version.
+
+You can export the dashboards via the Kibana API by using the [export_dashboards.go](https://github.com/elastic/beats/blob/main/dev-tools/cmd/dashboards/export_dashboards.go) application.
+
+For example, to export the Metricbeat rabbitmq dashboard:
+
+```shell
+cd beats/metricbeat
+go run ../dev-tools/cmd/dashboards/export_dashboards.go -dashboards Metricbeat-Rabbitmq -output
+module/rabbitmq/_meta/kibana/default/Metricbeat-Rabbitmq.json <1>
+```
+
+1. `Metricbeat-Rabbitmq` is the ID of the dashboard that you want to export.
+
+
+Note: You can get the dashboard ID from the URL of the dashboard in Kibana. Depending on the Kibana version the dashboard was created with, the ID consists of a name or of random characters that can be separated by `-`.
+
+This command creates a single JSON file (Metricbeat-Rabbitmq.json) that contains the dashboard and all its dependencies, like searches and visualizations. The name of the output file has the format `<Beat name>-<dashboard name>.json`.
+
+Starting with Beats 6.0.0, you can create a `yml` file for each module, or for the entire Beat, with all the dashboards. Below is an example of the `module.yml` file for the system module in Metricbeat:
+
+```yaml
+dashboards:
+  - id: Metricbeat-system-overview <1>
+    file: Metricbeat-system-overview.json <2>
+
+  - id: 79ffd6e0-faa0-11e6-947f-177f697178b8
+    file: Metricbeat-host-overview.json
+
+  - id: CPU-slash-Memory-per-container
+    file: Metricbeat-docker-overview.json
+```
+
+1. Dashboard ID.
+2. The JSON file where the dashboard is saved on disk.
+
+
+Using the yml file, you can export all the dashboards for a single module, or for the entire Beat, using a single command:
+
+```shell
+cd metricbeat/module/system
+go run ../../../dev-tools/cmd/dashboards/export_dashboards.go -yml module.yml
+```
+
diff --git a/docs/extend/archive-dashboards.md b/docs/extend/archive-dashboards.md
new file mode 100644
index 000000000000..09b36e0606e6
--- /dev/null
+++ b/docs/extend/archive-dashboards.md
@@ -0,0 +1,19 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/archive-dashboards.html
+---
+
+# Archiving Your Beat Dashboards [archive-dashboards]
+
+The Kibana dashboards for the Elastic Beats are saved under the `kibana` directory. To create a zip archive with the dashboards, including visualizations, searches, and the index pattern, you can run the following command in the Beat repository:
+
+```shell
+make package-dashboards
+```
+
+The Makefile is part of libbeat, which means that community Beats contributors can use the commands shown here to archive dashboards. The dashboards must be available under the `kibana` directory.
+
+Another option would be to create a repository containing only the dashboards, and use the GitHub release functionality to create a zip archive, as shown in the sketch below.
+
+Share the Kibana dashboards archive with the community, so other users can use your cool Kibana visualizations!
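+
+For example, a minimal sketch of creating such an archive by hand (the Beat and archive names are hypothetical):
+
+```shell
+# Archive the kibana directory with the dashboards, visualizations,
+# searches, and index pattern from the root of your Beat repository.
+zip -r mybeat-dashboards.zip kibana/
+```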
+
diff --git a/docs/extend/build-dashboards.md b/docs/extend/build-dashboards.md
new file mode 100644
index 000000000000..be67376a072b
--- /dev/null
+++ b/docs/extend/build-dashboards.md
@@ -0,0 +1,37 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/build-dashboards.html
+---
+
+# Building Your Own Beat Dashboards [build-dashboards]
+
+::::{note}
+If you want to modify a dashboard that comes with a Beat, it’s better to modify a copy of the dashboard, because the Beat overwrites the dashboards during the setup phase in order to have the latest version. To duplicate a dashboard, just use the `Clone` button at the top of the page.
+::::
+
+
+Before building your own dashboards or customizing the existing ones, you need to load:
+
+* the Beat index pattern, which specifies how Kibana should display the Beat fields
+* the Beat dashboards that you want to customize
+
+For the Elastic Beats, the index pattern is available in the Beat package under `kibana/*/index-pattern`. The index-pattern is automatically generated from the `fields.yml` file, available in the Beat package. For more details, check the [generate index pattern](/extend/generate-index-pattern.md) section.
+
+All Beats dashboards, visualizations, and saved searches must follow common naming conventions:
+
+* Dashboard names have the prefix `[BeatName Module]`, e.g. `[Filebeat Nginx] Access logs`
+* Visualizations and searches have the suffix `[BeatName Module]`, e.g. `Top processes [Filebeat Nginx]`
+
+::::{note}
+You can set a custom name (and skip the suffix) for a visualization placed on a dashboard. The original visualization will stay intact.
+::::
+
+
+The naming convention rules can be verified with the tool `mage check`. The command fails if it detects:
+
+* an empty description on a dashboard
+* an unexpected dashboard title format (missing prefix `[BeatName ModuleName]`)
+* an unexpected visualization title format (missing suffix `[BeatName Module]`)
+
+After creating your own dashboards in Kibana, you can [export the Kibana dashboards](/extend/export-dashboards.md) to a local directory, and then [archive the dashboards](/extend/archive-dashboards.md) in order to be able to share the dashboards with the community.
+
diff --git a/docs/extend/community-beats.md b/docs/extend/community-beats.md
new file mode 100644
index 000000000000..279a8e5df5e5
--- /dev/null
+++ b/docs/extend/community-beats.md
@@ -0,0 +1,336 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/community-beats.html
+---
+
+# Community {{beats}} [community-beats]
+
+::::{admonition}
+**Custom Beat generator code no longer available in 8.0 and later**
+
+The custom Beat generator was a helper tool that allowed developers to bootstrap their custom {{beats}}. This tool was deprecated in 7.16 and is no longer available starting in 8.0.
+
+Developers can continue to create custom {{beats}} to address specific and targeted use cases. If you need to create a Beat from scratch, you can use the custom Beat generator tool available in version 7.16 or 7.17 to generate the custom Beat, then upgrade its various components to the 8.x release.
+
+::::
+
+
+This page lists some of the {{beats}} developed by the open source community.
+
+Have a question about developing a community Beat? You can post questions and discuss issues in the [{{beats}} discussion forum](https://discuss.elastic.co/tags/c/elastic-stack/beats/28/beats-development).
+
+Have you created a Beat that’s not listed?
Add the name and description of your Beat to the source document for [Community {{beats}}](https://github.com/elastic/beats/blob/main/libbeat/docs/communitybeats.asciidoc) and [open a pull request](https://help.github.com/articles/using-pull-requests) in the [{{beats}} GitHub repository](https://github.com/elastic/beats) to get your change merged. When you’re ready, go ahead and [announce](https://discuss.elastic.co/c/announcements) your new Beat in the Elastic discussion forum.
+
+::::{note}
+Elastic provides no warranty or support for community-sourced {{beats}}.
+::::
+
+
+[amazonbeat](https://github.com/awormuth/amazonbeat)
+: Reads data from a specified Amazon product.
+
+[apachebeat](https://github.com/radoondas/apachebeat)
+: Reads status from Apache HTTPD server-status.
+
+[apexbeat](https://github.com/verticle-io/apexbeat)
+: Extracts configurable contextual data and metrics from Java applications via the [APEX](http://toolkits.verticle.io) toolkit.
+
+[browserbeat](https://github.com/MelonSmasher/browserbeat)
+: Reads and ships browser history (Chrome, Firefox, & Safari) to an Elastic output.
+
+[cborbeat](https://github.com/toravir/cborbeat)
+: Reads from CBOR-encoded files (specifically log files). More: [CBOR Encoding](https://cbor.io), [Decoder](https://github.com/toravir/csd)
+
+[cloudflarebeat](https://github.com/hartfordfive/cloudflarebeat)
+: Indexes log entries from the Cloudflare Enterprise Log Share API.
+
+[cloudfrontbeat](https://github.com/jarl-tornroos/cloudfrontbeat)
+: Reads log events from Amazon Web Services [CloudFront](https://aws.amazon.com/cloudfront/).
+
+[cloudtrailbeat](https://github.com/aidan-/cloudtrailbeat)
+: Reads events from Amazon Web Services’ [CloudTrail](https://aws.amazon.com/cloudtrail/).
+
+[cloudwatchmetricbeat](https://github.com/narmitech/cloudwatchmetricbeat)
+: A beat for Amazon Web Services’ [CloudWatch Metrics](https://aws.amazon.com/cloudwatch/details/#other-aws-resource-monitoring).
+
+[cloudwatchlogsbeat](https://github.com/e-travel/cloudwatchlogsbeat)
+: Reads log events from Amazon Web Services’ [CloudWatch Logs](https://aws.amazon.com/cloudwatch/details/#log-monitoring).
+
+[collectbeat](https://github.com/eBay/collectbeat)
+: Adds discovery on top of Filebeat and Metricbeat in environments like Kubernetes.
+
+[connbeat](https://github.com/raboof/connbeat)
+: Exposes metadata about TCP connections.
+
+[consulbeat](https://github.com/Pravoru/consulbeat)
+: Reads service health checks from Consul and pushes them to Elastic.
+
+[discobeat](https://github.com/hellmouthengine/discobeat)
+: Reads messages from Discord and indexes them in Elasticsearch.
+
+[dockbeat](https://github.com/Ingensi/dockbeat)
+: Reads Docker container statistics and indexes them in Elasticsearch.
+
+[earthquakebeat](https://github.com/radoondas/earthquakebeat)
+: Pulls data from the [USGS](https://earthquake.usgs.gov/fdsnws/event/1/) earthquake API.
+
+[elasticbeat](https://github.com/radoondas/elasticbeat)
+: Reads status from an Elasticsearch cluster and indexes it in Elasticsearch.
+
+[envoyproxybeat](https://github.com/berfinsari/envoyproxybeat)
+: Reads stats from the Envoy Proxy and indexes them into Elasticsearch.
+
+[etcdbeat](https://github.com/gamegos/etcdbeat)
+: Reads stats from the Etcd v2 API and indexes them into Elasticsearch.
+
+[etherbeat](https://gitlab.com/hatricker/etherbeat)
+: Reads blocks from Ethereum-compatible blockchains and indexes them into Elasticsearch.
+
+[execbeat](https://github.com/christiangalsterer/execbeat)
+: Periodically executes shell commands and sends the standard output and standard error to Logstash or Elasticsearch.
+
+[factbeat](https://github.com/jarpy/factbeat)
+: Collects facts from [Facter](https://github.com/puppetlabs/facter).
+
+[fastcombeat](https://github.com/ctindel/fastcombeat)
+: Periodically gathers internet download speed from [fast.com](https://fast.com).
+
+[fileoccurencebeat](https://github.com/cloudronics/fileoccurancebeat)
+: Checks for file existence recursively under a given directory; handy while handling queues/pipeline buffers.
+
+[flowbeat](https://github.com/FStelzer/flowbeat)
+: Collects, parses, and indexes [sflow](http://www.sflow.org/index.php) samples.
+
+[gabeat](https://github.com/GeneralElectric/GABeat)
+: Collects data from the Google Analytics Realtime API.
+
+[gcsbeat](https://github.com/GoogleCloudPlatform/gcsbeat)
+: Reads data from [Google Cloud Storage](https://cloud.google.com/storage/) buckets.
+
+[gelfbeat](https://github.com/threatstack/gelfbeat)
+: Collects and parses GELF-encoded UDP messages.
+
+[githubbeat](https://github.com/josephlewis42/githubbeat)
+: Easily monitors GitHub repository activity.
+
+[gpfsbeat](https://github.com/hpcugent/gpfsbeat)
+: Collects GPFS metric and quota information.
+
+[hackerbeat](https://github.com/ullaakut/hackerbeat)
+: Indexes the top stories of HackerNews into an ElasticSearch instance.
+
+[hsbeat](https://github.com/YaSuenag/hsbeat)
+: Reads all performance counters in the Java HotSpot VM.
+
+[httpbeat](https://github.com/christiangalsterer/httpbeat)
+: Polls multiple HTTP(S) endpoints and sends the data to Logstash or Elasticsearch. Supports all HTTP methods and proxies.
+
+[hsnburrowbeat](https://github.com/hsngerami/hsnburrowbeat)
+: Monitors Kafka consumer lag for Burrow V1.0.0 (API V3).
+
+[hwsensorsbeat](https://github.com/jasperla/hwsensorsbeat)
+: Reads sensor information from OpenBSD.
+
+[icingabeat](https://github.com/icinga/icingabeat)
+: Ships events and states from Icinga 2 to Elasticsearch or Logstash.
+
+[IIBBeat](https://github.com/visasimbu/IIBBeat)
+: Periodically executes shell or batch commands to collect IBM Integration node, Integration server, app status, bar file deployment time, and bar file location to Logstash or Elasticsearch.
+
+[iobeat](https://github.com/devopsmakers/iobeat)
+: Reads IO stats from /proc/diskstats on Linux.
+
+[jmxproxybeat](https://github.com/radoondas/jmxproxybeat)
+: Reads Tomcat JMX metrics exposed over the *JMX Proxy Servlet* to HTTP.
+
+[journalbeat](https://github.com/mheese/journalbeat)
+: Used for log shipping from systemd/journald based Linux systems.
+
+[kafkabeat](https://github.com/justsocialapps/kafkabeat)
+: Reads data from Kafka topics.
+
+[kafkabeat2](https://github.com/arkady-emelyanov/kafkabeat)
+: Reads data (JSON or plain) from Kafka topics.
+
+[krakenbeat](https://github.com/PPACI/krakenbeat)
+: Collects information on each transaction on the Kraken crypto platform.
+
+[lmsensorsbeat](https://github.com/eskibars/lmsensorsbeat)
+: Collects data from lm-sensors (such as CPU temperatures, fan speeds, and voltages from i2c and smbus).
+
+[logstashbeat](https://github.com/consulthys/logstashbeat)
+: Collects data from the Logstash monitoring API (v5 onwards) and indexes it in Elasticsearch.
+
+[macwifibeat](https://github.com/bozdag/macwifibeat)
+: Reads various indicators for a MacBook’s Wi-Fi signal strength.
+
+[mcqbeat](https://github.com/yedamao/mcqbeat)
+: Reads the status of queues from memcacheq.
+
+[merakibeat](https://developer.cisco.com/codeexchange/github/repo/CiscoDevNet/merakibeat)
+: Collects [wireless health](https://dashboard.meraki.com/api_docs#wireless-health) and users [location analytics](https://documentation.meraki.com/MR/Monitoring_and_Reporting/Scanning_API) data using Cisco Meraki APIs.
+
+[mesosbeat](https://github.com/berfinsari/mesosbeat)
+: Reads stats from the Mesos API and indexes them into Elasticsearch.
+
+[mongobeat](https://github.com/scottcrespo/mongobeat)
+: Monitors MongoDB instances and can be configured to send multiple document types to Elasticsearch.
+
+[mqttbeat](https://github.com/nathan-K-/mqttbeat)
+: Adds messages from MQTT topics to Elasticsearch.
+
+[mysqlbeat](https://github.com/adibendahan/mysqlbeat)
+: Runs any query on MySQL and sends the results to Elasticsearch.
+
+[nagioscheckbeat](https://github.com/PhaedrusTheGreek/nagioscheckbeat)
+: For Nagios checks and performance data.
+
+[natsbeat](https://github.com/nfvsap/natsbeat)
+: Collects data from NATS monitoring endpoints.
+
+[netatmobeat](https://github.com/radoondas/netatmobeat)
+: Reads data from a Netatmo weather station.
+
+[netbeat](https://github.com/hmschreck/netbeat)
+: Reads configurable data from SNMP-enabled devices.
+
+[nginxbeat](https://github.com/mrkschan/nginxbeat)
+: Reads status from Nginx.
+
+[nginxupstreambeat](https://github.com/2Fast2BCn/nginxupstreambeat)
+: Reads upstream status from the nginx upstream module.
+
+[nsqbeat](https://github.com/mschneider82/nsqbeat)
+: Reads data from an NSQ topic.
+
+[nvidiagpubeat](https://github.com/eBay/nvidiagpubeat)
+: Uses nvidia-smi to grab metrics of NVIDIA GPUs.
+
+[o365beat](https://github.com/counteractive/o365beat)
+: Ships Office 365 logs from the O365 Management Activities API.
+
+[openconfigbeat](https://github.com/aristanetworks/openconfigbeat)
+: Streams data from [OpenConfig](http://openconfig.net)-enabled network devices.
+
+[openvpnbeat](https://github.com/nabeel-shakeel/openvpnbeat)
+: Collects OpenVPN connection metrics.
+
+[owmbeat](https://github.com/radoondas/owmbeat)
+: Pulls weather data from all around the world via Open Weather Map and stores and visualizes it in the Elastic Stack.
+
+[packagebeat](https://github.com/joehillen/packagebeat)
+: Collects information about system packages from package managers.
+
+[perfstatbeat](https://github.com/WuerthIT/perfstatbeat)
+: Collects performance metrics on the AIX operating system.
+
+[phishbeat](https://github.com/stric-co/phishbeat)
+: Monitors Certificate Transparency logs for phishing and defamatory domains.
+
+[phpfpmbeat](https://github.com/kozlice/phpfpmbeat)
+: Reads status from PHP-FPM.
+
+[pingbeat](https://github.com/joshuar/pingbeat)
+: Sends ICMP pings to a list of targets and stores the round trip time (RTT) in Elasticsearch.
+
+[powermaxbeat](https://github.com/kckecheng/powermaxbeat)
+: Collects performance metrics from the Dell EMC PowerMax storage array.
+
+[processbeat](https://github.com/pawankt/processbeat)
+: Collects process health status and performance.
+
+[prombeat](https://github.com/carlpett/prombeat)
+: Indexes [Prometheus](https://prometheus.io) metrics.
+
+[prometheusbeat](https://github.com/infonova/prometheusbeat)
+: Sends Prometheus metrics to Elasticsearch via the remote write feature.
+
+[protologbeat](https://github.com/hartfordfive/protologbeat)
+: Accepts structured and unstructured logs via UDP or TCP. Can also be used to receive syslog messages or GELF-formatted messages. (To be used as a successor to udplogbeat.)
+
+[pubsubbeat](https://github.com/GoogleCloudPlatform/pubsubbeat)
+: Reads data from [Google Cloud Pub/Sub](https://cloud.google.com/pubsub/).
+
+[redditbeat](https://github.com/voigt/redditbeat)
+: Collects new Reddit submissions from one or multiple subreddits.
+
+[redisbeat](https://github.com/chrsblck/redisbeat)
+: Used for Redis monitoring.
+
+[retsbeat](https://github.com/consulthys/retsbeat)
+: Collects counts of [RETS](http://www.reso.org) resource/class records from [Multiple Listing Service](https://en.wikipedia.org/wiki/Multiple_listing_service) (MLS) servers.
+
+[rsbeat](https://github.com/yourdream/rsbeat)
+: Ships Redis slow logs to Elasticsearch for analysis in Kibana.
+
+[safecastbeat](https://github.com/radoondas/safecastbeat)
+: Pulls data from the Safecast API and stores it in Elasticsearch.
+
+[saltbeat](https://github.com/martinhoefling/saltbeat)
+: Reads events from the Salt master event bus.
+
+[serialbeat](https://github.com/benben/serialbeat)
+: Reads from a serial device.
+
+[servicebeat](https://github.com/Corwind/servicebeat)
+: Sends service status to Elasticsearch.
+
+[springbeat](https://github.com/consulthys/springbeat)
+: Collects health and metrics data from Spring Boot applications running with the actuator module.
+
+[springboot2beat](https://github.com/philkra/springboot2beat)
+: Queries and accumulates all metrics endpoints of a Spring Boot 2 web app via the web channel, leveraging the [micrometer.io](http://micrometer.io/) metrics facade.
+
+[statsdbeat](https://github.com/sentient/statsdbeat)
+: Receives UDP [statsd](https://github.com/etsy/statsd/wiki) events from a statsd client.
+
+[supervisorctlbeat](https://github.com/Corwind/supervisorctlbeat.git)
+: Parses the `supervisorctl status` command output and sends it to Elasticsearch for indexing.
+
+[terminalbeat](https://github.com/live-wire/terminalbeat)
+: Runs an external command and forwards its [stdout](https://www.computerhope.com/jargon/s/stdout.htm) to Elasticsearch/Logstash.
+
+[timebeat](https://timebeat.app/download.php)
+: NTP and PTP clock synchronisation beat that reports accuracy metrics to Elastic. Includes Kibana dashboards.
+
+[tracebeat](https://github.com/berfinsari/tracebeat)
+: Reads traceroute output and indexes it into Elasticsearch.
+
+[trivybeat](https://github.com/DmitryZ-outten/trivybeat)
+: Fetches Docker containers running on the same machine, scans their CVEs using a Trivy server, and indexes the results into Elasticsearch.
+
+[twitterbeat](https://github.com/buehler/go-elastic-twitterbeat)
+: Reads tweets for specified screen names.
+
+[udpbeat](https://github.com/gravitational/udpbeat)
+: Ships structured logs via UDP.
+
+[udplogbeat](https://github.com/hartfordfive/udplogbeat)
+: Accepts events via a local UDP socket (in plain text or JSON, with the ability to enforce schemas). Can also be used for applications that only support syslog logging.
+
+[unifiedbeat](https://github.com/cleesmith/unifiedbeat)
+: Reads records from Unified2 binary files generated by network intrusion detection software and indexes the records in Elasticsearch.
+
+[unitybeat](https://github.com/kckecheng/unitybeat)
+: Collects performance metrics from the Dell EMC Unity storage array.
+
+[uwsgibeat](https://github.com/mrkschan/uwsgibeat)
+: Reads stats from uWSGI.
+
+[varnishlogbeat](https://github.com/phenomenes/varnishlogbeat)
+: Reads log data from a Varnish instance and ships it to Elasticsearch.
+
+[varnishstatbeat](https://github.com/phenomenes/varnishstatbeat)
+: Reads stats data from a Varnish instance and ships it to Elasticsearch.
+
+[vaultbeat](https://gitlab.com/msvechla/vaultbeat)
+: Collects performance metrics and statistics from Hashicorp’s Vault.
+
+[wmibeat](https://github.com/eskibars/wmibeat)
+: Uses WMI to grab your favorite, configurable Windows metrics.
+
+[yarnbeat](https://github.com/IBM/yarnbeat)
+: Polls YARN and MapReduce APIs for cluster and application metrics.
+
+[zfsbeat](https://github.com/maireanu/zfsbeat)
+: Queries ZFS storage and pool status.
diff --git a/docs/extend/contributing-docs.md b/docs/extend/contributing-docs.md
new file mode 100644
index 000000000000..c8201b2335c4
--- /dev/null
+++ b/docs/extend/contributing-docs.md
@@ -0,0 +1,84 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/contributing-docs.html
+applies_to:
+  stack: discontinued 8.18
+---
+
+# Contributing to the docs [contributing-docs]
+
+The Beats documentation follows the tagging guidelines described in the [Docs HOWTO](https://github.com/elastic/docs/blob/master/README.asciidoc). However, it extends these capabilities in a couple of ways:
+
+* The documentation makes extensive use of [AsciiDoc conditionals](https://docs.asciidoctor.org/asciidoc/latest/directives/conditionals/) to provide content that is reused across multiple books. This means that there might not be a single source file for each published HTML page. Some files are shared across multiple books, either as complete pages or snippets. For more details, refer to [Where to find the Beats docs source](#where-to-find-files).
+* The documentation includes some files that are generated from YAML source or pieced together from content that lives in `_meta` directories under the code (for example, the module and exported fields documentation). For more details, refer to [Generated docs](#generated-docs).
+
+
+## Where to find the Beats docs source [where-to-find-files]
+
+Because the Beats documentation makes use of shared content, doc generation scripts, and componentization, the source files are located in several places:
+
+| Documentation | Location of source files |
+| --- | --- |
+| Main docs for the Beat, including index files | `/docs` |
+| Shared docs and Beats Platform Reference | `libbeat/docs` |
+| Processor docs | `docs` folders under processors in `libbeat/processors/`, `x-pack//processors/`, and `x-pack/libbeat/processors/` |
+| Output docs | `docs` folders under outputs in `libbeat/outputs/` |
+| Module docs | `_meta` folders under modules and datasets in `libbeat/module/`, `/module/`, and `x-pack//module/` |
+
+The [conf.yaml](https://github.com/elastic/docs/blob/master/conf.yaml) file in the `docs` repo shows all the resources used to build each book. This file is used to drive the classic docs build and is the source of truth for file locations.
+
+::::{tip}
+If you can’t find the source for a page you want to update, go to the published page at www.elastic.co and click the Edit link to navigate to the source.
+::::
+
+
+The Beats documentation build also has dependencies on the following files in the [docs](https://github.com/elastic/docs) repo:
+
+* `shared/versions/stack/.asciidoc`
+* `shared/attributes.asciidoc`
+
+
+## Generated docs [generated-docs]
+
+After updating `docs.asciidoc` files in `_meta` directories, you must run the doc collector scripts to regenerate the docs.
+
+Make sure you [set up your Beats development environment](./index.md#setting-up-dev-environment) and use the correct Go version. The Go version is listed in the `version.asciidoc` file for the branch you want to update.
+
+To run the docs collector scripts, change to the beats directory and run:
+
+`make update`
+
+::::{warning}
+The `make update` command overwrites files in the `docs` directories **without warning**. If you accidentally update a generated file and run `make update`, your changes will be overwritten.
+::::
+
+
+To format your files, you might also need to run this command:
+
+`make fmt`
+
+The make command calls the following scripts to generate the docs:
+
+[auditbeat/scripts/docs_collector.py](https://github.com/elastic/beats/blob/main/auditbeat/scripts/docs_collector.py) generates:
+
+* `auditbeat/docs/modules_list.asciidoc`
+* `auditbeat/docs/modules/*.asciidoc`
+
+[filebeat/scripts/docs_collector.py](https://github.com/elastic/beats/blob/main/filebeat/scripts/docs_collector.py) generates:
+
+* `filebeat/docs/modules_list.asciidoc`
+* `filebeat/docs/modules/*.asciidoc`
+
+[metricbeat/scripts/mage/docs_collector.go](https://github.com/elastic/beats/blob/main/metricbeat/scripts/mage/docs_collector.go) generates:
+
+* `metricbeat/docs/modules_list.asciidoc`
+* `metricbeat/docs/modules/*.asciidoc`
+
+[libbeat/scripts/generate_fields_docs.py](https://github.com/elastic/beats/blob/main/libbeat/scripts/generate_fields_docs.py) generates:
+
+* `auditbeat/docs/fields.asciidoc`
+* `filebeat/docs/fields.asciidoc`
+* `heartbeat/docs/fields.asciidoc`
+* `metricbeat/docs/fields.asciidoc`
+* `packetbeat/docs/fields.asciidoc`
+* `winlogbeat/docs/fields.asciidoc`
diff --git a/docs/extend/creating-metricbeat-module.md b/docs/extend/creating-metricbeat-module.md
new file mode 100644
index 000000000000..69accfff000c
--- /dev/null
+++ b/docs/extend/creating-metricbeat-module.md
@@ -0,0 +1,176 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/creating-metricbeat-module.html
+---
+
+# Creating a Metricbeat Module [creating-metricbeat-module]
+
+Metricbeat modules are used to group multiple metricsets together and to implement shared functionality of the metricsets. In most cases, no implementation of the module is needed and the default module implementation is picked automatically.
+
+It’s important to complete the configuration and documentation files for a module. When you create a new metricset by running `make create-metricset`, default versions of these files are generated in the `_meta` directory.
+
+
+## Module Files [_module_files]
+
+* `config.yml` and `config.reference.yml`
+* `docs.asciidoc`
+* `fields.yml`
+
+After updating any of these files, make sure you run `make update` in your beat directory so all generated files are updated.
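+
+For example, a minimal regeneration workflow after editing a module's `_meta` files might look like the following sketch (it assumes you are at the root of the `beats` repository and edited a Metricbeat module):
+
+```bash
+# Regenerate configs, field references, and docs from the _meta sources.
+cd metricbeat
+make update
+```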
+
+
+### config.yml and config.reference.yml [_config_yml_and_config_reference_yml]
+
+The `config.yml` file contains the basic configuration options and looks like this:
+
+```yaml
+- module: {module}
+  metricsets: ["{metricset}"]
+  enabled: false
+  period: 10s
+  hosts: ["localhost"]
+```
+
+It contains the module name, your metricset, and the default period. If you have multiple metricsets in your module, make sure that you extend the metricset array:
+
+```yaml
+  metricsets: ["{metricset1}", "{metricset2}"]
+```
+
+The `config.reference.yml` file is optional and by default has the same content as the `config.yml`. It is used to add and document more advanced configuration options that should not be part of the minimal config file shipped by default.
+
+
+### docs.asciidoc [_docs_asciidoc]
+
+The `docs.asciidoc` file contains the documentation about your module. During generation of the documentation, the default config file will be appended to the docs. Use this file to describe your module in more detail and to document specific configuration options.
+
+```asciidoc
+This is the {module} module.
+```
+
+
+### fields.yml [_fields_yml_2]
+
+The `fields.yml` file contains the top level structure for the fields in your metricset. It’s used in combination with the `fields.yml` file in each metricset to generate the template and documentation for the fields.
+
+The default file looks like this:
+
+```yaml
+- key: {module}
+  title: "{module}"
+  release: beta
+  description: >
+    {module} module
+  fields:
+    - name: {module}
+      type: group
+      description: >
+      fields:
+```
+
+Make sure that you update at least the description of the module.
+
+
+## Testing [_testing_2]
+
+It’s a common pattern to use a `testing.go` file in the module package to share some testing functionality among the metricsets. This file does not have `_test.go` in the name because otherwise it would not be compiled for sub-packages.
+
+To see an example of the `testing.go` file, look at the [mysql module](https://github.com/elastic/beats/tree/master/metricbeat/module/mysql).
+
+
+### Test a Metricbeat module manually [_test_a_metricbeat_module_manually]
+
+To test a Metricbeat module manually, follow the steps below.
+
+First, we have to build the Docker image that is available for the module. The Dockerfile is located inside the `_meta` folder within each module folder. As an example, let’s take the MySQL module.
+
+These steps assume you have checked out the Beats repository from GitHub and are inside the `beats` directory. First, enter the `_meta` folder mentioned above and build the Docker image called `metricbeat-mysql`:
+
+```bash
+$ cd metricbeat/module/mysql/_meta/
+$ docker build -t metricbeat-mysql .
+...
+Removing intermediate container 0e58cfb7b197
+ ---> 9492074840ea
+Step 5/5 : COPY test.cnf /etc/mysql/conf.d/test.cnf
+ ---> 002969e1d810
+Successfully built 002969e1d810
+Successfully tagged metricbeat-mysql:latest
+```
+
+Before we run the container we have just created, we also need to know which port to expose. The port is listed in the `metricbeat/{{module}}/_meta/env` file:
+
+```bash
+$ cat env
+MYSQL_DSN=root:test@tcp(mysql:3306)/
+MYSQL_HOST=mysql
+MYSQL_PORT=3306
+```
+
+As we can see, the port is 3306. We now have all the information we need to start our MySQL service locally:
+
+```bash
+$ docker run -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret metricbeat-mysql
+```
+
+This starts the container, and you can now use it for testing the MySQL module.
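+
+Before pointing Metricbeat at the service, you can optionally verify that it is reachable. As a rough sketch (it assumes the `mysql` client is installed locally and the container was started with the settings shown above):
+
+```bash
+# Run a trivial query against the container published on localhost:3306.
+mysql -h 127.0.0.1 -P 3306 -u root -psecret -e "SHOW GLOBAL STATUS LIKE 'Uptime';"
+```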
+
+To run Metricbeat with the module, we first need to build the binary and enable the module. The assumption is now that you are back in the `beats` folder path:
+
+```bash
+$ cd metricbeat
+$ mage build
+$ ./metricbeat modules enable mysql
+```
+
+This will enable the module and rename the file `metricbeat/modules.d/mysql.yml.disabled` to `metricbeat/modules.d/mysql.yml`. According to our [documentation](/reference/metricbeat/metricbeat-module-mysql.md), we should specify a username and password to use MySQL. It’s always a good idea to take a look at the docs, which also show that a pre-built dashboard is available. After tweaking the config a bit, this is how it looks:
+
+```yaml
+$ cat modules.d/mysql.yml
+
+# Module: mysql
+# Docs: /beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-module-mysql.md
+
+- module: mysql
+  metricsets:
+    - status
+    # - galera_status
+  period: 10s
+
+  # Host DSN should be defined as "user:pass@tcp(127.0.0.1:3306)/"
+  # or "unix(/var/lib/mysql/mysql.sock)/",
+  # or another DSN format supported by .
+  # The username and password can either be set in the DSN or using the username
+  # and password config options. Those specified in the DSN take precedence.
+  hosts: ["tcp(127.0.0.1:3306)/"]
+
+  # Username of hosts. Empty by default.
+  username: root
+
+  # Password of hosts. Empty by default.
+  password: secret
+```
+
+It’s now sending data to your local Elasticsearch instance. If you need to modify the mysql config, adjust `modules.d/mysql.yml` and restart Metricbeat.
+
+
+### Run Environment tests for one module [_run_environment_tests_for_one_module]
+
+All the environments are set up with Docker. `make integration-tests-environment` and `make system-tests-environment` can be used to run tests for all modules. In case you are developing a module, it is convenient to run the tests for only one module and to run them directly on your machine.
+
+First you need to start the environment for the module you want to test and expose its port to your local machine. For this you can run the following command inside the metricbeat directory:
+
+```bash
+MODULE=apache PORT=80 make run-module
+```
+
+Note: The apache module with port 80 is taken here as an example. You must put the name and port for your own module here.
+
+This will start the environment, and you must wait until the service is completely started. After that you can run the tests that require an environment:
+
+```bash
+MODULE=apache make test-module
+```
+
+This will run the integration and system tests, connecting to the environment in your Docker container.
+
diff --git a/docs/extend/creating-metricsets.md b/docs/extend/creating-metricsets.md
new file mode 100644
index 000000000000..134078c4b929
--- /dev/null
+++ b/docs/extend/creating-metricsets.md
@@ -0,0 +1,332 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/creating-metricsets.html
+---
+
+# Creating a Metricset [creating-metricsets]
+
+::::{important}
+Elastic provides no warranty or support for the code used to generate metricsets. The generator is mainly offered as guidance for developers who want to create their own data shippers.
+::::
+
+
+A metricset is the part of a Metricbeat module that fetches and structures the data from the remote service. Each module can have multiple metricsets. In this guide, you learn how to create your own metricset.
+
+When creating a metricset for the first time, it generally helps to look at the implementation of existing metricsets for inspiration.
+
+To create a new metricset:
+
+1. Run the following command inside the `metricbeat` directory:
+
+    ```bash
+    make create-metricset
+    ```
+
+    You need Python to run this command. You’ll then be prompted to enter a module and metricset name. Remember that a module represents the service you want to retrieve metrics from (like Redis) and a metricset is a specific set of grouped metrics (like `info` on Redis). Only use characters `[a-z]` and, if required, underscores (`_`). No other characters are allowed.
+
+    When you run `make create-metricset`, it creates all the basic files for your metricset, along with the required module files if the module does not already exist. See [Creating a Metricbeat Module](/extend/creating-metricbeat-module.md) for more details about the module files.
+
+    ::::{note}
+    We use `{{metricset}}`, `{{module}}`, and `{{beat}}` in this guide as placeholders. You need to replace these with the actual names of your metricset, module, and beat.
+    ::::
+
+
+    The metricset that you created is already a functioning metricset and can be compiled.
+
+2. Compile your new metricset by running the following commands:
+
+    ```bash
+    mage update
+    mage build
+    ```
+
+    The first command, `mage update`, updates all generated files with the most recent files, data, and meta information from the metricset. The second command, `mage build`, compiles your source code and provides you with a binary called metricbeat in the same folder. You can run the binary in debug mode with the following command:
+
+    ```bash
+    ./metricbeat -e -d "*"
+    ```
+
+
+After running the mage commands, you’ll find the metricset, along with its generated files, under `module/{{module}}/{{metricset}}`. This directory contains the following files:
+
+* `{{metricset}}.go`
+* `_meta/docs.asciidoc`
+* `_meta/data.json`
+* `_meta/fields.yml`
+
+Let’s look at the files in more detail next.
+
+
+## `{{metricset}}.go` File [_metricset_go_file]
+
+The first file is `{{metricset}}.go`. It contains the logic on how to fetch data from the service and convert it for sending to the output.
+
+The generated file looks like this:
+
+[https://github.com/elastic/beats/blob/main/metricbeat/scripts/module/metricset/metricset.go.tmpl](https://github.com/elastic/beats/blob/main/metricbeat/scripts/module/metricset/metricset.go.tmpl)
+
+```go
+package {metricset}
+
+import (
+	"github.com/elastic/elastic-agent-libs/mapstr"
+	"github.com/elastic/beats/v7/libbeat/common/cfgwarn"
+	"github.com/elastic/beats/v7/metricbeat/mb"
+)
+
+// init registers the MetricSet with the central registry as soon as the program
+// starts. The New function will be called later to instantiate an instance of
+// the MetricSet for each host defined in the module's configuration. After the
+// MetricSet has been created then Fetch will begin to be called periodically.
+func init() {
+	mb.Registry.MustAddMetricSet("{module}", "{metricset}", New)
+}
+
+// MetricSet holds any configuration or state information. It must implement
+// the mb.MetricSet interface. And this is best achieved by embedding
+// mb.BaseMetricSet because it implements all of the required mb.MetricSet
+// interface methods except for Fetch.
+type MetricSet struct {
+	mb.BaseMetricSet
+	counter int
+}
+
+// New creates a new instance of the MetricSet. New is responsible for unpacking
+// any MetricSet specific configuration options if there are any.
+func New(base mb.BaseMetricSet) (mb.MetricSet, error) {
+	cfgwarn.Beta("The {module} {metricset} metricset is beta.")
+
+	config := struct{}{}
+	if err := base.Module().UnpackConfig(&config); err != nil {
+		return nil, err
+	}
+
+	return &MetricSet{
+		BaseMetricSet: base,
+		counter:       1,
+	}, nil
+}
+
+// Fetch method implements the data gathering and data conversion to the right
+// format. It publishes the event which is then forwarded to the output. In case
+// of an error set the Error field of mb.Event or simply call report.Error().
+func (m *MetricSet) Fetch(report mb.ReporterV2) error {
+	report.Event(mb.Event{
+		MetricSetFields: mapstr.M{
+			"counter": m.counter,
+		},
+	})
+	m.counter++
+
+	return nil
+}
+```
+
+The `package` clause and `import` declaration are part of the base structure of each Go file. You should only modify this part of the file if your implementation requires more imports.
+
+
+### Initialisation [_initialisation]
+
+The init method registers the metricset with the central registry. In Go the `init()` function is called before the execution of all other code. This means the module will be automatically registered with the global registry.
+
+The `New` method, which is passed to `MustAddMetricSet`, will be called after the setup of the module and before starting to fetch data. You normally don’t need to change this part of the file.
+
+```go
+func init() {
+	mb.Registry.MustAddMetricSet("{module}", "{metricset}", New)
+}
+```
+
+
+### Definition [_definition]
+
+The MetricSet type defines all fields of the metricset. As a minimum it must be composed of the `mb.BaseMetricSet` fields, but can be extended with additional entries. These variables can be used to persist data or configuration between multiple fetch calls.
+
+You can add more fields to the MetricSet type, as you can see in the following example where the `username` and `password` string fields are added:
+
+```go
+type MetricSet struct {
+	mb.BaseMetricSet
+	username string
+	password string
+}
+```
+
+
+### Creation [_creation]
+
+The `New` function creates a new instance of the MetricSet. The setup process of the MetricSet is also part of `New`. This method will be called before `Fetch` is called the first time.
+
+The `New` function also sets up the configuration by processing additional configuration entries, if needed.
+
+```go
+func New(base mb.BaseMetricSet) (mb.MetricSet, error) {
+
+	config := struct{}{}
+
+	if err := base.Module().UnpackConfig(&config); err != nil {
+		return nil, err
+	}
+
+	return &MetricSet{
+		BaseMetricSet: base,
+	}, nil
+}
+```
+
+
+### Fetching [_fetching]
+
+The `Fetch` method is the central part of the metricset. `Fetch` is called every time new data is retrieved. If more than one host is defined, `Fetch` is called once for each host. The frequency of calling `Fetch` is based on the `period` defined in the configuration file.
+
+`Fetch` must publish the event using the `mb.ReporterV2.Event` method. If an error happens, `Fetch` can return an error, or, if `Event` is being called in a loop, publish the error using the `mb.ReporterV2.Error` method. This means that Metricbeat always sends an event, even on failure. You must make sure that the error message helps to identify the actual error.
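+
+For the error path, a minimal sketch might look like the following. Note that `fetchStatus` is a hypothetical helper (not part of the generated template) that queries the monitored service, and the snippet assumes `fmt` is imported:
+
+```go
+// Sketch of error reporting in Fetch; fetchStatus is a hypothetical helper.
+func (m *MetricSet) Fetch(report mb.ReporterV2) error {
+	status, err := m.fetchStatus()
+	if err != nil {
+		// Wrap the error with context so the failing host is identifiable.
+		// Returning it lets Metricbeat publish an error event for this fetch.
+		return fmt.Errorf("failed to fetch status from host %v: %w", m.Host(), err)
+	}
+
+	report.Event(mb.Event{
+		MetricSetFields: mapstr.M{"status": status},
+	})
+	return nil
+}
+```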
+
+The following example shows a metricset `Fetch` method with a counter that is incremented for each `Fetch` call:
+
+```go
+func (m *MetricSet) Fetch(report mb.ReporterV2) error {
+
+	report.Event(mb.Event{
+		MetricSetFields: common.MapStr{
+			"counter": m.counter,
+		},
+	})
+	m.counter++
+
+	return nil
+}
+```
+
+The JSON output derived from the reported event will be identical to the naming and structure you use in `common.MapStr`. For more details about `MapStr` and its functions, see the [MapStr API docs](https://godoc.org/github.com/elastic/beats/libbeat/common#MapStr).
+
+
+### Multi Fetching [_multi_fetching]
+
+`Event` can be called multiple times inside of the `Fetch` method for metricsets that might expose multiple events. `Event` returns a bool that indicates if the metricset is already closed and no further events can be processed, in which case `Fetch` should return immediately. If there is an error while processing one of many events, it can be published using the `mb.ReporterV2.Error` method, as opposed to returning an error value.
+
+
+### Parsing and Normalizing Fields [_parsing_and_normalizing_fields]
+
+In Metricbeat we aim to normalize the metric names from all metricsets to respect a common [set of conventions](/extend/event-conventions.md). This makes it easy for users to find and interpret metrics. To simplify parsing, converting, renaming, and restructuring of the object read from the monitored system to the Metricbeat format, we have created the [schema](https://godoc.org/github.com/elastic/beats/libbeat/common/schema) package that allows you to declaratively define transformations.
+
+For example, assuming this input object:
+
+```go
+input := map[string]interface{}{
+	"testString":    "hello",
+	"testInt":       "42",
+	"testBool":      "true",
+	"testFloat":     "42.1",
+	"testObjString": "hello, object",
+}
+```
+
+And the requirement to transform it into this one:
+
+```go
+common.MapStr{
+	"test_string": "hello",
+	"test_int":    int64(42),
+	"test_bool":   true,
+	"test_float":  42.1,
+	"test_obj": common.MapStr{
+		"test_obj_string": "hello, object",
+	},
+}
+```
+
+You can use the schema package to transform the data, and optionally mark some fields in a schema as required or not. For example:
+
+```go
+import (
+	s "github.com/elastic/beats/libbeat/common/schema"
+	c "github.com/elastic/beats/libbeat/common/schema/mapstrstr"
+)
+
+var (
+	schema = s.Schema{
+		"test_string": c.Str("testString", s.Required), <1>
+		"test_int":    c.Int("testInt"), <2>
+		"test_bool":   c.Bool("testBool", s.Optional), <3>
+		"test_float":  c.Float("testFloat"),
+		"test_obj": s.Object{
+			"test_obj_string": c.Str("testObjString", s.IgnoreAllErrors), <4>
+		},
+	}
+)
+
+func eventMapping(input map[string]interface{}) common.MapStr {
+	return schema.Apply(input) <5>
+}
+```
+
+1. Marks a field as required.
+2. If a field has no schema option set, it is equivalent to `Required`.
+3. Marks the field as optional.
+4. Ignores any value conversion errors.
+5. By default, `Apply` will fail and return an error if any required field is missing. Using the optional second argument, you can specify how `Apply` handles different fields of the schema. The possible values are:
+
+* `AllRequired` is the default behavior. Returns an error if any required field is missing, including fields that are required because no schema option is set.
+* `FailOnRequired` will fail if a field explicitly marked as `required` is missing.
+* `NotFoundKeys(cb func([]string))` takes a callback function that will be called with a list of missing keys, allowing for finer-grained error handling.
+
+
+
+In the above example, note that it is possible to create the schema object once and apply it to all events. You can also use `ApplyTo` to add additional data to an existing `MapStr` object:
+
+```go
+var (
+	schema = s.Schema{
+		"test_string": c.Str("testString"),
+		"test_int":    c.Int("testInt"),
+		"test_bool":   c.Bool("testBool"),
+		"test_float":  c.Float("testFloat"),
+		"test_obj": s.Object{
+			"test_obj_string": c.Str("testObjString"),
+		},
+	}
+
+	additionalSchema = s.Schema{
+		"second_string": c.Str("secondString"),
+		"second_int":    c.Int("secondInt"),
+	}
+)
+
+	data, err := schema.Apply(input)
+	if err != nil {
+		return err
+	}
+
+	if m.parseMoreData {
+		_, err := additionalSchema.ApplyTo(data, input)
+		if len(err) > 0 { <1>
+			return err.Err()
+		}
+	}
+```
+
+1. `ApplyTo` returns a raw MultiError object, making it suitable for finer-grained error handling.
+
+
+
+## Configuration File [_configuration_file]
+
+The configuration file for a metricset is handled by the module. If there are multiple metricsets in one module, make sure you add all metricsets to the configuration. For example:
+
+```yaml
+metricbeat:
+  modules:
+    - module: {module-name}
+      metricsets: ["{metricset1}", "{metricset2}"]
+```
+
+::::{note}
+Make sure that you run `make collect` after updating the config file so that your changes are also applied to the global configuration file and the docs.
+::::
+
+
+For more details about the Metricbeat configuration file, see the topic about [Modules](/reference/metricbeat/configuration-metricbeat.md) in the Metricbeat documentation.
+
+
+## What to Do Next [_what_to_do_next]
+
+This topic provides basic steps for creating a metricset. For more details about metricsets and how to extend your metricset further, see [Metricset Details](/extend/metricset-details.md).
+
diff --git a/docs/extend/dev-faq.md b/docs/extend/dev-faq.md
new file mode 100644
index 000000000000..c51349b7abdc
--- /dev/null
+++ b/docs/extend/dev-faq.md
@@ -0,0 +1,23 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/dev-faq.html
+---
+
+# Metricbeat Developer FAQ [dev-faq]
+
+This is a list of common questions that come up when creating a metricset, along with their potential answers.
+
+
+## Metricset is not compiled [_metricset_is_not_compiled]
+
+Are you compiling your Beat, but the newly created metricset is not compiled?
+
+Make sure that the path to your module and metricset are added as an import path either in your `main.go` file or your `include/list.go` file. You can do this manually or by running `make imports`.
+
+
+## Metricset is not started [_metricset_is_not_started]
+
+Is the metricset compiled, but not started when starting Metricbeat?
+
+After creating your metricset, make sure you run `make collect`. This command adds the configuration of your metricset to the default configuration. If the metricset still doesn’t start, check your default configuration file to see if the metricset is listed there.
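+
+For reference, a generated entry in the default configuration typically looks something like the following sketch (the module and metricset names here are placeholders):
+
+```yaml
+metricbeat.modules:
+  - module: mymodule
+    metricsets: ["mymetricset"]
+    enabled: true
+    period: 10s
+```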
+
diff --git a/docs/extend/event-conventions.md b/docs/extend/event-conventions.md
new file mode 100644
index 000000000000..add697dc01f6
--- /dev/null
+++ b/docs/extend/event-conventions.md
@@ -0,0 +1,72 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/event-conventions.html
+---
+
+# Naming Conventions [event-conventions]
+
+When creating events, use the following conventions for field names and abbreviations.
+
+## Field Names [field-names]
+
+Use the following naming conventions for field names:
+
+* All fields must be lower case.
+* Use snake case (underscores) for combining words.
+* Group related fields into subdocuments by using dot (.) notation. Groups typically have common prefixes. For example, if you have fields called `CPULoad` and `CPUSystem` in a service, you would convert them into `cpu.load` and `cpu.system` in the event.
+* Avoid repeating the namespace in field names. If a word or abbreviation appears in the namespace, it’s not needed in the field name. For example, instead of `cpu.cpu_load`, use `cpu.load`.
+* Use a [units suffix](#units) when the metric matches one of the known units.
+* Use [standardised names](#abbreviations) and avoid using abbreviations that aren’t commonly known.
+* Organise the documents from general to specific to allow for namespacing. The type, such as `.pct`, should always be last. For example, `system.core.user.pct`.
+* If two fields are the same, but with different units, remove the less granular one. For example, include `timeout.sec`, but don’t include `timeout.min`. If a less granular value is required, you can calculate it later.
+* If a field name matches the namespace used for nested fields, add `.value` to the field name. For example, instead of:
+
+    ```yaml
+    workers
+    workers.busy
+    workers.idle
+    ```
+
+    Use:
+
+    ```yaml
+    workers.value
+    workers.busy
+    workers.idle
+    ```
+
+* Do not use dots (.) in individual field names. Dots are reserved for grouping related fields into subdocuments.
+* Use singular and plural names properly to reflect the field content. For example, use `requests_per_sec` rather than `request_per_sec`.
+
+
+## Units [units]
+
+These are well-known suffixes to represent units of stored values. Use them as a dotted suffix when possible. For example, `system.memory.used.bytes` or `system.diskio.read.count`:
+
+| Suffix | Units |
+| --- | --- |
+| count | item count |
+| pct | percentage |
+| day | days |
+| sec | seconds |
+| ms | milliseconds |
+| us | microseconds |
+| ns | nanoseconds |
+| bytes | bytes |
+| mb | megabytes |
+
+
+## Standardised Names [abbreviations]
+
+Here is a list of standardised names and units that are used across all Beats:
+
+| Use…​ | Instead of…​ |
+| --- | --- |
+| avg | average |
+| connection | conn |
+| max | maximum |
+| min | minimum |
+| request | req |
+| msg | message |
+
+
diff --git a/docs/extend/event-fields-yml.md b/docs/extend/event-fields-yml.md
new file mode 100644
index 000000000000..9d58a112cb52
--- /dev/null
+++ b/docs/extend/event-fields-yml.md
@@ -0,0 +1,172 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/event-fields-yml.html
+---
+
+# Defining field mappings [event-fields-yml]
+
+You must define the fields used by your Beat, along with their mapping details, in `_meta/fields.yml`. After editing this file, run `make update`.
+
+Define the field mappings in the `fields` array:
+
+```yaml
+- key: mybeat
+  title: mybeat
+  description: These are the fields used by mybeat.
+  fields:
+    - name: last_name <1>
+      type: keyword <2>
+      required: true <3>
+      description: > <4>
+        The last name.
+    - name: first_name
+      type: keyword
+      required: true
+      description: >
+        The first name.
+    - name: comment
+      type: text
+      required: false
+      description: >
+        Comment made by the user.
+```

+1. `name`: The field name
+2. `type`: The field type. The value of `type` can be any datatype [available in {{es}}](elasticsearch://reference/elasticsearch/mapping-reference/field-data-types.md). If no value is specified, the default type is `keyword`.
+3. `required`: Whether or not a field value is required
+4. `description`: Some information about the field contents
+
+
+## Mapping parameters [_mapping_parameters]
+
+You can specify other mapping parameters for each field. See the [{{es}} Reference](elasticsearch://reference/elasticsearch/mapping-reference/mapping-parameters.md) for more details about each parameter.
+
+`format`
+:   Specify a custom date format used by the field.
+
+`multi_fields`
+:   For `text` or `keyword` fields, use `multi_fields` to define multi-field mappings.
+
+`enabled`
+:   Whether or not the field is enabled.
+
+`analyzer`
+:   Which analyzer to use when indexing.
+
+`search_analyzer`
+:   Which analyzer to use when searching.
+
+`norms`
+:   Applies to `text` and `keyword` fields. Default is `false`.
+
+`dynamic`
+:   Dynamic field control. Can be one of `true` (default), `false`, or `strict`.
+
+`index`
+:   Whether or not the field should be indexed.
+
+`doc_values`
+:   Whether or not the field should have doc values generated.
+
+`copy_to`
+:   Which field to copy the field value into.
+
+`ignore_above`
+:   {{es}} ignores (does not index) strings that are longer than the specified value. When this property value is missing or `0`, the `libbeat` default value of `1024` characters is used. If the value is `-1`, the {{es}} default value is used.
+
+For example, you can use the `copy_to` mapping parameter to copy the `last_name` and `first_name` fields into the `full_name` field at index time:
+
+```yaml
+- key: mybeat
+  title: mybeat
+  description: These are the fields used by mybeat.
+  fields:
+    - name: last_name
+      type: text
+      required: true
+      copy_to: full_name <1>
+      description: >
+        The last name.
+    - name: first_name
+      type: text
+      required: true
+      copy_to: full_name <2>
+      description: >
+        The first name.
+    - name: full_name
+      type: text
+      required: false
+      description: >
+        The last_name and first_name combined into one field for easy searchability.
+```
+
+1. Copy the value of `last_name` into `full_name`
+2. Copy the value of `first_name` into `full_name`
+
+
+There are also some {{kib}}-specific properties, not detailed here. These are: `analyzed`, `count`, `searchable`, `aggregatable`, and `script`. {{kib}} parameters can also be described using `pattern`, `input_format`, `output_format`, `output_precision`, `label_template`, `url_template`, and `open_link_in_current_tab`.
+
+
+## Defining text multi-fields [_defining_text_multi_fields]
+
+There are various options that you can apply when using text fields. You can define a simple text field using the default analyzer without any other options, as in the example shown earlier.
+
+To keep the original keyword value when using `text` mappings, for instance to use in aggregations or ordering, you can use a multi-field mapping:
+
+```yaml
+- key: mybeat
+  title: mybeat
+  description: These are the fields used by mybeat.
+  fields:
+    - name: city
+      type: text
+      multi_fields: <1>
+        - name: keyword <2>
+          type: keyword <3>
+```
+
+1. `multi_fields`: Define the `multi_fields` mapping parameter.
+2. `name`: This is a conventional name for a multi-field. It can be anything (`raw` is another common option) but the convention is to use `keyword`.
+3. `type`: Specify the `keyword` type to use the field in aggregations or to order documents.
+
+
+For more information, see the [{{es}} documentation about multi-fields](elasticsearch://reference/elasticsearch/mapping-reference/multi-fields.md).
+
+
+## Defining a text analyzer in-line [_defining_a_text_analyzer_in_line]
+
+It is possible to define a new text analyzer or search analyzer in-line with the field definition in the field’s mapping parameters.
+
+For example, you can define a new text analyzer that does not break hyphenated names:
+
+```yaml
+- key: mybeat
+  title: mybeat
+  description: These are the fields used by mybeat.
+  fields:
+    - name: last_name
+      type: text
+      required: true
+      description: >
+        The last name.
+      analyzer:
+        mybeat_hyphenated_name: <1>
+          type: pattern <2>
+          pattern: "[\\W&&[^-]]+" <3>
+      search_analyzer:
+        mybeat_hyphenated_name: <4>
+          type: pattern
+          pattern: "[\\W&&[^-]]+"
+```
+
+1. Use a newly defined text analyzer
+2. Define the custom analyzer type
+3. Specify the analyzer behaviour
+4. Use the same analyzer for the search
+
+
+The names of custom analyzers that are defined in-line may not be reused for a different text analyzer. If a text analyzer name is reused, it is checked against existing instances of the analyzer. It is recommended that the analyzer name is prefixed with the beat name to avoid name clashes.
+
+For more information, see [{{es}} documentation about defining custom text analyzers](docs-content://manage-data/data-store/text-analysis/create-custom-analyzer.md).
+
+
diff --git a/docs/extend/export-dashboards.md b/docs/extend/export-dashboards.md
new file mode 100644
index 000000000000..e5f339aded7f
--- /dev/null
+++ b/docs/extend/export-dashboards.md
@@ -0,0 +1,133 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/export-dashboards.html
+---
+
+# Exporting New and Modified Beat Dashboards [export-dashboards]
+
+To export all the dashboards for any Elastic Beat or community Beat, including any new or modified dashboards and all dependencies such as visualizations and searches, you can use the Go script `export_dashboards.go` from [dev-tools](https://github.com/elastic/beats/tree/master/dev-tools/cmd/dashboards). See the dev-tools [readme](https://github.com/elastic/beats/tree/master/dev-tools/README.md) for more info.
+
+Alternatively, if the scripts above are not available, you can use your Beat binary to export Kibana dashboards (6.0 or later).
+
+## Exporting from Kibana 6.0 to 7.14 [_exporting_from_kibana_6_0_to_7_14]
+
+The `dev-tools/cmd/export_dashboards.go` script helps you export your customized Kibana dashboards up to the v7.14.x release. You might need to export a single dashboard or all the dashboards available for a module or Beat.
+
+It is also possible to use a Beat binary to export.
+
+
+## Exporting from Kibana 7.15 or newer [_exporting_from_kibana_7_15_or_newer]
+
+From 7.15, your Beats version must be the same as your Kibana version to make sure the required export API is available.
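+
+To verify that the versions match before exporting, you can compare the Beat version with the version reported by Kibana's status API. As a rough sketch (the Kibana URL and credentials are placeholders):
+
+```shell
+./metricbeat version
+curl -s -u elastic:changeme "http://localhost:5601/api/status" | grep -o '"number":"[^"]*"'
+```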
+
+### Migrate legacy dashboards made with Kibana 7.14 or older [_migrate_legacy_dashboards_made_with_kibana_7_14_or_older]
+
+After you update your Kibana instance to at least 7.15, you have to export your dashboards again with either the `export_dashboards.go` tool or your Beat.
+
+
+### Export a single Kibana dashboard [_export_a_single_kibana_dashboard]
+
+To export a single dashboard for a module, you can use the following command inside a Beat with modules:
+
+```shell
+MODULE=redis ID=AV4REOpp5NkDleZmzKkE mage exportDashboard
+```
+
+```shell
+./filebeat export dashboard --id 7fea2930-478e-11e7-b1f0-cb29bac6bf8b --folder module/redis
+```
+
+This generates an appropriate folder under module/redis for the dashboard, separating assets into dashboards, searches, visualizations, etc. Each exported file is JSON, and its name is the ID of the asset.
+
+::::{note}
+The dashboard ID is available in the dashboard URL. For example, in case the dashboard URL is `app/kibana#/dashboard/AV4REOpp5NkDleZmzKkE?_g=()&_a=(description:'Overview%2...`, the dashboard ID is `AV4REOpp5NkDleZmzKkE`.
+::::
+
+
+
+### Export all module/Beat dashboards [_export_all_modulebeat_dashboards]
+
+Each module should contain a `module.yml` file with a list of all the dashboards available for the module. For the Beats that don’t have support for modules (e.g. Packetbeat), there is a `dashboards.yml` file that defines all the Packetbeat dashboards.
+
+Below is an example of the `module.yml` file for the system module in Metricbeat:
+
+```shell
+dashboards:
+- id: Metricbeat-system-overview
+  file: Metricbeat-system-overview.ndjson
+
+- id: 79ffd6e0-faa0-11e6-947f-177f697178b8
+  file: Metricbeat-host-overview.ndjson
+
+- id: CPU-slash-Memory-per-container
+  file: Metricbeat-containers-overview.ndjson
+```
+
+Each dashboard is defined by an `id` and the name of the ndjson `file` where the dashboard is saved locally.
+
+By passing the yml file to the `export_dashboards.go` script or to the Beat, you can export all the dashboards defined:
+
+```shell
+go run dev-tools/cmd/dashboards/export_dashboards.go --yml filebeat/module/system/module.yml --folder dashboards
+```
+
+```shell
+./filebeat export dashboard --yml filebeat/module/system/module.yml
+```
+
+
+### Export dashboards from a Kibana Space [_export_dashboards_from_a_kibana_space]
+
+If you are using the Kibana Spaces feature and want to export dashboards from a specific Space, pass the Space ID to the `export_dashboards.go` script:
+
+```shell
+go run dev-tools/cmd/dashboards/export_dashboards.go -space-id my-space [other-options]
+```
+
+When running a Beat’s `export dashboard` command, you need to set the Space ID in `setup.kibana.space.id`.
+
+
+
+## Exporting Kibana 5.x dashboards [_exporting_kibana_5_x_dashboards]
+
+To export only some Kibana dashboards for an Elastic Beat or community Beat, you can simply pass a regular expression to the `export_dashboards.py` script to match the selected Kibana dashboards.
+
+Before running the `export_dashboards.py` script for the first time, you need to create an environment that contains all the required Python packages.
+
+```shell
+make python-env
+```
+
+For example, to export all Kibana dashboards that start with the **Packetbeat** name:
+
+```shell
+python ../dev-tools/cmd/dashboards/export_dashboards.py --regex Packetbeat*
+```
+
+To see all the available options, read the descriptions below or run:
+
+```shell
+python ../dev-tools/cmd/dashboards/export_dashboards.py -h
+```
+
+**`--url `**
+:   The Elasticsearch URL. The default value is [http://localhost:9200](http://localhost:9200).
+
+**`--regex `**
+:   Regular expression to match all the Kibana dashboards to be exported. This argument is required.
+
+**`--kibana `**
+:   The Elasticsearch index pattern where Kibana saves its configuration. The default value is `.kibana`.
+
+**`--dir `**
+:   The output directory where the dashboards and all dependencies will be saved. The default value is `output`.
+
+The output directory has the following structure:
+
+```shell
+output/
+    index-pattern/
+    dashboard/
+    visualization/
+    search/
+```
diff --git a/docs/extend/filebeat-modules-devguide.md b/docs/extend/filebeat-modules-devguide.md
new file mode 100644
index 000000000000..46b158280d2d
--- /dev/null
+++ b/docs/extend/filebeat-modules-devguide.md
@@ -0,0 +1,416 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/filebeat-modules-devguide.html
+---
+
+# Creating a New Filebeat Module [filebeat-modules-devguide]
+
+::::{important}
+Elastic provides no warranty or support for the code used to generate modules and filesets. The generator is mainly offered as guidance for developers who want to create their own data shippers.
+::::
+
+
+This guide will walk you through creating a new Filebeat module.
+
+All Filebeat modules currently live in the main [Beats](https://github.com/elastic/beats) repository. To clone the repository and build Filebeat (which you will need for testing), please follow the general instructions in [*Contributing to Beats*](./index.md).
+
+
+## Overview [_overview]
+
+Each Filebeat module is composed of one or more "filesets". We usually create a module for each service that we support (`nginx` for Nginx, `mysql` for MySQL, and so on) and a fileset for each type of log that the service creates. For example, the Nginx module has `access` and `error` filesets. You can contribute a new module (with at least one fileset), or a new fileset for an existing module.
+
+::::{note}
+In this guide we use `{{module}}` and `{{fileset}}` as placeholders for the module and fileset names. You need to replace these with the actual names you entered when you created the module and fileset. Only use characters `[a-z]` and, if required, underscores (`_`). No other characters are allowed.
+::::
+
+
+
+## Creating a new module [_creating_a_new_module]
+
+Run the following command in the `filebeat` folder:
+
+```bash
+make create-module MODULE={module}
+```
+
+After running the `make create-module` command, you’ll find the module, along with its generated files, under `module/{{module}}`. This directory contains the following files:
+
+```bash
+module/{module}
+├── module.yml
+└── _meta
+    └── docs.asciidoc
+    └── fields.yml
+    └── kibana
+```
+
+Let’s look at these files one by one.
+
+
+### module.yml [_module_yml]
+
+This file contains a list of all the dashboards available for the module and is used by the `export_dashboards.go` script for exporting dashboards. Each dashboard is defined by an ID and the name of the JSON file where the dashboard is saved locally.
When a new fileset is generated, this file is automatically updated with "default" dashboard settings for the new fileset. Please ensure that these settings are correct.
+
+
+### _meta/docs.asciidoc [_metadocs_asciidoc]
+
+This file contains module-specific documentation. You should include information about which versions of the service were tested and the variables that are defined in each fileset.
+
+
+### _meta/fields.yml [_metafields_yml]
+
+The module level `fields.yml` contains descriptions for the module-level fields. Please review and update the title and the descriptions in this file. The title is used as a title in the docs, so it’s best to capitalize it.
+
+
+### _meta/kibana [_metakibana]
+
+This folder contains the sample Kibana dashboards for this module. To create them, you can build them visually in Kibana and then export them with `export_dashboards`.
+
+The tool will export all of the dashboard dependencies (visualizations, saved searches) automatically.
+
+You can see various ways of using `export_dashboards` at [Exporting New and Modified Beat Dashboards](/extend/export-dashboards.md). The recommended way to export them is to list your dashboards in your module’s `module.yml` file:
+
+```yaml
+dashboards:
+- id: 69f5ae20-eb02-11e7-8f04-beef1daadb05
+  file: mymodule-overview.json
+- id: c0a7ce90-cafe-4242-8647-534bb4c21040
+  file: mymodule-errors.json
+```
+
+Then run `export_dashboards` like this:
+
+```shell
+$ cd dev-tools/cmd/dashboards
+$ make # if export_dashboard is not built yet
+$ ./export_dashboards --yml '../../../filebeat/module/{module}/module.yml'
+```
+
+New Filebeat modules might not be compatible with Kibana 5.x. To export dashboards that are compatible with 5.x, run the following command inside the developer virtual environment:
+
+```shell
+$ cd filebeat
+$ make python-env
+$ cd module/{module}/
+$ python ../../../dev-tools/export_5x_dashboards.py --regex {module} --dir _meta/kibana/5.x
+```
+
+Where the `--regex` parameter should match the dashboard you want to export.
+
+Please note that dashboards exported from Kibana 5.x are not compatible with Kibana 6.x.
+
+You can find more details about the process of creating and exporting the Kibana dashboards by reading [this guide](http://www.elastic.co/guide/en/beats/devguide/master/new-dashboards.md).
+
+
+## Creating a new fileset [_creating_a_new_fileset]
+
+Run the following command in the `filebeat` folder:
+
+```bash
+make create-fileset MODULE={module} FILESET={fileset}
+```
+
+After running the `make create-fileset` command, you’ll find the fileset, along with its generated files, under `module/{{module}}/{{fileset}}`. This directory contains the following files:
+
+```bash
+module/{module}/{fileset}
+├── manifest.yml
+├── config
+│   └── {fileset}.yml
+├── ingest
+│   └── pipeline.json
+├── _meta
+│   └── fields.yml
+│   └── kibana
+│       └── default
+└── test
+```
+
+Let’s look at these files one by one.
+
+
+### manifest.yml [_manifest_yml]
+
+The `manifest.yml` is the control file for the module, where variables are defined and the other files are referenced. It is a YAML file, but in many places in the file, you can use built-in or defined variables by using the `{{.variable}}` syntax.
+
+The `var` section of the file defines the fileset variables and their default values. The module variables can be referenced in other configuration files, and their value can be overridden at runtime by the Filebeat configuration.
+
+As the fileset creator, you can use any names for the variables you define.
Each variable must have a default value. So in its simplest form, this is how you can define a new variable:
+
+```yaml
+var:
+  - name: pipeline
+    default: with_plugins
+```
+
+Most filesets should have a `paths` variable defined, which sets the default paths where the log files are located:
+
+```yaml
+var:
+  - name: paths
+    default:
+      - /example/test.log*
+    os.darwin:
+      - /usr/local/example/test.log*
+      - /example/test.log*
+    os.windows:
+      - c:/programdata/example/logs/test.log*
+```
+
+There’s quite a lot going on in this file, so let’s break it down:
+
+* The name of the variable is `paths` and the default value is an array with one element: `"/example/test.log*"`.
+* Note that variable values don’t have to be strings. They can also be numbers, objects, or, as shown in this example, arrays.
+* We will use the `paths` variable to set the input `paths` setting, so "glob" values can be used here.
+* Besides the `default` value, the file defines values for particular operating systems: a default for darwin/OS X/macOS systems and a default for Windows systems. These are introduced via the `os.darwin` and `os.windows` keywords. The values under these keys become the default for the variable, if Filebeat is executed on the respective OS.
+
+Besides the variable definition, the `manifest.yml` file also contains references to the ingest pipeline and input configuration to use (see next sections):
+
+```yaml
+ingest_pipeline: ingest/pipeline.json
+input: config/testfileset.yml
+```
+
+These should point to the respective files from the fileset.
+
+Note that when evaluating the contents of these files, the variables are expanded, which enables you to select one file or the other depending on the value of a variable. For example:
+
+```yaml
+ingest_pipeline: ingest/{{.pipeline}}.json
+```
+
+This example selects the ingest pipeline file based on the value of the `pipeline` variable. For the `pipeline` variable shown earlier, the path would resolve to `ingest/with_plugins.json` (assuming the variable value isn’t overridden at runtime).
+
+In 6.6 and later, you can specify multiple ingest pipelines.
+
+```yaml
+ingest_pipeline:
+  - ingest/main.json
+  - ingest/plain_logs.json
+  - ingest/json_logs.json
+```
+
+When multiple ingest pipelines are specified, the first one in the list is considered to be the entry point pipeline.
+
+One reason for using multiple pipelines might be to send all logs harvested by this fileset to the entry point pipeline and have it delegate different parts of the processing to other pipelines. You can read details about setting this up in [the `ingest/*.json` section](#ingest-json-entry-point-pipeline).
+
+
+### config/*.yml [_config_yml]
+
+The `config/` folder contains template files that generate Filebeat input configurations. The Filebeat inputs are primarily responsible for tailing files, filtering, and multi-line stitching, so that’s what you configure in the template files.
+
+A typical example looks like this:
+
+```yaml
+type: log
+paths:
+{{ range $i, $path := .paths }}
+ - {{$path}}
+{{ end }}
+exclude_files: [".gz$"]
+```
+
+You’ll find this example in the template file that gets generated automatically when you run `make create-fileset`. In this example, the `paths` variable is used to construct the `paths` list for the input `paths` option.
+
+Any template files that you add to the `config/` folder need to generate a valid Filebeat input configuration in YAML format.
The options accepted by the input configuration are documented in the [Filebeat Inputs](/reference/filebeat/configuration-filebeat-options.md) section of the Filebeat documentation.
+
+The template files use the templating language defined by the [Go standard library](https://golang.org/pkg/text/template/).
+
+Here is another example that also configures multiline stitching:
+
+```yaml
+type: log
+paths:
+{{ range $i, $path := .paths }}
+ - {{$path}}
+{{ end }}
+exclude_files: [".gz$"]
+multiline:
+  pattern: "^# User@Host: "
+  negate: true
+  match: after
+```
+
+Although you can add multiple configuration files under the `config/` folder, only the file indicated by the `manifest.yml` file will be loaded. You can use variables to dynamically switch between configurations.
+
+
+### ingest/*.json [_ingest_json]
+
+The `ingest/` folder contains {{es}} [ingest pipeline](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md) configurations. Ingest pipelines are responsible for parsing the log lines and doing other manipulations on the data.
+
+The files in this folder are JSON or YAML documents representing [pipeline definitions](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md). Just like with the `config/` folder, you can define multiple pipelines, but a single one is loaded at runtime based on the information from `manifest.yml`.
+
+The generator creates a JSON object similar to this one:
+
+```json
+{
+  "description": "Pipeline for parsing {module} {fileset} logs",
+  "processors": [
+    ],
+  "on_failure" : [{
+    "set" : {
+      "field" : "error.message",
+      "value" : "{{ _ingest.on_failure_message }}"
+    }
+  }]
+}
+```
+
+Alternatively, you can use YAML-formatted pipelines, which use a simpler syntax:
+
+```yaml
+description: "Pipeline for parsing {module} {fileset} logs"
+processors:
+on_failure:
+ - set:
+     field: error.message
+     value: "{{ _ingest.on_failure_message }}"
+```
+
+From here, you would typically add processors to the `processors` array to do the actual parsing. For information about available ingest processors, see the [processor reference documentation](elasticsearch://reference/ingestion-tools/enrich-processor/index.md). In particular, you will likely find the [grok processor](elasticsearch://reference/ingestion-tools/enrich-processor/grok-processor.md) to be useful for parsing. Here is an example for parsing the Nginx access logs:
+
+```json
+{
+  "grok": {
+    "field": "message",
+    "patterns":[
+      "%{IPORHOST:nginx.access.remote_ip} - %{DATA:nginx.access.user_name} \\[%{HTTPDATE:nginx.access.time}\\] \"%{WORD:nginx.access.method} %{DATA:nginx.access.url} HTTP/%{NUMBER:nginx.access.http_version}\" %{NUMBER:nginx.access.response_code} %{NUMBER:nginx.access.body_sent.bytes} \"%{DATA:nginx.access.referrer}\" \"%{DATA:nginx.access.agent}\""
+    ],
+    "ignore_missing": true
+  }
+}
+```
+
+Note that you should follow the convention of naming fields prefixed with the module and fileset name: `{module}.{fileset}.field`, e.g. `nginx.access.remote_ip`. Also, please review our [Naming Conventions](/extend/event-conventions.md).
+
+$$$ingest-json-entry-point-pipeline$$$
+In 6.6 and later, ingest pipelines can use the [`pipeline` processor](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md) to delegate parts of the processing to other pipelines.
+
+This can be useful if you want a fileset to ingest the same *logical* information presented in different formats, e.g. CSV vs. JSON versions of the same log files.
Imagine an entry point ingest pipeline that detects the format of a log entry and then conditionally delegates further processing of that log entry, depending on the format, to another pipeline.
+
+```json
+{
+    "processors": [
+      {
+        "grok": {
+          "field": "message",
+          "patterns": [
+            "^%{CHAR:first_char}"
+          ],
+          "pattern_definitions": {
+            "CHAR": "."
+          }
+        }
+      },
+      {
+        "pipeline": {
+          "if": "ctx.first_char == '{'",
+          "name": "{< IngestPipeline "json-log-processing-pipeline" >}" <1>
+        }
+      },
+      {
+        "pipeline": {
+          "if": "ctx.first_char != '{'",
+          "name": "{< IngestPipeline "plain-log-processing-pipeline" >}"
+        }
+      }
+    ]
+}
+```
+
+1. Use the `IngestPipeline` template function to resolve the name. This function converts the specified name into the fully qualified pipeline ID that is stored in Elasticsearch.
+
+
+In order for the above pipeline to work, Filebeat must load the entry point pipeline as well as any sub-pipelines into Elasticsearch. You can tell Filebeat to do so by specifying all the necessary pipelines for the fileset in its `manifest.yml` file. The first pipeline in the list is considered to be the entry point pipeline.
+
+```yaml
+ingest_pipeline:
+  - ingest/main.json
+  - ingest/plain_logs.yml
+  - ingest/json_logs.json
+```
+
+While developing the pipeline definition, we recommend making use of the [Simulate Pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) for testing and quick iteration.
+
+By default, Filebeat does not update ingest pipelines that are already loaded. If you want to force an update of your pipeline during development, use the `./filebeat setup --pipelines` command. This uploads pipelines even if they are already available on the node.
+
+
+### _meta/fields.yml [_metafields_yml_2]
+
+The `fields.yml` file contains the top-level structure for the fields in your fileset. It is used as the source of truth for:
+
+* the generated Elasticsearch mapping template
+* the generated Kibana index pattern
+* the generated documentation for the exported fields
+
+Besides the `fields.yml` file in the fileset, there is also a `fields.yml` file at the module level, placed under `module/{module}/_meta/fields.yml`, which should contain the fields defined at the module level, and the description of the module itself. In most cases, you should add the fields at the fileset level.
+
+After `pipeline.json` is created, it is possible to generate a base `fields.yml`:
+
+```bash
+make create-fields MODULE={module} FILESET={fileset}
+```
+
+Please always check the generated file and make sure the fields are correct. You must add field documentation manually.
+
+If the fields are correct, it is time to generate documentation, configuration, and Kibana index patterns:
+
+```bash
+make update
+```
+
+
+### test [_test]
+
+In the `test/` directory, you should place sample log files generated by the service. We have integration tests, automatically executed by CI, that will run Filebeat on each of the log files under the `test/` folder and check that there are no parsing errors and that all fields are documented.
+
+In addition, assuming you have a `test.log` file, you can add a `test.log-expected.json` file in the same directory that contains the expected documents as they are found via an Elasticsearch search. In this case, the integration tests will automatically check that the result is the same on each run.
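+
+For illustration, a heavily trimmed `test.log-expected.json` might look like the following sketch: a JSON array with one object per expected event, using dotted field names (the module, fileset, and field values shown here are hypothetical):
+
+```json
+[
+    {
+        "event.dataset": "mymodule.myfileset",
+        "fileset.name": "myfileset",
+        "input.type": "log",
+        "message": "example log line"
+    }
+]
+```
+
+When the `GENERATE` environment variable is set to 1 (see the testing procedure below), the integration tests regenerate these expected files for you, so you rarely need to write them by hand.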
+
+To test the filesets with the sample logs and/or generate the expected output, run the tests locally for a specific module, using the following procedure from the Filebeat directory:
+
+1. Start an Elasticsearch instance locally. For example, using Docker:
+
+    ```bash
+    docker run \
+      --name elasticsearch \
+      -p 9200:9200 -p 9300:9300 \
+      -e "xpack.security.http.ssl.enabled=false" -e "ELASTIC_PASSWORD=changeme" \
+      -e "discovery.type=single-node" \
+      --pull always --rm --detach \
+      docker.elastic.co/elasticsearch/elasticsearch:master-SNAPSHOT
+    ```
+
+2. Create an "admin" user on that Elasticsearch instance:
+
+    ```bash
+    curl -u elastic:changeme \
+      http://localhost:9200/_security/user/admin \
+      -X POST -H 'Content-Type: application/json' \
+      -d '{"password": "changeme", "roles": ["superuser"]}'
+    ```
+
+3. Create the testing binary: `make filebeat.test`
+4. Update the fields yaml: `make update`
+5. Create the python env: `make python-env`
+6. Source the python env: `source ./build/python-env/bin/activate`
+7. Run a test, for example to check nginx access log parsing:
+
+    ```bash
+    INTEGRATION_TESTS=1 BEAT_STRICT_PERMS=false ES_PASS=changeme \
+    TESTING_FILEBEAT_MODULES=nginx \
+    pytest tests/system/test_modules.py -v --full-trace
+    ```
+
+8. Add and remove optional env vars as required. Here are some useful ones:
+
+    * `TESTING_FILEBEAT_ALLOW_OLDER`: if set to 1, allow connecting to older versions of Elasticsearch
+    * `TESTING_FILEBEAT_MODULES`: comma-separated list of modules to test.
+    * `TESTING_FILEBEAT_FILESETS`: comma-separated list of filesets to test.
+    * `TESTING_FILEBEAT_FILEPATTERN`: glob pattern for log files within the fileset to test.
+    * `GENERATE`: if set to 1, the expected documents will be generated.
+
+
+The Filebeat logs are written to the `build` directory. It may be useful to tail them in another terminal using `tail -F build/system-tests/run/test_modules.Test.*/output.log`.
+
+For example, if there’s a syntax error in an ingest pipeline, the test will probably just hang. The Filebeat log output will contain the error message from Elasticsearch.
+
diff --git a/docs/extend/generate-index-pattern.md b/docs/extend/generate-index-pattern.md
new file mode 100644
index 000000000000..ac1b7cc795e1
--- /dev/null
+++ b/docs/extend/generate-index-pattern.md
@@ -0,0 +1,17 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/generate-index-pattern.html
+---
+
+# Generating the Beat Index Pattern [generate-index-pattern]
+
+The index-pattern defines the format of each field, and it’s used by Kibana to know how to display the field. If you change the fields exported by the Beat, you need to generate a new index pattern for your Beat. Otherwise, you can just use the index pattern available under the `kibana/*/index-pattern` directory.
+
+The Beat index pattern is generated from the `fields.yml`, which contains all the fields exported by the Beat. For each field, besides the `type`, you can configure the `format` field. The format informs Kibana about how to display a certain field. A good example is `percentage` or `bytes`, used to display fields as `50%` or `5MB`.
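+
+For illustration, a field definition in `fields.yml` that sets a `format` might look like this minimal sketch (the field name, type, and description are hypothetical):
+
+```yaml
+- name: example.memory.used.pct
+  type: scaled_float
+  format: percent
+  description: The percentage of memory in use.
+```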
+
+To generate the index pattern from the `fields.yml`, you need to run the following command in the Beat repository:
+
+```shell
+make update
+```
+
diff --git a/docs/extend/getting-ready-new-protocol.md b/docs/extend/getting-ready-new-protocol.md
new file mode 100644
index 000000000000..1a427bccbccb
--- /dev/null
+++ b/docs/extend/getting-ready-new-protocol.md
@@ -0,0 +1,63 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/getting-ready-new-protocol.html
+---
+
+# Getting Ready [getting-ready-new-protocol]
+
+Packetbeat is written in [Go](http://golang.org/), so having Go installed and knowing the basics are prerequisites for understanding this guide. But don’t worry if you aren’t a Go expert. Go is a relatively new language, and very few people are experts in it. In fact, several people learned Go by contributing to Packetbeat and libbeat, including the original Packetbeat authors.
+
+You will also need a good understanding of the wire protocol that you want to add support for. For standard protocols or protocols used in open source projects, you can usually find detailed specifications and example source code. Wireshark is a very useful tool for understanding the inner workings of the protocols it supports.
+
+In some cases you can even make use of existing libraries for doing the actual parsing and decoding of the protocol. If the particular protocol has a Go implementation with a liberal enough license, you might be able to use it to parse and decode individual messages instead of writing your own parser.
+
+Before starting, please also read the [*Contributing to Beats*](./index.md) guide.
+
+
+### Cloning and Compiling [_cloning_and_compiling]
+
+After you have [installed Go](https://golang.org/doc/install) and set up the [GOPATH](https://golang.org/doc/code.md#GOPATH) environment variable to point to your preferred workspace location, you can clone Packetbeat with the following commands:
+
+```shell
+$ mkdir -p ${GOPATH}/src/github.com/elastic
+$ cd ${GOPATH}/src/github.com/elastic
+$ git clone https://github.com/elastic/beats.git
+```
+
+Note: If you have multiple Go paths, use `${GOPATH%%:*}` instead of `${GOPATH}`.
+
+Then you can compile it with:
+
+```shell
+$ cd beats
+$ make
+```
+
+Note that the location where you clone is important. If you prefer working outside of the `GOPATH` environment, you can clone to another directory and only create a symlink to the `$GOPATH/src/github.com/elastic/` directory.
+
+
+## Forking and Branching [_forking_and_branching]
+
+We recommend the following workflow for contributing to Packetbeat:
+
+* Fork Beats in GitHub to your own account
+* In the `$GOPATH/src/github.com/elastic/beats` folder, add your fork as a new remote. For example (replace `tsg` with your GitHub account):
+
+```shell
+$ git remote add tsg git@github.com:tsg/beats.git
+```
+
+* Create a new branch for your work:
+
+```shell
+$ git checkout -b cool_new_protocol
+```
+
+* Commit as often as you like, and then push to your private fork with:
+
+```shell
+$ git push --set-upstream tsg cool_new_protocol
+```
+
+* When you are ready to submit your PR, simply do so from the GitHub web interface. Feel free to submit your PR early. You can still add commits to the branch after creating the PR. Submitting the PR early gives us more time to provide feedback and perhaps help you with it.
+
diff --git a/docs/extend/import-dashboards.md b/docs/extend/import-dashboards.md
new file mode 100644
index 000000000000..2f4bff91c611
--- /dev/null
+++ b/docs/extend/import-dashboards.md
@@ -0,0 +1,117 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/import-dashboards.html
+---
+
+# Importing Existing Beat Dashboards [import-dashboards]
+
+The official Beats come with Kibana dashboards, and starting with 6.0.0, they are part of every Beat package.
+
+You can use the Beat executable to import all the dashboards and the index pattern for a Beat, including the dependencies such as visualizations and searches.
+
+To import the dashboards, run the `setup` command:
+
+```shell
+./metricbeat setup
+```
+
+The `setup` phase loads several dependencies, such as:
+
+* Index mapping template in Elasticsearch
+* Kibana dashboards
+* Ingest pipelines
+* ILM policy
+
+The dependencies vary depending on the Beat you’re setting up.
+
+For more details about the `setup` command, see the command-line help. For example:
+
+```shell
+./metricbeat help setup
+
+This command does initial setup of the environment:
+
+ * Index mapping template in Elasticsearch to ensure fields are mapped.
+ * Kibana dashboards (where available).
+ * ML jobs (where available).
+ * Ingest pipelines (where available).
+ * ILM policy (for Elasticsearch 6.5 and newer).
+
+Usage:
+  metricbeat setup [flags]
+
+Flags:
+      --dashboards         Setup dashboards
+  -h, --help               help for setup
+      --index-management   Setup all components related to Elasticsearch index management, including template, ilm policy and rollover alias
+      --pipelines          Setup Ingest pipelines
+```
+
+The flags are useful when you don’t want to load everything. For example, to import only the dashboards, use the `--dashboards` flag:
+
+```shell
+./metricbeat setup --dashboards
+```
+
+Starting with Beats 6.0.0, the dashboards are no longer loaded directly into Elasticsearch. Instead, they are imported directly into Kibana. Thus, if your Kibana instance is not listening on localhost, or you enabled {{xpack}} for Kibana, you need to either configure the Kibana endpoint in the config for the Beat, or pass the Kibana host and credentials as arguments to the `setup` command. For example:
+
+```shell
+./metricbeat setup -E setup.kibana.host=192.168.3.206:5601 -E setup.kibana.username=elastic -E setup.kibana.password=secret
+```
+
+By default, the `setup` command imports the dashboards from the `kibana` directory, which is available in the Beat package.
+
+::::{note}
+The format of the saved dashboards is not compatible between Kibana 5.x and 6.x. Thus, the Kibana 5.x dashboards are available in the `5.x` directory, and the Kibana 6.0 and newer dashboards are in the `default` directory.
+::::
+
+
+If you are using customized dashboards, you can import them:
+
+* from a local directory:
+
+    ```shell
+    ./metricbeat setup -E setup.dashboards.directory=kibana
+    ```
+
+* from a local zip archive:
+
+    ```shell
+    ./metricbeat setup -E setup.dashboards.file=metricbeat-dashboards-6.0.zip
+    ```
+
+* from a zip archive available online:
+
+    ```shell
+    ./metricbeat setup -E setup.dashboards.url=path/to/url
+    ```
+
+    See [Kibana dashboards configuration](#import-dashboard-options) for a description of the `setup.dashboards` configuration options.
+
+
+## Import Dashboards for Development [import-dashboards-for-development]
+
+You can make use of the Magefile from the Beat GitHub repository to import the dashboards.
If Kibana is running on localhost, then you can run the following command from the root of the Beat:
+
+```shell
+mage dashboards
+```
+
+
+## Kibana dashboards configuration [import-dashboard-options]
+
+The configuration file (`*.reference.yml`) of each Beat contains the `setup.dashboards` section for configuring where to get the Kibana dashboards from, as well as the name of the index pattern. Each of these configuration options can be overwritten on the command line by using the `-E` flag.
+
+**`setup.dashboards.directory=`**
+:   Local directory that contains the saved dashboards and their dependencies. The default value is the `kibana` directory available in the Beat package.
+
+**`setup.dashboards.file=`**
+:   Local zip archive with the dashboards. The archive can contain Kibana dashboards for a single Beat or for multiple Beats. The dashboards of each Beat are placed under a separate directory with the name of the Beat.
+
+**`setup.dashboards.url=`**
+:   Zip archive with the dashboards, available online. The archive can contain Kibana dashboards for a single Beat or for multiple Beats. The dashboards for each Beat are placed under a separate directory with the name of the Beat.
+
+**`setup.dashboards.index`**
+:   You should only use this option if you want to change the index pattern name that’s used by default. For example, if the default is `metricbeat-*`, you can change it to `custombeat-*`.
+
+
diff --git a/docs/extend/index.md b/docs/extend/index.md
new file mode 100644
index 000000000000..9d1fc2d86f7b
--- /dev/null
+++ b/docs/extend/index.md
@@ -0,0 +1,194 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/beats-contributing.html
+---
+
+# Contribute to Beats [beats-contributing]
+
+If you have a bugfix or new feature that you would like to contribute, please start by opening a topic on the [forums](https://discuss.elastic.co/c/beats). It may be that somebody is already working on it, or that there are particular issues that you should know about before implementing the change.
+
+We enjoy working with contributors to get their code accepted. There are many approaches to fixing a problem, and it is important to find the best approach before writing too much code. After committing your code, check out the [Elastic Contributor Program](https://www.elastic.co/community/contributor) where you can earn points and rewards for your contributions.
+
+The process for contributing to any of the Elastic repositories is similar.
+
+
+## Contribution Steps [contribution-steps]
+
+1. Please make sure you have signed our [Contributor License Agreement](https://www.elastic.co/contributor-agreement/). We are not asking you to assign copyright to us, but to give us the right to distribute your code without restriction. We ask this of all contributors in order to assure our users of the origin and continuing existence of the code. You only need to sign the CLA once.
+2. Send a pull request! Push your changes to your fork of the repository and [submit a pull request](https://help.github.com/articles/using-pull-requests) using our [pull request guidelines](/extend/pr-review.md). New PRs go to the main branch. The Beats core team will backport your PR if it is necessary.
+
+In the pull request, describe what your changes do and mention any bugs/issues related to the pull request. Please also add a changelog entry to [CHANGELOG.next.asciidoc](https://github.com/elastic/beats/blob/main/CHANGELOG.next.asciidoc).
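+
+For illustration, a changelog entry in `CHANGELOG.next.asciidoc` typically looks something like the following sketch (the change description and PR number here are hypothetical):
+
+```asciidoc
+- Fix handling of empty hosts in the example module. {pull}12345[12345]
+```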
+
+
+## Setting Up Your Dev Environment [setting-up-dev-environment]
+
+The Beats are Go programs, so install version 1.22.10 of [Go](http://golang.org/), which is the version used for Beats development.
+
+After [installing Go](https://golang.org/doc/install), set the [GOPATH](https://golang.org/doc/code.md#GOPATH) environment variable to point to your workspace location, and make sure `$GOPATH/bin` is in your PATH.
+
+::::{note}
+One deterministic way to install the proper Go version to work with Beats is to use the [GVM](https://github.com/andrewkroh/gvm) Go version manager. An example for Mac users would be:
+::::
+
+
+```shell
+gvm use 1.22.10
+eval $(gvm 1.22.10)
+```
+
+Then you can clone the Beats git repository:
+
+```shell
+mkdir -p ${GOPATH}/src/github.com/elastic
+git clone https://github.com/elastic/beats ${GOPATH}/src/github.com/elastic/beats
+```
+
+::::{note}
+If you have multiple go paths, use `${GOPATH%%:*}` instead of `${GOPATH}`.
+::::
+
+
+Beats developers primarily use [Mage](https://github.com/magefile/mage) for development. You can install mage using a make target:
+
+```shell
+make mage
+```
+
+Then you can compile a particular Beat by using Mage. For example, for Filebeat:
+
+```shell
+cd beats/filebeat
+mage build
+```
+
+You can list all available mage targets with:
+
+```shell
+mage -l
+```
+
+Some of the Beats might have extra development requirements, in which case you’ll find a CONTRIBUTING.md file in the Beat directory.
+
+We use an [EditorConfig](http://editorconfig.org/) file in the beats repository to standardise how different editors handle whitespace, line endings, and other coding styles in our files. Most popular editors have a [plugin](http://editorconfig.org/#download) for EditorConfig and we strongly recommend that you install it.
+
+
+## Update scripts [update-scripts]
+
+The Beats use a variety of scripts based on Python, make, and mage to generate configuration files and documentation. Be sure to use the version of Python listed in the [.python-version](https://github.com/elastic/beats/blob/main/.python-version) file.
+
+The primary command for updating generated files is:
+
+```shell
+make update
+```
+
+Each Beat has its own `update` target (for both `make` and `mage`), as well as a master `update` in the repository root. If a PR adds or removes a dependency, run `make update` in the root `beats` directory.
+
+Another command properly formats Go source files and adds a copyright header:
+
+```shell
+make fmt
+```
+
+Both of these commands should be run before submitting a PR. You can view all the available make targets with `make help`.
+
+These commands have the following dependencies:
+
+* Python >= 3.7
+* Python [venv module](https://docs.python.org/3/library/venv.html)
+* [Mage](https://github.com/magefile/mage)
+
+The Python venv module is included in the standard library in Python 3. On Debian/Ubuntu systems, you also need to install the `python3-venv` package, which includes additional support scripts:
+
+```shell
+sudo apt-get install python3-venv
+```
+
+
+## Selecting Build Targets [build-target-env-vars]
+
+Beats is built using the `make release` target. By default, make will select from a limited number of preset build targets:
+
+* darwin/amd64
+* darwin/arm64
+* linux/amd64
+* windows/amd64
+
+You can change build targets using the `PLATFORMS` environment variable. Targets set with the `PLATFORMS` variable can either be a GOOS value, or a GOOS/arch pair. For example, `linux` and `linux/amd64` are both valid targets.
You can select multiple targets, and the `PLATFORMS` list is space-delimited; for example, `darwin windows` will build for all supported darwin and windows architectures. In addition, you can add or remove from the list of build targets by prepending `+` or `-` to a given target. For example: `+bsd` or `-darwin`.
+
+You can find the complete list of supported build targets with `go tool dist list`.
+
+
+## Linting [running-linter]
+
+Beats uses [golangci-lint](https://golangci-lint.run/). You can run the pre-configured linter against your change:
+
+```shell
+mage llc
+```
+
+`llc` stands for `Lint Last Change`, which includes all the Go files that were changed in either the last commit (if you’re on the `main` branch) or in a difference between your feature branch and the `main` branch.
+
+It’s expected that sometimes a contributor will be asked to fix linter issues unrelated to their contribution, since the linter was introduced later than changes in some of the files.
+
+You can also run the linter against an individual package, for example the filebeat command package:
+
+```shell
+golangci-lint run ./filebeat/cmd/...
+```
+
+
+## Testing [running-testsuite]
+
+You can run the whole testsuite with the following command:
+
+```shell
+make testsuite
+```
+
+Running the testsuite has the following requirements:
+
+* Python >= 3.7
+* Docker >= 1.12
+* Docker-compose >= 1.11
+
+For more details, refer to the [Testing](/extend/testing.md) guide.
+
+
+## Documentation [documentation]
+
+The main documentation for each Beat is located under `/docs` and is based on [AsciiDoc](https://docs.asciidoctor.org/asciidoc/latest/). The Beats documentation also makes extensive use of conditionals and content reuse to ensure consistency and accuracy. Before contributing to the documentation, read the following resources:
+
+* [Docs HOWTO](https://github.com/elastic/docs/blob/master/README.asciidoc)
+* [Contributing to the docs](/extend/contributing-docs.md)
+
+
+## Dependencies [dependencies]
+
+To create Beats, we rely on Go libraries and other external tools.
+
+
+### Other dependencies [_other_dependencies]
+
+Besides Go libraries, we are using development tools to generate parsers for inputs and processors.
+
+The following packages are required to run `go generate`:
+
+
+#### Auditbeat [_auditbeat]
+
+* FlatBuffers >= 1.9
+
+
+#### Filebeat [_filebeat]
+
+* Graphviz >= 2.43.0
+* Ragel >= 6.10
+
+
+## Changelog [changelog]
+
+To keep up to date with changes to the official Beats for community developers, follow the developer changelog [here](https://github.com/elastic/beats/blob/main/CHANGELOG-developer.next.asciidoc).
+
+
+
diff --git a/docs/extend/metricbeat-dev-overview.md b/docs/extend/metricbeat-dev-overview.md
new file mode 100644
index 000000000000..bcd6d25c472e
--- /dev/null
+++ b/docs/extend/metricbeat-dev-overview.md
@@ -0,0 +1,21 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/metricbeat-dev-overview.html
+---
+
+# Overview [metricbeat-dev-overview]
+
+Metricbeat consists of modules and metricsets. A Metricbeat module is typically named after the service the metrics are fetched from, such as redis, mysql, and so on. Each module can contain multiple metricsets. A metricset represents multiple metrics that are normally retrieved with one request from the remote system.
For example, the Redis `info` metricset retrieves info that you get when you run the Redis `INFO` command, and the MySQL `status` metricset retrieves info that you get when you issue the MySQL `SHOW GLOBAL STATUS` query.
+
+
+## Module and Metricsets Requirements [_module_and_metricsets_requirements]
+
+To guarantee the best user experience, it’s important to us that only high-quality modules are part of Metricbeat. The modules and metricsets that are contributed must meet the following requirements:
+
+* Complete `fields.yml` file to generate docs and Elasticsearch templates
+* Documentation files
+* Integration tests
+* 80% test coverage (unit, integration, and system tests combined)
+
+Metricbeat allows you to build a wide variety of modules and metricsets on top of it. For a module to be accepted, it should focus on fetching service metrics directly from the service itself and not via a third-party tool. The goal is to have as few movable parts as possible and for Metricbeat to run as close as possible to the service that it needs to monitor.
+
diff --git a/docs/extend/metricbeat-developer-guide.md b/docs/extend/metricbeat-developer-guide.md
new file mode 100644
index 000000000000..264f1d0b8916
--- /dev/null
+++ b/docs/extend/metricbeat-developer-guide.md
@@ -0,0 +1,29 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/metricbeat-developer-guide.html
+---
+
+# Extending Metricbeat [metricbeat-developer-guide]
+
+Metricbeat periodically interrogates other services to fetch key metrics information. As a developer, you can use Metricbeat in two different ways:
+
+* Extend Metricbeat directly
+* Create your own Beat and use Metricbeat as a library
+
+We recommend that you start by creating your own Beat to keep the development of your own module or metricset independent of Metricbeat. At a later stage, if you decide to add a module to Metricbeat, you can reuse the code without making additional changes.
+
+The following topics describe how to contribute to Metricbeat by adding metricsets, modules, and new Beats based on Metricbeat:
+
+* [Overview](./metricbeat-dev-overview.md)
+* [Creating a Metricset](./creating-metricsets.md)
+* [Metricset Details](./metricset-details.md)
+* [Creating a Metricbeat Module](./creating-metricbeat-module.md)
+* [Metricbeat Developer FAQ](./dev-faq.md)
+
+If you would like to contribute to Metricbeat or the Beats project, also see [*Contributing to Beats*](./index.md).
+
+
+
+
+
diff --git a/docs/extend/metricset-details.md b/docs/extend/metricset-details.md
new file mode 100644
index 000000000000..c1831564bb95
--- /dev/null
+++ b/docs/extend/metricset-details.md
@@ -0,0 +1,257 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/metricset-details.html
+---
+
+# Metricset Details [metricset-details]
+
+This topic provides additional details about creating metricsets.
+
+
+## Adding Special Configuration Options [_adding_special_configuration_options]
+
+Each metricset can have its own configuration variables defined. To make use of these variables, you must extend the `New` method. For example, let’s assume that you want to add a `password` config option to the metricset. You would extend `beat.yml` in the following way:
+
+```yaml
+metricbeat.modules:
+- module: {module}
+  metricsets: ["{metricset}"]
+  password: "test1234"
+```
+
+To read in the new `password` config option, you need to modify the `New` method. First, you define a config struct that contains the value types to be read.
You can set default values, as needed. Then you pass the config to the `UnpackConfig` method for loading the configuration.
+
+Your implementation should look something like this:
+
+```go
+type MetricSet struct {
+    mb.BaseMetricSet
+    password string
+}
+
+func New(base mb.BaseMetricSet) (mb.MetricSet, error) {
+
+    // Unpack additional configuration options.
+    config := struct {
+        Password string `config:"password"`
+    }{
+        Password: "",
+    }
+    err := base.Module().UnpackConfig(&config)
+    if err != nil {
+        return nil, err
+    }
+
+    return &MetricSet{
+        BaseMetricSet: base,
+        password:      config.Password,
+    }, nil
+}
+```
+
+
+### Timeout Connections to Services [_timeout_connections_to_services]
+
+Each time the `Fetch` method is called, it makes a request to the service, so it’s important to handle the connections correctly. We recommend that you set up the connections in the `New` method and persist them in the `MetricSet` object. This allows connections to be reused.
+
+One very important point is that connections must respect the timeout variable: `base.Module().Config().Timeout`. If the timeout elapses before the request completes, the request must be ended and an error must be returned to make sure the next request can be started on time. By default, the `Timeout` is set to the `Period`, so one request ends before a new request is made.
+
+If a request must be ended or has an error, make sure that you return a useful error message. This error message is also sent to Elasticsearch, making it possible to not only fetch metrics from the service, but also report potential problems or errors with the metricset.
+
+
+### Data Transformation [_data_transformation]
+
+If the data transformation that has to happen in the `Fetch` method is extensive, we recommend that you create a second file called `data.go` in the same package as the metricset. The `data.go` file should contain a function called `eventMapping(...)`. A separate file is not required, but is currently a best practice because it isolates the functionality of the metricset and `Fetch` method from the data mapping.
+
+
+### fields.yml [_fields_yml]
+
+You can find up to three different types of files named `fields.yml` in the beats repository for each metricbeat module:
+
+* `metricbeat/fields.yml`: Contains the definitions to create the Elasticsearch template, the Kibana index pattern configuration, and the exported fields documentation for metricsets. To make sure the Elasticsearch template is correct, it’s important to keep this file up-to-date with all the changes. Generally, you shouldn’t touch this file manually because it’s generated by some commands in the build environment.
+* `metricbeat/module/{module}/_meta/fields.yml`: Contains the general top level structure for all metricsets in a module. Normally you only need to modify the description in this file. Here is an example for the `fields.yml` file from the MySQL module.
+
+    ```yaml
+    - key: mysql
+      title: "MySQL"
+      description: >
+        MySQL server status metrics collected from MySQL.
+      short_config: false
+      release: ga
+      fields:
+        - name: mysql
+          type: group
+          description: >
+            `mysql` contains the metrics that were obtained from MySQL
+            query.
+          fields:
+    ```
+
+* `metricbeat/module/{module}/{metricset}/_meta/fields.yml`: Contains all field definitions retrieved by the metricset. As field types, each field must have a core data type [supported by Elasticsearch](elasticsearch://reference/elasticsearch/mapping-reference/field-data-types.md#_core_datatypes).
Here’s a very basic example that shows one group from the MySQL `status` metricset:
+
+    ```yaml
+    - name: status
+      type: group
+      description: >
+        `status` contains the metrics that were obtained by the status SQL query.
+      fields:
+        - name: aborted
+          type: group
+          description: Aborted status fields.
+          fields:
+            - name: clients
+              type: integer
+              description: >
+                The number of connections that were aborted because the client died without closing the connection properly.
+
+            - name: connects
+              type: integer
+              description: >
+                The number of failed attempts to connect to the MySQL server.
+    ```
+
+
+
+### Testing [_testing]
+
+It’s important to also add tests for your metricset. There are three different types of tests that you need for testing a Beat:
+
+* unit tests
+* integration tests
+* system tests
+
+We recommend that you use all three when you create a metricset. Unit tests are written in Go and have no dependencies. Integration tests are also written in Go but require the service from which the module collects metrics to also be running. System tests for Metricbeat also require the service to be running in most cases and are written in Python based on our small Python test framework. We use [venv](https://docs.python.org/3/library/venv.html) to deal with Python dependencies. You can simply run the command `make python-env` and then `. build/python-env/bin/activate`.
+
+You should use a combination of the three test types to test your metricsets because each method has advantages and disadvantages. To get started with your own tests, it’s best to look at the existing tests. You’ll find the unit and integration tests in the `_test.go` files under existing modules and metricsets. Integration tests usually take the form of `TestFetch` and `TestData`. The system tests are under `tests/system`.
+
+
+#### Adding a Test Environment [_adding_a_test_environment]
+
+Integration and system tests need an environment that’s running the service. You can create this environment by using Docker and a docker-compose file. If you add a module that requires a service, you must add the service to the virtual environment. To do this, you:
+
+* Update the `docker-compose.yml` file with your environment
+* Update the `docker-entrypoint.sh` script
+
+The `docker-compose.yml` file is at the root of Metricbeat. Most services have existing Docker modules and can be added as simply as Redis:
+
+```yaml
+redis:
+  image: redis:3.2.3
+```
+
+To allow the Beat to access your service, make sure that you define the environment variables in the docker-compose file and add the link to the container:
+
+```yaml
+beat:
+  links:
+    - redis
+  environment:
+    - REDIS_HOST=redis
+    - REDIS_PORT=6379
+```
+
+To make sure the service is running before the tests are started, modify the `docker-entrypoint.sh` script to add a check that verifies your service is running. For example, the check for Redis looks like this:
+
+```shell
+waitFor ${REDIS_HOST} ${REDIS_PORT} Redis
+```
+
+The environment expects your service to be available as soon as it receives a response from the given address and port.
+
+
+#### Adding the standard metricset integration tests [_adding_the_standard_metricset_integration_tests]
+
+There are normally two integration tests that are part of every metricset: `TestFetch` and `TestData`. Both tests will start up a new instance of your metricset and fetch an event.
To start a metricset, you need to create a configuration object:
+
+```go
+func getConfig() map[string]interface{} {
+    return map[string]interface{}{
+        "module":     "{module}",
+        "metricsets": []string{"{metricset}"},
+        "hosts":      []string{GetEnvHost() + ":" + GetEnvPort()}, <1>
+    }
+}
+
+func GetEnvHost() string { <2>
+    host := os.Getenv("{module}_HOST")
+    if len(host) == 0 {
+        host = "127.0.0.1"
+    }
+    return host
+}
+
+func GetEnvPort() string { <2>
+    port := os.Getenv("{module}_PORT")
+
+    if len(port) == 0 {
+        port = "1234"
+    }
+    return port
+}
+```
+
+1. Add any additional config options your metricset needs here.
+2. The endpoint used by the metricset needs to be configurable for manual and automated testing. Environment variables should be defined in the module under `_meta/env` and included in the `docker-compose.yml` file.
+
+
+The `TestFetch` integration test will return a single event from your metricset, which you can use to test the validity of the data. `TestData` will (re)generate the `_meta/data.json` file that documents the data reported by the metricset.
+
+```go
+import (
+    "os"
+    "testing"
+
+    "github.com/stretchr/testify/assert"
+
+    "github.com/elastic/beats/libbeat/tests/compose"
+    mbtest "github.com/elastic/beats/metricbeat/mb/testing"
+)
+
+func TestFetch(t *testing.T) {
+    compose.EnsureUp(t, "{module}") <1>
+
+    f := mbtest.NewReportingMetricSetV2Error(t, getConfig())
+
+    events, errs := mbtest.ReportingFetchV2Error(f)
+    if len(errs) > 0 {
+        t.Fatalf("Expected 0 errors, had %d. %v\n", len(errs), errs)
+    }
+
+    assert.NotEmpty(t, events) <2>
+
+}
+
+func TestData(t *testing.T) {
+
+    f := mbtest.NewReportingMetricSetV2Error(t, getConfig())
+
+    err := mbtest.WriteEventsReporterV2Error(f, t, "") <3>
+    if !assert.NoError(t, err) {
+        t.FailNow()
+    }
+}
+```
+
+1. Use this to start the docker service associated with your metricset.
+2. Add any further validity checks to verify the metricset is working.
+3. `WriteEventsReporterV2Error` will take the first valid event from the metricset and write it to `_meta/data.json`.
+
+
+
+#### Running the Tests [_running_the_tests]
+
+To run all the tests, run `make testsuite`. To only run unit tests, run `mage unitTest`, or for integration tests, `mage integTest`. Be aware that a running Docker environment is needed for integration and system tests.
+
+To run `TestData` and generate the `data.json` file, run `go test -tags=integration -data -run TestData` in the directory where your test is located.
+
+To run the integration tests for a single module, set the `MODULE` environment variable to the name of the directory of the module. For example, you can run the following command to run integration tests for the `apache` module:
+
+```shell
+MODULE=apache mage integTest
+```
+
+
+## Documentation [_documentation]
+
+Each module must be documented. The documentation is based on asciidoc and is in the file `module/{module}/_meta/docs.asciidoc` for the module and in `module/{module}/{metricset}/_meta/docs.asciidoc` for the metricset. Basic documentation with the config file and an example output is automatically generated. Use these files to document specific configuration options or usage examples.
+
diff --git a/docs/extend/new-dashboards.md b/docs/extend/new-dashboards.md
new file mode 100644
index 000000000000..912c383ae4a6
--- /dev/null
+++ b/docs/extend/new-dashboards.md
@@ -0,0 +1,28 @@
+---
+navigation_title: "Creating New Kibana Dashboards"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/new-dashboards.html
+---
+
+# Creating New Kibana Dashboards for a Beat or a Beat module [new-dashboards]
+
+
+When contributing to Beats development, you may want to add new dashboards or customize the existing ones. To get started, you can [import the Kibana dashboards](/extend/import-dashboards.md) that come with the official Beats and use them as a starting point for your own dashboards. When you’re done making changes to the dashboards in Kibana, you can use the `export_dashboards` script to [export the dashboards](/extend/export-dashboards.md), along with all dependencies, to a local directory.
+
+To make sure the dashboards are compatible with the latest version of Kibana and Elasticsearch, we recommend that you use the virtual environment under [beats/testing/environments](https://github.com/elastic/beats/tree/master/testing/environments) to import, create, and export the Kibana dashboards.
+
+The following topics provide more detail about importing and working with Beats dashboards:
+
+* [Importing Existing Beat Dashboards](/extend/import-dashboards.md)
+* [Building Your Own Beat Dashboards](/extend/build-dashboards.md)
+* [Generating the Beat Index Pattern](/extend/generate-index-pattern.md)
+* [Exporting New and Modified Beat Dashboards](/extend/export-dashboards.md)
+* [Archiving Your Beat Dashboards](/extend/archive-dashboards.md)
+* [Sharing Your Beat Dashboards](/extend/share-beat-dashboards.md)
+
+
+
+
+
+
+
diff --git a/docs/extend/new-protocol.md b/docs/extend/new-protocol.md
new file mode 100644
index 000000000000..1ad4793aae66
--- /dev/null
+++ b/docs/extend/new-protocol.md
@@ -0,0 +1,16 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/new-protocol.html
+---
+
+# Adding a New Protocol to Packetbeat [new-protocol]
+
+The following topics describe how to add a new protocol to Packetbeat:
+
+* [Getting Ready](/extend/getting-ready-new-protocol.md)
+* [Protocol Modules](/extend/protocol-modules.md)
+* [Testing](/extend/protocol-testing.md)
+
+
+
+
diff --git a/docs/extend/pr-review.md b/docs/extend/pr-review.md
new file mode 100644
index 000000000000..c764b9fff0b2
--- /dev/null
+++ b/docs/extend/pr-review.md
@@ -0,0 +1,23 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/pr-review.html
+---
+
+# Pull request review guidelines [pr-review]
+
+Every change made to Beats must be held to a high standard, and while the responsibility for quality in a pull request ultimately lies with the author, Beats team members have the responsibility as reviewers to verify that quality during their review process. Where this document is unclear or inappropriate, let common sense and consensus override it.
+
+
+## Code Style [_code_style]
+
+Everyone’s got an opinion on style. To avoid spending time on this issue, we rely almost exclusively on `go fmt` and [hound](https://houndci.com/) to police style. If neither of these tools complains, the code is almost certainly fine. There may be exceptions to this, but they should be extremely rare. Only override the judgement of these tools in the most unusual of situations.
+
+
+## Flaky Tests [_flaky_tests]
+
+As software projects grow, so does the complexity of their test cases, and with that the probability of some tests becoming *flaky*. It is everyone’s responsibility to handle flaky tests. If you notice a pull request build failing for a reason that is unrelated to the pushed code, follow the procedure below:
+
+1. Create an issue using the "Flaky Test" GitHub issue template with the "Flaky Test" label attached.
+2. Create a PR to mute or fix the flaky test.
+3. Merge that PR and rebase off of it before continuing with the normal PR process for your original PR.
+
diff --git a/docs/extend/protocol-modules.md b/docs/extend/protocol-modules.md
new file mode 100644
index 000000000000..fde1f19979fc
--- /dev/null
+++ b/docs/extend/protocol-modules.md
@@ -0,0 +1,9 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/protocol-modules.html
+---
+
+# Protocol Modules [protocol-modules]
+
+We are working on updating this section. While you’re waiting for updates, you might want to try out the TCP protocol generator at [https://github.com/elastic/beats/tree/master/packetbeat/scripts/tcp-protocol](https://github.com/elastic/beats/tree/master/packetbeat/scripts/tcp-protocol).
+
diff --git a/docs/extend/protocol-testing.md b/docs/extend/protocol-testing.md
new file mode 100644
index 000000000000..9b48102bf0e0
--- /dev/null
+++ b/docs/extend/protocol-testing.md
@@ -0,0 +1,9 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/protocol-testing.html
+---
+
+# Testing [protocol-testing]
+
+We are working on updating this section.
+
diff --git a/docs/extend/python-beats.md b/docs/extend/python-beats.md
new file mode 100644
index 000000000000..b04754cc8dcb
--- /dev/null
+++ b/docs/extend/python-beats.md
@@ -0,0 +1,68 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/python-beats.html
+---
+
+# Python in Beats [python-beats]
+
+Python is used for Beats development; it is the language used to implement system tests and some other tools. Python dependencies are managed by the use of virtual environments, supported by [venv](https://docs.python.org/3/library/venv.html).
+
+Beats development requires Python >= 3.7.
+
+## Installing Python and venv [installing-python]
+
+Python is preinstalled on many operating systems. If it is not installed on your system, you can follow the instructions available at [https://www.python.org/downloads/](https://www.python.org/downloads/)
+
+In Ubuntu/Debian systems, Python 3 can be installed with:
+
+```sh
+sudo apt-get install python3 python3-venv
+```
+
+There are packages for specific minor versions, so, for example, if you want to use Python 3.7, you can install it with the following command:
+
+```sh
+sudo apt-get install python3.7 python3.7-venv
+```
+
+It is recommended to use Python >= 3.7.
+
+
+## Working with virtual environments [python-virtual-environments]
+
+All `make` and `mage` targets manage their own virtual environments in a transparent way, so for the most common operations required when contributing to Beats, nothing special needs to be done.
+
+Virtual environments used by `make` can be found in most Beats directories under `build/python-env`. They are created by targets that need them, or can be explicitly created by running `make python-env`. The ones used by `mage` are created when required under `build/ve`.
+
+There are some environment variables that can be used to customize the creation of these virtual environments:
+
+* `PYTHON_EXE`: Python executable to be used in the virtual environment. It has to exist in the path.
+* `PYTHON_ENV`: Path to the virtual environment to use. If it doesn’t exist, it is created by `make` or `mage` targets when needed.
+
+Virtual environments can also be used without `make` or `mage`; this is common, for example, when running individual system tests with `pytest`. There are two ways to run commands from the virtual environment:
+
+* "Activating" the virtual environment in your current terminal by running `source ./build/python-env/bin/activate`. The virtual environment can be deactivated by running `deactivate`.
+* Directly running commands from the virtual environment path. For example, `pytest` can be executed as `./build/python-env/bin/pytest`.
+
+To recreate a virtual environment, remove its directory. All virtual environments are also removed with `make clean`.
+
+
+## Working with older versions [python-older-versions]
+
+Older versions of Beats were not compatible with Python 3. If you need to temporarily work on one of these versions of Beats and you don’t want to remove your current virtual environments, you can use environment variables to run commands in a temporary virtual environment.
+
+For example, you can run `make update` with Python 2.7 with the following command:
+
+```sh
+PYTHON_EXE=python2.7 PYTHON_ENV=/tmp/venv2 make update
+```
+
+If you need to run tests you can also create a virtual environment and then activate it to run commands from there:
+
+```sh
+PYTHON_EXE=python2.7 PYTHON_ENV=/tmp/venv2 make python-env
+source /tmp/venv2/bin/activate
+...
+```
+
+
diff --git a/docs/extend/share-beat-dashboards.md b/docs/extend/share-beat-dashboards.md
new file mode 100644
index 000000000000..782cb21a22c3
--- /dev/null
+++ b/docs/extend/share-beat-dashboards.md
@@ -0,0 +1,9 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/share-beat-dashboards.html
+---
+
+# Sharing Your Beat Dashboards [share-beat-dashboards]
+
+When you’re done with your own Beat dashboards, how about letting everyone know? You can create a topic on the [Beats forum](https://discuss.elastic.co/c/beats), and provide the link to the zip archive together with a short description.
+
diff --git a/docs/extend/testing.md b/docs/extend/testing.md
new file mode 100644
index 000000000000..ed288f78f817
--- /dev/null
+++ b/docs/extend/testing.md
@@ -0,0 +1,118 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/devguide/current/testing.html
+---
+
+# Testing [testing]
+
+Beats has various sets of tests. This guide should help you understand how the different test suites work, how they are used, and how new tests are added.
+
+In general there are two major test suites:
+
+* Tests written in Go
+* Tests written in Python
+
+The tests written in Go use the [Go Testing package](https://golang.org/pkg/testing/). The tests written in Python depend on [pytest](https://docs.pytest.org/en/latest/) and require a compiled and executable binary from the Go code. The Python tests run a Beat with a specific config and params and either check if the output is as expected or if the correct things show up in the logs.
+
+For both of the above test suites, so-called integration tests exist. Integration tests in Beats are tests which require an external system like Elasticsearch to test if the integration with this service works as expected.
The Beats testsuite provides Docker containers and docker-compose files to start these environments, but a developer can also run the required services locally.
+
+## Running Go Tests [_running_go_tests]
+
+The Go tests can be executed in each Go package by running `go test .`. This will execute all tests which don’t require an external service to be running. To run all non-integration tests for a Beat, run `mage unitTest`.
+
+All Go tests are in the same package as the tested code itself and have the suffix `_test` in the file name. Most of the tests are in the same package as the rest of the code. Some of the tests which should be separate from the rest of the code or should not use private variables go under `{packagename}_test`.
+
+### Running Go Integration Tests [_running_go_integration_tests]
+
+Integration tests are labelled with the `//go:build integration` build tag and use the `_integration_test.go` suffix.
+
+To run the integration tests, use the `mage goIntegTest` target, which will start the required services using [docker-compose](https://docs.docker.com/compose/) and run all integration tests.
+
+It is also possible to run module-specific integration tests. For example, to run Kafka-only tests, use `MODULE=kafka mage integTest -v`
+
+It is possible to start the `docker-compose` services manually to allow selecting which specific tests should be run. An example follows for Filebeat:
+
+```bash
+cd filebeat
+# Pull and build the containers. Only needs to be done once unless you change the containers.
+mage docker:composeBuild
+# Bring up all containers, wait until they are healthy, and put them in the background.
+mage docker:composeUp
+# Run all integration tests.
+go test ./filebeat/... -tags integration
+# Stop all started containers.
+mage docker:composeDown
+```
+
+
+### Generate sample events [_generate_sample_events]
+
+Go tests support generating sample events to be used as fixtures.
+
+This generation can be performed by running `go test --data`. This functionality is supported by Packetbeat and Metricbeat.
+
+In Metricbeat, run the command from within a module like this: `go test --tags integration,azure --data --run "TestData"`. Make sure to add the relevant tags (`integration` is common; then add module- and metricset-specific tags).
+
+A note about tags: the `--data` flag is a custom flag added by the Metricbeat and Packetbeat frameworks. It will not be present if the tags do not match, as the relevant code will not be run and is silently skipped (without the tag, the test file is ignored by the Go compiler, so the framework doesn’t load). This may happen if there are different tags in the build tags of the metricset under test (e.g. the GCP billing metricset requires the `billing` tag too).
+
+
+
+## Running System (integration) Tests (Python and Go) [_running_system_integration_tests_python_and_go]
+
+The system tests are defined in the `tests/system` (for legacy Python tests) and `tests/integration` (for Go tests) directories. They require a testing binary to be available and the Python environment to be set up.
+
+To create the testing binary, run `mage buildSystemTestBinary`. This will create the test binary in the Beat directory. To set up the Python testing environment, run `mage pythonVirtualEnv`, which will create a virtual environment with all test dependencies and print its location. To activate it, the instructions depend on your operating system.
See the [virtualenv documentation](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#activating-a-virtual-environment).
+
+To run the system and integration tests, use the `mage pythonIntegTest` target, which will start the required services using [docker-compose](https://docs.docker.com/compose/) and run all integration tests. Similar to Go integration tests, the individual steps can be done manually to allow selecting which tests should be run:
+
+```bash
+# Create and activate the system test virtual environment (assumes a Unix system).
+source $(mage pythonVirtualEnv)/bin/activate
+
+# Pull and build the containers. Only needs to be done once unless you change the containers.
+mage docker:composeBuild
+
+# Bring up all containers, wait until they are healthy, and put them in the background.
+mage docker:composeUp
+
+# Run all system and integration tests.
+INTEGRATION_TESTS=1 pytest ./tests/system
+
+# Stop all started containers.
+mage docker:composeDown
+```
+
+Filebeat’s module Python tests have additional documentation found in the [Filebeat module](/extend/filebeat-modules-devguide.md) guide.
+
+
+## Test commands [_test_commands]
+
+To list all mage commands, run `mage -l`. A quick summary of the available test make commands:
+
+* `unit`: Go tests
+* `unit-tests`: Go tests with coverage reports
+* `integration-tests`: Go tests with services in local docker
+* `integration-tests-environment`: Go tests inside docker with service in docker
+* `fast-system-tests`: Python tests
+* `system-tests`: Python tests with coverage report
+* `INTEGRATION_TESTS=1 system-tests`: Python tests with local services
+* `system-tests-environment`: Python tests inside docker with service in docker
+* `testsuite`: runs the complete test suite in the Docker environment
+* `test`: runs the testsuite without the environment
+
+There are two experimental test commands:
+
+* `benchmark-tests`: Running Go tests with the `-bench` flag
+* `load-tests`: Running system tests with the `LOAD_TESTS=1` flag
+
+
+## Coverage report [_coverage_report]
+
+If the tests were run to create test coverage, the coverage report files can be found under `build/docs`. To create a more human-readable file out of the `.cov` files, use `make coverage-report`. It creates an `.html` file for each report and a `full.html` file as a summary of all reports together in the `build/coverage` directory.
+
+
+## Race detection [_race_detection]
+
+All tests can be run with the Go race detector enabled by setting the environment variable `RACE_DETECTOR=1`. This applies to tests in Go and Python. For Python, the test binary has to be recompiled when the flag is changed. Having race detection enabled will slow down the tests.
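+
+For example, a minimal way to run the Go unit tests with race detection enabled, using the mage target described above:
+
+```shell
+RACE_DETECTOR=1 mage unitTest
+```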
+ + diff --git a/docs/extend/toc.yml b/docs/extend/toc.yml new file mode 100644 index 000000000000..1774ca5cf8da --- /dev/null +++ b/docs/extend/toc.yml @@ -0,0 +1,32 @@ +toc: + - file: index.md + - file: pr-review.md + - file: contributing-docs.md + - file: testing.md + - file: community-beats.md + children: + - file: event-fields-yml.md + - file: event-conventions.md + - file: python-beats.md + - file: new-dashboards.md + children: + - file: import-dashboards.md + - file: build-dashboards.md + - file: generate-index-pattern.md + - file: export-dashboards.md + - file: archive-dashboards.md + - file: share-beat-dashboards.md + - file: new-protocol.md + children: + - file: getting-ready-new-protocol.md + - file: protocol-modules.md + - file: protocol-testing.md + - file: metricbeat-developer-guide.md + children: + - file: metricbeat-dev-overview.md + - file: creating-metricsets.md + - file: metricset-details.md + - file: creating-metricbeat-module.md + - file: dev-faq.md + - file: filebeat-modules-devguide.md + - file: _migrating_dashboards_from_kibana_5_x_to_6_x.md diff --git a/docs/reference/auditbeat/add-cloud-metadata.md b/docs/reference/auditbeat/add-cloud-metadata.md new file mode 100644 index 000000000000..32b1a459cb05 --- /dev/null +++ b/docs/reference/auditbeat/add-cloud-metadata.md @@ -0,0 +1,205 @@ +--- +navigation_title: "add_cloud_metadata" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-cloud-metadata.html +--- + +# Add cloud metadata [add-cloud-metadata] + + +The `add_cloud_metadata` processor enriches each event with instance metadata from the machine’s hosting provider. At startup it will query a list of hosting providers and cache the instance metadata. + +The following cloud providers are supported: + +* Amazon Web Services (AWS) +* Digital Ocean +* Google Compute Engine (GCE) +* [Tencent Cloud](https://www.qcloud.com/?lang=en) (QCloud) +* Alibaba Cloud (ECS) +* Huawei Cloud (ECS) +* Azure Virtual Machine +* Openstack Nova +* Hetzner Cloud + + +## Special notes [_special_notes] + +`huawei` is an alias for `openstack`. Huawei Cloud runs on the OpenStack platform, and when viewed from a metadata API standpoint, it is impossible to differentiate it from OpenStack. If you know that your deployments run on Huawei Cloud exclusively, and you wish to have `huawei` as the `cloud.provider` value, you can achieve this by overwriting the value using an `add_fields` processor (see the sketch at the end of this section). + +The Alibaba Cloud and Tencent Cloud providers are disabled by default, because they require access to a remote host. The `providers` setting allows users to select a list of default providers to query. + +Cloud providers tend to maintain metadata services compliant with other cloud providers. For example, Openstack supports an [EC2-compatible metadata service](https://docs.openstack.org/nova/latest/user/metadata.html#ec2-compatible-metadata). This makes it impossible to reliably determine the cloud provider (the `cloud.provider` property) with auto discovery (when the `providers` configuration is omitted). The processor implementation incorporates a priority mechanism where priority is given to some providers over others when there are multiple successful metadata results. Currently, `aws/ec2` and `azure` have priority over any other provider, as their metadata retrieval relies on SDKs. The expectation here is that SDK methods should fail if run in an environment not configured accordingly (for example, missing configuration or credentials). 
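+ +For example, here is a minimal sketch of that override (the settings follow the `add_fields` documentation; placing it after `add_cloud_metadata` matters, because `add_fields` overwrites the target field if it already exists): + +```yaml +processors: + - add_cloud_metadata: ~ + # Rewrite the detected provider for OpenStack-based Huawei Cloud deployments. + - add_fields: + target: cloud + fields: + provider: huawei +```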
+ + +## Configurations [_configurations] + +The simple configuration below enables the processor. + +```yaml +processors: + - add_cloud_metadata: ~ +``` + +The `add_cloud_metadata` processor has three optional configuration settings. The first one is `timeout`, which specifies the maximum amount of time to wait for a successful response when detecting the hosting provider. The default timeout value is `3s`. + +If a timeout occurs then no instance metadata will be added to the events. This makes it possible to enable this processor for all your deployments (in the cloud or on-premise). + +The second optional setting is `providers`. The `providers` setting accepts a list of cloud provider names to be used. If `providers` is not configured, then all providers that do not access a remote endpoint are enabled by default. The list of providers may alternatively be configured with the environment variable `BEATS_ADD_CLOUD_METADATA_PROVIDERS`, by setting it to a comma-separated list of provider names. + +List of names the `providers` setting supports: + +* "alibaba", or "ecs" for the Alibaba Cloud provider (disabled by default). +* "azure" for Azure Virtual Machine (enabled by default). If the virtual machine is part of an AKS managed cluster, the fields `orchestrator.cluster.name` and `orchestrator.cluster.id` can also be retrieved. The "TENANT_ID", "CLIENT_ID" and "CLIENT_SECRET" environment variables need to be set for authentication purposes. If they are not set, we fall back to [DefaultAzureCredential](https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authentication?tabs=bash#2-authenticate-with-azure) and the user can choose different authentication methods (e.g. workload identity). +* "digitalocean" for Digital Ocean (enabled by default). +* "aws", or "ec2" for Amazon Web Services (enabled by default). +* "gcp" for Google Compute Engine (enabled by default). +* "openstack", "nova", or "huawei" for Openstack Nova (enabled by default). +* "openstack-ssl", or "nova-ssl" for Openstack Nova when SSL metadata APIs are enabled (enabled by default). +* "tencent", or "qcloud" for Tencent Cloud (disabled by default). +* "hetzner" for Hetzner Cloud (enabled by default). + +For example, the configuration below only uses the `aws` metadata retrieval mechanism: + +```yaml +processors: + - add_cloud_metadata: + providers: + - aws +``` + +The third optional configuration setting is `overwrite`. When `overwrite` is `true`, `add_cloud_metadata` overwrites existing `cloud.*` fields (`false` by default). + +The `add_cloud_metadata` processor supports SSL options to configure the HTTP client used to query cloud metadata. See [SSL](/reference/auditbeat/configuration-ssl.md) for more information. + + +## Provided metadata [_provided_metadata] + +The metadata that is added to events varies by hosting provider. Below are examples for each of the supported providers. + +*AWS* + +The metadata given below is extracted from the [instance identity document](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html): + +```json +{ + "cloud": { + "account.id": "123456789012", + "availability_zone": "us-east-1c", + "instance.id": "i-4e123456", + "machine.type": "t2.medium", + "image.id": "ami-abcd1234", + "provider": "aws", + "region": "us-east-1" + } +} +``` + +If the EC2 instance has IMDS enabled and tags are allowed through the IMDS endpoint, the processor will also append the tags to the metadata. 
Please refer to the official documentation on the [IMDS endpoint](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) for further details. + +```json +{ + "aws": { + "tags": { + "org" : "myOrg", + "owner": "userID" + } + } +} +``` + +*Digital Ocean* + +```json +{ + "cloud": { + "instance.id": "1234567", + "provider": "digitalocean", + "region": "nyc2" + } +} +``` + +*GCP* + +```json +{ + "cloud": { + "availability_zone": "us-east1-b", + "instance.id": "1234556778987654321", + "machine.type": "f1-micro", + "project.id": "my-dev", + "provider": "gcp" + } +} +``` + +*Tencent Cloud* + +```json +{ + "cloud": { + "availability_zone": "gz-azone2", + "instance.id": "ins-qcloudv5", + "provider": "qcloud", + "region": "china-south-gz" + } +} +``` + +*Alibaba Cloud* + +This metadata is only available when VPC is selected as the network type of the ECS instance. + +```json +{ + "cloud": { + "availability_zone": "cn-shenzhen", + "instance.id": "i-wz9g2hqiikg0aliyun2b", + "provider": "ecs", + "region": "cn-shenzhen-a" + } +} +``` + +*Azure Virtual Machine* + +```json +{ + "cloud": { + "provider": "azure", + "instance.id": "04ab04c3-63de-4709-a9f9-9ab8c0411d5e", + "instance.name": "test-az-vm", + "machine.type": "Standard_D3_v2", + "region": "eastus2" + } +} +``` + +*Openstack Nova* + +```json +{ + "cloud": { + "instance.name": "test-998d932195.mycloud.tld", + "instance.id": "i-00011a84", + "availability_zone": "xxxx-az-c", + "provider": "openstack", + "machine.type": "m2.large" + } +} +``` + +*Hetzner Cloud* + +```json +{ + "cloud": { + "availability_zone": "hel1-dc2", + "instance.name": "my-hetzner-instance", + "instance.id": "111111", + "provider": "hetzner", + "region": "eu-central" + } +} +``` + diff --git a/docs/reference/auditbeat/add-cloudfoundry-metadata.md b/docs/reference/auditbeat/add-cloudfoundry-metadata.md new file mode 100644 index 000000000000..92dd2afbdc54 --- /dev/null +++ b/docs/reference/auditbeat/add-cloudfoundry-metadata.md @@ -0,0 +1,70 @@ +--- +navigation_title: "add_cloudfoundry_metadata" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-cloudfoundry-metadata.html +--- + +# Add Cloud Foundry metadata [add-cloudfoundry-metadata] + + +The `add_cloudfoundry_metadata` processor annotates each event with relevant metadata from Cloud Foundry applications. The events are annotated with Cloud Foundry metadata only if the event contains a reference to a Cloud Foundry application (using the field `cloudfoundry.app.id`) and the configured Cloud Foundry client is able to retrieve information for the application. + +Each event is annotated with: + +* Application Name +* Space ID +* Space Name +* Organization ID +* Organization Name + +::::{note} +Pivotal Application Service and Tanzu Application Service include this metadata in all events from the firehose since version 2.8. In these cases the metadata in the events is used, and the `add_cloudfoundry_metadata` processor doesn’t modify these fields. +:::: + + +For efficient annotation, application metadata retrieved by the Cloud Foundry client is stored in a persistent cache on the filesystem under the `path.data` directory. This is done so the metadata can persist across restarts of Auditbeat. For control over this cache, use the `cache_duration` and `cache_retry_delay` settings. 
+ +```yaml +processors: + - add_cloudfoundry_metadata: + api_address: https://api.dev.cfdev.sh + client_id: uaa-filebeat + client_secret: verysecret + ssl: + verification_mode: none + # To connect to Cloud Foundry over verified TLS you can specify a client and CA certificate. + #ssl: + # certificate_authorities: ["/etc/pki/cf/ca.pem"] + # certificate: "/etc/pki/cf/cert.pem" + # key: "/etc/pki/cf/cert.key" +``` + +It has the following settings: + +`api_address` +: (Optional) The URL of the Cloud Foundry API. It uses `http://api.bosh-lite.com` by default. + +`doppler_address` +: (Optional) The URL of the Cloud Foundry Doppler Websocket. It uses the value from `${api_address}/v2/info` by default. + +`uaa_address` +: (Optional) The URL of the Cloud Foundry UAA API. It uses the value from `${api_address}/v2/info` by default. + +`rlp_address` +: (Optional) The URL of the Cloud Foundry RLP Gateway. It uses the value from `${api_address}/v2/info` by default. + +`client_id` +: Client ID to authenticate with Cloud Foundry. + +`client_secret` +: Client Secret to authenticate with Cloud Foundry. + +`cache_duration` +: (Optional) Maximum amount of time to cache an application’s metadata. Defaults to 120 seconds. + +`cache_retry_delay` +: (Optional) Time to wait before trying to obtain an application’s metadata again in case of error. Defaults to 20 seconds. + +`ssl` +: (Optional) SSL configuration to use when connecting to Cloud Foundry. + diff --git a/docs/reference/auditbeat/add-docker-metadata.md b/docs/reference/auditbeat/add-docker-metadata.md new file mode 100644 index 000000000000..fe9b442b5765 --- /dev/null +++ b/docs/reference/auditbeat/add-docker-metadata.md @@ -0,0 +1,80 @@ +--- +navigation_title: "add_docker_metadata" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-docker-metadata.html +--- + +# Add Docker metadata [add-docker-metadata] + + +The `add_docker_metadata` processor annotates each event with relevant metadata from Docker containers. At startup it detects a Docker environment and caches the metadata. The events are annotated with Docker metadata only if a valid configuration is detected and the processor is able to reach the Docker API. + +Each event is annotated with: + +* Container ID +* Name +* Image +* Labels + +::::{note} +When running Auditbeat in a container, you need to provide access to Docker’s unix socket in order for the `add_docker_metadata` processor to work. You can do this by mounting the socket inside the container. For example: + +`docker run -v /var/run/docker.sock:/var/run/docker.sock ...` + +To avoid privilege issues, you may also need to add `--user=root` to the `docker run` flags. Because the user must be part of the docker group in order to access `/var/run/docker.sock`, root access is required if Auditbeat is running as non-root inside the container. + +If the Docker daemon is restarted, the mounted socket will become invalid and metadata will stop working. In these situations there are two options: + +* Restart Auditbeat every time Docker is restarted +* Mount the entire `/var/run` directory (instead of just the socket) + +:::: + + +```yaml +processors: + - add_docker_metadata: + host: "unix:///var/run/docker.sock" + #match_fields: ["system.process.cgroup.id"] + #match_pids: ["process.pid", "process.parent.pid"] + #match_source: true + #match_source_index: 4 + #match_short_id: true + #cleanup_timeout: 60 + #labels.dedot: false + # To connect to Docker over TLS you must specify a client and CA certificate. 
+ #ssl: + # certificate_authority: "/etc/pki/root/ca.pem" + # certificate: "/etc/pki/client/cert.pem" + # key: "/etc/pki/client/cert.key" +``` + +It has the following settings: + +`host` +: (Optional) Docker socket (UNIX or TCP socket). It uses `unix:///var/run/docker.sock` by default. + +`ssl` +: (Optional) SSL configuration to use when connecting to the Docker socket. + +`match_fields` +: (Optional) A list of fields to match a container ID. At least one of them should hold a container ID to get the event enriched. + +`match_pids` +: (Optional) A list of fields that contain process IDs. If the process is running in Docker then the event will be enriched. The default value is `["process.pid", "process.parent.pid"]`. + +`match_source` +: (Optional) Match container ID from a log path present in the `log.file.path` field. Enabled by default. + +`match_short_id` +: (Optional) Match container short ID from a log path present in the `log.file.path` field. Disabled by default. This allows matching directory names that contain the first 12 characters of the container ID. For example, `/var/log/containers/b7e3460e2b21/*.log`. + +`match_source_index` +: (Optional) Index in the source path split by `/` to look for the container ID. It defaults to 4 to match `/var/lib/docker/containers//*.log`. + +`cleanup_timeout` +: (Optional) Time of inactivity after which the metadata for a container is cleaned up and forgotten. 60s by default. + +`labels.dedot` +: (Optional) Defaults to `false`. If set to `true`, dots in labels are replaced with `_`. + diff --git a/docs/reference/auditbeat/add-fields.md b/docs/reference/auditbeat/add-fields.md new file mode 100644 index 000000000000..7430ad82e2ef --- /dev/null +++ b/docs/reference/auditbeat/add-fields.md @@ -0,0 +1,51 @@ +--- +navigation_title: "add_fields" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-fields.html +--- + +# Add fields [add-fields] + + +The `add_fields` processor adds additional fields to the event. Fields can be scalar values, arrays, dictionaries, or any nested combination of these. The `add_fields` processor will overwrite the target field if it already exists. By default the fields that you specify will be grouped under the `fields` sub-dictionary in the event. To group the fields under a different sub-dictionary, use the `target` setting. To store the fields as top-level fields, set `target: ''`. + +`target` +: (Optional) Sub-dictionary to put all fields into. Defaults to `fields`. Setting this to `@metadata` will add values to the event metadata instead of fields. + +`fields` +: Fields to be added. + +For example, this configuration: + +```yaml +processors: + - add_fields: + target: project + fields: + name: myproject + id: '574734885120952459' +``` + +Adds these fields to any event: + +```json +{ + "project": { + "name": "myproject", + "id": "574734885120952459" + } +} +``` + +This configuration will alter the event metadata: + +```yaml +processors: + - add_fields: + target: '@metadata' + fields: + op_type: "index" +``` + +When the event is ingested (e.g. by Elasticsearch) the document will have `op_type: "index"` set as a metadata field. 
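+ +Similarly, here is a minimal sketch of storing custom fields at the top level of the event rather than under `fields`, as described above (the `environment` field name is only an illustration): + +```yaml +processors: + - add_fields: + # An empty target places the fields at the top level of the event. + target: '' + fields: + environment: staging +```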
+ diff --git a/docs/reference/auditbeat/add-host-metadata.md b/docs/reference/auditbeat/add-host-metadata.md new file mode 100644 index 000000000000..bad0295d7310 --- /dev/null +++ b/docs/reference/auditbeat/add-host-metadata.md @@ -0,0 +1,92 @@ +--- +navigation_title: "add_host_metadata" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-host-metadata.html +--- + +# Add Host metadata [add-host-metadata] + + +```yaml +processors: + - add_host_metadata: + cache.ttl: 5m + geo: + name: nyc-dc1-rack1 + location: 40.7128, -74.0060 + continent_name: North America + country_iso_code: US + region_name: New York + region_iso_code: NY + city_name: New York +``` + +It has the following settings: + +`netinfo.enabled` +: (Optional) Default true. Include IP addresses and MAC addresses as the fields `host.ip` and `host.mac`. + +`cache.ttl` +: (Optional) The processor uses an internal cache for the host metadata. This sets the cache expiration time. The default is `5m`; negative values disable caching altogether. + +`geo.name` +: (Optional) User definable token to be used for identifying a discrete location. Frequently a datacenter, rack, or similar. + +`geo.location` +: (Optional) Latitude and longitude in comma-separated format. + +`geo.continent_name` +: (Optional) Name of the continent. + +`geo.country_name` +: (Optional) Name of the country. + +`geo.region_name` +: (Optional) Name of the region. + +`geo.city_name` +: (Optional) Name of the city. + +`geo.country_iso_code` +: (Optional) ISO country code. + +`geo.region_iso_code` +: (Optional) ISO region code. + +`replace_fields` +: (Optional) Default true. If set to `false`, original host fields from the event will not be replaced by host fields from `add_host_metadata`. + +The `add_host_metadata` processor annotates each event with relevant metadata from the host machine. The fields added to the event look like the following: + +```json +{ + "host":{ + "architecture":"x86_64", + "name":"example-host", + "id":"", + "os":{ + "family":"darwin", + "type":"macos", + "build":"16G1212", + "platform":"darwin", + "version":"10.12.6", + "kernel":"16.7.0", + "name":"Mac OS X" + }, + "ip": ["192.168.0.1", "10.0.0.1"], + "mac": ["00:25:96:12:34:56", "72:00:06:ff:79:f1"], + "geo": { + "continent_name": "North America", + "country_iso_code": "US", + "region_name": "New York", + "region_iso_code": "NY", + "city_name": "New York", + "name": "nyc-dc1-rack1", + "location": "40.7128, -74.0060" + } + } +} +``` + +Note: By default (with `replace_fields` set to `true`), the `add_host_metadata` processor will overwrite host fields if `host.*` fields already exist in the event from Beats. Please use `add_observer_metadata` if the Beat is being used to monitor external systems. + diff --git a/docs/reference/auditbeat/add-id.md b/docs/reference/auditbeat/add-id.md new file mode 100644 index 000000000000..10e2f87ba69f --- /dev/null +++ b/docs/reference/auditbeat/add-id.md @@ -0,0 +1,24 @@ +--- +navigation_title: "add_id" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-id.html +--- + +# Generate an ID for an event [add-id] + + +The `add_id` processor generates a unique ID for an event. + +```yaml +processors: + - add_id: ~ +``` + +The following settings are supported: + +`target_field` +: (Optional) Field where the generated ID will be stored. Default is `@metadata._id`. + +`type` +: (Optional) Type of ID to generate. Currently only `elasticsearch` is supported and is the default. 
The `elasticsearch` type generates IDs using the same algorithm that Elasticsearch uses for auto-generating document IDs. + diff --git a/docs/reference/auditbeat/add-kubernetes-metadata.md b/docs/reference/auditbeat/add-kubernetes-metadata.md new file mode 100644 index 000000000000..bbdc594cbef6 --- /dev/null +++ b/docs/reference/auditbeat/add-kubernetes-metadata.md @@ -0,0 +1,244 @@ +--- +navigation_title: "add_kubernetes_metadata" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-kubernetes-metadata.html +--- + +# Add Kubernetes metadata [add-kubernetes-metadata] + + +The `add_kubernetes_metadata` processor annotates each event with relevant metadata based on which Kubernetes pod the event originated from. This processor only adds metadata to events that do not already have it present. + +At startup, it detects an `in_cluster` environment and caches the Kubernetes-related metadata. Events are only annotated if a valid configuration is detected. If it’s not able to detect a valid Kubernetes configuration, the events are not annotated with Kubernetes-related metadata. + +Each event is annotated with: + +* Pod Name +* Pod UID +* Namespace +* Labels + +In addition, the node and namespace metadata are added to the pod metadata. + +The `add_kubernetes_metadata` processor has two basic building blocks: + +* Indexers +* Matchers + +Indexers use pod metadata to create unique identifiers for each one of the pods. These identifiers help to correlate the metadata of the observed pods with actual events. For example, the `ip_port` indexer can take a Kubernetes pod and create identifiers for it based on all its `pod_ip:container_port` combinations. + +Matchers use information in events to construct lookup keys that match the identifiers created by the indexers. For example, when the `fields` matcher takes `["metricset.host"]` as a lookup field, it would construct a lookup key with the value of the field `metricset.host`. When one of these lookup keys matches with one of the identifiers, the event is enriched with the metadata of the identified pod. + +Each Beat can define its own default indexers and matchers which are enabled by default. For example, Filebeat enables the `container` indexer, which identifies pod metadata based on all container IDs, and a `logs_path` matcher, which takes the `log.file.path` field, extracts the container ID, and uses it to retrieve metadata. + +You can find more information about the available indexers and matchers, and some examples in [Indexers and matchers](#kubernetes-indexers-and-matchers). + +The configuration below enables the processor when auditbeat is run as a pod in Kubernetes. + +```yaml +processors: + - add_kubernetes_metadata: + # Defining indexers and matchers manually is required for auditbeat, for instance: + #indexers: + # - ip_port: + #matchers: + # - fields: + # lookup_fields: ["metricset.host"] + #labels.dedot: true + #annotations.dedot: true +``` + +The configuration below enables the processor on a Beat running as a process on the Kubernetes node. 
+ +```yaml +processors: + - add_kubernetes_metadata: + host: + # If kube_config is not set, KUBECONFIG environment variable will be checked + # and if not present it will fall back to InCluster + kube_config: ~/.kube/config + # Defining indexers and matchers manually is required for auditbeat, for instance: + #indexers: + # - ip_port: + #matchers: + # - fields: + # lookup_fields: ["metricset.host"] + #labels.dedot: true + #annotations.dedot: true +``` + +The configuration below has the default indexers and matchers disabled and enables the ones that the user is interested in. + +```yaml +processors: + - add_kubernetes_metadata: + host: + # If kube_config is not set, KUBECONFIG environment variable will be checked + # and if not present it will fall back to InCluster + kube_config: ~/.kube/config + default_indexers.enabled: false + default_matchers.enabled: false + indexers: + - ip_port: + matchers: + - fields: + lookup_fields: ["metricset.host"] + #labels.dedot: true + #annotations.dedot: true +``` + +The `add_kubernetes_metadata` processor has the following configuration settings: + +`host` +: (Optional) Specify the node to scope auditbeat to in case it cannot be accurately detected, as when running auditbeat in host network mode. + +`scope` +: (Optional) Specify if the processor should have visibility at the node level or at the entire cluster level. Possible values are `node` and `cluster`. Scope is `node` by default. + +`namespace` +: (Optional) Select the namespace from which to collect the metadata. If it is not set, the processor collects metadata from all namespaces. It is unset by default. + +`add_resource_metadata` +: (Optional) Specify filters and configuration for the extra metadata that will be added to the event. Configuration parameters: + + * `node` or `namespace`: Specify labels and annotations filters for the extra metadata coming from node and namespace. By default all labels are included while annotations are not. To change the default behaviour, `include_labels`, `exclude_labels` and `include_annotations` can be defined. Those settings are useful when storing labels and annotations that require special handling to avoid overloading the storage output. Note: wildcards are not supported for those settings. The enrichment of `node` or `namespace` metadata can be individually disabled by setting `enabled: false`. + * `deployment`: If the resource is `pod` and it is created from a `deployment`, the deployment name is added by default; this can be disabled by setting `deployment: false`. + * `cronjob`: If the resource is `pod` and it is created from a `cronjob`, the cronjob name is added by default; this can be disabled by setting `cronjob: false`. + + Example: + + +```yaml + add_resource_metadata: + namespace: + include_labels: ["namespacelabel1"] + #labels.dedot: true + #annotations.dedot: true + node: + include_labels: ["nodelabel2"] + include_annotations: ["nodeannotation1"] + #labels.dedot: true + #annotations.dedot: true + deployment: false + cronjob: false +``` + +`kube_config` +: (Optional) Use the given config file as configuration for the Kubernetes client. It defaults to the `KUBECONFIG` environment variable if present. + +`use_kubeadm` +: (Optional) Default true. By default, requests to the kubeadm config map are made in order to enrich the cluster name by requesting the /api/v1/namespaces/kube-system/configmaps/kubeadm-config API endpoint. + +`kube_client_options` +: (Optional) Additional options can be configured for the Kubernetes client. 
Currently, client QPS and burst are supported; if not set, the Kubernetes client’s [default QPS and burst](https://pkg.go.dev/k8s.io/client-go/rest#pkg-constants) will be used. Example: + +```yaml + kube_client_options: + qps: 5 + burst: 10 +``` + +`cleanup_timeout` +: (Optional) Specify the time of inactivity before stopping the running configuration for a container. This is `60s` by default. + +`sync_period` +: (Optional) Specify the timeout for listing historical resources. + +`default_indexers.enabled` +: (Optional) Enable or disable default pod indexers when you want to specify your own. + +`default_matchers.enabled` +: (Optional) Enable or disable default pod matchers when you want to specify your own. + +`labels.dedot` +: (Optional) Defaults to `true`. If set to `true`, then `.` in labels will be replaced with `_`. + +`annotations.dedot` +: (Optional) Defaults to `true`. If set to `true`, then `.` in annotations will be replaced with `_`. + + +## Indexers and matchers [kubernetes-indexers-and-matchers] + +## Indexers [_indexers] + +Indexers use pod metadata to create unique identifiers for each one of the pods. + +Available indexers are: + +`container` +: Identifies the pod metadata using the IDs of its containers. + +`ip_port` +: Identifies the pod metadata using combinations of its IP and its exposed ports. When using this indexer, metadata is identified using the IP of the pods, and the combination of `ip:port` for each one of the ports exposed by its containers. + +`pod_name` +: Identifies the pod metadata using its namespace and its name as `namespace/pod_name`. + +`pod_uid` +: Identifies the pod metadata using the UID of the pod. + + +## Matchers [_matchers] + +Matchers are used to construct the lookup keys that match with the identifiers created by indexers. + +### `field_format` [_field_format] + +Looks up pod metadata using a key created with a string format that can include event fields. + +This matcher has an option `format` to define the string format. This string format can contain placeholders for any field in the event. + +For example, the following configuration uses the `ip_port` indexer to identify the pod metadata by combinations of the pod IP and its exposed ports, and uses the destination IP and port in events as match keys: + +```yaml +processors: +- add_kubernetes_metadata: + ... + default_indexers.enabled: false + default_matchers.enabled: false + indexers: + - ip_port: + matchers: + - field_format: + format: '%{[destination.ip]}:%{[destination.port]}' +``` + + +### `fields` [_fields] + +Looks up pod metadata using the value of some specific fields as the key. When multiple fields are defined, the first one included in the event is used. + +This matcher has an option `lookup_fields` to define the fields whose value will be used for lookup. + +For example, the following configuration uses the `ip_port` indexer to identify pods, and defines a matcher that uses the destination IP or the server IP for the lookup, using the first one it finds in the event: + +```yaml +processors: +- add_kubernetes_metadata: + ... + default_indexers.enabled: false + default_matchers.enabled: false + indexers: + - ip_port: + matchers: + - fields: + lookup_fields: ['destination.ip', 'server.ip'] +``` + +It’s also possible to extract the matching key from fields using a regex pattern. The optional `regex_pattern` field can be used to set the pattern. The pattern **must** contain a capture group named `key`, whose value will be used as the matching key. 
+ +For example, the following configuration uses the `container` indexer to identify containers by their ID, and extracts the matching key from the cgroup id field added to system process metrics. This field has the form `cri-containerd-.scope`, so we need a regex pattern to obtain the container ID. + +```yaml +processors: + - add_kubernetes_metadata: + indexers: + - container: + matchers: + - fields: + lookup_fields: ['system.process.cgroup.id'] + regex_pattern: 'cri-containerd-(?P<key>[0-9a-z]+)\.scope' +``` + + + diff --git a/docs/reference/auditbeat/add-labels.md b/docs/reference/auditbeat/add-labels.md new file mode 100644 index 000000000000..54494150390b --- /dev/null +++ b/docs/reference/auditbeat/add-labels.md @@ -0,0 +1,45 @@ +--- +navigation_title: "add_labels" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-labels.html +--- + +# Add labels [add-labels] + + +The `add_labels` processor adds a set of key-value pairs to an event. The processor will flatten nested configuration objects like arrays or dictionaries into a fully qualified name by merging nested names with a `.`. Array entries create numeric names starting with 0. Labels are always stored under the Elastic Common Schema compliant `labels` sub-dictionary. + +`labels` +: dictionaries of labels to be added. + +For example, this configuration: + +```yaml +processors: + - add_labels: + labels: + number: 1 + with.dots: test + nested: + with.dots: nested + array: + - do + - re + - with.field: mi +``` + +Adds these fields to every event: + +```json +{ + "labels": { + "number": 1, + "with.dots": "test", + "nested.with.dots": "nested", + "array.0": "do", + "array.1": "re", + "array.2.with.field": "mi" + } +} +``` + diff --git a/docs/reference/auditbeat/add-locale.md b/docs/reference/auditbeat/add-locale.md new file mode 100644 index 000000000000..a2c61f897003 --- /dev/null +++ b/docs/reference/auditbeat/add-locale.md @@ -0,0 +1,31 @@ +--- +navigation_title: "add_locale" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-locale.html +--- + +# Add the local time zone [add-locale] + + +The `add_locale` processor enriches each event with the machine’s time zone offset from UTC or with the name of the time zone. It supports one configuration option named `format` that controls whether an offset or time zone abbreviation is added to the event. The default format is `offset`. The processor adds an `event.timezone` value to each event. + +The configuration below enables the processor with the default settings. + +```yaml +processors: + - add_locale: ~ +``` + +This configuration enables the processor and configures it to add the time zone abbreviation to events. + +```yaml +processors: + - add_locale: + format: abbreviation +``` + +::::{note} +Please note that `add_locale` differentiates between daylight savings time (DST) and regular time. For example, `CEST` indicates DST and `CET` is regular time. 
+:::: + + diff --git a/docs/reference/auditbeat/add-network-direction.md b/docs/reference/auditbeat/add-network-direction.md new file mode 100644 index 000000000000..64b6a9d4614f --- /dev/null +++ b/docs/reference/auditbeat/add-network-direction.md @@ -0,0 +1,22 @@ +--- +navigation_title: "add_network_direction" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-network-direction.html +--- + +# Add network direction [add-network-direction] + + +The `add_network_direction` processor attempts to compute the perimeter-based network direction given a source and destination IP address and a list of internal networks. The key `internal_networks` can contain either CIDR blocks or a list of special values enumerated in the network section of [Conditions](/reference/auditbeat/defining-processors.md#conditions). + +```yaml +processors: + - add_network_direction: + source: source.ip + destination: destination.ip + target: network.direction + internal_networks: [ private ] +``` + +See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions. + diff --git a/docs/reference/auditbeat/add-nomad-metadata.md b/docs/reference/auditbeat/add-nomad-metadata.md new file mode 100644 index 000000000000..cd1f670065fd --- /dev/null +++ b/docs/reference/auditbeat/add-nomad-metadata.md @@ -0,0 +1,137 @@ +--- +navigation_title: "add_nomad_metadata" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-nomad-metadata.html +--- + +# Add Nomad metadata [add-nomad-metadata] + + +::::{warning} +This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. +:::: + + +The `add_nomad_metadata` processor adds fields with relevant metadata for applications deployed in Nomad. + +Each event is annotated with the following information: + +* Allocation name, identifier and status. +* Job name and type. +* Namespace where the job is deployed. +* Datacenter and region where the agent running the allocation is located. + +```yaml +processors: + - add_nomad_metadata: ~ +``` + +It has the following settings to configure the connection: + +`address` +: (Optional) The URL of the agent API used to request the metadata. It uses `http://127.0.0.1:4646` by default. + +`namespace` +: (Optional) Namespace to watch. If set, only events for allocations in this namespace will be annotated. + +`region` +: (Optional) Region to watch. If set, only events for allocations in this region will be annotated. + +`secret_id` +: (Optional) SecretID to use when connecting with the agent API. The following is an example ACL policy to apply to the token. + +```json +namespace "*" { + policy = "read" +} +node { + policy = "read" +} +agent { + policy = "read" +} +``` + +`refresh_interval` +: (Optional) Interval used to update the cached metadata. It defaults to 30 seconds. + +`cleanup_timeout` +: (Optional) After an allocation has been removed, the time to wait before cleaning up its associated resources. This is useful if you expect to receive events after an allocation has been removed, which can happen when collecting logs. It defaults to 60 seconds. + +You can decide if Auditbeat should annotate events related to allocations in the local node or in the whole cluster by configuring the scope with the following settings: + +`scope` +: (Optional) Scope of the resources to watch. 
It can be `node` to get metadata only for the allocations in a single agent, or `global`, to get metadata for allocations running on any agent. It defaults to `node`. + +`node` +: (Optional) When using `scope: node`, use `node` to specify the name of the local node if it cannot be discovered automatically. + +For example, the following configuration could be used if Auditbeat is collecting events from all the allocations in the cluster: + +```yaml +processors: + - add_nomad_metadata: + scope: global +``` + +## Indexers and matchers [_indexers_and_matchers] + +Indexers and matchers are used to correlate fields in events with actual metadata. Auditbeat uses this information to know what metadata to include in each event. + +### Indexers [_indexers_2] + +Indexers use allocation metadata to create unique identifiers for each one of the allocations. + +Available indexers are: + +`allocation_name` +: Identifies allocations by their name and namespace (as `/`). + +`allocation_uuid` +: Identifies allocations by their unique identifier. + + +### Matchers [_matchers_2] + +Matchers are used to construct the lookup keys that match with the identifiers created by indexers. + + +### `field_format` [_field_format_2] + +Looks up allocation metadata using a key created with a string format that can include event fields. + +This matcher has an option `format` to define the string format. This string format can contain placeholders for any field in the event. + +For example, the following configuration uses the `allocation_name` indexer to identify the allocation metadata by its name and namespace, and uses custom fields existing in the event as match keys: + +```yaml +processors: +- add_nomad_metadata: + ... + default_indexers.enabled: false + default_matchers.enabled: false + indexers: + - allocation_name: + matchers: + - field_format: + format: '%{[labels.nomad_namespace]}/%{[fields.nomad_alloc_name]}' +``` + + +### `fields` [_fields_2] + +Looks up allocation metadata using the value of some specific fields as the key. When multiple fields are defined, the first one included in the event is used. + +This matcher has an option `lookup_fields` to define the fields whose value will be used for lookup. + +For example, the following configuration uses the `allocation_uuid` indexer to identify allocations, and defines a matcher that uses some fields where the allocation UUID can be found for lookup, using the first one it finds in the event: + +```yaml +processors: +- add_nomad_metadata: + ... + default_indexers.enabled: false + default_matchers.enabled: false + indexers: + - allocation_uuid: + matchers: + - fields: + lookup_fields: ['host.name', 'fields.nomad_alloc_uuid'] +``` + + + diff --git a/docs/reference/auditbeat/add-observer-metadata.md b/docs/reference/auditbeat/add-observer-metadata.md new file mode 100644 index 000000000000..68ea963b0359 --- /dev/null +++ b/docs/reference/auditbeat/add-observer-metadata.md @@ -0,0 +1,88 @@ +--- +navigation_title: "add_observer_metadata" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-observer-metadata.html +--- + +# Add Observer metadata [add-observer-metadata] + + +::::{warning} +This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. 
+:::: + + +```yaml +processors: + - add_observer_metadata: + cache.ttl: 5m + geo: + name: nyc-dc1-rack1 + location: 40.7128, -74.0060 + continent_name: North America + country_iso_code: US + region_name: New York + region_iso_code: NY + city_name: New York +``` + +It has the following settings: + +`netinfo.enabled` +: (Optional) Default true. Include IP addresses and MAC addresses as the fields `observer.ip` and `observer.mac`. + +`cache.ttl` +: (Optional) The processor uses an internal cache for the observer metadata. This sets the cache expiration time. The default is `5m`; negative values disable caching altogether. + +`geo.name` +: (Optional) User definable token to be used for identifying a discrete location. Frequently a datacenter, rack, or similar. + +`geo.location` +: (Optional) Latitude and longitude in comma-separated format. + +`geo.continent_name` +: (Optional) Name of the continent. + +`geo.country_name` +: (Optional) Name of the country. + +`geo.region_name` +: (Optional) Name of the region. + +`geo.city_name` +: (Optional) Name of the city. + +`geo.country_iso_code` +: (Optional) ISO country code. + +`geo.region_iso_code` +: (Optional) ISO region code. + +The `add_observer_metadata` processor annotates each event with relevant metadata from the observer machine. The fields added to the event look like the following: + +```json +{ + "observer" : { + "hostname" : "avce", + "type" : "heartbeat", + "vendor" : "elastic", + "ip" : [ + "192.168.1.251", + "fe80::64b2:c3ff:fe5b:b974" + ], + "mac" : [ + "dc:c1:02:6f:1b:ed" + ], + "geo": { + "continent_name": "North America", + "country_iso_code": "US", + "region_name": "New York", + "region_iso_code": "NY", + "city_name": "New York", + "name": "nyc-dc1-rack1", + "location": "40.7128, -74.0060" + } + } +} +``` + diff --git a/docs/reference/auditbeat/add-process-metadata.md b/docs/reference/auditbeat/add-process-metadata.md new file mode 100644 index 000000000000..c79193bfe66c --- /dev/null +++ b/docs/reference/auditbeat/add-process-metadata.md @@ -0,0 +1,94 @@ +--- +navigation_title: "add_process_metadata" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-process-metadata.html +--- + +# Add process metadata [add-process-metadata] + + +The `add_process_metadata` processor enriches events with information from running processes, identified by their process ID (PID). + +```yaml +processors: + - add_process_metadata: + match_pids: + - process.pid +``` + +The fields added to the event look as follows: + +```json +{ + "container": { + "id": "b5285682fba7449c86452b89a800609440ecc88a7ba5f2d38bedfb85409b30b1" + }, + "process": { + "args": [ + "/usr/lib/systemd/systemd", + "--switched-root", + "--system", + "--deserialize", + "22" + ], + "executable": "/usr/lib/systemd/systemd", + "name": "systemd", + "owner": { + "id": "0", + "name": "root" + }, + "parent": { + "pid": 0 + }, + "pid": 1, + "start_time": "2018-08-22T08:44:50.684Z", + "title": "/usr/lib/systemd/systemd --switched-root --system --deserialize 22" + } +} +``` + +Optionally, the process environment can be included, too: + +```json + ... + "env": { + "HOME": "/", + "TERM": "linux", + "BOOT_IMAGE": "/boot/vmlinuz-4.11.8-300.fc26.x86_64", + "LANG": "en_US.UTF-8", + } + ... +``` + +It has the following settings: + +`match_pids` +: List of fields to look up for a PID. The processor will search the list sequentially until the field is found in the current event, and the PID lookup will be applied to the value of this field. 
+ +`target` +: (Optional) Destination prefix where the `process` object will be created. The default is the event’s root. + +`include_fields` +: (Optional) List of fields to add. By default, the processor will add all the available fields except `process.env`. + +`ignore_missing` +: (Optional) When set to `false`, events that don’t contain any of the fields in `match_pids` will be discarded and an error will be generated. By default, this condition is ignored. + +`overwrite_keys` +: (Optional) By default, if a target field already exists, it will not be overwritten, and an error will be logged. If `overwrite_keys` is set to `true`, this condition will be ignored. + +`restricted_fields` +: (Optional) By default, the `process.env` field is not output, to avoid leaking sensitive data. If `restricted_fields` is `true`, the field will be present in the output. + +`host_path` +: (Optional) By default, the `host_path` field is set to the root directory of the host `/`. This is the path where `/proc` is mounted. For different runtime configurations of Kubernetes or Docker, the `host_path` can be set to overwrite the default. + +`cgroup_prefixes` +: (Optional) List of prefixes that will be matched against cgroup paths. When a cgroup path begins with a prefix in the list, then the last element of the path is returned as the container ID. Only one of `cgroup_prefixes` and `cgroup_regex` should be configured. If neither are configured then a default `cgroup_regex` value is used that matches cgroup paths containing 64-character container IDs (like those from Docker, Kubernetes, and Podman). + +`cgroup_regex` +: (Optional) A regular expression that will be matched against cgroup paths. It must contain one capturing group. When a cgroup path matches the regular expression then the value of the capturing group is returned as the container ID. Only one of `cgroup_prefixes` and `cgroup_regex` should be configured. If neither are configured then a default `cgroup_regex` value is used that matches cgroup paths containing 64-character container IDs (like those from Docker, Kubernetes, and Podman). + +`cgroup_cache_expire_time` +: (Optional) By default, the `cgroup_cache_expire_time` is set to 30 seconds. This is the length of time before cgroup cache elements expire, in seconds. It can be set to 0 to disable the cgroup cache. In some container runtime technologies, like runc, the container’s process is also a process in the host kernel and will be affected by PID rollover/reuse. The expire time needs to be set smaller than the PID wrap-around time to avoid wrong container IDs. + diff --git a/docs/reference/auditbeat/add-session-metadata.md b/docs/reference/auditbeat/add-session-metadata.md new file mode 100644 index 000000000000..771521fcc5b7 --- /dev/null +++ b/docs/reference/auditbeat/add-session-metadata.md @@ -0,0 +1,89 @@ +--- +navigation_title: "add_session_metadata" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-session-metadata.html +--- + +# Add session metadata [add-session-metadata] + + +The `add_session_metadata` processor enriches process events with additional information that users can see using the [Session View](docs-content://solutions/security/investigate/session-view.md) tool in the {{elastic-sec}} platform. + +::::{note} +The current release of the `add_session_metadata` processor for {{auditbeat}} is limited to virtual machines (VMs) and bare metal environments. 
+:::: + + +Here’s an example using the `add_session_metadata` processor to enhance process events generated by the `auditd` module of {{auditbeat}}. + +```yaml +auditbeat.modules: +- module: auditd + processors: + - add_session_metadata: + backend: "auto" +``` + +## How the `add_session_metadata` processor works [add-session-metadata-explained] + +Using the available Linux kernel technology, the processor collects comprehensive information on all running system processes, compiling this data into a process database. When processing an event (such as those generated by the {{auditbeat}} `auditd` module), the processor queries this database to retrieve information about related processes, including the parent process, session leader, process group leader, and entry leader. It then enriches the original event with this metadata, providing a more complete picture of process relationships and system activities. + +This enhanced data enables the powerful [Session View](docs-content://solutions/security/investigate/session-view.md) tool in the {{elastic-sec}} platform, offering users deeper insights for analysis and investigation. + +### Backends [add-session-metadata-backends] + +The `add_session_metadata` processor operates using various backend options. + +* `auto` is the recommended setting. It attempts to use `kernel_tracing` first, falling back to `procfs` if necessary, ensuring compatibility even on systems without `kernel_tracing` support. +* `kernel_tracing` gathers information about processes using either eBPF or kprobes. It will use eBPF if available, but if not, it will fall back to kprobes. eBPF requires a system with kernel support for eBPF enabled, support for the eBPF ring buffer, and auditbeat running as a superuser. Kprobe support requires Linux kernel 3.10.0 or above, and auditbeat running as a superuser. +* `procfs` collects process information with the proc filesystem. This is compatible with older systems that may not support eBPF. To gather complete process info, auditbeat requires permissions to read all process data in procfs; for example, run as a superuser or have the `SYS_PTRACE` capability. + + +### Containers [add-session-metadata-containers] + +If you are running {{auditbeat}} in a container, the container must run in the host’s PID namespace. With the `auto` or `kernel_tracing` backend, these host directories must also be mounted to the same path within the container: `/sys/kernel/debug`, `/sys/fs/bpf`. + + + +## Enable and configure Session View in {{auditbeat}} [add-session-metadata-enable] + +To configure and enable [Session View](docs-content://solutions/security/investigate/session-view.md) functionality, you’ll: + +* Add the `add_session_metadata` processor to your `auditbeat.yml` file. +* Configure audit rules in your `auditbeat.yml` file. +* Restart {{auditbeat}}. + +We’ll walk you through these steps in more detail. + +1. Edit your `auditbeat.yml` file and add this info to the modules configuration section: + + ```yaml + auditbeat.modules: + - module: auditd + processors: + - add_session_metadata: + backend: "auto" + ``` + +2. Add audit rules in the modules configuration section of `auditbeat.yml` or the `audit.rules.d` config file, depending on your configuration: + + ```yaml + auditbeat.modules: + - module: auditd + audit_rules: | + ## executions + -a always,exit -F arch=b64 -S execve,execveat -k exec + -a always,exit -F arch=b64 -S exit_group + ## set_sid + -a always,exit -F arch=b64 -S setsid + ``` + +3. Save your configuration changes. +4. 
Restart {{auditbeat}}: + + ```sh + sudo systemctl restart auditbeat + ``` + + + diff --git a/docs/reference/auditbeat/add-tags.md b/docs/reference/auditbeat/add-tags.md new file mode 100644 index 000000000000..91b45734da0b --- /dev/null +++ b/docs/reference/auditbeat/add-tags.md @@ -0,0 +1,34 @@ +--- +navigation_title: "add_tags" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/add-tags.html +--- + +# Add tags [add-tags] + + +The `add_tags` processor adds tags to a list of tags. If the target field already exists, the tags are appended to the existing list of tags. + +`tags` +: List of tags to add. + +`target` +: (Optional) Field the tags will be added to. Defaults to `tags`. Setting tags in `@metadata` is not supported. + +For example, this configuration: + +```yaml +processors: + - add_tags: + tags: [web, production] + target: "environment" +``` + +Adds the `environment` field to every event: + +```json +{ + "environment": ["web", "production"] +} +``` + diff --git a/docs/reference/auditbeat/append.md b/docs/reference/auditbeat/append.md new file mode 100644 index 000000000000..8ef7a1c1f7f2 --- /dev/null +++ b/docs/reference/auditbeat/append.md @@ -0,0 +1,73 @@ +--- +navigation_title: "append" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/append.html +--- + +# Append Processor [append] + + +The `append` processor appends one or more values to an existing array if the target field already exists and it is an array. It converts a scalar to an array and appends one or more values to it if the field exists and it is a scalar. The values can either be one or more static values or one or more values from the fields listed under the *fields* key. + +`target_field` +: The field in which you want to append the data. + +`fields` +: (Optional) List of fields from which you want to copy data. If the value is of a concrete type it will be appended directly to the target. However, if the value is an array, all the elements of the array are pushed individually to the target field. + +`values` +: (Optional) List of static values you want to append to the target field. + +`ignore_empty_values` +: (Optional) If set to `true`, all the `""` and `nil` values are omitted from being appended to the target field. + +`fail_on_error` +: (Optional) If set to `true` and an error occurs, the changes are reverted and the original event is returned. If set to `false`, processing continues if an error occurs. Default is `true`. + +`allow_duplicate` +: (Optional) If set to `false`, the processor does not append values already present in the field. The default is `true`, which will append duplicate values in the array. + +`ignore_missing` +: (Optional) Indicates whether to ignore events that lack the source field. The default is `false`, which will fail processing of an event if a field is missing. + +Note: If you want to use the `fields` parameter with fields under `message`, make sure you use `decode_json_fields` first with `target: ""`. 
+ +For example, this configuration: + +```yaml +processors: + - decode_json_fields: + fields: message + target: "" + - append: + target_field: target-field + fields: + - concrete.field + - array.one + values: + - static-value + - "" + ignore_missing: true + fail_on_error: true + ignore_empty_values: true +``` + +This copies the values of the `concrete.field` and `array.one` fields, plus the static values, to `target-field`: + +```json +{ + "concrete": { + "field": "val0" + }, + "array": { + "one": [ "val1", "val2" ] + }, + "target-field": [ + "val0", + "val1", + "val2", + "static-value" + ] +} +``` + diff --git a/docs/reference/auditbeat/auditbeat-configuration-reloading.md b/docs/reference/auditbeat/auditbeat-configuration-reloading.md new file mode 100644 index 000000000000..4b7af2a8d589 --- /dev/null +++ b/docs/reference/auditbeat/auditbeat-configuration-reloading.md @@ -0,0 +1,50 @@ +--- +navigation_title: "Config file reloading" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-configuration-reloading.html +--- + +# Reload the configuration dynamically [auditbeat-configuration-reloading] + + +::::{warning} +This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. +:::: + + +You can configure Auditbeat to dynamically reload configuration files when there are changes. To do this, you specify a path ([glob](https://golang.org/pkg/path/filepath/#Glob)) to watch for module configuration changes. When the files found by the glob change, new modules are started/stopped according to changes in the configuration files. + +To enable dynamic config reloading, you specify the `path` and `reload` options in the main `auditbeat.yml` config file. For example: + +```yaml +auditbeat.config.modules: + path: ${path.config}/conf.d/*.yml + reload.enabled: true + reload.period: 10s +``` + +**`path`** +: A glob that defines the files to check for changes. + +**`reload.enabled`** +: When set to `true`, enables dynamic config reload. + +**`reload.period`** +: Specifies how often the files are checked for changes. Do not set the `period` to less than 1s because the modification time of files is often stored in seconds. Setting the `period` to less than 1s will result in unnecessary overhead. + +Each file found by the glob must contain a list of one or more module definitions. For example: + +```yaml +- module: file_integrity + paths: + - /www/wordpress + - /www/wordpress/wp-admin + - /www/wordpress/wp-content + - /www/wordpress/wp-includes +``` + +::::{note} +On systems with POSIX file permissions, all Beats configuration files are subject to ownership and file permission checks. If you encounter config loading errors related to file ownership, see {{beats-ref}}/config-file-permissions.html. +:::: + + diff --git a/docs/reference/auditbeat/auditbeat-dataset-system-host.md b/docs/reference/auditbeat/auditbeat-dataset-system-host.md new file mode 100644 index 000000000000..d1abb1df5020 --- /dev/null +++ b/docs/reference/auditbeat/auditbeat-dataset-system-host.md @@ -0,0 +1,91 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-dataset-system-host.html +--- + +# System host dataset [auditbeat-dataset-system-host] + +::::{warning} +This functionality is in beta and is subject to change. 
The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. +:::: + + +This is the `host` dataset of the system module. + +It is implemented for Linux, macOS (Darwin), and Windows. + + +### Example dashboard [_example_dashboard_2] + +This dataset comes with a sample dashboard: + +:::{image} images/auditbeat-system-host-dashboard.png +:alt: Auditbeat System Host Dashboard +:class: screenshot +::: + +## Fields [_fields_3] + +For a description of each field in the dataset, see the [exported fields](/reference/auditbeat/exported-fields-system.md) section. + +Here is an example document generated by this dataset: + +```json +{ + "@timestamp": "2017-10-12T08:05:34.853Z", + "agent": { + "hostname": "host.example.com", + "name": "host.example.com" + }, + "event": { + "action": "host", + "dataset": "host", + "module": "system", + "kind": "state" + }, + "message": "Ubuntu host ubuntu-bionic (IP: 10.0.2.15) is up for 0 days, 5 hours, 11 minutes", + "service": { + "type": "system" + }, + "system": { + "audit": { + "host": { + "architecture": "x86_64", + "boottime": "2018-12-10T15:48:44Z", + "containerized": false, + "hostname": "ubuntu-bionic", + "id": "6f7be6fb33e6c77f057266415c094408", + "ip": [ + "10.0.2.15", + "fe80::2d:fdff:fe81:e747", + "172.28.128.3", + "fe80::a00:27ff:fe1f:7160", + "172.17.0.1", + "fe80::42:83ff:febe:1a3a", + "172.18.0.1", + "fe80::42:9eff:fed3:d888" + ], + "mac": [ + "02-2D-FD-81-E7-47", + "08-00-27-1F-71-60", + "02-42-83-BE-1A-3A", + "02-42-9E-D3-D8-88" + ], + "os": { + "family": "debian", + "kernel": "4.15.0-42-generic", + "name": "Ubuntu", + "platform": "ubuntu", + "version": "18.04.1 LTS (Bionic Beaver)" + }, + "timezone.name": "UTC", + "timezone.offset.sec": 0, + "type": "linux", + "uptime": 18661357350265 + } + } + } +} +``` + + diff --git a/docs/reference/auditbeat/auditbeat-dataset-system-login.md b/docs/reference/auditbeat/auditbeat-dataset-system-login.md new file mode 100644 index 000000000000..e786ccd3fb00 --- /dev/null +++ b/docs/reference/auditbeat/auditbeat-dataset-system-login.md @@ -0,0 +1,73 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-dataset-system-login.html +--- + +# System login dataset [auditbeat-dataset-system-login] + +::::{warning} +This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. +:::: + + +This is the `login` dataset of the system module. + + +## Implementation [_implementation] + +The `login` dataset is implemented for Linux only. + +On Linux, the dataset reads the [utmp](https://en.wikipedia.org/wiki/Utmp) files that keep track of logins and logouts to the system. They are usually located at `/var/log/wtmp` (successful logins) and `/var/log/btmp` (failed logins). + +The file patterns used to locate the files can be configured using `login.wtmp_file_pattern` and `login.btmp_file_pattern`. By default, both the current files and any rotated files (e.g. `wtmp.1`, `wtmp.2`) are read. + +utmp files are binary, but you can display their contents using the `utmpdump` utility. 
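+
+For example, here is a minimal sketch of a `system` module configuration that enables only the `login` dataset and sets both file patterns explicitly. The patterns shown are assumed to mirror the default locations; adjust them if your distribution stores or rotates the utmp files elsewhere:
+
+```yaml
+auditbeat.modules:
+- module: system
+  datasets:
+    - login
+  period: 10s
+  # Globs so that rotated files (wtmp.1, btmp.1, ...) are read too.
+  login.wtmp_file_pattern: '/var/log/wtmp*'
+  login.btmp_file_pattern: '/var/log/btmp*'
+```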
+ + +### Example dashboard [_example_dashboard_3] + +The dataset comes with a sample dashboard: + +:::{image} images/auditbeat-system-login-dashboard.png +:alt: Auditbeat System Login Dashboard +:class: screenshot +::: + +## Fields [_fields_4] + +For a description of each field in the dataset, see the [exported fields](/reference/auditbeat/exported-fields-system.md) section. + +Here is an example document generated by this dataset: + +```json +{ + "@timestamp": "2017-10-12T08:05:34.853Z", + "event": { + "action": "user_login", + "category": "authentication", + "dataset": "login", + "kind": "event", + "module": "system", + "origin": "/var/log/wtmp", + "outcome": "success", + "type": "authentication_success" + }, + "message": "Login by user vagrant (UID: 1000) on pts/2 (PID: 14962) from 10.0.2.2 (IP: 10.0.2.2)", + "process": { + "pid": 14962 + }, + "service": { + "type": "system" + }, + "source": { + "ip": "10.0.2.2" + }, + "user": { + "id": 1000, + "name": "vagrant", + "terminal": "pts/2" + } +} +``` + + diff --git a/docs/reference/auditbeat/auditbeat-dataset-system-package.md b/docs/reference/auditbeat/auditbeat-dataset-system-package.md new file mode 100644 index 000000000000..1b4ed1d06266 --- /dev/null +++ b/docs/reference/auditbeat/auditbeat-dataset-system-package.md @@ -0,0 +1,71 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-dataset-system-package.html +--- + +# System package dataset [auditbeat-dataset-system-package] + +This is the `package` dataset of the system module. + +It is implemented for Linux distributions using dpkg or rpm as their package manager, and for Homebrew on macOS (Darwin). + + +### Example dashboard [_example_dashboard_4] + +The dataset comes with a sample dashboard: + +:::{image} images/auditbeat-system-package-dashboard.png +:alt: Auditbeat System Package Dashboard +:class: screenshot +::: + +## Fields [_fields_5] + +For a description of each field in the dataset, see the [exported fields](/reference/auditbeat/exported-fields-system.md) section. 
+
+Here is an example document generated by this dataset:
+
+```json
+{
+  "@timestamp": "2017-10-12T08:05:34.853Z",
+  "event": {
+    "action": "existing_package",
+    "category": [
+      "package"
+    ],
+    "dataset": "package",
+    "id": "6bed65c5-9797-4fb7-9ec7-2d1873c54371",
+    "kind": "state",
+    "module": "system",
+    "type": [
+      "info"
+    ]
+  },
+  "message": "Package zstd (1.5.4) is already installed",
+  "package": {
+    "description": "Zstandard is a real-time compression algorithm",
+    "installed": "2023-02-15T20:40:24.390086982-05:00",
+    "name": "zstd",
+    "reference": "https://facebook.github.io/zstd/",
+    "type": "brew",
+    "version": "1.5.4"
+  },
+  "service": {
+    "type": "system"
+  },
+  "system": {
+    "audit": {
+      "package": {
+        "entity_id": "SxYD3ZMh/Ym0lBIk",
+        "installtime": "2023-02-15T20:40:24.390086982-05:00",
+        "name": "zstd",
+        "summary": "Zstandard is a real-time compression algorithm",
+        "url": "https://facebook.github.io/zstd/",
+        "version": "1.5.4"
+      }
+    }
+  }
+}
+```
+
+
diff --git a/docs/reference/auditbeat/auditbeat-dataset-system-process.md b/docs/reference/auditbeat/auditbeat-dataset-system-process.md
new file mode 100644
index 000000000000..265851b885b0
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-dataset-system-process.md
@@ -0,0 +1,96 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-dataset-system-process.html
+---
+
+# System process dataset [auditbeat-dataset-system-process]
+
+::::{warning}
+This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+::::
+
+
+This is the `process` dataset of the system module. It generates an event when a process starts or stops.
+
+It is implemented for Linux, macOS (Darwin), and Windows.
+
+
+## Configuration options [_configuration_options_20]
+
+**`process.state.period`**
+: The interval at which the dataset sends full state information. If set, this will take precedence over `state.period`. The default value is `12h`.
+
+**`process.hash.max_file_size`**
+: The maximum size of a file in bytes for which Auditbeat will compute hashes. Files larger than this size will not be hashed. The default value is 100 MiB. For convenience, units can be specified as a suffix to the value. The supported units are `b` (default), `kib`, `kb`, `mib`, `mb`, `gib`, `gb`, `tib`, `tb`, `pib`, `pb`, `eib`, and `eb`.
+
+**`process.hash.hash_types`**
+: A list of hash types to compute when the file changes. The supported hash types are `blake2b_256`, `blake2b_384`, `blake2b_512`, `md5`, `sha1`, `sha224`, `sha256`, `sha384`, `sha512`, `sha512_224`, `sha512_256`, `sha3_224`, `sha3_256`, `sha3_384`, `sha3_512`, and `xxh64`. The default value is `sha1`.
+
+
+### Example dashboard [_example_dashboard_5]
+
+The dataset comes with a sample dashboard:
+
+:::{image} images/auditbeat-system-process-dashboard.png
+:alt: Auditbeat System Process Dashboard
+:class: screenshot
+:::
+
+## Fields [_fields_6]
+
+For a description of each field in the dataset, see the [exported fields](/reference/auditbeat/exported-fields-system.md) section.
+
+Here is an example document generated by this dataset:
+
+```json
+{
+  "@timestamp": "2017-10-12T08:05:34.853Z",
+  "event": {
+    "action": "process_stopped",
+    "dataset": "process",
+    "kind": "event",
+    "module": "system"
+  },
+  "message": "Process zsh (PID: 9086) by user elastic STOPPED",
+  "process": {
+    "args": [
+      "zsh"
+    ],
+    "entity_id": "+fYshazplsMYlr0y",
+    "executable": "/bin/zsh",
+    "hash": {
+      "sha1": "33646536613061316366353134643135613631643363383733653261373130393737633131303364"
+    },
+    "name": "zsh",
+    "pid": 9086,
+    "ppid": 9085,
+    "start": "2019-01-01T00:00:01Z",
+    "working_directory": "/home/elastic"
+  },
+  "service": {
+    "type": "system"
+  },
+  "user": {
+    "effective": {
+      "group": {
+        "id": "1000"
+      },
+      "id": "1000"
+    },
+    "group": {
+      "id": "1000",
+      "name": "elastic"
+    },
+    "id": "1000",
+    "name": "elastic",
+    "saved": {
+      "group": {
+        "id": "1000"
+      },
+      "id": "1000"
+    }
+  }
+}
+```
+
+
diff --git a/docs/reference/auditbeat/auditbeat-dataset-system-socket.md b/docs/reference/auditbeat/auditbeat-dataset-system-socket.md
new file mode 100644
index 000000000000..8bb4566bcf2e
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-dataset-system-socket.md
@@ -0,0 +1,267 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-dataset-system-socket.html
+---
+
+# System socket dataset [auditbeat-dataset-system-socket]
+
+::::{warning}
+This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+::::
+
+
+This is the `socket` dataset of the system module. It monitors network traffic to and from running processes. Its main features are:
+
+* Supports TCP and UDP sockets over IPv4 and IPv6.
+* Outputs per-flow byte and packet counters.
+* Enriches the flows with [process](ecs://reference/ecs-process.md) and [user](ecs://reference/ecs-user.md) information.
+* Provides information similar to Packetbeat’s flow monitoring with reduced CPU and memory usage.
+* Works on stock kernels without the need for custom modules, external libraries, or development headers.
+* Correlates IP addresses with DNS requests.
+
+This dataset does not analyze application-layer protocols or provide the other advanced features present in Packetbeat. In particular, it does not:
+
+* Monitor network traffic whose destination is not a local process, as is the case with traffic forwarding.
+* Monitor layer 2 traffic, ICMP, or raw sockets.
+
+
+## Implementation [_implementation_2]
+
+It is implemented for Linux only and currently supports x86 (32 and 64 bit) architectures.
+
+The dataset uses [KProbe-based event tracing](https://www.kernel.org/doc/Documentation/trace/kprobetrace.txt) to monitor TCP and UDP sockets over IPv4 and IPv6, providing flow monitoring that includes byte and packet counters, as well as the local process and user involved in the flow. It does so by plugging into the TCP/IP stack to generate custom tracing events, avoiding the need to copy network traffic to user space.
+
+By not relying on periodic polling, this approach enables the dataset to perform near real-time monitoring of the system without the risk of missing short-lived connections or processes.
+
+
+## Requirements [_requirements]
+
+Features used by the `socket` dataset require a minimum Linux kernel version of 3.12 (vanilla). However, some distributions have backported those features to older kernels. The following (non-exhaustive) table lists distributions under which the dataset is known to work:
+
+| Distribution | kernel version | Works? |
+| --- | --- | --- |
+| CentOS 6.5 | 2.6.32-431.el6 | NO[[1]](#anchor-1) |
+| CentOS 6.9 | 2.6.32-696.30.1.el6 | ✓ |
+| CentOS 7.6 | 3.10.0-957.1.3.el7 | ✓ |
+| RHEL 8 | 4.18.0-80.rhel8 | ✓ |
+| Debian 8 | 3.16.0-6 | ✓ |
+| Debian 9 | 4.9.0-8 | ✓ |
+| Debian 10 | 4.19.0-5 | ✓ |
+| SLES 12 | 4.4.73-5 | ✓ |
+| Ubuntu 12.04 | 3.2.0-126 | NO[[1]](#anchor-1) |
+| Ubuntu 14.04.6 | 3.13.0-170 | ✓ |
+| Ubuntu 16.04.3 | 4.4.0-97 | ✓ |
+| AWS Linux 2 | 4.14.138-114.102 | ✓ |
+
+$$$anchor-1$$$
+[[1]](#anchor-1): These systems lack the [PERF_EVENT_IOC_ID ioctl](https://lore.kernel.org/patchwork/patch/399251/). Support might be added in a future release.
+
+The dataset needs the `CAP_SYS_ADMIN` and `CAP_NET_ADMIN` capabilities in order to work.
+
+
+### Kernel configuration [_kernel_configuration]
+
+A kernel built with the following configuration options enabled is required:
+
+* `CONFIG_KPROBE_EVENTS`: Enables the KProbes subsystem.
+* `CONFIG_DEBUG_FS`: For kernels lacking `tracefs` support (<4.1).
+* `CONFIG_IPV6`: IPv6 support in the kernel is needed even if disabled with `socket.enable_ipv6: false`.
+
+These settings are enabled by default in most distributions.
+
+The following configuration settings can prevent the dataset from starting:
+
+* `/sys/kernel/debug/kprobes/enabled` must be 1.
+* `/proc/sys/net/ipv6/conf/lo/disable_ipv6` (IPv6 enabled in loopback device) is required when running with IPv6 enabled.
+
+
+### Running on Docker [_running_on_docker]
+
+The dataset can monitor the Docker host when running inside a container. However, it needs to run in a `privileged` container with the `CAP_NET_ADMIN` capability. The Docker container running Auditbeat needs access to the host’s tracefs or debugfs directory. This is achieved by bind-mounting `/sys`.
+
+
+## Configuration [_configuration_2]
+
+The following options are available for the `socket` dataset:
+
+* `socket.tracefs_path` (default: none)
+
+Must point to the mount-point of `tracefs` or the `tracing` directory inside `debugfs`. If this option is not specified, Auditbeat will look for the default locations: `/sys/kernel/tracing` and `/sys/kernel/debug/tracing`. If not found, it will attempt to mount `tracefs` and `debugfs` at their default locations.
+
+* `socket.enable_ipv6` (default: unset)
+
+Determines whether IPv6 is monitored. When unset (default), IPv6 support is automatically detected. Even when IPv6 is disabled, in order to run the dataset you still need a kernel with IPv6 support (the `ipv6` module must be loaded if compiled as a module).
+
+* `socket.flow_inactive_timeout` (default: 30s)
+
+Determines how long a flow has to be inactive to be considered closed.
+
+* `socket.flow_termination_timeout` (default: 5s)
+
+Determines how long to wait after a socket has been closed for out-of-order packets. With TCP, some packets can be received shortly after a socket is closed. If set too low, additional flows will be generated for those packets.
+
+* `socket.socket_inactive_timeout` (default: 1m)
+
+How long a socket can remain inactive before it is evicted from the internal cache. A lower value reduces memory usage at the expense of some flows being reported as multiple partial flows.
+
+* `socket.perf_queue_size` (default: 4096)
+
+The number of tracing samples that can be queued for processing. A larger value uses more memory but reduces the chances of samples being lost when the system is under heavy load.
+
+* `socket.lost_queue_size` (default: 128)
+
+The number of notifications of lost samples that can be queued.
+
+* `socket.ring_size_exponent` (default: 7)
+
+Controls the number of memory pages allocated for the per-CPU ring-buffer used to receive samples from the kernel. The actual amount of memory used is `Number_of_CPUs × Page_Size (4 KiB) × 2^ring_size_exponent`. That is 0.5 MiB of RAM per CPU with the default value.
+
+* `socket.clock_max_drift` (default: 100ms)
+
+Defines the maximum difference between the kernel’s internal clock and the reference time used to timestamp events.
+
+* `socket.clock_sync_period` (default: 10s)
+
+Controls how often clock synchronization events are generated to measure drift between the kernel clock and the dataset’s reference clock.
+
+* `socket.guess_timeout` (default: 15s)
+
+The maximum time an individual guess is allowed to run.
+
+* `socket.dns.enabled` (default: true)
+
+Whether DNS traffic is monitored to enrich network flows with DNS information.
+
+* `socket.dns.type` (default: af_packet)
+
+The method used to monitor DNS traffic. Currently, only `af_packet` is supported.
+
+* `socket.dns.af_packet.interface` (default: any)
+
+The network interface where DNS will be monitored.
+
+* `socket.dns.af_packet.snaplen` (default: 1024)
+
+Maximum number of bytes to copy for each captured packet.
+
+## Fields [_fields_7]
+
+For a description of each field in the dataset, see the [exported fields](/reference/auditbeat/exported-fields-system.md) section.
+
+Here is an example document generated by this dataset:
+
+```json
+{
+  "@timestamp":"2019-08-22T20:46:40.173Z",
+  "@metadata":{
+    "beat":"auditbeat",
+    "type":"_doc",
+    "version":"7.4.0"
+  },
+  "server":{
+    "ip":"151.101.66.217",
+    "port":80,
+    "packets":5,
+    "bytes":437
+  },
+  "user":{
+    "name":"vagrant",
+    "id":"1000"
+  },
+  "network":{
+    "packets":10,
+    "bytes":731,
+    "community_id":"1:jdjL1TkdpF1v1GM0+JxRRp+V7KI=",
+    "direction":"outbound",
+    "type":"ipv4",
+    "transport":"tcp"
+  },
+  "group":{
+    "id":"1000",
+    "name":"vagrant"
+  },
+  "client":{
+    "ip":"10.0.2.15",
+    "port":40192,
+    "packets":5,
+    "bytes":294
+  },
+  "event":{
+    "duration":30728600,
+    "module":"system",
+    "dataset":"socket",
+    "kind":"event",
+    "action":"network_flow",
+    "category":"network",
+    "start":"2019-08-22T20:46:35.001Z",
+    "end":"2019-08-22T20:46:35.032Z"
+  },
+  "ecs":{
+    "version":"1.0.1"
+  },
+  "host":{
+    "name":"stretch",
+    "containerized":false,
+    "hostname":"stretch",
+    "architecture":"x86_64",
+    "os":{
+      "name":"Debian GNU/Linux",
+      "kernel":"4.9.0-8-amd64",
+      "codename":"stretch",
+      "platform":"debian",
+      "version":"9 (stretch)",
+      "family":"debian"
+    },
+    "id":"b3531219b5b4449eadbec59d47945649"
+  },
+  "agent":{
+    "version":"7.4.0",
+    "type":"auditbeat",
+    "ephemeral_id":"f7b0ab1a-da9e-4525-9252-59ecb68139f8",
+    "hostname":"stretch",
+    "id":"88862e07-b13a-4166-b1ef-b3e55b4a0cf2"
+  },
+  "process":{
+    "pid":4970,
+    "name":"curl",
+    "args":[
+      "curl",
+      "http://elastic.co/",
+      "-o",
+      "/dev/null"
+    ],
+    "executable":"/usr/bin/curl",
+    "created":"2019-08-22T20:46:34.928Z"
+  },
+  "system":{
+    "audit":{
+      "socket":{
+        "kernel_sock_address":"0xffff8de29d337000",
+        "internal_version":"1.0.3",
+        "uid":1000,
+        "gid":1000,
+        "euid":1000,
+        "egid":1000
+      }
+    }
+  },
+  "destination":{
+    "ip":"151.101.66.217",
+    "port":80,
+    "packets":5,
+    "bytes":437
+  },
+  "source":{
+    "port":40192,
+    "packets":5,
+    "bytes":294,
+    "ip":"10.0.2.15"
+  },
+  "flow":{
+    "final":true,
+    "complete":true
+  },
+  "service":{
+    "type":"system"
+  }
+}
+```
+
+
diff
--git a/docs/reference/auditbeat/auditbeat-dataset-system-user.md b/docs/reference/auditbeat/auditbeat-dataset-system-user.md new file mode 100644 index 000000000000..f509d11d0cdc --- /dev/null +++ b/docs/reference/auditbeat/auditbeat-dataset-system-user.md @@ -0,0 +1,75 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-dataset-system-user.html +--- + +# System user dataset [auditbeat-dataset-system-user] + +::::{warning} +This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. +:::: + + +This is the `user` dataset of the system module. + +It is implemented for Linux only. + + +### Example dashboard [_example_dashboard_6] + +The dataset comes with a sample dashboard: + +:::{image} images/auditbeat-system-user-dashboard.png +:alt: Auditbeat System User Dashboard +:class: screenshot +::: + +## Fields [_fields_8] + +For a description of each field in the dataset, see the [exported fields](/reference/auditbeat/exported-fields-system.md) section. + +Here is an example document generated by this dataset: + +```json +{ + "@timestamp": "2017-10-12T08:05:34.853Z", + "event": { + "action": "user_added", + "dataset": "user", + "kind": "event", + "module": "system" + }, + "message": "New user elastic (UID: 1001, Groups: elastic,docker)", + "service": { + "type": "system" + }, + "system": { + "audit": { + "user": { + "dir": "/home/elastic", + "gid": "1001", + "group": [ + { + "gid": "1001", + "name": "elastic" + }, + { + "gid": "1002", + "name": "docker" + } + ], + "name": "elastic", + "shell": "/bin/bash", + "uid": "1001" + } + } + }, + "user": { + "entity_id": "FgDfgeDptvvfdX+L", + "id": "1001", + "name": "elastic" + } +} +``` + + diff --git a/docs/reference/auditbeat/auditbeat-geoip.md b/docs/reference/auditbeat/auditbeat-geoip.md new file mode 100644 index 000000000000..f9789f920159 --- /dev/null +++ b/docs/reference/auditbeat/auditbeat-geoip.md @@ -0,0 +1,206 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-geoip.html +--- + +# Enrich events with geoIP information [auditbeat-geoip] + +You can use Auditbeat along with the [GeoIP Processor](elasticsearch://reference/ingestion-tools/enrich-processor/geoip-processor.md) in {{es}} to export geographic location information based on IP addresses. Then you can use this information to visualize the location of IP addresses on a map in {{kib}}. + +The `geoip` processor adds information about the geographical location of IP addresses, based on data from the Maxmind GeoLite2 City Database. Because the processor uses a geoIP database that’s installed on {{es}}, you don’t need to install a geoIP database on the machines running Auditbeat. + +::::{note} +If your use case involves using {{ls}}, you can use the [GeoIP filter](logstash://reference/plugins-filters-geoip.md) available in {{ls}} instead of using the `geoip` processor. However, using the `geoip` processor is the simplest approach when you don’t require the additional processing power of {{ls}}. +:::: + + + +## Configure the `geoip` processor [auditbeat-configuring-geoip] + +To configure Auditbeat and the `geoip` processor: + +1. Define an ingest pipeline that uses one or more `geoip` processors to add location information to the event. 
For example, you can use the Console in {{kib}} to create the following pipeline: + + ```console + PUT _ingest/pipeline/geoip-info + { + "description": "Add geoip info", + "processors": [ + { + "geoip": { + "field": "client.ip", + "target_field": "client.geo", + "ignore_missing": true + } + }, + { + "geoip": { + "database_file": "GeoLite2-ASN.mmdb", + "field": "client.ip", + "target_field": "client.as", + "properties": [ + "asn", + "organization_name" + ], + "ignore_missing": true + } + }, + { + "geoip": { + "field": "source.ip", + "target_field": "source.geo", + "ignore_missing": true + } + }, + { + "geoip": { + "database_file": "GeoLite2-ASN.mmdb", + "field": "source.ip", + "target_field": "source.as", + "properties": [ + "asn", + "organization_name" + ], + "ignore_missing": true + } + }, + { + "geoip": { + "field": "destination.ip", + "target_field": "destination.geo", + "ignore_missing": true + } + }, + { + "geoip": { + "database_file": "GeoLite2-ASN.mmdb", + "field": "destination.ip", + "target_field": "destination.as", + "properties": [ + "asn", + "organization_name" + ], + "ignore_missing": true + } + }, + { + "geoip": { + "field": "server.ip", + "target_field": "server.geo", + "ignore_missing": true + } + }, + { + "geoip": { + "database_file": "GeoLite2-ASN.mmdb", + "field": "server.ip", + "target_field": "server.as", + "properties": [ + "asn", + "organization_name" + ], + "ignore_missing": true + } + }, + { + "geoip": { + "field": "host.ip", + "target_field": "host.geo", + "ignore_missing": true + } + }, + { + "rename": { + "field": "server.as.asn", + "target_field": "server.as.number", + "ignore_missing": true + } + }, + { + "rename": { + "field": "server.as.organization_name", + "target_field": "server.as.organization.name", + "ignore_missing": true + } + }, + { + "rename": { + "field": "client.as.asn", + "target_field": "client.as.number", + "ignore_missing": true + } + }, + { + "rename": { + "field": "client.as.organization_name", + "target_field": "client.as.organization.name", + "ignore_missing": true + } + }, + { + "rename": { + "field": "source.as.asn", + "target_field": "source.as.number", + "ignore_missing": true + } + }, + { + "rename": { + "field": "source.as.organization_name", + "target_field": "source.as.organization.name", + "ignore_missing": true + } + }, + { + "rename": { + "field": "destination.as.asn", + "target_field": "destination.as.number", + "ignore_missing": true + } + }, + { + "rename": { + "field": "destination.as.organization_name", + "target_field": "destination.as.organization.name", + "ignore_missing": true + } + } + ] + } + ``` + + In this example, the pipeline ID is `geoip-info`. `field` specifies the field that contains the IP address to use for the geographical lookup, and `target_field` is the field that will hold the geographical information. `"ignore_missing": true` configures the pipeline to continue processing when it encounters an event that doesn’t have the specified field. + + See [GeoIP Processor](elasticsearch://reference/ingestion-tools/enrich-processor/geoip-processor.md) for more options. + + To learn more about adding host information to an event, see [add_host_metadata](/reference/auditbeat/add-host-metadata.md). + +2. In the Auditbeat config file, configure the {{es}} output to use the pipeline. Specify the pipeline ID in the `pipeline` option under `output.elasticsearch`. For example: + + ```yaml + output.elasticsearch: + hosts: ["localhost:9200"] + pipeline: geoip-info + ``` + +3. Run Auditbeat. 
Remember to use `sudo` if the config file is owned by root.
+
+```sh
+./auditbeat -e
+```
+
+If the lookups succeed, the events are enriched with `geo_point` fields, such as `client.geo.location` and `host.geo.location`, that you can use to populate visualizations in {{kib}}.
+
+
+If you add a field that’s not already defined as a `geo_point` in the index template, add a mapping so the field gets indexed correctly.
+
+
+## Visualize locations [auditbeat-visualizing-location]
+
+To visualize the location of IP addresses, you can create a new [coordinate map](docs-content://explore-analyze/visualize/maps.md) in {{kib}} and select the location field, for example `client.geo.location` or `host.geo.location`, as the Geohash.
+
+:::{image} images/coordinate-map.png
+:alt: Coordinate map in {{kib}}
+:class: screenshot
+:::
+
diff --git a/docs/reference/auditbeat/auditbeat-installation-configuration.md b/docs/reference/auditbeat/auditbeat-installation-configuration.md
new file mode 100644
index 000000000000..84822a639fb7
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-installation-configuration.md
@@ -0,0 +1,345 @@
+---
+navigation_title: "Quick start"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-installation-configuration.html
+---
+
+# Auditbeat quick start: installation and configuration [auditbeat-installation-configuration]
+
+
+This guide describes how to get started quickly with audit data collection. You’ll learn how to:
+
+* install Auditbeat on each system you want to monitor
+* specify the location of your audit data
+* parse log data into fields and send it to {{es}}
+* visualize the log data in {{kib}}
+
+:::{image} images/auditbeat-auditd-dashboard.png
+:alt: Auditbeat Auditd dashboard
+:class: screenshot
+:::
+
+
+## Before you begin [_before_you_begin]
+
+You need {{es}} for storing and searching your data, and {{kib}} for visualizing and managing it.
+
+:::::::{tab-set}
+
+::::::{tab-item} Elasticsearch Service
+To get started quickly, spin up a deployment of our [hosted {{ess}}](https://www.elastic.co/cloud/elasticsearch-service). The {{ess}} is available on AWS, GCP, and Azure. [Try it out for free](https://cloud.elastic.co/registration?page=docs&placement=docs-body).
+::::::
+
+::::::{tab-item} Self-managed
+To install and run {{es}} and {{kib}}, see [Installing the {{stack}}](docs-content://deploy-manage/deploy/self-managed/deploy-cluster.md).
+::::::
+
+:::::::
+
+## Step 1: Install Auditbeat [install]
+
+Install Auditbeat on all the servers you want to monitor.
+
+To download and install Auditbeat, use the commands that work with your system:
+
+:::::::{tab-set}
+
+::::::{tab-item} DEB
+Version 9.0.0-beta1 of Auditbeat has not yet been released.
+::::::
+
+::::::{tab-item} RPM
+Version 9.0.0-beta1 of Auditbeat has not yet been released.
+::::::
+
+::::::{tab-item} MacOS
+Version 9.0.0-beta1 of Auditbeat has not yet been released.
+::::::
+
+::::::{tab-item} Linux
+Version 9.0.0-beta1 of Auditbeat has not yet been released.
+::::::
+
+::::::{tab-item} Windows
+Version 9.0.0-beta1 of Auditbeat has not yet been released.
+::::::
+
+:::::::
+The commands shown are for AMD platforms, but ARM packages are also available. Refer to the [download page](https://www.elastic.co/downloads/beats/auditbeat) for the full list of available packages.
+
+
+### Other installation options [other-installation-options]
+
+* [APT or YUM](/reference/auditbeat/setup-repositories.md)
+* [Download page](https://www.elastic.co/downloads/beats/auditbeat)
+* [Docker](/reference/auditbeat/running-on-docker.md)
+* [Kubernetes](/reference/auditbeat/running-on-kubernetes.md)
+
+
+## Step 2: Connect to the {{stack}} [set-connection]
+
+Connections to {{es}} and {{kib}} are required to set up Auditbeat.
+
+Set the connection information in `auditbeat.yml`. To locate this configuration file, see [Directory layout](/reference/auditbeat/directory-layout.md).
+
+:::::::{tab-set}
+
+::::::{tab-item} Elasticsearch Service
+Specify the [cloud.id](/reference/auditbeat/configure-cloud-id.md) of your {{ess}}, and set [cloud.auth](/reference/auditbeat/configure-cloud-id.md) to a user who is authorized to set up Auditbeat. For example:
+
+```yaml
+cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw=="
+cloud.auth: "auditbeat_setup:{pwd}" <1>
+```
+
+1. This example shows a hard-coded password, but you should store sensitive values in the [secrets keystore](/reference/auditbeat/keystore.md).
+::::::
+
+::::::{tab-item} Self-managed
+1. Set the host and port where Auditbeat can find the {{es}} installation, and set the username and password of a user who is authorized to set up Auditbeat. For example:
+
+    ```yaml
+    output.elasticsearch:
+      hosts: ["https://myEShost:9200"]
+      username: "auditbeat_internal"
+      password: "{pwd}" <1>
+      ssl:
+        enabled: true
+        ca_trusted_fingerprint: "b9a10bbe64ee9826abeda6546fc988c8bf798b41957c33d05db736716513dc9c" <2>
+    ```
+
+    1. This example shows a hard-coded password, but you should store sensitive values in the [secrets keystore](/reference/auditbeat/keystore.md).
+    2. This example shows a hard-coded fingerprint, but you should store sensitive values in the [secrets keystore](/reference/auditbeat/keystore.md). The fingerprint is a HEX-encoded SHA-256 of a CA certificate. When you start {{es}} for the first time, security features such as network encryption (TLS) for {{es}} are enabled by default. If you are using the self-signed certificate generated by {{es}} when it is started for the first time, you will need to add its fingerprint here. The fingerprint is printed in the {{es}} startup logs, or you can refer to [connect clients to {{es}} documentation](docs-content://deploy-manage/security/security-certificates-keys.md#_connect_clients_to_es_5) for other options on retrieving it. If you are providing your own SSL certificate to {{es}}, refer to [Auditbeat documentation on how to set up SSL](/reference/auditbeat/configuration-ssl.md#ssl-client-config).
+
+2. If you plan to use our pre-built {{kib}} dashboards, configure the {{kib}} endpoint. Skip this step if {{kib}} is running on the same host as {{es}}.
+
+    ```yaml
+    setup.kibana:
+      host: "mykibanahost:5601" <1>
+      username: "my_kibana_user" <2> <3>
+      password: "{pwd}"
+    ```
+
+    1. The hostname and port of the machine where {{kib}} is running, for example, `mykibanahost:5601`. If you specify a path after the port number, include the scheme and port: `http://mykibanahost:5601/path`.
+    2. The `username` and `password` settings for {{kib}} are optional. If you don’t specify credentials for {{kib}}, Auditbeat uses the `username` and `password` specified for the {{es}} output.
+    3. To use the pre-built {{kib}} dashboards, this user must be authorized to view dashboards or have the `kibana_admin` [built-in role](elasticsearch://reference/elasticsearch/roles.md).
+::::::
+
+:::::::
+To learn more about required roles and privileges, see [*Grant users access to secured resources*](/reference/auditbeat/feature-roles.md).
+
+::::{note}
+You can send data to other [outputs](/reference/auditbeat/configuring-output.md), such as {{ls}}, but that requires additional configuration and setup.
+::::
+
+
+
+## Step 3: Configure data collection modules [enable-modules]
+
+Auditbeat uses [modules](/reference/auditbeat/auditbeat-modules.md) to collect audit information.
+
+By default, Auditbeat uses a configuration that’s tailored to the operating system where Auditbeat is running.
+
+To use a different configuration, change the module settings in `auditbeat.yml`.
+
+The following example shows the `file_integrity` module configured to generate events whenever a file in one of the specified paths changes on disk:
+
+```yaml
+auditbeat.modules:
+
+- module: file_integrity
+  paths:
+  - /bin
+  - /usr/bin
+  - /sbin
+  - /usr/sbin
+  - /etc
+```
+
+::::{tip}
+To test your configuration file, change to the directory where the Auditbeat binary is installed, and run Auditbeat in the foreground with the following options specified: `./auditbeat test config -e`. Make sure your config files are in the path expected by Auditbeat (see [Directory layout](/reference/auditbeat/directory-layout.md)), or use the `-c` flag to specify the path to the config file.
+::::
+
+
+For more information about configuring Auditbeat, also see:
+
+* [Configure Auditbeat](/reference/auditbeat/configuring-howto-auditbeat.md)
+* [Config file format](/reference/libbeat/config-file-format.md)
+* [`auditbeat.reference.yml`](/reference/auditbeat/auditbeat-reference-yml.md): This reference configuration file shows all non-deprecated options. You’ll find it in the same location as `auditbeat.yml`.
+
+
+## Step 4: Set up assets [setup-assets]
+
+Auditbeat comes with predefined assets for parsing, indexing, and visualizing your data. To load these assets:
+
+1. Make sure the user specified in `auditbeat.yml` is [authorized to set up Auditbeat](/reference/auditbeat/privileges-to-setup-beats.md).
+2. From the installation directory, run:
+
+    :::::::{tab-set}
+
+    ::::::{tab-item} DEB
+    ```sh
+    auditbeat setup -e
+    ```
+    ::::::
+
+    ::::::{tab-item} RPM
+    ```sh
+    auditbeat setup -e
+    ```
+    ::::::
+
+    ::::::{tab-item} MacOS
+    ```sh
+    ./auditbeat setup -e
+    ```
+    ::::::
+
+    ::::::{tab-item} Linux
+    ```sh
+    ./auditbeat setup -e
+    ```
+    ::::::
+
+    ::::::{tab-item} Windows
+    ```sh
+    PS > .\auditbeat.exe setup -e
+    ```
+    ::::::
+
+    :::::::
+
+## Step 5: Start Auditbeat [start]
+
+To start Auditbeat, run:
+
+:::::::{tab-set}
+
+::::::{tab-item} DEB
+```sh
+sudo service auditbeat start
+```
+
+::::{note}
+If you use an `init.d` script to start Auditbeat, you can’t specify command line flags (see [Command reference](/reference/auditbeat/command-line-options.md)). To specify flags, start Auditbeat in the foreground.
+::::
+
+
+Also see [Auditbeat and systemd](/reference/auditbeat/running-with-systemd.md).
+::::::
+
+::::::{tab-item} RPM
+```sh
+sudo service auditbeat start
+```
+
+::::{note}
+If you use an `init.d` script to start Auditbeat, you can’t specify command line flags (see [Command reference](/reference/auditbeat/command-line-options.md)). To specify flags, start Auditbeat in the foreground.
+::::
+
+
+Also see [Auditbeat and systemd](/reference/auditbeat/running-with-systemd.md).
+:::::: + +::::::{tab-item} MacOS +```sh +sudo chown root auditbeat.yml <1> +sudo ./auditbeat -e +``` + +1. You’ll be running Auditbeat as root, so you need to change ownership of the configuration file, or run Auditbeat with `--strict.perms=false` specified. See [Config File Ownership and Permissions](/reference/libbeat/config-file-permissions.md). +:::::: + +::::::{tab-item} Linux +```sh +sudo chown root auditbeat.yml <1> +sudo ./auditbeat -e +``` + +1. You’ll be running Auditbeat as root, so you need to change ownership of the configuration file, or run Auditbeat with `--strict.perms=false` specified. See [Config File Ownership and Permissions](/reference/libbeat/config-file-permissions.md). +:::::: + +::::::{tab-item} Windows +```sh +PS C:\Program Files\auditbeat> Start-Service auditbeat +``` + +By default, Windows log files are stored in `C:\ProgramData\auditbeat\Logs`. +:::::: + +::::::: +Auditbeat should begin streaming events to {{es}}. + +If you see a warning about too many open files, you need to increase the `ulimit`. See the [FAQ](/reference/auditbeat/ulimit.md) for more details. + + +## Step 6: View your data in {{kib}} [view-data] + +To make it easier for you to start auditing the activities of users and processes on your system, Auditbeat comes with pre-built {{kib}} dashboards and UIs for visualizing your data. + +To open the dashboards: + +1. Launch {{kib}}: + +
+
+    1. [Log in](https://cloud.elastic.co/) to your {{ecloud}} account.
+    2. Navigate to the {{kib}} endpoint in your deployment.
+
+ +2. In the side navigation, click **Discover**. To see Auditbeat data, make sure the predefined `auditbeat-*` data view is selected. + + ::::{tip} + If you don’t see data in {{kib}}, try changing the time filter to a larger range. By default, {{kib}} shows the last 15 minutes. + :::: + +3. In the side navigation, click **Dashboard**, then select the dashboard that you want to open. + +The dashboards are provided as examples. We recommend that you [customize](docs-content://explore-analyze/dashboards.md) them to meet your needs. + + +## What’s next? [_whats_next] + +Now that you have audit data streaming into {{es}}, learn how to unify your logs, metrics, uptime, and application performance data. + +1. Ingest data from other sources by installing and configuring other Elastic {{beats}}: + + | Elastic {{beats}} | To capture | + | --- | --- | + | [{{metricbeat}}](/reference/metricbeat/metricbeat-installation-configuration.md) | Infrastructure metrics | + | [{{filebeat}}](/reference/filebeat/filebeat-installation-configuration.md) | Logs | + | [{{winlogbeat}}](/reference/winlogbeat/winlogbeat-installation-configuration.md) | Windows event logs | + | [{{heartbeat}}](/reference/heartbeat/heartbeat-installation-configuration.md) | Uptime information | + | [APM](docs-content://solutions/observability/apps/application-performance-monitoring-apm.md) | Application performance metrics | + +2. Use the Observability apps in {{kib}} to search across all your data: + + | Elastic apps | Use to | + | --- | --- | + | [{{metrics-app}}](docs-content://solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md) | Explore metrics about systems and services across your ecosystem | + | [{{logs-app}}](docs-content://solutions/observability/logs/explore-logs.md) | Tail related log data in real time | + | [{{uptime-app}}](docs-content://solutions/observability/apps/synthetic-monitoring.md#monitoring-uptime) | Monitor availability issues across your apps and services | + | [APM app](docs-content://solutions/observability/apps/overviews.md) | Monitor application performance | + | [{{siem-app}}](docs-content://solutions/security.md) | Analyze security events | + + diff --git a/docs/reference/auditbeat/auditbeat-module-auditd.md b/docs/reference/auditbeat/auditbeat-module-auditd.md new file mode 100644 index 000000000000..b71154b756ae --- /dev/null +++ b/docs/reference/auditbeat/auditbeat-module-auditd.md @@ -0,0 +1,274 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-module-auditd.html +--- + +# Auditd Module [auditbeat-module-auditd] + +The `auditd` module receives audit events from the Linux Audit Framework that is a part of the Linux kernel. + +This module is available only for Linux. + + +## How it works [_how_it_works] + +This module establishes a subscription to the kernel to receive the events as they occur. So unlike most other modules, the `period` configuration option is unused because it is not implemented using polling. + +The Linux Audit Framework can send multiple messages for a single auditable event. For example, a `rename` syscall causes the kernel to send eight separate messages. Each message describes a different aspect of the activity that is occurring (the syscall itself, file paths, current working directory, process title). This module will combine all of the data from each of the messages into a single event. + +Messages for one event can be interleaved with messages from another event. 
This module will buffer the messages in order to combine related messages into a single event even if they arrive interleaved or out of order.
+
+
+## Useful commands [_useful_commands]
+
+When running Auditbeat with the `auditd` module enabled, you might find that other monitoring tools interfere with Auditbeat.
+
+For example, you might encounter errors if another process, such as `auditd`, is registered to receive data from the Linux Audit Framework. You can use these commands to see if the `auditd` service is running and stop it:
+
+* See if `auditd` is running:
+
+    ```shell
+    service auditd status
+    ```
+
+* Stop the `auditd` service:
+
+    ```shell
+    service auditd stop
+    ```
+
+* Disable `auditd` from starting on boot:
+
+    ```shell
+    chkconfig auditd off
+    ```
+
+
+To save CPU usage and disk space, you can use this command to stop `journald` from listening to audit messages:
+
+```shell
+systemctl mask systemd-journald-audit.socket
+```
+
+
+## Inspect the kernel audit system status [_inspect_the_kernel_audit_system_status]
+
+Auditbeat provides useful commands to query the state of the audit system in the Linux kernel.
+
+* See the list of installed audit rules:
+
+    ```shell
+    auditbeat show auditd-rules
+    ```
+
+    Prints the list of loaded rules, similar to `auditctl -l`:
+
+    ```shell
+    -a never,exit -S all -F pid=26253
+    -a always,exit -F arch=b32 -S all -F key=32bit-abi
+    -a always,exit -F arch=b64 -S execve,execveat -F key=exec
+    -a always,exit -F arch=b64 -S connect,accept,bind -F key=external-access
+    -w /etc/group -p wa -k identity
+    -w /etc/passwd -p wa -k identity
+    -w /etc/gshadow -p wa -k identity
+    -a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EACCES -F key=access
+    -a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EPERM -F key=access
+    ```
+
+* See the status of the audit system:
+
+    ```shell
+    auditbeat show auditd-status
+    ```
+
+    Prints the status of the kernel audit system, similar to `auditctl -s`:
+
+    ```shell
+    enabled 1
+    failure 0
+    pid 0
+    rate_limit 0
+    backlog_limit 8192
+    lost 14407
+    backlog 0
+    backlog_wait_time 0
+    features 0xf
+    ```
+
+
+
+## Configuration options [_configuration_options_17]
+
+This module has some configuration options for tuning its behavior. The following example shows all configuration options with their default values.
+
+```yaml
+- module: auditd
+  resolve_ids: true
+  failure_mode: silent
+  backlog_limit: 8192
+  rate_limit: 0
+  include_raw_message: false
+  include_warnings: false
+  backpressure_strategy: auto
+  immutable: false
+```
+
+This module also supports the [standard configuration options](#module-standard-options-auditd) described later.
+
+**`socket_type`**
+: This optional setting controls the type of socket that Auditbeat uses to receive events from the kernel. The two options are `unicast` and `multicast`.
+
+    `unicast` should be used when Auditbeat is the primary userspace daemon for receiving audit events and managing the rules. Only a single process can receive audit events through the "unicast" connection, so any other daemons should be stopped (e.g. stop `auditd`).
+
+    `multicast` can be used in kernel versions 3.16 and newer. By using `multicast`, Auditbeat will receive an audit event broadcast that is not exclusive to a single process. This is ideal for situations where `auditd` is running and managing the rules.
+
+    By default, Auditbeat will use `multicast` if the kernel version is 3.16 or newer and no rules have been defined. Otherwise, `unicast` will be used.
+
+
+**`immutable`**
+: This boolean setting sets the audit config as immutable (`-e 2`). This option can only be used with `socket_type: unicast`, since Auditbeat needs to manage the rules to be able to set it.
+
+    It is important to note that with this setting enabled, if Auditbeat is stopped and resumed, events will continue to be processed, but the configuration won’t be updated until the system is restarted entirely.
+
+
+**`resolve_ids`**
+: This boolean setting enables the resolution of UIDs and GIDs to their associated names. The default value is true.
+
+**`failure_mode`**
+: This determines the kernel’s behavior on critical failures, such as errors sending events to Auditbeat, the backlog limit being exceeded, the kernel running out of memory, or the rate limit being exceeded. The options are `silent`, `log`, or `panic`. `silent` makes the kernel ignore the errors, `log` makes the kernel write the audit messages using `printk` so they show up in the system’s syslog, and `panic` causes the kernel to panic to prevent use of the machine. Auditbeat’s default is `silent`.
+
+**`backlog_limit`**
+: This controls the maximum number of audit messages that will be buffered by the kernel.
+
+**`rate_limit`**
+: This sets a rate limit on the number of messages/sec delivered by the kernel. The default is 0, which disables rate limiting. Changing this value to anything other than zero can cause messages to be lost. The preferred approach to reduce the messaging rate is to be more selective in the audit ruleset.
+
+**`include_raw_message`**
+: This boolean setting causes Auditbeat to include each of the raw messages that contributed to the event in the document as a field called `event.original`. The default value is false. This setting is primarily used for development and debugging purposes.
+
+**`include_warnings`**
+: This boolean setting causes Auditbeat to include as warnings any issues that were encountered while parsing the raw messages. The messages are written to the `error.message` field. The default value is false. When this setting is enabled the raw messages will be included in the event regardless of the `include_raw_message` config setting. This setting is primarily used for development and debugging purposes.
+
+**`audit_rules`**
+: A string containing the audit rules that should be installed to the kernel. There should be one rule per line. Comments can be embedded in the string using `#` as a prefix. The format for rules is the same used by the Linux `auditctl` utility. Auditbeat supports adding file watches (`-w`) and syscall rules (`-a` or `-A`). For more information, see [Audit rules](#audit-rules).
+
+**`audit_rule_files`**
+: A list of files to load audit rules from. These files are loaded after the rules declared in `audit_rules` are loaded. Wildcards are supported and will expand in lexicographical order. The format is the same as that of the `audit_rules` field.
+
+**`ignore_errors`**
+: This setting allows errors during rule loading and parsing to be ignored, but logged as warnings.
+
+**`backpressure_strategy`**
+: Specifies the strategy that Auditbeat uses to prevent backpressure from propagating to the kernel and impacting audited processes.
+
+    The possible values are:
+
+    * `auto` (default): Auditbeat uses the `kernel` strategy, if supported, or falls back to the `userspace` strategy.
+    * `kernel`: Auditbeat sets the `backlog_wait_time` in the kernel’s audit framework to 0. This causes events to be discarded in the kernel if the audit backlog queue fills to capacity. Requires a 3.14 kernel or newer.
+    * `userspace`: Auditbeat drops events when there is backpressure from the publishing pipeline. If no `rate_limit` is set, Auditbeat sets a rate limit of 5000. Users should test their setup and adjust the `rate_limit` option accordingly.
+    * `both`: Auditbeat uses the `kernel` and `userspace` strategies at the same time.
+    * `none`: No backpressure mitigation measures are enabled.
+
+
+
+### Standard configuration options [module-standard-options-auditd]
+
+You can specify the following options for any Auditbeat module.
+
+**`module`**
+: The name of the module to run.
+
+**`enabled`**
+: A Boolean value that specifies whether the module is enabled.
+
+**`fields`**
+: A dictionary of fields that will be sent with the dataset event. This setting is optional.
+
+**`tags`**
+: A list of tags that will be sent with the dataset event. This setting is optional.
+
+**`processors`**
+: A list of processors to apply to the data generated by the dataset.
+
+    See [Processors](/reference/auditbeat/filtering-enhancing-data.md) for information about specifying processors in your config.
+
+
+**`index`**
+: If present, this formatted string overrides the index for events from this module (for elasticsearch outputs), or sets the `raw_index` field of the event’s metadata (for other outputs). This string can only refer to the agent name and version and the event timestamp; for access to dynamic fields, use `output.elasticsearch.index` or a processor.
+
+    Example value: `"%{[agent.name]}-myindex-%{+yyyy.MM.dd}"` might expand to `"auditbeat-myindex-2019.12.13"`.
+
+
+**`keep_null`**
+: If this option is set to true, fields with `null` values will be published in the output document. By default, `keep_null` is set to `false`.
+
+**`service.name`**
+: A name given by the user to the service the data is collected from. It can be used for example to identify information collected from nodes of different clusters with the same `service.type`.
+
+
+## Audit rules [audit-rules]
+
+The audit rules are where you configure the activities that are audited. These rules are configured as either syscalls or files that should be monitored. For example, you can track all `connect` syscalls or file system writes to `/etc/passwd`.
+
+Auditing a large number of syscalls can place a heavy load on the system, so consider carefully the rules you define and try to apply filters in the rules themselves to be as selective as possible.
+
+The kernel evaluates the rules in the order in which they were defined, so place the most active rules first in order to speed up evaluation.
+
+You can assign keys to each rule for better identification of the rule that triggered an event and easier filtering later in Elasticsearch.
+
+Defining any audit rules in the config causes Auditbeat to purge all existing audit rules prior to adding the rules specified in the config. Therefore, it is unnecessary and unsupported to include a `-D` (delete all) rule.
+
+```sh
+auditbeat.modules:
+- module: auditd
+  audit_rules: |
+    # Things that affect identity.
+    -w /etc/group -p wa -k identity
+    -w /etc/passwd -p wa -k identity
+    -w /etc/gshadow -p wa -k identity
+    -w /etc/shadow -p wa -k identity
+
+    # Unauthorized access attempts to files (unsuccessful).
+    -a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -F key=access
+    -a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -F key=access
+    -a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -F key=access
+    -a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -F key=access
+```
+
+
+## Example configuration [_example_configuration]
+
+The Auditd module supports the common configuration options that are described under [configuring Auditbeat](/reference/auditbeat/configuration-auditbeat.md). Here is an example configuration:
+
+```yaml
+auditbeat.modules:
+- module: auditd
+  # Load audit rules from separate files. Same format as audit.rules(7).
+  audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
+  audit_rules: |
+    ## Define audit rules here.
+    ## Create file watches (-w) or syscall audits (-a or -A). Uncomment these
+    ## examples or add your own rules.
+
+    ## If you are on a 64 bit platform, everything should be running
+    ## in 64 bit mode. This rule will detect any use of the 32 bit syscalls
+    ## because this might be a sign of someone exploiting a hole in the 32
+    ## bit API.
+    #-a always,exit -F arch=b32 -S all -F key=32bit-abi
+
+    ## Executions.
+    #-a always,exit -F arch=b64 -S execve,execveat -k exec
+
+    ## External access (warning: these can be expensive to audit).
+    #-a always,exit -F arch=b64 -S accept,bind,connect -F key=external-access
+
+    ## Identity changes.
+    #-w /etc/group -p wa -k identity
+    #-w /etc/passwd -p wa -k identity
+    #-w /etc/gshadow -p wa -k identity
+
+    ## Unauthorized access attempts.
+    #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
+    #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
+```
+
diff --git a/docs/reference/auditbeat/auditbeat-module-file_integrity.md b/docs/reference/auditbeat/auditbeat-module-file_integrity.md
new file mode 100644
index 000000000000..77009058d5c2
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-module-file_integrity.md
@@ -0,0 +1,137 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-module-file_integrity.html
+---
+
+# File Integrity Module [auditbeat-module-file_integrity]
+
+The `file_integrity` module sends events when a file is changed (created, updated, or deleted) on disk. The events contain file metadata and hashes.
+
+The module is implemented for Linux, macOS (Darwin), and Windows.
+
+
+## How it works [_how_it_works_2]
+
+This module uses features of the operating system to monitor file changes in real time. When the module starts, it creates a subscription with the OS to receive notifications of changes to the specified files or directories. Upon receiving notification of a change, the module reads the file’s metadata and then computes a hash of the file’s contents.
+
+At startup, this module performs an initial scan of the configured files and directories to generate baseline data for the monitored paths and to detect changes since the last time it was run. It uses locally persisted data in order to only send events for new or modified files.
+
+The operating system features that power this feature are as follows.
+
+* Linux - Multiple backends are supported: `auto`, `fsnotify`, `kprobes`, `ebpf`. By default, `fsnotify` is used, and therefore the kernel must have inotify support. Inotify was initially merged into the 2.6.13 Linux kernel. The eBPF backend uses modern eBPF features and supports 5.10.16+ kernels. The `kprobes` backend uses tracefs and supports 3.10+ kernels. FSNotify doesn’t have the ability to associate user data with file events. The preferred backend can be selected by specifying the `backend` config option. Since eBPF and Kprobes are in technical preview, `auto` will default to `fsnotify`.
+* macOS (Darwin) - Uses the `FSEvents` API, present since macOS 10.5. This API coalesces multiple changes to a file into a single event. Auditbeat translates these coalesced changes into a meaningful sequence of actions. However, in rare situations the reported events may have a different ordering than what actually happened.
+* Windows - `ReadDirectoryChangesW` is used.
+
+The file integrity module should not be used to monitor paths on network file systems.
+
+
+## Configuration options [_configuration_options_18]
+
+This module has some configuration options for tuning its behavior. The following example shows all configuration options with their default values for Linux.
+
+```yaml
+- module: file_integrity
+  paths:
+  - /bin
+  - /usr/bin
+  - /sbin
+  - /usr/sbin
+  - /etc
+  exclude_files:
+  - '(?i)\.sw[nop]$'
+  - '~$'
+  - '/\.git($|/)'
+  include_files: []
+  scan_at_start: true
+  scan_rate_per_sec: 50 MiB
+  max_file_size: 100 MiB
+  hash_types: [sha1]
+  recursive: false
+```
+
+This module also supports the [standard configuration options](#module-standard-options-file_integrity) described later.
+
+**`paths`**
+: A list of paths (directories or files) to watch. Globs are not supported. The specified paths should exist when the metricset is started. Paths should be absolute, although the file integrity module will attempt to resolve relative path events to their absolute file path. Symbolic links will be resolved on module start and the link target will be watched if link resolution is successful. Changes to the symbolic link after module start will not change the watch target. If the link does not resolve to a valid target, the symbolic link itself will be watched; if the symlink target becomes valid after module startup, this will not be picked up by the file system watches.
+
+**`exclude_files`**
+: A list of regular expressions used to filter out events for unwanted files. The expressions are matched against the full path of every file and directory. When used in conjunction with `include_files`, file paths need to match both `include_files` and not match `exclude_files` to be selected. By default, no files are excluded. See [*Regular expression support*](/reference/auditbeat/regexp-support.md) for a list of supported regexp patterns. It is recommended to wrap regular expressions in single quotation marks to avoid issues with YAML escaping rules.
+
+**`include_files`**
+: A list of regular expressions used to specify which files to select. When configured, only files matching the pattern will be monitored. The expressions are matched against the full path of every file and directory. When used in conjunction with `exclude_files`, file paths need to match both `include_files` and not match `exclude_files` to be selected. By default, all files are selected. See [*Regular expression support*](/reference/auditbeat/regexp-support.md) for a list of supported regexp patterns. It is recommended to wrap regular expressions in single quotation marks to avoid issues with YAML escaping rules.
+
+**`scan_at_start`**
+: A Boolean value that controls whether Auditbeat scans over the configured file paths at startup and sends events for the files that have been modified since the last time Auditbeat was running. The default value is true.
+
+    This feature depends on data stored locally in `path.data` in order to determine if a file has changed. The first time Auditbeat runs, it will send an event for each file it encounters.
+
+
+**`scan_rate_per_sec`**
+: When `scan_at_start` is enabled, this sets an average read rate defined in bytes per second for the initial scan. This throttles the amount of CPU and I/O that Auditbeat consumes at startup. The default value is "50 MiB". Setting the value to "0" disables throttling. For convenience, units can be specified as a suffix to the value. The supported units are `b` (default), `kib`, `kb`, `mib`, `mb`, `gib`, `gb`, `tib`, `tb`, `pib`, `pb`, `eib`, and `eb`.
+
+**`max_file_size`**
+: The maximum size of a file in bytes for which Auditbeat will compute hashes and run file parsers. Files larger than this size will not be hashed or analysed by configured file parsers. The default value is 100 MiB. For convenience, units can be specified as a suffix to the value. The supported units are `b` (default), `kib`, `kb`, `mib`, `mb`, `gib`, `gb`, `tib`, `tb`, `pib`, `pb`, `eib`, and `eb`.
+
+**`hash_types`**
+: A list of hash types to compute when the file changes. The supported hash types are `blake2b_256`, `blake2b_384`, `blake2b_512`, `md5`, `sha1`, `sha224`, `sha256`, `sha384`, `sha512`, `sha512_224`, `sha512_256`, `sha3_224`, `sha3_256`, `sha3_384`, `sha3_512`, and `xxh64`. The default value is `sha1`.
+
+**`file_parsers`**
+: A list of `file_integrity` fields under `file` that will be populated by file format parsers. The available fields that can be analysed are listed in the `auditbeat.reference.yml` file. File parsers are run on all files within the `max_file_size` limit in the configured paths during a scan or when a file event involves the file. Files that are not targets of the specific file parser are only sniffed to examine whether analysis should proceed. This will usually only involve reading a small number of bytes.
+
+**`recursive`**
+: By default, the watches set on the paths specified in `paths` are not recursive. This means that only changes to the contents of these directories are watched. If `recursive` is set to `true`, the `file_integrity` module will watch for changes in these directories and all their subdirectories.
+
+**`backend`**
+: (**Linux only**) Select the backend which will be used to source events. Valid values: `auto`, `fsnotify`, `kprobes`, `ebpf`. Default: `fsnotify`.
+
+
+### Standard configuration options [module-standard-options-file_integrity]
+
+You can specify the following options for any Auditbeat module.
+
+**`module`**
+: The name of the module to run.
+
+**`enabled`**
+: A Boolean value that specifies whether the module is enabled.
+
+**`fields`**
+: A dictionary of fields that will be sent with the dataset event. This setting is optional.
+
+**`tags`**
+: A list of tags that will be sent with the dataset event. This setting is optional.
+
+**`processors`**
+: A list of processors to apply to the data generated by the dataset.
+
+    See [Processors](/reference/auditbeat/filtering-enhancing-data.md) for information about specifying processors in your config.
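+
+    As a short illustrative sketch (the processor choice and the path are assumptions, not recommendations), the standard `drop_event` processor could suppress events from a noisy subdirectory:
+
+    ```yaml
+    - module: file_integrity
+      paths:
+      - /etc
+      processors:
+      - drop_event:          # drop matching events entirely
+          when:
+            regexp:
+              file.path: '^/etc/cups/'  # hypothetical noisy path
+    ```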
+
+
+**`index`**
+: If present, this formatted string overrides the index for events from this module (for elasticsearch outputs), or sets the `raw_index` field of the event’s metadata (for other outputs). This string can only refer to the agent name and version and the event timestamp; for access to dynamic fields, use `output.elasticsearch.index` or a processor.
+
+    Example value: `"%{[agent.name]}-myindex-%{+yyyy.MM.dd}"` might expand to `"auditbeat-myindex-2019.12.13"`.
+
+
+**`keep_null`**
+: If this option is set to true, fields with `null` values will be published in the output document. By default, `keep_null` is set to `false`.
+
+**`service.name`**
+: A name given by the user to the service the data is collected from. It can be used, for example, to identify information collected from nodes of different clusters with the same `service.type`.
+
+
+## Example configuration [_example_configuration_2]
+
+The File Integrity module supports the common configuration options that are described under [configuring Auditbeat](/reference/auditbeat/configuration-auditbeat.md). Here is an example configuration:
+
+```yaml
+auditbeat.modules:
+- module: file_integrity
+  paths:
+  - /bin
+  - /usr/bin
+  - /sbin
+  - /usr/sbin
+  - /etc
+```
+
diff --git a/docs/reference/auditbeat/auditbeat-module-system.md b/docs/reference/auditbeat/auditbeat-module-system.md
new file mode 100644
index 000000000000..0240d427d794
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-module-system.md
@@ -0,0 +1,211 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-module-system.html
+---
+
+# System Module [auditbeat-module-system]
+
+::::{warning}
+This functionality is in beta and is subject to change. The design and code are less mature than official GA features and are being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+::::
+
+
+The `system` module collects various security-related information about a system. All datasets send both periodic state information (e.g. all currently running processes) and real-time changes (e.g. when a new process starts or stops).
+
+The module is fully implemented for Linux on x86. Currently, the `socket` dataset is not available on ARM. Some datasets are also available for macOS (Darwin) and Windows.
+
+
+## How it works [_how_it_works_3]
+
+Each dataset sends two kinds of information: state and events.
+
+State information is sent periodically and (for some datasets) on startup. A state update will consist of one event per object that is currently active on the system (e.g. a process). All events belonging to the same state update will share the same UUID in `event.id`.
+
+The frequency of state updates can be controlled for all datasets using the `state.period` configuration option. Overrides are available per dataset. The default is `12h`.
+
+Event information is sent as the events occur (e.g. a process starts or stops). All datasets currently use a poll model to retrieve their data. The frequency of these polls is controlled by the `period` configuration parameter.
+
+
+### Entity IDs [_entity_ids]
+
+This module populates `entity_id` fields to uniquely identify entities (users, packages, processes…) within a host. This requires Auditbeat to obtain a unique identifier for the host:
+
+* Windows: Uses the `HKLM\Software\Microsoft\Cryptography\MachineGuid` registry key.
+* macOS: Uses the value returned by the `gethostuuid(2)` system call.
+* Linux: Uses the content of one of the following files, created by either `systemd` or `dbus`:
+
+    * /etc/machine-id
+    * /var/lib/dbus/machine-id
+    * /var/db/dbus/machine-id
+
+
+::::{note}
+Under CentOS 6.x, it’s possible that none of the files above exist. In that case, running `dbus-uuidgen --ensure` (provided by the `dbus` package) will generate one for you.
+::::
+
+
+
+### Example dashboard [_example_dashboard]
+
+The module comes with a sample dashboard:
+
+:::{image} images/auditbeat-system-overview-dashboard.png
+:alt: Auditbeat System Overview Dashboard
+:class: screenshot
+:::
+
+
+## Configuration options [_configuration_options_19]
+
+This module has some configuration options for controlling its behavior. The following example shows all configuration options with their default values for Linux.
+
+::::{note}
+It is recommended to configure some datasets separately. See below for a sample suggested configuration.
+::::
+
+
+```yaml
+- module: system
+  datasets:
+    - host
+    - login
+    - package
+    - process
+    - socket
+    - user
+  period: 10s
+  state.period: 12h
+
+  socket.include_localhost: false
+
+  user.detect_password_changes: true
+```
+
+This module also supports the [standard configuration options](#module-standard-options-system) described later.
+
+**`state.period`**
+: The interval at which the datasets send full state information. This option can be overridden per dataset using `{{dataset}}.state.period`.
+
+**`user.detect_password_changes`**
+: If the `user` dataset is configured and this option is set to `true`, Auditbeat will read password information in `/etc/passwd` and `/etc/shadow` to detect password changes. A hash will be kept locally in the `beat.db` file to detect changes between Auditbeat restarts. The `beat.db` file should be readable only by the root user and be treated similarly to the shadow file itself.
+
+
+### Standard configuration options [module-standard-options-system]
+
+You can specify the following options for any Auditbeat module.
+
+**`module`**
+: The name of the module to run.
+
+**`datasets`**
+: A list of datasets to execute.
+
+**`enabled`**
+: A Boolean value that specifies whether the module is enabled.
+
+**`period`**
+: The frequency at which the datasets check for changes. If a system is not reachable, Auditbeat returns an error for each period. This setting is required. For most datasets, especially `process` and `socket`, a shorter period is recommended.
+
+**`fields`**
+: A dictionary of fields that will be sent with the dataset event. This setting is optional.
+
+**`tags`**
+: A list of tags that will be sent with the dataset event. This setting is optional.
+
+**`processors`**
+: A list of processors to apply to the data generated by the dataset.
+
+    See [Processors](/reference/auditbeat/filtering-enhancing-data.md) for information about specifying processors in your config.
+
+
+**`index`**
+: If present, this formatted string overrides the index for events from this module (for elasticsearch outputs), or sets the `raw_index` field of the event’s metadata (for other outputs). This string can only refer to the agent name and version and the event timestamp; for access to dynamic fields, use `output.elasticsearch.index` or a processor.
+
+    Example value: `"%{[agent.name]}-myindex-%{+yyyy.MM.dd}"` might expand to `"auditbeat-myindex-2019.12.13"`.
+
+
+**`keep_null`**
+: If this option is set to true, fields with `null` values will be published in the output document. By default, `keep_null` is set to `false`.
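+
+    For example, a hypothetical module entry that enables it might look like this (the dataset and period are illustrative, not defaults):
+
+    ```yaml
+    - module: system
+      datasets:
+        - process
+      period: 1s
+      keep_null: true   # publish null-valued fields for this module's events
+    ```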
+
+**`service.name`**
+: A name given by the user to the service the data is collected from. It can be used, for example, to identify information collected from nodes of different clusters with the same `service.type`.
+
+
+## Suggested configuration [_suggested_configuration]
+
+Processes and sockets can be short-lived, so the chance of missing an update increases if the polling interval is too large.
+
+On the other hand, host and user information is unlikely to change frequently, so a longer polling interval can be used.
+
+```yaml
+- module: system
+  datasets:
+    - host
+    - login
+    - package
+    - user
+  period: 1m
+
+  user.detect_password_changes: true
+
+- module: system
+  datasets:
+    - process
+    - socket
+  period: 1s
+```
+
+
+## Example configuration [_example_configuration_3]
+
+The System module supports the common configuration options that are described under [configuring Auditbeat](/reference/auditbeat/configuration-auditbeat.md). Here is an example configuration:
+
+```yaml
+auditbeat.modules:
+- module: system
+  datasets:
+    - package # Installed, updated, and removed packages
+
+  period: 2m # The frequency at which the datasets check for changes
+
+- module: system
+  datasets:
+    - host # General host information, e.g. uptime, IPs
+    - login # User logins, logouts, and system boots.
+    - process # Started and stopped processes
+    - socket # Opened and closed sockets
+    - user # User information
+
+  # How often datasets send state updates with the
+  # current state of the system (e.g. all currently
+  # running processes, all open sockets).
+  state.period: 12h
+
+  # Enabled by default. Auditbeat will read password fields in
+  # /etc/passwd and /etc/shadow and store a hash locally to
+  # detect any changes.
+  user.detect_password_changes: true
+
+  # File patterns of the login record files.
+  login.wtmp_file_pattern: /var/log/wtmp*
+  login.btmp_file_pattern: /var/log/btmp*
+```
+
+
+## Datasets [_datasets]
+
+The following datasets are available:
+
+* [host](/reference/auditbeat/auditbeat-dataset-system-host.md)
+* [login](/reference/auditbeat/auditbeat-dataset-system-login.md)
+* [package](/reference/auditbeat/auditbeat-dataset-system-package.md)
+* [process](/reference/auditbeat/auditbeat-dataset-system-process.md)
+* [socket](/reference/auditbeat/auditbeat-dataset-system-socket.md)
+* [user](/reference/auditbeat/auditbeat-dataset-system-user.md)
+
diff --git a/docs/reference/auditbeat/auditbeat-modules.md b/docs/reference/auditbeat/auditbeat-modules.md
new file mode 100644
index 000000000000..a4d87b1fb4f0
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-modules.md
@@ -0,0 +1,13 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-modules.html
+---
+
+# Modules [auditbeat-modules]
+
+This section contains detailed information about the data-collecting modules contained in Auditbeat. More details about each module can be found under the links below.
+ +* [Auditd](/reference/auditbeat/auditbeat-module-auditd.md) +* [File Integrity](/reference/auditbeat/auditbeat-module-file_integrity.md) +* [System](/reference/auditbeat/auditbeat-module-system.md) + diff --git a/docs/reference/auditbeat/auditbeat-overview.md b/docs/reference/auditbeat/auditbeat-overview.md new file mode 100644 index 000000000000..7fb6f4754fa4 --- /dev/null +++ b/docs/reference/auditbeat/auditbeat-overview.md @@ -0,0 +1,12 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-overview.html + - https://www.elastic.co/guide/en/beats/auditbeat/current/index.html +--- + +# Auditbeat overview [auditbeat-overview] + +Auditbeat is a lightweight shipper that you can install on your servers to audit the activities of users and processes on your systems. For example, you can use Auditbeat to collect and centralize audit events from the Linux Audit Framework. You can also use Auditbeat to detect changes to critical files, like binaries and configuration files, and identify potential security policy violations. + +Auditbeat is an Elastic [Beat](https://www.elastic.co/beats). It’s based on the `libbeat` framework. For more information, see the [Beats Platform Reference](/reference/index.md). + diff --git a/docs/reference/auditbeat/auditbeat-reference-yml.md b/docs/reference/auditbeat/auditbeat-reference-yml.md new file mode 100644 index 000000000000..b62983d5bcba --- /dev/null +++ b/docs/reference/auditbeat/auditbeat-reference-yml.md @@ -0,0 +1,1876 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-reference-yml.html +--- + +# auditbeat.reference.yml [auditbeat-reference-yml] + +The following reference file is available with your Auditbeat installation. It shows all non-deprecated Auditbeat options. You can copy from this file and paste configurations into the `auditbeat.yml` file to customize it. + +::::{tip} +The reference file is located in the same directory as the `auditbeat.yml` file. To locate the file, see [Directory layout](/reference/auditbeat/directory-layout.md). +:::: + + +The contents of the file are included here for your convenience. + +```yaml +## Auditbeat Configuration ############################# + +# This is a reference configuration file documenting all non-deprecated options +# in comments. For a shorter configuration example that contains only the most +# common options, please see auditbeat.yml in the same directory. +# +# You can find the full configuration reference here: +# https://www.elastic.co/guide/en/beats/auditbeat/index.html + +# ============================== Config Reloading ============================== + +# Config reloading allows to dynamically load modules. Each file that is +# monitored must contain one or multiple modules as a list. +auditbeat.config.modules: + + # Glob pattern for configuration reloading + path: ${path.config}/modules.d/*.yml + + # Period on which files under path should be checked for changes + reload.period: 10s + + # Set to true to enable config reloading + reload.enabled: false + +# Maximum amount of time to randomly delay the start of a dataset. Use 0 to +# disable startup delay. +auditbeat.max_start_delay: 10s + +# =========================== Modules configuration ============================ +auditbeat.modules: + +# The auditd module collects events from the audit framework in the Linux +# kernel. You need to specify audit rules for the events that you want to audit. 
+- module: auditd + resolve_ids: true + failure_mode: silent + backlog_limit: 8196 + rate_limit: 0 + include_raw_message: false + include_warnings: false + + # Set to true to publish fields with null values in events. + #keep_null: false + + # Load audit rules from separate files. Same format as audit.rules(7). + audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ] + audit_rules: | + ## Define audit rules here. + ## Create file watches (-w) or syscall audits (-a or -A). Uncomment these + ## examples or add your own rules. + + ## If you are on a 64 bit platform, everything should be running + ## in 64 bit mode. This rule will detect any use of the 32 bit syscalls + ## because this might be a sign of someone exploiting a hole in the 32 + ## bit API. + #-a always,exit -F arch=b32 -S all -F key=32bit-abi + + ## Executions. + #-a always,exit -F arch=b64 -S execve,execveat -k exec + + ## External access (warning: these can be expensive to audit). + #-a always,exit -F arch=b64 -S accept,bind,connect -F key=external-access + + ## Identity changes. + #-w /etc/group -p wa -k identity + #-w /etc/passwd -p wa -k identity + #-w /etc/gshadow -p wa -k identity + + ## Unauthorized access attempts. + #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access + #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access + +# The file integrity module sends events when files are changed (created, +# updated, deleted). The events contain file metadata and hashes. +- module: file_integrity + paths: + - /bin + - /usr/bin + - /sbin + - /usr/sbin + - /etc + + # List of regular expressions to filter out notifications for unwanted files. + # Wrap in single quotes to workaround YAML escaping rules. By default no files + # are ignored. + exclude_files: + - '(?i)\.sw[nop]$' + - '~$' + - '/\.git($|/)' + + # List of regular expressions used to explicitly include files. When configured, + # Auditbeat will ignore files unless they match a pattern. + #include_files: + #- '/\.ssh($|/)' + # Select the backend which will be used to source events. + # "fsnotify" doesn't have the ability to associate user data to file events. + # Valid values: auto, fsnotify, kprobes, ebpf. + # Default: fsnotify. + backend: fsnotify + + # Scan over the configured file paths at startup and send events for new or + # modified files since the last time Auditbeat was running. + scan_at_start: true + + # Average scan rate. This throttles the amount of CPU and I/O that Auditbeat + # consumes at startup while scanning. Default is "50 MiB". + scan_rate_per_sec: 50 MiB + + # Limit on the size of files that will be hashed. Default is "100 MiB". + max_file_size: 100 MiB + + # Hash types to compute when the file changes. Supported types are + # blake2b_256, blake2b_384, blake2b_512, md5, sha1, sha224, sha256, sha384, + # sha512, sha512_224, sha512_256, sha3_224, sha3_256, sha3_384, sha3_512, and xxh64. + # Default is sha1. + hash_types: [sha1] + + # Detect changes to files included in subdirectories. Disabled by default. + recursive: false + + # Set to true to publish fields with null values in events. + #keep_null: false + + # Parse detailed information for the listed fields. Field paths in the list below + # that are a prefix of other field paths imply the longer field path. A set of + # fields may be specified using an RE2 regular expression quoted in //. For example + # /^file\.pe\./ will match all file.pe.* fields. 
Note that the expression is not + # implicitly anchored, so the empty expression will match all fields. + # file_parsers: + # - file.elf.sections + # - file.elf.sections.name + # - file.elf.sections.physical_size + # - file.elf.sections.virtual_size + # - file.elf.sections.entropy + # - file.elf.sections.var_entropy + # - file.elf.import_hash + # - file.elf.imports + # - file.elf.imports_names_entropy + # - file.elf.imports_names_var_entropy + # - file.elf.go_import_hash + # - file.elf.go_imports + # - file.elf.go_imports_names_entropy + # - file.elf.go_imports_names_var_entropy + # - file.elf.go_stripped + # - file.macho.sections + # - file.macho.sections.name + # - file.macho.sections.physical_size + # - file.macho.sections.virtual_size + # - file.macho.sections.entropy + # - file.macho.sections.var_entropy + # - file.macho.import_hash + # - file.macho.symhash + # - file.macho.imports + # - file.macho.imports_names_entropy + # - file.macho.imports_names_var_entropy + # - file.macho.go_import_hash + # - file.macho.go_imports + # - file.macho.go_imports_names_entropy + # - file.macho.go_imports_names_var_entropy + # - file.macho.go_stripped + # - file.pe.sections + # - file.pe.sections.name + # - file.pe.sections.physical_size + # - file.pe.sections.virtual_size + # - file.pe.sections.entropy + # - file.pe.sections.var_entropy + # - file.pe.import_hash + # - file.pe.imphash + # - file.pe.imports + # - file.pe.imports_names_entropy + # - file.pe.imports_names_var_entropy + # - file.pe.go_import_hash + # - file.pe.go_imports + # - file.pe.go_imports_names_entropy + # - file.pe.go_imports_names_var_entropy + # - file.pe.go_stripped + + + +# ================================== General =================================== + +# The name of the shipper that publishes the network data. It can be used to group +# all the transactions sent by a single shipper in the web interface. +# If this option is not defined, the hostname is used. +#name: + +# The tags of the shipper are included in their field with each +# transaction published. Tags make it easy to group servers by different +# logical properties. +#tags: ["service-X", "web-tier"] + +# Optional fields that you can specify to add additional information to the +# output. Fields can be scalar values, arrays, dictionaries, or any nested +# combination of these. +#fields: +# env: staging + +# If this option is set to true, the custom fields are stored as top-level +# fields in the output document instead of being grouped under a field +# sub-dictionary. Default is false. +#fields_under_root: false + +# Configure the precision of all timestamps in Auditbeat. +# Available options: millisecond, microsecond, nanosecond +#timestamp.precision: millisecond + +# Internal queue configuration for buffering events to be published. +# Queue settings may be overridden by performance presets in the +# Elasticsearch output. To configure them manually use "preset: custom". +#queue: + # Queue type by name (default 'mem') + # The memory queue will present all available events (up to the outputs + # bulk_max_size) to the output, the moment the output is ready to serve + # another batch of events. + #mem: + # Max number of events the queue can buffer. + #events: 3200 + + # Hints the minimum number of events stored in the queue, + # before providing a batch of events to the outputs. + # The default value is set to 2048. + # A value of 0 ensures events are immediately available + # to be sent to the outputs. 
+ #flush.min_events: 1600 + + # Maximum duration after which events are available to the outputs, + # if the number of events stored in the queue is < `flush.min_events`. + #flush.timeout: 10s + + # The disk queue stores incoming events on disk until the output is + # ready for them. This allows a higher event limit than the memory-only + # queue and lets pending events persist through a restart. + #disk: + # The directory path to store the queue's data. + #path: "${path.data}/diskqueue" + + # The maximum space the queue should occupy on disk. Depending on + # input settings, events that exceed this limit are delayed or discarded. + #max_size: 10GB + + # The maximum size of a single queue data file. Data in the queue is + # stored in smaller segments that are deleted after all their events + # have been processed. + #segment_size: 1GB + + # The number of events to read from disk to memory while waiting for + # the output to request them. + #read_ahead: 512 + + # The number of events to accept from inputs while waiting for them + # to be written to disk. If event data arrives faster than it + # can be written to disk, this setting prevents it from overflowing + # main memory. + #write_ahead: 2048 + + # The duration to wait before retrying when the queue encounters a disk + # write error. + #retry_interval: 1s + + # The maximum length of time to wait before retrying on a disk write + # error. If the queue encounters repeated errors, it will double the + # length of its retry interval each time, up to this maximum. + #max_retry_interval: 30s + +# Sets the maximum number of CPUs that can be executed simultaneously. The +# default is the number of logical CPUs available in the system. +#max_procs: + +# ================================= Processors ================================= + +# Processors are used to reduce the number of fields in the exported event or to +# enhance the event with external metadata. This section defines a list of +# processors that are applied one by one and the first one receives the initial +# event: +# +# event -> filter1 -> event1 -> filter2 ->event2 ... +# +# The supported processors are drop_fields, drop_event, include_fields, +# decode_json_fields, and add_cloud_metadata. +# +# For example, you can use the following processors to keep the fields that +# contain CPU load percentages, but remove the fields that contain CPU ticks +# values: +# +#processors: +# - include_fields: +# fields: ["cpu"] +# - drop_fields: +# fields: ["cpu.user", "cpu.system"] +# +# The following example drops the events that have the HTTP response code 200: +# +#processors: +# - drop_event: +# when: +# equals: +# http.code: 200 +# +# The following example renames the field a to b: +# +#processors: +# - rename: +# fields: +# - from: "a" +# to: "b" +# +# The following example tokenizes the string into fields: +# +#processors: +# - dissect: +# tokenizer: "%{key1} - %{key2}" +# field: "message" +# target_prefix: "dissect" +# +# The following example enriches each event with metadata from the cloud +# provider about the host machine. It works on EC2, GCE, DigitalOcean, +# Tencent Cloud, and Alibaba Cloud. +# +#processors: +# - add_cloud_metadata: ~ +# +# The following example enriches each event with the machine's local time zone +# offset from UTC. 
+# +#processors: +# - add_locale: +# format: offset +# +# The following example enriches each event with docker metadata, it matches +# given fields to an existing container id and adds info from that container: +# +#processors: +# - add_docker_metadata: +# host: "unix:///var/run/docker.sock" +# match_fields: ["system.process.cgroup.id"] +# match_pids: ["process.pid", "process.parent.pid"] +# match_source: true +# match_source_index: 4 +# match_short_id: false +# cleanup_timeout: 60 +# labels.dedot: false +# # To connect to Docker over TLS you must specify a client and CA certificate. +# #ssl: +# # certificate_authority: "/etc/pki/root/ca.pem" +# # certificate: "/etc/pki/client/cert.pem" +# # key: "/etc/pki/client/cert.key" +# +# The following example enriches each event with docker metadata, it matches +# container id from log path available in `source` field (by default it expects +# it to be /var/lib/docker/containers/*/*.log). +# +#processors: +# - add_docker_metadata: ~ +# +# The following example enriches each event with host metadata. +# +#processors: +# - add_host_metadata: ~ +# +# The following example enriches each event with process metadata using +# process IDs included in the event. +# +#processors: +# - add_process_metadata: +# match_pids: ["system.process.ppid"] +# target: system.process.parent +# +# The following example decodes fields containing JSON strings +# and replaces the strings with valid JSON objects. +# +#processors: +# - decode_json_fields: +# fields: ["field1", "field2", ...] +# process_array: false +# max_depth: 1 +# target: "" +# overwrite_keys: false +# +#processors: +# - decompress_gzip_field: +# from: "field1" +# to: "field2" +# ignore_missing: false +# fail_on_error: true +# +# The following example copies the value of the message to message_copied +# +#processors: +# - copy_fields: +# fields: +# - from: message +# to: message_copied +# fail_on_error: true +# ignore_missing: false +# +# The following example truncates the value of the message to 1024 bytes +# +#processors: +# - truncate_fields: +# fields: +# - message +# max_bytes: 1024 +# fail_on_error: false +# ignore_missing: true +# +# The following example preserves the raw message under event.original +# +#processors: +# - copy_fields: +# fields: +# - from: message +# to: event.original +# fail_on_error: false +# ignore_missing: true +# - truncate_fields: +# fields: +# - event.original +# max_bytes: 1024 +# fail_on_error: false +# ignore_missing: true +# +# The following example URL-decodes the value of field1 to field2 +# +#processors: +# - urldecode: +# fields: +# - from: "field1" +# to: "field2" +# ignore_missing: false +# fail_on_error: true + +# =============================== Elastic Cloud ================================ + +# These settings simplify using Auditbeat with the Elastic Cloud (https://cloud.elastic.co/). + +# The cloud.id setting overwrites the `output.elasticsearch.hosts` and +# `setup.kibana.host` options. +# You can find the `cloud.id` in the Elastic Cloud web UI. +#cloud.id: + +# The cloud.auth setting overwrites the `output.elasticsearch.username` and +# `output.elasticsearch.password` settings. The format is `:`. +#cloud.auth: + +# ================================== Outputs =================================== + +# Configure what output to use when sending the data collected by the beat. + +# ---------------------------- Elasticsearch Output ---------------------------- +output.elasticsearch: + # Boolean flag to enable or disable the output module. 
+ #enabled: true + + # Array of hosts to connect to. + # Scheme and port can be left out and will be set to the default (http and 9200) + # In case you specify and additional path, the scheme is required: http://localhost:9200/path + # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200 + hosts: ["localhost:9200"] + + # Performance presets configure other output fields to recommended values + # based on a performance priority. + # Options are "balanced", "throughput", "scale", "latency" and "custom". + # Default if unspecified: "custom" + preset: balanced + + # Set gzip compression level. Set to 0 to disable compression. + # This field may conflict with performance presets. To set it + # manually use "preset: custom". + # The default is 1. + #compression_level: 1 + + # Configure escaping HTML symbols in strings. + #escape_html: false + + # Protocol - either `http` (default) or `https`. + #protocol: "https" + + # Authentication credentials - either API key or username/password. + #api_key: "id:api_key" + #username: "elastic" + #password: "changeme" + + # Dictionary of HTTP parameters to pass within the URL with index operations. + #parameters: + #param1: value1 + #param2: value2 + + # Number of workers per Elasticsearch host. + # This field may conflict with performance presets. To set it + # manually use "preset: custom". + #worker: 1 + + # If set to true and multiple hosts are configured, the output plugin load + # balances published events onto all Elasticsearch hosts. If set to false, + # the output plugin sends all events to only one host (determined at random) + # and will switch to another host if the currently selected one becomes + # unreachable. The default value is true. + #loadbalance: true + + # Optional data stream or index name. The default is "auditbeat-%{[agent.version]}". + # In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly. + #index: "auditbeat-%{[agent.version]}" + + # Optional ingest pipeline. By default, no pipeline will be used. + #pipeline: "" + + # Optional HTTP path + #path: "/elasticsearch" + + # Custom HTTP headers to add to each request + #headers: + # X-My-Header: Contents of the header + + # Proxy server URL + #proxy_url: http://proxy:3128 + + # Whether to disable proxy settings for outgoing connections. If true, this + # takes precedence over both the proxy_url field and any environment settings + # (HTTP_PROXY, HTTPS_PROXY). The default is false. + #proxy_disable: false + + # The number of times a particular Elasticsearch index operation is attempted. If + # the indexing operation doesn't succeed after this many retries, the events are + # dropped. The default is 3. + #max_retries: 3 + + # The maximum number of events to bulk in a single Elasticsearch bulk API index request. + # This field may conflict with performance presets. To set it + # manually use "preset: custom". + # The default is 1600. + #bulk_max_size: 1600 + + # The number of seconds to wait before trying to reconnect to Elasticsearch + # after a network error. After waiting backoff.init seconds, the Beat + # tries to reconnect. If the attempt fails, the backoff timer is increased + # exponentially up to backoff.max. After a successful connection, the backoff + # timer is reset. The default is 1s. + #backoff.init: 1s + + # The maximum number of seconds to wait before attempting to connect to + # Elasticsearch after a network error. The default is 60s. 
+ #backoff.max: 60s + + # The maximum amount of time an idle connection will remain idle + # before closing itself. Zero means use the default of 60s. The + # format is a Go language duration (example 60s is 60 seconds). + # This field may conflict with performance presets. To set it + # manually use "preset: custom". + # The default is 3s. + # idle_connection_timeout: 3s + + # Configure HTTP request timeout before failing a request to Elasticsearch. + #timeout: 90 + + # Prevents auditbeat from connecting to older Elasticsearch versions when set to `false` + #allow_older_versions: true + + # Use SSL settings for HTTPS. + #ssl.enabled: true + + # Controls the verification of certificates. Valid values are: + # * full, which verifies that the provided certificate is signed by a trusted + # authority (CA) and also verifies that the server's hostname (or IP address) + # matches the names identified within the certificate. + # * strict, which verifies that the provided certificate is signed by a trusted + # authority (CA) and also verifies that the server's hostname (or IP address) + # matches the names identified within the certificate. If the Subject Alternative + # Name is empty, it returns an error. + # * certificate, which verifies that the provided certificate is signed by a + # trusted authority (CA), but does not perform any hostname verification. + # * none, which performs no verification of the server's certificate. This + # mode disables many of the security benefits of SSL/TLS and should only be used + # after very careful consideration. It is primarily intended as a temporary + # diagnostic mechanism when attempting to resolve TLS errors; its use in + # production environments is strongly discouraged. + # The default value is full. + #ssl.verification_mode: full + + # List of supported/valid TLS versions. By default all TLS versions from 1.1 + # up to 1.3 are enabled. + #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3] + + # List of root certificates for HTTPS server verifications + #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] + + # Certificate for SSL client authentication + #ssl.certificate: "/etc/pki/client/cert.pem" + + # Client certificate key + #ssl.key: "/etc/pki/client/cert.key" + + # Optional passphrase for decrypting the certificate key. + #ssl.key_passphrase: '' + + # Configure cipher suites to be used for SSL connections + #ssl.cipher_suites: [] + + # Configure curve types for ECDHE-based cipher suites + #ssl.curve_types: [] + + # Configure what types of renegotiation are supported. Valid options are + # never, once, and freely. Default is never. + #ssl.renegotiation: never + + # Configure a pin that can be used to do extra validation of the verified certificate chain, + # this allow you to ensure that a specific certificate is used to validate the chain of trust. + # + # The pin is a base64 encoded string of the SHA-256 fingerprint. + #ssl.ca_sha256: "" + + # A root CA HEX encoded fingerprint. During the SSL handshake if the + # fingerprint matches the root CA certificate, it will be added to + # the provided list of root CAs (`certificate_authorities`), if the + # list is empty or not defined, the matching certificate will be the + # only one in the list. Then the normal SSL validation happens. + #ssl.ca_trusted_fingerprint: "" + + + # Enables restarting auditbeat if any file listed by `key`, + # `certificate`, or `certificate_authorities` is modified. + # This feature IS NOT supported on Windows. 
+ #ssl.restart_on_cert_change.enabled: false + + # Period to scan for changes on CA certificate files + #ssl.restart_on_cert_change.period: 1m + + # Enable Kerberos support. Kerberos is automatically enabled if any Kerberos setting is set. + #kerberos.enabled: true + + # Authentication type to use with Kerberos. Available options: keytab, password. + #kerberos.auth_type: password + + # Path to the keytab file. It is used when auth_type is set to keytab. + #kerberos.keytab: /etc/elastic.keytab + + # Path to the Kerberos configuration. + #kerberos.config_path: /etc/krb5.conf + + # Name of the Kerberos user. + #kerberos.username: elastic + + # Password of the Kerberos user. It is used when auth_type is set to password. + #kerberos.password: changeme + + # Kerberos realm. + #kerberos.realm: ELASTIC + + +# ------------------------------ Logstash Output ------------------------------- +#output.logstash: + # Boolean flag to enable or disable the output module. + #enabled: true + + # The Logstash hosts + #hosts: ["localhost:5044"] + + # Number of workers per Logstash host. + #worker: 1 + + # Set gzip compression level. + #compression_level: 3 + + # Configure escaping HTML symbols in strings. + #escape_html: false + + # Optional maximum time to live for a connection to Logstash, after which the + # connection will be re-established. A value of `0s` (the default) will + # disable this feature. + # + # Not yet supported for async connections (i.e. with the "pipelining" option set) + #ttl: 30s + + # Optionally load-balance events between Logstash hosts. Default is false. + #loadbalance: false + + # Number of batches to be sent asynchronously to Logstash while processing + # new batches. + #pipelining: 2 + + # If enabled only a subset of events in a batch of events is transferred per + # transaction. The number of events to be sent increases up to `bulk_max_size` + # if no error is encountered. + #slow_start: false + + # The number of seconds to wait before trying to reconnect to Logstash + # after a network error. After waiting backoff.init seconds, the Beat + # tries to reconnect. If the attempt fails, the backoff timer is increased + # exponentially up to backoff.max. After a successful connection, the backoff + # timer is reset. The default is 1s. + #backoff.init: 1s + + # The maximum number of seconds to wait before attempting to connect to + # Logstash after a network error. The default is 60s. + #backoff.max: 60s + + # Optional index name. The default index name is set to auditbeat + # in all lowercase. + #index: 'auditbeat' + + # SOCKS5 proxy server URL + #proxy_url: socks5://user:password@socks5-server:2233 + + # Resolve names locally when using a proxy server. Defaults to false. + #proxy_use_local_resolver: false + + # Use SSL settings for HTTPS. + #ssl.enabled: true + + # Controls the verification of certificates. Valid values are: + # * full, which verifies that the provided certificate is signed by a trusted + # authority (CA) and also verifies that the server's hostname (or IP address) + # matches the names identified within the certificate. + # * strict, which verifies that the provided certificate is signed by a trusted + # authority (CA) and also verifies that the server's hostname (or IP address) + # matches the names identified within the certificate. If the Subject Alternative + # Name is empty, it returns an error. + # * certificate, which verifies that the provided certificate is signed by a + # trusted authority (CA), but does not perform any hostname verification. 
+ # * none, which performs no verification of the server's certificate. This + # mode disables many of the security benefits of SSL/TLS and should only be used + # after very careful consideration. It is primarily intended as a temporary + # diagnostic mechanism when attempting to resolve TLS errors; its use in + # production environments is strongly discouraged. + # The default value is full. + #ssl.verification_mode: full + + # List of supported/valid TLS versions. By default all TLS versions from 1.1 + # up to 1.3 are enabled. + #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3] + + # List of root certificates for HTTPS server verifications + #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] + + # Certificate for SSL client authentication + #ssl.certificate: "/etc/pki/client/cert.pem" + + # Client certificate key + #ssl.key: "/etc/pki/client/cert.key" + + # Optional passphrase for decrypting the certificate key. + #ssl.key_passphrase: '' + + # Configure cipher suites to be used for SSL connections + #ssl.cipher_suites: [] + + # Configure curve types for ECDHE-based cipher suites + #ssl.curve_types: [] + + # Configure what types of renegotiation are supported. Valid options are + # never, once, and freely. Default is never. + #ssl.renegotiation: never + + # Configure a pin that can be used to do extra validation of the verified certificate chain, + # this allow you to ensure that a specific certificate is used to validate the chain of trust. + # + # The pin is a base64 encoded string of the SHA-256 fingerprint. + #ssl.ca_sha256: "" + + # A root CA HEX encoded fingerprint. During the SSL handshake if the + # fingerprint matches the root CA certificate, it will be added to + # the provided list of root CAs (`certificate_authorities`), if the + # list is empty or not defined, the matching certificate will be the + # only one in the list. Then the normal SSL validation happens. + #ssl.ca_trusted_fingerprint: "" + + # Enables restarting auditbeat if any file listed by `key`, + # `certificate`, or `certificate_authorities` is modified. + # This feature IS NOT supported on Windows. + #ssl.restart_on_cert_change.enabled: false + + # Period to scan for changes on CA certificate files + #ssl.restart_on_cert_change.period: 1m + + # The number of times to retry publishing an event after a publishing failure. + # After the specified number of retries, the events are typically dropped. + # Some Beats, such as Filebeat and Winlogbeat, ignore the max_retries setting + # and retry until all events are published. Set max_retries to a value less + # than 0 to retry until all events are published. The default is 3. + #max_retries: 3 + + # The maximum number of events to bulk in a single Logstash request. The + # default is 2048. + #bulk_max_size: 2048 + + # The number of seconds to wait for responses from the Logstash server before + # timing out. The default is 30s. + #timeout: 30s + +# -------------------------------- Kafka Output -------------------------------- +#output.kafka: + # Boolean flag to enable or disable the output module. + #enabled: true + + # The list of Kafka broker addresses from which to fetch the cluster metadata. + # The cluster metadata contain the actual Kafka brokers events are published + # to. + #hosts: ["localhost:9092"] + + # The Kafka topic used for produced events. The setting can be a format string + # using any event field. To set the topic from document type use `%{[type]}`. + #topic: beats + + # The Kafka event key setting. 
Use format string to create a unique event key. + # By default no event key will be generated. + #key: '' + + # The Kafka event partitioning strategy. Default hashing strategy is `hash` + # using the `output.kafka.key` setting or randomly distributes events if + # `output.kafka.key` is not configured. + #partition.hash: + # If enabled, events will only be published to partitions with reachable + # leaders. Default is false. + #reachable_only: false + + # Configure alternative event field names used to compute the hash value. + # If empty `output.kafka.key` setting will be used. + # Default value is empty list. + #hash: [] + + # Authentication details. Password is required if username is set. + #username: '' + #password: '' + + # SASL authentication mechanism used. Can be one of PLAIN, SCRAM-SHA-256 or SCRAM-SHA-512. + # Defaults to PLAIN when `username` and `password` are configured. + #sasl.mechanism: '' + + # Kafka version Auditbeat is assumed to run against. Defaults to the "1.0.0". + #version: '1.0.0' + + # Configure JSON encoding + #codec.json: + # Pretty-print JSON event + #pretty: false + + # Configure escaping HTML symbols in strings. + #escape_html: false + + # Metadata update configuration. Metadata contains leader information + # used to decide which broker to use when publishing. + #metadata: + # Max metadata request retry attempts when cluster is in middle of leader + # election. Defaults to 3 retries. + #retry.max: 3 + + # Wait time between retries during leader elections. Default is 250ms. + #retry.backoff: 250ms + + # Refresh metadata interval. Defaults to every 10 minutes. + #refresh_frequency: 10m + + # Strategy for fetching the topics metadata from the broker. Default is false. + #full: false + + # The number of times to retry publishing an event after a publishing failure. + # After the specified number of retries, events are typically dropped. + # Some Beats, such as Filebeat, ignore the max_retries setting and retry until + # all events are published. Set max_retries to a value less than 0 to retry + # until all events are published. The default is 3. + #max_retries: 3 + + # The number of seconds to wait before trying to republish to Kafka + # after a network error. After waiting backoff.init seconds, the Beat + # tries to republish. If the attempt fails, the backoff timer is increased + # exponentially up to backoff.max. After a successful publish, the backoff + # timer is reset. The default is 1s. + #backoff.init: 1s + + # The maximum number of seconds to wait before attempting to republish to + # Kafka after a network error. The default is 60s. + #backoff.max: 60s + + # The maximum number of events to bulk in a single Kafka request. The default + # is 2048. + #bulk_max_size: 2048 + + # Duration to wait before sending bulk Kafka request. 0 is no delay. The default + # is 0. + #bulk_flush_frequency: 0s + + # The number of seconds to wait for responses from the Kafka brokers before + # timing out. The default is 30s. + #timeout: 30s + + # The maximum duration a broker will wait for number of required ACKs. The + # default is 10s. + #broker_timeout: 10s + + # The number of messages buffered for each Kafka broker. The default is 256. + #channel_buffer_size: 256 + + # The keep-alive period for an active network connection. If 0s, keep-alives + # are disabled. The default is 0 seconds. + #keep_alive: 0 + + # Sets the output compression codec. Must be one of none, snappy and gzip. The + # default is gzip. + #compression: gzip + + # Set the compression level. 
Currently only gzip provides a compression level + # between 0 and 9. The default value is chosen by the compression algorithm. + #compression_level: 4 + + # The maximum permitted size of JSON-encoded messages. Bigger messages will be + # dropped. The default value is 1000000 (bytes). This value should be equal to + # or less than the broker's message.max.bytes. + #max_message_bytes: 1000000 + + # The ACK reliability level required from broker. 0=no response, 1=wait for + # local commit, -1=wait for all replicas to commit. The default is 1. Note: + # If set to 0, no ACKs are returned by Kafka. Messages might be lost silently + # on error. + #required_acks: 1 + + # The configurable ClientID used for logging, debugging, and auditing + # purposes. The default is "beats". + #client_id: beats + + # Use SSL settings for HTTPS. + #ssl.enabled: true + + # Controls the verification of certificates. Valid values are: + # * full, which verifies that the provided certificate is signed by a trusted + # authority (CA) and also verifies that the server's hostname (or IP address) + # matches the names identified within the certificate. + # * strict, which verifies that the provided certificate is signed by a trusted + # authority (CA) and also verifies that the server's hostname (or IP address) + # matches the names identified within the certificate. If the Subject Alternative + # Name is empty, it returns an error. + # * certificate, which verifies that the provided certificate is signed by a + # trusted authority (CA), but does not perform any hostname verification. + # * none, which performs no verification of the server's certificate. This + # mode disables many of the security benefits of SSL/TLS and should only be used + # after very careful consideration. It is primarily intended as a temporary + # diagnostic mechanism when attempting to resolve TLS errors; its use in + # production environments is strongly discouraged. + # The default value is full. + #ssl.verification_mode: full + + # List of supported/valid TLS versions. By default all TLS versions from 1.1 + # up to 1.3 are enabled. + #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3] + + # List of root certificates for HTTPS server verifications + #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] + + # Certificate for SSL client authentication + #ssl.certificate: "/etc/pki/client/cert.pem" + + # Client certificate key + #ssl.key: "/etc/pki/client/cert.key" + + # Optional passphrase for decrypting the certificate key. + #ssl.key_passphrase: '' + + # Configure cipher suites to be used for SSL connections + #ssl.cipher_suites: [] + + # Configure curve types for ECDHE-based cipher suites + #ssl.curve_types: [] + + # Configure what types of renegotiation are supported. Valid options are + # never, once, and freely. Default is never. + #ssl.renegotiation: never + + # Configure a pin that can be used to do extra validation of the verified certificate chain, + # this allow you to ensure that a specific certificate is used to validate the chain of trust. + # + # The pin is a base64 encoded string of the SHA-256 fingerprint. + #ssl.ca_sha256: "" + + # A root CA HEX encoded fingerprint. During the SSL handshake if the + # fingerprint matches the root CA certificate, it will be added to + # the provided list of root CAs (`certificate_authorities`), if the + # list is empty or not defined, the matching certificate will be the + # only one in the list. Then the normal SSL validation happens. 
+ #ssl.ca_trusted_fingerprint: "" + + # Enables restarting auditbeat if any file listed by `key`, + # `certificate`, or `certificate_authorities` is modified. + # This feature IS NOT supported on Windows. + #ssl.restart_on_cert_change.enabled: false + + # Period to scan for changes on CA certificate files + #ssl.restart_on_cert_change.period: 1m + + # Enable Kerberos support. Kerberos is automatically enabled if any Kerberos setting is set. + #kerberos.enabled: true + + # Authentication type to use with Kerberos. Available options: keytab, password. + #kerberos.auth_type: password + + # Path to the keytab file. It is used when auth_type is set to keytab. + #kerberos.keytab: /etc/security/keytabs/kafka.keytab + + # Path to the Kerberos configuration. + #kerberos.config_path: /etc/krb5.conf + + # The service name. Service principal name is contructed from + # service_name/hostname@realm. + #kerberos.service_name: kafka + + # Name of the Kerberos user. + #kerberos.username: elastic + + # Password of the Kerberos user. It is used when auth_type is set to password. + #kerberos.password: changeme + + # Kerberos realm. + #kerberos.realm: ELASTIC + + # Enables Kerberos FAST authentication. This may + # conflict with certain Active Directory configurations. + #kerberos.enable_krb5_fast: false + +# -------------------------------- Redis Output -------------------------------- +#output.redis: + # Boolean flag to enable or disable the output module. + #enabled: true + + # Configure JSON encoding + #codec.json: + # Pretty print json event + #pretty: false + + # Configure escaping HTML symbols in strings. + #escape_html: false + + # The list of Redis servers to connect to. If load-balancing is enabled, the + # events are distributed to the servers in the list. If one server becomes + # unreachable, the events are distributed to the reachable servers only. + # The hosts setting supports redis and rediss urls with custom password like + # redis://:password@localhost:6379. + #hosts: ["localhost:6379"] + + # The name of the Redis list or channel the events are published to. The + # default is auditbeat. + #key: auditbeat + + # The password to authenticate to Redis with. The default is no authentication. + #password: + + # The Redis database number where the events are published. The default is 0. + #db: 0 + + # The Redis data type to use for publishing events. If the data type is list, + # the Redis RPUSH command is used. If the data type is channel, the Redis + # PUBLISH command is used. The default value is list. + #datatype: list + + # The number of workers to use for each host configured to publish events to + # Redis. Use this setting along with the loadbalance option. For example, if + # you have 2 hosts and 3 workers, in total 6 workers are started (3 for each + # host). + #worker: 1 + + # If set to true and multiple hosts or workers are configured, the output + # plugin load balances published events onto all Redis hosts. If set to false, + # the output plugin sends all events to only one host (determined at random) + # and will switch to another host if the currently selected one becomes + # unreachable. The default value is true. + #loadbalance: true + + # The Redis connection timeout in seconds. The default is 5 seconds. + #timeout: 5s + + # The number of times to retry publishing an event after a publishing failure. + # After the specified number of retries, the events are typically dropped. 
+ # Some Beats, such as Filebeat, ignore the max_retries setting and retry until + # all events are published. Set max_retries to a value less than 0 to retry + # until all events are published. The default is 3. + #max_retries: 3 + + # The number of seconds to wait before trying to reconnect to Redis + # after a network error. After waiting backoff.init seconds, the Beat + # tries to reconnect. If the attempt fails, the backoff timer is increased + # exponentially up to backoff.max. After a successful connection, the backoff + # timer is reset. The default is 1s. + #backoff.init: 1s + + # The maximum number of seconds to wait before attempting to connect to + # Redis after a network error. The default is 60s. + #backoff.max: 60s + + # The maximum number of events to bulk in a single Redis request or pipeline. + # The default is 2048. + #bulk_max_size: 2048 + + # The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The + # value must be a URL with a scheme of socks5://. + #proxy_url: + + # This option determines whether Redis hostnames are resolved locally when + # using a proxy. The default value is false, which means that name resolution + # occurs on the proxy server. + #proxy_use_local_resolver: false + + # Use SSL settings for HTTPS. + #ssl.enabled: true + + # Controls the verification of certificates. Valid values are: + # * full, which verifies that the provided certificate is signed by a trusted + # authority (CA) and also verifies that the server's hostname (or IP address) + # matches the names identified within the certificate. + # * strict, which verifies that the provided certificate is signed by a trusted + # authority (CA) and also verifies that the server's hostname (or IP address) + # matches the names identified within the certificate. If the Subject Alternative + # Name is empty, it returns an error. + # * certificate, which verifies that the provided certificate is signed by a + # trusted authority (CA), but does not perform any hostname verification. + # * none, which performs no verification of the server's certificate. This + # mode disables many of the security benefits of SSL/TLS and should only be used + # after very careful consideration. It is primarily intended as a temporary + # diagnostic mechanism when attempting to resolve TLS errors; its use in + # production environments is strongly discouraged. + # The default value is full. + #ssl.verification_mode: full + + # List of supported/valid TLS versions. By default all TLS versions from 1.1 + # up to 1.3 are enabled. + #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3] + + # List of root certificates for HTTPS server verifications + #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] + + # Certificate for SSL client authentication + #ssl.certificate: "/etc/pki/client/cert.pem" + + # Client certificate key + #ssl.key: "/etc/pki/client/cert.key" + + # Optional passphrase for decrypting the certificate key. + #ssl.key_passphrase: '' + + # Configure cipher suites to be used for SSL connections + #ssl.cipher_suites: [] + + # Configure curve types for ECDHE-based cipher suites + #ssl.curve_types: [] + + # Configure what types of renegotiation are supported. Valid options are + # never, once, and freely. Default is never. + #ssl.renegotiation: never + + # Configure a pin that can be used to do extra validation of the verified certificate chain, + # this allow you to ensure that a specific certificate is used to validate the chain of trust. 
+  #
+  # The pin is a base64 encoded string of the SHA-256 fingerprint.
+  #ssl.ca_sha256: ""
+
+  # A root CA HEX encoded fingerprint. During the SSL handshake, if the
+  # fingerprint matches the root CA certificate, it will be added to
+  # the provided list of root CAs (`certificate_authorities`). If the
+  # list is empty or not defined, the matching certificate will be the
+  # only one in the list. Then the normal SSL validation happens.
+  #ssl.ca_trusted_fingerprint: ""
+
+
+# -------------------------------- File Output ---------------------------------
+#output.file:
+  # Boolean flag to enable or disable the output module.
+  #enabled: true
+
+  # Configure JSON encoding
+  #codec.json:
+    # Pretty-print JSON event
+    #pretty: false
+
+    # Configure escaping HTML symbols in strings.
+    #escape_html: false
+
+  # Path to the directory where the generated files are saved. This option is
+  # mandatory.
+  #path: "/tmp/auditbeat"
+
+  # Name of the generated files. The default is `auditbeat` and it generates
+  # files: `auditbeat-{datetime}.ndjson`, `auditbeat-{datetime}-1.ndjson`, etc.
+  #filename: auditbeat
+
+  # Maximum size in kilobytes of each file. When this size is reached, and on
+  # every Auditbeat restart, the files are rotated. The default value is 10240
+  # kB.
+  #rotate_every_kb: 10000
+
+  # Maximum number of files under path. When this number of files is reached,
+  # the oldest file is deleted and the rest are shifted from last to first. The
+  # default is 7 files.
+  #number_of_files: 7
+
+  # Permissions to use for file creation. The default is 0600.
+  #permissions: 0600
+
+  # Configure automatic file rotation on every startup. The default is true.
+  #rotate_on_startup: true
+
+# ------------------------------- Console Output -------------------------------
+#output.console:
+  # Boolean flag to enable or disable the output module.
+  #enabled: true
+
+  # Configure JSON encoding
+  #codec.json:
+    # Pretty-print JSON event
+    #pretty: false
+
+    # Configure escaping HTML symbols in strings.
+    #escape_html: false
+
+# =================================== Paths ====================================
+
+# The home path for the Auditbeat installation. This is the default base path
+# for all other path settings and for miscellaneous files that come with the
+# distribution (for example, the sample dashboards).
+# If not set by a CLI flag or in the configuration file, the default for the
+# home path is the location of the binary.
+#path.home:
+
+# The configuration path for the Auditbeat installation. This is the default
+# base path for configuration files, including the main YAML configuration file
+# and the Elasticsearch template file. If not set by a CLI flag or in the
+# configuration file, the default for the configuration path is the home path.
+#path.config: ${path.home}
+
+# The data path for the Auditbeat installation. This is the default base path
+# for all the files in which Auditbeat needs to store its data. If not set by a
+# CLI flag or in the configuration file, the default for the data path is a data
+# subdirectory inside the home path.
+#path.data: ${path.home}/data
+
+# The logs path for an Auditbeat installation. This is the default location for
+# the Beat's log files. If not set by a CLI flag or in the configuration file,
+# the default for the logs path is a logs subdirectory inside the home path.
+#path.logs: ${path.home}/logs
+
+# ================================== Keystore ==================================
+
+# Location of the Keystore containing the keys and their sensitive values.
+#keystore.path: "${path.config}/beats.keystore"
+
+# ================================= Dashboards =================================
+
+# These settings control loading the sample dashboards to the Kibana index. Loading
+# the dashboards is disabled by default and can be enabled either by setting the
+# options here or by using the `-setup` CLI flag or the `setup` command.
+#setup.dashboards.enabled: false
+
+# The directory from where to read the dashboards. The default is the `kibana`
+# folder in the home path.
+#setup.dashboards.directory: ${path.home}/kibana
+
+# The URL from where to download the dashboard archive. It is used instead of
+# the directory if it has a value.
+#setup.dashboards.url:
+
+# The file archive (zip file) from where to read the dashboards. It is used instead
+# of the directory when it has a value.
+#setup.dashboards.file:
+
+# In case the archive contains the dashboards from multiple Beats, this lets you
+# select which one to load. You can load all the dashboards in the archive by
+# setting this to the empty string.
+#setup.dashboards.beat: auditbeat
+
+# The name of the Kibana index to use for setting the configuration. Default is ".kibana".
+#setup.dashboards.kibana_index: .kibana
+
+# The Elasticsearch index name. This overwrites the index name defined in the
+# dashboards and index pattern. Example: testbeat-*
+#setup.dashboards.index:
+
+# Always use the Kibana API for loading the dashboards instead of autodetecting
+# how to install the dashboards by first querying Elasticsearch.
+#setup.dashboards.always_kibana: false
+
+# If true and Kibana is not reachable at the time when dashboards are loaded,
+# it will retry reconnecting to Kibana instead of exiting with an error.
+#setup.dashboards.retry.enabled: false
+
+# Duration interval between Kibana connection retries.
+#setup.dashboards.retry.interval: 1s
+
+# Maximum number of retries before exiting with an error, 0 for unlimited retrying.
+#setup.dashboards.retry.maximum: 0
+
+# ================================== Template ==================================
+
+# A template is used to set the mapping in Elasticsearch.
+# By default template loading is enabled and the template is loaded.
+# These settings can be adjusted to load your own template or overwrite existing ones.
+
+# Set to false to disable template loading.
+#setup.template.enabled: true
+
+# Template name. By default the template name is "auditbeat-%{[agent.version]}".
+# The template name and pattern have to be set in case the Elasticsearch index pattern is modified.
+#setup.template.name: "auditbeat-%{[agent.version]}"
+
+# Template pattern. By default the template pattern is "auditbeat-%{[agent.version]}" to apply to the default index settings.
+# The template name and pattern have to be set in case the Elasticsearch index pattern is modified.
+#setup.template.pattern: "auditbeat-%{[agent.version]}"
+
+# Path to fields.yml file to generate the template
+#setup.template.fields: "${path.config}/fields.yml"
+
+# A list of fields to be added to the template and Kibana index pattern. Also
+# specify setup.template.overwrite: true to overwrite the existing template.
+#setup.template.append_fields:
+#- name: field_name
+#  type: field_type
+
+# Enable JSON template loading. If this is enabled, the fields.yml is ignored.
+#setup.template.json.enabled: false
+
+# Path to the JSON template file
+#setup.template.json.path: "${path.config}/template.json"
+
+# Name under which the template is stored in Elasticsearch
+#setup.template.json.name: ""
+
+# Set this option if the JSON template is a data stream.
+#setup.template.json.data_stream: false
+
+# Overwrite existing template.
+# Do not enable this option for more than one instance of auditbeat as it might
+# overload your Elasticsearch with too many update requests.
+#setup.template.overwrite: false
+
+# Elasticsearch template settings
+setup.template.settings:
+
+  # A dictionary of settings to place into the settings.index dictionary
+  # of the Elasticsearch template. For more details, please check
+  # https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html
+  #index:
+    #number_of_shards: 1
+    #codec: best_compression
+
+  # A dictionary of settings for the _source field. For more details, please check
+  # https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-source-field.html
+  #_source:
+    #enabled: false
+
+# ====================== Index Lifecycle Management (ILM) ======================
+
+# Configure index lifecycle management (ILM) to manage the backing indices
+# of your data streams.
+
+# Enable ILM support. Valid values are true or false.
+#setup.ilm.enabled: true
+
+# Set the lifecycle policy name. The default policy name is
+# 'auditbeat'.
+#setup.ilm.policy_name: "mypolicy"
+
+# The path to a JSON file that contains a lifecycle policy configuration. Used
+# to load your own lifecycle policy.
+#setup.ilm.policy_file:
+
+# Disable the check for an existing lifecycle policy. The default is true.
+# If you set this option to false, the lifecycle policy will not be installed,
+# even if setup.ilm.overwrite is set to true.
+#setup.ilm.check_exists: true
+
+# Overwrite the lifecycle policy at startup. The default is false.
+#setup.ilm.overwrite: false
+
+# ======================== Data Stream Lifecycle (DSL) =========================
+
+# Configure Data Stream Lifecycle to manage data streams while connected to Serverless Elasticsearch.
+# These settings are mutually exclusive with ILM settings, which are not supported in Serverless projects.
+
+# Enable DSL support. Valid values are true or false.
+#setup.dsl.enabled: true
+
+# Set the lifecycle policy name or pattern. For DSL, this name must match the data stream that the lifecycle is for.
+# The default data stream pattern is "auditbeat-%{[agent.version]}".
+# The template string `%{[agent.version]}` will resolve to the current stack version.
+# The other possible template value is `%{[beat.name]}`.
+#setup.dsl.data_stream_pattern: "auditbeat-%{[agent.version]}"
+
+# The path to a JSON file that contains a lifecycle policy configuration. Used
+# to load your own lifecycle policy.
+# If no custom policy is specified, a default policy with a lifetime of 7 days will be created.
+#setup.dsl.policy_file:
+
+# Disable the check for an existing lifecycle policy. The default is true. If
+# you disable this check, set setup.dsl.overwrite: true so the lifecycle policy
+# can be installed.
+#setup.dsl.check_exists: true
+
+# Overwrite the lifecycle policy at startup. The default is false.
+#setup.dsl.overwrite: false
+
+# =================================== Kibana ===================================
+
+# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
+# This requires a Kibana endpoint configuration.
+setup.kibana:
+
+  # Kibana Host
+  # Scheme and port can be left out and will be set to the default (http and 5601)
+  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
+  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
+  #host: "localhost:5601"
+
+  # Optional protocol and basic auth credentials.
+  #protocol: "https"
+  #username: "elastic"
+  #password: "changeme"
+
+  # Optional HTTP path
+  #path: ""
+
+  # Optional Kibana space ID.
+  #space.id: ""
+
+  # Custom HTTP headers to add to each request
+  #headers:
+  #  X-My-Header: Contents of the header
+
+  # Use SSL settings for HTTPS.
+  #ssl.enabled: true
+
+  # Controls the verification of certificates. Valid values are:
+  # * full, which verifies that the provided certificate is signed by a trusted
+  # authority (CA) and also verifies that the server's hostname (or IP address)
+  # matches the names identified within the certificate.
+  # * strict, which verifies that the provided certificate is signed by a trusted
+  # authority (CA) and also verifies that the server's hostname (or IP address)
+  # matches the names identified within the certificate. If the Subject Alternative
+  # Name is empty, it returns an error.
+  # * certificate, which verifies that the provided certificate is signed by a
+  # trusted authority (CA), but does not perform any hostname verification.
+  # * none, which performs no verification of the server's certificate. This
+  # mode disables many of the security benefits of SSL/TLS and should only be used
+  # after very careful consideration. It is primarily intended as a temporary
+  # diagnostic mechanism when attempting to resolve TLS errors; its use in
+  # production environments is strongly discouraged.
+  # The default value is full.
+  #ssl.verification_mode: full
+
+  # List of supported/valid TLS versions. By default all TLS versions from 1.1
+  # up to 1.3 are enabled.
+  #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]
+
+  # List of root certificates for HTTPS server verifications
+  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+  # Certificate for SSL client authentication
+  #ssl.certificate: "/etc/pki/client/cert.pem"
+
+  # Client certificate key
+  #ssl.key: "/etc/pki/client/cert.key"
+
+  # Optional passphrase for decrypting the certificate key.
+  #ssl.key_passphrase: ''
+
+  # Configure cipher suites to be used for SSL connections
+  #ssl.cipher_suites: []
+
+  # Configure curve types for ECDHE-based cipher suites
+  #ssl.curve_types: []
+
+  # Configure what types of renegotiation are supported. Valid options are
+  # never, once, and freely. Default is never.
+  #ssl.renegotiation: never
+
+  # Configure a pin that can be used to do extra validation of the verified certificate chain.
+  # This allows you to ensure that a specific certificate is used to validate the chain of trust.
+  #
+  # The pin is a base64 encoded string of the SHA-256 fingerprint.
+  #ssl.ca_sha256: ""
+
+  # A root CA HEX encoded fingerprint. During the SSL handshake, if the
+  # fingerprint matches the root CA certificate, it will be added to
+  # the provided list of root CAs (`certificate_authorities`). If the
+  # list is empty or not defined, the matching certificate will be the
+  # only one in the list. Then the normal SSL validation happens.
+  #ssl.ca_trusted_fingerprint: ""
+
+
+# ================================== Logging ===================================
+
+# There are four options for the log output: file, stderr, syslog, eventlog.
+# The file output is the default.
+
+# Sets log level. The default log level is info.
+# Available log levels are: error, warning, info, debug
+#logging.level: info
+
+# Enable debug output for selected components. To enable all selectors use ["*"]
+# Other available selectors are "beat", "publisher", "service"
+# Multiple selectors can be chained.
+#logging.selectors: [ ]
+
+# Send all logging output to stderr. The default is false.
+#logging.to_stderr: false
+
+# Send all logging output to syslog. The default is false.
+#logging.to_syslog: false
+
+# Send all logging output to Windows Event Logs. The default is false.
+#logging.to_eventlog: false
+
+# If enabled, Auditbeat periodically logs its internal metrics that have changed
+# in the last period. For each metric that changed, the delta from the value at
+# the beginning of the period is logged. Also, the total values for
+# all non-zero internal metrics are logged on shutdown. The default is true.
+#logging.metrics.enabled: true
+
+# The period after which to log the internal metrics. The default is 30s.
+#logging.metrics.period: 30s
+
+# A list of metrics namespaces to report in the logs. Defaults to [stats].
+# `stats` contains general Beat metrics. `dataset` may be present in some
+# Beats and contains module or input metrics.
+#logging.metrics.namespaces: [stats]
+
+# Logging to rotating files. Set logging.to_files to false to disable logging to
+# files.
+logging.to_files: true
+logging.files:
+  # Configure the path where the logs are written. The default is the logs directory
+  # under the home path (the binary location).
+  #path: /var/log/auditbeat
+
+  # The name of the files where the logs are written to.
+  #name: auditbeat
+
+  # Configure log file size limit. If the limit is reached, the log file will be
+  # automatically rotated.
+  #rotateeverybytes: 10485760 # = 10MB
+
+  # Number of rotated log files to keep. The oldest files will be deleted first.
+  #keepfiles: 7
+
+  # The permissions mask to apply when rotating log files. The default value is 0600.
+  # Must be a valid Unix-style file permissions mask expressed in octal notation.
+  #permissions: 0600
+
+  # Enable log file rotation on time intervals in addition to the size-based rotation.
+  # Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h
+  # are boundary-aligned with minutes, hours, days, weeks, months, and years as
+  # reported by the local system clock. All other intervals are calculated from the
+  # Unix epoch. Defaults to disabled.
+  #interval: 0
+
+  # Rotate existing logs on startup rather than appending them to the existing
+  # file. Defaults to true.
+  # rotateonstartup: true
+
+#=============================== Events Logging ===============================
+# Some outputs will log raw events on errors, like indexing errors in the
+# Elasticsearch output. To prevent logging raw events (that may contain
+# sensitive information) together with other log messages, a different
+# log file, used only for log entries containing raw events, is used. It
+# uses the same level, selectors, and all other configuration from the
+# default logger, but it has its own file configuration.
+#
+# Having a different log file for raw events also prevents event data
+# from drowning out the regular log files.
+#
+# IMPORTANT: No matter the default logger output configuration, raw events
+# will **always** be logged to a file configured by `logging.event_data.files`.
+
+# logging.event_data:
+# Logging to rotating files. Set logging.to_files to false to disable logging to
+# files.
+#logging.event_data.to_files: true +#logging.event_data: + # Configure the path where the logs are written. The default is the logs directory + # under the home path (the binary location). + #path: /var/log/auditbeat + + # The name of the files where the logs are written to. + #name: auditbeat-events-data + + # Configure log file size limit. If the limit is reached, log file will be + # automatically rotated. + #rotateeverybytes: 5242880 # = 5MB + + # Number of rotated log files to keep. The oldest files will be deleted first. + #keepfiles: 2 + + # The permissions mask to apply when rotating log files. The default value is 0600. + # Must be a valid Unix-style file permissions mask expressed in octal notation. + #permissions: 0600 + + # Enable log file rotation on time intervals in addition to the size-based rotation. + # Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h + # are boundary-aligned with minutes, hours, days, weeks, months, and years as + # reported by the local system clock. All other intervals are calculated from the + # Unix epoch. Defaults to disabled. + #interval: 0 + + # Rotate existing logs on startup rather than appending them to the existing + # file. Defaults to false. + # rotateonstartup: false + +# ============================= X-Pack Monitoring ============================== +# Auditbeat can export internal metrics to a central Elasticsearch monitoring +# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The +# reporting is disabled by default. + +# Set to true to enable the monitoring reporter. +#monitoring.enabled: false + +# Sets the UUID of the Elasticsearch cluster under which monitoring data for this +# Auditbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch +# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch. +#monitoring.cluster_uuid: + +# Uncomment to send the metrics to Elasticsearch. Most settings from the +# Elasticsearch output are accepted here as well. +# Note that the settings should point to your Elasticsearch *monitoring* cluster. +# Any setting that is not set is automatically inherited from the Elasticsearch +# output configuration, so if you have the Elasticsearch output configured such +# that it is pointing to your Elasticsearch monitoring cluster, you can simply +# uncomment the following line. +#monitoring.elasticsearch: + + # Array of hosts to connect to. + # Scheme and port can be left out and will be set to the default (http and 9200) + # In case you specify an additional path, the scheme is required: http://localhost:9200/path + # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200 + #hosts: ["localhost:9200"] + + # Set gzip compression level. + #compression_level: 0 + + # Protocol - either `http` (default) or `https`. + #protocol: "https" + + # Authentication credentials - either API key or username/password. + #api_key: "id:api_key" + #username: "beats_system" + #password: "changeme" + + # Dictionary of HTTP parameters to pass within the URL with index operations. + #parameters: + #param1: value1 + #param2: value2 + + # Custom HTTP headers to add to each request + #headers: + # X-My-Header: Contents of the header + + # Proxy server url + #proxy_url: http://proxy:3128 + + # The number of times a particular Elasticsearch index operation is attempted. If + # the indexing operation doesn't succeed after this many retries, the events are + # dropped. The default is 3. 
+  #max_retries: 3
+
+  # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
+  # The default is 50.
+  #bulk_max_size: 50
+
+  # The number of seconds to wait before trying to reconnect to Elasticsearch
+  # after a network error. After waiting backoff.init seconds, the Beat
+  # tries to reconnect. If the attempt fails, the backoff timer is increased
+  # exponentially up to backoff.max. After a successful connection, the backoff
+  # timer is reset. The default is 1s.
+  #backoff.init: 1s
+
+  # The maximum number of seconds to wait before attempting to connect to
+  # Elasticsearch after a network error. The default is 60s.
+  #backoff.max: 60s
+
+  # Configure HTTP request timeout before failing a request to Elasticsearch.
+  #timeout: 90
+
+  # Use SSL settings for HTTPS.
+  #ssl.enabled: true
+
+  # Controls the verification of certificates. Valid values are:
+  # * full, which verifies that the provided certificate is signed by a trusted
+  # authority (CA) and also verifies that the server's hostname (or IP address)
+  # matches the names identified within the certificate.
+  # * strict, which verifies that the provided certificate is signed by a trusted
+  # authority (CA) and also verifies that the server's hostname (or IP address)
+  # matches the names identified within the certificate. If the Subject Alternative
+  # Name is empty, it returns an error.
+  # * certificate, which verifies that the provided certificate is signed by a
+  # trusted authority (CA), but does not perform any hostname verification.
+  # * none, which performs no verification of the server's certificate. This
+  # mode disables many of the security benefits of SSL/TLS and should only be used
+  # after very careful consideration. It is primarily intended as a temporary
+  # diagnostic mechanism when attempting to resolve TLS errors; its use in
+  # production environments is strongly discouraged.
+  # The default value is full.
+  #ssl.verification_mode: full
+
+  # List of supported/valid TLS versions. By default all TLS versions from 1.1
+  # up to 1.3 are enabled.
+  #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]
+
+  # List of root certificates for HTTPS server verifications
+  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+  # Certificate for SSL client authentication
+  #ssl.certificate: "/etc/pki/client/cert.pem"
+
+  # Client certificate key
+  #ssl.key: "/etc/pki/client/cert.key"
+
+  # Optional passphrase for decrypting the certificate key.
+  #ssl.key_passphrase: ''
+
+  # Configure cipher suites to be used for SSL connections
+  #ssl.cipher_suites: []
+
+  # Configure curve types for ECDHE-based cipher suites
+  #ssl.curve_types: []
+
+  # Configure what types of renegotiation are supported. Valid options are
+  # never, once, and freely. Default is never.
+  #ssl.renegotiation: never
+
+  # Configure a pin that can be used to do extra validation of the verified certificate chain.
+  # This allows you to ensure that a specific certificate is used to validate the chain of trust.
+  #
+  # The pin is a base64 encoded string of the SHA-256 fingerprint.
+  #ssl.ca_sha256: ""
+
+  # A root CA HEX encoded fingerprint. During the SSL handshake, if the
+  # fingerprint matches the root CA certificate, it will be added to
+  # the provided list of root CAs (`certificate_authorities`). If the
+  # list is empty or not defined, the matching certificate will be the
+  # only one in the list. Then the normal SSL validation happens.
+  #ssl.ca_trusted_fingerprint: ""
+
+  # Enable Kerberos support. Kerberos is automatically enabled if any Kerberos setting is set.
+  #kerberos.enabled: true
+
+  # Authentication type to use with Kerberos. Available options: keytab, password.
+  #kerberos.auth_type: password
+
+  # Path to the keytab file. It is used when auth_type is set to keytab.
+  #kerberos.keytab: /etc/elastic.keytab
+
+  # Path to the Kerberos configuration.
+  #kerberos.config_path: /etc/krb5.conf
+
+  # Name of the Kerberos user.
+  #kerberos.username: elastic
+
+  # Password of the Kerberos user. It is used when auth_type is set to password.
+  #kerberos.password: changeme
+
+  # Kerberos realm.
+  #kerberos.realm: ELASTIC
+
+  #metrics.period: 10s
+  #state.period: 1m
+
+# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts`
+# setting. You can find the value for this setting in the Elastic Cloud web UI.
+#monitoring.cloud.id:
+
+# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username`
+# and `monitoring.elasticsearch.password` settings. The format is `<user>:<pass>`.
+#monitoring.cloud.auth:
+
+# =============================== HTTP Endpoint ================================
+
+# Each beat can expose internal metrics through an HTTP endpoint. For security
+# reasons the endpoint is disabled by default. This feature is currently experimental.
+# Stats can be accessed through http://localhost:5066/stats. For pretty JSON output
+# append ?pretty to the URL.
+
+# Defines if the HTTP endpoint is enabled.
+#http.enabled: false
+
+# The HTTP endpoint will bind to this hostname, IP address, unix socket, or named pipe.
+# When using IP addresses, it is recommended to only use localhost.
+#http.host: localhost
+
+# Port on which the HTTP endpoint will bind. Default is 5066.
+#http.port: 5066
+
+# Define which user should own the named pipe.
+#http.named_pipe.user:
+
+# Define which permissions should be applied to the named pipe. Use the Security
+# Descriptor Definition Language (SDDL) to define the permission. This option cannot be used with
+# `http.user`.
+#http.named_pipe.security_descriptor:
+
+# Defines if the HTTP pprof endpoints are enabled.
+# It is recommended that this is only enabled on localhost as these endpoints may leak data.
+#http.pprof.enabled: false
+
+# Controls the fraction of goroutine blocking events that are reported in the
+# blocking profile.
+#http.pprof.block_profile_rate: 0
+
+# Controls the fraction of memory allocations that are recorded and reported in
+# the memory profile.
+#http.pprof.mem_profile_rate: 524288
+
+# Controls the fraction of mutex contention events that are reported in the
+# mutex profile.
+#http.pprof.mutex_profile_rate: 0
+
+# ============================== Process Security ==============================
+
+# Enable or disable seccomp system call filtering on Linux. Default is enabled.
+#seccomp.enabled: true
+
+# ============================== Instrumentation ===============================
+
+# Instrumentation support for the auditbeat.
+#instrumentation:
+    # Set to true to enable instrumentation of auditbeat.
+    #enabled: false
+
+    # Environment in which auditbeat is running (e.g., staging, production).
+    #environment: ""
+
+    # APM Server hosts to report instrumentation results to.
+    #hosts:
+    #  - http://localhost:8200
+
+    # API Key for the APM Server(s).
+    # If api_key is set then secret_token will be ignored.
+    #api_key:
+
+    # Secret token for the APM Server(s).
+    #secret_token:
+
+    # Enable profiling of the server, recording profile samples as events.
+    #
+    # This feature is experimental.
+    #profiling:
+      #cpu:
+        # Set to true to enable CPU profiling.
+        #enabled: false
+        #interval: 60s
+        #duration: 10s
+      #heap:
+        # Set to true to enable heap profiling.
+        #enabled: false
+        #interval: 60s
+
+# ================================= Migration ==================================
+
+# This allows enabling 6.7 migration aliases
+#migration.6_to_7.enabled: false
+
+# =============================== Feature Flags ================================
+
+# Enable and configure feature flags.
+#features:
+#  fqdn:
+#    enabled: true
+```
+
diff --git a/docs/reference/auditbeat/auditbeat-starting.md b/docs/reference/auditbeat/auditbeat-starting.md
new file mode 100644
index 000000000000..2ff51fec383d
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat-starting.md
@@ -0,0 +1,70 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-starting.html
+---
+
+# Start Auditbeat [auditbeat-starting]
+
+Before starting Auditbeat:
+
+* Follow the steps in [Quick start: installation and configuration](/reference/auditbeat/auditbeat-installation-configuration.md) to install, configure, and set up the Auditbeat environment.
+* Make sure {{kib}} and {{es}} are running.
+* Make sure the user specified in `auditbeat.yml` is [authorized to publish events](/reference/auditbeat/privileges-to-publish-events.md).
+
+To start Auditbeat, run:
+
+:::::::{tab-set}
+
+::::::{tab-item} DEB
+```sh
+sudo service auditbeat start
+```
+
+::::{note}
+If you use an `init.d` script to start Auditbeat, you can’t specify command line flags (see [Command reference](/reference/auditbeat/command-line-options.md)). To specify flags, start Auditbeat in the foreground.
+::::
+
+
+Also see [Auditbeat and systemd](/reference/auditbeat/running-with-systemd.md).
+::::::
+
+::::::{tab-item} RPM
+```sh
+sudo service auditbeat start
+```
+
+::::{note}
+If you use an `init.d` script to start Auditbeat, you can’t specify command line flags (see [Command reference](/reference/auditbeat/command-line-options.md)). To specify flags, start Auditbeat in the foreground.
+::::
+
+
+Also see [Auditbeat and systemd](/reference/auditbeat/running-with-systemd.md).
+::::::
+
+::::::{tab-item} MacOS
+```sh
+sudo chown root auditbeat.yml <1>
+sudo ./auditbeat -e
+```
+
+1. You’ll be running Auditbeat as root, so you need to change ownership of the configuration file, or run Auditbeat with `--strict.perms=false` specified. See [Config File Ownership and Permissions](/reference/libbeat/config-file-permissions.md).
+::::::
+
+::::::{tab-item} Linux
+```sh
+sudo chown root auditbeat.yml <1>
+sudo ./auditbeat -e
+```
+
+1. You’ll be running Auditbeat as root, so you need to change ownership of the configuration file, or run Auditbeat with `--strict.perms=false` specified. See [Config File Ownership and Permissions](/reference/libbeat/config-file-permissions.md).
+::::::
+
+::::::{tab-item} Windows
+```sh
+PS C:\Program Files\auditbeat> Start-Service auditbeat
+```
+
+By default, Windows log files are stored in `C:\ProgramData\auditbeat\Logs`.
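+
+If you want to watch log output directly while troubleshooting, you can instead run Auditbeat in the foreground with the `-e` flag (a quick sanity check; this assumes your prompt is in the Auditbeat install directory):
+
+```sh
+PS C:\Program Files\auditbeat> .\auditbeat.exe -e
+```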
+:::::: + +::::::: diff --git a/docs/reference/auditbeat/auditbeat-template.md b/docs/reference/auditbeat/auditbeat-template.md new file mode 100644 index 000000000000..78188f074945 --- /dev/null +++ b/docs/reference/auditbeat/auditbeat-template.md @@ -0,0 +1,228 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-template.html +--- + +# Load the Elasticsearch index template [auditbeat-template] + +{{es}} uses [index templates](docs-content://manage-data/data-store/templates.md) to define: + +* Settings that control the behavior of your data stream and backing indices. The settings include the lifecycle policy used to manage backing indices as they grow and age. +* Mappings that determine how fields are analyzed. Each mapping sets the [{{es}} datatype](elasticsearch://reference/elasticsearch/mapping-reference/field-data-types.md) to use for a specific data field. + +The recommended index template file for Auditbeat is installed by the Auditbeat packages. If you accept the default configuration in the `auditbeat.yml` config file, Auditbeat loads the template automatically after successfully connecting to {{es}}. If the template already exists, it’s not overwritten unless you configure Auditbeat to do so. + +::::{note} +A connection to {{es}} is required to load the index template. If the output is not {{es}} (or {{ess}}), you must [load the template manually](#load-template-manually). +:::: + + +This page shows how to change the default template loading behavior to: + +* [Load your own index template](#load-custom-template) +* [Overwrite an existing index template](#overwrite-template) +* [Disable automatic index template loading](#disable-template-loading) +* [Load the index template manually](#load-template-manually) + +For a full list of template setup options, see [Elasticsearch index template](/reference/auditbeat/configuration-template.md). + + +## Load your own index template [load-custom-template] + +To load your own index template, set the following options: + +```yaml +setup.template.name: "your_template_name" +setup.template.fields: "path/to/fields.yml" +``` + +If the template already exists, it’s not overwritten unless you configure Auditbeat to do so. + +You can load templates for both data streams and indices. + + +## Overwrite an existing index template [overwrite-template] + +::::{warning} +Do not enable this option for more than one instance of Auditbeat. If you start multiple instances at the same time, it can overload your {{es}} with too many template update requests. +:::: + + +To overwrite a template that’s already loaded into {{es}}, set: + +```yaml +setup.template.overwrite: true +``` + + +## Disable automatic index template loading [disable-template-loading] + +You may want to disable automatic template loading if you’re using an output other than {{es}} and need to load the template manually. To disable automatic template loading, set: + +```yaml +setup.template.enabled: false +``` + +If you disable automatic template loading, you must load the index template manually. + + +## Load the index template manually [load-template-manually] + +To load the index template manually, run the [`setup`](/reference/auditbeat/command-line-options.md#setup-command) command. A connection to {{es}} is required. If another output is enabled, you need to temporarily disable that output and enable {{es}} by using the `-E` option. The examples here assume that Logstash output is enabled. 
You can omit the `-E` flags if {{es}} output is already enabled.
+
+If you are connecting to a secured {{es}} cluster, make sure you’ve configured credentials as described in the [Quick start: installation and configuration](/reference/auditbeat/auditbeat-installation-configuration.md).
+
+If the host running Auditbeat does not have direct connectivity to {{es}}, see [Load the index template manually (alternate method)](#load-template-manually-alternate).
+
+To load the template, use the appropriate command for your system.
+
+**deb and rpm:**
+
+```sh
+auditbeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
+```
+
+**mac:**
+
+```sh
+./auditbeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
+```
+
+**linux:**
+
+```sh
+./auditbeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
+```
+
+**docker:**
+
+```sh
+docker run --rm docker.elastic.co/beats/auditbeat:9.0.0-beta1 setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
+```
+
+**win:**
+
+Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**).
+
+From the PowerShell prompt, change to the directory where you installed Auditbeat, and run:
+
+```sh
+PS > .\auditbeat.exe setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
+```
+
+
+### Force Kibana to look at newest documents [force-kibana-new]
+
+If you’ve already used Auditbeat to index data into {{es}}, the index may contain old documents. After you load the index template, you can delete the old documents from `auditbeat-*` to force Kibana to look at the newest documents.
+
+Use this command:
+
+**deb and rpm:**
+
+```sh
+curl -XDELETE 'http://localhost:9200/auditbeat-*'
+```
+
+**mac:**
+
+```sh
+curl -XDELETE 'http://localhost:9200/auditbeat-*'
+```
+
+**linux:**
+
+```sh
+curl -XDELETE 'http://localhost:9200/auditbeat-*'
+```
+
+**win:**
+
+```sh
+PS > Invoke-RestMethod -Method Delete "http://localhost:9200/auditbeat-*"
+```
+
+This command deletes all indices that match the pattern `auditbeat-*`. Before running this command, make sure you want to delete all indices that match the pattern.
+
+
+## Load the index template manually (alternate method) [load-template-manually-alternate]
+
+If the host running Auditbeat does not have direct connectivity to {{es}}, you can export the index template to a file, move it to a machine that does have connectivity, and then install the template manually.
+
+To export the index template, run:
+
+**deb and rpm:**
+
+```sh
+auditbeat export template > auditbeat.template.json
+```
+
+**mac:**
+
+```sh
+./auditbeat export template > auditbeat.template.json
+```
+
+**linux:**
+
+```sh
+./auditbeat export template > auditbeat.template.json
+```
+
+**win:**
+
+```sh
+PS > .\auditbeat.exe export template --es.version 9.0.0-beta1 | Out-File -Encoding UTF8 auditbeat.template.json
+```
+
+To install the template, run:
+
+**deb and rpm:**
+
+```sh
+curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_index_template/auditbeat-9.0.0-beta1 -d@auditbeat.template.json
+```
+
+**mac:**
+
+```sh
+curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_index_template/auditbeat-9.0.0-beta1 -d@auditbeat.template.json
+```
+
+**linux:**
+
+```sh
+curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_index_template/auditbeat-9.0.0-beta1 -d@auditbeat.template.json
+```
+
+**win:**
+
+```sh
+PS > Invoke-RestMethod -Method Put -ContentType "application/json" -InFile auditbeat.template.json -Uri http://localhost:9200/_index_template/auditbeat-9.0.0-beta1
+```
+
+Once you have loaded the index template, load the data stream as well. If you do not load it, you have to give the publisher user the `manage` permission on the `auditbeat-9.0.0-beta1` index.
+
+**deb and rpm:**
+
+```sh
+curl -XPUT http://localhost:9200/_data_stream/auditbeat-9.0.0-beta1
+```
+
+**mac:**
+
+```sh
+curl -XPUT http://localhost:9200/_data_stream/auditbeat-9.0.0-beta1
+```
+
+**linux:**
+
+```sh
+curl -XPUT http://localhost:9200/_data_stream/auditbeat-9.0.0-beta1
+```
+
+**win:**
+
+```sh
+PS > Invoke-RestMethod -Method Put -Uri http://localhost:9200/_data_stream/auditbeat-9.0.0-beta1
+```
+
diff --git a/docs/reference/auditbeat/auditbeat.md b/docs/reference/auditbeat/auditbeat.md
new file mode 100644
index 000000000000..0be0c0096c02
--- /dev/null
+++ b/docs/reference/auditbeat/auditbeat.md
@@ -0,0 +1,8 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/index.html
+---
+
+# Auditbeat
+
+Just a placeholder for a top index page.
diff --git a/docs/reference/auditbeat/bandwidth-throttling.md b/docs/reference/auditbeat/bandwidth-throttling.md
new file mode 100644
index 000000000000..8c8b2961fdc5
--- /dev/null
+++ b/docs/reference/auditbeat/bandwidth-throttling.md
@@ -0,0 +1,20 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/bandwidth-throttling.html
+---
+
+# Auditbeat uses too much bandwidth [bandwidth-throttling]
+
+If you need to limit bandwidth usage, we recommend that you configure the network stack on your OS to perform bandwidth throttling.
+
+For example, the following Linux commands cap the connection between Auditbeat and Logstash by setting a limit of 50 kbps on TCP connections over port 5044:
+
+```shell
+tc qdisc add dev $DEV root handle 1: htb
+tc class add dev $DEV parent 1:1 classid 1:10 htb rate 50kbps ceil 50kbps
+tc filter add dev $DEV parent 1:0 prio 1 protocol ip handle 10 fw flowid 1:10
+iptables -A OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10
+```
+
+Using OS tools to perform bandwidth throttling gives you better control over policies. For example, you can use OS tools to cap bandwidth during the day, but not at night. Or you can leave the bandwidth uncapped, but assign a low priority to the traffic.
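+
+To undo the throttling later, you can delete the qdisc and the matching iptables rule (a sketch assuming the same `$DEV`, port, and mark as above):
+
+```shell
+# Removing the root qdisc also removes the class and filter attached to it.
+tc qdisc del dev $DEV root
+# Delete the packet-marking rule added earlier.
+iptables -D OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10
+```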
+ diff --git a/docs/reference/auditbeat/beats-api-keys.md b/docs/reference/auditbeat/beats-api-keys.md new file mode 100644 index 000000000000..2772c23eadd0 --- /dev/null +++ b/docs/reference/auditbeat/beats-api-keys.md @@ -0,0 +1,142 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/beats-api-keys.html +--- + +# Grant access using API keys [beats-api-keys] + +Instead of using usernames and passwords, you can use API keys to grant access to {{es}} resources. You can set API keys to expire at a certain time, and you can explicitly invalidate them. Any user with the `manage_api_key` or `manage_own_api_key` cluster privilege can create API keys. + +Auditbeat instances typically send both collected data and monitoring information to {{es}}. If you are sending both to the same cluster, you can use the same API key. For different clusters, you need to use an API key per cluster. + +::::{note} +For security reasons, we recommend using a unique API key per Auditbeat instance. You can create as many API keys per user as necessary. +:::: + + +::::{important} +Review [*Grant users access to secured resources*](/reference/auditbeat/feature-roles.md) before creating API keys for Auditbeat. +:::: + + + +## Create an API key for publishing [beats-api-key-publish] + +To create an API key to use for writing data to {{es}}, use the [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key), for example: + +```console +POST /_security/api_key +{ + "name": "auditbeat_host001", <1> + "role_descriptors": { + "auditbeat_writer": { <2> + "cluster": ["monitor", "read_ilm", "read_pipeline"], + "index": [ + { + "names": ["auditbeat-*"], + "privileges": ["view_index_metadata", "create_doc", "auto_configure"] + } + ] + } + } +} +``` + +1. Name of the API key +2. Granted privileges, see [*Grant users access to secured resources*](/reference/auditbeat/feature-roles.md) + + +::::{note} +See [Create a *publishing* user](/reference/auditbeat/privileges-to-publish-events.md) for the list of privileges required to publish events. +:::: + + +The return value will look something like this: + +```console-result +{ + "id":"TiNAGG4BaaMdaH1tRfuU", <1> + "name":"auditbeat_host001", + "api_key":"KnR6yE41RrSowb0kQ0HWoA" <2> +} +``` + +1. Unique id for this API key +2. Generated API key + + +You can now use this API key in your `auditbeat.yml` configuration file like this: + +```yaml +output.elasticsearch: + api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA <1> +``` + +1. Format is `id:api_key` (as returned by [Create API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key)) + + + +## Create an API key for monitoring [beats-api-key-monitor] + +To create an API key to use for sending monitoring data to {{es}}, use the [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key), for example: + +```console +POST /_security/api_key +{ + "name": "auditbeat_host001", <1> + "role_descriptors": { + "auditbeat_monitoring": { <2> + "cluster": ["monitor"], + "index": [ + { + "names": [".monitoring-beats-*"], + "privileges": ["create_index", "create"] + } + ] + } + } +} +``` + +1. Name of the API key +2. 
Granted privileges, see [*Grant users access to secured resources*](/reference/auditbeat/feature-roles.md) + + +::::{note} +See [Create a *monitoring* user](/reference/auditbeat/privileges-to-publish-monitoring.md) for the list of privileges required to send monitoring data. +:::: + + +The return value will look something like this: + +```console-result +{ + "id":"TiNAGG4BaaMdaH1tRfuU", <1> + "name":"auditbeat_host001", + "api_key":"KnR6yE41RrSowb0kQ0HWoA" <2> +} +``` + +1. Unique id for this API key +2. Generated API key + + +You can now use this API key in your `auditbeat.yml` configuration file like this: + +```yaml +monitoring.elasticsearch: + api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA <1> +``` + +1. Format is `id:api_key` (as returned by [Create API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key)) + + + +## Learn more about API keys [learn-more-api-keys] + +See the {{es}} API key documentation for more information: + +* [Create API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) +* [Get API key information](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-api-key) +* [Invalidate API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-invalidate-api-key) + diff --git a/docs/reference/auditbeat/change-index-name.md b/docs/reference/auditbeat/change-index-name.md new file mode 100644 index 000000000000..468db6c9284c --- /dev/null +++ b/docs/reference/auditbeat/change-index-name.md @@ -0,0 +1,23 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/change-index-name.html +--- + +# Change the index name [change-index-name] + +Auditbeat uses data streams named `auditbeat-9.0.0-beta1`. To use a different name, set the [`index`](/reference/auditbeat/elasticsearch-output.md#index-option-es) option in the {{es}} output. You also need to configure the `setup.template.name` and `setup.template.pattern` options to match the new name. For example: + +```sh +output.elasticsearch.index: "customname-%{[agent.version]}" +setup.template.name: "customname-%{[agent.version]}" +setup.template.pattern: "customname-%{[agent.version]}" +``` + +If you’re using pre-built Kibana dashboards, also set the `setup.dashboards.index` option. For example: + +```yaml +setup.dashboards.index: "customname-*" +``` + +For a full list of template setup options, see [Elasticsearch index template](/reference/auditbeat/configuration-template.md). + diff --git a/docs/reference/auditbeat/command-line-options.md b/docs/reference/auditbeat/command-line-options.md new file mode 100644 index 000000000000..ae37d5282d90 --- /dev/null +++ b/docs/reference/auditbeat/command-line-options.md @@ -0,0 +1,362 @@ +--- +navigation_title: "Command reference" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/command-line-options.html +--- + +# Auditbeat command reference [command-line-options] + + +Auditbeat provides a command-line interface for starting Auditbeat and performing common tasks, like testing configuration files and loading dashboards. + +The command-line also supports [global flags](#global-flags) for controlling global behaviors. + +::::{tip} +Use `sudo` to run the following commands if: + +* the config file is owned by `root`, or +* Auditbeat is configured to capture data that requires `root` access + +:::: + + +Some of the features described here require an Elastic license. 
For more information, see [https://www.elastic.co/subscriptions](https://www.elastic.co/subscriptions) and [License Management](docs-content://deploy-manage/license/manage-your-license-in-self-managed-cluster.md).
+
+| Commands | |
+| --- | --- |
+| [`export`](#export-command) | Exports the configuration, index template, ILM policy, or a dashboard to stdout. |
+| [`help`](#help-command) | Shows help for any command. |
+| [`keystore`](#keystore-command) | Manages the [secrets keystore](/reference/auditbeat/keystore.md). |
+| [`run`](#run-command) | Runs Auditbeat. This command is used by default if you start Auditbeat without specifying a command. |
+| [`setup`](#setup-command) | Sets up the initial environment, including the index template, ILM policy and write alias, and {{kib}} dashboards (when available). |
+| [`test`](#test-command) | Tests the configuration. |
+| [`version`](#version-command) | Shows information about the current version. |
+
+Also see [Global flags](#global-flags).
+
+## `export` command [export-command]
+
+Exports the configuration, index template, ILM policy, or a dashboard to stdout. You can use this command to quickly view your configuration, see the contents of the index template and the ILM policy, or export a dashboard from {{kib}}.
+
+**SYNOPSIS**
+
+```sh
+auditbeat export SUBCOMMAND [FLAGS]
+```
+
+**SUBCOMMANDS**
+
+**`config`**
+: Exports the current configuration to stdout. If you use the `-c` flag, this command exports the configuration that’s defined in the specified file.
+
+$$$dashboard-subcommand$$$**`dashboard`**
+: Exports a dashboard. You can use this option to store a dashboard on disk in a module and load it automatically. For example, to export the dashboard to a JSON file, run:
+
+    ```shell
+    auditbeat export dashboard --id="DASHBOARD_ID" > dashboard.json
+    ```
+
+    To find the `DASHBOARD_ID`, look at the URL for the dashboard in {{kib}}. By default, `export dashboard` writes the dashboard to stdout. The example shows how to write the dashboard to a JSON file so that you can import it later. The JSON file will contain the dashboard with all visualizations and searches. You must load the index pattern separately for Auditbeat.
+
+    To load the dashboard, copy the generated `dashboard.json` file into the `kibana/6/dashboard` directory of Auditbeat, and run `auditbeat setup --dashboards` to import the dashboard.
+
+    If {{kib}} is not running on `localhost:5601`, you must also adjust the Auditbeat configuration under `setup.kibana`.
+
+
+$$$template-subcommand$$$**`template`**
+: Exports the index template to stdout. You can specify the `--es.version` flag to further define what gets exported. Furthermore, you can export the template to a file instead of `stdout` by defining a directory via `--dir`.
+
+$$$ilm-policy-subcommand$$$
+
+**`ilm-policy`**
+: Exports the index lifecycle management policy to stdout. You can specify the `--es.version` and a `--dir` to which the policy should be exported as a file rather than exporting to `stdout`.
+
+**FLAGS**
+
+**`--es.version VERSION`**
+: When used with [`template`](#template-subcommand), exports an index template that is compatible with the specified version. When used with [`ilm-policy`](#ilm-policy-subcommand), exports the ILM policy if the specified ES version is enabled for ILM.
+
+**`-h, --help`**
+: Shows help for the `export` command.
+
+**`--dir DIRNAME`**
+: Define a directory to which the template, pipelines, and ILM policy should be exported as files instead of printing them to `stdout`.
+
+**`--id DASHBOARD_ID`**
+: When used with [`dashboard`](#dashboard-subcommand), specifies the dashboard ID.
+
+Also see [Global flags](#global-flags).
+
+**EXAMPLES**
+
+```sh
+auditbeat export config
+auditbeat export template --es.version 9.0.0-beta1
+auditbeat export dashboard --id="a7b35890-8baa-11e8-9676-ef67484126fb" > dashboard.json
+```
+
+
+## `help` command [help-command]
+
+Shows help for any command. If no command is specified, shows help for the `run` command.
+
+**SYNOPSIS**
+
+```sh
+auditbeat help COMMAND_NAME [FLAGS]
+```
+
+**`COMMAND_NAME`**
+: Specifies the name of the command to show help for.
+
+**FLAGS**
+
+**`-h, --help`**
+: Shows help for the `help` command.
+
+Also see [Global flags](#global-flags).
+
+**EXAMPLE**
+
+```sh
+auditbeat help export
+```
+
+
+## `keystore` command [keystore-command]
+
+Manages the [secrets keystore](/reference/auditbeat/keystore.md).
+
+**SYNOPSIS**
+
+```sh
+auditbeat keystore SUBCOMMAND [FLAGS]
+```
+
+**SUBCOMMANDS**
+
+**`add KEY`**
+: Adds the specified key to the keystore. Use the `--force` flag to overwrite an existing key. Use the `--stdin` flag to pass the value through `stdin`.
+
+**`create`**
+: Creates a keystore to hold secrets. Use the `--force` flag to overwrite the existing keystore.
+
+**`list`**
+: Lists the keys in the keystore.
+
+**`remove KEY`**
+: Removes the specified key from the keystore.
+
+**FLAGS**
+
+**`--force`**
+: Valid with the `add` and `create` subcommands. When used with `add`, overwrites the specified key. When used with `create`, overwrites the keystore.
+
+**`--stdin`**
+: When used with `add`, uses stdin as the source of the key’s value.
+
+**`-h, --help`**
+: Shows help for the `keystore` command.
+
+Also see [Global flags](#global-flags).
+
+**EXAMPLES**
+
+```sh
+auditbeat keystore create
+auditbeat keystore add ES_PWD
+auditbeat keystore remove ES_PWD
+auditbeat keystore list
+```
+
+See [Secrets keystore](/reference/auditbeat/keystore.md) for more examples.
+
+
+## `run` command [run-command]
+
+Runs Auditbeat. This command is used by default if you start Auditbeat without specifying a command.
+
+**SYNOPSIS**
+
+```sh
+auditbeat run [FLAGS]
+```
+
+Or:
+
+```sh
+auditbeat [FLAGS]
+```
+
+**FLAGS**
+
+**`-N, --N`**
+: Disables publishing for testing purposes. This option disables all outputs except the [File output](/reference/auditbeat/file-output.md).
+
+**`--cpuprofile FILE`**
+: Writes CPU profile data to the specified file. This option is useful for troubleshooting Auditbeat.
+
+**`-h, --help`**
+: Shows help for the `run` command.
+
+**`--httpprof [HOST]:PORT`**
+: Starts an HTTP server for profiling. This option is useful for troubleshooting and profiling Auditbeat.
+
+**`--memprofile FILE`**
+: Writes memory profile data to the specified output file. This option is useful for troubleshooting Auditbeat.
+
+**`--system.hostfs MOUNT_POINT`**
+: Specifies the mount point of the host’s filesystem for use in monitoring a host. This flag is deprecated; an alternate hostfs should be specified via the `hostfs` module config value.
+
+Also see [Global flags](#global-flags).
+
+**EXAMPLE**
+
+```sh
+auditbeat run -e
+```
+
+Or:
+
+```sh
+auditbeat -e
+```
+
+
+## `setup` command [setup-command]
+
+Sets up the initial environment, including the index template, ILM policy and write alias, and {{kib}} dashboards (when available).
+
+* The index template ensures that fields are mapped correctly in Elasticsearch.
If index lifecycle management is enabled it also ensures that the defined ILM policy and write alias are connected to the indices matching the index template. The ILM policy takes care of the lifecycle of an index, when to do a rollover, when to move an index from the hot phase to the next phase, etc. +* The {{kib}} dashboards make it easier for you to visualize Auditbeat data in {{kib}}. + +This command sets up the environment without actually running Auditbeat and ingesting data. Specify optional flags to set up a subset of assets. + +**SYNOPSIS** + +```sh +auditbeat setup [FLAGS] +``` + +**FLAGS** + +**`--dashboards`** +: Sets up the {{kib}} dashboards (when available). This option loads the dashboards from the Auditbeat package. For more options, such as loading customized dashboards, see [Importing Existing Beat Dashboards](http://www.elastic.co/guide/en/beats/devguide/master/import-dashboards.md) in the *Beats Developer Guide*. + +**`-h, --help`** +: Shows help for the `setup` command. + +**`--index-management`** +: Sets up components related to Elasticsearch index management including template, ILM policy, and write alias (if supported and configured). + +Also see [Global flags](#global-flags). + +**EXAMPLES** + +```sh +auditbeat setup --dashboards +auditbeat setup --index-management +``` + + +## `test` command [test-command] + +Tests the configuration. + +**SYNOPSIS** + +```sh +auditbeat test SUBCOMMAND [FLAGS] +``` + +**SUBCOMMANDS** + +**`config`** +: Tests the configuration settings. + +**`output`** +: Tests that Auditbeat can connect to the output by using the current settings. + +**FLAGS** + +**`-h, --help`** +: Shows help for the `test` command. + +Also see [Global flags](#global-flags). + +**EXAMPLE** + +```sh +auditbeat test config +``` + + +## `version` command [version-command] + +Shows information about the current version. + +**SYNOPSIS** + +```sh +auditbeat version [FLAGS] +``` + +**FLAGS** + +**`-h, --help`** +: Shows help for the `version` command. + +Also see [Global flags](#global-flags). + +**EXAMPLE** + +```sh +auditbeat version +``` + + +## Global flags [global-flags] + +These global flags are available whenever you run Auditbeat. + +**`-E, --E "SETTING_NAME=VALUE"`** +: Overrides a specific configuration setting. You can specify multiple overrides. For example: + + ```sh + auditbeat -E "name=mybeat" -E "output.elasticsearch.hosts=['http://myhost:9200']" + ``` + + This setting is applied to the currently running Auditbeat process. The Auditbeat configuration file is not changed. + + +**`-c, --c FILE`** +: Specifies the configuration file to use for Auditbeat. The file you specify here is relative to `path.config`. If the `-c` flag is not specified, the default config file, `auditbeat.yml`, is used. + +**`-d, --d SELECTORS`** +: Enables debugging for the specified selectors. For the selectors, you can specify a comma-separated list of components, or you can use `-d "*"` to enable debugging for all components. For example, `-d "publisher"` displays all the publisher-related messages. + +**`-e, --e`** +: Logs to stderr and disables syslog/file output. + +**`--environment`** +: For logging purposes, specifies the environment that Auditbeat is running in. This setting is used to select a default log output when no log output is configured. Supported values are: `systemd`, `container`, `macos_service`, and `windows_service`. If `systemd` or `container` is specified, Auditbeat will log to stdout and stderr by default. 
+
+**`--path.config`**
+: Sets the path for configuration files. See the [Directory layout](/reference/auditbeat/directory-layout.md) section for details.
+
+**`--path.data`**
+: Sets the path for data files. See the [Directory layout](/reference/auditbeat/directory-layout.md) section for details.
+
+**`--path.home`**
+: Sets the path for miscellaneous files. See the [Directory layout](/reference/auditbeat/directory-layout.md) section for details.
+
+**`--path.logs`**
+: Sets the path for log files. See the [Directory layout](/reference/auditbeat/directory-layout.md) section for details.
+
+**`--strict.perms`**
+: Sets strict permission checking on configuration files. The default is `--strict.perms=true`. See [Config file ownership and permissions](/reference/libbeat/config-file-permissions.md) for more information.
+
+**`-v, --v`**
+: Logs INFO-level messages.
+
+
diff --git a/docs/reference/auditbeat/community-id.md b/docs/reference/auditbeat/community-id.md
new file mode 100644
index 000000000000..c5882878e85a
--- /dev/null
+++ b/docs/reference/auditbeat/community-id.md
@@ -0,0 +1,41 @@
+---
+navigation_title: "community_id"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/community-id.html
+---
+
+# Community ID Network Flow Hash [community-id]
+
+
+The `community_id` processor computes a network flow hash according to the [Community ID Flow Hash specification](https://github.com/corelight/community-id-spec).
+
+The flow hash is useful for correlating all network events related to a single flow. For example, you can filter on a community ID value and get back Netflow records from multiple collectors along with layer 7 protocol records from Packetbeat.
+
+By default, the processor is configured to read the flow parameters from the appropriate Elastic Common Schema (ECS) fields. If you are processing ECS data, no parameters are required.
+
+```yaml
+processors:
+  - community_id:
+```
+
+If the data does not conform to ECS, you can customize the field names that the processor reads from. You can also change the `target` field, which is where the computed hash is written.
+
+```yaml
+processors:
+  - community_id:
+      fields:
+        source_ip: my_source_ip
+        source_port: my_source_port
+        destination_ip: my_dest_ip
+        destination_port: my_dest_port
+        iana_number: my_iana_number
+        transport: my_transport
+        icmp_type: my_icmp_type
+        icmp_code: my_icmp_code
+      target: network.community_id
+```
+
+If the necessary fields are not present in the event, the processor silently continues without adding the target field.
+
+The processor also accepts an optional `seed` parameter that must be a 16-bit unsigned integer. This value gets incorporated into all generated hashes.
+
diff --git a/docs/reference/auditbeat/configuration-auditbeat.md b/docs/reference/auditbeat/configuration-auditbeat.md
new file mode 100644
index 000000000000..0d13f0766834
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-auditbeat.md
@@ -0,0 +1,32 @@
+---
+navigation_title: "Modules"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-auditbeat.html
+---
+
+# Configure modules [configuration-auditbeat]
+
+
+To enable specific modules, add entries to the `auditbeat.modules` list in the `auditbeat.yml` config file. Each entry in the list begins with a dash (-) and is followed by settings for that module.
+
+The following example shows a configuration that runs the `auditd` and `file_integrity` modules.
+
+```yaml
+auditbeat.modules:
+
+- module: auditd
+  audit_rules: |
+    -w /etc/passwd -p wa -k identity
+    -a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
+
+- module: file_integrity
+  paths:
+  - /bin
+  - /usr/bin
+  - /sbin
+  - /usr/sbin
+  - /etc
+```
+
+The configuration details vary by module. See the [module documentation](/reference/auditbeat/auditbeat-modules.md) for more detail about configuring the available modules.
+
diff --git a/docs/reference/auditbeat/configuration-dashboards.md b/docs/reference/auditbeat/configuration-dashboards.md
new file mode 100644
index 000000000000..942260c97103
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-dashboards.md
@@ -0,0 +1,103 @@
+---
+navigation_title: "Kibana dashboards"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-dashboards.html
+---
+
+# Configure Kibana dashboard loading [configuration-dashboards]
+
+
+Auditbeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Auditbeat data in Kibana.
+
+To load the dashboards, you can either enable dashboard loading in the `setup.dashboards` section of the `auditbeat.yml` config file, or you can run the `setup` command. Dashboard loading is disabled by default.
+
+When dashboard loading is enabled, Auditbeat uses the Kibana API to load the sample dashboards. Dashboard loading is only attempted when Auditbeat starts up. If Kibana is not available at startup, Auditbeat will stop with an error.
+
+To enable dashboard loading, add the following setting to the config file:
+
+```yaml
+setup.dashboards.enabled: true
+```
+
+
+## Configuration options [_configuration_options_12]
+
+You can specify the following options in the `setup.dashboards` section of the `auditbeat.yml` config file:
+
+
+### `setup.dashboards.enabled` [_setup_dashboards_enabled]
+
+If this option is set to true, Auditbeat loads the sample Kibana dashboards from the local `kibana` directory in the home path of the Auditbeat installation.
+
+::::{note}
+Auditbeat loads dashboards on startup if either `enabled` is set to `true` or the `setup.dashboards` section is included in the configuration.
+::::
+
+
+::::{note}
+When dashboard loading is enabled, Auditbeat overwrites any existing dashboards that match the names of the dashboards you are loading. This happens every time Auditbeat starts.
+::::
+
+
+If no other options are set, the dashboards are loaded from the local `kibana` directory in the home path of the Auditbeat installation. To load dashboards from a different location, you can configure one of the following options: [`setup.dashboards.directory`](#directory-option), [`setup.dashboards.url`](#url-option), or [`setup.dashboards.file`](#file-option).
+
+
+### `setup.dashboards.directory` [directory-option]
+
+The directory that contains the dashboards to load. The default is the `kibana` folder in the home path.
+
+
+### `setup.dashboards.url` [url-option]
+
+The URL to use for downloading the dashboard archive. If this option is set, Auditbeat downloads the dashboard archive from the specified URL instead of using the local directory.
+
+
+### `setup.dashboards.file` [file-option]
+
+The file archive (zip file) that contains the dashboards to load. If this option is set, Auditbeat looks for a dashboard archive in the specified path instead of using the local directory.
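+
+The three source options above are independent ways to point Auditbeat at a set of dashboards. As an illustrative sketch (the directory, URL, and archive paths below are hypothetical placeholders), you would typically set only one of them:
+
+```yaml
+setup.dashboards.enabled: true
+
+# Load dashboards from a local directory (hypothetical path):
+setup.dashboards.directory: /opt/auditbeat/dashboards
+
+# Or download a dashboard archive from a URL (hypothetical URL):
+#setup.dashboards.url: https://example.com/auditbeat-dashboards.zip
+
+# Or load a local zip archive (hypothetical path):
+#setup.dashboards.file: /opt/auditbeat/auditbeat-dashboards.zip
+```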
+
+
+### `setup.dashboards.beat` [_setup_dashboards_beat]
+
+In case the archive contains the dashboards for multiple Beats, this setting lets you select the Beat for which you want to load dashboards. To load all the dashboards in the archive, set this option to an empty string. The default is `"auditbeat"`.
+
+
+### `setup.dashboards.kibana_index` [_setup_dashboards_kibana_index]
+
+The name of the Kibana index to use for setting the configuration. The default is `".kibana"`.
+
+
+### `setup.dashboards.index` [_setup_dashboards_index]
+
+The Elasticsearch index name. This setting overwrites the index name defined in the dashboards and index pattern. Example: `"testbeat-*"`
+
+::::{note}
+This setting only works for Kibana 6.0 and newer.
+::::
+
+
+
+### `setup.dashboards.always_kibana` [_setup_dashboards_always_kibana]
+
+Force loading of dashboards using the Kibana API without querying Elasticsearch for the version. The default is `false`.
+
+
+### `setup.dashboards.retry.enabled` [_setup_dashboards_retry_enabled]
+
+If this option is set to true, and Kibana is not reachable at the time when dashboards are loaded, Auditbeat retries connecting to Kibana instead of exiting with an error. Disabled by default.
+
+
+### `setup.dashboards.retry.interval` [_setup_dashboards_retry_interval]
+
+Duration interval between Kibana connection retries. Defaults to 1 second.
+
+
+### `setup.dashboards.retry.maximum` [_setup_dashboards_retry_maximum]
+
+Maximum number of retries before exiting with an error. Set to 0 for unlimited retrying. Default is unlimited.
+
+
+### `setup.dashboards.string_replacements` [_setup_dashboards_string_replacements]
+
+A map of needle and replacement strings, used to replace the needle strings in the contents of dashboards and their references.
+
diff --git a/docs/reference/auditbeat/configuration-feature-flags.md b/docs/reference/auditbeat/configuration-feature-flags.md
new file mode 100644
index 000000000000..7a64f2939d68
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-feature-flags.md
@@ -0,0 +1,54 @@
+---
+navigation_title: "Feature flags"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-feature-flags.html
+---
+
+# Configure feature flags [configuration-feature-flags]
+
+
+The Feature Flags section of the `auditbeat.yml` config file contains settings in Auditbeat that are disabled by default. These may include experimental features, changes to behaviors within Auditbeat, or settings that could cause a breaking change. For example, a setting that changes information included in events might be inconsistent with the naming pattern expected in your configured Auditbeat output.
+
+To enable any of the settings listed on this page, change the associated `enabled` flag from `false` to `true`.
+
+```yaml
+features:
+  mysetting:
+    enabled: true
+```
+
+
+## Configuration options [_configuration_options_16]
+
+You can specify the following options in the `features` section of the `auditbeat.yml` config file:
+
+
+### `fqdn` [_fqdn]
+
+Contains configuration for the FQDN reporting feature. When this feature is enabled, the fully-qualified domain name for the host is reported in the `host.name` field in events produced by Auditbeat.
+
+::::{warning}
+This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
+::::
+
+
+For FQDN reporting to work as expected, the hostname of the current host must either:
+
+* Have a CNAME entry defined in DNS.
+* Have one of its corresponding IP addresses respond successfully to a reverse DNS lookup.
+
+If neither prerequisite is satisfied, `host.name` continues to report the hostname of the current host as if the FQDN feature flag were not enabled.
+
+Example configuration:
+
+```yaml
+features:
+  fqdn:
+    enabled: true
+```
+
+
+#### `enabled` [_enabled_10]
+
+Set to `true` to enable the FQDN reporting feature of Auditbeat. Defaults to `false`.
+
diff --git a/docs/reference/auditbeat/configuration-general-options.md b/docs/reference/auditbeat/configuration-general-options.md
new file mode 100644
index 000000000000..fca99fbe88e3
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-general-options.md
@@ -0,0 +1,88 @@
+---
+navigation_title: "General settings"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-general-options.html
+---
+
+# Configure general settings [configuration-general-options]
+
+
+You can specify settings in the `auditbeat.yml` config file to control the general behavior of Auditbeat.
+
+
+## General configuration options [configuration-general]
+
+
+These options are supported by all Elastic Beats. Because they are common options, they are not namespaced.
+
+Here is an example configuration:
+
+```yaml
+name: "my-shipper"
+tags: ["service-X", "web-tier"]
+```
+
+
+### `name` [_name]
+
+The name of the Beat. If this option is empty, the `hostname` of the server is used. The name is included as the `agent.name` field in each published transaction. You can use the name to group all transactions sent by a single Beat.
+
+Example:
+
+```yaml
+name: "my-shipper"
+```
+
+
+### `tags` [_tags]
+
+A list of tags that the Beat includes in the `tags` field of each published transaction. Tags make it easy to group servers by different logical properties. For example, if you have a cluster of web servers, you can add the "webservers" tag to the Beat on each server, and then use filters and queries in the Kibana web interface to get visualizations for the whole group of servers.
+
+Example:
+
+```yaml
+tags: ["my-service", "hardware", "test"]
+```
+
+
+### `fields` [libbeat-configuration-fields]
+
+Optional fields that you can specify to add additional information to the output. Fields can be scalar values, arrays, dictionaries, or any nested combination of these. By default, the fields that you specify here will be grouped under a `fields` sub-dictionary in the output document. To store the custom fields as top-level fields, set the `fields_under_root` option to true.
+
+Example:
+
+```yaml
+fields: {project: "myproject", instance-id: "574734885120952459"}
+```
+
+
+### `fields_under_root` [_fields_under_root]
+
+If this option is set to true, the custom [fields](#libbeat-configuration-fields) are stored as top-level fields in the output document instead of being grouped under a `fields` sub-dictionary. If the custom field names conflict with other field names, then the custom fields overwrite the other fields.
+
+Example:
+
+```yaml
+fields_under_root: true
+fields:
+  instance_id: i-10a64379
+  region: us-east-1
+```
+
+
+### `processors` [_processors]
+
+A list of processors to apply to the data generated by the beat.
+
+See [Processors](/reference/auditbeat/filtering-enhancing-data.md) for information about specifying processors in your config.
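+
+As a minimal sketch of the syntax (the field names and values here are illustrative, not part of any default configuration), a `processors` list might drop a custom field and add static metadata:
+
+```yaml
+processors:
+  - drop_fields:
+      fields: ["field_to_remove"] # hypothetical field name
+  - add_fields:
+      target: project
+      fields:
+        name: myproject # illustrative static metadata
+```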
+
+
+### `max_procs` [_max_procs]
+
+Sets the maximum number of CPUs that can be executing simultaneously. The default is the number of logical CPUs available in the system.
+
+
+### `timestamp.precision` [_timestamp_precision]
+
+Configures the precision of all timestamps. Available options are `millisecond`, `microsecond`, and `nanosecond`. The default is `millisecond`.
+
diff --git a/docs/reference/auditbeat/configuration-instrumentation.md b/docs/reference/auditbeat/configuration-instrumentation.md
new file mode 100644
index 000000000000..6b935e91ebc6
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-instrumentation.md
@@ -0,0 +1,87 @@
+---
+navigation_title: "Instrumentation"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-instrumentation.html
+---
+
+# Configure APM instrumentation [configuration-instrumentation]
+
+
+Libbeat uses the Elastic APM Go Agent to instrument its publishing pipeline. Currently, only the Elasticsearch output is instrumented. To gain insight into the performance of Auditbeat, you can enable this instrumentation and send trace data to the APM Integration.
+
+Example configuration with instrumentation enabled:
+
+```yaml
+instrumentation:
+  enabled: true
+  environment: production
+  hosts:
+    - "http://localhost:8200"
+  api_key: L5ER6FEvjkmlfalBealQ3f3fLqf03fazfOV
+```
+
+
+## Configuration options [_configuration_options_15]
+
+You can specify the following options in the `instrumentation` section of the `auditbeat.yml` config file:
+
+
+### `enabled` [_enabled_9]
+
+Set to `true` to enable instrumentation of Auditbeat. Defaults to `false`.
+
+
+### `environment` [_environment]
+
+Set the environment in which Auditbeat is running, for example, `staging`, `production`, `dev`, etc. Environments can be filtered in the [APM app](docs-content://solutions/observability/apps/overviews.md).
+
+
+### `hosts` [_hosts_3]
+
+The APM integration [host](docs-content://reference/ingestion-tools/observability/apm-settings.md) to report instrumentation data to. Defaults to `http://localhost:8200`.
+
+
+### `api_key` [_api_key_2]
+
+The [API Key](docs-content://reference/ingestion-tools/observability/apm-settings.md) used to secure communication with the APM Integration. If `api_key` is set, `secret_token` is ignored.
+
+
+### `secret_token` [_secret_token]
+
+The [Secret token](docs-content://reference/ingestion-tools/observability/apm-settings.md) used to secure communication with the APM Integration.
+
+
+### `profiling.cpu.enabled` [_profiling_cpu_enabled]
+
+Set to `true` to enable CPU profiling, where profile samples are recorded as events.
+
+This feature is experimental.
+
+
+### `profiling.cpu.interval` [_profiling_cpu_interval]
+
+Configure the CPU profiling interval. Defaults to `60s`.
+
+This feature is experimental.
+
+
+### `profiling.cpu.duration` [_profiling_cpu_duration]
+
+Configure the CPU profiling duration. Defaults to `10s`.
+
+This feature is experimental.
+
+
+### `profiling.heap.enabled` [_profiling_heap_enabled]
+
+Set to `true` to enable heap profiling.
+
+This feature is experimental.
+
+
+### `profiling.heap.interval` [_profiling_heap_interval]
+
+Configure the heap profiling interval. Defaults to `60s`.
+
+This feature is experimental.
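+
+Putting the experimental profiling options together, a sketch of an instrumentation configuration with CPU and heap profiling enabled might look like the following (the host is a placeholder, and the intervals simply restate the defaults for illustration):
+
+```yaml
+instrumentation:
+  enabled: true
+  environment: dev
+  hosts:
+    - "http://localhost:8200" # placeholder APM host
+  profiling.cpu.enabled: true
+  profiling.cpu.interval: 60s
+  profiling.cpu.duration: 10s
+  profiling.heap.enabled: true
+  profiling.heap.interval: 60s
+```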
+
+
diff --git a/docs/reference/auditbeat/configuration-kerberos.md b/docs/reference/auditbeat/configuration-kerberos.md
new file mode 100644
index 000000000000..e3a5a183572e
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-kerberos.md
@@ -0,0 +1,90 @@
+---
+navigation_title: "Kerberos"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-kerberos.html
+---
+
+# Configure Kerberos [configuration-kerberos]
+
+
+You can specify Kerberos options with any output or input that supports Kerberos, like {{es}}.
+
+The following encryption types are supported:
+
+* aes128-cts-hmac-sha1-96
+* aes128-cts-hmac-sha256-128
+* aes256-cts-hmac-sha1-96
+* aes256-cts-hmac-sha384-192
+* des3-cbc-sha1-kd
+* rc4-hmac
+
+Example output config with Kerberos password-based authentication:
+
+```yaml
+output.elasticsearch.hosts: ["http://my-elasticsearch.elastic.co:9200"]
+output.elasticsearch.kerberos.auth_type: password
+output.elasticsearch.kerberos.username: "elastic"
+output.elasticsearch.kerberos.password: "changeme"
+output.elasticsearch.kerberos.config_path: "/etc/krb5.conf"
+output.elasticsearch.kerberos.realm: "ELASTIC.CO"
+```
+
+The service principal name for the Elasticsearch instance is constructed from these options. Based on this configuration, it will be `HTTP/my-elasticsearch.elastic.co@ELASTIC.CO`.
+
+
+## Configuration options [_configuration_options_9]
+
+You can specify the following options in the `kerberos` section of the `auditbeat.yml` config file:
+
+
+### `enabled` [_enabled_8]
+
+The `enabled` setting can be used to disable the Kerberos configuration by setting it to `false`. The default value is `true`.
+
+::::{note}
+Kerberos settings are disabled if either `enabled` is set to `false` or the `kerberos` section is missing.
+::::
+
+
+
+### `auth_type` [_auth_type]
+
+There are two options to authenticate with the Kerberos KDC: `password` and `keytab`.
+
+`password` expects the principal name and its password. When choosing `keytab`, you have to specify a principal name and a path to a keytab. The keytab must contain the keys of the selected principal. Otherwise, authentication will fail.
+
+
+### `config_path` [_config_path]
+
+You need to set the path to the `krb5.conf`, so Auditbeat can find the Kerberos KDC to retrieve a ticket.
+
+
+### `username` [_username_3]
+
+Name of the principal used to connect to the output.
+
+
+### `password` [_password_4]
+
+If you configured `password` for `auth_type`, you have to provide a password for the selected principal.
+
+
+### `keytab` [_keytab]
+
+If you configured `keytab` for `auth_type`, you have to provide the path to the keytab of the selected principal.
+
+
+### `service_name` [_service_name]
+
+This option can only be configured for Kafka. It is the name of the Kafka service, usually `kafka`.
+
+
+### `realm` [_realm]
+
+Name of the realm where the output resides.
+
+
+### `enable_krb5_fast` [_enable_krb5_fast]
+
+Enable Kerberos FAST authentication. This may conflict with some Active Directory installations. The default is `false`.
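+
+For comparison with the password-based example above, a sketch of keytab-based authentication using the options described here might look like this (the keytab path is hypothetical):
+
+```yaml
+output.elasticsearch.hosts: ["http://my-elasticsearch.elastic.co:9200"]
+output.elasticsearch.kerberos.auth_type: keytab
+output.elasticsearch.kerberos.username: "elastic"
+output.elasticsearch.kerberos.keytab: "/etc/security/elastic.keytab" # hypothetical path
+output.elasticsearch.kerberos.config_path: "/etc/krb5.conf"
+output.elasticsearch.kerberos.realm: "ELASTIC.CO"
+```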
+
diff --git a/docs/reference/auditbeat/configuration-logging.md b/docs/reference/auditbeat/configuration-logging.md
new file mode 100644
index 000000000000..5ff9c17c26e9
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-logging.md
@@ -0,0 +1,253 @@
+---
+navigation_title: "Logging"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-logging.html
+---
+
+# Configure logging [configuration-logging]
+
+
+The `logging` section of the `auditbeat.yml` config file contains options for configuring the logging output. The logging system can write logs to the syslog or rotate log files. If logging is not explicitly configured, the file output is used.
+
+```yaml
+logging.level: info
+logging.to_files: true
+logging.files:
+  path: /var/log/auditbeat
+  name: auditbeat
+  keepfiles: 7
+  permissions: 0640
+```
+
+::::{tip}
+In addition to setting logging options in the config file, you can modify the logging output configuration from the command line. See [Command reference](/reference/auditbeat/command-line-options.md).
+::::
+
+
+::::{warning}
+When Auditbeat is running on a Linux system with systemd, it uses the `-e` command line option by default, which makes it write all logging output to stderr so it can be captured by journald. Other outputs are disabled. See [Auditbeat and systemd](/reference/auditbeat/running-with-systemd.md) to learn more and to find out how to change this.
+::::
+
+
+
+## Configuration options [_configuration_options_14]
+
+You can specify the following options in the `logging` section of the `auditbeat.yml` config file:
+
+
+### `logging.to_stderr` [_logging_to_stderr]
+
+When true, writes all logging output to standard error output. This is equivalent to using the `-e` command line option.
+
+
+### `logging.to_syslog` [_logging_to_syslog]
+
+When true, writes all logging output to the syslog.
+
+::::{note}
+This option is not supported on Windows.
+::::
+
+
+
+### `logging.to_eventlog` [_logging_to_eventlog]
+
+When true, writes all logging output to the Windows Event Log.
+
+
+### `logging.to_files` [_logging_to_files]
+
+When true, writes all logging output to files. The log files are automatically rotated when the log file size limit is reached.
+
+::::{note}
+Auditbeat only creates a log file if there is logging output. For example, if you set the log [`level`](#level) to `error` and there are no errors, there will be no log file in the directory specified for logs.
+::::
+
+
+
+### `logging.level` [level]
+
+Minimum log level. One of `debug`, `info`, `warning`, or `error`. The default log level is `info`.
+
+`debug`
+: Logs debug messages, including a detailed printout of all events flushed. Also logs informational messages, warnings, errors, and critical errors. When the log level is `debug`, you can specify a list of [`selectors`](#selectors) to display debug messages for specific components. If no selectors are specified, the `*` selector is used to display debug messages for all components.
+
+`info`
+: Logs informational messages, including the number of events that are published. Also logs any warnings, errors, or critical errors.
+
+`warning`
+: Logs warnings, errors, and critical errors.
+
+`error`
+: Logs errors and critical errors.
+
+
+### `logging.selectors` [selectors]
+
+The list of debugging-only selector tags used by different Auditbeat components. Use `*` to enable debug output for all components. Use `publisher` to display debug messages related to event publishing.
+
+::::{tip}
+The list of available selectors may change between releases, so avoid creating tests that depend on specific selectors.
+
+To see which selectors are available, run Auditbeat in debug mode (set `logging.level: debug` in the configuration). The selector name appears after the log level and is enclosed in brackets.
+
+::::
+
+
+To configure multiple selectors, use the following [YAML list syntax](/reference/libbeat/config-file-format.md):
+
+```yaml
+logging.selectors: [ harvester, input ]
+```
+
+To override selectors at the command line, use the `-d` global flag (`-d` also sets the debug log level). For more information, see [Command reference](/reference/auditbeat/command-line-options.md).
+
+
+### `logging.metrics.enabled` [_logging_metrics_enabled]
+
+By default, Auditbeat periodically logs its internal metrics that have changed in the last period. For each metric that changed, the delta from the value at the beginning of the period is logged. Also, the total values for all non-zero internal metrics are logged on shutdown. Set this to false to disable this behavior. The default is true.
+
+Here is an example log line:
+
+```shell
+2017-12-17T19:17:42.667-0500 INFO [metrics] log/log.go:110 Non-zero metrics in the last 30s: beat.info.uptime.ms=30004 beat.memstats.gc_next=5046416
+```
+
+Note that we currently offer no backwards-compatibility guarantees for the internal metrics, and for this reason they are not documented.
+
+
+### `logging.metrics.period` [_logging_metrics_period]
+
+The period after which to log the internal metrics. The default is 30s.
+
+
+### `logging.metrics.namespaces` [_logging_metrics_namespaces]
+
+A list of metrics namespaces to report in the logs. Defaults to `[stats]`. `stats` contains general Beat metrics. `dataset` and `inputs` may be present in some Beats and contain module or input metrics.
+
+
+### `logging.files.path` [_logging_files_path]
+
+The directory that log files are written to. The default is the logs path. See the [Directory layout](/reference/auditbeat/directory-layout.md) section for details.
+
+
+### `logging.files.name` [_logging_files_name]
+
+The name of the file that logs are written to. The default is *auditbeat*.
+
+
+### `logging.files.rotateeverybytes` [_logging_files_rotateeverybytes]
+
+The maximum size of a log file. If the limit is reached, a new log file is generated. The default size limit is 10485760 (10 MB).
+
+
+### `logging.files.keepfiles` [_logging_files_keepfiles]
+
+The number of most recent rotated log files to keep on disk. Older files are deleted during log rotation. The default value is 7. The `keepfiles` option has to be in the range of 2 to 1024 files.
+
+
+### `logging.files.permissions` [_logging_files_permissions]
+
+The permissions mask to apply when rotating log files. The default value is 0600. The `permissions` option must be a valid Unix-style file permissions mask expressed in octal notation. In Go, numbers in octal notation must start with *0*.
+
+The most permissive mask allowed is 0640. If a higher permissions mask is specified via this setting, it will be subject to an umask of 0027.
+
+This option is not supported on Windows.
+
+Examples:
+
+* 0640: give read and write access to the file owner, and read access to members of the group associated with the file.
+* 0600: give read and write access to the file owner, and no access to all others.
+
+
+### `logging.files.interval` [_logging_files_interval]
+
+Enable log file rotation on time intervals in addition to size-based rotation.
Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h are boundary-aligned with minutes, hours, days, weeks, months, and years as reported by the local system clock. All other intervals are calculated from the unix epoch. Defaults to disabled.
+
+
+### `logging.files.rotateonstartup` [_logging_files_rotateonstartup]
+
+If the log file already exists on startup, immediately rotate it and start writing to a new file instead of appending to the existing one. Defaults to true.
+
+
+### `logging.files.redirect_stderr` [preview] [_logging_files_redirect_stderr]
+
+When true, diagnostic messages printed to Auditbeat’s standard error output will also be logged to the log file. This can be helpful in situations where Auditbeat terminates unexpectedly because an error has been detected by Go’s runtime, but diagnostic information is not present in the log file. This feature is only available when logging to files (`logging.to_files` is true). Disabled by default.
+
+
+## Logging format [_logging_format]
+
+The logging format is generally the same for each logging output. The one exception is with the syslog output where the timestamp is not included in the message because syslog adds its own timestamp.
+
+Each log message consists of the following parts:
+
+* Timestamp in ISO8601 format
+* Level
+* Logger name contained in brackets (Optional)
+* File name and line number of the caller
+* Message
+* Structured data encoded in JSON (Optional)
+
+Below are some samples:
+
+`2017-12-17T18:54:16.241-0500 INFO logp/core_test.go:13 unnamed global logger`
+
+`2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:16 some message`
+
+`2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:19 some message {"x": 1}`
+
+
+## Configuration options for event_data logger [_configuration_options_for_event_data_logger]
+
+Some outputs log raw events on errors, such as indexing errors in the Elasticsearch output. To prevent logging raw events (which may contain sensitive information) together with other log messages, a separate log file is used only for log entries that contain raw events. This logger uses the same level, selectors, and all other configuration from the default logger, but it has its own file configuration.
+
+Having a different log file for raw events also prevents event data from drowning out the regular log files.
+
+::::{important}
+No matter the default logger output configuration, raw events will **always** be logged to a file configured by `logging.event_data.files`.
+::::
+
+
+
+### `logging.event_data.files.path` [_logging_event_data_files_path]
+
+The directory that log files are written to. The default is the logs path. See the [Directory layout](/reference/auditbeat/directory-layout.md) section for details.
+
+
+### `logging.event_data.files.name` [_logging_event_data_files_name]
+
+The name of the file that logs are written to. The default is *auditbeat*-events-data.
+
+
+### `logging.event_data.files.rotateeverybytes` [_logging_event_data_files_rotateeverybytes]
+
+The maximum size of a log file. If the limit is reached, a new log file is generated. The default size limit is 5242880 (5 MB).
+
+
+### `logging.event_data.files.keepfiles` [_logging_event_data_files_keepfiles]
+
+The number of most recent rotated log files to keep on disk. Older files are deleted during log rotation. The default value is 2. The `keepfiles` option has to be in the range of 2 to 1024 files.
+
+
+### `logging.event_data.files.permissions` [_logging_event_data_files_permissions]
+
+The permissions mask to apply when rotating log files. The default value is 0600. The `permissions` option must be a valid Unix-style file permissions mask expressed in octal notation. In Go, numbers in octal notation must start with *0*.
+
+The most permissive mask allowed is 0640. If a higher permissions mask is specified via this setting, it will be subject to an umask of 0027.
+
+This option is not supported on Windows.
+
+Examples:
+
+* 0640: give read and write access to the file owner, and read access to members of the group associated with the file.
+* 0600: give read and write access to the file owner, and no access to all others.
+
+
+### `logging.event_data.files.interval` [_logging_event_data_files_interval]
+
+Enable log file rotation on time intervals in addition to size-based rotation. Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h are boundary-aligned with minutes, hours, days, weeks, months, and years as reported by the local system clock. All other intervals are calculated from the unix epoch. Defaults to disabled.
+
+
+### `logging.event_data.files.rotateonstartup` [_logging_event_data_files_rotateonstartup]
+
+If the log file already exists on startup, immediately rotate it and start writing to a new file instead of appending to the existing one. Defaults to false.
diff --git a/docs/reference/auditbeat/configuration-monitor.md b/docs/reference/auditbeat/configuration-monitor.md
new file mode 100644
index 000000000000..5b595db68ae2
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-monitor.md
@@ -0,0 +1,113 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-monitor.html
+---
+
+# Settings for internal collection [configuration-monitor]
+
+Use the following settings to configure internal collection when you are not using {{metricbeat}} to collect monitoring data.
+
+You specify these settings in the X-Pack monitoring section of the `auditbeat.yml` config file:
+
+## `monitoring.enabled` [_monitoring_enabled]
+
+The `monitoring.enabled` config is a boolean setting to enable or disable {{monitoring}}. If set to `true`, monitoring is enabled.
+
+The default value is `false`.
+
+
+## `monitoring.elasticsearch` [_monitoring_elasticsearch]
+
+The {{es}} instances that you want to ship your Auditbeat metrics to. This configuration option contains the following fields:
+
+
+## `monitoring.cluster_uuid` [_monitoring_cluster_uuid]
+
+The `monitoring.cluster_uuid` config identifies the {{es}} cluster under which the monitoring data will appear in the Stack Monitoring UI.
+
+### `api_key` [_api_key_3]
+
+The details of the API key used to send monitoring information to {{es}}. See [*Grant access using API keys*](/reference/auditbeat/beats-api-keys.md) for more information.
+
+
+### `bulk_max_size` [_bulk_max_size_5]
+
+The maximum number of metrics to bulk in a single {{es}} bulk API index request. The default is `50`. For more information, see [Elasticsearch](/reference/auditbeat/elasticsearch-output.md).
+
+
+### `backoff.init` [_backoff_init_4]
+
+The number of seconds to wait before trying to reconnect to Elasticsearch after a network error. After waiting `backoff.init` seconds, Auditbeat tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is 1s.
+
+
+### `backoff.max` [_backoff_max_4]
+
+The maximum number of seconds to wait before attempting to connect to Elasticsearch after a network error. The default is 60s.
+
+
+### `compression_level` [_compression_level_3]
+
+The gzip compression level. Setting this value to `0` disables compression. The compression level must be in the range of `1` (best speed) to `9` (best compression). The default value is `0`. Increasing the compression level reduces the network usage but increases the CPU usage.
+
+
+### `headers` [_headers_3]
+
+Custom HTTP headers to add to each request. For more information, see [Elasticsearch](/reference/auditbeat/elasticsearch-output.md).
+
+
+### `hosts` [_hosts_4]
+
+The list of {{es}} nodes to connect to. Monitoring metrics are distributed to these nodes in round robin order. For more information, see [Elasticsearch](/reference/auditbeat/elasticsearch-output.md).
+
+
+### `max_retries` [_max_retries_5]
+
+The number of times to retry sending the monitoring metrics after a failure. After the specified number of retries, the metrics are typically dropped. The default value is `3`. For more information, see [Elasticsearch](/reference/auditbeat/elasticsearch-output.md).
+
+
+### `parameters` [_parameters_2]
+
+Dictionary of HTTP parameters to pass within the URL with index operations.
+
+
+### `password` [_password_6]
+
+The password that Auditbeat uses to authenticate with the {{es}} instances for shipping monitoring data.
+
+
+### `metrics.period` [_metrics_period]
+
+The time interval (in seconds) when metrics are sent to the {{es}} cluster. A new snapshot of Auditbeat metrics is generated and scheduled for publishing each period. The default value is 10 seconds.
+
+
+### `state.period` [_state_period]
+
+The time interval (in seconds) when state information is sent to the {{es}} cluster. A new snapshot of Auditbeat state is generated and scheduled for publishing each period. The default value is 60 seconds.
+
+
+### `protocol` [_protocol]
+
+The name of the protocol to use when connecting to the {{es}} cluster. The options are: `http` or `https`. The default is `http`. If you specify a URL for `hosts`, however, the value of protocol is overridden by the scheme you specify in the URL.
+
+
+### `proxy_url` [_proxy_url_4]
+
+The URL of the proxy to use when connecting to the {{es}} cluster. For more information, see [Elasticsearch](/reference/auditbeat/elasticsearch-output.md).
+
+
+### `timeout` [_timeout_5]
+
+The HTTP request timeout in seconds for the {{es}} request. The default is `90`.
+
+
+### `ssl` [_ssl_5]
+
+Configuration options for Transport Layer Security (TLS) or Secure Sockets Layer (SSL) parameters like the certificate authority (CA) to use for HTTPS-based connections. If the `ssl` section is missing, the host CAs are used for HTTPS connections to {{es}}. For more information, see [SSL](/reference/auditbeat/configuration-ssl.md).
+
+
+### `username` [_username_4]
+
+The user ID that Auditbeat uses to authenticate with the {{es}} instances for shipping monitoring data.
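+
+Tying these settings together, a minimal sketch for shipping monitoring data via internal collection might look like this (the host and credentials are placeholders, and the periods simply restate the defaults):
+
+```yaml
+monitoring.enabled: true
+monitoring.elasticsearch:
+  hosts: ["https://monitoring-cluster.example.com:9200"] # placeholder host
+  username: beats_system
+  password: "changeme" # placeholder credential
+  metrics.period: 10s
+  state.period: 60s
+```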
+
+
+
diff --git a/docs/reference/auditbeat/configuration-output-codec.md b/docs/reference/auditbeat/configuration-output-codec.md
new file mode 100644
index 000000000000..fe4682d9aaef
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-output-codec.md
@@ -0,0 +1,32 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-output-codec.html
+---
+
+# Change the output codec [configuration-output-codec]
+
+For outputs that do not require a specific encoding, you can change the encoding by using the codec configuration. You can specify either the `json` or `format` codec. By default the `json` codec is used.
+
+**`json.pretty`**: If `pretty` is set to true, events will be nicely formatted. The default is false.
+
+**`json.escape_html`**: If `escape_html` is set to true, HTML symbols will be escaped in strings. The default is false.
+
+Example configuration that uses the `json` codec with pretty printing enabled to write events to the console:
+
+```yaml
+output.console:
+  codec.json:
+    pretty: true
+    escape_html: false
+```
+
+**`format.string`**: Configurable format string used to create a custom formatted message.
+
+Example configuration that uses the `format` codec to print the event’s timestamp and message field to the console:
+
+```yaml
+output.console:
+  codec.format:
+    string: '%{[@timestamp]} %{[message]}'
+```
+
diff --git a/docs/reference/auditbeat/configuration-path.md b/docs/reference/auditbeat/configuration-path.md
new file mode 100644
index 000000000000..a541aff4af20
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-path.md
@@ -0,0 +1,78 @@
+---
+navigation_title: "Project paths"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-path.html
+---
+
+# Configure project paths [configuration-path]
+
+
+The `path` section of the `auditbeat.yml` config file contains configuration options that define where Auditbeat looks for its files. For example, Auditbeat looks for the Elasticsearch template file in the configuration path and writes log files in the logs path.
+
+Please see the [Directory layout](/reference/auditbeat/directory-layout.md) section for more details.
+
+Here is an example configuration:
+
+```yaml
+path.home: /usr/share/beat
+path.config: /etc/beat
+path.data: /var/lib/beat
+path.logs: /var/log/
+```
+
+Note that it is possible to override these options by using command line flags.
+
+
+## Configuration options [_configuration_options]
+
+You can specify the following options in the `path` section of the `auditbeat.yml` config file:
+
+
+### `home` [_home]
+
+The home path for the Auditbeat installation. This is the default base path for all other path settings and for miscellaneous files that come with the distribution (for example, the sample dashboards). If not set by a CLI flag or in the configuration file, the default for the home path is the location of the Auditbeat binary.
+
+Example:
+
+```yaml
+path.home: /usr/share/beats
+```
+
+
+### `config` [_config]
+
+The configuration path for the Auditbeat installation. This is the default base path for configuration files, including the main YAML configuration file and the Elasticsearch template file. If not set by a CLI flag or in the configuration file, the default for the configuration path is the home path.
+
+Example:
+
+```yaml
+path.config: /usr/share/beats/config
+```
+
+
+### `data` [_data]
+
+The data path for the Auditbeat installation.
This is the default base path for all the files in which Auditbeat needs to store its data. If not set by a CLI flag or in the configuration file, the default for the data path is a `data` subdirectory inside the home path.
+
+Example:
+
+```yaml
+path.data: /var/lib/beats
+```
+
+::::{tip}
+When running multiple Auditbeat instances on the same host, make sure they each have a distinct `path.data` value.
+::::
+
+
+
+### `logs` [_logs]
+
+The logs path for an Auditbeat installation. This is the default location for Auditbeat’s log files. If not set by a CLI flag or in the configuration file, the default for the logs path is a `logs` subdirectory inside the home path.
+
+Example:
+
+```yaml
+path.logs: /var/log/beats
+```
+
diff --git a/docs/reference/auditbeat/configuration-ssl.md b/docs/reference/auditbeat/configuration-ssl.md
new file mode 100644
index 000000000000..519ce4cac46f
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-ssl.md
@@ -0,0 +1,486 @@
+---
+navigation_title: "SSL"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-ssl.html
+---
+
+# Configure SSL [configuration-ssl]
+
+
+You can specify SSL options when you configure:
+
+* [outputs](/reference/auditbeat/configuring-output.md) that support SSL
+* the [Kibana endpoint](/reference/auditbeat/setup-kibana-endpoint.md)
+
+Example output config with SSL enabled:
+
+```yaml
+output.elasticsearch.hosts: ["https://192.168.1.42:9200"]
+output.elasticsearch.ssl.certificate_authorities: ["/etc/client/ca.pem"]
+output.elasticsearch.ssl.certificate: "/etc/client/cert.pem"
+output.elasticsearch.ssl.key: "/etc/client/cert.key"
+```
+
+Also see [*Secure communication with Logstash*](/reference/auditbeat/configuring-ssl-logstash.md).
+
+Example Kibana endpoint config with SSL enabled:
+
+```yaml
+setup.kibana.host: "https://192.0.2.255:5601"
+setup.kibana.ssl.enabled: true
+setup.kibana.ssl.certificate_authorities: ["/etc/client/ca.pem"]
+setup.kibana.ssl.certificate: "/etc/client/cert.pem"
+setup.kibana.ssl.key: "/etc/client/cert.key"
+```
+
+There are a number of SSL configuration options available to you:
+
+* [Common configuration options](#ssl-common-config)
+* [Client configuration options](#ssl-client-config)
+* [Server configuration options](#ssl-server-config)
+
+
+## Common configuration options [ssl-common-config]
+
+Common SSL configuration options can be used in both client and server configurations. You can specify the following options in the `ssl` section of each subsystem that supports SSL.
+
+
+### `enabled` [enabled]
+
+To disable SSL configuration, set the value to `false`. The default value is `true`.
+
+::::{note}
+SSL settings are disabled if either `enabled` is set to `false` or the `ssl` section is missing.
+
+::::
+
+
+
+### `supported_protocols` [supported-protocols]
+
+List of allowed SSL/TLS versions. If the SSL/TLS server chooses a protocol version that is not configured, the connection is dropped during or after the handshake. The setting is a list of allowed protocol versions: `TLSv1.1`, `TLSv1.2`, and `TLSv1.3`.
+
+The default value is `[TLSv1.2, TLSv1.3]`.
+
+
+### `cipher_suites` [cipher-suites]
+
+The list of cipher suites to use. The first entry has the highest priority. If this option is omitted, the Go crypto library’s [default suites](https://golang.org/pkg/crypto/tls/) are used (recommended).
+
+Note that if TLS 1.3 is enabled (which is true by default), then the default TLS 1.3 cipher suites are always included, because Go’s standard library adds them to all connections. In order to exclude the default TLS 1.3 ciphers, TLS 1.3 must also be disabled, e.g. with the setting `ssl.supported_protocols = [TLSv1.2]`.
+
+The following cipher suites are available:
+
+| Cipher | Notes |
+| --- | --- |
+| ECDHE-ECDSA-AES-128-CBC-SHA | |
+| ECDHE-ECDSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default. |
+| ECDHE-ECDSA-AES-128-GCM-SHA256 | TLS 1.2 only. |
+| ECDHE-ECDSA-AES-256-CBC-SHA | |
+| ECDHE-ECDSA-AES-256-GCM-SHA384 | TLS 1.2 only. |
+| ECDHE-ECDSA-CHACHA20-POLY1305 | TLS 1.2 only. |
+| ECDHE-ECDSA-RC4-128-SHA | Disabled by default. RC4 not recommended. |
+| ECDHE-RSA-3DES-CBC3-SHA | |
+| ECDHE-RSA-AES-128-CBC-SHA | |
+| ECDHE-RSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default. |
+| ECDHE-RSA-AES-128-GCM-SHA256 | TLS 1.2 only. |
+| ECDHE-RSA-AES-256-CBC-SHA | |
+| ECDHE-RSA-AES-256-GCM-SHA384 | TLS 1.2 only. |
+| ECDHE-RSA-CHACHA20-POLY1305 | TLS 1.2 only. |
+| ECDHE-RSA-RC4-128-SHA | Disabled by default. RC4 not recommended. |
+| RSA-3DES-CBC3-SHA | |
+| RSA-AES-128-CBC-SHA | |
+| RSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default. |
+| RSA-AES-128-GCM-SHA256 | TLS 1.2 only. |
+| RSA-AES-256-CBC-SHA | |
+| RSA-AES-256-GCM-SHA384 | TLS 1.2 only. |
+| RSA-RC4-128-SHA | Disabled by default. RC4 not recommended. |
+
+Here is a list of acronyms used in defining the cipher suites:
+
+* 3DES: Cipher suites using triple DES
+* AES-128/256: Cipher suites using AES with 128/256-bit keys.
+* CBC: Cipher using Cipher Block Chaining as block cipher mode.
+* ECDHE: Cipher suites using Elliptic Curve Diffie-Hellman (DH) ephemeral key exchange.
+* ECDSA: Cipher suites using Elliptic Curve Digital Signature Algorithm for authentication.
+* GCM: Galois/Counter mode is used for symmetric key cryptography.
+* RC4: Cipher suites using RC4.
+* RSA: Cipher suites using RSA.
+* SHA, SHA256, SHA384: Cipher suites using SHA-1, SHA-256 or SHA-384.
+
+
+### `curve_types` [curve-types]
+
+The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange).
+
+The following elliptic curve types are available:
+
+* P-256
+* P-384
+* P-521
+* X25519
+
+
+### `ca_sha256` [ca-sha256]
+
+This configures a certificate pin that you can use to ensure that a specific certificate is part of the verified chain.
+
+The pin is a base64 encoded string of the SHA-256 of the certificate.
+
+::::{note}
+This check is not a replacement for the normal SSL validation, but it adds additional validation. If this option is used with `verification_mode` set to `none`, the check will always fail because it will not receive any verified chains.
+::::
+
+
+
+## Client configuration options [ssl-client-config]
+
+You can specify the following options in the `ssl` section of each subsystem that supports SSL.
+
+
+### `certificate_authorities` [client-certificate-authorities]
+
+The list of root certificates used for server verification. If `certificate_authorities` is empty or not set, the system keystore is used. If the certificate in `certificate_authorities` is self-signed, the host system needs to trust that CA cert as well.
+
+By default you can specify a list of files that `auditbeat` will read, but you can also embed a certificate directly in the `YAML` configuration:
+
+```yaml
+certificate_authorities:
+  - |
+    -----BEGIN CERTIFICATE-----
+    MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF
+    ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2
+    MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB
+    BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n
+    fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl
+    94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t
+    /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP
+    PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41
+    CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O
+    BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux
+    8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D
+    874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw
+    3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA
+    H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu
+    8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0
+    yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk
+    sxSmbIUfc2SGJGCJD4I=
+    -----END CERTIFICATE-----
+```
+
+
+### `certificate: "/etc/client/cert.pem"` [client-certificate]
+
+The path to the certificate for SSL client authentication is only required if `client_authentication` is specified. If the certificate is not specified, client authentication is not available. The connection might fail if the server requests client authentication. If the SSL server does not require client authentication, the certificate will be loaded, but not requested or used by the server.
+
+When this option is configured, the [`key`](#client-key) option is also required. The certificate option supports embedding of the certificate:
+
+```yaml
+certificate: |
+  -----BEGIN CERTIFICATE-----
+  MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF
+  ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2
+  MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB
+  BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n
+  fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl
+  94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t
+  /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP
+  PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41
+  CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O
+  BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux
+  8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D
+  874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw
+  3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA
+  H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu
+  8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0
+  yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk
+  sxSmbIUfc2SGJGCJD4I=
+  -----END CERTIFICATE-----
+```
+
+
+### `key: "/etc/client/cert.key"` [client-key]
+
+The client certificate key used for client authentication. It is only required if `client_authentication` is configured.
The key option supports embedding of the private key:
+
+```yaml
+key: |
+  -----BEGIN PRIVATE KEY-----
+  MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDXHufGPycpCOfI
+  sjl6cRn8NP4DLxdIVEAHFK0jMRDup32UQOPW+DleEsFpgN9/ebi9ngdjQfMvKnUP
+  Zrl1HTwVhOJfazGeoJn7vdDeQebhJfeDXHwX2DiotXyUPYu1ioU45UZDAoAZFj5F
+  KJLwWRUbfEbRe8yO+wUhKKxxkApPbfw+wUtBicn1RIX7W1nBRABt1UXKDIRe5FM2
+  MKfqhEqK4hUWC3g1r+vGTrxu3qFpzz7L2UrRFRIpo7yuTUhEhEGvcVsiTppTil4Z
+  HcprXFHf5158elEwhYJ5IM0nU1leNQiOgemifbLwkyNkLqCKth8V/4sezr1tYblZ
+  nMh1cclBAgMBAAECggEBAKdP5jyOicqknoG9/G564RcDsDyRt64NuO7I6hBg7SZx
+  Jn7UKWDdFuFP/RYtoabn6QOxkVVlydp5Typ3Xu7zmfOyss479Q/HIXxmmbkD0Kp0
+  eRm2KN3y0b6FySsS40KDRjKGQCuGGlNotW3crMw6vOvvsLTlcKgUHF054UVCHoK/
+  Piz7igkDU7NjvJeha53vXL4hIjb10UtJNaGPxIyFLYRZdRPyyBJX7Yt3w8dgz8WM
+  epOPu0dq3bUrY3WQXcxKZo6sQjE1h7kdl4TNji5jaFlvD01Y8LnyG0oThOzf0tve
+  Gaw+kuy17gTGZGMIfGVcdeb+SlioXMAAfOps+mNIwTECgYEA/gTO8W0hgYpOQJzn
+  BpWkic3LAoBXWNpvsQkkC3uba8Fcps7iiEzotXGfwYcb5Ewf5O3Lrz1EwLj7GTW8
+  VNhB3gb7bGOvuwI/6vYk2/dwo84bwW9qRWP5hqPhNZ2AWl8kxmZgHns6WTTxpkRU
+  zrfZ5eUrBDWjRU2R8uppgRImsxMCgYEA2MxuL/C/Ko0d7XsSX1kM4JHJiGpQDvb5
+  GUrlKjP/qVyUysNF92B9xAZZHxxfPWpdfGGBynhw7X6s+YeIoxTzFPZVV9hlkpAA
+  5igma0n8ZpZEqzttjVdpOQZK8o/Oni/Q2S10WGftQOOGw5Is8+LY30XnLvHBJhO7
+  TKMurJ4KCNsCgYAe5TDSVmaj3dGEtFC5EUxQ4nHVnQyCpxa8npL+vor5wSvmsfUF
+  hO0s3GQE4sz2qHecnXuPldEd66HGwC1m2GKygYDk/v7prO1fQ47aHi9aDQB9N3Li
+  e7Vmtdn3bm+lDjtn0h3Qt0YygWj+wwLZnazn9EaWHXv9OuEMfYxVgYKpdwKBgEze
+  Zy8+WDm5IWRjn8cI5wT1DBT/RPWZYgcyxABrwXmGZwdhp3wnzU/kxFLAl5BKF22T
+  kRZ+D+RVZvVutebE9c937BiilJkb0AXLNJwT9pdVLnHcN2LHHHronUhV7vetkop+
+  kGMMLlY0lkLfoGq1AxpfSbIea9KZam6o6VKxEnPDAoGAFDCJm+ZtsJK9nE5GEMav
+  NHy+PwkYsHhbrPl4dgStTNXLenJLIJ+Ke0Pcld4ZPfYdSyu/Tv4rNswZBNpNsW9K
+  0NwJlyMBfayoPNcJKXrH/csJY7hbKviAHr1eYy9/8OL0dHf85FV+9uY5YndLcsDc
+  nygO9KTJuUiBrLr0AHEnqko=
+  -----END PRIVATE KEY-----
+```
+
+
+### `key_passphrase` [client-key-passphrase]
+
+The passphrase used to decrypt an encrypted key stored in the configured `key` file.
+
+
+### `verification_mode` [client-verification-mode]
+
+Controls the verification of server certificates. Valid values are:
+
+`full`
+: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate.
+
+`strict`
+: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate. If the Subject Alternative Name is empty, it returns an error.
+
+`certificate`
+: Verifies that the provided certificate is signed by a trusted authority (CA), but does not perform any hostname verification.
+
+`none`
+: Performs *no verification* of the server’s certificate. This mode disables many of the security benefits of SSL/TLS and should only be used after cautious consideration. It is primarily intended as a temporary diagnostic mechanism when attempting to resolve TLS errors; its use in production environments is strongly discouraged.
+
+    The default value is `full`.
+
+
+
+### `ca_trusted_fingerprint` [ca_trusted_fingerprint]
+
+A hex-encoded SHA-256 fingerprint of a CA certificate. If this certificate is present in the chain during the handshake, it will be added to the `certificate_authorities` list and the handshake will continue normally.
+
+To get the fingerprint from a CA certificate on a Unix-like system, you can use the following command, where `ca.crt` is the certificate.
+
+```
+openssl x509 -fingerprint -sha256 -noout -in ./ca.crt | awk --field-separator="=" '{print $2}' | sed 's/://g'
+```
+
+
+## Server configuration options [ssl-server-config]
+
+You can specify the following options in the `ssl` section of each subsystem that supports SSL.
+
+
+### `certificate_authorities` [server-certificate-authorities]
+
+The list of root certificates for client verifications is only required if `client_authentication` is configured. If `certificate_authorities` is empty or not set, and `client_authentication` is configured, the system keystore is used.
+
+If `certificate_authorities` is self-signed, the host system needs to trust that CA cert as well. By default you can specify a list of files that `auditbeat` will read, but you can also embed a certificate directly in the `YAML` configuration:
+
+```yaml
+certificate_authorities:
+  - |
+    -----BEGIN CERTIFICATE-----
+    MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF
+    ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2
+    MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB
+    BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n
+    fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl
+    94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t
+    /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP
+    PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41
+    CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O
+    BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux
+    8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D
+    874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw
+    3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA
+    H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu
+    8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0
+    yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk
+    sxSmbIUfc2SGJGCJD4I=
+    -----END CERTIFICATE-----
+```
+
+
+### `certificate: "/etc/server/cert.pem"` [server-certificate]
+
+The end-entity (leaf) certificate that the server uses to identify itself. If the certificate is signed by a certificate authority (CA), then it should include intermediate CA certificates, sorted from leaf to root. For servers, a `certificate` and [`key`](#server-key) must be specified.
+
+The certificate option supports embedding of the PEM certificate content. This example contains the leaf certificate followed by the issuer’s certificate.
+ +```yaml +certificate: | + -----BEGIN CERTIFICATE----- + MIIF2jCCA8KgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBlMQswCQYDVQQGEwJVUzEW + MBQGA1UEBxMNU2FuIEZyYW5jaXNjbzEcMBoGA1UECRMTV2VzdCBFbCBDYW1pbm8g + UmVhbDEOMAwGA1UEERMFOTQwNDAxEDAOBgNVBAoTB0VsYXN0aWMwHhcNMjMxMDMw + MTkyMzU4WhcNMjMxMDMxMTkyMzU4WjB2MQswCQYDVQQGEwJVUzEWMBQGA1UEBxMN + U2FuIEZyYW5jaXNjbzEcMBoGA1UECRMTV2VzdCBFbCBDYW1pbm8gUmVhbDEOMAwG + A1UEERMFOTQwNDAxEDAOBgNVBAoTB0VsYXN0aWMxDzANBgNVBAMTBnNlcnZlcjCC + AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALW37cart7l0KE3LCStFbiGm + Rr/QSkuPv+Y+SXFT4zXrMFP3mOfUCVsR4lugv+jmql9qjbwR9jKsgKXA1kSvNXSZ + lLYWRcNnQ+QzwKxJf/jy246nSfqb2FKvVMs580lDwKHHxn/FSpHV93O4Goy5cLfF + ACE7BSdJdxl5DVAMmmkzd6gBGgN8dQIbcyJYuIZYQt44PqSYh/BomTyOXKrmvX4y + t7/pF+ldJjWZq/6SfCq6WE0jSrpI1P/42Qd9h5Tsnl6qsUGA2Tz5ZqKz2cyxaIlK + wL9tYDionfFIl+jZcxkGPF2a14O1TycCI0B/z+0VL+HR/8fKAB0NdP+QRLaPWOrn + DvraAO+bVKC6VrQyUYNUOwtd2gMUqm6Hzrf4s3wjP754eSJkvnSoSAB6l7ZmJKe5 + Pz5oDDOVPwKHv/MrhsCSMNFeXSEO+rq9TtYEAFQI5rFGHlURga8kA1T1pirHyEtS + 2o8GUSPSHVulaPdFnHg4xfTexfRYLCqya75ISJuY2/+2GblCie/re1GFitZCZ46/ + xiQQDOjgL96soDVZ+cTtMpXanslgDapTts9LPIJTd9FUJCY1omISGiSjABRuTlCV + 8054ja4BKVahSd5BqqtVkWyV64SCut6kce2ndwBkyFvlZ6cteLCW7KtzYvba4XBb + YIAs+H+9e/bZUVhws5mFAgMBAAGjgYMwgYAwDgYDVR0PAQH/BAQDAgeAMB0GA1Ud + JQQWMBQGCCsGAQUFBwMCBggrBgEFBQcDATAOBgNVHQ4EBwQFAQIDBAUwPwYDVR0R + BDgwNoIJbG9jYWxob3N0ghFiZWF0cy5leGFtcGxlLmNvbYcEfwAAAYcQAAAAAAAA + AAAAAAAAAAAAATANBgkqhkiG9w0BAQsFAAOCAgEAldSZOUi+OUR46ERQuINl1oED + mjNsQ9FNP/RDu8mPJaNb5v2sAbcpuZb9YdnScT+d+n0+LMd5uz2g67Qr73QCpXwL + 9YJIs56i7qMTKXlVvRQrvF9P/zP3sm5Zfd2I/x+8oXgEeYsxAWipJ8RsbnN1dtu8 + C4l+P0E58jjrjom11W90RiHYaT0SI2PPBTTRhYLz0HayThPZDMdFnIQqVxUYbQD5 + ybWu77hnsvC/g2C8/N2LAdQGJJ67owMa5T3YRneiaSvvOf3I45oeLE+olGAPdrSq + 5Sp0G7fcAKMRPxcwYeD7V5lfYMtb+RzECpYAHT8zHKLZl6/34q2k8P8EWEpAsD80 + +zSbCkdvNiU5lU90rV8E2baTKCg871k4O8sT48eUyDps6ZUCfT1dgefXeyOTV5bY + 864Zo6bWJhAJ7Qa2d4HJkqPzSbqsosHVobojgkOcMqkStLHd8sgtCoFmJMflbp7E + ghawl/RVFEkL9+TWy9fR8sJWRx13P8CUP6AL9kVmcU2c3gMNpvQfIii9QOnQrRsi + yZj9FKl+ZM49I6RQ6dY5JVgWtpVm/+GBVuy1Aj91JEjw7r1jAeir5K9LAXG8kEN9 + irndx1SK2MMTY79lGHFGQRv3vnQGI0Wzjtn31YJ7qIFNJ1WWbAZLR9FBtzmMeXM6 + puoJ9UYvfIcHUGPdZGU= + -----END CERTIFICATE----- + -----BEGIN CERTIFICATE----- + MIIFpjCCA46gAwIBAgIBATANBgkqhkiG9w0BAQsFADBlMQswCQYDVQQGEwJVUzEW + MBQGA1UEBxMNU2FuIEZyYW5jaXNjbzEcMBoGA1UECRMTV2VzdCBFbCBDYW1pbm8g + UmVhbDEOMAwGA1UEERMFOTQwNDAxEDAOBgNVBAoTB0VsYXN0aWMwHhcNMjMxMDMw + MTkyMzU2WhcNMjMxMDMxMTkyMzU2WjBlMQswCQYDVQQGEwJVUzEWMBQGA1UEBxMN + U2FuIEZyYW5jaXNjbzEcMBoGA1UECRMTV2VzdCBFbCBDYW1pbm8gUmVhbDEOMAwG + A1UEERMFOTQwNDAxEDAOBgNVBAoTB0VsYXN0aWMwggIiMA0GCSqGSIb3DQEBAQUA + A4ICDwAwggIKAoICAQDQP3hJt4jTIo+tBXB/R4RuBTvv6OOago9joxlNDm0abseJ + ehE0V8FDi0SSpa7ZiqwCGq/deu5OIWVNpFCLHeH5YBriNmB7oPkNRCleu50JsUrG + RjSTtBIJcu/CVpD7Q5XMbhbhYcPArrxrSreo3ox8a+2X7b8nA1xPgIcWqSCgs9iV + lwKHaQWNTUXYwwZG7b9WG4EJaki6t1+1QbDDJU0oWrZNg23wQEBvEVRDQs7kadvm + 9YtZLPULlSyV4Rk3yNW8dPXHjcz2wp3PBPIWIQe9mzYU608307TkUMVN2EEOImxl + Wm1RtXYvvVb1LiY0C2lYbN3jLZQzffK5RsS87ocqTQM+HvDBv/PupHDvW08wietu + RtRbdx/2cN0GLmOHnkWKx+GlYDZfAtIj958fTKl2hHyNqJ1pE7vksSYBwBxMFQem + eSGzw5pO53kmPcZO203YQ2qoJd7z1aLf7eAOqDn5zwlYNc00bZ6DwTZsyptGv9sZ + zcZuovppPgCN4f1I9ja/NPKep+sVKfQqR5HuOFOPFcr6oOioESJSgIvXXF9RhCVh + UMeZKWWSCNm1ea4h6q8OJdQfM7XXkXm+dEyF0TogC00CidZWuYMZcgXND5p/1Di5 + PkCKPUMllCoK0oaTfFioNW7qtNbDGQrW+spwDa4kjJNKYtDD0jjPgFMgSzQ2MwID + AQABo2EwXzAOBgNVHQ8BAf8EBAMCAoQwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsG + AQUFBwMBMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFImOXc9Tv+mgn9jOsPig + 9vlAUTa+MA0GCSqGSIb3DQEBCwUAA4ICAQBZ9tqU88Nmgf+vDgkKMKLmLMaRCRlV 
+ HcYrm7WoWLX+q6VSbmvf5eD5OrzzbAnnp16iXap8ivsAEFTo8XWh/bjl7G/2jetR + xZD2WHtzmAg3s4SVsEHIyFUF1ERwnjO2ndHjoIsx8ktUk1aNrmgPI6s07fkULDm+ + 2aXyBSZ9/oimZM/s3IqYJecxwE+yyS+FiS6mSDCCVIyQXdtVAbFHegyiBYv8EbwF + Xz70QiqQtxotGlfts/3uN1s+xnEoWz5E6S5DQn4xQh0xiKSXPizMXou9xKzypeSW + qtNdwtg62jKWDaVriBfrvoCnyjjCIjmcTcvA2VLmeZShyTuIucd0lkg2NKIGeM7I + o33hmdiKaop1fVtj8zqXvCRa3ecmlvcxPKX0otVFORFNOfaPjH/CjW0CnP0LByGK + YW19w0ncJZa9cc1SlNL28lnBhW+i1+ViR02wtjabH9XO+mtxuaEPDZ1hLhhjktqI + Y2oFUso4C5xiTU/hrH8+cFv0dn/+zyQoLfJEQbUX9biFeytt7T4Yynwhdy7jryqH + fdy/QM26YnsE8D7l4mv99z+zII0IRGnQOuLTuNAIyGJUf69hCDubZFDeHV/IB9hU + 6GA6lBpsJlTDgfJLbtKuAHxdn1DO+uGg0GxgwggH6Vh9x9yQK2E6BaepJisL/zNB + RQQmEyTn1hn/eA== + -----END CERTIFICATE----- +``` + + +### `key: "/etc/server/cert.key"` [server-key] + +The server certificate key used for authentication is required. The key option supports embedding of the private key: + +```yaml +key: | + -----BEGIN PRIVATE KEY----- + MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDXHufGPycpCOfI + sjl6cRn8NP4DLxdIVEAHFK0jMRDup32UQOPW+DleEsFpgN9/ebi9ngdjQfMvKnUP + Zrl1HTwVhOJfazGeoJn7vdDeQebhJfeDXHwX2DiotXyUPYu1ioU45UZDAoAZFj5F + KJLwWRUbfEbRe8yO+wUhKKxxkApPbfw+wUtBicn1RIX7W1nBRABt1UXKDIRe5FM2 + MKfqhEqK4hUWC3g1r+vGTrxu3qFpzz7L2UrRFRIpo7yuTUhEhEGvcVsiTppTil4Z + HcprXFHf5158elEwhYJ5IM0nU1leNQiOgemifbLwkyNkLqCKth8V/4sezr1tYblZ + nMh1cclBAgMBAAECggEBAKdP5jyOicqknoG9/G564RcDsDyRt64NuO7I6hBg7SZx + Jn7UKWDdFuFP/RYtoabn6QOxkVVlydp5Typ3Xu7zmfOyss479Q/HIXxmmbkD0Kp0 + eRm2KN3y0b6FySsS40KDRjKGQCuGGlNotW3crMw6vOvvsLTlcKgUHF054UVCHoK/ + Piz7igkDU7NjvJeha53vXL4hIjb10UtJNaGPxIyFLYRZdRPyyBJX7Yt3w8dgz8WM + epOPu0dq3bUrY3WQXcxKZo6sQjE1h7kdl4TNji5jaFlvD01Y8LnyG0oThOzf0tve + Gaw+kuy17gTGZGMIfGVcdeb+SlioXMAAfOps+mNIwTECgYEA/gTO8W0hgYpOQJzn + BpWkic3LAoBXWNpvsQkkC3uba8Fcps7iiEzotXGfwYcb5Ewf5O3Lrz1EwLj7GTW8 + VNhB3gb7bGOvuwI/6vYk2/dwo84bwW9qRWP5hqPhNZ2AWl8kxmZgHns6WTTxpkRU + zrfZ5eUrBDWjRU2R8uppgRImsxMCgYEA2MxuL/C/Ko0d7XsSX1kM4JHJiGpQDvb5 + GUrlKjP/qVyUysNF92B9xAZZHxxfPWpdfGGBynhw7X6s+YeIoxTzFPZVV9hlkpAA + 5igma0n8ZpZEqzttjVdpOQZK8o/Oni/Q2S10WGftQOOGw5Is8+LY30XnLvHBJhO7 + TKMurJ4KCNsCgYAe5TDSVmaj3dGEtFC5EUxQ4nHVnQyCpxa8npL+vor5wSvmsfUF + hO0s3GQE4sz2qHecnXuPldEd66HGwC1m2GKygYDk/v7prO1fQ47aHi9aDQB9N3Li + e7Vmtdn3bm+lDjtn0h3Qt0YygWj+wwLZnazn9EaWHXv9OuEMfYxVgYKpdwKBgEze + Zy8+WDm5IWRjn8cI5wT1DBT/RPWZYgcyxABrwXmGZwdhp3wnzU/kxFLAl5BKF22T + kRZ+D+RVZvVutebE9c937BiilJkb0AXLNJwT9pdVLnHcN2LHHHronUhV7vetkop+ + kGMMLlY0lkLfoGq1AxpfSbIea9KZam6o6VKxEnPDAoGAFDCJm+ZtsJK9nE5GEMav + NHy+PwkYsHhbrPl4dgStTNXLenJLIJ+Ke0Pcld4ZPfYdSyu/Tv4rNswZBNpNsW9K + 0NwJlyMBfayoPNcJKXrH/csJY7hbKviAHr1eYy9/8OL0dHf85FV+9uY5YndLcsDc + nygO9KTJuUiBrLr0AHEnqko= + -----END PRIVATE KEY----- +``` + + +### `key_passphrase` [server-key-passphrase] + +The passphrase is used to decrypt an encrypted key stored in the configured `key` file. + + +### `verification_mode` [server-verification-mode] + +Controls the verification of client certificates. Valid values are: + +`full` +: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate. + +`strict` +: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate. If the Subject Alternative Name is empty, it returns an error. 
+
+`certificate`
+: Verifies that the provided certificate is signed by a trusted authority (CA), but does not perform any hostname verification.
+
+`none`
+: Performs *no verification* of the server’s certificate. This mode disables many of the security benefits of SSL/TLS and should only be used after cautious consideration. It is primarily intended as a temporary diagnostic mechanism when attempting to resolve TLS errors; its use in production environments is strongly discouraged.
+
+  The default value is `full`.
+
+
+
+### `renegotiation` [server-renegotiation]
+
+This configures what types of TLS renegotiation are supported. The valid options are:
+
+`never`
+: Disables renegotiation.
+
+`once`
+: Allows a remote server to request renegotiation once per connection.
+
+`freely`
+: Allows a remote server to request renegotiation repeatedly.
+
+  The default value is `never`.
+
+
+
+### `restart_on_cert_change.enabled` [exit_on_cert_change_enabled]
+
+If set to `true`, Auditbeat will restart if any file listed by `key`, `certificate`, or `certificate_authorities` is modified.
+
+::::{note}
+This feature is NOT supported on Windows. The default value is `false`.
+::::
+
+
+::::{note}
+This feature requires the `execve` system call to be enabled. If you have a custom seccomp policy in place, make sure to allow for `execve`.
+::::
+
+
+
+### `restart_on_cert_change.period` [restart_on_cert_change_period]
+
+Specifies how often the files are checked for changes. Do not set the period to less than 1s because the modification time of files is often stored in seconds. Setting the period to less than 1s will result in a validation error and Auditbeat will not start. The default value is 1m.
+
diff --git a/docs/reference/auditbeat/configuration-template.md b/docs/reference/auditbeat/configuration-template.md
new file mode 100644
index 000000000000..e06b29197b15
--- /dev/null
+++ b/docs/reference/auditbeat/configuration-template.md
@@ -0,0 +1,112 @@
+---
+navigation_title: "Elasticsearch index template"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-template.html
+---
+
+# Configure Elasticsearch index template loading [configuration-template]
+
+
+The `setup.template` section of the `auditbeat.yml` config file specifies the [index template](docs-content://manage-data/data-store/templates.md) to use for setting mappings in Elasticsearch. If template loading is enabled (the default), Auditbeat loads the index template automatically after successfully connecting to Elasticsearch.
+
+::::{note}
+A connection to Elasticsearch is required to load the index template. If the configured output is not Elasticsearch (or {{ess}}), you must [load the template manually](/reference/auditbeat/auditbeat-template.md#load-template-manually).
+::::
+
+
+You can adjust the following settings to load your own template or overwrite an existing one.
+
+**`setup.template.enabled`**
+: Set to false to disable template loading. If this is set to false, you must [load the template manually](/reference/auditbeat/auditbeat-template.md#load-template-manually).
+
+**`setup.template.name`**
+: The name of the template. The default is `auditbeat`. The Auditbeat version is always appended to the given name, so the final name is `auditbeat-%{[agent.version]}`.
+
+**`setup.template.pattern`**
+: The template pattern to apply to the default index settings. The default pattern is `auditbeat`.
The Auditbeat version is always included in the pattern, so the final pattern is `auditbeat-%{[agent.version]}`. + + Example: + + ```yaml + setup.template.name: "auditbeat" + setup.template.pattern: "auditbeat" + ``` + + +**`setup.template.fields`** +: The path to the YAML file describing the fields. The default is `fields.yml`. If a relative path is set, it is considered relative to the config path. See the [Directory layout](/reference/auditbeat/directory-layout.md) section for details. + +**`setup.template.overwrite`** +: A boolean that specifies whether to overwrite the existing template. The default is false. Do not enable this option if you start more than one instance of Auditbeat at the same time. It can overload {{es}} by sending too many template update requests. + +**`setup.template.settings`** +: A dictionary of settings to place into the `settings.index` dictionary of the Elasticsearch template. For more details about the available Elasticsearch mapping options, please see the Elasticsearch [mapping reference](docs-content://manage-data/data-store/mapping.md). + + Example: + + ```yaml + setup.template.name: "auditbeat" + setup.template.fields: "fields.yml" + setup.template.overwrite: false + setup.template.settings: + index.number_of_shards: 1 + index.number_of_replicas: 1 + ``` + + +**`setup.template.settings._source`** +: A dictionary of settings for the `_source` field. For the available settings, please see the Elasticsearch [reference](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md). + + Example: + + ```yaml + setup.template.name: "auditbeat" + setup.template.fields: "fields.yml" + setup.template.overwrite: false + setup.template.settings: + _source.enabled: false + ``` + + +**`setup.template.append_fields`** +: A list of fields to be added to the template and {{kib}} index pattern. This setting adds new fields. It does not overwrite or change existing fields. + + This setting is useful when your data contains fields that Auditbeat doesn’t know about in advance. + + If `append_fields` is specified along with `overwrite: true`, Auditbeat overwrites the existing template and applies the new template when creating new indices. Existing indices are not affected. If you’re running multiple instances of Auditbeat with different `append_fields` settings, the last one writing the template takes precedence. + + Any changes to this setting also affect the {{kib}} index pattern. + + Example config: + + ```yaml + setup.template.overwrite: true + setup.template.append_fields: + - name: test.name + type: keyword + - name: test.hostname + type: long + ``` + + +**`setup.template.json.enabled`** +: Set to `true` to load a JSON-based template file. Specify the path to your {{es}} index template file and set the name of the template. + + ```yaml + setup.template.json.enabled: true + setup.template.json.path: "template.json" + setup.template.json.name: "template-name" + setup.template.json.data_stream: false + ``` + + +::::{note} +If the JSON template is used, the `fields.yml` is skipped for the template generation. +:::: + + +::::{note} +If the JSON template is a data stream, set `setup.template.json.data_stream`. 
+::::
+
+
diff --git a/docs/reference/auditbeat/configure-cloud-id.md b/docs/reference/auditbeat/configure-cloud-id.md
new file mode 100644
index 000000000000..7e6e27b99092
--- /dev/null
+++ b/docs/reference/auditbeat/configure-cloud-id.md
@@ -0,0 +1,34 @@
+---
+navigation_title: "{{ess}}"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configure-cloud-id.html
+---
+
+# Configure the output for {{ess}} on {{ecloud}} [configure-cloud-id]
+
+
+Auditbeat comes with two settings that simplify the output configuration when used together with [{{ess}}](https://www.elastic.co/cloud/elasticsearch-service?page=docs&placement=docs-body). When defined, these settings overwrite settings from other parts of the configuration.
+
+Example:
+
+```yaml
+cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw=="
+cloud.auth: "elastic:{pwd}"
+```
+
+These settings can also be specified on the command line, like this:
+
+```sh
+auditbeat -e -E cloud.id="<cloud-id>" -E cloud.auth="<cloud.auth>"
+```
+
+## `cloud.id` [_cloud_id]
+
+The Cloud ID, which can be found in the {{ess}} web console, is used by Auditbeat to resolve the {{es}} and {{kib}} URLs. This setting overwrites the `output.elasticsearch.hosts` and `setup.kibana.host` settings. For more on locating and configuring the Cloud ID, see [Configure Beats and Logstash with Cloud ID](docs-content://deploy-manage/deploy/cloud-enterprise/find-cloud-id.md).
+
+
+## `cloud.auth` [_cloud_auth]
+
+When specified, `cloud.auth` overwrites the `output.elasticsearch.username` and `output.elasticsearch.password` settings. Because the Kibana settings inherit the username and password from the {{es}} output, this can also be used to set the `setup.kibana.username` and `setup.kibana.password` options.
+
+
diff --git a/docs/reference/auditbeat/configuring-howto-auditbeat.md b/docs/reference/auditbeat/configuring-howto-auditbeat.md
new file mode 100644
index 000000000000..56bf5870a0a4
--- /dev/null
+++ b/docs/reference/auditbeat/configuring-howto-auditbeat.md
@@ -0,0 +1,46 @@
+---
+navigation_title: "Configure"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configuring-howto-auditbeat.html
+---
+
+# Configure Auditbeat [configuring-howto-auditbeat]
+
+
+::::{tip}
+To get started quickly, read [Quick start: installation and configuration](/reference/auditbeat/auditbeat-installation-configuration.md).
+::::
+
+
+To configure Auditbeat, edit the configuration file. The default configuration file is called `auditbeat.yml`. The location of the file varies by platform. To locate the file, see [Directory layout](/reference/auditbeat/directory-layout.md).
+
+There’s also a full example configuration file called `auditbeat.reference.yml` that shows all non-deprecated options.
+
+::::{tip}
+See the [Config File Format](/reference/libbeat/config-file-format.md) for more about the structure of the config file.
+::::
+
+
+The following topics describe how to configure Auditbeat:
+
+* [Modules](/reference/auditbeat/configuration-auditbeat.md)
+* [General settings](/reference/auditbeat/configuration-general-options.md)
+* [Project paths](/reference/auditbeat/configuration-path.md)
+* [Config file reloading](/reference/auditbeat/auditbeat-configuration-reloading.md)
+* [Output](/reference/auditbeat/configuring-output.md)
+* [SSL](/reference/auditbeat/configuration-ssl.md)
+* [Index lifecycle management (ILM)](/reference/auditbeat/ilm.md)
+* [Elasticsearch index template](/reference/auditbeat/configuration-template.md)
+* [{{kib}} endpoint](/reference/auditbeat/setup-kibana-endpoint.md)
+* [Kibana dashboards](/reference/auditbeat/configuration-dashboards.md)
+* [Processors](/reference/auditbeat/filtering-enhancing-data.md)
+* [Internal queue](/reference/auditbeat/configuring-internal-queue.md)
+* [Logging](/reference/auditbeat/configuration-logging.md)
+* [HTTP endpoint](/reference/auditbeat/http-endpoint.md)
+* [*Regular expression support*](/reference/auditbeat/regexp-support.md)
+* [Instrumentation](/reference/auditbeat/configuration-instrumentation.md)
+* [Feature flags](/reference/auditbeat/configuration-feature-flags.md)
+* [*auditbeat.reference.yml*](/reference/auditbeat/auditbeat-reference-yml.md)
+
+After changing configuration settings, you need to restart Auditbeat to pick up the changes.
+
diff --git a/docs/reference/auditbeat/configuring-ingest-node.md b/docs/reference/auditbeat/configuring-ingest-node.md
new file mode 100644
index 000000000000..178b83895baf
--- /dev/null
+++ b/docs/reference/auditbeat/configuring-ingest-node.md
@@ -0,0 +1,50 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/configuring-ingest-node.html
+---
+
+# Parse data using an ingest pipeline [configuring-ingest-node]
+
+When you use {{es}} for output, you can configure Auditbeat to use an [ingest pipeline](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md) to pre-process documents before the actual indexing takes place in {{es}}. An ingest pipeline is a convenient processing option when you want to do some extra processing on your data, but you do not require the full power of {{ls}}. For example, you can create an ingest pipeline in {{es}} that consists of one processor that removes a field in a document followed by another processor that renames a field.
+
+After defining the pipeline in {{es}}, you simply configure Auditbeat to use the pipeline. To configure Auditbeat, you specify the pipeline ID in the `pipeline` option under `output.elasticsearch` in the `auditbeat.yml` file:
+
+```yaml
+output.elasticsearch:
+  hosts: ["localhost:9200"]
+  pipeline: my_pipeline_id
+```
+
+For example, let’s say that you’ve defined the following pipeline in a file named `pipeline.json`:
+
+```json
+{
+  "description": "Test pipeline",
+  "processors": [
+    {
+      "lowercase": {
+        "field": "agent.name"
+      }
+    }
+  ]
+}
+```
+
+To add the pipeline in {{es}}, you would run:
+
+```shell
+curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/test-pipeline' -d@pipeline.json
+```
+
+Then in the `auditbeat.yml` file, you would specify:
+
+```yaml
+output.elasticsearch:
+  hosts: ["localhost:9200"]
+  pipeline: "test-pipeline"
+```
+
+When you run Auditbeat, the value of `agent.name` is converted to lowercase before indexing.
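+
+Before sending real events through it, you can dry-run the pipeline with the {{es}} simulate API. The following sketch is illustrative; the sample document and the `HOST-01` value are made up for the test:
+
+```shell
+curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_ingest/pipeline/test-pipeline/_simulate' -d '{
+  "docs": [
+    { "_source": { "agent": { "name": "HOST-01" } } }
+  ]
+}'
+```
+
+The response should show `agent.name` rendered as `host-01`, which confirms the pipeline behaves as expected.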
+ +For more information about defining a pre-processing pipeline, see the [ingest pipeline](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md) documentation. + diff --git a/docs/reference/auditbeat/configuring-internal-queue.md b/docs/reference/auditbeat/configuring-internal-queue.md new file mode 100644 index 000000000000..3331c1b2a1b8 --- /dev/null +++ b/docs/reference/auditbeat/configuring-internal-queue.md @@ -0,0 +1,144 @@ +--- +navigation_title: "Internal queue" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/configuring-internal-queue.html +--- + +# Configure the internal queue [configuring-internal-queue] + + +Auditbeat uses an internal queue to store events before publishing them. The queue is responsible for buffering and combining events into batches that can be consumed by the outputs. The outputs will use bulk operations to send a batch of events in one transaction. + +You can configure the type and behavior of the internal queue by setting options in the `queue` section of the `auditbeat.yml` config file or by setting options in the `queue` section of the output. Only one queue type can be configured. + +This sample configuration sets the memory queue to buffer up to 4096 events: + +```yaml +queue.mem: + events: 4096 +``` + + +## Configure the memory queue [configuration-internal-queue-memory] + +The memory queue keeps all events in memory. + +The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new events can be inserted into the memory queue. Only after the signal from the output will the queue free up space for more events to be accepted. + +The memory queue is controlled by the parameters `flush.min_events` and `flush.timeout`. `flush.min_events` gives a limit on the number of events that can be included in a single batch, and `flush.timeout` specifies how long the queue should wait to completely fill an event request. If the output supports a `bulk_max_size` parameter, the maximum batch size will be the smaller of `bulk_max_size` and `flush.min_events`. + +`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`. + +In synchronous mode, an event request is always filled as soon as events are available, even if there are not enough events to fill the requested batch. This is useful when latency must be minimized. To use synchronous mode, set `flush.timeout` to 0. + +For backwards compatibility, synchronous mode can also be activated by setting `flush.min_events` to 0 or 1. In this case, batch size will be capped at 1/2 the queue capacity. + +In asynchronous mode, an event request will wait up to the specified timeout to try and fill the requested batch completely. If the timeout expires, the queue returns a partial batch with all available events. To use asynchronous mode, set `flush.timeout` to a positive duration, e.g. `5s`. 
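+
+For example, a minimal sketch of a synchronous-mode queue for a latency-sensitive setup simply sets the timeout to zero:
+
+```yaml
+queue.mem:
+  events: 4096
+  flush.timeout: 0s # hand events to the output as soon as they are available
+```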
+
+This sample configuration forwards events to the output when there are enough events to fill the output’s request (usually controlled by `bulk_max_size`, and limited to at most 512 events by `flush.min_events`), or when events have been waiting for 5s without filling the requested size:
+
+```yaml
+queue.mem:
+  events: 4096
+  flush.min_events: 512
+  flush.timeout: 5s
+```
+
+
+## Configuration options [_configuration_options_13]
+
+You can specify the following options in the `queue.mem` section of the `auditbeat.yml` config file:
+
+
+#### `events` [queue-mem-events-option]
+
+Number of events the queue can store.
+
+The default value is 3200 events.
+
+
+#### `flush.min_events` [queue-mem-flush-min-events-option]
+
+If greater than 1, specifies the maximum number of events per batch. In this case, the output must wait for the queue to accumulate the requested number of events or for `flush.timeout` to expire before publishing.
+
+If 0 or 1, sets the maximum number of events per batch to half the queue size, and sets the queue to synchronous mode (equivalent to `flush.timeout` of 0).
+
+The default value is 1600.
+
+
+#### `flush.timeout` [queue-mem-flush-timeout-option]
+
+Maximum wait time for event requests from the output to be fulfilled. If set to 0s, events are returned immediately.
+
+The default value is 10s.
+
+
+## Configure the disk queue [configuration-internal-queue-disk]
+
+The disk queue stores pending events on the disk rather than main memory. This allows Beats to queue a larger number of events than is possible with the memory queue, and to save events when a Beat or device is restarted. This increased reliability comes with a performance tradeoff, as every incoming event must be written to and read from the device’s disk. However, for setups where the disk is not the main bottleneck, the disk queue gives a simple and relatively low-overhead way to add a layer of robustness to incoming event data.
+
+To enable the disk queue with default settings, specify a maximum size:
+
+```yaml
+queue.disk:
+  max_size: 10GB
+```
+
+The queue will use up to the specified maximum size on disk. It will only use as much space as required. For example, if the queue is only storing 1GB of events, then it will only occupy 1GB on disk no matter how high the maximum is. Queue data is deleted from disk after it has been successfully sent to the output.
+
+
+### Configuration options [configuration-internal-queue-disk-reference]
+
+You can specify the following options in the `queue.disk` section of the `auditbeat.yml` config file:
+
+
+#### `path` [_path]
+
+The path to the directory where the disk queue should store its data files. The directory is created on startup if it doesn’t exist.
+
+The default value is `"${path.data}/diskqueue"`.
+
+
+#### `max_size` (required) [_max_size_required]
+
+The maximum size the queue should use on disk. Events that exceed this maximum will either pause their input or be discarded, depending on the input’s configuration.
+
+A value of `0` means that no maximum size is enforced, and the queue can grow up to the amount of free space on the disk. This value should be used with caution, as completely filling a system’s main disk can make it inoperable. It is best to use this setting only with a dedicated data or backup partition that will not interfere with Auditbeat or the rest of the host system.
+
+The default value is `10GB`.
+
+
+#### `segment_size` [_segment_size]
+
+Data added to the queue is stored in segment files.
Each segment contains some number of events waiting to be sent to the outputs, and is deleted when all its events are sent. By default, segment size is limited to 1/10 of the maximum queue size. Using a smaller size means that the queue will use more data files, but they will be deleted more quickly after use. Using a larger size means some data will take longer to delete, but the queue will use fewer auxiliary files. It is usually fine to leave this value unchanged. + +The default value is `max_size / 10`. + + +#### `read_ahead` [_read_ahead] + +The number of events that should be read from disk into memory while waiting for an output to request them. If you find outputs are slowing down because they can’t read as many events at a time, adjusting this setting upward may help, at the cost of higher memory usage. + +The default value is `512`. + + +#### `write_ahead` [_write_ahead] + +The number of events the queue should accept and store in memory while waiting for them to be written to disk. If you find the queue’s memory use is too high because events are waiting too long to be written to disk, adjusting this setting downward may help, at the cost of reduced event throughput. On the other hand, if inputs are waiting or discarding events because they are being produced faster than the disk can handle, adjusting this setting upward may help, at the cost of higher memory usage. + +The default value is `2048`. + + +#### `retry_interval` [_retry_interval] + +Some disk errors may block operation of the queue, for example a permission error writing to the data directory, or a disk full error while writing an event. In this case, the queue reports the error and retries after pausing for the time specified in `retry_interval`. + +The default value is `1s` (one second). + + +#### `max_retry_interval` [_max_retry_interval] + +When there are multiple consecutive errors writing to the disk, the queue increases the retry interval by factors of 2 up to a maximum of `max_retry_interval`. Increase this value if you are concerned about logging too many errors or overloading the host system if the target disk becomes unavailable for an extended time. + +The default value is `30s` (thirty seconds). + diff --git a/docs/reference/auditbeat/configuring-output.md b/docs/reference/auditbeat/configuring-output.md new file mode 100644 index 000000000000..362fd8e7dc78 --- /dev/null +++ b/docs/reference/auditbeat/configuring-output.md @@ -0,0 +1,31 @@ +--- +navigation_title: "Output" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/configuring-output.html +--- + +# Configure the output [configuring-output] + + +You configure Auditbeat to write to a specific output by setting options in the Outputs section of the `auditbeat.yml` config file. Only a single output may be defined. + +The following topics describe how to configure each supported output. If you’ve secured the {{stack}}, also read [Secure](/reference/auditbeat/securing-auditbeat.md) for more about security-related configuration options. 
+ +* [{{ess}}](/reference/auditbeat/configure-cloud-id.md) +* [Elasticsearch](/reference/auditbeat/elasticsearch-output.md) +* [Logstash](/reference/auditbeat/logstash-output.md) +* [Kafka](/reference/auditbeat/kafka-output.md) +* [Redis](/reference/auditbeat/redis-output.md) +* [File](/reference/auditbeat/file-output.md) +* [Console](/reference/auditbeat/console-output.md) +* [Discard](/reference/auditbeat/discard-output.md) + + + + + + + + + + diff --git a/docs/reference/auditbeat/configuring-ssl-logstash.md b/docs/reference/auditbeat/configuring-ssl-logstash.md new file mode 100644 index 000000000000..70f884d2e0f5 --- /dev/null +++ b/docs/reference/auditbeat/configuring-ssl-logstash.md @@ -0,0 +1,118 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/configuring-ssl-logstash.html +--- + +# Secure communication with Logstash [configuring-ssl-logstash] + +You can use SSL mutual authentication to secure connections between Auditbeat and Logstash. This ensures that Auditbeat sends encrypted data to trusted Logstash servers only, and that the Logstash server receives data from trusted Auditbeat clients only. + +To use SSL mutual authentication: + +1. Create a certificate authority (CA) and use it to sign the certificates that you plan to use for Auditbeat and Logstash. Creating a correct SSL/TLS infrastructure is outside the scope of this document. There are many online resources available that describe how to create certificates. + + ::::{tip} + If you are using {{security-features}}, you can use the [elasticsearch-certutil tool](elasticsearch://reference/elasticsearch/command-line-tools/certutil.md) to generate certificates. + :::: + +2. Configure Auditbeat to use SSL. In the `auditbeat.yml` config file, specify the following settings under `ssl`: + + * `certificate_authorities`: Configures Auditbeat to trust any certificates signed by the specified CA. If `certificate_authorities` is empty or not set, the trusted certificate authorities of the host system are used. + * `certificate` and `key`: Specifies the certificate and key that Auditbeat uses to authenticate with Logstash. + + For example: + + ```yaml + output.logstash: + hosts: ["logs.mycompany.com:5044"] + ssl.certificate_authorities: ["/etc/ca.crt"] + ssl.certificate: "/etc/client.crt" + ssl.key: "/etc/client.key" + ``` + + For more information about these configuration options, see [SSL](/reference/auditbeat/configuration-ssl.md). + +3. Configure Logstash to use SSL. In the Logstash config file, specify the following settings for the [Beats input plugin for Logstash](logstash://reference/plugins-inputs-beats.md): + + * `ssl`: When set to true, enables Logstash to use SSL/TLS. + * `ssl_certificate_authorities`: Configures Logstash to trust any certificates signed by the specified CA. + * `ssl_certificate` and `ssl_key`: Specify the certificate and key that Logstash uses to authenticate with the client. + * `ssl_verify_mode`: Specifies whether the Logstash server verifies the client certificate against the CA. You need to specify either `peer` or `force_peer` to make the server ask for the certificate and validate it. If you specify `force_peer`, and Auditbeat doesn’t provide a certificate, the Logstash connection will be closed. If you choose not to use [certutil](elasticsearch://reference/elasticsearch/command-line-tools/certutil.md), the certificates that you obtain must allow for both `clientAuth` and `serverAuth` if the extended key usage extension is present. 
+
+  For example:
+
+  ```json
+  input {
+    beats {
+      port => 5044
+      ssl => true
+      ssl_certificate_authorities => ["/etc/ca.crt"]
+      ssl_certificate => "/etc/server.crt"
+      ssl_key => "/etc/server.key"
+      ssl_verify_mode => "force_peer"
+    }
+  }
+  ```
+
+  For more information about these options, see the [documentation for the Beats input plugin](logstash://reference/plugins-inputs-beats.md).
+
+
+
+## Validate the Logstash server’s certificate [testing-ssl-logstash]
+
+Before running Auditbeat, you should validate the Logstash server’s certificate. You can use `curl` to validate the certificate even though the protocol used to communicate with Logstash is not based on HTTP. For example:
+
+```shell
+curl -v --cacert ca.crt https://logs.mycompany.com:5044
+```
+
+If the test is successful, you’ll receive an empty response error:
+
+```shell
+* Rebuilt URL to: https://logs.mycompany.com:5044/
+* Trying 192.168.99.100...
+* Connected to logs.mycompany.com (192.168.99.100) port 5044 (#0)
+* TLS 1.2 connection using TLS_DHE_RSA_WITH_AES_256_CBC_SHA
+* Server certificate: logs.mycompany.com
+* Server certificate: mycompany.com
+> GET / HTTP/1.1
+> Host: logs.mycompany.com:5044
+> User-Agent: curl/7.43.0
+> Accept: */*
+>
+* Empty reply from server
+* Connection #0 to host logs.mycompany.com left intact
+curl: (52) Empty reply from server
+```
+
+The following example uses the IP address rather than the hostname to validate the certificate:
+
+```shell
+curl -v --cacert ca.crt https://192.168.99.100:5044
+```
+
+Validation for this test fails because the certificate is not valid for the specified IP address. It’s only valid for `logs.mycompany.com`, the hostname that appears in the Subject field of the certificate.
+
+```shell
+* Rebuilt URL to: https://192.168.99.100:5044/
+* Trying 192.168.99.100...
+* Connected to 192.168.99.100 (192.168.99.100) port 5044 (#0)
+* WARNING: using IP address, SNI is being disabled by the OS.
+* SSL: certificate verification failed (result: 5)
+* Closing connection 0
+curl: (51) SSL: certificate verification failed (result: 5)
+```
+
+See the [troubleshooting docs](/reference/auditbeat/ssl-client-fails.md) for info about resolving this issue.
+
+
+## Test the Auditbeat to Logstash connection [_test_the_auditbeat_to_logstash_connection]
+
+If you have Auditbeat running as a service, first stop the service. Then test your setup by running Auditbeat in the foreground so you can quickly see any errors that occur:
+
+```sh
+auditbeat -c auditbeat.yml -e -v
+```
+
+Any errors will be printed to the console. See the [troubleshooting docs](/reference/auditbeat/ssl-client-fails.md) for info about resolving common errors.
+
diff --git a/docs/reference/auditbeat/connection-problem.md b/docs/reference/auditbeat/connection-problem.md
new file mode 100644
index 000000000000..d52751bd2302
--- /dev/null
+++ b/docs/reference/auditbeat/connection-problem.md
@@ -0,0 +1,20 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/connection-problem.html
+---
+
+# Logstash connection doesn’t work [connection-problem]
+
+You may have configured {{ls}} or Auditbeat incorrectly. To resolve the issue:
+
+* Make sure that {{ls}} is running and you can connect to it. First, try to ping the {{ls}} host to verify that you can reach it from the host running Auditbeat. Then use either `nc` or `telnet` to make sure that the port is available.
For example:
+
+  ```shell
+  ping <hostname or IP>
+  telnet <hostname or IP> 5044
+  ```
+
+* Verify that the config file for Auditbeat specifies the correct port where {{ls}} is running.
+* Make sure that the {{es}} output is commented out in the config file and the {{ls}} output is uncommented.
+* Confirm that the most recent [Beats input plugin for {{ls}}](logstash://reference/plugins-inputs-beats.md) is installed and configured. Note that Beats will not connect to the Lumberjack input plugin. To learn how to install and update plugins, see [Working with plugins](logstash://reference/working-with-plugins.md).
+
diff --git a/docs/reference/auditbeat/console-output.md b/docs/reference/auditbeat/console-output.md
new file mode 100644
index 000000000000..bd1c028cb6fa
--- /dev/null
+++ b/docs/reference/auditbeat/console-output.md
@@ -0,0 +1,67 @@
+---
+navigation_title: "Console"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/console-output.html
+---
+
+# Configure the Console output [console-output]
+
+
+The Console output writes events in JSON format to stdout.
+
+::::{warning}
+The Console output should be used only for debugging issues as it can produce a large amount of logging data.
+::::
+
+
+To use this output, edit the Auditbeat configuration file to disable the {{es}} output by commenting it out, and enable the console output by adding `output.console`.
+
+Example configuration:
+
+```yaml
+output.console:
+  pretty: true
+```
+
+## Configuration options [_configuration_options_7]
+
+You can specify the following `output.console` options in the `auditbeat.yml` config file:
+
+### `enabled` [_enabled_6]
+
+The `enabled` config is a boolean setting to enable or disable the output. If set to false, the output is disabled.
+
+The default value is `true`.
+
+
+### `pretty` [_pretty]
+
+If `pretty` is set to true, events written to stdout will be nicely formatted. The default is false.
+
+
+### `codec` [_codec_4]
+
+Output codec configuration. If the `codec` section is missing, events will be JSON encoded using the `pretty` option.
+
+See [Change the output codec](/reference/auditbeat/configuration-output-codec.md) for more information.
+
+
+### `bulk_max_size` [_bulk_max_size_4]
+
+The maximum number of events to buffer internally during publishing. The default is 2048.
+
+Specifying a larger batch size may add some latency and buffering during publishing. However, for the Console output, this setting does not affect how events are published.
+
+Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.
+
+
+### `queue` [_queue_6]
+
+Configuration options for internal queue.
+
+See [Internal queue](/reference/auditbeat/configuring-internal-queue.md) for more information.
+
+Note: `queue` options can be set under `auditbeat.yml` or the `output` section but not both.
+
+
+
diff --git a/docs/reference/auditbeat/contributing-to-beats.md b/docs/reference/auditbeat/contributing-to-beats.md
new file mode 100644
index 000000000000..79b7d64c0734
--- /dev/null
+++ b/docs/reference/auditbeat/contributing-to-beats.md
@@ -0,0 +1,13 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/contributing-to-beats.html
+---
+
+# Contribute to Beats [contributing-to-beats]
+
+The Beats are open source and we love to receive contributions from our community — you!
+
+There are many ways to contribute, from writing tutorials or blog posts and improving the documentation to submitting bug reports, filing feature requests, or writing code that implements a whole new protocol, module, or Beat.
+
+The [Beats Developer Guide](https://www.elastic.co/guide/en/beats/devguide/current/index.html) is your one-stop shop for everything related to developing code for the Beats project.
+
diff --git a/docs/reference/auditbeat/convert.md b/docs/reference/auditbeat/convert.md
new file mode 100644
index 000000000000..cc42e44ba519
--- /dev/null
+++ b/docs/reference/auditbeat/convert.md
@@ -0,0 +1,42 @@
+---
+navigation_title: "convert"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/convert.html
+---
+
+# Convert [convert]
+
+
+The `convert` processor converts a field in the event to a different type, such as converting a string to an integer.
+
+The supported types include: `integer`, `long`, `float`, `double`, `string`, `boolean`, and `ip`.
+
+The `ip` type is effectively an alias for `string`, but with an added validation that the value is an IPv4 or IPv6 address.
+
+```yaml
+processors:
+  - convert:
+      fields:
+        - {from: "src_ip", to: "source.ip", type: "ip"}
+        - {from: "src_port", to: "source.port", type: "integer"}
+      ignore_missing: true
+      fail_on_error: false
+```
+
+The `convert` processor has the following configuration settings:
+
+`fields`
+: (Required) This is the list of fields to convert. At least one item must be contained in the list. Each item in the list must have a `from` key that specifies the source field. The `to` key is optional and specifies where to assign the converted value. If `to` is omitted, then the `from` field is updated in place. The `type` key specifies the data type to convert the value to. If `type` is omitted, then the processor copies or renames the field without any type conversion.
+
+`ignore_missing`
+: (Optional) If `true`, the processor continues to the next field when the `from` key is not found in the event. If `false`, the processor returns an error and does not process the remaining fields. Default is `false`.
+
+`fail_on_error`
+: (Optional) If `false`, type conversion failures are ignored and the processor continues to the next field. Default is `true`.
+
+`tag`
+: (Optional) An identifier for this processor. Useful for debugging.
+
+`mode`
+: (Optional) When both `from` and `to` are defined for a field, `mode` controls whether to `copy` or `rename` the field when the type conversion is successful. Default is `copy`.
+
diff --git a/docs/reference/auditbeat/copy-fields.md b/docs/reference/auditbeat/copy-fields.md
new file mode 100644
index 000000000000..0cb36da4318b
--- /dev/null
+++ b/docs/reference/auditbeat/copy-fields.md
@@ -0,0 +1,45 @@
+---
+navigation_title: "copy_fields"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/copy-fields.html
+---
+
+# Copy fields [copy-fields]
+
+
+The `copy_fields` processor takes the value of a field and copies it to a new field.
+
+You cannot use this processor to replace an existing field. If the target field already exists, you must [drop](/reference/auditbeat/drop-fields.md) or [rename](/reference/auditbeat/rename-fields.md) the field before using `copy_fields`.
+
+`fields`
+: List of `from` and `to` pairs to copy from and to. The `@metadata.` prefix is supported in both `from` and `to`, so values can be copied to and from the event metadata as well as the event fields.
+
+`fail_on_error`
+: (Optional) If set to `true` and an error occurs, the changes are reverted and the original is returned. If set to `false`, processing continues if an error occurs. Default is `true`.
+
+`ignore_missing`
+: (Optional) Indicates whether to ignore events that lack the source field. The default is `false`, which will fail processing of an event if a field is missing.
+
+For example, this configuration:
+
+```yaml
+processors:
+  - copy_fields:
+      fields:
+        - from: message
+          to: event.original
+      fail_on_error: false
+      ignore_missing: true
+```
+
+Copies the original `message` field to `event.original`:
+
+```json
+{
+  "message": "my-interesting-message",
+  "event": {
+    "original": "my-interesting-message"
+  }
+}
+```
+
diff --git a/docs/reference/auditbeat/could-not-locate-index-pattern.md b/docs/reference/auditbeat/could-not-locate-index-pattern.md
new file mode 100644
index 000000000000..d5aea2e11c9e
--- /dev/null
+++ b/docs/reference/auditbeat/could-not-locate-index-pattern.md
@@ -0,0 +1,20 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/could-not-locate-index-pattern.html
+---
+
+# Dashboard could not locate the index-pattern [could-not-locate-index-pattern]
+
+Typically Auditbeat sets up the index pattern automatically when it loads the index template. However, if for some reason Auditbeat loads the index template, but the index pattern does not get created correctly, you’ll see a "could not locate that index-pattern" error. To resolve this problem:
+
+1. Try running the `setup` command again. For example: `./auditbeat setup`.
+2. If that doesn’t work, go to the Management app in {{kib}}, and under **Index Patterns**, look for the pattern.
+
+    1. If the pattern doesn’t exist, create it manually.
+
+        * Set the **Time filter field name** to `@timestamp`.
+        * Set the **Custom index pattern ID** advanced option. For example, if your custom index name is `auditbeat-customname`, set the custom index pattern ID to `auditbeat-customname-*`.
+
+
+For more information, see [Creating an index pattern](docs-content://explore-analyze/find-and-organize/data-views.md) in the {{kib}} docs.
+
diff --git a/docs/reference/auditbeat/decode-base64-field.md b/docs/reference/auditbeat/decode-base64-field.md
new file mode 100644
index 000000000000..e1cd807859a3
--- /dev/null
+++ b/docs/reference/auditbeat/decode-base64-field.md
@@ -0,0 +1,35 @@
+---
+navigation_title: "decode_base64_field"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/decode-base64-field.html
+---
+
+# Decode Base64 fields [decode-base64-field]
+
+
+The `decode_base64_field` processor specifies a field to base64 decode. The `field` key contains a `from: old-key` and a `to: new-key` pair. `from` is the origin and `to` the target name of the field.
+
+To overwrite fields, either first rename the target field, or use the `drop_fields` processor to drop the field and then rename the field.
+
+```yaml
+processors:
+  - decode_base64_field:
+      field:
+        from: "field1"
+        to: "field2"
+      ignore_missing: false
+      fail_on_error: true
+```
+
+In the example above, `field1` is base64-decoded and the result is written to `field2`.
+
+The `decode_base64_field` processor has the following configuration settings:
+
+`ignore_missing`
+: (Optional) If set to true, no error is logged if a key that should be base64-decoded is missing. Default is `false`.
+
+`fail_on_error`
+: (Optional) If set to true, in case of an error the base64 decode of fields is stopped and the original event is returned.
If set to false, decoding continues even if an error occurred during decoding. Default is `true`.
+
+See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions.
+
diff --git a/docs/reference/auditbeat/decode-duration.md b/docs/reference/auditbeat/decode-duration.md
new file mode 100644
index 000000000000..b153ba498e29
--- /dev/null
+++ b/docs/reference/auditbeat/decode-duration.md
@@ -0,0 +1,25 @@
+---
+navigation_title: "decode_duration"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/decode-duration.html
+---
+
+# Decode duration [decode-duration]
+
+
+The `decode_duration` processor decodes a Go-style duration string into a specific `format`.
+
+For more information about the Go `time.Duration` string style, refer to the [Go documentation](https://pkg.go.dev/time#Duration).
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `field` | yes |  | Which field of the event should be decoded as `time.Duration` |
+| `format` | yes | `milliseconds` | Supported formats: `milliseconds`/`seconds`/`minutes`/`hours` |
+
+```yaml
+processors:
+  - decode_duration:
+      field: "app.rpc.cost"
+      format: "milliseconds"
+```
+
diff --git a/docs/reference/auditbeat/decode-json-fields.md b/docs/reference/auditbeat/decode-json-fields.md
new file mode 100644
index 000000000000..6a5e3aeba1c5
--- /dev/null
+++ b/docs/reference/auditbeat/decode-json-fields.md
@@ -0,0 +1,48 @@
+---
+navigation_title: "decode_json_fields"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/decode-json-fields.html
+---
+
+# Decode JSON fields [decode-json-fields]
+
+
+The `decode_json_fields` processor decodes fields containing JSON strings and replaces the strings with valid JSON objects.
+
+```yaml
+processors:
+  - decode_json_fields:
+      fields: ["field1", "field2", ...]
+      process_array: false
+      max_depth: 1
+      target: ""
+      overwrite_keys: false
+      add_error_key: true
+```
+
+The `decode_json_fields` processor has the following configuration settings:
+
+`fields`
+: The fields containing JSON strings to decode.
+
+`process_array`
+: (Optional) A Boolean value that specifies whether to process arrays. The default is `false`.
+
+`max_depth`
+: (Optional) The maximum parsing depth. A value of `1` will decode the JSON objects in fields indicated in `fields`, a value of `2` will also decode the objects embedded in the fields of these parsed documents. The default is `1`.
+
+`target`
+: (Optional) The field under which the decoded JSON will be written. By default, the decoded JSON object replaces the string field from which it was read. To merge the decoded JSON fields into the root of the event, specify `target` with an empty string (`target: ""`). Note that the `null` value (`target:`) is treated as if the field was not set.
+
+`overwrite_keys`
+: (Optional) A Boolean value that specifies whether existing keys in the event are overwritten by keys from the decoded JSON object. The default value is `false`.
+
+`expand_keys`
+: (Optional) A Boolean value that specifies whether keys in the decoded JSON should be recursively de-dotted and expanded into a hierarchical object structure. For example, `{"a.b.c": 123}` would be expanded into `{"a":{"b":{"c":123}}}`.
+
+`add_error_key`
+: (Optional) If set to `true` and an error occurs while decoding JSON keys, the `error` field will become a part of the event with the error message. If set to `false`, no error information is added to the event.
The default value is `false`.
+
+`document_id`
+: (Optional) JSON key that’s used as the document ID. If configured, the field will be removed from the original JSON document and stored in `@metadata._id`.
+
diff --git a/docs/reference/auditbeat/decode-xml-wineventlog.md b/docs/reference/auditbeat/decode-xml-wineventlog.md
new file mode 100644
index 000000000000..6de117485e6f
--- /dev/null
+++ b/docs/reference/auditbeat/decode-xml-wineventlog.md
@@ -0,0 +1,162 @@
+---
+navigation_title: "decode_xml_wineventlog"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/decode-xml-wineventlog.html
+---
+
+# Decode XML Wineventlog [decode-xml-wineventlog]
+
+
+::::{warning}
+This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
+::::
+
+
+The `decode_xml_wineventlog` processor decodes Windows Event Log data in XML format that is stored under the `field` key. It outputs the result into the `target_field`.
+
+The output fields will be the same as the [winlogbeat winlog fields](/reference/winlogbeat/exported-fields-winlog.md#_winlog).
+
+The supported configuration options are:
+
+`field`
+: (Required) Source field containing the XML. Defaults to `message`.
+
+`target_field`
+: (Required) The field under which the decoded XML will be written. To merge the decoded XML fields into the root of the event, specify `target_field` with an empty string (`target_field: ""`). The default value is `winlog`.
+
+`overwrite_keys`
+: (Optional) A boolean that specifies whether keys that already exist in the event are overwritten by keys from the decoded XML object. The default value is `true`.
+
+`map_ecs_fields`
+: (Optional) A boolean that specifies whether to map additional ECS fields when possible. Note that ECS field keys are placed outside of `target_field`. The default value is `true`.
+
+`ignore_missing`
+: (Optional) If `true`, the processor will not return an error when a specified field does not exist. Defaults to `false`.
+
+`ignore_failure`
+: (Optional) Ignore all errors produced by the processor. Defaults to `false`.
+
+`language`
+: (Optional) The language ID the events will be rendered in. The language will be forced regardless of the system language. Forwarded events will ignore this setting. A complete list of language IDs can be found [here](https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-lcid/a9eac961-e77d-41a6-90a5-ce1a8b0cdb9c). It defaults to `0`, which indicates to use the system language.
+ +Example: + +```yaml +processors: + - decode_xml_wineventlog: + field: event.original + target_field: winlog +``` + +```json +{ + "event": { + "original": "4672001254800x802000000000000011303SecurityvagrantS-1-5-18SYSTEMNT AUTHORITY0x3e7SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeSpecial privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeInformationSpecial LogonInfoSecurityMicrosoft Windows security auditing.Audit Success" + } +} +``` + +Will produce the following output: + +```json +{ + "event": { + "original": "4672001254800x802000000000000011303SecurityvagrantS-1-5-18SYSTEMNT AUTHORITY0x3e7SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeSpecial privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeInformationSpecial LogonInfoSecurityMicrosoft Windows security auditing.Audit Success", + "action": "Special Logon", + "code": "4672", + "kind": "event", + "outcome": "success", + "provider": "Microsoft-Windows-Security-Auditing", + }, + "host": { + "name": "vagrant", + }, + "log": { + "level": "information", + }, + "winlog": { + "channel": "Security", + "outcome": "success", + "activity_id": "{ffb23523-1f32-0000-c335-b2ff321fd701}", + "level": "information", + "event_id": 4672, + "provider_name": "Microsoft-Windows-Security-Auditing", + "record_id": 11303, + "computer_name": "vagrant", + "keywords_raw": 9232379236109516800, + "opcode": "Info", + "provider_guid": "{54849625-5478-4994-a5ba-3e3b0328c30d}", + "event_data": { + "SubjectUserSid": "S-1-5-18", + "SubjectUserName": "SYSTEM", + "SubjectDomainName": "NT AUTHORITY", + "SubjectLogonId": "0x3e7", + "PrivilegeList": "SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilege" + }, + "task": "Special Logon", + 
"keywords": [ + "Audit Success" + ], + "message": "Special privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilege", + "process": { + "pid": 652, + "thread": { + "id": 4660 + } + } + } +} +``` + +See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions. + +The field mappings are as follows: + +| Event Field | Source XML Element | Notes | +| --- | --- | --- | +| `winlog.channel` | `` | | +| `winlog.event_id` | `` | | +| `winlog.provider_name` | `` | `Name` attribute | +| `winlog.record_id` | `` | | +| `winlog.task` | `` | | +| `winlog.computer_name` | `` | | +| `winlog.keywords` | `` | list of each `Keyword` | +| `winlog.opcodes` | `` | | +| `winlog.provider_guid` | `` | `Guid` attribute | +| `winlog.version` | `` | | +| `winlog.time_created` | `` | `SystemTime` attribute | +| `winlog.outcome` | `` | "success" if bit 0x20000000000000 is set, "failure" if 0x10000000000000 is set | +| `winlog.level` | `` | converted to lowercase | +| `winlog.message` | `` | line endings removed | +| `winlog.user.identifier` | `` | | +| `winlog.user.domain` | `` | | +| `winlog.user.name` | `` | | +| `winlog.user.type` | `` | converted from integer to String | +| `winlog.event_data` | `` | map where `Name` attribute in Data element is key, and value is the value of the Data element | +| `winlog.user_data` | `` | map where `Name` attribute in Data element is key, and value is the value of the Data element | +| `winlog.activity_id` | `` | | +| `winlog.related_activity_id` | `` | | +| `winlog.kernel_time` | `` | | +| `winlog.process.pid` | `` | | +| `winlog.process.thread.id` | `` | | +| `winlog.processor_id` | `` | | +| `winlog.processor_time` | `` | | +| `winlog.session_id` | `` | | +| `winlog.user_time` | `` | | +| `winlog.error.code` | `` | | + +If `map_ecs_fields` is enabled then the following field mappings are also performed: + +| Event Field | Source XML or other field | Notes | +| --- | --- | --- | +| `event.code` | `winlog.event_id` | | +| `event.kind` | `"event"` | | +| `event.provider` | `` | `Name` attribute | +| `event.action` | `` | | +| `event.host.name` | `` | | +| `event.outcome` | `winlog.outcome` | | +| `log.level` | `winlog.level` | | +| `message` | `winlog.message` | | +| `error.code` | `winlog.error.code` | | +| `error.message` | `winlog.error.message` | | + diff --git a/docs/reference/auditbeat/decode-xml.md b/docs/reference/auditbeat/decode-xml.md new file mode 100644 index 000000000000..252cd8380806 --- /dev/null +++ b/docs/reference/auditbeat/decode-xml.md @@ -0,0 +1,96 @@ +--- +navigation_title: "decode_xml" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/decode-xml.html +--- + +# Decode XML [decode-xml] + + +The `decode_xml` processor decodes XML data that is stored under the `field` key. It outputs the result into the `target_field`. + +This example demonstrates how to decode an XML string contained in the `message` field and write the resulting fields into the root of the document. Any fields that already exist will be overwritten. 
+
+```yaml
+processors:
+  - decode_xml:
+      field: message
+      target_field: ""
+      overwrite_keys: true
+```
+
+By default, any decoding errors that occur will stop the processing chain, and the error will be added to the `error.message` field. To ignore all errors and continue to the next processor, set `ignore_failure: true`. To specifically ignore failures caused by `field` not existing, set `ignore_missing: true`.
+
+```yaml
+processors:
+  - decode_xml:
+      field: example
+      target_field: xml
+      ignore_missing: true
+      ignore_failure: true
+```
+
+By default, all keys converted from XML have their names converted to lowercase. To disable this behavior, set `to_lower: false`, as in this example:
+
+```yaml
+processors:
+  - decode_xml:
+      field: message
+      target_field: xml
+      to_lower: false
+```
+
+Example XML input:
+
+```xml
+<catalog>
+  <book seq="1">
+    <author>William H. Gaddis</author>
+    <title>The Recognitions</title>
+    <review>One of the great seminal American novels of the 20th century.</review>
+  </book>
+</catalog>
+```
+
+Will produce the following output:
+
+```json
+{
+  "xml": {
+    "catalog": {
+      "book": {
+        "author": "William H. Gaddis",
+        "review": "One of the great seminal American novels of the 20th century.",
+        "seq": "1",
+        "title": "The Recognitions"
+      }
+    }
+  }
+}
+```
+
+The supported configuration options are:
+
+`field`
+: (Required) Source field containing the XML. Defaults to `message`.
+
+`target_field`
+: (Optional) The field under which the decoded XML will be written. By default the decoded XML object replaces the field from which it was read. To merge the decoded XML fields into the root of the event, specify `target_field` with an empty string (`target_field: ""`). Note that the `null` value (`target_field:`) is treated as if the field was not set at all.
+
+`overwrite_keys`
+: (Optional) A boolean that specifies whether keys that already exist in the event are overwritten by keys from the decoded XML object. The default value is `true`.
+
+`to_lower`
+: (Optional) Converts all keys to lowercase. Accepts either `true` or `false`. The default value is `true`.
+
+`document_id`
+: (Optional) XML key to use as the document ID. If configured, the field will be removed from the original XML document and stored in `@metadata._id`.
+
+`ignore_missing`
+: (Optional) If `true` the processor will not return an error when a specified field does not exist. Defaults to `false`.
+
+`ignore_failure`
+: (Optional) Ignore all errors produced by the processor. Defaults to `false`.
+
+See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions.
+
diff --git a/docs/reference/auditbeat/decompress-gzip-field.md b/docs/reference/auditbeat/decompress-gzip-field.md
new file mode 100644
index 000000000000..18ef37619cce
--- /dev/null
+++ b/docs/reference/auditbeat/decompress-gzip-field.md
@@ -0,0 +1,35 @@
+---
+navigation_title: "decompress_gzip_field"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/decompress-gzip-field.html
+---
+
+# Decompress gzip fields [decompress-gzip-field]
+
+
+The `decompress_gzip_field` processor specifies a field to gzip decompress. The `field` key contains a `from: old-key` and a `to: new-key` pair, where `from` is the origin and `to` is the target name of the field.
+
+To overwrite fields, either first rename the target field, or use the `drop_fields` processor to drop the field and then decompress the field.
+
+```yaml
+processors:
+  - decompress_gzip_field:
+      field:
+        from: "field1"
+        to: "field2"
+      ignore_missing: false
+      fail_on_error: true
+```
+
+In the example above, `field1` is decompressed into `field2`.
+
+The `decompress_gzip_field` processor has the following configuration settings:
+
+`ignore_missing`
+: (Optional) If set to true, no error is logged when a key that should be decompressed is missing. Default is `false`.
+
+`fail_on_error`
+: (Optional) If set to true, the decompression of fields stops on the first error and the original event is returned. If set to false, decompression continues even if an error happened during decoding. Default is `true`.
+
+See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions.
+
diff --git a/docs/reference/auditbeat/defining-processors.md b/docs/reference/auditbeat/defining-processors.md
new file mode 100644
index 000000000000..dbcb0706c811
--- /dev/null
+++ b/docs/reference/auditbeat/defining-processors.md
@@ -0,0 +1,329 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/defining-processors.html
+---
+
+# Define processors [defining-processors]
+
+You can use processors to filter and enhance data before sending it to the configured output. To define a processor, you specify the processor name, an optional condition, and a set of parameters:
+
+```yaml
+processors:
+  - <processor_name>:
+      when:
+        <condition>
+      <parameters>
+
+  - <processor_name>:
+      when:
+        <condition>
+      <parameters>
+...
+```
+
+Where:
+
+* `<processor_name>` specifies a [processor](#processors) that performs some kind of action, such as selecting the fields that are exported or adding metadata to the event.
+* `<condition>` specifies an optional [condition](#conditions). If the condition is present, then the action is executed only if the condition is fulfilled. If no condition is set, then the action is always executed.
+* `<parameters>` is the list of parameters to pass to the processor.
+
+More complex conditional processing can be accomplished by using the if-then-else processor configuration. This allows multiple processors to be executed based on a single condition.
+
+```yaml
+processors:
+  - if:
+      <condition>
+    then: <1>
+      - <processor_name>:
+          <parameters>
+      - <processor_name>:
+          <parameters>
+      ...
+    else: <2>
+      - <processor_name>:
+          <parameters>
+      - <processor_name>:
+          <parameters>
+      ...
+```
+
+1. `then` must contain a single processor or a list of one or more processors to execute when the condition evaluates to true.
+2. `else` is optional. It can contain a single processor or a list of processors to execute when the condition evaluates to false.
+
+
+## Where are processors valid? [where-valid]
+
+Processors are valid:
+
+* At the top-level in the configuration. The processor is applied to all data collected by Auditbeat.
+* Under a specific module. The processor is applied to the data collected for that module.
+
+  ```yaml
+  auditbeat.modules:
+  - module: <module_name>
+    processors:
+      - <processor_name>:
+          when:
+            <condition>
+          <parameters>
+  ```
+
+
+
+## Processors [processors]
+
+The supported processors are:
+
+* [`add_cloud_metadata`](/reference/auditbeat/add-cloud-metadata.md)
+* [`add_cloudfoundry_metadata`](/reference/auditbeat/add-cloudfoundry-metadata.md)
+* [`add_docker_metadata`](/reference/auditbeat/add-docker-metadata.md)
+* [`add_fields`](/reference/auditbeat/add-fields.md)
+* [`add_host_metadata`](/reference/auditbeat/add-host-metadata.md)
+* [`add_id`](/reference/auditbeat/add-id.md)
+* [`add_kubernetes_metadata`](/reference/auditbeat/add-kubernetes-metadata.md)
+* [`add_labels`](/reference/auditbeat/add-labels.md)
+* [`add_locale`](/reference/auditbeat/add-locale.md)
+* [`add_nomad_metadata`](/reference/auditbeat/add-nomad-metadata.md)
+* [`add_observer_metadata`](/reference/auditbeat/add-observer-metadata.md)
+* [`add_process_metadata`](/reference/auditbeat/add-process-metadata.md)
+* [`add_session_metadata`](/reference/auditbeat/add-session-metadata.md)
+* [`add_tags`](/reference/auditbeat/add-tags.md)
+* [`append`](/reference/auditbeat/append.md)
+* [`community_id`](/reference/auditbeat/community-id.md)
+* [`convert`](/reference/auditbeat/convert.md)
+* [`copy_fields`](/reference/auditbeat/copy-fields.md)
+* [`decode_base64_field`](/reference/auditbeat/decode-base64-field.md)
+* [`decode_duration`](/reference/auditbeat/decode-duration.md)
+* [`decode_json_fields`](/reference/auditbeat/decode-json-fields.md)
+* [`decode_xml`](/reference/auditbeat/decode-xml.md)
+* [`decode_xml_wineventlog`](/reference/auditbeat/decode-xml-wineventlog.md)
+* [`decompress_gzip_field`](/reference/auditbeat/decompress-gzip-field.md)
+* [`detect_mime_type`](/reference/auditbeat/detect-mime-type.md)
+* [`dissect`](/reference/auditbeat/dissect.md)
+* [`dns`](/reference/auditbeat/processor-dns.md)
+* [`drop_event`](/reference/auditbeat/drop-event.md)
+* [`drop_fields`](/reference/auditbeat/drop-fields.md)
+* [`extract_array`](/reference/auditbeat/extract-array.md)
+* [`fingerprint`](/reference/auditbeat/fingerprint.md)
+* [`include_fields`](/reference/auditbeat/include-fields.md)
+* [`move_fields`](/reference/auditbeat/move-fields.md)
+* [`rate_limit`](/reference/auditbeat/rate-limit.md)
+* [`registered_domain`](/reference/auditbeat/processor-registered-domain.md)
+* [`rename`](/reference/auditbeat/rename-fields.md)
+* [`replace`](/reference/auditbeat/replace-fields.md)
+* [`syslog`](/reference/auditbeat/syslog.md)
+* [`translate_ldap_attribute`](/reference/auditbeat/processor-translate-guid.md)
+* [`translate_sid`](/reference/auditbeat/processor-translate-sid.md)
+* [`truncate_fields`](/reference/auditbeat/truncate-fields.md)
+* [`urldecode`](/reference/auditbeat/urldecode.md)
+
+
+## Conditions [conditions]
+
+Each condition receives a field to compare. You can specify multiple fields under the same condition by using `AND` between the fields (for example, `field1 AND field2`).
+
+For each field, you can specify a simple field name or a nested map, for example `dns.question.name`.
+
+See [Exported fields](/reference/auditbeat/exported-fields.md) for a list of all the fields that are exported by Auditbeat.
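+
+Before the full list of conditions below, one hedged sketch (reusing field names from the examples that follow): listing several fields under a single condition means all of them must match, which is the `AND` behavior described above:
+
+```yaml
+equals:
+  http.response.code: 200
+  status: OK
+```
+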
+ +The supported conditions are: + +* [`equals`](#condition-equals) +* [`contains`](#condition-contains) +* [`regexp`](#condition-regexp) +* [`range`](#condition-range) +* [`network`](#condition-network) +* [`has_fields`](#condition-has_fields) +* [`or`](#condition-or) +* [`and`](#condition-and) +* [`not`](#condition-not) + + +#### `equals` [condition-equals] + +With the `equals` condition, you can compare if a field has a certain value. The condition accepts only an integer or a string value. + +For example, the following condition checks if the response code of the HTTP transaction is 200: + +```yaml +equals: + http.response.code: 200 +``` + + +#### `contains` [condition-contains] + +The `contains` condition checks if a value is part of a field. The field can be a string or an array of strings. The condition accepts only a string value. + +For example, the following condition checks if an error is part of the transaction status: + +```yaml +contains: + status: "Specific error" +``` + + +#### `regexp` [condition-regexp] + +The `regexp` condition checks the field against a regular expression. The condition accepts only strings. + +For example, the following condition checks if the process name starts with `foo`: + +```yaml +regexp: + system.process.name: "^foo.*" +``` + + +#### `range` [condition-range] + +The `range` condition checks if the field is in a certain range of values. The condition supports `lt`, `lte`, `gt` and `gte`. The condition accepts only integer, float, or strings that can be converted to either of these as values. + +For example, the following condition checks for failed HTTP transactions by comparing the `http.response.code` field with 400. + +```yaml +range: + http.response.code: + gte: 400 +``` + +This can also be written as: + +```yaml +range: + http.response.code.gte: 400 +``` + +The following condition checks if the CPU usage in percentage has a value between 0.5 and 0.8. + +```yaml +range: + system.cpu.user.pct.gte: 0.5 + system.cpu.user.pct.lt: 0.8 +``` + + +#### `network` [condition-network] + +The `network` condition checks whether a field’s value falls within a specified IP network range. If multiple fields are provided, each field value must match its corresponding network range. You can specify multiple network ranges for a single field, and a match occurs if any one of the ranges matches. If the field value is an array of IPs, it will match if any of the IPs fall within any of the given ranges. Both IPv4 and IPv6 addresses are supported. + +The network range may be specified using CIDR notation, like "192.0.2.0/24" or "2001:db8::/32", or by using one of these named ranges: + +* `loopback` - Matches loopback addresses in the range of `127.0.0.0/8` or `::1/128`. +* `unicast` - Matches global unicast addresses defined in RFC 1122, RFC 4632, and RFC 4291 with the exception of the IPv4 broadcast address (`255.255.255.255`). This includes private address ranges. +* `multicast` - Matches multicast addresses. +* `interface_local_multicast` - Matches IPv6 interface-local multicast addresses. +* `link_local_unicast` - Matches link-local unicast addresses. +* `link_local_multicast` - Matches link-local multicast addresses. +* `private` - Matches private address ranges defined in RFC 1918 (IPv4) and RFC 4193 (IPv6). +* `public` - Matches addresses that are not loopback, unspecified, IPv4 broadcast, link local unicast, link local multicast, interface local multicast, or private. 
+* `unspecified` - Matches unspecified addresses (either the IPv4 address "0.0.0.0" or the IPv6 address "::").
+
+The following condition returns true if the `source.ip` value is within the private address space.
+
+```yaml
+network:
+  source.ip: private
+```
+
+This condition returns true if the `destination.ip` value is within the IPv4 range of `192.168.1.0` - `192.168.1.255`.
+
+```yaml
+network:
+  destination.ip: '192.168.1.0/24'
+```
+
+And this condition returns true when `destination.ip` is within any of the given subnets.
+
+```yaml
+network:
+  destination.ip: ['192.168.1.0/24', '10.0.0.0/8', loopback]
+```
+
+
+#### `has_fields` [condition-has_fields]
+
+The `has_fields` condition checks if all the given fields exist in the event. The condition accepts a list of string values denoting the field names.
+
+For example, the following condition checks if the `http.response.code` field is present in the event.
+
+```yaml
+has_fields: ['http.response.code']
+```
+
+
+#### `or` [condition-or]
+
+The `or` operator receives a list of conditions.
+
+```yaml
+or:
+  - <condition1>
+  - <condition2>
+  - <condition3>
+  ...
+```
+
+For example, to configure the condition `http.response.code = 304 OR http.response.code = 404`:
+
+```yaml
+or:
+  - equals:
+      http.response.code: 304
+  - equals:
+      http.response.code: 404
+```
+
+
+#### `and` [condition-and]
+
+The `and` operator receives a list of conditions.
+
+```yaml
+and:
+  - <condition1>
+  - <condition2>
+  - <condition3>
+  ...
+```
+
+For example, to configure the condition `http.response.code = 200 AND status = OK`:
+
+```yaml
+and:
+  - equals:
+      http.response.code: 200
+  - equals:
+      status: OK
+```
+
+To configure a condition like `<condition1> OR <condition2> AND <condition3>`:
+
+```yaml
+or:
+  - <condition1>
+  - and:
+    - <condition2>
+    - <condition3>
+```
+
+
+#### `not` [condition-not]
+
+The `not` operator receives the condition to negate.
+
+```yaml
+not:
+  <condition>
+```
+
+For example, to configure the condition `NOT status = OK`:
+
+```yaml
+not:
+  equals:
+    status: OK
+```
+
+
diff --git a/docs/reference/auditbeat/detect-mime-type.md b/docs/reference/auditbeat/detect-mime-type.md
new file mode 100644
index 000000000000..c96dfec51582
--- /dev/null
+++ b/docs/reference/auditbeat/detect-mime-type.md
@@ -0,0 +1,22 @@
+---
+navigation_title: "detect_mime_type"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/detect-mime-type.html
+---
+
+# Detect mime type [detect-mime-type]
+
+
+The `detect_mime_type` processor attempts to detect a MIME type for a field that contains a given stream of bytes. The `field` key contains the field used as the data source, and the `target` key contains the field to populate with the detected type. You can use the `@metadata.` prefix in `target` to set the value in the event metadata instead of in fields.
+
+```yaml
+processors:
+  - detect_mime_type:
+      field: http.request.body.content
+      target: http.request.mime_type
+```
+
+In the example above, `http.request.body.content` is used as the source, and `http.request.mime_type` is set to the detected MIME type.
+
+See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions.
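+
+As a variation on the example above, a minimal sketch (same source field) that writes the detected type into the event metadata instead, using the `@metadata.` prefix mentioned earlier:
+
+```yaml
+processors:
+  - detect_mime_type:
+      field: http.request.body.content
+      target: "@metadata.mime_type"  # stored in event metadata, not in fields
+```
+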
+ diff --git a/docs/reference/auditbeat/diff-logstash-beats.md b/docs/reference/auditbeat/diff-logstash-beats.md new file mode 100644 index 000000000000..2fece6444942 --- /dev/null +++ b/docs/reference/auditbeat/diff-logstash-beats.md @@ -0,0 +1,13 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/diff-logstash-beats.html +--- + +# Not sure whether to use Logstash or Beats [diff-logstash-beats] + +Beats are lightweight data shippers that you install as agents on your servers to send specific types of operational data to {{es}}. Beats have a small footprint and use fewer system resources than {{ls}}. + +{{ls}} has a larger footprint, but provides a broad array of input, filter, and output plugins for collecting, enriching, and transforming data from a variety of sources. + +For more information, see the [{{ls}} Introduction](logstash://reference/index.md) and the [Beats Overview](/reference/index.md). + diff --git a/docs/reference/auditbeat/directory-layout.md b/docs/reference/auditbeat/directory-layout.md new file mode 100644 index 000000000000..f8b5d9103e2d --- /dev/null +++ b/docs/reference/auditbeat/directory-layout.md @@ -0,0 +1,70 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/directory-layout.html +--- + +# Directory layout [directory-layout] + +The directory layout of an installation is as follows: + +::::{tip} +Archive installation has a different layout. See [zip, tar.gz, or tgz](#directory-layout-archive). +:::: + + +| Type | Description | Default Location | Config Option | +| --- | --- | --- | --- | +| home | Home of the Auditbeat installation. | | `path.home` | +| bin | The location for the binary files. | `{path.home}/bin` | | +| config | The location for configuration files. | `{path.home}` | `path.config` | +| data | The location for persistent data files. | `{path.home}/data` | `path.data` | +| logs | The location for the logs created by Auditbeat. | `{path.home}/logs` | `path.logs` | + +You can change these settings by using CLI flags or setting [path options](/reference/auditbeat/configuration-path.md) in the configuration file. + +## Default paths [_default_paths] + +Auditbeat uses the following default paths unless you explicitly change them. + + +#### deb and rpm [_deb_and_rpm] + +| Type | Description | Location | +| --- | --- | --- | +| home | Home of the Auditbeat installation. | `/usr/share/auditbeat` | +| bin | The location for the binary files. | `/usr/share/auditbeat/bin` | +| config | The location for configuration files. | `/etc/auditbeat` | +| data | The location for persistent data files. | `/var/lib/auditbeat` | +| logs | The location for the logs created by Auditbeat. | `/var/log/auditbeat` | + +For the deb and rpm distributions, these paths are set in the init script or in the systemd unit file. Make sure that you start the Auditbeat service by using the preferred operating system method (init scripts or `systemctl`). Otherwise the paths might be set incorrectly. + + +#### docker [_docker] + +| Type | Description | Location | +| --- | --- | --- | +| home | Home of the Auditbeat installation. | `/usr/share/auditbeat` | +| bin | The location for the binary files. | `/usr/share/auditbeat` | +| config | The location for configuration files. | `/usr/share/auditbeat` | +| data | The location for persistent data files. | `/usr/share/auditbeat/data` | +| logs | The location for the logs created by Auditbeat. 
| `/usr/share/auditbeat/logs` | + + +#### zip, tar.gz, or tgz [directory-layout-archive] + +| Type | Description | Location | +| --- | --- | --- | +| home | Home of the Auditbeat installation. | `{extract.path}` | +| bin | The location for the binary files. | `{extract.path}` | +| config | The location for configuration files. | `{extract.path}` | +| data | The location for persistent data files. | `{extract.path}/data` | +| logs | The location for the logs created by Auditbeat. | `{extract.path}/logs` | + +For the zip, tar.gz, or tgz distributions, these paths are based on the location of the extracted binary file. This means that if you start Auditbeat with the following simple command, all paths are set correctly: + +```sh +./auditbeat +``` + + diff --git a/docs/reference/auditbeat/discard-output.md b/docs/reference/auditbeat/discard-output.md new file mode 100644 index 000000000000..e5c4fc8892e9 --- /dev/null +++ b/docs/reference/auditbeat/discard-output.md @@ -0,0 +1,37 @@ +--- +navigation_title: "Discard" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/discard-output.html +--- + +# Configure the Discard output [discard-output] + + +The Discard output throws away data. + +::::{warning} +The Discard output should be used only for development or debugging issues. Data is lost. +:::: + + +This can be useful if you want to work on your input configuration without needing to configure an output. It can also be useful to test how changes in input and processor configuration affect performance. + +Example configuration: + +```yaml +output.discard: + enabled: true +``` + +## Configuration options [_configuration_options_8] + +You can specify the following `output.discard` options in the `auditbeat.yml` config file: + +### `enabled` [_enabled_7] + +The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled. + +The default value is `true`. + + + diff --git a/docs/reference/auditbeat/dissect.md b/docs/reference/auditbeat/dissect.md new file mode 100644 index 000000000000..d76e3fcb122f --- /dev/null +++ b/docs/reference/auditbeat/dissect.md @@ -0,0 +1,95 @@ +--- +navigation_title: "dissect" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/dissect.html +--- + +# Dissect strings [dissect] + + +The `dissect` processor tokenizes incoming strings using defined patterns. + +```yaml +processors: + - dissect: + tokenizer: "%{key1} %{key2} %{key3|convert_datatype}" + field: "message" + target_prefix: "dissect" +``` + +The `dissect` processor has the following configuration settings: + +`tokenizer` +: The field used to define the **dissection** pattern. Optional convert datatype can be provided after the key using `|` as separator to convert the value from string to integer, long, float, double, boolean or ip. + +`field` +: (Optional) The event field to tokenize. Default is `message`. + +`target_prefix` +: (Optional) The name of the field where the values will be extracted. When an empty string is defined, the processor will create the keys at the root of the event. Default is `dissect`. When the target key already exists in the event, the processor won’t replace it and log an error; you need to either drop or rename the key before using dissect, or enable the `overwrite_keys` flag. + +`ignore_failure` +: (Optional) Flag to control whether the processor returns an error if the tokenizer fails to match the message field. 
If set to true, the processor will silently restore the original event, allowing execution of subsequent processors (if any). If set to false (default), the processor will log an error, preventing execution of other processors.
+
+`overwrite_keys`
+: (Optional) When set to true, the processor will overwrite existing keys in the event. The default is false, which causes the processor to fail when a key already exists.
+
+`trim_values`
+: (Optional) Enables the trimming of the extracted values. Useful to remove leading and/or trailing spaces. Possible values are:
+
+    * `none`: (default) no trimming is performed.
+    * `left`: values are trimmed on the left (leading).
+    * `right`: values are trimmed on the right (trailing).
+    * `all`: values are trimmed for leading and trailing.
+
+
+`trim_chars`
+: (Optional) Set of characters to trim from values, when trimming is enabled. The default is to trim the space character (`" "`). To trim multiple characters, simply set it to a string containing all characters to trim. For example, `trim_chars: " \t"` will trim spaces and/or tabs.
+
+For tokenization to be successful, all keys must be found and extracted. If one of them cannot be found, an error is logged and no modification is done on the original event.
+
+::::{note}
+A key can contain any characters except reserved suffix or prefix modifiers: `/`, `&`, `+`, `#` and `?`.
+::::
+
+
+See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions.
+
+## Dissect example [dissect-example]
+
+For this example, imagine that an application generates the following messages:
+
+```sh
+"321 - App01 - WebServer is starting"
+"321 - App01 - WebServer is up and running"
+"321 - App01 - WebServer is scaling 2 pods"
+"789 - App02 - Database will be restarted in 5 minutes"
+"789 - App02 - Database is up and running"
+"789 - App02 - Database is refreshing tables"
+```
+
+Use the `dissect` processor to split each message into three fields, for example, `service.pid`, `service.name` and `service.status`:
+
+```yaml
+processors:
+  - dissect:
+      tokenizer: '"%{service.pid|integer} - %{service.name} - %{service.status}"'
+      field: "message"
+      target_prefix: ""
+```
+
+This configuration produces fields like:
+
+```json
+"service": {
+  "pid": 321,
+  "name": "App01",
+  "status": "WebServer is up and running"
+},
+```
+
+`service.name` is an ECS [keyword field](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md), which means that you can use it in {{es}} for filtering, sorting, and aggregations.
+
+When possible, use ECS-compatible field names. For more information, see the [Elastic Common Schema](ecs://reference/index.md) documentation.
+
+
diff --git a/docs/reference/auditbeat/drop-event.md b/docs/reference/auditbeat/drop-event.md
new file mode 100644
index 000000000000..a5da398445e5
--- /dev/null
+++ b/docs/reference/auditbeat/drop-event.md
@@ -0,0 +1,20 @@
+---
+navigation_title: "drop_event"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/drop-event.html
+---
+
+# Drop events [drop-event]
+
+
+The `drop_event` processor drops the entire event if the associated condition is fulfilled. The condition is mandatory, because without one, all the events are dropped.
+
+```yaml
+processors:
+  - drop_event:
+      when:
+        condition
+```
+
+See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions.
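+
+Because the condition is mandatory, a concrete sketch may help; this hypothetical configuration drops all events whose HTTP response code is in the 2xx range, reusing the `range` condition shown earlier:
+
+```yaml
+processors:
+  - drop_event:
+      when:
+        range:
+          http.response.code.gte: 200
+          http.response.code.lt: 300
+```
+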
+ diff --git a/docs/reference/auditbeat/drop-fields.md b/docs/reference/auditbeat/drop-fields.md new file mode 100644 index 000000000000..ba62bb65b75c --- /dev/null +++ b/docs/reference/auditbeat/drop-fields.md @@ -0,0 +1,35 @@ +--- +navigation_title: "drop_fields" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/drop-fields.html +--- + +# Drop fields from events [drop-fields] + + +The `drop_fields` processor specifies which fields to drop if a certain condition is fulfilled. The condition is optional. If it’s missing, the specified fields are always dropped. The `@timestamp` and `type` fields cannot be dropped, even if they show up in the `drop_fields` list. + +```yaml +processors: + - drop_fields: + when: + condition + fields: ["field1", "field2", ...] + ignore_missing: false +``` + +See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions. + +::::{note} +If you define an empty list of fields under `drop_fields`, then no fields are dropped. +:::: + + +The `drop_fields` processor has the following configuration settings: + +`fields` +: If non-empty, a list of matching field names will be removed. Any element in array can contain a regular expression delimited by two slashes (*/reg_exp/*), in order to match (name) and remove more than one field. + +`ignore_missing` +: (Optional) If `true` the processor will not return an error when a specified field does not exist. Defaults to `false`. + diff --git a/docs/reference/auditbeat/elasticsearch-output.md b/docs/reference/auditbeat/elasticsearch-output.md new file mode 100644 index 000000000000..e075de21509c --- /dev/null +++ b/docs/reference/auditbeat/elasticsearch-output.md @@ -0,0 +1,520 @@ +--- +navigation_title: "Elasticsearch" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/elasticsearch-output.html +--- + +# Configure the Elasticsearch output [elasticsearch-output] + + +The Elasticsearch output sends events directly to Elasticsearch using the Elasticsearch HTTP API. + +Example configuration: + +```yaml +output.elasticsearch: + hosts: ["https://myEShost:9200"] <1> +``` + +1. To enable SSL, add `https` to all URLs defined under *hosts*. + + +When sending data to a secured cluster through the `elasticsearch` output, Auditbeat can use any of the following authentication methods: + +* Basic authentication credentials (username and password). +* Token-based (API key) authentication. +* Public Key Infrastructure (PKI) certificates. + +**Basic authentication:** + +```yaml +output.elasticsearch: + hosts: ["https://myEShost:9200"] + username: "auditbeat_writer" + password: "{pwd}" +``` + +**API key authentication:** + +```yaml +output.elasticsearch: + hosts: ["https://myEShost:9200"] + api_key: "ZCV7VnwBgnX0T19fN8Qe:KnR6yE41RrSowb0kQ0HWoA" +``` + +**PKI certificate authentication:** + +```yaml +output.elasticsearch: + hosts: ["https://myEShost:9200"] + ssl.certificate: "/etc/pki/client/cert.pem" + ssl.key: "/etc/pki/client/cert.key" +``` + +See [*Secure communication with Elasticsearch*](/reference/auditbeat/securing-communication-elasticsearch.md) for details on each authentication method. + +## Compatibility [_compatibility] + +This output works with all compatible versions of Elasticsearch. See the [Elastic Support Matrix](https://www.elastic.co/support/matrix#matrix_compatibility). + +Optionally, you can set Auditbeat to only connect to instances that are at least on the same version as the Beat. 
The check can be enabled by setting `output.elasticsearch.allow_older_versions` to `false`. Leaving the setting at its default value of `true` avoids an issue where Auditbeat cannot connect to {{es}} after it has been upgraded to a version newer than the rest of the {{stack}}.
+
+
+## Configuration options [_configuration_options_2]
+
+You can specify the following options in the `elasticsearch` section of the `auditbeat.yml` config file:
+
+### `enabled` [_enabled]
+
+The enabled config is a boolean setting to enable or disable the output. If set to `false`, the output is disabled.
+
+The default value is `true`.
+
+
+### `hosts` [hosts-option]
+
+The list of Elasticsearch nodes to connect to. The events are distributed to these nodes in round robin order. If one node becomes unreachable, the event is automatically sent to another node. Each Elasticsearch node can be defined as a `URL` or `IP:PORT`. For example: `http://192.15.3.2`, `https://es.found.io:9230` or `192.24.3.2:9300`. If no port is specified, `9200` is used.
+
+::::{note}
+When a node is defined as an `IP:PORT`, the *scheme* and *path* are taken from the [`protocol`](#protocol-option) and [`path`](#path-option) config options.
+::::
+
+
+```yaml
+output.elasticsearch:
+  hosts: ["10.45.3.2:9220", "10.45.3.1:9230"]
+  protocol: https
+  path: /elasticsearch
+```
+
+In the previous example, the Elasticsearch nodes are available at `https://10.45.3.2:9220/elasticsearch` and `https://10.45.3.1:9230/elasticsearch`.
+
+
+### `compression_level` [compression-level-option]
+
+The gzip compression level. Setting this value to `0` disables compression. The compression level must be in the range of `1` (best speed) to `9` (best compression).
+
+Increasing the compression level reduces network usage but increases CPU usage.
+
+The default value is `1`.
+
+
+### `escape_html` [_escape_html]
+
+Configure escaping of HTML in strings. Set to `true` to enable escaping.
+
+The default value is `false`.
+
+
+### `worker` or `workers` [worker-option]
+
+The number of workers per configured host publishing events to Elasticsearch. This is best used with load balancing mode enabled. Example: if you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host). A combined sketch appears after the `api_key` section below.
+
+The default value is `1`.
+
+
+### `loadbalance` [_loadbalance]
+
+When `loadbalance: true` is set, Auditbeat connects to all configured hosts and sends data through all connections in parallel. If a connection fails, data is sent to the remaining hosts until it can be reestablished. Data will still be sent as long as Auditbeat can connect to at least one of its configured hosts.
+
+When `loadbalance: false` is set, Auditbeat sends data to a single host at a time. The target host is chosen at random from the list of configured hosts, and all data is sent to that target until the connection fails, when a new target is selected. Data will still be sent as long as Auditbeat can connect to at least one of its configured hosts.
+
+The default value is `true`.
+
+```yaml
+output.elasticsearch:
+  hosts: ["localhost:9200", "localhost:9201"]
+  loadbalance: true
+```
+
+
+### `api_key` [_api_key]
+
+Instead of using a username and password, you can use API keys to secure communication with {{es}}. The value must be the ID of the API key and the API key joined by a colon: `id:api_key`.
+
+See [*Grant access using API keys*](/reference/auditbeat/beats-api-keys.md) for more information.
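+
+To tie together the `worker` and `loadbalance` options described above, here is a minimal, hypothetical sketch of the two-hosts, three-workers scenario (host addresses are placeholders); six workers are started in total:
+
+```yaml
+output.elasticsearch:
+  hosts: ["10.45.3.2:9200", "10.45.3.1:9200"]  # two hosts (placeholder addresses)
+  loadbalance: true   # send through all connections in parallel
+  worker: 3           # three workers per host, so six in total
+```
+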
+ + +### `username` [_username] + +The basic authentication username for connecting to Elasticsearch. + +This user needs the privileges required to publish events to {{es}}. To create a user like this, see [Create a *publishing* user](/reference/auditbeat/privileges-to-publish-events.md). + + +### `password` [_password] + +The basic authentication password for connecting to Elasticsearch. + + +### `parameters` [_parameters] + +Dictionary of HTTP parameters to pass within the url with index operations. + + +### `protocol` [protocol-option] + +The name of the protocol Elasticsearch is reachable on. The options are: `http` or `https`. The default is `http`. However, if you specify a URL for [`hosts`](#hosts-option), the value of `protocol` is overridden by whatever scheme you specify in the URL. + + +### `path` [path-option] + +An HTTP path prefix that is prepended to the HTTP API calls. This is useful for the cases where Elasticsearch listens behind an HTTP reverse proxy that exports the API under a custom prefix. + + +### `headers` [_headers] + +Custom HTTP headers to add to each request created by the Elasticsearch output. Example: + +```yaml +output.elasticsearch.headers: + X-My-Header: Header contents +``` + +It is possible to specify multiple header values for the same header name by separating them with a comma. + + +### `proxy_disable` [_proxy_disable] + +If set to `true` all proxy settings, including `HTTP_PROXY` and `HTTPS_PROXY` variables are ignored. + + +### `proxy_url` [_proxy_url] + +The URL of the proxy to use when connecting to the Elasticsearch servers. The value must be a complete URL. If a value is not specified through the configuration file then proxy environment variables are used. See the [Go documentation](https://golang.org/pkg/net/http/#ProxyFromEnvironment) for more information about the environment variables. + + +### `proxy_headers` [_proxy_headers] + +Additional headers to send to proxies during CONNECT requests. + + +### `index` [index-option-es] + +The indexing target to write events to. Can point to an [index](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-mgmt.html), [alias](docs-content://manage-data/data-store/aliases.md), or [data stream](docs-content://manage-data/data-store/data-streams.md). When using daily indices, this will be the index name. The default is `"auditbeat-%{[agent.version]}-%{+yyyy.MM.dd}"`, for example, `"auditbeat-9.0.0-beta1-2025-01-30"`. If you change this setting, you also need to configure the `setup.template.name` and `setup.template.pattern` options (see [Elasticsearch index template](/reference/auditbeat/configuration-template.md)). + +If you are using the pre-built Kibana dashboards, you also need to set the `setup.dashboards.index` option (see [Kibana dashboards](/reference/auditbeat/configuration-dashboards.md)). + +When [index lifecycle management (ILM)](/reference/auditbeat/ilm.md) is enabled, the default `index` is `"auditbeat-%{[agent.version]}-%{+yyyy.MM.dd}-%{{index_num}}"`, for example, `"auditbeat-9.0.0-beta1-2025-01-30-000001"`. Custom `index` settings are ignored when ILM is enabled. If you’re sending events to a cluster that supports index lifecycle management, see [Index lifecycle management (ILM)](/reference/auditbeat/ilm.md) to learn how to change the index name. + +You can set the index dynamically by using a format string to access any event field. 
For example, this configuration uses a custom field, `fields.log_type`, to set the index: + +```yaml +output.elasticsearch: + hosts: ["http://localhost:9200"] + index: "%{[fields.log_type]}-%{[agent.version]}-%{+yyyy.MM.dd}" <1> +``` + +1. We recommend including `agent.version` in the name to avoid mapping issues when you upgrade. + + +With this configuration, all events with `log_type: normal` are sent to an index named `normal-9.0.0-beta1-2025-01-30`, and all events with `log_type: critical` are sent to an index named `critical-9.0.0-beta1-2025-01-30`. + +::::{tip} +To learn how to add custom fields to events, see the [`fields`](/reference/auditbeat/configuration-general-options.md#libbeat-configuration-fields) option. +:::: + + +See the [`indices`](#indices-option-es) setting for other ways to set the index dynamically. + + +### `indices` [indices-option-es] + +An array of index selector rules. Each rule specifies the index to use for events that match the rule. During publishing, Auditbeat uses the first matching rule in the array. Rules can contain conditionals, format string-based fields, and name mappings. If the `indices` setting is missing or no rule matches, the [`index`](#index-option-es) setting is used. + +Similar to `index`, defining custom `indices` will disable [Index lifecycle management (ILM)](/reference/auditbeat/ilm.md). + +Rule settings: + +**`index`** +: The index format string to use. If this string contains field references, such as `%{[fields.name]}`, the fields must exist, or the rule fails. + +**`mappings`** +: A dictionary that takes the value returned by `index` and maps it to a new name. + +**`default`** +: The default string value to use if `mappings` does not find a match. + +**`when`** +: A condition that must succeed in order to execute the current rule. All the [conditions](/reference/auditbeat/defining-processors.md#conditions) supported by processors are also supported here. + +The following example sets the index based on whether the `message` field contains the specified string: + +```yaml +output.elasticsearch: + hosts: ["http://localhost:9200"] + indices: + - index: "warning-%{[agent.version]}-%{+yyyy.MM.dd}" + when.contains: + message: "WARN" + - index: "error-%{[agent.version]}-%{+yyyy.MM.dd}" + when.contains: + message: "ERR" +``` + +This configuration results in indices named `warning-9.0.0-beta1-2025-01-30` and `error-9.0.0-beta1-2025-01-30` (plus the default index if no matches are found). + +The following example sets the index by taking the name returned by the `index` format string and mapping it to a new name that’s used for the index: + +```yaml +output.elasticsearch: + hosts: ["http://localhost:9200"] + indices: + - index: "%{[fields.log_type]}" + mappings: + critical: "sev1" + normal: "sev2" + default: "sev3" +``` + +This configuration results in indices named `sev1`, `sev2`, and `sev3`. + +The `mappings` setting simplifies the configuration, but is limited to string values. You cannot specify format strings within the mapping pairs. + + +### `ilm` [ilm-es] + +Configuration options for index lifecycle management. + +See [Index lifecycle management (ILM)](/reference/auditbeat/ilm.md) for more information. + + +### `pipeline` [pipeline-option-es] + +A format string value that specifies the ingest pipeline to write events to. + +```yaml +output.elasticsearch: + hosts: ["http://localhost:9200"] + pipeline: my_pipeline_id +``` + +::::{important} +The `pipeline` is always lowercased. 
If `pipeline: Foo-Bar`, then the pipeline name in {{es}} needs to be defined as `foo-bar`. +:::: + + +For more information, see [*Parse data using an ingest pipeline*](/reference/auditbeat/configuring-ingest-node.md). + +You can set the ingest pipeline dynamically by using a format string to access any event field. For example, this configuration uses a custom field, `fields.log_type`, to set the pipeline for each event: + +```yaml +output.elasticsearch: + hosts: ["http://localhost:9200"] + pipeline: "%{[fields.log_type]}_pipeline" +``` + +With this configuration, all events with `log_type: normal` are sent to a pipeline named `normal_pipeline`, and all events with `log_type: critical` are sent to a pipeline named `critical_pipeline`. + +::::{tip} +To learn how to add custom fields to events, see the [`fields`](/reference/auditbeat/configuration-general-options.md#libbeat-configuration-fields) option. +:::: + + +See the [`pipelines`](#pipelines-option-es) setting for other ways to set the ingest pipeline dynamically. + + +### `pipelines` [pipelines-option-es] + +An array of pipeline selector rules. Each rule specifies the ingest pipeline to use for events that match the rule. During publishing, Auditbeat uses the first matching rule in the array. Rules can contain conditionals, format string-based fields, and name mappings. If the `pipelines` setting is missing or no rule matches, the [`pipeline`](#pipeline-option-es) setting is used. + +Rule settings: + +**`pipeline`** +: The pipeline format string to use. If this string contains field references, such as `%{[fields.name]}`, the fields must exist, or the rule fails. + +**`mappings`** +: A dictionary that takes the value returned by `pipeline` and maps it to a new name. + +**`default`** +: The default string value to use if `mappings` does not find a match. + +**`when`** +: A condition that must succeed in order to execute the current rule. All the [conditions](/reference/auditbeat/defining-processors.md#conditions) supported by processors are also supported here. + +The following example sends events to a specific pipeline based on whether the `message` field contains the specified string: + +```yaml +output.elasticsearch: + hosts: ["http://localhost:9200"] + pipelines: + - pipeline: "warning_pipeline" + when.contains: + message: "WARN" + - pipeline: "error_pipeline" + when.contains: + message: "ERR" +``` + +The following example sets the pipeline by taking the name returned by the `pipeline` format string and mapping it to a new name that’s used for the pipeline: + +```yaml +output.elasticsearch: + hosts: ["http://localhost:9200"] + pipelines: + - pipeline: "%{[fields.log_type]}" + mappings: + critical: "sev1_pipeline" + normal: "sev2_pipeline" + default: "sev3_pipeline" +``` + +With this configuration, all events with `log_type: critical` are sent to `sev1_pipeline`, all events with `log_type: normal` are sent to a `sev2_pipeline`, and all other events are sent to `sev3_pipeline`. + +For more information about ingest pipelines, see [*Parse data using an ingest pipeline*](/reference/auditbeat/configuring-ingest-node.md). + + +### `max_retries` [_max_retries] + +The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped. + +Set `max_retries` to a value less than 0 to retry until all events are published. + +The default is 3. 
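+
+For illustration, a minimal sketch (host address reused from the earlier examples) that retries indefinitely instead of dropping events after three attempts:
+
+```yaml
+output.elasticsearch:
+  hosts: ["http://localhost:9200"]
+  max_retries: -1  # a negative value retries until all events are published
+```
+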
+
+
+### `bulk_max_size` [bulk-max-size-option]
+
+The maximum number of events to bulk in a single Elasticsearch bulk API index request. The default is 1600.
+
+Events can be collected into batches. Auditbeat will split batches read from the queue which are larger than `bulk_max_size` into multiple batches.
+
+Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.
+
+Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.
+
+
+### `backoff.init` [backoff-init-option]
+
+The number of seconds to wait before trying to reconnect to Elasticsearch after a network error. After waiting `backoff.init` seconds, Auditbeat tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is `1s`.
+
+
+### `backoff.max` [backoff-max-option]
+
+The maximum number of seconds to wait before attempting to connect to Elasticsearch after a network error. The default is `60s`.
+
+
+### `idle_connection_timeout` [idle-connection-timeout-option]
+
+The maximum amount of time an idle connection will remain idle before closing itself. Zero means no limit. The format is a Go language duration (for example, `60s` is 60 seconds). The default is `3s`.
+
+
+### `timeout` [_timeout]
+
+The HTTP request timeout in seconds for the Elasticsearch request. The default is 90.
+
+
+### `allow_older_versions` [_allow_older_versions]
+
+By default, Auditbeat expects the Elasticsearch instance to be on the same or a newer version to provide the optimal experience. We suggest you connect to the same version to make sure all features Auditbeat is using are available in your Elasticsearch instance.
+
+You can disable the check, for example while upgrading the Elastic Stack, so that data collection can continue.
+
+
+### `ssl` [_ssl]
+
+Configuration options for SSL parameters like the certificate authority to use for HTTPS-based connections. If the `ssl` section is missing, the host CAs are used for HTTPS connections to Elasticsearch.
+
+See the [secure communication with {{es}}](/reference/auditbeat/securing-communication-elasticsearch.md) guide or [SSL configuration reference](/reference/auditbeat/configuration-ssl.md) for more information.
+
+
+### `kerberos` [_kerberos]
+
+Configuration options for Kerberos authentication.
+
+See [Kerberos](/reference/auditbeat/configuration-kerberos.md) for more information.
+
+
+### `queue` [_queue]
+
+Configuration options for the internal queue.
+
+See [Internal queue](/reference/auditbeat/configuring-internal-queue.md) for more information.
+
+::::{note}
+`queue` options can be set under `auditbeat.yml` or the `output` section but not both.
+::::
+
+
+### `non_indexable_policy` [_non_indexable_policy]
+
+Specifies the behavior when the Elasticsearch cluster explicitly rejects documents, for example on mapping conflicts.
+
+#### `drop` [_drop]
+
+The default behavior: when an event is explicitly rejected by Elasticsearch, it is dropped.
+
+```yaml
+output.elasticsearch:
+  hosts: ["http://localhost:9200"]
+  non_indexable_policy.drop: ~
+```
+
+
+#### `dead_letter_index` [_dead_letter_index]
+
+::::{warning}
+This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+::::
+
+
+On an explicit rejection, this policy retries the event in the next batch. However, the target index changes to the index specified, and the structure of the event is changed to the following fields:
+
+`message`
+: Contains the escaped JSON of the original event.
+
+`error.type`
+: Contains the status code.
+
+`error.message`
+: Contains the status returned by Elasticsearch, describing the reason.
+
+`index`
+: The index to send rejected events to.
+
+```yaml
+output.elasticsearch:
+  hosts: ["http://localhost:9200"]
+  non_indexable_policy.dead_letter_index:
+    index: "my-dead-letter-index"
+```
+
+
+
+### `preset` [_preset]
+
+The performance preset to apply to the output configuration.
+
+```yaml
+output.elasticsearch:
+  hosts: ["http://localhost:9200"]
+  preset: balanced
+```
+
+Performance presets apply a set of configuration overrides based on a desired performance goal. If set, a performance preset will override other configuration flags to match the recommended settings for that preset. If a preset doesn't set a value for a particular field, the user-specified value will be used if present, otherwise the default. Valid options are:
+
+* `balanced`: good starting point for general efficiency
+* `throughput`: good for high data volumes, may increase CPU and memory requirements
+* `scale`: reduces ambient resource use in large low-throughput deployments
+* `latency`: minimize the time for fresh data to become visible in Elasticsearch
+* `custom`: apply user configuration directly with no overrides
+
+The default if unspecified is `custom`.
+
+Presets represent current recommendations based on the intended goal; their effect may change between versions to better suit those goals. Currently the presets have the following effects:
+
+| preset | balanced | throughput | scale | latency |
+| --- | --- | --- | --- | --- |
+| [`bulk_max_size`](#bulk-max-size-option) | 1600 | 1600 | 1600 | 50 |
+| [`worker`](#worker-option) | 1 | 4 | 1 | 1 |
+| [`queue.mem.events`](/reference/auditbeat/configuring-internal-queue.md#queue-mem-events-option) | 3200 | 12800 | 3200 | 4100 |
+| [`queue.mem.flush.min_events`](/reference/auditbeat/configuring-internal-queue.md#queue-mem-flush-min-events-option) | 1600 | 1600 | 1600 | 2050 |
+| [`queue.mem.flush.timeout`](/reference/auditbeat/configuring-internal-queue.md#queue-mem-flush-timeout-option) | `10s` | `5s` | `20s` | `1s` |
+| [`compression_level`](#compression-level-option) | 1 | 1 | 1 | 1 |
+| [`idle_connection_timeout`](#idle-connection-timeout-option) | `3s` | `15s` | `1s` | `60s` |
+| [`backoff.init`](#backoff-init-option) | none | none | `5s` | none |
+| [`backoff.max`](#backoff-max-option) | none | none | `300s` | none |
+
+
+
+## Elasticsearch APIs [es-apis]
+
+Auditbeat uses the `_bulk` API from {{es}}. Events are sent in the order they arrive at the publishing pipeline, and a single `_bulk` request may contain events from different inputs/modules. Temporary failures are retried.
+
+The status code for each event is checked and handled as:
+
+* `< 300`: The event is counted as `events.acked`
+* `409` (Conflict): The event is counted as `events.duplicates`
+* `429` (Too Many Requests): The event is counted as `events.toomany`
+* `> 399 and < 500`: The `non_indexable_policy` is applied.
+ + diff --git a/docs/reference/auditbeat/enable-auditbeat-debugging.md b/docs/reference/auditbeat/enable-auditbeat-debugging.md new file mode 100644 index 000000000000..aeb10d30bc1c --- /dev/null +++ b/docs/reference/auditbeat/enable-auditbeat-debugging.md @@ -0,0 +1,31 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/enable-auditbeat-debugging.html +--- + +# Debug [enable-auditbeat-debugging] + +By default, Auditbeat sends all its output to syslog. When you run Auditbeat in the foreground, you can use the `-e` command line flag to redirect the output to standard error instead. For example: + +```sh +auditbeat -e +``` + +The default configuration file is auditbeat.yml (the location of the file varies by platform). You can use a different configuration file by specifying the `-c` flag. For example: + +```sh +auditbeat -e -c myauditbeatconfig.yml +``` + +You can increase the verbosity of debug messages by enabling one or more debug selectors. For example, to view publisher-related messages, start Auditbeat with the `publisher` selector: + +```sh +auditbeat -e -d "publisher" +``` + +If you want all the debugging output (fair warning, it’s quite a lot), you can use `*`, like this: + +```sh +auditbeat -e -d "*" +``` + diff --git a/docs/reference/auditbeat/error-found-unexpected-character.md b/docs/reference/auditbeat/error-found-unexpected-character.md new file mode 100644 index 000000000000..fb523f7099e3 --- /dev/null +++ b/docs/reference/auditbeat/error-found-unexpected-character.md @@ -0,0 +1,13 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/error-found-unexpected-character.html +--- + +# Found unexpected or unknown characters [error-found-unexpected-character] + +Either there is a problem with the structure of your config file, or you have used a path or expression that the YAML parser cannot resolve because the config file contains characters that aren’t properly escaped. + +If the YAML file contains paths with spaces or unusual characters, wrap the paths in single quotation marks (see [Wrap paths in single quotation marks](/reference/auditbeat/yaml-tips.md#wrap-paths-in-quotes)). + +Also see the general advice under [*Avoid YAML formatting problems*](/reference/auditbeat/yaml-tips.md). + diff --git a/docs/reference/auditbeat/error-loading-config.md b/docs/reference/auditbeat/error-loading-config.md new file mode 100644 index 000000000000..300664e9a2ca --- /dev/null +++ b/docs/reference/auditbeat/error-loading-config.md @@ -0,0 +1,14 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/error-loading-config.html +--- + +# Error loading config file [error-loading-config] + +You may encounter errors loading the config file on POSIX operating systems if: + +* an unauthorized user tries to load the config file, or +* the config file has the wrong permissions. + +See [Config File Ownership and Permissions](/reference/libbeat/config-file-permissions.md) for more about resolving these errors. + diff --git a/docs/reference/auditbeat/exported-fields-auditd.md b/docs/reference/auditbeat/exported-fields-auditd.md new file mode 100644 index 000000000000..f3e4f5e62f0f --- /dev/null +++ b/docs/reference/auditbeat/exported-fields-auditd.md @@ -0,0 +1,1575 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/exported-fields-auditd.html +--- + +# Auditd fields [exported-fields-auditd] + +These are the fields generated by the auditd module. 
+ +**`user.auid`** +: type: alias + +alias to: user.audit.id + + +**`user.uid`** +: type: alias + +alias to: user.id + + +**`user.fsuid`** +: type: alias + +alias to: user.filesystem.id + + +**`user.suid`** +: type: alias + +alias to: user.saved.id + + +**`user.gid`** +: type: alias + +alias to: user.group.id + + +**`user.sgid`** +: type: alias + +alias to: user.saved.group.id + + +**`user.fsgid`** +: type: alias + +alias to: user.filesystem.group.id + + + +## name_map [_name_map] + +If `resolve_ids` is set to true in the configuration then `name_map` will contain a mapping of uid field names to the resolved name (e.g. auid → root). + +**`user.name_map.auid`** +: type: alias + +alias to: user.audit.name + + +**`user.name_map.uid`** +: type: alias + +alias to: user.name + + +**`user.name_map.fsuid`** +: type: alias + +alias to: user.filesystem.name + + +**`user.name_map.suid`** +: type: alias + +alias to: user.saved.name + + +**`user.name_map.gid`** +: type: alias + +alias to: user.group.name + + +**`user.name_map.sgid`** +: type: alias + +alias to: user.saved.group.name + + +**`user.name_map.fsgid`** +: type: alias + +alias to: user.filesystem.group.name + + + +## selinux [_selinux] + +The SELinux identity of the actor. + +**`user.selinux.user`** +: account submitted for authentication + +type: keyword + + +**`user.selinux.role`** +: user’s SELinux role + +type: keyword + + +**`user.selinux.domain`** +: The actor’s SELinux domain or type. + +type: keyword + + +**`user.selinux.level`** +: The actor’s SELinux level. + +type: keyword + +example: s0 + + +**`user.selinux.category`** +: The actor’s SELinux category or compartments. + +type: keyword + + + +## process [_process] + +Process attributes. + +**`process.cwd`** +: The current working directory. + +type: alias + +alias to: process.working_directory + + + +## source [_source] + +Source that triggered the event. + +**`source.path`** +: This is the path associated with a unix socket. + +type: keyword + + + +## destination [_destination] + +Destination address that triggered the event. + +**`destination.path`** +: This is the path associated with a unix socket. + +type: keyword + + +**`auditd.message_type`** +: The audit message type (e.g. syscall or apparmor_denied). + +type: keyword + +example: syscall + + +**`auditd.sequence`** +: The sequence number of the event as assigned by the kernel. Sequence numbers are stored as a uint32 in the kernel and can rollover. + +type: long + + +**`auditd.session`** +: The session ID assigned to a login. All events related to a login session will have the same value. + +type: keyword + + +**`auditd.result`** +: The result of the audited operation (success/fail). + +type: keyword + +example: success or fail + + + +## actor [_actor] + +The actor is the user that triggered the audit event. + +**`auditd.summary.actor.primary`** +: The primary identity of the actor. This is the actor’s original login ID. It will not change even if the user changes to another account. + +type: keyword + + +**`auditd.summary.actor.secondary`** +: The secondary identity of the actor. This is typically the same as the primary, except for when the user has used `su`. + +type: keyword + + + +## object [_object] + +This is the thing or object being acted upon in the event. + +**`auditd.summary.object.type`** +: A description of the what the "thing" is (e.g. file, socket, user-session). 
+
+type: keyword
+
+
+**`auditd.summary.object.primary`**
+: type: keyword
+
+
+**`auditd.summary.object.secondary`**
+: type: keyword
+
+
+**`auditd.summary.how`**
+: This describes how the action was performed. Usually this is the exe or command that was being executed that triggered the event.
+
+type: keyword
+
+
+
+## paths [_paths]
+
+List of paths associated with the event.
+
+**`auditd.paths.inode`**
+: inode number
+
+type: keyword
+
+
+**`auditd.paths.dev`**
+: device name as found in /dev
+
+type: keyword
+
+
+**`auditd.paths.obj_user`**
+: type: keyword
+
+
+**`auditd.paths.obj_role`**
+: type: keyword
+
+
+**`auditd.paths.obj_domain`**
+: type: keyword
+
+
+**`auditd.paths.obj_level`**
+: type: keyword
+
+
+**`auditd.paths.objtype`**
+: type: keyword
+
+
+**`auditd.paths.ouid`**
+: file owner user ID
+
+type: keyword
+
+
+**`auditd.paths.rdev`**
+: the device identifier (special files only)
+
+type: keyword
+
+
+**`auditd.paths.nametype`**
+: kind of file operation being referenced
+
+type: keyword
+
+
+**`auditd.paths.ogid`**
+: file owner group ID
+
+type: keyword
+
+
+**`auditd.paths.item`**
+: which item is being recorded
+
+type: keyword
+
+
+**`auditd.paths.mode`**
+: mode flags on a file
+
+type: keyword
+
+
+**`auditd.paths.name`**
+: file name in avcs
+
+type: keyword
+
+
+
+## data [_data_2]
+
+The data from the audit messages.
+
+**`auditd.data.action`**
+: netfilter packet disposition
+
+type: keyword
+
+
+**`auditd.data.minor`**
+: device minor number
+
+type: keyword
+
+
+**`auditd.data.acct`**
+: a user’s account name
+
+type: keyword
+
+
+**`auditd.data.addr`**
+: the remote address that the user is connecting from
+
+type: keyword
+
+
+**`auditd.data.cipher`**
+: name of crypto cipher selected
+
+type: keyword
+
+
+**`auditd.data.id`**
+: during account changes
+
+type: keyword
+
+
+**`auditd.data.entries`**
+: number of entries in the netfilter table
+
+type: keyword
+
+
+**`auditd.data.kind`**
+: server or client in crypto operation
+
+type: keyword
+
+
+**`auditd.data.ksize`**
+: key size for crypto operation
+
+type: keyword
+
+
+**`auditd.data.spid`**
+: sent process ID
+
+type: keyword
+
+
+**`auditd.data.arch`**
+: the ELF architecture flags
+
+type: keyword
+
+
+**`auditd.data.argc`**
+: the number of arguments to an execve syscall
+
+type: keyword
+
+
+**`auditd.data.major`**
+: device major number
+
+type: keyword
+
+
+**`auditd.data.unit`**
+: systemd unit
+
+type: keyword
+
+
+**`auditd.data.table`**
+: netfilter table name
+
+type: keyword
+
+
+**`auditd.data.terminal`**
+: terminal name the user is running programs on
+
+type: keyword
+
+
+**`auditd.data.grantors`**
+: PAM modules approving the action
+
+type: keyword
+
+
+**`auditd.data.direction`**
+: direction of crypto operation
+
+type: keyword
+
+
+**`auditd.data.op`**
+: the operation being performed that is audited
+
+type: keyword
+
+
+**`auditd.data.tty`**
+: tty device the user is running programs on
+
+type: keyword
+
+
+**`auditd.data.syscall`**
+: syscall number in effect when the event occurred
+
+type: keyword
+
+
+**`auditd.data.data`**
+: TTY text
+
+type: keyword
+
+
+**`auditd.data.family`**
+: netfilter protocol
+
+type: keyword
+
+
+**`auditd.data.mac`**
+: crypto MAC algorithm selected
+
+type: keyword
+
+
+**`auditd.data.pfs`**
+: perfect forward secrecy method
+
+type: keyword
+
+
+**`auditd.data.items`**
+: the number of path records in the event
+
+type: keyword
+
+
+**`auditd.data.a0`**
+: type: keyword
+
+
+**`auditd.data.a1`**
+: type: keyword
+
+
+**`auditd.data.a2`** +: type: keyword + + +**`auditd.data.a3`** +: type: keyword + + +**`auditd.data.hostname`** +: the hostname that the user is connecting from + +type: keyword + + +**`auditd.data.lport`** +: local network port + +type: keyword + + +**`auditd.data.rport`** +: remote port number + +type: keyword + + +**`auditd.data.exit`** +: syscall exit code + +type: keyword + + +**`auditd.data.fp`** +: crypto key finger print + +type: keyword + + +**`auditd.data.laddr`** +: local network address + +type: keyword + + +**`auditd.data.sport`** +: local port number + +type: keyword + + +**`auditd.data.capability`** +: posix capabilities + +type: keyword + + +**`auditd.data.nargs`** +: the number of arguments to a socket call + +type: keyword + + +**`auditd.data.new-enabled`** +: new TTY audit enabled setting + +type: keyword + + +**`auditd.data.audit_backlog_limit`** +: audit system’s backlog queue size + +type: keyword + + +**`auditd.data.dir`** +: directory name + +type: keyword + + +**`auditd.data.cap_pe`** +: process effective capability map + +type: keyword + + +**`auditd.data.model`** +: security model being used for virt + +type: keyword + + +**`auditd.data.new_pp`** +: new process permitted capability map + +type: keyword + + +**`auditd.data.old-enabled`** +: present TTY audit enabled setting + +type: keyword + + +**`auditd.data.oauid`** +: object’s login user ID + +type: keyword + + +**`auditd.data.old`** +: old value + +type: keyword + + +**`auditd.data.banners`** +: banners used on printed page + +type: keyword + + +**`auditd.data.feature`** +: kernel feature being changed + +type: keyword + + +**`auditd.data.vm-ctx`** +: the vm’s context string + +type: keyword + + +**`auditd.data.opid`** +: object’s process ID + +type: keyword + + +**`auditd.data.seperms`** +: SELinux permissions being used + +type: keyword + + +**`auditd.data.seresult`** +: SELinux AVC decision granted/denied + +type: keyword + + +**`auditd.data.new-rng`** +: device name of rng being added from a vm + +type: keyword + + +**`auditd.data.old-net`** +: present MAC address assigned to vm + +type: keyword + + +**`auditd.data.sigev_signo`** +: signal number + +type: keyword + + +**`auditd.data.ino`** +: inode number + +type: keyword + + +**`auditd.data.old_enforcing`** +: old MAC enforcement status + +type: keyword + + +**`auditd.data.old-vcpu`** +: present number of CPU cores + +type: keyword + + +**`auditd.data.range`** +: user’s SE Linux range + +type: keyword + + +**`auditd.data.res`** +: result of the audited operation(success/fail) + +type: keyword + + +**`auditd.data.added`** +: number of new files detected + +type: keyword + + +**`auditd.data.fam`** +: socket address family + +type: keyword + + +**`auditd.data.nlnk-pid`** +: pid of netlink packet sender + +type: keyword + + +**`auditd.data.subj`** +: lspp subject’s context string + +type: keyword + + +**`auditd.data.a[0-3]`** +: the arguments to a syscall + +type: keyword + + +**`auditd.data.cgroup`** +: path to cgroup in sysfs + +type: keyword + + +**`auditd.data.kernel`** +: kernel’s version number + +type: keyword + + +**`auditd.data.ocomm`** +: object’s command line name + +type: keyword + + +**`auditd.data.new-net`** +: MAC address being assigned to vm + +type: keyword + + +**`auditd.data.permissive`** +: SELinux is in permissive mode + +type: keyword + + +**`auditd.data.class`** +: resource class assigned to vm + +type: keyword + + +**`auditd.data.compat`** +: is_compat_task result + +type: keyword + + +**`auditd.data.fi`** +: file assigned inherited 
capability map + +type: keyword + + +**`auditd.data.changed`** +: number of changed files + +type: keyword + + +**`auditd.data.msg`** +: the payload of the audit record + +type: keyword + + +**`auditd.data.dport`** +: remote port number + +type: keyword + + +**`auditd.data.new-seuser`** +: new SELinux user + +type: keyword + + +**`auditd.data.invalid_context`** +: SELinux context + +type: keyword + + +**`auditd.data.dmac`** +: remote MAC address + +type: keyword + + +**`auditd.data.ipx-net`** +: IPX network number + +type: keyword + + +**`auditd.data.iuid`** +: ipc object’s user ID + +type: keyword + + +**`auditd.data.macproto`** +: ethernet packet type ID field + +type: keyword + + +**`auditd.data.obj`** +: lspp object context string + +type: keyword + + +**`auditd.data.ipid`** +: IP datagram fragment identifier + +type: keyword + + +**`auditd.data.new-fs`** +: file system being added to vm + +type: keyword + + +**`auditd.data.vm-pid`** +: vm’s process ID + +type: keyword + + +**`auditd.data.cap_pi`** +: process inherited capability map + +type: keyword + + +**`auditd.data.old-auid`** +: previous auid value + +type: keyword + + +**`auditd.data.oses`** +: object’s session ID + +type: keyword + + +**`auditd.data.fd`** +: file descriptor number + +type: keyword + + +**`auditd.data.igid`** +: ipc object’s group ID + +type: keyword + + +**`auditd.data.new-disk`** +: disk being added to vm + +type: keyword + + +**`auditd.data.parent`** +: the inode number of the parent file + +type: keyword + + +**`auditd.data.len`** +: length + +type: keyword + + +**`auditd.data.oflag`** +: open syscall flags + +type: keyword + + +**`auditd.data.uuid`** +: a UUID + +type: keyword + + +**`auditd.data.code`** +: seccomp action code + +type: keyword + + +**`auditd.data.nlnk-grp`** +: netlink group number + +type: keyword + + +**`auditd.data.cap_fp`** +: file permitted capability map + +type: keyword + + +**`auditd.data.new-mem`** +: new amount of memory in KB + +type: keyword + + +**`auditd.data.seperm`** +: SELinux permission being decided on + +type: keyword + + +**`auditd.data.enforcing`** +: new MAC enforcement status + +type: keyword + + +**`auditd.data.new-chardev`** +: new character device being assigned to vm + +type: keyword + + +**`auditd.data.old-rng`** +: device name of rng being removed from a vm + +type: keyword + + +**`auditd.data.outif`** +: out interface number + +type: keyword + + +**`auditd.data.cmd`** +: command being executed + +type: keyword + + +**`auditd.data.hook`** +: netfilter hook that packet came from + +type: keyword + + +**`auditd.data.new-level`** +: new run level + +type: keyword + + +**`auditd.data.sauid`** +: sent login user ID + +type: keyword + + +**`auditd.data.sig`** +: signal number + +type: keyword + + +**`auditd.data.audit_backlog_wait_time`** +: audit system’s backlog wait time + +type: keyword + + +**`auditd.data.printer`** +: printer name + +type: keyword + + +**`auditd.data.old-mem`** +: present amount of memory in KB + +type: keyword + + +**`auditd.data.perm`** +: the file permission being used + +type: keyword + + +**`auditd.data.old_pi`** +: old process inherited capability map + +type: keyword + + +**`auditd.data.state`** +: audit daemon configuration resulting state + +type: keyword + + +**`auditd.data.format`** +: audit log’s format + +type: keyword + + +**`auditd.data.new_gid`** +: new group ID being assigned + +type: keyword + + +**`auditd.data.tcontext`** +: the target’s or object’s context string + +type: keyword + + +**`auditd.data.maj`** +: device major 
number + +type: keyword + + +**`auditd.data.watch`** +: file name in a watch record + +type: keyword + + +**`auditd.data.device`** +: device name + +type: keyword + + +**`auditd.data.grp`** +: group name + +type: keyword + + +**`auditd.data.bool`** +: name of SELinux boolean + +type: keyword + + +**`auditd.data.icmp_type`** +: type of icmp message + +type: keyword + + +**`auditd.data.new_lock`** +: new value of feature lock + +type: keyword + + +**`auditd.data.old_prom`** +: network promiscuity flag + +type: keyword + + +**`auditd.data.acl`** +: access mode of resource assigned to vm + +type: keyword + + +**`auditd.data.ip`** +: network address of a printer + +type: keyword + + +**`auditd.data.new_pi`** +: new process inherited capability map + +type: keyword + + +**`auditd.data.default-context`** +: default MAC context + +type: keyword + + +**`auditd.data.inode_gid`** +: group ID of the inode’s owner + +type: keyword + + +**`auditd.data.new-log_passwd`** +: new value for TTY password logging + +type: keyword + + +**`auditd.data.new_pe`** +: new process effective capability map + +type: keyword + + +**`auditd.data.selected-context`** +: new MAC context assigned to session + +type: keyword + + +**`auditd.data.cap_fver`** +: file system capabilities version number + +type: keyword + + +**`auditd.data.file`** +: file name + +type: keyword + + +**`auditd.data.net`** +: network MAC address + +type: keyword + + +**`auditd.data.virt`** +: kind of virtualization being referenced + +type: keyword + + +**`auditd.data.cap_pp`** +: process permitted capability map + +type: keyword + + +**`auditd.data.old-range`** +: present SELinux range + +type: keyword + + +**`auditd.data.resrc`** +: resource being assigned + +type: keyword + + +**`auditd.data.new-range`** +: new SELinux range + +type: keyword + + +**`auditd.data.obj_gid`** +: group ID of object + +type: keyword + + +**`auditd.data.proto`** +: network protocol + +type: keyword + + +**`auditd.data.old-disk`** +: disk being removed from vm + +type: keyword + + +**`auditd.data.audit_failure`** +: audit system’s failure mode + +type: keyword + + +**`auditd.data.inif`** +: in interface number + +type: keyword + + +**`auditd.data.vm`** +: virtual machine name + +type: keyword + + +**`auditd.data.flags`** +: mmap syscall flags + +type: keyword + + +**`auditd.data.nlnk-fam`** +: netlink protocol number + +type: keyword + + +**`auditd.data.old-fs`** +: file system being removed from vm + +type: keyword + + +**`auditd.data.old-ses`** +: previous ses value + +type: keyword + + +**`auditd.data.seqno`** +: sequence number + +type: keyword + + +**`auditd.data.fver`** +: file system capabilities version number + +type: keyword + + +**`auditd.data.qbytes`** +: ipc objects quantity of bytes + +type: keyword + + +**`auditd.data.seuser`** +: user’s SE Linux user acct + +type: keyword + + +**`auditd.data.cap_fe`** +: file assigned effective capability map + +type: keyword + + +**`auditd.data.new-vcpu`** +: new number of CPU cores + +type: keyword + + +**`auditd.data.old-level`** +: old run level + +type: keyword + + +**`auditd.data.old_pp`** +: old process permitted capability map + +type: keyword + + +**`auditd.data.daddr`** +: remote IP address + +type: keyword + + +**`auditd.data.old-role`** +: present SELinux role + +type: keyword + + +**`auditd.data.ioctlcmd`** +: The request argument to the ioctl syscall + +type: keyword + + +**`auditd.data.smac`** +: local MAC address + +type: keyword + + +**`auditd.data.apparmor`** +: apparmor event information + +type: keyword 
+
+
+**`auditd.data.fe`**
+: file assigned effective capability map
+
+type: keyword
+
+
+**`auditd.data.perm_mask`**
+: file permission mask that triggered a watch event
+
+type: keyword
+
+
+**`auditd.data.ses`**
+: login session ID
+
+type: keyword
+
+
+**`auditd.data.cap_fi`**
+: file inherited capability map
+
+type: keyword
+
+
+**`auditd.data.obj_uid`**
+: user ID of object
+
+type: keyword
+
+
+**`auditd.data.reason`**
+: text string denoting a reason for the action
+
+type: keyword
+
+
+**`auditd.data.list`**
+: the audit system’s filter list number
+
+type: keyword
+
+
+**`auditd.data.old_lock`**
+: present value of feature lock
+
+type: keyword
+
+
+**`auditd.data.bus`**
+: name of subsystem bus a vm resource belongs to
+
+type: keyword
+
+
+**`auditd.data.old_pe`**
+: old process effective capability map
+
+type: keyword
+
+
+**`auditd.data.new-role`**
+: new SELinux role
+
+type: keyword
+
+
+**`auditd.data.prom`**
+: network promiscuity flag
+
+type: keyword
+
+
+**`auditd.data.uri`**
+: URI pointing to a printer
+
+type: keyword
+
+
+**`auditd.data.audit_enabled`**
+: audit system’s enable/disable status
+
+type: keyword
+
+
+**`auditd.data.old-log_passwd`**
+: present value for TTY password logging
+
+type: keyword
+
+
+**`auditd.data.old-seuser`**
+: present SELinux user
+
+type: keyword
+
+
+**`auditd.data.per`**
+: Linux personality
+
+type: keyword
+
+
+**`auditd.data.scontext`**
+: the subject’s context string
+
+type: keyword
+
+
+**`auditd.data.tclass`**
+: target’s object classification
+
+type: keyword
+
+
+**`auditd.data.ver`**
+: audit daemon’s version number
+
+type: keyword
+
+
+**`auditd.data.new`**
+: value being set in feature
+
+type: keyword
+
+
+**`auditd.data.val`**
+: generic value associated with the operation
+
+type: keyword
+
+
+**`auditd.data.img-ctx`**
+: the vm’s disk image context string
+
+type: keyword
+
+
+**`auditd.data.old-chardev`**
+: present character device assigned to vm
+
+type: keyword
+
+
+**`auditd.data.old_val`**
+: current value of SELinux boolean
+
+type: keyword
+
+
+**`auditd.data.success`**
+: whether the syscall was successful or not
+
+type: keyword
+
+
+**`auditd.data.inode_uid`**
+: user ID of the inode’s owner
+
+type: keyword
+
+
+**`auditd.data.removed`**
+: number of deleted files
+
+type: keyword
+
+
+**`auditd.data.socket.port`**
+: The port number.
+
+type: keyword
+
+
+**`auditd.data.socket.saddr`**
+: The raw socket address structure.
+
+type: keyword
+
+
+**`auditd.data.socket.addr`**
+: The remote address.
+
+type: keyword
+
+
+**`auditd.data.socket.family`**
+: The socket family (unix, ipv4, ipv6, netlink).
+
+type: keyword
+
+example: unix
+
+
+**`auditd.data.socket.path`**
+: This is the path associated with a unix socket.
+
+type: keyword
+
+
+**`auditd.messages`**
+: An ordered list of the raw messages received from the kernel that were used to construct this document. This field is present if an error occurred processing the data or if `include_raw_message` is set in the config.
+
+type: alias
+
+alias to: event.original
+
+
+**`auditd.warnings`**
+: The warnings generated by the Beat during the construction of the event. These are disabled by default and are used for development and debug purposes only.
+
+type: alias
+
+alias to: error.message
+
+
+
+## geoip [_geoip]
+
+The geoip fields are defined as a convenience in case you decide to enrich the data using a geoip filter in Logstash or an Elasticsearch geoip ingest processor.
+
+**`geoip.continent_name`**
+: The name of the continent.
+ +type: keyword + + +**`geoip.city_name`** +: The name of the city. + +type: keyword + + +**`geoip.region_name`** +: The name of the region. + +type: keyword + + +**`geoip.country_iso_code`** +: Country ISO code. + +type: keyword + + +**`geoip.location`** +: The longitude and latitude. + +type: geo_point + + diff --git a/docs/reference/auditbeat/exported-fields-beat-common.md b/docs/reference/auditbeat/exported-fields-beat-common.md new file mode 100644 index 000000000000..38c384eb9849 --- /dev/null +++ b/docs/reference/auditbeat/exported-fields-beat-common.md @@ -0,0 +1,47 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/exported-fields-beat-common.html +--- + +# Beat fields [exported-fields-beat-common] + +Contains common beat fields available in all event types. + +**`agent.hostname`** +: Deprecated - use agent.name or agent.id to identify an agent. + +type: alias + +alias to: agent.name + + +**`beat.timezone`** +: type: alias + +alias to: event.timezone + + +**`fields`** +: Contains user configurable fields. + +type: object + + +**`beat.name`** +: type: alias + +alias to: host.name + + +**`beat.hostname`** +: type: alias + +alias to: agent.name + + +**`timeseries.instance`** +: Time series instance id + +type: keyword + + diff --git a/docs/reference/auditbeat/exported-fields-cloud.md b/docs/reference/auditbeat/exported-fields-cloud.md new file mode 100644 index 000000000000..1e9c2c59a67f --- /dev/null +++ b/docs/reference/auditbeat/exported-fields-cloud.md @@ -0,0 +1,57 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/exported-fields-cloud.html +--- + +# Cloud provider metadata fields [exported-fields-cloud] + +Metadata from cloud providers added by the add_cloud_metadata processor. + +**`cloud.image.id`** +: Image ID for the cloud instance. + +example: ami-abcd1234 + + +**`meta.cloud.provider`** +: type: alias + +alias to: cloud.provider + + +**`meta.cloud.instance_id`** +: type: alias + +alias to: cloud.instance.id + + +**`meta.cloud.instance_name`** +: type: alias + +alias to: cloud.instance.name + + +**`meta.cloud.machine_type`** +: type: alias + +alias to: cloud.machine.type + + +**`meta.cloud.availability_zone`** +: type: alias + +alias to: cloud.availability_zone + + +**`meta.cloud.project_id`** +: type: alias + +alias to: cloud.project.id + + +**`meta.cloud.region`** +: type: alias + +alias to: cloud.region + + diff --git a/docs/reference/auditbeat/exported-fields-common.md b/docs/reference/auditbeat/exported-fields-common.md new file mode 100644 index 000000000000..6c5753afcc25 --- /dev/null +++ b/docs/reference/auditbeat/exported-fields-common.md @@ -0,0 +1,163 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/exported-fields-common.html +--- + +# Common fields [exported-fields-common] + +Contains common fields available in all event types. + + +## file [_file] + +File attributes. + +**`file.setuid`** +: Set if the file has the `setuid` bit set. Omitted otherwise. + +type: boolean + +example: True + + +**`file.setgid`** +: Set if the file has the `setgid` bit set. Omitted otherwise. + +type: boolean + +example: True + + +**`file.origin`** +: An array of strings describing a possible external origin for this file. For example, the URL it was downloaded from. Only supported in macOS, via the kMDItemWhereFroms attribute. Omitted if origin information is not available. 
+ +type: keyword + + +**`file.origin.text`** +: This is an analyzed field that is useful for full text search on the origin data. + +type: text + + + +## selinux [_selinux_2] + +The SELinux identity of the file. + +**`file.selinux.user`** +: The owner of the object. + +type: keyword + + +**`file.selinux.role`** +: The object’s SELinux role. + +type: keyword + + +**`file.selinux.domain`** +: The object’s SELinux domain or type. + +type: keyword + + +**`file.selinux.level`** +: The object’s SELinux level. + +type: keyword + +example: s0 + + + +## user [_user] + +User information. + + +## audit [_audit] + +Audit user information. + +**`user.audit.id`** +: Audit user ID. + +type: keyword + + +**`user.audit.name`** +: Audit user name. + +type: keyword + + + +## filesystem [_filesystem] + +Filesystem user information. + +**`user.filesystem.id`** +: Filesystem user ID. + +type: keyword + + +**`user.filesystem.name`** +: Filesystem user name. + +type: keyword + + + +## group [_group] + +Filesystem group information. + +**`user.filesystem.group.id`** +: Filesystem group ID. + +type: keyword + + +**`user.filesystem.group.name`** +: Filesystem group name. + +type: keyword + + + +## saved [_saved] + +Saved user information. + +**`user.saved.id`** +: Saved user ID. + +type: keyword + + +**`user.saved.name`** +: Saved user name. + +type: keyword + + + +## group [_group_2] + +Saved group information. + +**`user.saved.group.id`** +: Saved group ID. + +type: keyword + + +**`user.saved.group.name`** +: Saved group name. + +type: keyword + + diff --git a/docs/reference/auditbeat/exported-fields-docker-processor.md b/docs/reference/auditbeat/exported-fields-docker-processor.md new file mode 100644 index 000000000000..aa3d77a624d8 --- /dev/null +++ b/docs/reference/auditbeat/exported-fields-docker-processor.md @@ -0,0 +1,33 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/exported-fields-docker-processor.html +--- + +# Docker fields [exported-fields-docker-processor] + +Docker stats collected from Docker. + +**`docker.container.id`** +: type: alias + +alias to: container.id + + +**`docker.container.image`** +: type: alias + +alias to: container.image.name + + +**`docker.container.name`** +: type: alias + +alias to: container.name + + +**`docker.container.labels`** +: Image labels. + +type: object + + diff --git a/docs/reference/auditbeat/exported-fields-ecs.md b/docs/reference/auditbeat/exported-fields-ecs.md new file mode 100644 index 000000000000..14ac2a297c9d --- /dev/null +++ b/docs/reference/auditbeat/exported-fields-ecs.md @@ -0,0 +1,10423 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/exported-fields-ecs.html +--- + +# ECS fields [exported-fields-ecs] + +This section defines Elastic Common Schema (ECS) fields—a common set of fields to be used when storing event data in {{es}}. + +This is an exhaustive list, and fields listed here are not necessarily used by Auditbeat. The goal of ECS is to enable and encourage users of {{es}} to normalize their event data, so that they can better analyze, visualize, and correlate the data represented in their events. + +See the [ECS reference](ecs://reference/index.md) for more information. + +**`@timestamp`** +: Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. 
Required field for all events. + +type: date + +example: 2016-05-23T08:05:34.853Z + +required: True + + +**`labels`** +: Custom key/value pairs. Can be used to add meta information to events. Should not contain nested objects. All values are stored as keyword. Example: `docker` and `k8s` labels. + +type: object + +example: {"application": "foo-bar", "env": "production"} + + +**`message`** +: For log events the message field contains the log message, optimized for viewing in a log viewer. For structured logs without an original message field, other fields can be concatenated to form a human-readable summary of the event. If multiple messages exist, they can be combined into one message. + +type: match_only_text + +example: Hello World + + +**`tags`** +: List of keywords used to tag each event. + +type: keyword + +example: ["production", "env2"] + + + +## agent [_agent] + +The agent fields contain the data about the software entity, if any, that collects, detects, or observes events on a host, or takes measurements on a host. Examples include Beats. Agents may also run on observers. ECS agent.* fields shall be populated with details of the agent running on the host or observer where the event happened or the measurement was taken. + +**`agent.build.original`** +: Extended build information for the agent. This field is intended to contain any build information that a data source may provide, no specific formatting is required. + +type: keyword + +example: metricbeat version 7.6.0 (amd64), libbeat 7.6.0 [6a23e8f8f30f5001ba344e4e54d8d9cb82cb107c built 2020-02-05 23:10:10 +0000 UTC] + + +**`agent.ephemeral_id`** +: Ephemeral identifier of this agent (if one exists). This id normally changes across restarts, but `agent.id` does not. + +type: keyword + +example: 8a4f500f + + +**`agent.id`** +: Unique identifier of this agent (if one exists). Example: For Beats this would be beat.id. + +type: keyword + +example: 8a4f500d + + +**`agent.name`** +: Custom name of the agent. This is a name that can be given to an agent. This can be helpful if for example two Filebeat instances are running on the same host but a human readable separation is needed on which Filebeat instance data is coming from. If no name is given, the name is often left empty. + +type: keyword + +example: foo + + +**`agent.type`** +: Type of the agent. The agent type always stays the same and should be given by the agent used. In case of Filebeat the agent would always be Filebeat also if two Filebeat instances are run on the same machine. + +type: keyword + +example: filebeat + + +**`agent.version`** +: Version of the agent. + +type: keyword + +example: 6.0.0-rc2 + + + +## as [_as] + +An autonomous system (AS) is a collection of connected Internet Protocol (IP) routing prefixes under the control of one or more network operators on behalf of a single administrative entity or domain that presents a common, clearly defined routing policy to the internet. + +**`as.number`** +: Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. + +type: long + +example: 15169 + + +**`as.organization.name`** +: Organization name. + +type: keyword + +example: Google LLC + + +**`as.organization.name.text`** +: type: match_only_text + + + +## client [_client] + +A client is defined as the initiator of a network connection for events regarding sessions, connections, or bidirectional flow records. 
For TCP events, the client is the initiator of the TCP connection that sends the SYN packet(s). For other protocols, the client is generally the initiator or requestor in the network transaction. Some systems use the term "originator" to refer to the client in TCP connections. The client fields describe details about the system acting as the client in the network event. Client fields are usually populated in conjunction with server fields. Client fields are generally not populated for packet-level events. Client / server representations can add semantic context to an exchange, which is helpful to visualize the data in certain situations. If your context falls in that category, you should still ensure that source and destination are filled appropriately.
+
+**`client.address`**
+: Some event client addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is.
+
+type: keyword
+
+
+**`client.as.number`**
+: Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet.
+
+type: long
+
+example: 15169
+
+
+**`client.as.organization.name`**
+: Organization name.
+
+type: keyword
+
+example: Google LLC
+
+
+**`client.as.organization.name.text`**
+: type: match_only_text
+
+
+**`client.bytes`**
+: Bytes sent from the client to the server.
+
+type: long
+
+example: 184
+
+format: bytes
+
+
+**`client.domain`**
+: The domain name of the client system. This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment.
+
+type: keyword
+
+example: foo.example.com
+
+
+**`client.geo.city_name`**
+: City name.
+
+type: keyword
+
+example: Montreal
+
+
+**`client.geo.continent_code`**
+: Two-letter code representing continent’s name.
+
+type: keyword
+
+example: NA
+
+
+**`client.geo.continent_name`**
+: Name of the continent.
+
+type: keyword
+
+example: North America
+
+
+**`client.geo.country_iso_code`**
+: Country ISO code.
+
+type: keyword
+
+example: CA
+
+
+**`client.geo.country_name`**
+: Country name.
+
+type: keyword
+
+example: Canada
+
+
+**`client.geo.location`**
+: Longitude and latitude.
+
+type: geo_point
+
+example: { "lon": -73.614830, "lat": 45.505918 }
+
+
+**`client.geo.name`**
+: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation.
+
+type: keyword
+
+example: boston-dc
+
+
+**`client.geo.postal_code`**
+: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
+
+type: keyword
+
+example: 94040
+
+
+**`client.geo.region_iso_code`**
+: Region ISO code.
+
+type: keyword
+
+example: CA-QC
+
+
+**`client.geo.region_name`**
+: Region name.
+
+type: keyword
+
+example: Quebec
+
+
+**`client.geo.timezone`**
+: The time zone of the location, such as IANA time zone name.
+
+type: keyword
+
+example: America/Argentina/Buenos_Aires
+
+
+**`client.ip`**
+: IP address of the client (IPv4 or IPv6).
+
+type: ip
+
+
+**`client.mac`**
+: MAC address of the client. The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen.
+
+type: keyword
+
+example: 00-00-5E-00-53-23
+
+
+**`client.nat.ip`**
+: Translated IP of source based NAT sessions (e.g. internal client to internet). Typically connections traversing load balancers, firewalls, or routers.
+
+type: ip
+
+
+**`client.nat.port`**
+: Translated port of source based NAT sessions (e.g. internal client to internet). Typically connections traversing load balancers, firewalls, or routers.
+
+type: long
+
+format: string
+
+
+**`client.packets`**
+: Packets sent from the client to the server.
+
+type: long
+
+example: 12
+
+
+**`client.port`**
+: Port of the client.
+
+type: long
+
+format: string
+
+
+**`client.registered_domain`**
+: The highest registered client domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
+
+type: keyword
+
+example: example.com
+
+
+**`client.subdomain`**
+: The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
+
+type: keyword
+
+example: east
+
+
+**`client.top_level_domain`**
+: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
+
+type: keyword
+
+example: co.uk
+
+
+**`client.user.domain`**
+: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name.
+
+type: keyword
+
+
+**`client.user.email`**
+: User email address.
+
+type: keyword
+
+
+**`client.user.full_name`**
+: User’s full name, if available.
+
+type: keyword
+
+example: Albert Einstein
+
+
+**`client.user.full_name.text`**
+: type: match_only_text
+
+
+**`client.user.group.domain`**
+: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name.
+
+type: keyword
+
+
+**`client.user.group.id`**
+: Unique identifier for the group on the system/platform.
+
+type: keyword
+
+
+**`client.user.group.name`**
+: Name of the group.
+
+type: keyword
+
+
+**`client.user.hash`**
+: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used.
+
+type: keyword
+
+
+**`client.user.id`**
+: Unique identifier of the user.
+ +type: keyword + +example: S-1-5-21-202424912787-2692429404-2351956786-1000 + + +**`client.user.name`** +: Short name or login of the user. + +type: keyword + +example: a.einstein + + +**`client.user.name.text`** +: type: match_only_text + + +**`client.user.roles`** +: Array of user roles at the time of the event. + +type: keyword + +example: ["kibana_admin", "reporting_user"] + + + +## cloud [_cloud] + +Fields related to the cloud or infrastructure the events are coming from. + +**`cloud.account.id`** +: The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. + +type: keyword + +example: 666777888999 + + +**`cloud.account.name`** +: The cloud account name or alias used to identify different entities in a multi-tenant environment. Examples: AWS account name, Google Cloud ORG display name. + +type: keyword + +example: elastic-dev + + +**`cloud.availability_zone`** +: Availability zone in which this host, resource, or service is located. + +type: keyword + +example: us-east-1c + + +**`cloud.instance.id`** +: Instance ID of the host machine. + +type: keyword + +example: i-1234567890abcdef0 + + +**`cloud.instance.name`** +: Instance name of the host machine. + +type: keyword + + +**`cloud.machine.type`** +: Machine type of the host machine. + +type: keyword + +example: t2.medium + + +**`cloud.origin.account.id`** +: The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. + +type: keyword + +example: 666777888999 + + +**`cloud.origin.account.name`** +: The cloud account name or alias used to identify different entities in a multi-tenant environment. Examples: AWS account name, Google Cloud ORG display name. + +type: keyword + +example: elastic-dev + + +**`cloud.origin.availability_zone`** +: Availability zone in which this host, resource, or service is located. + +type: keyword + +example: us-east-1c + + +**`cloud.origin.instance.id`** +: Instance ID of the host machine. + +type: keyword + +example: i-1234567890abcdef0 + + +**`cloud.origin.instance.name`** +: Instance name of the host machine. + +type: keyword + + +**`cloud.origin.machine.type`** +: Machine type of the host machine. + +type: keyword + +example: t2.medium + + +**`cloud.origin.project.id`** +: The cloud project identifier. Examples: Google Cloud Project id, Azure Project id. + +type: keyword + +example: my-project + + +**`cloud.origin.project.name`** +: The cloud project name. Examples: Google Cloud Project name, Azure Project name. + +type: keyword + +example: my project + + +**`cloud.origin.provider`** +: Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. + +type: keyword + +example: aws + + +**`cloud.origin.region`** +: Region in which this host, resource, or service is located. + +type: keyword + +example: us-east-1 + + +**`cloud.origin.service.name`** +: The cloud service name is intended to distinguish services running on different platforms within a provider, eg AWS EC2 vs Lambda, GCP GCE vs App Engine, Azure VM vs App Server. Examples: app engine, app service, cloud run, fargate, lambda. + +type: keyword + +example: lambda + + +**`cloud.project.id`** +: The cloud project identifier. Examples: Google Cloud Project id, Azure Project id. + +type: keyword + +example: my-project + + +**`cloud.project.name`** +: The cloud project name. 
Examples: Google Cloud Project name, Azure Project name. + +type: keyword + +example: my project + + +**`cloud.provider`** +: Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. + +type: keyword + +example: aws + + +**`cloud.region`** +: Region in which this host, resource, or service is located. + +type: keyword + +example: us-east-1 + + +**`cloud.service.name`** +: The cloud service name is intended to distinguish services running on different platforms within a provider, eg AWS EC2 vs Lambda, GCP GCE vs App Engine, Azure VM vs App Server. Examples: app engine, app service, cloud run, fargate, lambda. + +type: keyword + +example: lambda + + +**`cloud.target.account.id`** +: The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. + +type: keyword + +example: 666777888999 + + +**`cloud.target.account.name`** +: The cloud account name or alias used to identify different entities in a multi-tenant environment. Examples: AWS account name, Google Cloud ORG display name. + +type: keyword + +example: elastic-dev + + +**`cloud.target.availability_zone`** +: Availability zone in which this host, resource, or service is located. + +type: keyword + +example: us-east-1c + + +**`cloud.target.instance.id`** +: Instance ID of the host machine. + +type: keyword + +example: i-1234567890abcdef0 + + +**`cloud.target.instance.name`** +: Instance name of the host machine. + +type: keyword + + +**`cloud.target.machine.type`** +: Machine type of the host machine. + +type: keyword + +example: t2.medium + + +**`cloud.target.project.id`** +: The cloud project identifier. Examples: Google Cloud Project id, Azure Project id. + +type: keyword + +example: my-project + + +**`cloud.target.project.name`** +: The cloud project name. Examples: Google Cloud Project name, Azure Project name. + +type: keyword + +example: my project + + +**`cloud.target.provider`** +: Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. + +type: keyword + +example: aws + + +**`cloud.target.region`** +: Region in which this host, resource, or service is located. + +type: keyword + +example: us-east-1 + + +**`cloud.target.service.name`** +: The cloud service name is intended to distinguish services running on different platforms within a provider, eg AWS EC2 vs Lambda, GCP GCE vs App Engine, Azure VM vs App Server. Examples: app engine, app service, cloud run, fargate, lambda. + +type: keyword + +example: lambda + + + +## code_signature [_code_signature] + +These fields contain information about binary code signatures. + +**`code_signature.digest_algorithm`** +: The hashing algorithm used to sign the process. This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. + +type: keyword + +example: sha256 + + +**`code_signature.exists`** +: Boolean to capture if a signature is present. + +type: boolean + +example: true + + +**`code_signature.signing_id`** +: The identifier used to sign the process. This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. + +type: keyword + +example: com.apple.xpc.proxy + + +**`code_signature.status`** +: Additional information about the certificate status. This is useful for logging cryptographic errors with the certificate validity or trust status. 
Leave unpopulated if the validity or trust of the certificate was unchecked. + +type: keyword + +example: ERROR_UNTRUSTED_ROOT + + +**`code_signature.subject_name`** +: Subject name of the code signer + +type: keyword + +example: Microsoft Corporation + + +**`code_signature.team_id`** +: The team identifier used to sign the process. This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. + +type: keyword + +example: EQHXZ8M8AV + + +**`code_signature.timestamp`** +: Date and time when the code signature was generated and signed. + +type: date + +example: 2021-01-01T12:10:30Z + + +**`code_signature.trusted`** +: Stores the trust status of the certificate chain. Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. + +type: boolean + +example: true + + +**`code_signature.valid`** +: Boolean to capture if the digital signature is verified against the binary content. Leave unpopulated if a certificate was unchecked. + +type: boolean + +example: true + + + +## container [_container] + +Container fields are used for meta information about the specific container that is the source of information. These fields help correlate data based containers from any runtime. + +**`container.cpu.usage`** +: Percent CPU used which is normalized by the number of CPU cores and it ranges from 0 to 1. Scaling factor: 1000. + +type: scaled_float + + +**`container.disk.read.bytes`** +: The total number of bytes (gauge) read successfully (aggregated from all disks) since the last metric collection. + +type: long + + +**`container.disk.write.bytes`** +: The total number of bytes (gauge) written successfully (aggregated from all disks) since the last metric collection. + +type: long + + +**`container.id`** +: Unique container id. + +type: keyword + + +**`container.image.name`** +: Name of the image the container was built on. + +type: keyword + + +**`container.image.tag`** +: Container image tags. + +type: keyword + + +**`container.labels`** +: Image labels. + +type: object + + +**`container.memory.usage`** +: Memory usage percentage and it ranges from 0 to 1. Scaling factor: 1000. + +type: scaled_float + + +**`container.name`** +: Container name. + +type: keyword + + +**`container.network.egress.bytes`** +: The number of bytes (gauge) sent out on all network interfaces by the container since the last metric collection. + +type: long + + +**`container.network.ingress.bytes`** +: The number of bytes received (gauge) on all network interfaces by the container since the last metric collection. + +type: long + + +**`container.runtime`** +: Runtime managing this container. + +type: keyword + +example: docker + + + +## data_stream [_data_stream] + +The data_stream fields take part in defining the new data stream naming scheme. In the new data stream naming scheme the value of the data stream fields combine to the name of the actual data stream in the following manner: `{data_stream.type}-{data_stream.dataset}-{data_stream.namespace}`. This means the fields can only contain characters that are valid as part of names of data streams. More details about this can be found in this [blog post](https://www.elastic.co/blog/an-introduction-to-the-elastic-data-stream-naming-scheme). An Elasticsearch data stream consists of one or more backing indices, and a data stream name forms part of the backing indices names. Due to this convention, data streams must also follow index naming restrictions. 
For example, data stream names cannot include `\`, `/`, `*`, `?`, `"`, `<`, `>`, `|`, ` ` (space character), `,`, or `#`. Please see the Elasticsearch reference for additional [restrictions](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create). + +**`data_stream.dataset`** +: The field can contain anything that makes sense to signify the source of the data. Examples include `nginx.access`, `prometheus`, `endpoint` etc. For data streams that otherwise fit, but that do not have dataset set we use the value "generic" for the dataset value. `event.dataset` should have the same value as `data_stream.dataset`. Beyond the Elasticsearch data stream naming criteria noted above, the `dataset` value has additional restrictions: * Must not contain `-` * No longer than 100 characters + +type: constant_keyword + +example: nginx.access + + +**`data_stream.namespace`** +: A user defined namespace. Namespaces are useful to allow grouping of data. Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with `default`. If no value is used, it falls back to `default`. Beyond the Elasticsearch index naming criteria noted above, `namespace` value has the additional restrictions: * Must not contain `-` * No longer than 100 characters + +type: constant_keyword + +example: production + + +**`data_stream.type`** +: An overarching type for the data stream. Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future. + +type: constant_keyword + +example: logs + + + +## destination [_destination_2] + +Destination fields capture details about the receiver of a network exchange/packet. These fields are populated from a network event, packet, or other event containing details of a network transaction. Destination fields are usually populated in conjunction with source fields. The source and destination fields are considered the baseline and should always be filled if an event contains source and destination details from a network transaction. If the event also contains identification of the client and server roles, then the client and server fields should also be populated. + +**`destination.address`** +: Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. + +type: keyword + + +**`destination.as.number`** +: Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. + +type: long + +example: 15169 + + +**`destination.as.organization.name`** +: Organization name. + +type: keyword + +example: Google LLC + + +**`destination.as.organization.name.text`** +: type: match_only_text + + +**`destination.bytes`** +: Bytes sent from the destination to the source. + +type: long + +example: 184 + +format: bytes + + +**`destination.domain`** +: The domain name of the destination system. This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment. + +type: keyword + +example: foo.example.com + + +**`destination.geo.city_name`** +: City name. 
+
+type: keyword
+
+example: Montreal
+
+
+**`destination.geo.continent_code`**
+: Two-letter code representing continent’s name.
+
+type: keyword
+
+example: NA
+
+
+**`destination.geo.continent_name`**
+: Name of the continent.
+
+type: keyword
+
+example: North America
+
+
+**`destination.geo.country_iso_code`**
+: Country ISO code.
+
+type: keyword
+
+example: CA
+
+
+**`destination.geo.country_name`**
+: Country name.
+
+type: keyword
+
+example: Canada
+
+
+**`destination.geo.location`**
+: Longitude and latitude.
+
+type: geo_point
+
+example: { "lon": -73.614830, "lat": 45.505918 }
+
+
+**`destination.geo.name`**
+: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation.
+
+type: keyword
+
+example: boston-dc
+
+
+**`destination.geo.postal_code`**
+: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
+
+type: keyword
+
+example: 94040
+
+
+**`destination.geo.region_iso_code`**
+: Region ISO code.
+
+type: keyword
+
+example: CA-QC
+
+
+**`destination.geo.region_name`**
+: Region name.
+
+type: keyword
+
+example: Quebec
+
+
+**`destination.geo.timezone`**
+: The time zone of the location, such as IANA time zone name.
+
+type: keyword
+
+example: America/Argentina/Buenos_Aires
+
+
+**`destination.ip`**
+: IP address of the destination (IPv4 or IPv6).
+
+type: ip
+
+
+**`destination.mac`**
+: MAC address of the destination. The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen.
+
+type: keyword
+
+example: 00-00-5E-00-53-23
+
+
+**`destination.nat.ip`**
+: Translated IP of destination based NAT sessions (e.g. internet to private DMZ). Typically used with load balancers, firewalls, or routers.
+
+type: ip
+
+
+**`destination.nat.port`**
+: Port the source session is translated to by NAT Device. Typically used with load balancers, firewalls, or routers.
+
+type: long
+
+format: string
+
+
+**`destination.packets`**
+: Packets sent from the destination to the source.
+
+type: long
+
+example: 12
+
+
+**`destination.port`**
+: Port of the destination.
+
+type: long
+
+format: string
+
+
+**`destination.registered_domain`**
+: The highest registered destination domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
+
+type: keyword
+
+example: example.com
+
+
+**`destination.subdomain`**
+: The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
+ +type: keyword + +example: east + + +**`destination.top_level_domain`** +: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". + +type: keyword + +example: co.uk + + +**`destination.user.domain`** +: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`destination.user.email`** +: User email address. + +type: keyword + + +**`destination.user.full_name`** +: User’s full name, if available. + +type: keyword + +example: Albert Einstein + + +**`destination.user.full_name.text`** +: type: match_only_text + + +**`destination.user.group.domain`** +: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`destination.user.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`destination.user.group.name`** +: Name of the group. + +type: keyword + + +**`destination.user.hash`** +: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used. + +type: keyword + + +**`destination.user.id`** +: Unique identifier of the user. + +type: keyword + +example: S-1-5-21-202424912787-2692429404-2351956786-1000 + + +**`destination.user.name`** +: Short name or login of the user. + +type: keyword + +example: a.einstein + + +**`destination.user.name.text`** +: type: match_only_text + + +**`destination.user.roles`** +: Array of user roles at the time of the event. + +type: keyword + +example: ["kibana_admin", "reporting_user"] + + + +## dll [_dll] + +These fields contain information about code libraries dynamically loaded into processes. + +Many operating systems refer to "shared code libraries" with different names, but this field set refers to all of the following: * Dynamic-link library (`.dll`) commonly used on Windows * Shared Object (`.so`) commonly used on Unix-like operating systems * Dynamic library (`.dylib`) commonly used on macOS + +**`dll.code_signature.digest_algorithm`** +: The hashing algorithm used to sign the process. This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. + +type: keyword + +example: sha256 + + +**`dll.code_signature.exists`** +: Boolean to capture if a signature is present. + +type: boolean + +example: true + + +**`dll.code_signature.signing_id`** +: The identifier used to sign the process. This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. + +type: keyword + +example: com.apple.xpc.proxy + + +**`dll.code_signature.status`** +: Additional information about the certificate status. This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. 
+ +type: keyword + +example: ERROR_UNTRUSTED_ROOT + + +**`dll.code_signature.subject_name`** +: Subject name of the code signer + +type: keyword + +example: Microsoft Corporation + + +**`dll.code_signature.team_id`** +: The team identifier used to sign the process. This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. + +type: keyword + +example: EQHXZ8M8AV + + +**`dll.code_signature.timestamp`** +: Date and time when the code signature was generated and signed. + +type: date + +example: 2021-01-01T12:10:30Z + + +**`dll.code_signature.trusted`** +: Stores the trust status of the certificate chain. Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. + +type: boolean + +example: true + + +**`dll.code_signature.valid`** +: Boolean to capture if the digital signature is verified against the binary content. Leave unpopulated if a certificate was unchecked. + +type: boolean + +example: true + + +**`dll.hash.md5`** +: MD5 hash. + +type: keyword + + +**`dll.hash.sha1`** +: SHA1 hash. + +type: keyword + + +**`dll.hash.sha256`** +: SHA256 hash. + +type: keyword + + +**`dll.hash.sha512`** +: SHA512 hash. + +type: keyword + + +**`dll.hash.ssdeep`** +: SSDEEP hash. + +type: keyword + + +**`dll.name`** +: Name of the library. This generally maps to the name of the file on disk. + +type: keyword + +example: kernel32.dll + + +**`dll.path`** +: Full file path of the library. + +type: keyword + +example: C:\Windows\System32\kernel32.dll + + +**`dll.pe.architecture`** +: CPU architecture target for the file. + +type: keyword + +example: x64 + + +**`dll.pe.company`** +: Internal company name of the file, provided at compile-time. + +type: keyword + +example: Microsoft Corporation + + +**`dll.pe.description`** +: Internal description of the file, provided at compile-time. + +type: keyword + +example: Paint + + +**`dll.pe.file_version`** +: Internal version of the file, provided at compile-time. + +type: keyword + +example: 6.3.9600.17415 + + +**`dll.pe.imphash`** +: A hash of the imports in a PE file. An imphash — or import hash — can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. Learn more at [https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html](https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html). + +type: keyword + +example: 0c6803c4e922103c4dca5963aad36ddf + + +**`dll.pe.original_file_name`** +: Internal name of the file, provided at compile-time. + +type: keyword + +example: MSPAINT.EXE + + +**`dll.pe.product`** +: Internal product name of the file, provided at compile-time. + +type: keyword + +example: Microsoft® Windows® Operating System + + + +## dns [_dns] + +Fields describing DNS queries and answers. DNS events should either represent a single DNS query prior to getting answers (`dns.type:query`) or they should represent a full exchange and contain the query details as well as all of the answers that were provided for this query (`dns.type:answer`). + +**`dns.answers`** +: An array containing an object for each answer section returned by the server. The main keys that should be present in these objects are defined by ECS. Records that have more information may contain more keys than what ECS defines. Not all DNS data sources give all details about DNS answers. 
At minimum, answer objects must contain the `data` key. If more information is available, map as much of it to ECS as possible, and add any additional fields to the answer objects as custom fields. + +type: object + + +**`dns.answers.class`** +: The class of DNS data contained in this resource record. + +type: keyword + +example: IN + + +**`dns.answers.data`** +: The data describing the resource. The meaning of this data depends on the type and class of the resource record. + +type: keyword + +example: 10.10.10.10 + + +**`dns.answers.name`** +: The domain name to which this resource record pertains. If a chain of CNAME is being resolved, each answer’s `name` should be the one that corresponds with the answer’s `data`. It should not simply be the original `question.name` repeated. + +type: keyword + +example: www.example.com + + +**`dns.answers.ttl`** +: The time interval in seconds that this resource record may be cached before it should be discarded. Zero values mean that the data should not be cached. + +type: long + +example: 180 + + +**`dns.answers.type`** +: The type of data contained in this resource record. + +type: keyword + +example: CNAME + + +**`dns.header_flags`** +: Array of 2 letter DNS header flags. Expected values are: AA, TC, RD, RA, AD, CD, DO. + +type: keyword + +example: ["RD", "RA"] + + +**`dns.id`** +: The DNS packet identifier assigned by the program that generated the query. The identifier is copied to the response. + +type: keyword + +example: 62111 + + +**`dns.op_code`** +: The DNS operation code that specifies the kind of query in the message. This value is set by the originator of a query and copied into the response. + +type: keyword + +example: QUERY + + +**`dns.question.class`** +: The class of records being queried. + +type: keyword + +example: IN + + +**`dns.question.name`** +: The name being queried. If the name field contains non-printable characters (below 32 or above 126), those characters should be represented as escaped base 10 integers (\DDD). Back slashes and quotes should be escaped. Tabs, carriage returns, and line feeds should be converted to \t, \r, and \n respectively. + +type: keyword + +example: www.example.com + + +**`dns.question.registered_domain`** +: The highest registered domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". + +type: keyword + +example: example.com + + +**`dns.question.subdomain`** +: The subdomain is all of the labels under the registered_domain. If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. + +type: keyword + +example: www + + +**`dns.question.top_level_domain`** +: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". + +type: keyword + +example: co.uk + + +**`dns.question.type`** +: The type of record being queried. 
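For illustration, here is a hedged sketch of how the `dns.*` fields in this section (including `dns.type` and `dns.resolved_ip`, defined below) might combine into a single `dns.type:answer` event; all values are invented or echo the documented examples:

```python
# Illustrative dns.type:answer event; values echo the examples in this section.
event = {
    "dns": {
        "type": "answer",
        "id": "62111",                 # packet identifier, copied into the response
        "op_code": "QUERY",
        "header_flags": ["RD", "RA"],
        "question": {
            "name": "www.example.com",
            "class": "IN",
            "type": "A",
            "registered_domain": "example.com",  # highest registered domain
            "subdomain": "www",                  # labels under registered_domain
            "top_level_domain": "com",           # eTLD per the public suffix list
        },
        "answers": [
            {"name": "www.example.com", "class": "IN", "type": "A",
             "ttl": 180, "data": "10.10.10.10"},
        ],
        "resolved_ip": ["10.10.10.10"],  # all IPs seen in answers.data
        "response_code": "NOERROR",
    },
}
```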
+ +type: keyword + +example: AAAA + + +**`dns.resolved_ip`** +: Array containing all IPs seen in `answers.data`. The `answers` array can be difficult to use, because of the variety of data formats it can contain. Extracting all IP addresses seen in there to `dns.resolved_ip` makes it possible to index them as IP addresses, and makes them easier to visualize and query for. + +type: ip + +example: ["10.10.10.10", "10.10.10.11"] + + +**`dns.response_code`** +: The DNS response code. + +type: keyword + +example: NOERROR + + +**`dns.type`** +: The type of DNS event captured, query or answer. If your source of DNS events only gives you DNS queries, you should only create dns events of type `dns.type:query`. If your source of DNS events gives you answers as well, you should create one event per query (optionally as soon as the query is seen), and a second event containing all query details as well as an array of answers. + +type: keyword + +example: answer + + + +## ecs [_ecs] + +Meta-information specific to ECS. + +**`ecs.version`** +: ECS version this event conforms to. `ecs.version` is a required field and must exist in all events. When querying across multiple indices — which may conform to slightly different ECS versions — this field lets integrations adjust to the schema version of the events. + +type: keyword + +example: 1.0.0 + +required: True + + + +## elf [_elf] + +These fields contain Linux Executable Linkable Format (ELF) metadata. + +**`elf.architecture`** +: Machine architecture of the ELF file. + +type: keyword + +example: x86-64 + + +**`elf.byte_order`** +: Byte sequence of ELF file. + +type: keyword + +example: Little Endian + + +**`elf.cpu_type`** +: CPU type of the ELF file. + +type: keyword + +example: Intel + + +**`elf.creation_date`** +: Extracted when possible from the file’s metadata. Indicates when it was built or compiled. It can also be faked by malware creators. + +type: date + + +**`elf.exports`** +: List of exported element names and types. + +type: flattened + + +**`elf.header.abi_version`** +: Version of the ELF Application Binary Interface (ABI). + +type: keyword + + +**`elf.header.class`** +: Header class of the ELF file. + +type: keyword + + +**`elf.header.data`** +: Data table of the ELF header. + +type: keyword + + +**`elf.header.entrypoint`** +: Header entrypoint of the ELF file. + +type: long + +format: string + + +**`elf.header.object_version`** +: "0x1" for original ELF files. + +type: keyword + + +**`elf.header.os_abi`** +: Application Binary Interface (ABI) of the Linux OS. + +type: keyword + + +**`elf.header.type`** +: Header type of the ELF file. + +type: keyword + + +**`elf.header.version`** +: Version of the ELF header. + +type: keyword + + +**`elf.imports`** +: List of imported element names and types. + +type: flattened + + +**`elf.sections`** +: An array containing an object for each section of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. + +type: nested + + +**`elf.sections.chi2`** +: Chi-square probability distribution of the section. + +type: long + +format: number + + +**`elf.sections.entropy`** +: Shannon entropy calculation from the section. + +type: long + +format: number + + +**`elf.sections.flags`** +: ELF Section List flags. + +type: keyword + + +**`elf.sections.name`** +: ELF Section List name. + +type: keyword + + +**`elf.sections.physical_offset`** +: ELF Section List offset.
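The `elf.sections.entropy` value above is conventionally the Shannon entropy of the section's raw bytes (0 bits per byte for constant data, approaching 8 for random or packed data). A minimal sketch of that calculation:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 for constant data,
    close to 8.0 for random, packed, or encrypted data."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

print(shannon_entropy(bytes(range(256))))  # 8.0: every byte value equally likely
```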
+ +type: keyword + + +**`elf.sections.physical_size`** +: ELF Section List physical size. + +type: long + +format: bytes + + +**`elf.sections.type`** +: ELF Section List type. + +type: keyword + + +**`elf.sections.virtual_address`** +: ELF Section List virtual address. + +type: long + +format: string + + +**`elf.sections.virtual_size`** +: ELF Section List virtual size. + +type: long + +format: string + + +**`elf.segments`** +: An array containing an object for each segment of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. + +type: nested + + +**`elf.segments.sections`** +: ELF object segment sections. + +type: keyword + + +**`elf.segments.type`** +: ELF object segment type. + +type: keyword + + +**`elf.shared_libraries`** +: List of shared libraries used by this ELF object. + +type: keyword + + +**`elf.telfhash`** +: telfhash symbol hash for ELF file. + +type: keyword + + + +## error [_error] + +These fields can represent errors of any kind. Use them for errors that happen while fetching events or in cases where the event itself contains an error. + +**`error.code`** +: Error code describing the error. + +type: keyword + + +**`error.id`** +: Unique identifier for the error. + +type: keyword + + +**`error.message`** +: Error message. + +type: match_only_text + + +**`error.stack_trace`** +: The stack trace of this error in plain text. + +type: wildcard + + +**`error.stack_trace.text`** +: type: match_only_text + + +**`error.type`** +: The type of the error, for example the class name of the exception. + +type: keyword + +example: java.lang.NullPointerException + + + +## event [_event] + +The event fields are used for context information about the log or metric event itself. A log is defined as an event containing details of something that happened. Log events must include the time at which the thing happened. Examples of log events include a process starting on a host, a network packet being sent from a source to a destination, or a network connection between a client and a server being initiated or closed. A metric is defined as an event containing one or more numerical measurements and the time at which the measurement was taken. Examples of metric events include memory pressure measured on a host and device temperature. See the `event.kind` definition in this section for additional details about metric and state events. + +**`event.action`** +: The action captured by the event. This describes the information in the event. It is more specific than `event.category`. Examples are `group-add`, `process-started`, `file-created`. The value is normally defined by the implementer. + +type: keyword + +example: user-password-change + + +**`event.agent_id_status`** +: Agents are normally responsible for populating the `agent.id` field value. If the system receiving events is capable of validating the value based on authentication information for the client then this field can be used to reflect the outcome of that validation. For example if the agent’s connection is authenticated with mTLS and the client cert contains the ID of the agent to which the cert was issued then the `agent.id` value in events can be checked against the certificate. If the values match then `event.agent_id_status: verified` is added to the event, otherwise one of the other allowed values should be used. If no validation is performed then the field should be omitted. 
The allowed values are: + +* `verified` - The `agent.id` field value matches the expected value obtained from auth metadata. +* `mismatch` - The `agent.id` field value does not match the expected value obtained from auth metadata. +* `missing` - There was no `agent.id` field in the event to validate. +* `auth_metadata_missing` - There was no auth metadata, or it was missing information about the agent ID. + +type: keyword + +example: verified + + +**`event.category`** +: This is one of four ECS Categorization Fields, and indicates the second level in the ECS category hierarchy. `event.category` represents the "big buckets" of ECS categories. For example, filtering on `event.category:process` yields all events relating to process activity. This field is closely related to `event.type`, which is used as a subcategory. This field is an array. This will allow proper categorization of some events that fall in multiple categories. + +type: keyword + +example: authentication + + +**`event.code`** +: Identification code for this event, if one exists. Some event sources use event codes to identify messages unambiguously, regardless of message language or wording adjustments over time. An example of this is the Windows Event ID. + +type: keyword + +example: 4648 + + +**`event.created`** +: event.created contains the date/time when the event was first read by an agent, or by your pipeline. This field is distinct from @timestamp in that @timestamp typically contains the time extracted from the original event. In most situations, these two timestamps will be slightly different. The difference can be used to calculate the delay between your source generating an event, and the time when your agent first processed it. This can be used to monitor your agent’s or pipeline’s ability to keep up with your event source. In case the two timestamps are identical, @timestamp should be used. + +type: date + +example: 2016-05-23T08:05:34.857Z + + +**`event.dataset`** +: Name of the dataset. If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. It’s recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name. + +type: keyword + +example: apache.access + + +**`event.duration`** +: Duration of the event in nanoseconds. If event.start and event.end are known, this value should be the difference between the end and start time. + +type: long + +format: duration + + +**`event.end`** +: event.end contains the date when the event ended or when the activity was last observed. + +type: date + + +**`event.hash`** +: Hash (perhaps logstash fingerprint) of raw field to be able to demonstrate log integrity. + +type: keyword + +example: 123456789012345678901234567890ABCD + + +**`event.id`** +: Unique ID to describe the event. + +type: keyword + +example: 8a4f500d + + +**`event.ingested`** +: Timestamp when an event arrived in the central data store. This is different from `@timestamp`, which is when the event originally occurred. It’s also different from `event.created`, which is meant to capture the first time an agent saw the event. In normal conditions, assuming no tampering, the timestamps should chronologically look like this: `@timestamp` < `event.created` < `event.ingested`. + +type: date + +example: 2016-05-23T08:05:35.101Z + + +**`event.kind`** +: This is one of four ECS Categorization Fields, and indicates the highest level in the ECS category hierarchy.
`event.kind` gives high-level information about what type of information the event contains, without being specific to the contents of the event. For example, values of this field distinguish alert events from metric events. The value of this field can be used to inform how these kinds of events should be handled. They may warrant different retention or different access control, and it may also help you understand whether the data is coming in at a regular interval or not. + +type: keyword + +example: alert + + +**`event.module`** +: Name of the module this data is coming from. If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), `event.module` should contain the name of this module. + +type: keyword + +example: apache + + +**`event.original`** +: Raw text message of entire event. Used to demonstrate log integrity or where the full log message (before splitting it up in multiple parts) may be required, e.g. for reindex. This field is not indexed and doc_values are disabled. It cannot be searched, but it can be retrieved from `_source`. If users wish to override this and index this field, please see `Field data types` in the `Elasticsearch Reference`. + +type: keyword + +example: Sep 19 08:26:10 host CEF:0|Security| threatmanager|1.0|100| worm successfully stopped|10|src=10.0.0.1 dst=2.1.2.2spt=1232 + +Field is not indexed. + + +**`event.outcome`** +: This is one of four ECS Categorization Fields, and indicates the lowest level in the ECS category hierarchy. `event.outcome` simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. Note that when a single transaction is described in multiple events, each event may populate different values of `event.outcome`, according to their perspective. Also note that in the case of a compound event (a single event that contains multiple logical events), this field should be populated with the value that best captures the overall success or failure from the perspective of the event producer. Further note that not all events will have an associated outcome. For example, this field is generally not populated for metric events, events with `event.type:info`, or any events for which an outcome does not make logical sense. + +type: keyword + +example: success + + +**`event.provider`** +: Source of the event. Event transports such as Syslog or the Windows Event Log typically mention the source of an event. It can be the name of the software that generated the event (e.g. Sysmon, httpd), or of a subsystem of the operating system (kernel, Microsoft-Windows-Security-Auditing). + +type: keyword + +example: kernel + + +**`event.reason`** +: Reason why this event happened, according to the source. This describes the why of a particular action or outcome captured in the event. Where `event.action` captures the action from the event, `event.reason` describes why that action was taken. For example, a web proxy with an `event.action` which denied the request may also populate `event.reason` with the reason why (e.g. `blocked site`). + +type: keyword + +example: Terminated an unexpected process + + +**`event.reference`** +: Reference URL linking to additional information about this event. This URL links to a static definition of this event. Alert events, indicated by `event.kind:alert`, are a common use case for this field.
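As a worked note on the timestamp fields above: with well-behaved clocks, `@timestamp` < `event.created` < `event.ingested`, and `event.duration` is the end-minus-start difference expressed in nanoseconds. A small sketch (times invented):

```python
from datetime import datetime, timedelta, timezone

start = datetime(2016, 5, 23, 8, 5, 34, 853000, tzinfo=timezone.utc)  # event.start
end = datetime(2016, 5, 23, 8, 5, 34, 857000, tzinfo=timezone.utc)    # event.end

# event.duration: end minus start, in nanoseconds.
# timedelta has microsecond resolution, so scale the microsecond count by 1000.
duration_ns = ((end - start) // timedelta(microseconds=1)) * 1000
print(duration_ns)  # 4000000, i.e. 4 ms
```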
+ +type: keyword + +example: [https://system.example.com/event/#0001234](https://system.example.com/event/#0001234) + + +**`event.risk_score`** +: Risk score or priority of the event (e.g. security solutions). Use your system’s original value here. + +type: float + + +**`event.risk_score_norm`** +: Normalized risk score or priority of the event, on a scale of 0 to 100. This is mainly useful if you use more than one system that assigns risk scores, and you want to see a normalized value across all systems. + +type: float + + +**`event.sequence`** +: Sequence number of the event. The sequence number is a value published by some event sources, to make the exact ordering of events unambiguous, regardless of the timestamp precision. + +type: long + +format: string + + +**`event.severity`** +: The numeric severity of the event according to your event source. What the different severity values mean can be different between sources and use cases. It’s up to the implementer to make sure severities are consistent across events from the same source. The Syslog severity belongs in `log.syslog.severity.code`. `event.severity` is meant to represent the severity according to the event source (e.g. firewall, IDS). If the event source does not publish its own severity, you may optionally copy the `log.syslog.severity.code` to `event.severity`. + +type: long + +example: 7 + +format: string + + +**`event.start`** +: event.start contains the date when the event started or when the activity was first observed. + +type: date + + +**`event.timezone`** +: This field should be populated when the event’s timestamp does not include timezone information already (e.g. default Syslog timestamps). It’s optional otherwise. Acceptable timezone formats are: a canonical ID (e.g. "Europe/Amsterdam"), abbreviated (e.g. "EST") or an HH:mm differential (e.g. "-05:00"). + +type: keyword + + +**`event.type`** +: This is one of four ECS Categorization Fields, and indicates the third level in the ECS category hierarchy. `event.type` represents a categorization "sub-bucket" that, when used along with the `event.category` field values, enables filtering events down to a level appropriate for a single visualization. This field is an array. This will allow proper categorization of some events that fall in multiple event types. + +type: keyword + + +**`event.url`** +: URL linking to an external system to continue investigation of this event. This URL links to another system where in-depth investigation of the specific occurrence of this event can take place. Alert events, indicated by `event.kind:alert`, are a common use case for this field. + +type: keyword + +example: [https://mysystem.example.com/alert/5271dedb-f5b0-4218-87f0-4ac4870a38fe](https://mysystem.example.com/alert/5271dedb-f5b0-4218-87f0-4ac4870a38fe) + + + +## faas [_faas] + +These fields describe information about the function as a service (FaaS) that is relevant to the event. + +**`faas.coldstart`** +: Boolean value indicating a cold start of a function. + +type: boolean + + +**`faas.execution`** +: The execution ID of the current function execution. + +type: keyword + +example: af9d5aa4-a685-4c5f-a22b-444f80b3cc28 + + +**`faas.trigger`** +: Details about the function trigger. + +type: nested + + +**`faas.trigger.request_id`** +: The ID of the trigger request, message, event, etc. + +type: keyword + +example: 123456789 + + +**`faas.trigger.type`** +: The trigger for the function execution.
Expected values are: + +* http +* pubsub +* datasource +* timer +* other + +type: keyword + +example: http + + + +## file [_file_2] + +A file is defined as a set of information that has been created on, or has existed on a filesystem. File objects can be associated with host events, network events, and/or file events (e.g., those produced by File Integrity Monitoring [FIM] products or services). File fields provide details about the affected file associated with the event or metric. + +**`file.accessed`** +: Last time the file was accessed. Note that not all filesystems keep track of access time. + +type: date + + +**`file.attributes`** +: Array of file attributes. Attribute names will vary by platform. Here’s a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write. + +type: keyword + +example: ["readonly", "system"] + + +**`file.code_signature.digest_algorithm`** +: The hashing algorithm used to sign the process. This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. + +type: keyword + +example: sha256 + + +**`file.code_signature.exists`** +: Boolean to capture if a signature is present. + +type: boolean + +example: true + + +**`file.code_signature.signing_id`** +: The identifier used to sign the process. This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. + +type: keyword + +example: com.apple.xpc.proxy + + +**`file.code_signature.status`** +: Additional information about the certificate status. This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. + +type: keyword + +example: ERROR_UNTRUSTED_ROOT + + +**`file.code_signature.subject_name`** +: Subject name of the code signer. + +type: keyword + +example: Microsoft Corporation + + +**`file.code_signature.team_id`** +: The team identifier used to sign the process. This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. + +type: keyword + +example: EQHXZ8M8AV + + +**`file.code_signature.timestamp`** +: Date and time when the code signature was generated and signed. + +type: date + +example: 2021-01-01T12:10:30Z + + +**`file.code_signature.trusted`** +: Stores the trust status of the certificate chain. Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. + +type: boolean + +example: true + + +**`file.code_signature.valid`** +: Boolean to capture if the digital signature is verified against the binary content. Leave unpopulated if a certificate was unchecked. + +type: boolean + +example: true + + +**`file.created`** +: File creation time. Note that not all filesystems store the creation time. + +type: date + + +**`file.ctime`** +: Last time the file attributes or metadata changed. Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file. + +type: date + + +**`file.device`** +: Device that is the source of the file. + +type: keyword + +example: sda + + +**`file.directory`** +: Directory where the file is located. It should include the drive letter, when appropriate.
+ +type: keyword + +example: /home/alice + + +**`file.drive_letter`** +: Drive letter where the file is located. This field is only relevant on Windows. The value should be uppercase, and not include the colon. + +type: keyword + +example: C + + +**`file.elf.architecture`** +: Machine architecture of the ELF file. + +type: keyword + +example: x86-64 + + +**`file.elf.byte_order`** +: Byte sequence of ELF file. + +type: keyword + +example: Little Endian + + +**`file.elf.cpu_type`** +: CPU type of the ELF file. + +type: keyword + +example: Intel + + +**`file.elf.creation_date`** +: Extracted when possible from the file’s metadata. Indicates when it was built or compiled. It can also be faked by malware creators. + +type: date + + +**`file.elf.exports`** +: List of exported element names and types. + +type: flattened + + +**`file.elf.header.abi_version`** +: Version of the ELF Application Binary Interface (ABI). + +type: keyword + + +**`file.elf.header.class`** +: Header class of the ELF file. + +type: keyword + + +**`file.elf.header.data`** +: Data table of the ELF header. + +type: keyword + + +**`file.elf.header.entrypoint`** +: Header entrypoint of the ELF file. + +type: long + +format: string + + +**`file.elf.header.object_version`** +: "0x1" for original ELF files. + +type: keyword + + +**`file.elf.header.os_abi`** +: Application Binary Interface (ABI) of the Linux OS. + +type: keyword + + +**`file.elf.header.type`** +: Header type of the ELF file. + +type: keyword + + +**`file.elf.header.version`** +: Version of the ELF header. + +type: keyword + + +**`file.elf.imports`** +: List of imported element names and types. + +type: flattened + + +**`file.elf.sections`** +: An array containing an object for each section of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. + +type: nested + + +**`file.elf.sections.chi2`** +: Chi-square probability distribution of the section. + +type: long + +format: number + + +**`file.elf.sections.entropy`** +: Shannon entropy calculation from the section. + +type: long + +format: number + + +**`file.elf.sections.flags`** +: ELF Section List flags. + +type: keyword + + +**`file.elf.sections.name`** +: ELF Section List name. + +type: keyword + + +**`file.elf.sections.physical_offset`** +: ELF Section List offset. + +type: keyword + + +**`file.elf.sections.physical_size`** +: ELF Section List physical size. + +type: long + +format: bytes + + +**`file.elf.sections.type`** +: ELF Section List type. + +type: keyword + + +**`file.elf.sections.virtual_address`** +: ELF Section List virtual address. + +type: long + +format: string + + +**`file.elf.sections.virtual_size`** +: ELF Section List virtual size. + +type: long + +format: string + + +**`file.elf.segments`** +: An array containing an object for each segment of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. + +type: nested + + +**`file.elf.segments.sections`** +: ELF object segment sections. + +type: keyword + + +**`file.elf.segments.type`** +: ELF object segment type. + +type: keyword + + +**`file.elf.shared_libraries`** +: List of shared libraries used by this ELF object. + +type: keyword + + +**`file.elf.telfhash`** +: telfhash symbol hash for ELF file. + +type: keyword + + +**`file.extension`** +: File extension, excluding the leading dot. Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). 
+ +type: keyword + +example: png + + +**`file.fork_name`** +: A fork is additional data associated with a filesystem object. On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist. On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name. + +type: keyword + +example: Zone.Identifier + + +**`file.gid`** +: Primary group ID (GID) of the file. + +type: keyword + +example: 1001 + + +**`file.group`** +: Primary group name of the file. + +type: keyword + +example: alice + + +**`file.hash.md5`** +: MD5 hash. + +type: keyword + + +**`file.hash.sha1`** +: SHA1 hash. + +type: keyword + + +**`file.hash.sha256`** +: SHA256 hash. + +type: keyword + + +**`file.hash.sha512`** +: SHA512 hash. + +type: keyword + + +**`file.hash.ssdeep`** +: SSDEEP hash. + +type: keyword + + +**`file.inode`** +: Inode representing the file in the filesystem. + +type: keyword + +example: 256383 + + +**`file.mime_type`** +: MIME type should identify the format of the file or stream of bytes using [IANA official types](https://www.iana.org/assignments/media-types/media-types.xhtml), where possible. When more than one type is applicable, the most specific type should be used. + +type: keyword + + +**`file.mode`** +: Mode of the file in octal representation. + +type: keyword + +example: 0640 + + +**`file.mtime`** +: Last time the file content was modified. + +type: date + + +**`file.name`** +: Name of the file including the extension, without the directory. + +type: keyword + +example: example.png + + +**`file.owner`** +: File owner’s username. + +type: keyword + +example: alice + + +**`file.path`** +: Full path to the file, including the file name. It should include the drive letter, when appropriate. + +type: keyword + +example: /home/alice/example.png + + +**`file.path.text`** +: type: match_only_text + + +**`file.pe.architecture`** +: CPU architecture target for the file. + +type: keyword + +example: x64 + + +**`file.pe.company`** +: Internal company name of the file, provided at compile-time. + +type: keyword + +example: Microsoft Corporation + + +**`file.pe.description`** +: Internal description of the file, provided at compile-time. + +type: keyword + +example: Paint + + +**`file.pe.file_version`** +: Internal version of the file, provided at compile-time. + +type: keyword + +example: 6.3.9600.17415 + + +**`file.pe.imphash`** +: A hash of the imports in a PE file. An imphash — or import hash — can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. Learn more at [https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html](https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html). + +type: keyword + +example: 0c6803c4e922103c4dca5963aad36ddf + + +**`file.pe.original_file_name`** +: Internal name of the file, provided at compile-time.
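Tying together the path-related fields above (`file.path`, `file.directory`, `file.name`, `file.extension`, and `file.fork_name`), here is a hedged sketch of the decomposition for an NTFS Alternate Data Stream, using Python's standard `ntpath` module; the path itself is made up:

```python
import ntpath

# Decomposing a Windows path with an NTFS Alternate Data Stream (ADS),
# following the file.fork_name description above. The path is invented.
full = r"C:\path\to\filename.extension:Zone.Identifier"

file_path = full                          # file.path keeps the fork name
directory, tail = ntpath.split(full)      # file.directory: C:\path\to
name, _, fork_name = tail.partition(":")  # file.name / file.fork_name
# file.extension: last suffix only ("gz", not "tar.gz"), without the dot
extension = name.rsplit(".", 1)[-1] if "." in name else ""

print(directory, name, extension, fork_name)
# C:\path\to filename.extension extension Zone.Identifier
```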
+ +type: keyword + +example: MSPAINT.EXE + + +**`file.pe.product`** +: Internal product name of the file, provided at compile-time. + +type: keyword + +example: Microsoft® Windows® Operating System + + +**`file.size`** +: File size in bytes. Only relevant when `file.type` is "file". + +type: long + +example: 16384 + + +**`file.target_path`** +: Target path for symlinks. + +type: keyword + + +**`file.target_path.text`** +: type: match_only_text + + +**`file.type`** +: File type (file, dir, or symlink). + +type: keyword + +example: file + + +**`file.uid`** +: The user ID (UID) or security identifier (SID) of the file owner. + +type: keyword + +example: 1001 + + +**`file.x509.alternative_names`** +: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. + +type: keyword + +example: *.elastic.co + + +**`file.x509.issuer.common_name`** +: List of common names (CN) of issuing certificate authority. + +type: keyword + +example: Example SHA2 High Assurance Server CA + + +**`file.x509.issuer.country`** +: List of country (C) codes. + +type: keyword + +example: US + + +**`file.x509.issuer.distinguished_name`** +: Distinguished name (DN) of issuing certificate authority. + +type: keyword + +example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA + + +**`file.x509.issuer.locality`** +: List of locality names (L). + +type: keyword + +example: Mountain View + + +**`file.x509.issuer.organization`** +: List of organizations (O) of issuing certificate authority. + +type: keyword + +example: Example Inc + + +**`file.x509.issuer.organizational_unit`** +: List of organizational units (OU) of issuing certificate authority. + +type: keyword + +example: www.example.com + + +**`file.x509.issuer.state_or_province`** +: List of state or province names (ST, S, or P). + +type: keyword + +example: California + + +**`file.x509.not_after`** +: Time at which the certificate is no longer considered valid. + +type: date + +example: 2020-07-16 03:15:39+00:00 + + +**`file.x509.not_before`** +: Time at which the certificate is first considered valid. + +type: date + +example: 2019-08-16 01:40:25+00:00 + + +**`file.x509.public_key_algorithm`** +: Algorithm used to generate the public key. + +type: keyword + +example: RSA + + +**`file.x509.public_key_curve`** +: The curve used by the elliptic curve public key algorithm. This is algorithm specific. + +type: keyword + +example: nistp521 + + +**`file.x509.public_key_exponent`** +: Exponent used to derive the public key. This is algorithm specific. + +type: long + +example: 65537 + +Field is not indexed. + + +**`file.x509.public_key_size`** +: The size of the public key space in bits. + +type: long + +example: 2048 + + +**`file.x509.serial_number`** +: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. + +type: keyword + +example: 55FBB9C7DEBF09809D12CCAA + + +**`file.x509.signature_algorithm`** +: Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353). + +type: keyword + +example: SHA256-RSA + + +**`file.x509.subject.common_name`** +: List of common names (CN) of subject.
+ +type: keyword + +example: shared.global.example.net + + +**`file.x509.subject.country`** +: List of country (C) codes. + +type: keyword + +example: US + + +**`file.x509.subject.distinguished_name`** +: Distinguished name (DN) of the certificate subject entity. + +type: keyword + +example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net + + +**`file.x509.subject.locality`** +: List of locality names (L). + +type: keyword + +example: San Francisco + + +**`file.x509.subject.organization`** +: List of organizations (O) of subject. + +type: keyword + +example: Example, Inc. + + +**`file.x509.subject.organizational_unit`** +: List of organizational units (OU) of subject. + +type: keyword + + +**`file.x509.subject.state_or_province`** +: List of state or province names (ST, S, or P). + +type: keyword + +example: California + + +**`file.x509.version_number`** +: Version of x509 format. + +type: keyword + +example: 3 + + + +## geo [_geo] + +Geo fields can carry data about a specific location related to an event. This geolocation information can be derived from techniques such as Geo IP, or be user-supplied. + +**`geo.city_name`** +: City name. + +type: keyword + +example: Montreal + + +**`geo.continent_code`** +: Two-letter code representing continent’s name. + +type: keyword + +example: NA + + +**`geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`geo.location`** +: Longitude and latitude. + +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation. + +type: keyword + +example: boston-dc + + +**`geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`geo.region_name`** +: Region name. + +type: keyword + +example: Quebec + + +**`geo.timezone`** +: The time zone of the location, such as an IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + + +## group [_group_3] + +The group fields are meant to represent groups that are relevant to the event. + +**`group.domain`** +: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`group.name`** +: Name of the group. + +type: keyword + + + +## hash [_hash] + +The hash fields represent different bitwise hash algorithms and their values. Field names for common hashes (e.g. MD5, SHA1) are predefined. Add fields for other hashes by lowercasing the hash algorithm name and using underscore separators as appropriate (snake case, e.g. sha3_512). Note that this fieldset is used for common hashes that may be computed over a range of generic bytes. Entity-specific hashes such as ja3 or imphash are placed in the fieldsets to which they relate (tls and pe, respectively). + +**`hash.md5`** +: MD5 hash.
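The predefined hash field names map directly onto standard digest algorithms. A minimal sketch that populates them for a file with Python's standard `hashlib`; `ssdeep` is a fuzzy hash that needs a third-party library and is omitted here:

```python
import hashlib

def hash_fields(path: str) -> dict:
    """Populate the predefined hash.* field names for a file
    (ssdeep omitted: it requires a third-party library)."""
    digests = {name: hashlib.new(name)
               for name in ("md5", "sha1", "sha256", "sha512")}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream, don't slurp
            for d in digests.values():
                d.update(chunk)
    return {name: d.hexdigest() for name, d in digests.items()}

# e.g. {"md5": "...", "sha1": "...", "sha256": "...", "sha512": "..."}
```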
+ +type: keyword + + +**`hash.sha1`** +: SHA1 hash. + +type: keyword + + +**`hash.sha256`** +: SHA256 hash. + +type: keyword + + +**`hash.sha512`** +: SHA512 hash. + +type: keyword + + +**`hash.ssdeep`** +: SSDEEP hash. + +type: keyword + + + +## host [_host] + +A host is defined as a general computing instance. ECS host.* fields should be populated with details about the host on which the event happened, or from which the measurement was taken. Host types include hardware, virtual machines, Docker containers, and Kubernetes nodes. + +**`host.architecture`** +: Operating system architecture. + +type: keyword + +example: x86_64 + + +**`host.cpu.usage`** +: Percent CPU used, normalized by the number of CPU cores so that it ranges from 0 to 1. Scaling factor: 1000. For example, for a two-core host this value should be the average of the two cores, between 0 and 1. + +type: scaled_float + + +**`host.disk.read.bytes`** +: The total number of bytes (gauge) read successfully (aggregated from all disks) since the last metric collection. + +type: long + + +**`host.disk.write.bytes`** +: The total number of bytes (gauge) written successfully (aggregated from all disks) since the last metric collection. + +type: long + + +**`host.domain`** +: Name of the domain of which the host is a member. For example, on Windows this could be the host’s Active Directory domain or NetBIOS domain name. For Linux this could be the domain of the host’s LDAP provider. + +type: keyword + +example: CONTOSO + + +**`host.geo.city_name`** +: City name. + +type: keyword + +example: Montreal + + +**`host.geo.continent_code`** +: Two-letter code representing continent’s name. + +type: keyword + +example: NA + + +**`host.geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`host.geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`host.geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`host.geo.location`** +: Longitude and latitude. + +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`host.geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation. + +type: keyword + +example: boston-dc + + +**`host.geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`host.geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`host.geo.region_name`** +: Region name. + +type: keyword + +example: Quebec + + +**`host.geo.timezone`** +: The time zone of the location, such as an IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + +**`host.hostname`** +: Hostname of the host. It normally contains what the `hostname` command returns on the host machine. + +type: keyword + + +**`host.id`** +: Unique host id. As hostname is not always unique, use values that are meaningful in your environment. Example: The current usage of `beat.name`. + +type: keyword + + +**`host.ip`** +: Host IP addresses. + +type: ip + + +**`host.mac`** +: Host MAC addresses.
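A small, hedged helper for the RFC 7042 notation described next (two uppercase hexadecimal digits per octet, octets separated by hyphens); the function name and input are illustrative:

```python
import re

def normalize_mac(mac: str) -> str:
    """Normalize a MAC address to the suggested RFC 7042 notation:
    uppercase hex pairs separated by hyphens."""
    digits = re.sub(r"[^0-9A-Fa-f]", "", mac).upper()  # drop :, -, . separators
    if len(digits) != 12:
        raise ValueError(f"not a 48-bit MAC: {mac!r}")
    return "-".join(digits[i:i + 2] for i in range(0, 12, 2))

print(normalize_mac("00:00:5e:00:53:23"))  # 00-00-5E-00-53-23
```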
The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. + +type: keyword + +example: ["00-00-5E-00-53-23", "00-00-5E-00-53-24"] + + +**`host.name`** +: Name of the host. It can contain what `hostname` returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use. + +type: keyword + + +**`host.network.egress.bytes`** +: The number of bytes (gauge) sent out on all network interfaces by the host since the last metric collection. + +type: long + + +**`host.network.egress.packets`** +: The number of packets (gauge) sent out on all network interfaces by the host since the last metric collection. + +type: long + + +**`host.network.ingress.bytes`** +: The number of bytes received (gauge) on all network interfaces by the host since the last metric collection. + +type: long + + +**`host.network.ingress.packets`** +: The number of packets (gauge) received on all network interfaces by the host since the last metric collection. + +type: long + + +**`host.os.family`** +: OS family (such as redhat, debian, freebsd, windows). + +type: keyword + +example: debian + + +**`host.os.full`** +: Operating system name, including the version or code name. + +type: keyword + +example: Mac OS Mojave + + +**`host.os.full.text`** +: type: match_only_text + + +**`host.os.kernel`** +: Operating system kernel version as a raw string. + +type: keyword + +example: 4.4.0-112-generic + + +**`host.os.name`** +: Operating system name, without the version. + +type: keyword + +example: Mac OS X + + +**`host.os.name.text`** +: type: match_only_text + + +**`host.os.platform`** +: Operating system platform (such as centos, ubuntu, windows). + +type: keyword + +example: darwin + + +**`host.os.type`** +: Use the `os.type` field to categorize the operating system into one of the broad commercial families. One of the following values should be used (lowercase): linux, macos, unix, windows. If the OS you’re dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. + +type: keyword + +example: macos + + +**`host.os.version`** +: Operating system version as a raw string. + +type: keyword + +example: 10.14.1 + + +**`host.type`** +: Type of host. For Cloud providers this can be the machine type like `t2.medium`. If vm, this could be the container, for example, or other information meaningful in your environment. + +type: keyword + + +**`host.uptime`** +: Seconds the host has been up. + +type: long + +example: 1325 + + + +## http [_http] + +Fields related to HTTP activity. Use the `url` field set to store the url of the request. + +**`http.request.body.bytes`** +: Size in bytes of the request body. + +type: long + +example: 887 + +format: bytes + + +**`http.request.body.content`** +: The full HTTP request body. + +type: wildcard + +example: Hello world + + +**`http.request.body.content.text`** +: type: match_only_text + + +**`http.request.bytes`** +: Total size in bytes of the request (body and headers). + +type: long + +example: 1437 + +format: bytes + + +**`http.request.id`** +: A unique identifier for each HTTP request to correlate logs between clients and servers in transactions. The id may be contained in a non-standard HTTP header, such as `X-Request-ID` or `X-Correlation-ID`.
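For illustration, a hedged sketch of how the `http.*` fields in this section (the response fields are defined just below) might describe one transaction; values are invented or echo the documented examples:

```python
# Illustrative HTTP transaction using http.* fields from this section.
event = {
    "http": {
        "version": "1.1",
        "request": {
            "method": "POST",
            "id": "123e4567-e89b-12d3-a456-426614174000",  # e.g. from X-Request-ID
            "referrer": "https://blog.example.com/",
            "bytes": 1437,               # body and headers
            "body": {"bytes": 887},
            "mime_type": "image/gif",    # sniffed from the body, not the header
        },
        "response": {
            "status_code": 404,
            "bytes": 1437,
            "body": {"bytes": 887},
        },
    },
}
```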
+ +type: keyword + +example: 123e4567-e89b-12d3-a456-426614174000 + + +**`http.request.method`** +: HTTP request method. The value should retain its casing from the original event. For example, `GET`, `get`, and `GeT` are all considered valid values for this field. + +type: keyword + +example: POST + + +**`http.request.mime_type`** +: Mime type of the body of the request. This value must only be populated based on the content of the request body, not on the `Content-Type` header. Comparing the mime type of a request with the request’s Content-Type header can be helpful in detecting threats or misconfigured clients. + +type: keyword + +example: image/gif + + +**`http.request.referrer`** +: Referrer for this HTTP request. + +type: keyword + +example: [https://blog.example.com/](https://blog.example.com/) + + +**`http.response.body.bytes`** +: Size in bytes of the response body. + +type: long + +example: 887 + +format: bytes + + +**`http.response.body.content`** +: The full HTTP response body. + +type: wildcard + +example: Hello world + + +**`http.response.body.content.text`** +: type: match_only_text + + +**`http.response.bytes`** +: Total size in bytes of the response (body and headers). + +type: long + +example: 1437 + +format: bytes + + +**`http.response.mime_type`** +: Mime type of the body of the response. This value must only be populated based on the content of the response body, not on the `Content-Type` header. Comparing the mime type of a response with the response’s Content-Type header can be helpful in detecting misconfigured servers. + +type: keyword + +example: image/gif + + +**`http.response.status_code`** +: HTTP response status code. + +type: long + +example: 404 + +format: string + + +**`http.version`** +: HTTP version. + +type: keyword + +example: 1.1 + + + +## interface [_interface] + +The interface fields are used to record ingress and egress interface information when reported by an observer (e.g. firewall, router, load balancer) in the context of the observer handling a network connection. In the case of a single observer interface (e.g. network sensor on a span port) only the observer.ingress information should be populated. + +**`interface.alias`** +: Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming. + +type: keyword + +example: outside + + +**`interface.id`** +: Interface ID as reported by an observer (typically SNMP interface ID). + +type: keyword + +example: 10 + + +**`interface.name`** +: Interface name as reported by the system. + +type: keyword + +example: eth0 + + + +## log [_log] + +Details about the event’s logging mechanism or logging transport. The log.* fields are typically populated with details about the logging mechanism used to create and/or transport the event. For example, syslog details belong under `log.syslog.*`. The details specific to your event source are typically not logged under `log.*`, but rather in `event.*` or in other ECS fields. + +**`log.file.path`** +: Full path to the log file this event came from, including the file name. It should include the drive letter, when appropriate. If the event wasn’t read from a log file, do not populate this field. + +type: keyword + +example: /var/log/fun-times.log + + +**`log.level`** +: Original log level of the log event. If the source of the event provides a log level or textual severity, this is the one that goes in `log.level`. 
If your source doesn’t specify one, you may put your event transport’s severity here (e.g. Syslog severity). Some examples are `warn`, `err`, `i`, `informational`. + +type: keyword + +example: error + + +**`log.logger`** +: The name of the logger inside an application. This is usually the name of the class which initialized the logger, or can be a custom name. + +type: keyword + +example: org.elasticsearch.bootstrap.Bootstrap + + +**`log.origin.file.line`** +: The line number of the file containing the source code which originated the log event. + +type: long + +example: 42 + + +**`log.origin.file.name`** +: The name of the file containing the source code which originated the log event. Note that this field is not meant to capture the log file. The correct field to capture the log file is `log.file.path`. + +type: keyword + +example: Bootstrap.java + + +**`log.origin.function`** +: The name of the function or method which originated the log event. + +type: keyword + +example: init + + +**`log.syslog`** +: The Syslog metadata of the event, if the event was transmitted via Syslog. Please see RFCs 5424 or 3164. + +type: object + + +**`log.syslog.facility.code`** +: The Syslog numeric facility of the log event, if available. According to RFCs 5424 and 3164, this value should be an integer between 0 and 23. + +type: long + +example: 23 + +format: string + + +**`log.syslog.facility.name`** +: The Syslog text-based facility of the log event, if available. + +type: keyword + +example: local7 + + +**`log.syslog.priority`** +: Syslog numeric priority of the event, if available. According to RFCs 5424 and 3164, the priority is 8 * facility + severity. This number is therefore expected to contain a value between 0 and 191. + +type: long + +example: 135 + +format: string + + +**`log.syslog.severity.code`** +: The Syslog numeric severity of the log event, if available. If the event source publishing via Syslog provides a different numeric severity value (e.g. firewall, IDS), your source’s numeric severity should go to `event.severity`. If the event source does not specify a distinct severity, you can optionally copy the Syslog severity to `event.severity`. + +type: long + +example: 3 + + +**`log.syslog.severity.name`** +: The Syslog text-based severity of the log event, if available. If the event source publishing via Syslog provides a different severity value (e.g. firewall, IDS), your source’s text severity should go to `log.level`. If the event source does not specify a distinct severity, you can optionally copy the Syslog severity to `log.level`. + +type: keyword + +example: Error + + + +## network [_network] + +The network is defined as the communication path over which a host or network event happens. The network.* fields should be populated with details about the network activity associated with an event. + +**`network.application`** +: When a specific application or service is identified from network connection details (source/dest IPs, ports, certificates, or wire format), this field captures the application’s or service’s name. For example, the original event identifies the network connection being from a specific web service in a `https` network connection, like `facebook` or `twitter`. The field value must be normalized to lowercase for querying. + +type: keyword + +example: aim + + +**`network.bytes`** +: Total bytes transferred in both directions. If `source.bytes` and `destination.bytes` are known, `network.bytes` is their sum.
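In other words, when both per-direction counters are known, the totals are plain sums (the same rule applies to `network.packets`, defined below). A one-line sketch with invented counters that add up to the documented examples:

```python
# network.bytes / network.packets as sums of the per-direction counters,
# when source.* and destination.* are both known. Counter values are invented.
source = {"bytes": 205, "packets": 10}
destination = {"bytes": 163, "packets": 14}

network = {
    "bytes": source["bytes"] + destination["bytes"],        # 368
    "packets": source["packets"] + destination["packets"],  # 24
}
```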
+ +type: long + +example: 368 + +format: bytes + + +**`network.community_id`** +: A hash of source and destination IPs and ports, as well as the protocol used in a communication. This is a tool-agnostic standard to identify flows. Learn more at [https://github.com/corelight/community-id-spec](https://github.com/corelight/community-id-spec). + +type: keyword + +example: 1:hO+sN4H+MG5MY/8hIrXPqc4ZQz0= + + +**`network.direction`** +: Direction of the network traffic. Recommended values are: + +* ingress +* egress +* inbound +* outbound +* internal +* external +* unknown + +When mapping events from a host-based monitoring context, populate this field from the host’s point of view, using the values "ingress" or "egress". When mapping events from a network or perimeter-based monitoring context, populate this field from the point of view of the network perimeter, using the values "inbound", "outbound", "internal" or "external". Note that "internal" is not crossing perimeter boundaries, and is meant to describe communication between two hosts within the perimeter. Note also that "external" is meant to describe traffic between two hosts that are external to the perimeter. This could for example be useful for ISPs or VPN service providers. + +type: keyword + +example: inbound + + +**`network.forwarded_ip`** +: Host IP address when the source IP address is the proxy. + +type: ip + +example: 192.1.1.2 + + +**`network.iana_number`** +: IANA Protocol Number ([https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml](https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml)). Standardized list of protocols. This aligns well with NetFlow and sFlow related logs which use the IANA Protocol Number. + +type: keyword + +example: 6 + + +**`network.inner`** +: Network.inner fields are added in addition to network.vlan fields to describe the innermost VLAN when q-in-q VLAN tagging is present. Allowed fields include vlan.id and vlan.name. Inner vlan fields are typically used when sending traffic with multiple 802.1q encapsulations to a network sensor (e.g. Zeek, Wireshark). + +type: object + + +**`network.inner.vlan.id`** +: VLAN ID as reported by the observer. + +type: keyword + +example: 10 + + +**`network.inner.vlan.name`** +: Optional VLAN name as reported by the observer. + +type: keyword + +example: outside + + +**`network.name`** +: Name given by operators to sections of their network. + +type: keyword + +example: Guest Wifi + + +**`network.packets`** +: Total packets transferred in both directions. If `source.packets` and `destination.packets` are known, `network.packets` is their sum. + +type: long + +example: 24 + + +**`network.protocol`** +: In the OSI Model this would be the Application Layer protocol. For example, `http`, `dns`, or `ssh`. The field value must be normalized to lowercase for querying. + +type: keyword + +example: http + + +**`network.transport`** +: Same as network.iana_number, but instead using the Keyword name of the transport layer (udp, tcp, ipv6-icmp, etc.). The field value must be normalized to lowercase for querying. + +type: keyword + +example: tcp + + +**`network.type`** +: In the OSI Model this would be the Network Layer: ipv4, ipv6, ipsec, pim, etc. The field value must be normalized to lowercase for querying. + +type: keyword + +example: ipv4 + + +**`network.vlan.id`** +: VLAN ID as reported by the observer. + +type: keyword + +example: 10 + + +**`network.vlan.name`** +: Optional VLAN name as reported by the observer.
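Returning to `network.direction` above: in a host-based monitoring context, the choice reduces to whether the monitored host is the source or the destination of the traffic. A hedged sketch; the helper name and logic are illustrative, not a prescribed algorithm:

```python
def host_direction(local_ip: str, source_ip: str, destination_ip: str) -> str:
    """Hedged helper for network.direction from a host's point of view:
    traffic leaving the monitored host is egress, traffic arriving is ingress."""
    if source_ip == local_ip:
        return "egress"
    if destination_ip == local_ip:
        return "ingress"
    return "unknown"

print(host_direction("10.0.0.5", "10.0.0.5", "192.0.2.7"))  # egress
```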
+ +type: keyword + +example: outside + + + +## observer [_observer] + +An observer is defined as a special network, security, or application device used to detect, observe, or create network, security, or application-related events and metrics. This could be a custom hardware appliance or a server that has been configured to run special network, security, or application software. Examples include firewalls, web proxies, intrusion detection/prevention systems, network monitoring sensors, web application firewalls, data loss prevention systems, and APM servers. The observer.* fields shall be populated with details of the system, if any, that detects, observes and/or creates a network, security, or application event or metric. Message queues and ETL components used in processing events or metrics are not considered observers in ECS. + +**`observer.egress`** +: Observer.egress holds information like interface number and name, vlan, and zone information to classify egress traffic. Single armed monitoring such as a network sensor on a span port should only use observer.ingress to categorize traffic. + +type: object + + +**`observer.egress.interface.alias`** +: Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming. + +type: keyword + +example: outside + + +**`observer.egress.interface.id`** +: Interface ID as reported by an observer (typically SNMP interface ID). + +type: keyword + +example: 10 + + +**`observer.egress.interface.name`** +: Interface name as reported by the system. + +type: keyword + +example: eth0 + + +**`observer.egress.vlan.id`** +: VLAN ID as reported by the observer. + +type: keyword + +example: 10 + + +**`observer.egress.vlan.name`** +: Optional VLAN name as reported by the observer. + +type: keyword + +example: outside + + +**`observer.egress.zone`** +: Network zone of outbound traffic as reported by the observer to categorize the destination area of egress traffic, e.g. Internal, External, DMZ, HR, Legal, etc. + +type: keyword + +example: Public_Internet + + +**`observer.geo.city_name`** +: City name. + +type: keyword + +example: Montreal + + +**`observer.geo.continent_code`** +: Two-letter code representing continent’s name. + +type: keyword + +example: NA + + +**`observer.geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`observer.geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`observer.geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`observer.geo.location`** +: Longitude and latitude. + +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`observer.geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation. + +type: keyword + +example: boston-dc + + +**`observer.geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`observer.geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`observer.geo.region_name`** +: Region name. 
+ +type: keyword + +example: Quebec + + +**`observer.geo.timezone`** +: The time zone of the location, such as IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + +**`observer.hostname`** +: Hostname of the observer. + +type: keyword + + +**`observer.ingress`** +: Observer.ingress holds information like interface number and name, vlan, and zone information to classify ingress traffic. Single armed monitoring such as a network sensor on a span port should only use observer.ingress to categorize traffic. + +type: object + + +**`observer.ingress.interface.alias`** +: Interface alias as reported by the system, typically used in firewall implementations for logical interface naming, e.g. inside, outside, or dmz. + +type: keyword + +example: outside + + +**`observer.ingress.interface.id`** +: Interface ID as reported by an observer (typically SNMP interface ID). + +type: keyword + +example: 10 + + +**`observer.ingress.interface.name`** +: Interface name as reported by the system. + +type: keyword + +example: eth0 + + +**`observer.ingress.vlan.id`** +: VLAN ID as reported by the observer. + +type: keyword + +example: 10 + + +**`observer.ingress.vlan.name`** +: Optional VLAN name as reported by the observer. + +type: keyword + +example: outside + + +**`observer.ingress.zone`** +: Network zone of incoming traffic as reported by the observer to categorize the source area of ingress traffic, e.g. Internal, External, DMZ, HR, Legal, etc. + +type: keyword + +example: DMZ + + +**`observer.ip`** +: IP addresses of the observer. + +type: ip + + +**`observer.mac`** +: MAC addresses of the observer. The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. + +type: keyword + +example: ["00-00-5E-00-53-23", "00-00-5E-00-53-24"] + + +**`observer.name`** +: Custom name of the observer. This is a name that can be given to an observer. This can be helpful for example if multiple firewalls of the same model are used in an organization. If no custom name is needed, the field can be left empty. + +type: keyword + +example: 1_proxySG + + +**`observer.os.family`** +: OS family (such as redhat, debian, freebsd, windows). + +type: keyword + +example: debian + + +**`observer.os.full`** +: Operating system name, including the version or code name. + +type: keyword + +example: Mac OS Mojave + + +**`observer.os.full.text`** +: type: match_only_text + + +**`observer.os.kernel`** +: Operating system kernel version as a raw string. + +type: keyword + +example: 4.4.0-112-generic + + +**`observer.os.name`** +: Operating system name, without the version. + +type: keyword + +example: Mac OS X + + +**`observer.os.name.text`** +: type: match_only_text + + +**`observer.os.platform`** +: Operating system platform (such as centos, ubuntu, windows). + +type: keyword + +example: darwin + + +**`observer.os.type`** +: Use the `os.type` field to categorize the operating system into one of the broad commercial families. One of the following values should be used (lowercase): linux, macos, unix, windows. If the OS you’re dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. + +type: keyword + +example: macos + + +**`observer.os.version`** +: Operating system version as a raw string.
+ +type: keyword + +example: 10.14.1 + + +**`observer.product`** +: The product name of the observer. + +type: keyword + +example: s200 + + +**`observer.serial_number`** +: Observer serial number. + +type: keyword + + +**`observer.type`** +: The type of the observer the data is coming from. There is no predefined list of observer types. Some examples are `forwarder`, `firewall`, `ids`, `ips`, `proxy`, `poller`, `sensor`, `APM server`. + +type: keyword + +example: firewall + + +**`observer.vendor`** +: Vendor name of the observer. + +type: keyword + +example: Symantec + + +**`observer.version`** +: Observer version. + +type: keyword + + + +## orchestrator [_orchestrator] + +Fields that describe the resources which container orchestrators manage or act upon. + +**`orchestrator.api_version`** +: API version being used to carry out the action. + +type: keyword + +example: v1beta1 + + +**`orchestrator.cluster.name`** +: Name of the cluster. + +type: keyword + + +**`orchestrator.cluster.url`** +: URL of the API used to manage the cluster. + +type: keyword + + +**`orchestrator.cluster.version`** +: The version of the cluster. + +type: keyword + + +**`orchestrator.namespace`** +: Namespace in which the action is taking place. + +type: keyword + +example: kube-system + + +**`orchestrator.organization`** +: Organization affected by the event (for multi-tenant orchestrator setups). + +type: keyword + +example: elastic + + +**`orchestrator.resource.name`** +: Name of the resource being acted upon. + +type: keyword + +example: test-pod-cdcws + + +**`orchestrator.resource.type`** +: Type of resource being acted upon. + +type: keyword + +example: service + + +**`orchestrator.type`** +: Orchestrator cluster type (e.g. kubernetes, nomad or cloudfoundry). + +type: keyword + +example: kubernetes + + + +## organization [_organization] + +The organization fields enrich data with information about the company or entity the data is associated with. These fields help you arrange or filter data stored in an index by one or multiple organizations. + +**`organization.id`** +: Unique identifier for the organization. + +type: keyword + + +**`organization.name`** +: Organization name. + +type: keyword + + +**`organization.name.text`** +: type: match_only_text + + + +## os [_os] + +The OS fields contain information about the operating system. + +**`os.family`** +: OS family (such as redhat, debian, freebsd, windows). + +type: keyword + +example: debian + + +**`os.full`** +: Operating system name, including the version or code name. + +type: keyword + +example: Mac OS Mojave + + +**`os.full.text`** +: type: match_only_text + + +**`os.kernel`** +: Operating system kernel version as a raw string. + +type: keyword + +example: 4.4.0-112-generic + + +**`os.name`** +: Operating system name, without the version. + +type: keyword + +example: Mac OS X + + +**`os.name.text`** +: type: match_only_text + + +**`os.platform`** +: Operating system platform (such as centos, ubuntu, windows). + +type: keyword + +example: darwin + + +**`os.type`** +: Use the `os.type` field to categorize the operating system into one of the broad commercial families. One of the following values should be used (lowercase): linux, macos, unix, windows. If the OS you’re dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. + +type: keyword + +example: macos + + +**`os.version`** +: Operating system version as a raw string. + +type: keyword + +example: 10.14.1
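+
+Since `os.type` must be one of a small set of families while `os.platform` is free-form, a producer typically derives the former from the latter. A minimal Python sketch under that assumption (the mapping table is illustrative, not part of ECS):
+
+```python
+# Hypothetical helper: derive ECS os.type from an os.platform value,
+# following the guidance above (linux, macos, unix, windows; else unset).
+PLATFORM_TO_OS_TYPE = {
+    'centos': 'linux', 'ubuntu': 'linux', 'debian': 'linux',
+    'darwin': 'macos',
+    'freebsd': 'unix', 'solaris': 'unix',
+    'windows': 'windows',
+}
+
+def ecs_os_type(platform: str):
+    # Return None for unknown families: the field should then be
+    # left unpopulated rather than guessed.
+    return PLATFORM_TO_OS_TYPE.get(platform.lower())
+
+assert ecs_os_type('darwin') == 'macos'
+```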
+ + +## package [_package] + +These fields contain information about an installed software package. It contains general information about a package, such as name, version or size. It also contains installation details, such as time or location. + +**`package.architecture`** +: Package architecture. + +type: keyword + +example: x86_64 + + +**`package.build_version`** +: Additional information about the build version of the installed package. For example use the commit SHA of a non-released package. + +type: keyword + +example: 36f4f7e89dd61b0988b12ee000b98966867710cd + + +**`package.checksum`** +: Checksum of the installed package for verification. + +type: keyword + +example: 68b329da9893e34099c7d8ad5cb9c940 + + +**`package.description`** +: Description of the package. + +type: keyword + +example: Open source programming language to build simple/reliable/efficient software. + + +**`package.install_scope`** +: Indicates how the package was installed, e.g. user-local, global. + +type: keyword + +example: global + + +**`package.installed`** +: Time when package was installed. + +type: date + + +**`package.license`** +: License under which the package was released. Use a short name, e.g. the license identifier from SPDX License List where possible ([https://spdx.org/licenses/](https://spdx.org/licenses/)). + +type: keyword + +example: Apache License 2.0 + + +**`package.name`** +: Package name. + +type: keyword + +example: go + + +**`package.path`** +: Path where the package is installed. + +type: keyword + +example: /usr/local/Cellar/go/1.12.9/ + + +**`package.reference`** +: Home page or reference URL of the software in this package, if available. + +type: keyword + +example: [https://golang.org](https://golang.org) + + +**`package.size`** +: Package size in bytes. + +type: long + +example: 62231 + +format: string + + +**`package.type`** +: Type of package. This should contain the package file type, rather than the package manager name. Examples: rpm, dpkg, brew, npm, gem, nupkg, jar. + +type: keyword + +example: rpm + + +**`package.version`** +: Package version. + +type: keyword + +example: 1.12.9 + + + +## pe [_pe] + +These fields contain Windows Portable Executable (PE) metadata. + +**`pe.architecture`** +: CPU architecture target for the file. + +type: keyword + +example: x64 + + +**`pe.company`** +: Internal company name of the file, provided at compile-time. + +type: keyword + +example: Microsoft Corporation + + +**`pe.description`** +: Internal description of the file, provided at compile-time. + +type: keyword + +example: Paint + + +**`pe.file_version`** +: Internal version of the file, provided at compile-time. + +type: keyword + +example: 6.3.9600.17415 + + +**`pe.imphash`** +: A hash of the imports in a PE file. An imphash — or import hash — can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. Learn more at [https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html](https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html). + +type: keyword + +example: 0c6803c4e922103c4dca5963aad36ddf + + +**`pe.original_file_name`** +: Internal name of the file, provided at compile-time. + +type: keyword + +example: MSPAINT.EXE + + +**`pe.product`** +: Internal product name of the file, provided at compile-time. + +type: keyword + +example: Microsoft® Windows® Operating System
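+
+For context on `pe.imphash`, the value is conventionally computed over the normalized import table. A hedged Python sketch using the third-party `pefile` library (the sample path is hypothetical):
+
+```python
+# Sketch: computing a PE imphash with the third-party `pefile`
+# library (pip install pefile). The sample path is hypothetical.
+import pefile
+
+pe = pefile.PE('C:/samples/app.exe')
+# get_imphash() hashes the normalized import table, so recompiled
+# variants with identical imports produce identical values.
+print(pe.get_imphash())  # 32-character hex digest, like the example above
+```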
+ + +## process [_process_2] + +These fields contain information about a process. These fields can help you correlate metrics information with a process id/name from a log message. The `process.pid` often stays in the metric itself and is copied to the global field for correlation. + +**`process.args`** +: Array of process arguments, starting with the absolute path to the executable. May be filtered to protect sensitive information. + +type: keyword + +example: ["/usr/bin/ssh", "-l", "user", "10.0.0.16"] + + +**`process.args_count`** +: Length of the process.args array. This field can be useful for querying or performing bucket analysis on how many arguments were provided to start a process. More arguments may be an indication of suspicious activity. + +type: long + +example: 4 + + +**`process.code_signature.digest_algorithm`** +: The hashing algorithm used to sign the process. This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. + +type: keyword + +example: sha256 + + +**`process.code_signature.exists`** +: Boolean to capture if a signature is present. + +type: boolean + +example: true + + +**`process.code_signature.signing_id`** +: The identifier used to sign the process. This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. + +type: keyword + +example: com.apple.xpc.proxy + + +**`process.code_signature.status`** +: Additional information about the certificate status. This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. + +type: keyword + +example: ERROR_UNTRUSTED_ROOT + + +**`process.code_signature.subject_name`** +: Subject name of the code signer + +type: keyword + +example: Microsoft Corporation + + +**`process.code_signature.team_id`** +: The team identifier used to sign the process. This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. + +type: keyword + +example: EQHXZ8M8AV + + +**`process.code_signature.timestamp`** +: Date and time when the code signature was generated and signed. + +type: date + +example: 2021-01-01T12:10:30Z + + +**`process.code_signature.trusted`** +: Stores the trust status of the certificate chain. Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. + +type: boolean + +example: true + + +**`process.code_signature.valid`** +: Boolean to capture if the digital signature is verified against the binary content. Leave unpopulated if a certificate was unchecked. + +type: boolean + +example: true + + +**`process.command_line`** +: Full command line that started the process, including the absolute path to the executable, and all arguments. Some arguments may be filtered to protect sensitive information. + +type: wildcard + +example: /usr/bin/ssh -l user 10.0.0.16 + + +**`process.command_line.text`** +: type: match_only_text + + +**`process.elf.architecture`** +: Machine architecture of the ELF file. + +type: keyword + +example: x86-64 + + +**`process.elf.byte_order`** +: Byte sequence of ELF file. + +type: keyword + +example: Little Endian + + +**`process.elf.cpu_type`** +: CPU type of the ELF file.
+ +type: keyword + +example: Intel + + +**`process.elf.creation_date`** +: Extracted when possible from the file’s metadata. Indicates when it was built or compiled. It can also be faked by malware creators. + +type: date + + +**`process.elf.exports`** +: List of exported element names and types. + +type: flattened + + +**`process.elf.header.abi_version`** +: Version of the ELF Application Binary Interface (ABI). + +type: keyword + + +**`process.elf.header.class`** +: Header class of the ELF file. + +type: keyword + + +**`process.elf.header.data`** +: Data table of the ELF header. + +type: keyword + + +**`process.elf.header.entrypoint`** +: Header entrypoint of the ELF file. + +type: long + +format: string + + +**`process.elf.header.object_version`** +: "0x1" for original ELF files. + +type: keyword + + +**`process.elf.header.os_abi`** +: Application Binary Interface (ABI) of the Linux OS. + +type: keyword + + +**`process.elf.header.type`** +: Header type of the ELF file. + +type: keyword + + +**`process.elf.header.version`** +: Version of the ELF header. + +type: keyword + + +**`process.elf.imports`** +: List of imported element names and types. + +type: flattened + + +**`process.elf.sections`** +: An array containing an object for each section of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. + +type: nested + + +**`process.elf.sections.chi2`** +: Chi-square probability distribution of the section. + +type: long + +format: number + + +**`process.elf.sections.entropy`** +: Shannon entropy calculation from the section. + +type: long + +format: number + + +**`process.elf.sections.flags`** +: ELF Section List flags. + +type: keyword + + +**`process.elf.sections.name`** +: ELF Section List name. + +type: keyword + + +**`process.elf.sections.physical_offset`** +: ELF Section List offset. + +type: keyword + + +**`process.elf.sections.physical_size`** +: ELF Section List physical size. + +type: long + +format: bytes + + +**`process.elf.sections.type`** +: ELF Section List type. + +type: keyword + + +**`process.elf.sections.virtual_address`** +: ELF Section List virtual address. + +type: long + +format: string + + +**`process.elf.sections.virtual_size`** +: ELF Section List virtual size. + +type: long + +format: string + + +**`process.elf.segments`** +: An array containing an object for each segment of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. + +type: nested + + +**`process.elf.segments.sections`** +: ELF object segment sections. + +type: keyword + + +**`process.elf.segments.type`** +: ELF object segment type. + +type: keyword + + +**`process.elf.shared_libraries`** +: List of shared libraries used by this ELF object. + +type: keyword + + +**`process.elf.telfhash`** +: telfhash symbol hash for ELF file. + +type: keyword + + +**`process.end`** +: The time the process ended. + +type: date + +example: 2016-05-23T08:05:34.853Z + + +**`process.entity_id`** +: Unique identifier for the process. The implementation of this is specified by the data source, but some examples of what could be used here are a process-generated UUID, Sysmon Process GUIDs, or a hash of some uniquely identifying components of a process. Constructing a globally unique identifier is a common practice to mitigate PID reuse as well as to identify a specific process over time, across multiple monitored hosts. 
+ +type: keyword + +example: c2c455d9f99375d + + +**`process.executable`** +: Absolute path to the process executable. + +type: keyword + +example: /usr/bin/ssh + + +**`process.executable.text`** +: type: match_only_text + + +**`process.exit_code`** +: The exit code of the process, if this is a termination event. The field should be absent if there is no exit code for the event (e.g. process start). + +type: long + +example: 137 + + +**`process.hash.md5`** +: MD5 hash. + +type: keyword + + +**`process.hash.sha1`** +: SHA1 hash. + +type: keyword + + +**`process.hash.sha256`** +: SHA256 hash. + +type: keyword + + +**`process.hash.sha512`** +: SHA512 hash. + +type: keyword + + +**`process.hash.ssdeep`** +: SSDEEP hash. + +type: keyword + + +**`process.name`** +: Process name. Sometimes called program name or similar. + +type: keyword + +example: ssh + + +**`process.name.text`** +: type: match_only_text + + +**`process.parent.args`** +: Array of process arguments, starting with the absolute path to the executable. May be filtered to protect sensitive information. + +type: keyword + +example: ["/usr/bin/ssh", "-l", "user", "10.0.0.16"] + + +**`process.parent.args_count`** +: Length of the process.args array. This field can be useful for querying or performing bucket analysis on how many arguments were provided to start a process. More arguments may be an indication of suspicious activity. + +type: long + +example: 4 + + +**`process.parent.code_signature.digest_algorithm`** +: The hashing algorithm used to sign the process. This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. + +type: keyword + +example: sha256 + + +**`process.parent.code_signature.exists`** +: Boolean to capture if a signature is present. + +type: boolean + +example: true + + +**`process.parent.code_signature.signing_id`** +: The identifier used to sign the process. This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. + +type: keyword + +example: com.apple.xpc.proxy + + +**`process.parent.code_signature.status`** +: Additional information about the certificate status. This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. + +type: keyword + +example: ERROR_UNTRUSTED_ROOT + + +**`process.parent.code_signature.subject_name`** +: Subject name of the code signer + +type: keyword + +example: Microsoft Corporation + + +**`process.parent.code_signature.team_id`** +: The team identifier used to sign the process. This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. + +type: keyword + +example: EQHXZ8M8AV + + +**`process.parent.code_signature.timestamp`** +: Date and time when the code signature was generated and signed. + +type: date + +example: 2021-01-01T12:10:30Z + + +**`process.parent.code_signature.trusted`** +: Stores the trust status of the certificate chain. Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. + +type: boolean + +example: true + + +**`process.parent.code_signature.valid`** +: Boolean to capture if the digital signature is verified against the binary content. Leave unpopulated if a certificate was unchecked. 
+ +type: boolean + +example: true + + +**`process.parent.command_line`** +: Full command line that started the process, including the absolute path to the executable, and all arguments. Some arguments may be filtered to protect sensitive information. + +type: wildcard + +example: /usr/bin/ssh -l user 10.0.0.16 + + +**`process.parent.command_line.text`** +: type: match_only_text + + +**`process.parent.elf.architecture`** +: Machine architecture of the ELF file. + +type: keyword + +example: x86-64 + + +**`process.parent.elf.byte_order`** +: Byte sequence of ELF file. + +type: keyword + +example: Little Endian + + +**`process.parent.elf.cpu_type`** +: CPU type of the ELF file. + +type: keyword + +example: Intel + + +**`process.parent.elf.creation_date`** +: Extracted when possible from the file’s metadata. Indicates when it was built or compiled. It can also be faked by malware creators. + +type: date + + +**`process.parent.elf.exports`** +: List of exported element names and types. + +type: flattened + + +**`process.parent.elf.header.abi_version`** +: Version of the ELF Application Binary Interface (ABI). + +type: keyword + + +**`process.parent.elf.header.class`** +: Header class of the ELF file. + +type: keyword + + +**`process.parent.elf.header.data`** +: Data table of the ELF header. + +type: keyword + + +**`process.parent.elf.header.entrypoint`** +: Header entrypoint of the ELF file. + +type: long + +format: string + + +**`process.parent.elf.header.object_version`** +: "0x1" for original ELF files. + +type: keyword + + +**`process.parent.elf.header.os_abi`** +: Application Binary Interface (ABI) of the Linux OS. + +type: keyword + + +**`process.parent.elf.header.type`** +: Header type of the ELF file. + +type: keyword + + +**`process.parent.elf.header.version`** +: Version of the ELF header. + +type: keyword + + +**`process.parent.elf.imports`** +: List of imported element names and types. + +type: flattened + + +**`process.parent.elf.sections`** +: An array containing an object for each section of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. + +type: nested + + +**`process.parent.elf.sections.chi2`** +: Chi-square probability distribution of the section. + +type: long + +format: number + + +**`process.parent.elf.sections.entropy`** +: Shannon entropy calculation from the section. + +type: long + +format: number + + +**`process.parent.elf.sections.flags`** +: ELF Section List flags. + +type: keyword + + +**`process.parent.elf.sections.name`** +: ELF Section List name. + +type: keyword + + +**`process.parent.elf.sections.physical_offset`** +: ELF Section List offset. + +type: keyword + + +**`process.parent.elf.sections.physical_size`** +: ELF Section List physical size. + +type: long + +format: bytes + + +**`process.parent.elf.sections.type`** +: ELF Section List type. + +type: keyword + + +**`process.parent.elf.sections.virtual_address`** +: ELF Section List virtual address. + +type: long + +format: string + + +**`process.parent.elf.sections.virtual_size`** +: ELF Section List virtual size. + +type: long + +format: string + + +**`process.parent.elf.segments`** +: An array containing an object for each segment of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. + +type: nested + + +**`process.parent.elf.segments.sections`** +: ELF object segment sections. + +type: keyword + + +**`process.parent.elf.segments.type`** +: ELF object segment type. 
+ +type: keyword + + +**`process.parent.elf.shared_libraries`** +: List of shared libraries used by this ELF object. + +type: keyword + + +**`process.parent.elf.telfhash`** +: telfhash symbol hash for ELF file. + +type: keyword + + +**`process.parent.end`** +: The time the process ended. + +type: date + +example: 2016-05-23T08:05:34.853Z + + +**`process.parent.entity_id`** +: Unique identifier for the process. The implementation of this is specified by the data source, but some examples of what could be used here are a process-generated UUID, Sysmon Process GUIDs, or a hash of some uniquely identifying components of a process. Constructing a globally unique identifier is a common practice to mitigate PID reuse as well as to identify a specific process over time, across multiple monitored hosts. + +type: keyword + +example: c2c455d9f99375d + + +**`process.parent.executable`** +: Absolute path to the process executable. + +type: keyword + +example: /usr/bin/ssh + + +**`process.parent.executable.text`** +: type: match_only_text + + +**`process.parent.exit_code`** +: The exit code of the process, if this is a termination event. The field should be absent if there is no exit code for the event (e.g. process start). + +type: long + +example: 137 + + +**`process.parent.hash.md5`** +: MD5 hash. + +type: keyword + + +**`process.parent.hash.sha1`** +: SHA1 hash. + +type: keyword + + +**`process.parent.hash.sha256`** +: SHA256 hash. + +type: keyword + + +**`process.parent.hash.sha512`** +: SHA512 hash. + +type: keyword + + +**`process.parent.hash.ssdeep`** +: SSDEEP hash. + +type: keyword + + +**`process.parent.name`** +: Process name. Sometimes called program name or similar. + +type: keyword + +example: ssh + + +**`process.parent.name.text`** +: type: match_only_text + + +**`process.parent.pe.architecture`** +: CPU architecture target for the file. + +type: keyword + +example: x64 + + +**`process.parent.pe.company`** +: Internal company name of the file, provided at compile-time. + +type: keyword + +example: Microsoft Corporation + + +**`process.parent.pe.description`** +: Internal description of the file, provided at compile-time. + +type: keyword + +example: Paint + + +**`process.parent.pe.file_version`** +: Internal version of the file, provided at compile-time. + +type: keyword + +example: 6.3.9600.17415 + + +**`process.parent.pe.imphash`** +: A hash of the imports in a PE file. An imphash — or import hash — can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. Learn more at [https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html](https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html). + +type: keyword + +example: 0c6803c4e922103c4dca5963aad36ddf + + +**`process.parent.pe.original_file_name`** +: Internal name of the file, provided at compile-time. + +type: keyword + +example: MSPAINT.EXE + + +**`process.parent.pe.product`** +: Internal product name of the file, provided at compile-time. + +type: keyword + +example: Microsoft® Windows® Operating System + + +**`process.parent.pgid`** +: Identifier of the group of processes the process belongs to. + +type: long + +format: string + + +**`process.parent.pid`** +: Process id. + +type: long + +example: 4242 + +format: string + + +**`process.parent.start`** +: The time the process started. 
+ +type: date + +example: 2016-05-23T08:05:34.853Z + + +**`process.parent.thread.id`** +: Thread ID. + +type: long + +example: 4242 + +format: string + + +**`process.parent.thread.name`** +: Thread name. + +type: keyword + +example: thread-0 + + +**`process.parent.title`** +: Process title. The proctitle, sometimes the same as the process name. Can also be different: for example a browser setting its title to the web page currently opened. + +type: keyword + + +**`process.parent.title.text`** +: type: match_only_text + + +**`process.parent.uptime`** +: Seconds the process has been up. + +type: long + +example: 1325 + + +**`process.parent.working_directory`** +: The working directory of the process. + +type: keyword + +example: /home/alice + + +**`process.parent.working_directory.text`** +: type: match_only_text + + +**`process.pe.architecture`** +: CPU architecture target for the file. + +type: keyword + +example: x64 + + +**`process.pe.company`** +: Internal company name of the file, provided at compile-time. + +type: keyword + +example: Microsoft Corporation + + +**`process.pe.description`** +: Internal description of the file, provided at compile-time. + +type: keyword + +example: Paint + + +**`process.pe.file_version`** +: Internal version of the file, provided at compile-time. + +type: keyword + +example: 6.3.9600.17415 + + +**`process.pe.imphash`** +: A hash of the imports in a PE file. An imphash — or import hash — can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. Learn more at [https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html](https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html). + +type: keyword + +example: 0c6803c4e922103c4dca5963aad36ddf + + +**`process.pe.original_file_name`** +: Internal name of the file, provided at compile-time. + +type: keyword + +example: MSPAINT.EXE + + +**`process.pe.product`** +: Internal product name of the file, provided at compile-time. + +type: keyword + +example: Microsoft® Windows® Operating System + + +**`process.pgid`** +: Identifier of the group of processes the process belongs to. + +type: long + +format: string + + +**`process.pid`** +: Process id. + +type: long + +example: 4242 + +format: string + + +**`process.start`** +: The time the process started. + +type: date + +example: 2016-05-23T08:05:34.853Z + + +**`process.thread.id`** +: Thread ID. + +type: long + +example: 4242 + +format: string + + +**`process.thread.name`** +: Thread name. + +type: keyword + +example: thread-0 + + +**`process.title`** +: Process title. The proctitle, sometimes the same as the process name. Can also be different: for example a browser setting its title to the web page currently opened. + +type: keyword + + +**`process.title.text`** +: type: match_only_text + + +**`process.uptime`** +: Seconds the process has been up. + +type: long + +example: 1325 + + +**`process.working_directory`** +: The working directory of the process. + +type: keyword + +example: /home/alice + + +**`process.working_directory.text`** +: type: match_only_text
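+
+The `process.entity_id` description earlier in this section suggests hashing uniquely identifying components of a process to survive PID reuse. One possible construction, sketched in Python with hypothetical inputs (the component choice of host id, PID, and start time is an assumption, not a prescribed format):
+
+```python
+# Sketch: one way to build a process.entity_id as described above, by
+# hashing components that identify a process across PID reuse.
+import hashlib
+
+def process_entity_id(host_id: str, pid: int, start: str) -> str:
+    raw = f'{host_id}|{pid}|{start}'.encode()
+    # A truncated sha256 keeps the id compact while remaining stable
+    # for the same (host, pid, start time) triple.
+    return hashlib.sha256(raw).hexdigest()[:15]
+
+print(process_entity_id('9aa595e', 4242, '2016-05-23T08:05:34.853Z'))
+```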
+ + +## registry [_registry] + +Fields related to Windows Registry operations. + +**`registry.data.bytes`** +: Original bytes written with base64 encoding. For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values. + +type: keyword + +example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA= + + +**`registry.data.strings`** +: Content when writing string types. Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of strings with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g. `"1"`). + +type: wildcard + +example: ["C:\rta\red_ttp\bin\myapp.exe"] + + +**`registry.data.type`** +: Standard registry type for encoding contents. + +type: keyword + +example: REG_SZ + + +**`registry.hive`** +: Abbreviated name for the hive. + +type: keyword + +example: HKLM + + +**`registry.key`** +: Hive-relative path of keys. + +type: keyword + +example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe + + +**`registry.path`** +: Full path, including hive, key and value. + +type: keyword + +example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger + + +**`registry.value`** +: Name of the value written. + +type: keyword + +example: Debugger + + + +## related [_related] + +This field set is meant to facilitate pivoting around a piece of data. Some pieces of information can be seen in many places in an ECS event. To facilitate searching for them, store an array of all seen values to their corresponding field in `related.`. A concrete example is IP addresses, which can be under host, observer, source, destination, client, server, and network.forwarded_ip. If you append all IPs to `related.ip`, you can then search for a given IP trivially, no matter where it appeared, by querying `related.ip:192.0.2.15`. + +**`related.hash`** +: All the hashes seen on your event. Populating this field, then using it to search for hashes can help in situations where you’re unsure what the hash algorithm is (and therefore which key name to search). + +type: keyword + + +**`related.hosts`** +: All hostnames or other host identifiers seen on your event. Example identifiers include FQDNs, domain names, workstation names, or aliases. + +type: keyword + + +**`related.ip`** +: All of the IPs seen on your event. + +type: ip + + +**`related.user`** +: All the user names or other user identifiers seen on the event. + +type: keyword
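+
+To show the pivoting idea described above, here is a small Python sketch of an enrichment step that gathers every IP seen in an event into `related.ip`. The event shape is a hypothetical example, and each `.ip` is treated as a single value for brevity (in ECS some of these fields can be arrays):
+
+```python
+# Sketch: collecting every IP seen in an event into related.ip.
+event = {
+    'source': {'ip': '192.0.2.15'},
+    'destination': {'ip': '198.51.100.7'},
+    'network': {'forwarded_ip': '203.0.113.9'},
+}
+
+ips = set()
+for field in ('source', 'destination', 'client', 'server', 'host', 'observer'):
+    ip = event.get(field, {}).get('ip')
+    if ip:
+        ips.add(ip)
+forwarded = event.get('network', {}).get('forwarded_ip')
+if forwarded:
+    ips.add(forwarded)
+
+event.setdefault('related', {})['ip'] = sorted(ips)
+# A query like `related.ip:192.0.2.15` now matches no matter where
+# the address originally appeared in the event.
+```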
+ + +## rule [_rule] + +Rule fields are used to capture the specifics of any observer or agent rules that generate alerts or other notable events. Examples of data sources that would populate the rule fields include: network admission control platforms, network or host IDS/IPS, network firewalls, web application firewalls, URL filters, endpoint detection and response (EDR) systems, etc. + +**`rule.author`** +: Name, organization, or pseudonym of the author or authors who created the rule used to generate this event. + +type: keyword + +example: ["Star-Lord"] + + +**`rule.category`** +: A categorization value keyword used by the entity using the rule for detection of this event. + +type: keyword + +example: Attempted Information Leak + + +**`rule.description`** +: The description of the rule generating the event. + +type: keyword + +example: Block requests to public DNS over HTTPS / TLS protocols + + +**`rule.id`** +: A rule ID that is unique within the scope of an agent, observer, or other entity using the rule for detection of this event. + +type: keyword + +example: 101 + + +**`rule.license`** +: Name of the license under which the rule used to generate this event is made available. + +type: keyword + +example: Apache 2.0 + + +**`rule.name`** +: The name of the rule or signature generating the event. + +type: keyword + +example: BLOCK_DNS_over_TLS + + +**`rule.reference`** +: Reference URL to additional information about the rule used to generate this event. The URL can point to the vendor’s documentation about the rule. If that’s not available, it can also be a link to a more general page describing this type of alert. + +type: keyword + +example: [https://en.wikipedia.org/wiki/DNS_over_TLS](https://en.wikipedia.org/wiki/DNS_over_TLS) + + +**`rule.ruleset`** +: Name of the ruleset, policy, group, or parent category in which the rule used to generate this event is a member. + +type: keyword + +example: Standard_Protocol_Filters + + +**`rule.uuid`** +: A rule ID that is unique within the scope of a set or group of agents, observers, or other entities using the rule for detection of this event. + +type: keyword + +example: 1100110011 + + +**`rule.version`** +: The version / revision of the rule being used for analysis. + +type: keyword + +example: 1.1 + + + +## server [_server] + +A Server is defined as the responder in a network connection for events regarding sessions, connections, or bidirectional flow records. For TCP events, the server is the receiver of the initial SYN packet(s) of the TCP connection. For other protocols, the server is generally the responder in the network transaction. Some systems actually use the term "responder" to refer to the server in TCP connections. The server fields describe details about the system acting as the server in the network event. Server fields are usually populated in conjunction with client fields. Server fields are generally not populated for packet-level events. Client / server representations can add semantic context to an exchange, which is helpful to visualize the data in certain situations. If your context falls in that category, you should still ensure that source and destination are filled appropriately. + +**`server.address`** +: Some event server addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. + +type: keyword + + +**`server.as.number`** +: Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. + +type: long + +example: 15169 + + +**`server.as.organization.name`** +: Organization name. + +type: keyword + +example: Google LLC + + +**`server.as.organization.name.text`** +: type: match_only_text + + +**`server.bytes`** +: Bytes sent from the server to the client. + +type: long + +example: 184 + +format: bytes + + +**`server.domain`** +: The domain name of the server system. This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment. + +type: keyword + +example: foo.example.com + + +**`server.geo.city_name`** +: City name.
+ +type: keyword + +example: Montreal + + +**`server.geo.continent_code`** +: Two-letter code representing continent’s name. + +type: keyword + +example: NA + + +**`server.geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`server.geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`server.geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`server.geo.location`** +: Longitude and latitude. + +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`server.geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation. + +type: keyword + +example: boston-dc + + +**`server.geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`server.geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`server.geo.region_name`** +: Region name. + +type: keyword + +example: Quebec + + +**`server.geo.timezone`** +: The time zone of the location, such as IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + +**`server.ip`** +: IP address of the server (IPv4 or IPv6). + +type: ip + + +**`server.mac`** +: MAC address of the server. The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. + +type: keyword + +example: 00-00-5E-00-53-23 + + +**`server.nat.ip`** +: Translated IP of destination-based NAT sessions (e.g. internet to private DMZ). Typically used with load balancers, firewalls, or routers. + +type: ip + + +**`server.nat.port`** +: Translated port of destination-based NAT sessions (e.g. internet to private DMZ). Typically used with load balancers, firewalls, or routers. + +type: long + +format: string + + +**`server.packets`** +: Packets sent from the server to the client. + +type: long + +example: 12 + + +**`server.port`** +: Port of the server. + +type: long + +format: string + + +**`server.registered_domain`** +: The highest registered server domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". + +type: keyword + +example: example.com + + +**`server.subdomain`** +: The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
+ +type: keyword + +example: east + + +**`server.top_level_domain`** +: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". + +type: keyword + +example: co.uk + + +**`server.user.domain`** +: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`server.user.email`** +: User email address. + +type: keyword + + +**`server.user.full_name`** +: User’s full name, if available. + +type: keyword + +example: Albert Einstein + + +**`server.user.full_name.text`** +: type: match_only_text + + +**`server.user.group.domain`** +: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`server.user.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`server.user.group.name`** +: Name of the group. + +type: keyword + + +**`server.user.hash`** +: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used. + +type: keyword + + +**`server.user.id`** +: Unique identifier of the user. + +type: keyword + +example: S-1-5-21-202424912787-2692429404-2351956786-1000 + + +**`server.user.name`** +: Short name or login of the user. + +type: keyword + +example: a.einstein + + +**`server.user.name.text`** +: type: match_only_text + + +**`server.user.roles`** +: Array of user roles at the time of the event. + +type: keyword + +example: ["kibana_admin", "reporting_user"] + + + +## service [_service] + +The service fields describe the service for or from which the data was collected. These fields help you find and correlate logs for a specific service and version. + +**`service.address`** +: Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). + +type: keyword + +example: 172.26.0.2:5432 + + +**`service.environment`** +: Identifies the environment where the service is running. If the same service runs in different environments (production, staging, QA, development, etc.), the environment can identify other instances of the same service. Can also group services and applications from the same environment. + +type: keyword + +example: production + + +**`service.ephemeral_id`** +: Ephemeral identifier of this service (if one exists). This id normally changes across restarts, but `service.id` does not. + +type: keyword + +example: 8a4f500f + + +**`service.id`** +: Unique identifier of the running service. If the service is comprised of many nodes, the `service.id` should be the same for all nodes. This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event. Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead. + +type: keyword + +example: d37e5ebfe0ae6c4972dbe9f0174a1637bb8247f6 + + +**`service.name`** +: Name of the service data is collected from. The name of the service is normally user given. 
This allows for distributed services that run on multiple hosts to correlate the related instances based on the name. In the case of Elasticsearch the `service.name` could contain the cluster name. For Beats the `service.name` is by default a copy of the `service.type` field if no name is specified. + +type: keyword + +example: elasticsearch-metrics + + +**`service.node.name`** +: Name of a service node. This allows for two nodes of the same service running on the same host to be differentiated. Therefore, `service.node.name` should typically be unique across nodes of a given service. In the case of Elasticsearch, the `service.node.name` could contain the unique node name within the Elasticsearch cluster. In cases where the service doesn’t have the concept of a node name, the host name or container name can be used to distinguish running instances that make up this service. If those do not provide uniqueness (e.g. multiple instances of the service running on the same host) - the node name can be manually set. + +type: keyword + +example: instance-0000000016 + + +**`service.origin.address`** +: Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). + +type: keyword + +example: 172.26.0.2:5432 + + +**`service.origin.environment`** +: Identifies the environment where the service is running. If the same service runs in different environments (production, staging, QA, development, etc.), the environment can identify other instances of the same service. Can also group services and applications from the same environment. + +type: keyword + +example: production + + +**`service.origin.ephemeral_id`** +: Ephemeral identifier of this service (if one exists). This id normally changes across restarts, but `service.id` does not. + +type: keyword + +example: 8a4f500f + + +**`service.origin.id`** +: Unique identifier of the running service. If the service is comprised of many nodes, the `service.id` should be the same for all nodes. This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event. Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead. + +type: keyword + +example: d37e5ebfe0ae6c4972dbe9f0174a1637bb8247f6 + + +**`service.origin.name`** +: Name of the service data is collected from. The name of the service is normally user given. This allows for distributed services that run on multiple hosts to correlate the related instances based on the name. In the case of Elasticsearch the `service.name` could contain the cluster name. For Beats the `service.name` is by default a copy of the `service.type` field if no name is specified. + +type: keyword + +example: elasticsearch-metrics + + +**`service.origin.node.name`** +: Name of a service node. This allows for two nodes of the same service running on the same host to be differentiated. Therefore, `service.node.name` should typically be unique across nodes of a given service. In the case of Elasticsearch, the `service.node.name` could contain the unique node name within the Elasticsearch cluster. In cases where the service doesn’t have the concept of a node name, the host name or container name can be used to distinguish running instances that make up this service. If those do not provide uniqueness (e.g. 
multiple instances of the service running on the same host) - the node name can be manually set. + +type: keyword + +example: instance-0000000016 + + +**`service.origin.state`** +: Current state of the service. + +type: keyword + + +**`service.origin.type`** +: The type of the service data is collected from. The type can be used to group and correlate logs and metrics from one service type. Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`. + +type: keyword + +example: elasticsearch + + +**`service.origin.version`** +: Version of the service the data was collected from. This allows you to look at a data set only for a specific version of a service. + +type: keyword + +example: 3.2.4 + + +**`service.state`** +: Current state of the service. + +type: keyword + + +**`service.target.address`** +: Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). + +type: keyword + +example: 172.26.0.2:5432 + + +**`service.target.environment`** +: Identifies the environment where the service is running. If the same service runs in different environments (production, staging, QA, development, etc.), the environment can identify other instances of the same service. Can also group services and applications from the same environment. + +type: keyword + +example: production + + +**`service.target.ephemeral_id`** +: Ephemeral identifier of this service (if one exists). This id normally changes across restarts, but `service.id` does not. + +type: keyword + +example: 8a4f500f + + +**`service.target.id`** +: Unique identifier of the running service. If the service is comprised of many nodes, the `service.id` should be the same for all nodes. This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event. Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead. + +type: keyword + +example: d37e5ebfe0ae6c4972dbe9f0174a1637bb8247f6 + + +**`service.target.name`** +: Name of the service data is collected from. The name of the service is normally user given. This allows for distributed services that run on multiple hosts to correlate the related instances based on the name. In the case of Elasticsearch the `service.name` could contain the cluster name. For Beats the `service.name` is by default a copy of the `service.type` field if no name is specified. + +type: keyword + +example: elasticsearch-metrics + + +**`service.target.node.name`** +: Name of a service node. This allows for two nodes of the same service running on the same host to be differentiated. Therefore, `service.node.name` should typically be unique across nodes of a given service. In the case of Elasticsearch, the `service.node.name` could contain the unique node name within the Elasticsearch cluster. In cases where the service doesn’t have the concept of a node name, the host name or container name can be used to distinguish running instances that make up this service. If those do not provide uniqueness (e.g. multiple instances of the service running on the same host) - the node name can be manually set. + +type: keyword + +example: instance-0000000016 + + +**`service.target.state`** +: Current state of the service. + +type: keyword + + +**`service.target.type`** +: The type of the service data is collected from.
The type can be used to group and correlate logs and metrics from one service type. Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`. + +type: keyword + +example: elasticsearch + + +**`service.target.version`** +: Version of the service the data was collected from. This allows you to look at a data set only for a specific version of a service. + +type: keyword + +example: 3.2.4 + + +**`service.type`** +: The type of the service data is collected from. The type can be used to group and correlate logs and metrics from one service type. Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`. + +type: keyword + +example: elasticsearch + + +**`service.version`** +: Version of the service the data was collected from. This allows you to look at a data set only for a specific version of a service. + +type: keyword + +example: 3.2.4 + + + +## source [_source_2] + +Source fields capture details about the sender of a network exchange/packet. These fields are populated from a network event, packet, or other event containing details of a network transaction. Source fields are usually populated in conjunction with destination fields. The source and destination fields are considered the baseline and should always be filled if an event contains source and destination details from a network transaction. If the event also contains identification of the client and server roles, then the client and server fields should also be populated. + +**`source.address`** +: Some event source addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. + +type: keyword + + +**`source.as.number`** +: Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. + +type: long + +example: 15169 + + +**`source.as.organization.name`** +: Organization name. + +type: keyword + +example: Google LLC + + +**`source.as.organization.name.text`** +: type: match_only_text + + +**`source.bytes`** +: Bytes sent from the source to the destination. + +type: long + +example: 184 + +format: bytes + + +**`source.domain`** +: The domain name of the source system. This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment. + +type: keyword + +example: foo.example.com + + +**`source.geo.city_name`** +: City name. + +type: keyword + +example: Montreal + + +**`source.geo.continent_code`** +: Two-letter code representing continent’s name. + +type: keyword + +example: NA + + +**`source.geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`source.geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`source.geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`source.geo.location`** +: Longitude and latitude. + +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`source.geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation.
+ +type: keyword + +example: boston-dc + + +**`source.geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`source.geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`source.geo.region_name`** +: Region name. + +type: keyword + +example: Quebec + + +**`source.geo.timezone`** +: The time zone of the location, such as IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + +**`source.ip`** +: IP address of the source (IPv4 or IPv6). + +type: ip + + +**`source.mac`** +: MAC address of the source. The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. + +type: keyword + +example: 00-00-5E-00-53-23 + + +**`source.nat.ip`** +: Translated IP of source-based NAT sessions (e.g. internal client to internet), typically connections traversing load balancers, firewalls, or routers. + +type: ip + + +**`source.nat.port`** +: Translated port of source-based NAT sessions (e.g. internal client to internet). Typically used with load balancers, firewalls, or routers. + +type: long + +format: string + + +**`source.packets`** +: Packets sent from the source to the destination. + +type: long + +example: 12 + + +**`source.port`** +: Port of the source. + +type: long + +format: string + + +**`source.registered_domain`** +: The highest registered source domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". + +type: keyword + +example: example.com + + +**`source.subdomain`** +: The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. + +type: keyword + +example: east + + +**`source.top_level_domain`** +: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". + +type: keyword + +example: co.uk + + +**`source.user.domain`** +: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`source.user.email`** +: User email address. + +type: keyword + + +**`source.user.full_name`** +: User’s full name, if available.
+ +type: keyword + +example: Albert Einstein + + +**`source.user.full_name.text`** +: type: match_only_text + + +**`source.user.group.domain`** +: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`source.user.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`source.user.group.name`** +: Name of the group. + +type: keyword + + +**`source.user.hash`** +: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used. + +type: keyword + + +**`source.user.id`** +: Unique identifier of the user. + +type: keyword + +example: S-1-5-21-202424912787-2692429404-2351956786-1000 + + +**`source.user.name`** +: Short name or login of the user. + +type: keyword + +example: a.einstein + + +**`source.user.name.text`** +: type: match_only_text + + +**`source.user.roles`** +: Array of user roles at the time of the event. + +type: keyword + +example: ["kibana_admin", "reporting_user"] + + + +## threat [_threat] + +Fields to classify events and alerts according to a threat taxonomy such as the MITRE ATT&CK® framework. These fields are for users to classify alerts from all of their sources (e.g. IDS, NGFW, etc.) within a common taxonomy. The threat.tactic.* fields are meant to capture the high level category of the threat (e.g. "impact"). The threat.technique.* fields are meant to capture which kind of approach is used by this detected threat, to accomplish the goal (e.g. "endpoint denial of service"). + +**`threat.enrichments`** +: A list of associated indicators objects enriching the event, and the context of that association/enrichment. + +type: nested + + +**`threat.enrichments.indicator`** +: Object containing associated indicators enriching the event. + +type: object + + +**`threat.enrichments.indicator.as.number`** +: Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. + +type: long + +example: 15169 + + +**`threat.enrichments.indicator.as.organization.name`** +: Organization name. + +type: keyword + +example: Google LLC + + +**`threat.enrichments.indicator.as.organization.name.text`** +: type: match_only_text + + +**`threat.enrichments.indicator.confidence`** +: Identifies the vendor-neutral confidence rating using the None/Low/Medium/High scale defined in Appendix A of the STIX 2.1 framework. Vendor-specific confidence scales may be added as custom fields. Expected values are: * Not Specified * None * Low * Medium * High + +type: keyword + +example: Medium + + +**`threat.enrichments.indicator.description`** +: Describes the type of action conducted by the threat. + +type: keyword + +example: IP x.x.x.x was observed delivering the Angler EK. + + +**`threat.enrichments.indicator.email.address`** +: Identifies a threat indicator as an email address (irrespective of direction). + +type: keyword + +example: `phish@example.com` + + +**`threat.enrichments.indicator.file.accessed`** +: Last time the file was accessed. Note that not all filesystems keep track of access time. + +type: date + + +**`threat.enrichments.indicator.file.attributes`** +: Array of file attributes. Attributes names will vary by platform. Here’s a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write. 
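Because `threat.enrichments` is a `nested` field, each enrichment travels as a self-contained object pairing the indicator with the `matched.*` context described later in this section. A hypothetical enriched event might look like this (values invented for illustration):

```python
# Hypothetical shape of an event carrying a single threat enrichment.
event = {
    "threat": {
        "enrichments": [
            {
                "indicator": {
                    "type": "ipv4-addr",
                    "ip": "1.2.3.4",
                    "confidence": "Medium",
                    "provider": "lrz_urlhaus",
                },
                "matched": {
                    "atomic": "1.2.3.4",
                    "field": "source.ip",
                    "type": "indicator_match_rule",
                },
            }
        ]
    }
}
```

The `nested` mapping keeps the `indicator` and `matched` values of one enrichment from cross-matching against those of another in queries.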
+ +type: keyword + +example: ["readonly", "system"] + + +**`threat.enrichments.indicator.file.code_signature.digest_algorithm`** +: The hashing algorithm used to sign the process. This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. + +type: keyword + +example: sha256 + + +**`threat.enrichments.indicator.file.code_signature.exists`** +: Boolean to capture if a signature is present. + +type: boolean + +example: true + + +**`threat.enrichments.indicator.file.code_signature.signing_id`** +: The identifier used to sign the process. This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. + +type: keyword + +example: com.apple.xpc.proxy + + +**`threat.enrichments.indicator.file.code_signature.status`** +: Additional information about the certificate status. This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. + +type: keyword + +example: ERROR_UNTRUSTED_ROOT + + +**`threat.enrichments.indicator.file.code_signature.subject_name`** +: Subject name of the code signer + +type: keyword + +example: Microsoft Corporation + + +**`threat.enrichments.indicator.file.code_signature.team_id`** +: The team identifier used to sign the process. This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. + +type: keyword + +example: EQHXZ8M8AV + + +**`threat.enrichments.indicator.file.code_signature.timestamp`** +: Date and time when the code signature was generated and signed. + +type: date + +example: 2021-01-01T12:10:30Z + + +**`threat.enrichments.indicator.file.code_signature.trusted`** +: Stores the trust status of the certificate chain. Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. + +type: boolean + +example: true + + +**`threat.enrichments.indicator.file.code_signature.valid`** +: Boolean to capture if the digital signature is verified against the binary content. Leave unpopulated if a certificate was unchecked. + +type: boolean + +example: true + + +**`threat.enrichments.indicator.file.created`** +: File creation time. Note that not all filesystems store the creation time. + +type: date + + +**`threat.enrichments.indicator.file.ctime`** +: Last time the file attributes or metadata changed. Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file. + +type: date + + +**`threat.enrichments.indicator.file.device`** +: Device that is the source of the file. + +type: keyword + +example: sda + + +**`threat.enrichments.indicator.file.directory`** +: Directory where the file is located. It should include the drive letter, when appropriate. + +type: keyword + +example: /home/alice + + +**`threat.enrichments.indicator.file.drive_letter`** +: Drive letter where the file is located. This field is only relevant on Windows. The value should be uppercase, and not include the colon. + +type: keyword + +example: C + + +**`threat.enrichments.indicator.file.elf.architecture`** +: Machine architecture of the ELF file. + +type: keyword + +example: x86-64 + + +**`threat.enrichments.indicator.file.elf.byte_order`** +: Byte sequence of ELF file. 
+ +type: keyword + +example: Little Endian + + +**`threat.enrichments.indicator.file.elf.cpu_type`** +: CPU type of the ELF file. + +type: keyword + +example: Intel + + +**`threat.enrichments.indicator.file.elf.creation_date`** +: Extracted when possible from the file’s metadata. Indicates when it was built or compiled. It can also be faked by malware creators. + +type: date + + +**`threat.enrichments.indicator.file.elf.exports`** +: List of exported element names and types. + +type: flattened + + +**`threat.enrichments.indicator.file.elf.header.abi_version`** +: Version of the ELF Application Binary Interface (ABI). + +type: keyword + + +**`threat.enrichments.indicator.file.elf.header.class`** +: Header class of the ELF file. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.header.data`** +: Data table of the ELF header. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.header.entrypoint`** +: Header entrypoint of the ELF file. + +type: long + +format: string + + +**`threat.enrichments.indicator.file.elf.header.object_version`** +: "0x1" for original ELF files. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.header.os_abi`** +: Application Binary Interface (ABI) of the Linux OS. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.header.type`** +: Header type of the ELF file. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.header.version`** +: Version of the ELF header. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.imports`** +: List of imported element names and types. + +type: flattened + + +**`threat.enrichments.indicator.file.elf.sections`** +: An array containing an object for each section of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. + +type: nested + + +**`threat.enrichments.indicator.file.elf.sections.chi2`** +: Chi-square probability distribution of the section. + +type: long + +format: number + + +**`threat.enrichments.indicator.file.elf.sections.entropy`** +: Shannon entropy calculation from the section. + +type: long + +format: number + + +**`threat.enrichments.indicator.file.elf.sections.flags`** +: ELF Section List flags. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.sections.name`** +: ELF Section List name. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.sections.physical_offset`** +: ELF Section List offset. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.sections.physical_size`** +: ELF Section List physical size. + +type: long + +format: bytes + + +**`threat.enrichments.indicator.file.elf.sections.type`** +: ELF Section List type. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.sections.virtual_address`** +: ELF Section List virtual address. + +type: long + +format: string + + +**`threat.enrichments.indicator.file.elf.sections.virtual_size`** +: ELF Section List virtual size. + +type: long + +format: string + + +**`threat.enrichments.indicator.file.elf.segments`** +: An array containing an object for each segment of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. + +type: nested + + +**`threat.enrichments.indicator.file.elf.segments.sections`** +: ELF object segment sections. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.segments.type`** +: ELF object segment type. 
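The `elf.sections.entropy` value above is the standard Shannon entropy over a section's bytes; a minimal sketch:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 for empty input)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

print(shannon_entropy(b"\x00" * 64))       # 0.0, fully uniform
print(shannon_entropy(bytes(range(256))))  # 8.0, maximal spread
```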
+ +type: keyword + + +**`threat.enrichments.indicator.file.elf.shared_libraries`** +: List of shared libraries used by this ELF object. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.telfhash`** +: telfhash symbol hash for ELF file. + +type: keyword + + +**`threat.enrichments.indicator.file.extension`** +: File extension, excluding the leading dot. Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). + +type: keyword + +example: png + + +**`threat.enrichments.indicator.file.fork_name`** +: A fork is additional data associated with a filesystem object. On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist. On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name. + +type: keyword + +example: Zone.Identifer + + +**`threat.enrichments.indicator.file.gid`** +: Primary group ID (GID) of the file. + +type: keyword + +example: 1001 + + +**`threat.enrichments.indicator.file.group`** +: Primary group name of the file. + +type: keyword + +example: alice + + +**`threat.enrichments.indicator.file.hash.md5`** +: MD5 hash. + +type: keyword + + +**`threat.enrichments.indicator.file.hash.sha1`** +: SHA1 hash. + +type: keyword + + +**`threat.enrichments.indicator.file.hash.sha256`** +: SHA256 hash. + +type: keyword + + +**`threat.enrichments.indicator.file.hash.sha512`** +: SHA512 hash. + +type: keyword + + +**`threat.enrichments.indicator.file.hash.ssdeep`** +: SSDEEP hash. + +type: keyword + + +**`threat.enrichments.indicator.file.inode`** +: Inode representing the file in the filesystem. + +type: keyword + +example: 256383 + + +**`threat.enrichments.indicator.file.mime_type`** +: MIME type should identify the format of the file or stream of bytes using [IANA official types](https://www.iana.org/assignments/media-types/media-types.xhtml), where possible. When more than one type is applicable, the most specific type should be used. + +type: keyword + + +**`threat.enrichments.indicator.file.mode`** +: Mode of the file in octal representation. + +type: keyword + +example: 0640 + + +**`threat.enrichments.indicator.file.mtime`** +: Last time the file content was modified. + +type: date + + +**`threat.enrichments.indicator.file.name`** +: Name of the file including the extension, without the directory. + +type: keyword + +example: example.png + + +**`threat.enrichments.indicator.file.owner`** +: File owner’s username. + +type: keyword + +example: alice + + +**`threat.enrichments.indicator.file.path`** +: Full path to the file, including the file name. It should include the drive letter, when appropriate. + +type: keyword + +example: /home/alice/example.png + + +**`threat.enrichments.indicator.file.path.text`** +: type: match_only_text + + +**`threat.enrichments.indicator.file.pe.architecture`** +: CPU architecture target for the file. 
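Following the `file.fork_name` description above, an NTFS ADS path decomposes into the `file.*` fields roughly like this (a naive sketch that assumes a well-formed Windows path):

```python
import ntpath

def split_ads(full_path: str) -> dict:
    """Split 'C:\\path\\to\\file.ext:fork' into ECS-style file.* values."""
    directory, tail = ntpath.split(full_path)
    name, _, fork = tail.partition(":")   # fork is "" when no ADS is present
    extension = name.rsplit(".", 1)[1] if "." in name else ""
    return {
        "file.path": full_path,           # the full path keeps the fork name
        "file.directory": directory,
        "file.name": name,
        "file.extension": extension,
        "file.fork_name": fork,
    }

print(split_ads(r"C:\path\to\filename.extension:Zone.Identifier"))
```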
+ +type: keyword + +example: x64 + + +**`threat.enrichments.indicator.file.pe.company`** +: Internal company name of the file, provided at compile-time. + +type: keyword + +example: Microsoft Corporation + + +**`threat.enrichments.indicator.file.pe.description`** +: Internal description of the file, provided at compile-time. + +type: keyword + +example: Paint + + +**`threat.enrichments.indicator.file.pe.file_version`** +: Internal version of the file, provided at compile-time. + +type: keyword + +example: 6.3.9600.17415 + + +**`threat.enrichments.indicator.file.pe.imphash`** +: A hash of the imports in a PE file. An imphash — or import hash — can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. Learn more at [https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html](https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html). + +type: keyword + +example: 0c6803c4e922103c4dca5963aad36ddf + + +**`threat.enrichments.indicator.file.pe.original_file_name`** +: Internal name of the file, provided at compile-time. + +type: keyword + +example: MSPAINT.EXE + + +**`threat.enrichments.indicator.file.pe.product`** +: Internal product name of the file, provided at compile-time. + +type: keyword + +example: Microsoft® Windows® Operating System + + +**`threat.enrichments.indicator.file.size`** +: File size in bytes. Only relevant when `file.type` is "file". + +type: long + +example: 16384 + + +**`threat.enrichments.indicator.file.target_path`** +: Target path for symlinks. + +type: keyword + + +**`threat.enrichments.indicator.file.target_path.text`** +: type: match_only_text + + +**`threat.enrichments.indicator.file.type`** +: File type (file, dir, or symlink). + +type: keyword + +example: file + + +**`threat.enrichments.indicator.file.uid`** +: The user ID (UID) or security identifier (SID) of the file owner. + +type: keyword + +example: 1001 + + +**`threat.enrichments.indicator.file.x509.alternative_names`** +: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. + +type: keyword + +example: *.elastic.co + + +**`threat.enrichments.indicator.file.x509.issuer.common_name`** +: List of common name (CN) of issuing certificate authority. + +type: keyword + +example: Example SHA2 High Assurance Server CA + + +**`threat.enrichments.indicator.file.x509.issuer.country`** +: List of country © codes + +type: keyword + +example: US + + +**`threat.enrichments.indicator.file.x509.issuer.distinguished_name`** +: Distinguished name (DN) of issuing certificate authority. + +type: keyword + +example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA + + +**`threat.enrichments.indicator.file.x509.issuer.locality`** +: List of locality names (L) + +type: keyword + +example: Mountain View + + +**`threat.enrichments.indicator.file.x509.issuer.organization`** +: List of organizations (O) of issuing certificate authority. + +type: keyword + +example: Example Inc + + +**`threat.enrichments.indicator.file.x509.issuer.organizational_unit`** +: List of organizational units (OU) of issuing certificate authority. 
+ +type: keyword + +example: www.example.com + + +**`threat.enrichments.indicator.file.x509.issuer.state_or_province`** +: List of state or province names (ST, S, or P) + +type: keyword + +example: California + + +**`threat.enrichments.indicator.file.x509.not_after`** +: Time at which the certificate is no longer considered valid. + +type: date + +example: 2020-07-16 03:15:39+00:00 + + +**`threat.enrichments.indicator.file.x509.not_before`** +: Time at which the certificate is first considered valid. + +type: date + +example: 2019-08-16 01:40:25+00:00 + + +**`threat.enrichments.indicator.file.x509.public_key_algorithm`** +: Algorithm used to generate the public key. + +type: keyword + +example: RSA + + +**`threat.enrichments.indicator.file.x509.public_key_curve`** +: The curve used by the elliptic curve public key algorithm. This is algorithm specific. + +type: keyword + +example: nistp521 + + +**`threat.enrichments.indicator.file.x509.public_key_exponent`** +: Exponent used to derive the public key. This is algorithm specific. + +type: long + +example: 65537 + +Field is not indexed. + + +**`threat.enrichments.indicator.file.x509.public_key_size`** +: The size of the public key space in bits. + +type: long + +example: 2048 + + +**`threat.enrichments.indicator.file.x509.serial_number`** +: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. + +type: keyword + +example: 55FBB9C7DEBF09809D12CCAA + + +**`threat.enrichments.indicator.file.x509.signature_algorithm`** +: Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353). + +type: keyword + +example: SHA256-RSA + + +**`threat.enrichments.indicator.file.x509.subject.common_name`** +: List of common names (CN) of subject. + +type: keyword + +example: shared.global.example.net + + +**`threat.enrichments.indicator.file.x509.subject.country`** +: List of country © code + +type: keyword + +example: US + + +**`threat.enrichments.indicator.file.x509.subject.distinguished_name`** +: Distinguished name (DN) of the certificate subject entity. + +type: keyword + +example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net + + +**`threat.enrichments.indicator.file.x509.subject.locality`** +: List of locality names (L) + +type: keyword + +example: San Francisco + + +**`threat.enrichments.indicator.file.x509.subject.organization`** +: List of organizations (O) of subject. + +type: keyword + +example: Example, Inc. + + +**`threat.enrichments.indicator.file.x509.subject.organizational_unit`** +: List of organizational units (OU) of subject. + +type: keyword + + +**`threat.enrichments.indicator.file.x509.subject.state_or_province`** +: List of state or province names (ST, S, or P) + +type: keyword + +example: California + + +**`threat.enrichments.indicator.file.x509.version_number`** +: Version of x509 format. + +type: keyword + +example: 3 + + +**`threat.enrichments.indicator.first_seen`** +: The date and time when intelligence source first reported sighting this indicator. + +type: date + +example: 2020-11-05T17:25:47.000Z + + +**`threat.enrichments.indicator.geo.city_name`** +: City name. 
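Per the `x509.serial_number` convention above (alphanumeric values without colons, in uppercase), a one-line normalizer:

```python
def normalize_serial(raw: str) -> str:
    """Strip colons and uppercase an X.509 serial number."""
    return raw.replace(":", "").upper()

print(normalize_serial("55:fb:b9:c7:de:bf:09:80:9d:12:cc:aa"))
# -> 55FBB9C7DEBF09809D12CCAA
```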
+ +type: keyword + +example: Montreal + + +**`threat.enrichments.indicator.geo.continent_code`** +: Two-letter code representing continent’s name. + +type: keyword + +example: NA + + +**`threat.enrichments.indicator.geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`threat.enrichments.indicator.geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`threat.enrichments.indicator.geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`threat.enrichments.indicator.geo.location`** +: Longitude and latitude. + +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`threat.enrichments.indicator.geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation. + +type: keyword + +example: boston-dc + + +**`threat.enrichments.indicator.geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`threat.enrichments.indicator.geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`threat.enrichments.indicator.geo.region_name`** +: Region name. + +type: keyword + +example: Quebec + + +**`threat.enrichments.indicator.geo.timezone`** +: The time zone of the location, such as IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + +**`threat.enrichments.indicator.ip`** +: Identifies a threat indicator as an IP address (irrespective of direction). + +type: ip + +example: 1.2.3.4 + + +**`threat.enrichments.indicator.last_seen`** +: The date and time when intelligence source last reported sighting this indicator. + +type: date + +example: 2020-11-05T17:25:47.000Z + + +**`threat.enrichments.indicator.marking.tlp`** +: Traffic Light Protocol sharing markings. Recommended values are: * WHITE * GREEN * AMBER * RED + +type: keyword + +example: White + + +**`threat.enrichments.indicator.modified_at`** +: The date and time when intelligence source last modified information for this indicator. + +type: date + +example: 2020-11-05T17:25:47.000Z + + +**`threat.enrichments.indicator.port`** +: Identifies a threat indicator as a port number (irrespective of direction). + +type: long + +example: 443 + + +**`threat.enrichments.indicator.provider`** +: The name of the indicator’s provider. + +type: keyword + +example: lrz_urlhaus + + +**`threat.enrichments.indicator.reference`** +: Reference URL linking to additional information about this indicator. + +type: keyword + +example: [https://system.example.com/indicator/0001234](https://system.example.com/indicator/0001234) + + +**`threat.enrichments.indicator.registry.data.bytes`** +: Original bytes written with base64 encoding. For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values. + +type: keyword + +example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA= + + +**`threat.enrichments.indicator.registry.data.strings`** +: Content when writing string types. Populated as an array when writing string data to the registry. 
For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of string with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g `"1"`). + +type: wildcard + +example: ["C:\rta\red_ttp\bin\myapp.exe"] + + +**`threat.enrichments.indicator.registry.data.type`** +: Standard registry type for encoding contents + +type: keyword + +example: REG_SZ + + +**`threat.enrichments.indicator.registry.hive`** +: Abbreviated name for the hive. + +type: keyword + +example: HKLM + + +**`threat.enrichments.indicator.registry.key`** +: Hive-relative path of keys. + +type: keyword + +example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe + + +**`threat.enrichments.indicator.registry.path`** +: Full path, including hive, key and value + +type: keyword + +example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger + + +**`threat.enrichments.indicator.registry.value`** +: Name of the value written. + +type: keyword + +example: Debugger + + +**`threat.enrichments.indicator.scanner_stats`** +: Count of AV/EDR vendors that successfully detected malicious file or URL. + +type: long + +example: 4 + + +**`threat.enrichments.indicator.sightings`** +: Number of times this indicator was observed conducting threat activity. + +type: long + +example: 20 + + +**`threat.enrichments.indicator.type`** +: Type of indicator as represented by Cyber Observable in STIX 2.0. Recommended values: * autonomous-system * artifact * directory * domain-name * email-addr * file * ipv4-addr * ipv6-addr * mac-addr * mutex * port * process * software * url * user-account * windows-registry-key * x509-certificate + +type: keyword + +example: ipv4-addr + + +**`threat.enrichments.indicator.url.domain`** +: Domain of the url, such as "www.elastic.co". In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field. If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field. + +type: keyword + +example: www.elastic.co + + +**`threat.enrichments.indicator.url.extension`** +: The field contains the file extension from the original request url, excluding the leading dot. The file extension is only set if it exists, as not every url has a file extension. The leading period must not be included. For example, the value must be "png", not ".png". Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). + +type: keyword + +example: png + + +**`threat.enrichments.indicator.url.fragment`** +: Portion of the url after the `#`, such as "top". The `#` is not part of the fragment. + +type: keyword + + +**`threat.enrichments.indicator.url.full`** +: If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source. + +type: wildcard + +example: [https://www.elastic.co:443/search?q=elasticsearch#top](https://www.elastic.co:443/search?q=elasticsearch#top) + + +**`threat.enrichments.indicator.url.full.text`** +: type: match_only_text + + +**`threat.enrichments.indicator.url.original`** +: Unmodified original url as seen in the event source. 
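The `registry.data.strings` population rules above reduce to a small normalization step (a sketch; the type names are the standard Windows registry value types):

```python
def registry_strings(reg_type: str, value) -> list:
    """Populate registry.data.strings per the rules above."""
    if reg_type in ("REG_SZ", "REG_EXPAND_SZ"):
        return [value]               # single string -> one-element array
    if reg_type == "REG_MULTI_SZ":
        return list(value)           # already a sequence of strings
    if reg_type in ("REG_DWORD", "REG_QWORD"):
        return [str(int(value))]     # decimal representation, e.g. "1"
    return []                        # e.g. REG_BINARY goes to data.bytes instead

print(registry_strings("REG_DWORD", 1))  # ['1']
```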
Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path. This field is meant to represent the URL as it was observed, complete or not. + +type: wildcard + +example: [https://www.elastic.co:443/search?q=elasticsearch#top](https://www.elastic.co:443/search?q=elasticsearch#top) or /search?q=elasticsearch + + +**`threat.enrichments.indicator.url.original.text`** +: type: match_only_text + + +**`threat.enrichments.indicator.url.password`** +: Password of the request. + +type: keyword + + +**`threat.enrichments.indicator.url.path`** +: Path of the request, such as "/search". + +type: wildcard + + +**`threat.enrichments.indicator.url.port`** +: Port of the request, such as 443. + +type: long + +example: 443 + +format: string + + +**`threat.enrichments.indicator.url.query`** +: The query field describes the query string of the request, such as "q=elasticsearch". The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases. + +type: keyword + + +**`threat.enrichments.indicator.url.registered_domain`** +: The highest registered url domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". + +type: keyword + +example: example.com + + +**`threat.enrichments.indicator.url.scheme`** +: Scheme of the request, such as "https". Note: The `:` is not part of the scheme. + +type: keyword + +example: https + + +**`threat.enrichments.indicator.url.subdomain`** +: The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. + +type: keyword + +example: east + + +**`threat.enrichments.indicator.url.top_level_domain`** +: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". + +type: keyword + +example: co.uk + + +**`threat.enrichments.indicator.url.username`** +: Username of the request. + +type: keyword + + +**`threat.enrichments.indicator.x509.alternative_names`** +: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. + +type: keyword + +example: *.elastic.co + + +**`threat.enrichments.indicator.x509.issuer.common_name`** +: List of common name (CN) of issuing certificate authority. 
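Most of the `url.*` components above map directly onto Python's standard URL parser; only the registered-domain/subdomain/eTLD pieces need a public-suffix lookup, as sketched earlier:

```python
from urllib.parse import urlsplit

original = "https://www.elastic.co:443/search?q=elasticsearch#top"
parts = urlsplit(original)
url_fields = {
    "url.original": original,
    "url.scheme": parts.scheme,      # "https" (no trailing ":")
    "url.domain": parts.hostname,    # "www.elastic.co"
    "url.port": parts.port,          # 443
    "url.path": parts.path,          # "/search"
    "url.query": parts.query,        # "q=elasticsearch" (no leading "?")
    "url.fragment": parts.fragment,  # "top" (no leading "#")
}
print(url_fields)
```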
+ +type: keyword + +example: Example SHA2 High Assurance Server CA + + +**`threat.enrichments.indicator.x509.issuer.country`** +: List of country © codes + +type: keyword + +example: US + + +**`threat.enrichments.indicator.x509.issuer.distinguished_name`** +: Distinguished name (DN) of issuing certificate authority. + +type: keyword + +example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA + + +**`threat.enrichments.indicator.x509.issuer.locality`** +: List of locality names (L) + +type: keyword + +example: Mountain View + + +**`threat.enrichments.indicator.x509.issuer.organization`** +: List of organizations (O) of issuing certificate authority. + +type: keyword + +example: Example Inc + + +**`threat.enrichments.indicator.x509.issuer.organizational_unit`** +: List of organizational units (OU) of issuing certificate authority. + +type: keyword + +example: www.example.com + + +**`threat.enrichments.indicator.x509.issuer.state_or_province`** +: List of state or province names (ST, S, or P) + +type: keyword + +example: California + + +**`threat.enrichments.indicator.x509.not_after`** +: Time at which the certificate is no longer considered valid. + +type: date + +example: 2020-07-16 03:15:39+00:00 + + +**`threat.enrichments.indicator.x509.not_before`** +: Time at which the certificate is first considered valid. + +type: date + +example: 2019-08-16 01:40:25+00:00 + + +**`threat.enrichments.indicator.x509.public_key_algorithm`** +: Algorithm used to generate the public key. + +type: keyword + +example: RSA + + +**`threat.enrichments.indicator.x509.public_key_curve`** +: The curve used by the elliptic curve public key algorithm. This is algorithm specific. + +type: keyword + +example: nistp521 + + +**`threat.enrichments.indicator.x509.public_key_exponent`** +: Exponent used to derive the public key. This is algorithm specific. + +type: long + +example: 65537 + +Field is not indexed. + + +**`threat.enrichments.indicator.x509.public_key_size`** +: The size of the public key space in bits. + +type: long + +example: 2048 + + +**`threat.enrichments.indicator.x509.serial_number`** +: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. + +type: keyword + +example: 55FBB9C7DEBF09809D12CCAA + + +**`threat.enrichments.indicator.x509.signature_algorithm`** +: Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353). + +type: keyword + +example: SHA256-RSA + + +**`threat.enrichments.indicator.x509.subject.common_name`** +: List of common names (CN) of subject. + +type: keyword + +example: shared.global.example.net + + +**`threat.enrichments.indicator.x509.subject.country`** +: List of country © code + +type: keyword + +example: US + + +**`threat.enrichments.indicator.x509.subject.distinguished_name`** +: Distinguished name (DN) of the certificate subject entity. + +type: keyword + +example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net + + +**`threat.enrichments.indicator.x509.subject.locality`** +: List of locality names (L) + +type: keyword + +example: San Francisco + + +**`threat.enrichments.indicator.x509.subject.organization`** +: List of organizations (O) of subject. + +type: keyword + +example: Example, Inc. 
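Populating the `x509.*` fields from an actual certificate is usually delegated to a parser. A hedged sketch with the third-party `cryptography` package (an assumed dependency, recent versions):

```python
from cryptography import x509                 # assumed dependency
from cryptography.x509.oid import NameOID

def x509_fields(pem_bytes: bytes) -> dict:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    issuer_cn = [a.value for a in
                 cert.issuer.get_attributes_for_oid(NameOID.COMMON_NAME)]
    return {
        "x509.issuer.common_name": issuer_cn,
        "x509.not_before": cert.not_valid_before,
        "x509.not_after": cert.not_valid_after,
        "x509.serial_number": format(cert.serial_number, "X"),  # uppercase, no colons
        "x509.version_number": str(cert.version.value + 1),     # the v3 enum has value 2
    }
```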
+ + +**`threat.enrichments.indicator.x509.subject.organizational_unit`** +: List of organizational units (OU) of subject. + +type: keyword + + +**`threat.enrichments.indicator.x509.subject.state_or_province`** +: List of state or province names (ST, S, or P) + +type: keyword + +example: California + + +**`threat.enrichments.indicator.x509.version_number`** +: Version of x509 format. + +type: keyword + +example: 3 + + +**`threat.enrichments.matched.atomic`** +: Identifies the atomic indicator value that matched a local environment endpoint or network event. + +type: keyword + +example: bad-domain.com + + +**`threat.enrichments.matched.field`** +: Identifies the field of the atomic indicator that matched a local environment endpoint or network event. + +type: keyword + +example: file.hash.sha256 + + +**`threat.enrichments.matched.id`** +: Identifies the _id of the indicator document enriching the event. + +type: keyword + +example: ff93aee5-86a1-4a61-b0e6-0cdc313d01b5 + + +**`threat.enrichments.matched.index`** +: Identifies the _index of the indicator document enriching the event. + +type: keyword + +example: filebeat-8.0.0-2021.05.23-000011 + + +**`threat.enrichments.matched.type`** +: Identifies the type of match that caused the event to be enriched with the given indicator + +type: keyword + +example: indicator_match_rule + + +**`threat.framework`** +: Name of the threat framework used to further categorize and classify the tactic and technique of the reported threat. Framework classification can be provided by detecting systems, evaluated at ingest time, or retrospectively tagged to events. + +type: keyword + +example: MITRE ATT&CK + + +**`threat.group.alias`** +: The alias(es) of the group for a set of related intrusion activity that are tracked by a common name in the security community. While not required, you can use a MITRE ATT&CK® group alias(es). + +type: keyword + +example: [ "Magecart Group 6" ] + + +**`threat.group.id`** +: The id of the group for a set of related intrusion activity that are tracked by a common name in the security community. While not required, you can use a MITRE ATT&CK® group id. + +type: keyword + +example: G0037 + + +**`threat.group.name`** +: The name of the group for a set of related intrusion activity that are tracked by a common name in the security community. While not required, you can use a MITRE ATT&CK® group name. + +type: keyword + +example: FIN6 + + +**`threat.group.reference`** +: The reference URL of the group for a set of related intrusion activity that are tracked by a common name in the security community. While not required, you can use a MITRE ATT&CK® group reference URL. + +type: keyword + +example: [https://attack.mitre.org/groups/G0037/](https://attack.mitre.org/groups/G0037/) + + +**`threat.indicator.as.number`** +: Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. + +type: long + +example: 15169 + + +**`threat.indicator.as.organization.name`** +: Organization name. + +type: keyword + +example: Google LLC + + +**`threat.indicator.as.organization.name.text`** +: type: match_only_text + + +**`threat.indicator.confidence`** +: Identifies the vendor-neutral confidence rating using the None/Low/Medium/High scale defined in Appendix A of the STIX 2.1 framework. Vendor-specific confidence scales may be added as custom fields. 
Expected values are: * Not Specified * None * Low * Medium * High + +type: keyword + +example: Medium + + +**`threat.indicator.description`** +: Describes the type of action conducted by the threat. + +type: keyword + +example: IP x.x.x.x was observed delivering the Angler EK. + + +**`threat.indicator.email.address`** +: Identifies a threat indicator as an email address (irrespective of direction). + +type: keyword + +example: `phish@example.com` + + +**`threat.indicator.file.accessed`** +: Last time the file was accessed. Note that not all filesystems keep track of access time. + +type: date + + +**`threat.indicator.file.attributes`** +: Array of file attributes. Attributes names will vary by platform. Here’s a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write. + +type: keyword + +example: ["readonly", "system"] + + +**`threat.indicator.file.code_signature.digest_algorithm`** +: The hashing algorithm used to sign the process. This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. + +type: keyword + +example: sha256 + + +**`threat.indicator.file.code_signature.exists`** +: Boolean to capture if a signature is present. + +type: boolean + +example: true + + +**`threat.indicator.file.code_signature.signing_id`** +: The identifier used to sign the process. This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. + +type: keyword + +example: com.apple.xpc.proxy + + +**`threat.indicator.file.code_signature.status`** +: Additional information about the certificate status. This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. + +type: keyword + +example: ERROR_UNTRUSTED_ROOT + + +**`threat.indicator.file.code_signature.subject_name`** +: Subject name of the code signer + +type: keyword + +example: Microsoft Corporation + + +**`threat.indicator.file.code_signature.team_id`** +: The team identifier used to sign the process. This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. + +type: keyword + +example: EQHXZ8M8AV + + +**`threat.indicator.file.code_signature.timestamp`** +: Date and time when the code signature was generated and signed. + +type: date + +example: 2021-01-01T12:10:30Z + + +**`threat.indicator.file.code_signature.trusted`** +: Stores the trust status of the certificate chain. Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. + +type: boolean + +example: true + + +**`threat.indicator.file.code_signature.valid`** +: Boolean to capture if the digital signature is verified against the binary content. Leave unpopulated if a certificate was unchecked. + +type: boolean + +example: true + + +**`threat.indicator.file.created`** +: File creation time. Note that not all filesystems store the creation time. + +type: date + + +**`threat.indicator.file.ctime`** +: Last time the file attributes or metadata changed. Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file. + +type: date + + +**`threat.indicator.file.device`** +: Device that is the source of the file. 
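Vendor-specific scores can be folded into the None/Low/Medium/High scale listed above; one possible bucketing, following the 0-100 mapping in STIX 2.1 Appendix A (the thresholds are that spec's, not ECS's):

```python
def stix_confidence(score: int) -> str:
    """Map a 0-100 vendor score onto None/Low/Medium/High."""
    if score == 0:
        return "None"
    if score < 30:
        return "Low"
    if score < 70:
        return "Medium"
    return "High"

print([stix_confidence(s) for s in (0, 15, 50, 90)])
# ['None', 'Low', 'Medium', 'High']
```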
+ +type: keyword + +example: sda + + +**`threat.indicator.file.directory`** +: Directory where the file is located. It should include the drive letter, when appropriate. + +type: keyword + +example: /home/alice + + +**`threat.indicator.file.drive_letter`** +: Drive letter where the file is located. This field is only relevant on Windows. The value should be uppercase, and not include the colon. + +type: keyword + +example: C + + +**`threat.indicator.file.elf.architecture`** +: Machine architecture of the ELF file. + +type: keyword + +example: x86-64 + + +**`threat.indicator.file.elf.byte_order`** +: Byte sequence of ELF file. + +type: keyword + +example: Little Endian + + +**`threat.indicator.file.elf.cpu_type`** +: CPU type of the ELF file. + +type: keyword + +example: Intel + + +**`threat.indicator.file.elf.creation_date`** +: Extracted when possible from the file’s metadata. Indicates when it was built or compiled. It can also be faked by malware creators. + +type: date + + +**`threat.indicator.file.elf.exports`** +: List of exported element names and types. + +type: flattened + + +**`threat.indicator.file.elf.header.abi_version`** +: Version of the ELF Application Binary Interface (ABI). + +type: keyword + + +**`threat.indicator.file.elf.header.class`** +: Header class of the ELF file. + +type: keyword + + +**`threat.indicator.file.elf.header.data`** +: Data table of the ELF header. + +type: keyword + + +**`threat.indicator.file.elf.header.entrypoint`** +: Header entrypoint of the ELF file. + +type: long + +format: string + + +**`threat.indicator.file.elf.header.object_version`** +: "0x1" for original ELF files. + +type: keyword + + +**`threat.indicator.file.elf.header.os_abi`** +: Application Binary Interface (ABI) of the Linux OS. + +type: keyword + + +**`threat.indicator.file.elf.header.type`** +: Header type of the ELF file. + +type: keyword + + +**`threat.indicator.file.elf.header.version`** +: Version of the ELF header. + +type: keyword + + +**`threat.indicator.file.elf.imports`** +: List of imported element names and types. + +type: flattened + + +**`threat.indicator.file.elf.sections`** +: An array containing an object for each section of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. + +type: nested + + +**`threat.indicator.file.elf.sections.chi2`** +: Chi-square probability distribution of the section. + +type: long + +format: number + + +**`threat.indicator.file.elf.sections.entropy`** +: Shannon entropy calculation from the section. + +type: long + +format: number + + +**`threat.indicator.file.elf.sections.flags`** +: ELF Section List flags. + +type: keyword + + +**`threat.indicator.file.elf.sections.name`** +: ELF Section List name. + +type: keyword + + +**`threat.indicator.file.elf.sections.physical_offset`** +: ELF Section List offset. + +type: keyword + + +**`threat.indicator.file.elf.sections.physical_size`** +: ELF Section List physical size. + +type: long + +format: bytes + + +**`threat.indicator.file.elf.sections.type`** +: ELF Section List type. + +type: keyword + + +**`threat.indicator.file.elf.sections.virtual_address`** +: ELF Section List virtual address. + +type: long + +format: string + + +**`threat.indicator.file.elf.sections.virtual_size`** +: ELF Section List virtual size. + +type: long + +format: string + + +**`threat.indicator.file.elf.segments`** +: An array containing an object for each segment of the ELF file. 
The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. + +type: nested + + +**`threat.indicator.file.elf.segments.sections`** +: ELF object segment sections. + +type: keyword + + +**`threat.indicator.file.elf.segments.type`** +: ELF object segment type. + +type: keyword + + +**`threat.indicator.file.elf.shared_libraries`** +: List of shared libraries used by this ELF object. + +type: keyword + + +**`threat.indicator.file.elf.telfhash`** +: telfhash symbol hash for ELF file. + +type: keyword + + +**`threat.indicator.file.extension`** +: File extension, excluding the leading dot. Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). + +type: keyword + +example: png + + +**`threat.indicator.file.fork_name`** +: A fork is additional data associated with a filesystem object. On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist. On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name. + +type: keyword + +example: Zone.Identifer + + +**`threat.indicator.file.gid`** +: Primary group ID (GID) of the file. + +type: keyword + +example: 1001 + + +**`threat.indicator.file.group`** +: Primary group name of the file. + +type: keyword + +example: alice + + +**`threat.indicator.file.hash.md5`** +: MD5 hash. + +type: keyword + + +**`threat.indicator.file.hash.sha1`** +: SHA1 hash. + +type: keyword + + +**`threat.indicator.file.hash.sha256`** +: SHA256 hash. + +type: keyword + + +**`threat.indicator.file.hash.sha512`** +: SHA512 hash. + +type: keyword + + +**`threat.indicator.file.hash.ssdeep`** +: SSDEEP hash. + +type: keyword + + +**`threat.indicator.file.inode`** +: Inode representing the file in the filesystem. + +type: keyword + +example: 256383 + + +**`threat.indicator.file.mime_type`** +: MIME type should identify the format of the file or stream of bytes using [IANA official types](https://www.iana.org/assignments/media-types/media-types.xhtml), where possible. When more than one type is applicable, the most specific type should be used. + +type: keyword + + +**`threat.indicator.file.mode`** +: Mode of the file in octal representation. + +type: keyword + +example: 0640 + + +**`threat.indicator.file.mtime`** +: Last time the file content was modified. + +type: date + + +**`threat.indicator.file.name`** +: Name of the file including the extension, without the directory. + +type: keyword + +example: example.png + + +**`threat.indicator.file.owner`** +: File owner’s username. + +type: keyword + +example: alice + + +**`threat.indicator.file.path`** +: Full path to the file, including the file name. It should include the drive letter, when appropriate. + +type: keyword + +example: /home/alice/example.png + + +**`threat.indicator.file.path.text`** +: type: match_only_text + + +**`threat.indicator.file.pe.architecture`** +: CPU architecture target for the file. 
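The `file.hash.*` family above is straightforward to fill with the standard library (ssdeep excepted; it needs a third-party binding):

```python
import hashlib

def file_hashes(path: str) -> dict:
    """Compute the ECS file.hash.* digests in a single pass over the file."""
    digests = {n: hashlib.new(n) for n in ("md5", "sha1", "sha256", "sha512")}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            for d in digests.values():
                d.update(chunk)
    return {f"file.hash.{n}": d.hexdigest() for n, d in digests.items()}
```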
+ +type: keyword + +example: x64 + + +**`threat.indicator.file.pe.company`** +: Internal company name of the file, provided at compile-time. + +type: keyword + +example: Microsoft Corporation + + +**`threat.indicator.file.pe.description`** +: Internal description of the file, provided at compile-time. + +type: keyword + +example: Paint + + +**`threat.indicator.file.pe.file_version`** +: Internal version of the file, provided at compile-time. + +type: keyword + +example: 6.3.9600.17415 + + +**`threat.indicator.file.pe.imphash`** +: A hash of the imports in a PE file. An imphash — or import hash — can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. Learn more at [https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html](https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html). + +type: keyword + +example: 0c6803c4e922103c4dca5963aad36ddf + + +**`threat.indicator.file.pe.original_file_name`** +: Internal name of the file, provided at compile-time. + +type: keyword + +example: MSPAINT.EXE + + +**`threat.indicator.file.pe.product`** +: Internal product name of the file, provided at compile-time. + +type: keyword + +example: Microsoft® Windows® Operating System + + +**`threat.indicator.file.size`** +: File size in bytes. Only relevant when `file.type` is "file". + +type: long + +example: 16384 + + +**`threat.indicator.file.target_path`** +: Target path for symlinks. + +type: keyword + + +**`threat.indicator.file.target_path.text`** +: type: match_only_text + + +**`threat.indicator.file.type`** +: File type (file, dir, or symlink). + +type: keyword + +example: file + + +**`threat.indicator.file.uid`** +: The user ID (UID) or security identifier (SID) of the file owner. + +type: keyword + +example: 1001 + + +**`threat.indicator.file.x509.alternative_names`** +: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. + +type: keyword + +example: *.elastic.co + + +**`threat.indicator.file.x509.issuer.common_name`** +: List of common name (CN) of issuing certificate authority. + +type: keyword + +example: Example SHA2 High Assurance Server CA + + +**`threat.indicator.file.x509.issuer.country`** +: List of country © codes + +type: keyword + +example: US + + +**`threat.indicator.file.x509.issuer.distinguished_name`** +: Distinguished name (DN) of issuing certificate authority. + +type: keyword + +example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA + + +**`threat.indicator.file.x509.issuer.locality`** +: List of locality names (L) + +type: keyword + +example: Mountain View + + +**`threat.indicator.file.x509.issuer.organization`** +: List of organizations (O) of issuing certificate authority. + +type: keyword + +example: Example Inc + + +**`threat.indicator.file.x509.issuer.organizational_unit`** +: List of organizational units (OU) of issuing certificate authority. + +type: keyword + +example: www.example.com + + +**`threat.indicator.file.x509.issuer.state_or_province`** +: List of state or province names (ST, S, or P) + +type: keyword + +example: California + + +**`threat.indicator.file.x509.not_after`** +: Time at which the certificate is no longer considered valid. 
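Computing the `pe.imphash` above is normally delegated to a PE parser; a sketch with the third-party `pefile` package (an assumed dependency):

```python
import pefile  # assumed dependency

def pe_imphash(path: str) -> str:
    """Import hash: stable across recompilation while the imports stay the same."""
    return pefile.PE(path).get_imphash()
```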
+ +type: date + +example: 2020-07-16 03:15:39+00:00 + + +**`threat.indicator.file.x509.not_before`** +: Time at which the certificate is first considered valid. + +type: date + +example: 2019-08-16 01:40:25+00:00 + + +**`threat.indicator.file.x509.public_key_algorithm`** +: Algorithm used to generate the public key. + +type: keyword + +example: RSA + + +**`threat.indicator.file.x509.public_key_curve`** +: The curve used by the elliptic curve public key algorithm. This is algorithm specific. + +type: keyword + +example: nistp521 + + +**`threat.indicator.file.x509.public_key_exponent`** +: Exponent used to derive the public key. This is algorithm specific. + +type: long + +example: 65537 + +Field is not indexed. + + +**`threat.indicator.file.x509.public_key_size`** +: The size of the public key space in bits. + +type: long + +example: 2048 + + +**`threat.indicator.file.x509.serial_number`** +: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. + +type: keyword + +example: 55FBB9C7DEBF09809D12CCAA + + +**`threat.indicator.file.x509.signature_algorithm`** +: Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353). + +type: keyword + +example: SHA256-RSA + + +**`threat.indicator.file.x509.subject.common_name`** +: List of common names (CN) of subject. + +type: keyword + +example: shared.global.example.net + + +**`threat.indicator.file.x509.subject.country`** +: List of country © code + +type: keyword + +example: US + + +**`threat.indicator.file.x509.subject.distinguished_name`** +: Distinguished name (DN) of the certificate subject entity. + +type: keyword + +example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net + + +**`threat.indicator.file.x509.subject.locality`** +: List of locality names (L) + +type: keyword + +example: San Francisco + + +**`threat.indicator.file.x509.subject.organization`** +: List of organizations (O) of subject. + +type: keyword + +example: Example, Inc. + + +**`threat.indicator.file.x509.subject.organizational_unit`** +: List of organizational units (OU) of subject. + +type: keyword + + +**`threat.indicator.file.x509.subject.state_or_province`** +: List of state or province names (ST, S, or P) + +type: keyword + +example: California + + +**`threat.indicator.file.x509.version_number`** +: Version of x509 format. + +type: keyword + +example: 3 + + +**`threat.indicator.first_seen`** +: The date and time when intelligence source first reported sighting this indicator. + +type: date + +example: 2020-11-05T17:25:47.000Z + + +**`threat.indicator.geo.city_name`** +: City name. + +type: keyword + +example: Montreal + + +**`threat.indicator.geo.continent_code`** +: Two-letter code representing continent’s name. + +type: keyword + +example: NA + + +**`threat.indicator.geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`threat.indicator.geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`threat.indicator.geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`threat.indicator.geo.location`** +: Longitude and latitude. 
+ +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`threat.indicator.geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation. + +type: keyword + +example: boston-dc + + +**`threat.indicator.geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`threat.indicator.geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`threat.indicator.geo.region_name`** +: Region name. + +type: keyword + +example: Quebec + + +**`threat.indicator.geo.timezone`** +: The time zone of the location, such as IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + +**`threat.indicator.ip`** +: Identifies a threat indicator as an IP address (irrespective of direction). + +type: ip + +example: 1.2.3.4 + + +**`threat.indicator.last_seen`** +: The date and time when intelligence source last reported sighting this indicator. + +type: date + +example: 2020-11-05T17:25:47.000Z + + +**`threat.indicator.marking.tlp`** +: Traffic Light Protocol sharing markings. Recommended values are: * WHITE * GREEN * AMBER * RED + +type: keyword + +example: WHITE + + +**`threat.indicator.modified_at`** +: The date and time when intelligence source last modified information for this indicator. + +type: date + +example: 2020-11-05T17:25:47.000Z + + +**`threat.indicator.port`** +: Identifies a threat indicator as a port number (irrespective of direction). + +type: long + +example: 443 + + +**`threat.indicator.provider`** +: The name of the indicator’s provider. + +type: keyword + +example: lrz_urlhaus + + +**`threat.indicator.reference`** +: Reference URL linking to additional information about this indicator. + +type: keyword + +example: [https://system.example.com/indicator/0001234](https://system.example.com/indicator/0001234) + + +**`threat.indicator.registry.data.bytes`** +: Original bytes written with base64 encoding. For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values. + +type: keyword + +example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA= + + +**`threat.indicator.registry.data.strings`** +: Content when writing string types. Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of string with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g `"1"`). + +type: wildcard + +example: ["C:\rta\red_ttp\bin\myapp.exe"] + + +**`threat.indicator.registry.data.type`** +: Standard registry type for encoding contents + +type: keyword + +example: REG_SZ + + +**`threat.indicator.registry.hive`** +: Abbreviated name for the hive. + +type: keyword + +example: HKLM + + +**`threat.indicator.registry.key`** +: Hive-relative path of keys. 
+ +type: keyword + +example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe + + +**`threat.indicator.registry.path`** +: Full path, including hive, key and value + +type: keyword + +example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger + + +**`threat.indicator.registry.value`** +: Name of the value written. + +type: keyword + +example: Debugger + + +**`threat.indicator.scanner_stats`** +: Count of AV/EDR vendors that successfully detected malicious file or URL. + +type: long + +example: 4 + + +**`threat.indicator.sightings`** +: Number of times this indicator was observed conducting threat activity. + +type: long + +example: 20 + + +**`threat.indicator.type`** +: Type of indicator as represented by Cyber Observable in STIX 2.0. Recommended values: * autonomous-system * artifact * directory * domain-name * email-addr * file * ipv4-addr * ipv6-addr * mac-addr * mutex * port * process * software * url * user-account * windows-registry-key * x509-certificate + +type: keyword + +example: ipv4-addr + + +**`threat.indicator.url.domain`** +: Domain of the url, such as "www.elastic.co". In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field. If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field. + +type: keyword + +example: www.elastic.co + + +**`threat.indicator.url.extension`** +: The field contains the file extension from the original request url, excluding the leading dot. The file extension is only set if it exists, as not every url has a file extension. The leading period must not be included. For example, the value must be "png", not ".png". Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). + +type: keyword + +example: png + + +**`threat.indicator.url.fragment`** +: Portion of the url after the `#`, such as "top". The `#` is not part of the fragment. + +type: keyword + + +**`threat.indicator.url.full`** +: If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source. + +type: wildcard + +example: [https://www.elastic.co:443/search?q=elasticsearch#top](https://www.elastic.co:443/search?q=elasticsearch#top) + + +**`threat.indicator.url.full.text`** +: type: match_only_text + + +**`threat.indicator.url.original`** +: Unmodified original url as seen in the event source. Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path. This field is meant to represent the URL as it was observed, complete or not. + +type: wildcard + +example: [https://www.elastic.co:443/search?q=elasticsearch#top](https://www.elastic.co:443/search?q=elasticsearch#top) or /search?q=elasticsearch + + +**`threat.indicator.url.original.text`** +: type: match_only_text + + +**`threat.indicator.url.password`** +: Password of the request. + +type: keyword + + +**`threat.indicator.url.path`** +: Path of the request, such as "/search". + +type: wildcard + + +**`threat.indicator.url.port`** +: Port of the request, such as 443. + +type: long + +example: 443 + +format: string + + +**`threat.indicator.url.query`** +: The query field describes the query string of the request, such as "q=elasticsearch". 
The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases. + +type: keyword + + +**`threat.indicator.url.registered_domain`** +: The highest registered url domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". + +type: keyword + +example: example.com + + +**`threat.indicator.url.scheme`** +: Scheme of the request, such as "https". Note: The `:` is not part of the scheme. + +type: keyword + +example: https + + +**`threat.indicator.url.subdomain`** +: The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. + +type: keyword + +example: east + + +**`threat.indicator.url.top_level_domain`** +: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". + +type: keyword + +example: co.uk + + +**`threat.indicator.url.username`** +: Username of the request. + +type: keyword + + +**`threat.indicator.x509.alternative_names`** +: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. + +type: keyword + +example: *.elastic.co + + +**`threat.indicator.x509.issuer.common_name`** +: List of common name (CN) of issuing certificate authority. + +type: keyword + +example: Example SHA2 High Assurance Server CA + + +**`threat.indicator.x509.issuer.country`** +: List of country (C) codes + +type: keyword + +example: US + + +**`threat.indicator.x509.issuer.distinguished_name`** +: Distinguished name (DN) of issuing certificate authority. + +type: keyword + +example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA + + +**`threat.indicator.x509.issuer.locality`** +: List of locality names (L) + +type: keyword + +example: Mountain View + + +**`threat.indicator.x509.issuer.organization`** +: List of organizations (O) of issuing certificate authority. + +type: keyword + +example: Example Inc
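+The `*.url.registered_domain`, `*.url.subdomain`, and `*.url.top_level_domain` fields above are all derived from the public suffix list. As a non-authoritative sketch, the third-party `tldextract` package (an assumption; it is not part of this module) performs that derivation:
+
+```python
+# Hypothetical helper, not part of Auditbeat: resolve a host against the
+# public suffix list using the third-party tldextract package.
+import tldextract
+
+parts = tldextract.extract("www.east.mydomain.co.uk")
+print(parts.suffix)                         # "co.uk"          -> url.top_level_domain
+print(parts.domain + "." + parts.suffix)    # "mydomain.co.uk" -> url.registered_domain
+print(parts.subdomain)                      # "www.east"; note ECS url.subdomain drops
+                                            # the host name, keeping only "east"
+```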
+ + +**`threat.indicator.x509.issuer.organizational_unit`** +: List of organizational units (OU) of issuing certificate authority. + +type: keyword + +example: www.example.com + + +**`threat.indicator.x509.issuer.state_or_province`** +: List of state or province names (ST, S, or P) + +type: keyword + +example: California + + +**`threat.indicator.x509.not_after`** +: Time at which the certificate is no longer considered valid. + +type: date + +example: 2020-07-16 03:15:39+00:00 + + +**`threat.indicator.x509.not_before`** +: Time at which the certificate is first considered valid. + +type: date + +example: 2019-08-16 01:40:25+00:00 + + +**`threat.indicator.x509.public_key_algorithm`** +: Algorithm used to generate the public key. + +type: keyword + +example: RSA + + +**`threat.indicator.x509.public_key_curve`** +: The curve used by the elliptic curve public key algorithm. This is algorithm specific. + +type: keyword + +example: nistp521 + + +**`threat.indicator.x509.public_key_exponent`** +: Exponent used to derive the public key. This is algorithm specific. + +type: long + +example: 65537 + +Field is not indexed. + + +**`threat.indicator.x509.public_key_size`** +: The size of the public key space in bits. + +type: long + +example: 2048 + + +**`threat.indicator.x509.serial_number`** +: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. + +type: keyword + +example: 55FBB9C7DEBF09809D12CCAA + + +**`threat.indicator.x509.signature_algorithm`** +: Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353). + +type: keyword + +example: SHA256-RSA + + +**`threat.indicator.x509.subject.common_name`** +: List of common names (CN) of subject. + +type: keyword + +example: shared.global.example.net + + +**`threat.indicator.x509.subject.country`** +: List of country (C) codes + +type: keyword + +example: US + + +**`threat.indicator.x509.subject.distinguished_name`** +: Distinguished name (DN) of the certificate subject entity. + +type: keyword + +example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net + + +**`threat.indicator.x509.subject.locality`** +: List of locality names (L) + +type: keyword + +example: San Francisco + + +**`threat.indicator.x509.subject.organization`** +: List of organizations (O) of subject. + +type: keyword + +example: Example, Inc. + + +**`threat.indicator.x509.subject.organizational_unit`** +: List of organizational units (OU) of subject. + +type: keyword + + +**`threat.indicator.x509.subject.state_or_province`** +: List of state or province names (ST, S, or P) + +type: keyword + +example: California + + +**`threat.indicator.x509.version_number`** +: Version of x509 format. + +type: keyword + +example: 3 + + +**`threat.software.alias`** +: The alias(es) of the software for a set of related intrusion activity that are tracked by a common name in the security community. While not required, you can use a MITRE ATT&CK® associated software description. + +type: keyword + +example: [ "X-Agent" ] + + +**`threat.software.id`** +: The id of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. While not required, you can use a MITRE ATT&CK® software id. + +type: keyword + +example: S0552 + + +**`threat.software.name`** +: The name of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. 
While not required, you can use a MITRE ATT&CK® software name. + +type: keyword + +example: AdFind + + +**`threat.software.platforms`** +: The platforms of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. Recommended Values: * AWS * Azure * Azure AD * GCP * Linux * macOS * Network * Office 365 * SaaS * Windows + +While not required, you can use MITRE ATT&CK® software platforms. + +type: keyword + +example: [ "Windows" ] + + +**`threat.software.reference`** +: The reference URL of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. While not required, you can use a MITRE ATT&CK® software reference URL. + +type: keyword + +example: [https://attack.mitre.org/software/S0552/](https://attack.mitre.org/software/S0552/) + + +**`threat.software.type`** +: The type of software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. Recommended values: * Malware * Tool + +While not required, you can use a MITRE ATT&CK® software type. + +type: keyword + +example: Tool + + +**`threat.tactic.id`** +: The id of tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example. (ex. [https://attack.mitre.org/tactics/TA0002/](https://attack.mitre.org/tactics/TA0002/)) + +type: keyword + +example: TA0002 + + +**`threat.tactic.name`** +: Name of the type of tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example. (ex. [https://attack.mitre.org/tactics/TA0002/](https://attack.mitre.org/tactics/TA0002/)) + +type: keyword + +example: Execution + + +**`threat.tactic.reference`** +: The reference url of tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example. (ex. [https://attack.mitre.org/tactics/TA0002/](https://attack.mitre.org/tactics/TA0002/)) + +type: keyword + +example: [https://attack.mitre.org/tactics/TA0002/](https://attack.mitre.org/tactics/TA0002/) + + +**`threat.technique.id`** +: The id of technique used by this threat. You can use a MITRE ATT&CK® technique, for example. (ex. [https://attack.mitre.org/techniques/T1059/](https://attack.mitre.org/techniques/T1059/)) + +type: keyword + +example: T1059 + + +**`threat.technique.name`** +: The name of technique used by this threat. You can use a MITRE ATT&CK® technique, for example. (ex. [https://attack.mitre.org/techniques/T1059/](https://attack.mitre.org/techniques/T1059/)) + +type: keyword + +example: Command and Scripting Interpreter + + +**`threat.technique.name.text`** +: type: match_only_text + + +**`threat.technique.reference`** +: The reference url of technique used by this threat. You can use a MITRE ATT&CK® technique, for example. (ex. [https://attack.mitre.org/techniques/T1059/](https://attack.mitre.org/techniques/T1059/)) + +type: keyword + +example: [https://attack.mitre.org/techniques/T1059/](https://attack.mitre.org/techniques/T1059/) + + +**`threat.technique.subtechnique.id`** +: The full id of subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example. (ex. [https://attack.mitre.org/techniques/T1059/001/](https://attack.mitre.org/techniques/T1059/001/)) + +type: keyword + +example: T1059.001
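+Taken together, the surrounding per-field examples would combine into a single hypothetical `threat.*` block like the following (illustrative only; every value is copied from the examples in this section):
+
+```json
+{
+  "threat": {
+    "software": { "id": "S0552", "name": "AdFind", "type": "Tool", "platforms": ["Windows"] },
+    "tactic": { "id": "TA0002", "name": "Execution", "reference": "https://attack.mitre.org/tactics/TA0002/" },
+    "technique": {
+      "id": "T1059", "name": "Command and Scripting Interpreter",
+      "subtechnique": { "id": "T1059.001", "name": "PowerShell" }
+    }
+  }
+}
+```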
+ + +**`threat.technique.subtechnique.name`** +: The name of subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example. (ex. [https://attack.mitre.org/techniques/T1059/001/](https://attack.mitre.org/techniques/T1059/001/)) + +type: keyword + +example: PowerShell + + +**`threat.technique.subtechnique.name.text`** +: type: match_only_text + + +**`threat.technique.subtechnique.reference`** +: The reference url of subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example. (ex. [https://attack.mitre.org/techniques/T1059/001/](https://attack.mitre.org/techniques/T1059/001/)) + +type: keyword + +example: [https://attack.mitre.org/techniques/T1059/001/](https://attack.mitre.org/techniques/T1059/001/) + + + +## tls [_tls] + +Fields related to a TLS connection. These fields focus on the TLS protocol itself and intentionally avoid in-depth analysis of the related x.509 certificate files. + +**`tls.cipher`** +: String indicating the cipher used during the current connection. + +type: keyword + +example: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 + + +**`tls.client.certificate`** +: PEM-encoded stand-alone certificate offered by the client. This is usually mutually-exclusive of `client.certificate_chain` since this value also exists in that list. + +type: keyword + +example: MII… + + +**`tls.client.certificate_chain`** +: Array of PEM-encoded certificates that make up the certificate chain offered by the client. This is usually mutually-exclusive of `client.certificate` since that value should be the first certificate in the chain. + +type: keyword + +example: ["MII…", "MII…"] + + +**`tls.client.hash.md5`** +: Certificate fingerprint using the MD5 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash. + +type: keyword + +example: 0F76C7F2C55BFD7D8E8B8F4BFBF0C9EC + + +**`tls.client.hash.sha1`** +: Certificate fingerprint using the SHA1 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash. + +type: keyword + +example: 9E393D93138888D288266C2D915214D1D1CCEB2A + + +**`tls.client.hash.sha256`** +: Certificate fingerprint using the SHA256 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash. + +type: keyword + +example: 0687F666A054EF17A08E2F2162EAB4CBC0D265E1D7875BE74BF3C712CA92DAF0 + + +**`tls.client.issuer`** +: Distinguished name of subject of the issuer of the x.509 certificate presented by the client. + +type: keyword + +example: CN=Example Root CA, OU=Infrastructure Team, DC=example, DC=com + + +**`tls.client.ja3`** +: A hash that identifies clients based on how they perform an SSL/TLS handshake. + +type: keyword + +example: d4e5b18d6b55c71272893221c96ba240 + + +**`tls.client.not_after`** +: Date/Time indicating when client certificate is no longer considered valid. + +type: date + +example: 2021-01-01T00:00:00.000Z + + +**`tls.client.not_before`** +: Date/Time indicating when client certificate is first considered valid. + +type: date + +example: 1970-01-01T00:00:00.000Z + + +**`tls.client.server_name`** +: Also called an SNI, this tells the server the hostname to which the client is attempting to connect. When this value is available, it should get copied to `destination.domain`. + +type: keyword + +example: www.elastic.co
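+The `tls.client.hash.*` fields above are digests over the DER-encoded certificate, formatted as uppercase hex per the field descriptions. A minimal sketch of that computation (the variable `der_bytes` is an assumed input holding the raw certificate):
+
+```python
+# Sketch: compute tls.client.hash.{md5,sha1,sha256} fingerprints from a
+# DER-encoded certificate, uppercased as the field docs recommend.
+import hashlib
+
+def cert_fingerprints(der_bytes: bytes) -> dict:
+    return {
+        "tls.client.hash.md5": hashlib.md5(der_bytes).hexdigest().upper(),
+        "tls.client.hash.sha1": hashlib.sha1(der_bytes).hexdigest().upper(),
+        "tls.client.hash.sha256": hashlib.sha256(der_bytes).hexdigest().upper(),
+    }
+```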
+ + +**`tls.client.subject`** +: Distinguished name of subject of the x.509 certificate presented by the client. + +type: keyword + +example: CN=myclient, OU=Documentation Team, DC=example, DC=com + + +**`tls.client.supported_ciphers`** +: Array of ciphers offered by the client during the client hello. + +type: keyword + +example: ["TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "…"] + + +**`tls.client.x509.alternative_names`** +: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. + +type: keyword + +example: *.elastic.co + + +**`tls.client.x509.issuer.common_name`** +: List of common name (CN) of issuing certificate authority. + +type: keyword + +example: Example SHA2 High Assurance Server CA + + +**`tls.client.x509.issuer.country`** +: List of country (C) codes + +type: keyword + +example: US + + +**`tls.client.x509.issuer.distinguished_name`** +: Distinguished name (DN) of issuing certificate authority. + +type: keyword + +example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA + + +**`tls.client.x509.issuer.locality`** +: List of locality names (L) + +type: keyword + +example: Mountain View + + +**`tls.client.x509.issuer.organization`** +: List of organizations (O) of issuing certificate authority. + +type: keyword + +example: Example Inc + + +**`tls.client.x509.issuer.organizational_unit`** +: List of organizational units (OU) of issuing certificate authority. + +type: keyword + +example: www.example.com + + +**`tls.client.x509.issuer.state_or_province`** +: List of state or province names (ST, S, or P) + +type: keyword + +example: California + + +**`tls.client.x509.not_after`** +: Time at which the certificate is no longer considered valid. + +type: date + +example: 2020-07-16 03:15:39+00:00 + + +**`tls.client.x509.not_before`** +: Time at which the certificate is first considered valid. + +type: date + +example: 2019-08-16 01:40:25+00:00 + + +**`tls.client.x509.public_key_algorithm`** +: Algorithm used to generate the public key. + +type: keyword + +example: RSA + + +**`tls.client.x509.public_key_curve`** +: The curve used by the elliptic curve public key algorithm. This is algorithm specific. + +type: keyword + +example: nistp521 + + +**`tls.client.x509.public_key_exponent`** +: Exponent used to derive the public key. This is algorithm specific. + +type: long + +example: 65537 + +Field is not indexed. + + +**`tls.client.x509.public_key_size`** +: The size of the public key space in bits. + +type: long + +example: 2048 + + +**`tls.client.x509.serial_number`** +: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. + +type: keyword + +example: 55FBB9C7DEBF09809D12CCAA + + +**`tls.client.x509.signature_algorithm`** +: Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353). + +type: keyword + +example: SHA256-RSA + + +**`tls.client.x509.subject.common_name`** +: List of common names (CN) of subject. + +type: keyword + +example: shared.global.example.net + + +**`tls.client.x509.subject.country`** +: List of country (C) codes + +type: keyword + +example: US + + +**`tls.client.x509.subject.distinguished_name`** +: Distinguished name (DN) of the certificate subject entity. 
+ +type: keyword + +example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net + + +**`tls.client.x509.subject.locality`** +: List of locality names (L) + +type: keyword + +example: San Francisco + + +**`tls.client.x509.subject.organization`** +: List of organizations (O) of subject. + +type: keyword + +example: Example, Inc. + + +**`tls.client.x509.subject.organizational_unit`** +: List of organizational units (OU) of subject. + +type: keyword + + +**`tls.client.x509.subject.state_or_province`** +: List of state or province names (ST, S, or P) + +type: keyword + +example: California + + +**`tls.client.x509.version_number`** +: Version of x509 format. + +type: keyword + +example: 3 + + +**`tls.curve`** +: String indicating the curve used for the given cipher, when applicable. + +type: keyword + +example: secp256r1 + + +**`tls.established`** +: Boolean flag indicating if the TLS negotiation was successful and transitioned to an encrypted tunnel. + +type: boolean + + +**`tls.next_protocol`** +: String indicating the protocol being tunneled. Per the values in the IANA registry ([https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids](https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids)), this string should be lower case. + +type: keyword + +example: http/1.1 + + +**`tls.resumed`** +: Boolean flag indicating if this TLS connection was resumed from an existing TLS negotiation. + +type: boolean + + +**`tls.server.certificate`** +: PEM-encoded stand-alone certificate offered by the server. This is usually mutually-exclusive of `server.certificate_chain` since this value also exists in that list. + +type: keyword + +example: MII…​ + + +**`tls.server.certificate_chain`** +: Array of PEM-encoded certificates that make up the certificate chain offered by the server. This is usually mutually-exclusive of `server.certificate` since that value should be the first certificate in the chain. + +type: keyword + +example: ["MII…​", "MII…​"] + + +**`tls.server.hash.md5`** +: Certificate fingerprint using the MD5 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash. + +type: keyword + +example: 0F76C7F2C55BFD7D8E8B8F4BFBF0C9EC + + +**`tls.server.hash.sha1`** +: Certificate fingerprint using the SHA1 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash. + +type: keyword + +example: 9E393D93138888D288266C2D915214D1D1CCEB2A + + +**`tls.server.hash.sha256`** +: Certificate fingerprint using the SHA256 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash. + +type: keyword + +example: 0687F666A054EF17A08E2F2162EAB4CBC0D265E1D7875BE74BF3C712CA92DAF0 + + +**`tls.server.issuer`** +: Subject of the issuer of the x.509 certificate presented by the server. + +type: keyword + +example: CN=Example Root CA, OU=Infrastructure Team, DC=example, DC=com + + +**`tls.server.ja3s`** +: A hash that identifies servers based on how they perform an SSL/TLS handshake. + +type: keyword + +example: 394441ab65754e2207b1e1b457b3641d + + +**`tls.server.not_after`** +: Timestamp indicating when server certificate is no longer considered valid. 
+ +type: date + +example: 2021-01-01T00:00:00.000Z + + +**`tls.server.not_before`** +: Timestamp indicating when server certificate is first considered valid. + +type: date + +example: 1970-01-01T00:00:00.000Z + + +**`tls.server.subject`** +: Subject of the x.509 certificate presented by the server. + +type: keyword + +example: CN=www.example.com, OU=Infrastructure Team, DC=example, DC=com + + +**`tls.server.x509.alternative_names`** +: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. + +type: keyword + +example: *.elastic.co + + +**`tls.server.x509.issuer.common_name`** +: List of common name (CN) of issuing certificate authority. + +type: keyword + +example: Example SHA2 High Assurance Server CA + + +**`tls.server.x509.issuer.country`** +: List of country (C) codes + +type: keyword + +example: US + + +**`tls.server.x509.issuer.distinguished_name`** +: Distinguished name (DN) of issuing certificate authority. + +type: keyword + +example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA + + +**`tls.server.x509.issuer.locality`** +: List of locality names (L) + +type: keyword + +example: Mountain View + + +**`tls.server.x509.issuer.organization`** +: List of organizations (O) of issuing certificate authority. + +type: keyword + +example: Example Inc + + +**`tls.server.x509.issuer.organizational_unit`** +: List of organizational units (OU) of issuing certificate authority. + +type: keyword + +example: www.example.com + + +**`tls.server.x509.issuer.state_or_province`** +: List of state or province names (ST, S, or P) + +type: keyword + +example: California + + +**`tls.server.x509.not_after`** +: Time at which the certificate is no longer considered valid. + +type: date + +example: 2020-07-16 03:15:39+00:00 + + +**`tls.server.x509.not_before`** +: Time at which the certificate is first considered valid. + +type: date + +example: 2019-08-16 01:40:25+00:00 + + +**`tls.server.x509.public_key_algorithm`** +: Algorithm used to generate the public key. + +type: keyword + +example: RSA + + +**`tls.server.x509.public_key_curve`** +: The curve used by the elliptic curve public key algorithm. This is algorithm specific. + +type: keyword + +example: nistp521 + + +**`tls.server.x509.public_key_exponent`** +: Exponent used to derive the public key. This is algorithm specific. + +type: long + +example: 65537 + +Field is not indexed. + + +**`tls.server.x509.public_key_size`** +: The size of the public key space in bits. + +type: long + +example: 2048 + + +**`tls.server.x509.serial_number`** +: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. + +type: keyword + +example: 55FBB9C7DEBF09809D12CCAA + + +**`tls.server.x509.signature_algorithm`** +: Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353). + +type: keyword + +example: SHA256-RSA + + +**`tls.server.x509.subject.common_name`** +: List of common names (CN) of subject. 
+ +type: keyword + +example: shared.global.example.net + + +**`tls.server.x509.subject.country`** +: List of country (C) codes + +type: keyword + +example: US + + +**`tls.server.x509.subject.distinguished_name`** +: Distinguished name (DN) of the certificate subject entity. + +type: keyword + +example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net + + +**`tls.server.x509.subject.locality`** +: List of locality names (L) + +type: keyword + +example: San Francisco + + +**`tls.server.x509.subject.organization`** +: List of organizations (O) of subject. + +type: keyword + +example: Example, Inc. + + +**`tls.server.x509.subject.organizational_unit`** +: List of organizational units (OU) of subject. + +type: keyword + + +**`tls.server.x509.subject.state_or_province`** +: List of state or province names (ST, S, or P) + +type: keyword + +example: California + + +**`tls.server.x509.version_number`** +: Version of x509 format. + +type: keyword + +example: 3 + + +**`tls.version`** +: Numeric part of the version parsed from the original string. + +type: keyword + +example: 1.2 + + +**`tls.version_protocol`** +: Normalized lowercase protocol name parsed from original string. + +type: keyword + +example: tls + + +**`span.id`** +: Unique identifier of the span within the scope of its trace. A span represents an operation within a transaction, such as a request to another service, or a database query. + +type: keyword + +example: 3ff9a8981b7ccd5a + + +**`trace.id`** +: Unique identifier of the trace. A trace groups multiple events like transactions that belong together. For example, a user request handled by multiple inter-connected services. + +type: keyword + +example: 4bf92f3577b34da6a3ce929d0e0e4736 + + +**`transaction.id`** +: Unique identifier of the transaction within the scope of its trace. A transaction is the highest level of work measured within a service, such as a request to a server. + +type: keyword + +example: 00f067aa0ba902b7 + + + +## url [_url] + +URL fields provide support for complete or partial URLs, and support breaking them down into scheme, domain, path, and so on. A sketch of that decomposition follows the first few field definitions below. + +**`url.domain`** +: Domain of the url, such as "www.elastic.co". In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field. If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field. + +type: keyword + +example: www.elastic.co + + +**`url.extension`** +: The field contains the file extension from the original request url, excluding the leading dot. The file extension is only set if it exists, as not every url has a file extension. The leading period must not be included. For example, the value must be "png", not ".png". Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). + +type: keyword + +example: png + + +**`url.fragment`** +: Portion of the url after the `#`, such as "top". The `#` is not part of the fragment. + +type: keyword
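+As a minimal, non-authoritative sketch, Python's standard `urllib.parse` splits a full URL into most of these parts (the mapping to `url.*` field names is my own, following the descriptions in this section):
+
+```python
+# Sketch: split a URL into ECS-style url.* parts with the standard library.
+from urllib.parse import urlsplit
+
+u = urlsplit("https://www.elastic.co:443/search?q=elasticsearch#top")
+doc = {
+    "url.scheme": u.scheme,      # "https" (the ":" is not part of the scheme)
+    "url.domain": u.hostname,    # "www.elastic.co"
+    "url.port": u.port,          # 443
+    "url.path": u.path,          # "/search"
+    "url.query": u.query,        # "q=elasticsearch" (the "?" is excluded)
+    "url.fragment": u.fragment,  # "top" (the "#" is not part of the fragment)
+}
+```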
+ + +**`url.full`** +: If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source. + +type: wildcard + +example: [https://www.elastic.co:443/search?q=elasticsearch#top](https://www.elastic.co:443/search?q=elasticsearch#top) + + +**`url.full.text`** +: type: match_only_text + + +**`url.original`** +: Unmodified original url as seen in the event source. Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path. This field is meant to represent the URL as it was observed, complete or not. + +type: wildcard + +example: [https://www.elastic.co:443/search?q=elasticsearch#top](https://www.elastic.co:443/search?q=elasticsearch#top) or /search?q=elasticsearch + + +**`url.original.text`** +: type: match_only_text + + +**`url.password`** +: Password of the request. + +type: keyword + + +**`url.path`** +: Path of the request, such as "/search". + +type: wildcard + + +**`url.port`** +: Port of the request, such as 443. + +type: long + +example: 443 + +format: string + + +**`url.query`** +: The query field describes the query string of the request, such as "q=elasticsearch". The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases. + +type: keyword + + +**`url.registered_domain`** +: The highest registered url domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". + +type: keyword + +example: example.com + + +**`url.scheme`** +: Scheme of the request, such as "https". Note: The `:` is not part of the scheme. + +type: keyword + +example: https + + +**`url.subdomain`** +: The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. + +type: keyword + +example: east + + +**`url.top_level_domain`** +: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". + +type: keyword + +example: co.uk + + +**`url.username`** +: Username of the request. + +type: keyword + + + +## user [_user_2] + +The user fields describe information about the user that is relevant to the event. Fields can have one entry or multiple entries. If a user has more than one id, provide an array that includes all of them. + +**`user.changes.domain`** +: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`user.changes.email`** +: User email address. 
+ +type: keyword + + +**`user.changes.full_name`** +: User’s full name, if available. + +type: keyword + +example: Albert Einstein + + +**`user.changes.full_name.text`** +: type: match_only_text + + +**`user.changes.group.domain`** +: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`user.changes.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`user.changes.group.name`** +: Name of the group. + +type: keyword + + +**`user.changes.hash`** +: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used. + +type: keyword + + +**`user.changes.id`** +: Unique identifier of the user. + +type: keyword + +example: S-1-5-21-202424912787-2692429404-2351956786-1000 + + +**`user.changes.name`** +: Short name or login of the user. + +type: keyword + +example: a.einstein + + +**`user.changes.name.text`** +: type: match_only_text + + +**`user.changes.roles`** +: Array of user roles at the time of the event. + +type: keyword + +example: ["kibana_admin", "reporting_user"] + + +**`user.domain`** +: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`user.effective.domain`** +: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`user.effective.email`** +: User email address. + +type: keyword + + +**`user.effective.full_name`** +: User’s full name, if available. + +type: keyword + +example: Albert Einstein + + +**`user.effective.full_name.text`** +: type: match_only_text + + +**`user.effective.group.domain`** +: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`user.effective.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`user.effective.group.name`** +: Name of the group. + +type: keyword + + +**`user.effective.hash`** +: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used. + +type: keyword + + +**`user.effective.id`** +: Unique identifier of the user. + +type: keyword + +example: S-1-5-21-202424912787-2692429404-2351956786-1000 + + +**`user.effective.name`** +: Short name or login of the user. + +type: keyword + +example: a.einstein + + +**`user.effective.name.text`** +: type: match_only_text + + +**`user.effective.roles`** +: Array of user roles at the time of the event. + +type: keyword + +example: ["kibana_admin", "reporting_user"] + + +**`user.email`** +: User email address. + +type: keyword + + +**`user.full_name`** +: User’s full name, if available. + +type: keyword + +example: Albert Einstein + + +**`user.full_name.text`** +: type: match_only_text + + +**`user.group.domain`** +: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`user.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`user.group.name`** +: Name of the group. + +type: keyword + + +**`user.hash`** +: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used. 
+ +type: keyword + + +**`user.id`** +: Unique identifier of the user. + +type: keyword + +example: S-1-5-21-202424912787-2692429404-2351956786-1000 + + +**`user.name`** +: Short name or login of the user. + +type: keyword + +example: a.einstein + + +**`user.name.text`** +: type: match_only_text + + +**`user.roles`** +: Array of user roles at the time of the event. + +type: keyword + +example: ["kibana_admin", "reporting_user"] + + +**`user.target.domain`** +: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`user.target.email`** +: User email address. + +type: keyword + + +**`user.target.full_name`** +: User’s full name, if available. + +type: keyword + +example: Albert Einstein + + +**`user.target.full_name.text`** +: type: match_only_text + + +**`user.target.group.domain`** +: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`user.target.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`user.target.group.name`** +: Name of the group. + +type: keyword + + +**`user.target.hash`** +: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used. + +type: keyword + + +**`user.target.id`** +: Unique identifier of the user. + +type: keyword + +example: S-1-5-21-202424912787-2692429404-2351956786-1000 + + +**`user.target.name`** +: Short name or login of the user. + +type: keyword + +example: a.einstein + + +**`user.target.name.text`** +: type: match_only_text + + +**`user.target.roles`** +: Array of user roles at the time of the event. + +type: keyword + +example: ["kibana_admin", "reporting_user"] + + + +## user_agent [_user_agent] + +The user_agent fields normally come from a browser request. They often show up in web service logs coming from the parsed user agent string. + +**`user_agent.device.name`** +: Name of the device. + +type: keyword + +example: iPhone + + +**`user_agent.name`** +: Name of the user agent. + +type: keyword + +example: Safari + + +**`user_agent.original`** +: Unparsed user_agent string. + +type: keyword + +example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1 + + +**`user_agent.original.text`** +: type: match_only_text + + +**`user_agent.os.family`** +: OS family (such as redhat, debian, freebsd, windows). + +type: keyword + +example: debian + + +**`user_agent.os.full`** +: Operating system name, including the version or code name. + +type: keyword + +example: Mac OS Mojave + + +**`user_agent.os.full.text`** +: type: match_only_text + + +**`user_agent.os.kernel`** +: Operating system kernel version as a raw string. + +type: keyword + +example: 4.4.0-112-generic + + +**`user_agent.os.name`** +: Operating system name, without the version. + +type: keyword + +example: Mac OS X + + +**`user_agent.os.name.text`** +: type: match_only_text + + +**`user_agent.os.platform`** +: Operating system platform (such as centos, ubuntu, windows). + +type: keyword + +example: darwin + + +**`user_agent.os.type`** +: Use the `os.type` field to categorize the operating system into one of the broad commercial families. One of the following values should be used (lowercase): linux, macos, unix, windows. If the OS you’re dealing with is not in the list, the field should not be populated. 
Please let us know by opening an issue with ECS, to propose its addition. + +type: keyword + +example: macos + + +**`user_agent.os.version`** +: Operating system version as a raw string. + +type: keyword + +example: 10.14.1 + + +**`user_agent.version`** +: Version of the user agent. + +type: keyword + +example: 12.0 + + + +## vlan [_vlan] + +The VLAN fields are used to identify 802.1q tag(s) of a packet, as well as ingress and egress VLAN associations of an observer in relation to a specific packet or connection. Network.vlan fields are used to record a single VLAN tag, or the outer tag in the case of q-in-q encapsulations, for a packet or connection as observed, typically provided by a network sensor (e.g. Zeek, Wireshark) passively reporting on traffic. Network.inner VLAN fields are used to report inner q-in-q 802.1q tags (multiple 802.1q encapsulations) as observed, typically provided by a network sensor (e.g. Zeek, Wireshark) passively reporting on traffic. Network.inner VLAN fields should only be used in addition to network.vlan fields to indicate q-in-q tagging. Observer.ingress and observer.egress VLAN values are used to record observer specific information when observer events contain discrete ingress and egress VLAN information, typically provided by firewalls, routers, or load balancers. + +**`vlan.id`** +: VLAN ID as reported by the observer. + +type: keyword + +example: 10 + + +**`vlan.name`** +: Optional VLAN name as reported by the observer. + +type: keyword + +example: outside + + + +## vulnerability [_vulnerability] + +The vulnerability fields describe information about a vulnerability that is relevant to an event. + +**`vulnerability.category`** +: The type of system or architecture that the vulnerability affects. These may be platform-specific (for example, Debian or SUSE) or general (for example, Database or Firewall). For example ([Qualys vulnerability categories](https://qualysguard.qualys.com/qwebhelp/fo_portal/knowledgebase/vulnerability_categories.htm)) This field must be an array. + +type: keyword + +example: ["Firewall"] + + +**`vulnerability.classification`** +: The classification of the vulnerability scoring system. For example ([https://www.first.org/cvss/](https://www.first.org/cvss/)) + +type: keyword + +example: CVSS + + +**`vulnerability.description`** +: The description of the vulnerability that provides additional context of the vulnerability. For example ([Common Vulnerabilities and Exposure CVE description](https://cve.mitre.org/about/faqs.html#cve_entry_descriptions_created)) + +type: keyword + +example: In macOS before 2.12.6, there is a vulnerability in the RPC…​ + + +**`vulnerability.description.text`** +: type: match_only_text + + +**`vulnerability.enumeration`** +: The type of identifier used for this vulnerability. For example ([https://cve.mitre.org/about/](https://cve.mitre.org/about/)) + +type: keyword + +example: CVE + + +**`vulnerability.id`** +: The identification (ID) is the number portion of a vulnerability entry. It includes a unique identification number for the vulnerability. For example ([Common Vulnerabilities and Exposure CVE ID](https://cve.mitre.org/about/faqs.html#what_is_cve_id)) + +type: keyword + +example: CVE-2019-00001 + + +**`vulnerability.reference`** +: A resource that provides additional information, context, and mitigations for the identified vulnerability. 
+ +type: keyword + +example: [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6111](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6111) + + +**`vulnerability.report_id`** +: The report or scan identification number. + +type: keyword + +example: 20191018.0001 + + +**`vulnerability.scanner.vendor`** +: The name of the vulnerability scanner vendor. + +type: keyword + +example: Tenable + + +**`vulnerability.score.base`** +: Scores can range from 0.0 to 10.0, with 10.0 being the most severe. Base scores cover an assessment for exploitability metrics (attack vector, complexity, privileges, and user interaction), impact metrics (confidentiality, integrity, and availability), and scope. For example ([https://www.first.org/cvss/specification-document](https://www.first.org/cvss/specification-document)) + +type: float + +example: 5.5 + + +**`vulnerability.score.environmental`** +: Scores can range from 0.0 to 10.0, with 10.0 being the most severe. Environmental scores cover an assessment for any modified Base metrics, confidentiality, integrity, and availability requirements. For example ([https://www.first.org/cvss/specification-document](https://www.first.org/cvss/specification-document)) + +type: float + +example: 5.5 + + +**`vulnerability.score.temporal`** +: Scores can range from 0.0 to 10.0, with 10.0 being the most severe. Temporal scores cover an assessment for code maturity, remediation level, and confidence. For example ([https://www.first.org/cvss/specification-document](https://www.first.org/cvss/specification-document)) + +type: float + + +**`vulnerability.score.version`** +: The National Vulnerability Database (NVD) provides qualitative severity rankings of "Low", "Medium", and "High" for CVSS v2.0 base score ranges in addition to the severity ratings for CVSS v3.0 as they are defined in the CVSS v3.0 specification. CVSS is owned and managed by FIRST.Org, Inc. (FIRST), a US-based non-profit organization, whose mission is to help computer security incident response teams across the world. For example ([https://nvd.nist.gov/vuln-metrics/cvss](https://nvd.nist.gov/vuln-metrics/cvss)) + +type: keyword + +example: 2.0 + + +**`vulnerability.severity`** +: The severity of the vulnerability can help with metrics and internal prioritization regarding remediation. For example ([https://nvd.nist.gov/vuln-metrics/cvss](https://nvd.nist.gov/vuln-metrics/cvss)) + +type: keyword + +example: Critical + + + +## x509 [_x509] + +This implements the common core fields for x509 certificates. This information is likely logged with TLS sessions, digital signatures found in executable binaries, S/MIME information in email bodies, or analysis of files on disk. When the certificate relates to a file, use the fields at `file.x509`. When hashes of the DER-encoded certificate are available, the `hash` data set should be populated as well (e.g. `file.hash.sha256`). Events that contain certificate information about network connections should use the x509 fields under the relevant TLS fields: `tls.server.x509` and/or `tls.client.x509`. + +**`x509.alternative_names`** +: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. + +type: keyword + +example: *.elastic.co
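+As a non-authoritative sketch of how a handful of these `x509.*` fields could be populated from a PEM certificate, here is an example using the third-party `cryptography` package (an assumption for illustration; it is not what the Beat uses internally, and `cert.pem` is a hypothetical file):
+
+```python
+# Sketch: derive a few x509.* values from a PEM certificate with the
+# third-party "cryptography" package.
+from cryptography import x509
+
+cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
+doc = {
+    "x509.subject.distinguished_name": cert.subject.rfc4514_string(),
+    "x509.issuer.distinguished_name": cert.issuer.rfc4514_string(),
+    "x509.not_before": cert.not_valid_before,               # first considered valid
+    "x509.not_after": cert.not_valid_after,                 # no longer considered valid
+    "x509.serial_number": format(cert.serial_number, "X"),  # uppercase, no colons
+}
+```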
+ + +**`x509.issuer.common_name`** +: List of common name (CN) of issuing certificate authority. + +type: keyword + +example: Example SHA2 High Assurance Server CA + + +**`x509.issuer.country`** +: List of country (C) codes + +type: keyword + +example: US + + +**`x509.issuer.distinguished_name`** +: Distinguished name (DN) of issuing certificate authority. + +type: keyword + +example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA + + +**`x509.issuer.locality`** +: List of locality names (L) + +type: keyword + +example: Mountain View + + +**`x509.issuer.organization`** +: List of organizations (O) of issuing certificate authority. + +type: keyword + +example: Example Inc + + +**`x509.issuer.organizational_unit`** +: List of organizational units (OU) of issuing certificate authority. + +type: keyword + +example: www.example.com + + +**`x509.issuer.state_or_province`** +: List of state or province names (ST, S, or P) + +type: keyword + +example: California + + +**`x509.not_after`** +: Time at which the certificate is no longer considered valid. + +type: date + +example: 2020-07-16 03:15:39+00:00 + + +**`x509.not_before`** +: Time at which the certificate is first considered valid. + +type: date + +example: 2019-08-16 01:40:25+00:00 + + +**`x509.public_key_algorithm`** +: Algorithm used to generate the public key. + +type: keyword + +example: RSA + + +**`x509.public_key_curve`** +: The curve used by the elliptic curve public key algorithm. This is algorithm specific. + +type: keyword + +example: nistp521 + + +**`x509.public_key_exponent`** +: Exponent used to derive the public key. This is algorithm specific. + +type: long + +example: 65537 + +Field is not indexed. + + +**`x509.public_key_size`** +: The size of the public key space in bits. + +type: long + +example: 2048 + + +**`x509.serial_number`** +: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. + +type: keyword + +example: 55FBB9C7DEBF09809D12CCAA + + +**`x509.signature_algorithm`** +: Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353). + +type: keyword + +example: SHA256-RSA + + +**`x509.subject.common_name`** +: List of common names (CN) of subject. + +type: keyword + +example: shared.global.example.net + + +**`x509.subject.country`** +: List of country (C) codes + +type: keyword + +example: US + + +**`x509.subject.distinguished_name`** +: Distinguished name (DN) of the certificate subject entity. + +type: keyword + +example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net + + +**`x509.subject.locality`** +: List of locality names (L) + +type: keyword + +example: San Francisco + + +**`x509.subject.organization`** +: List of organizations (O) of subject. + +type: keyword + +example: Example, Inc. + + +**`x509.subject.organizational_unit`** +: List of organizational units (OU) of subject. + +type: keyword + + +**`x509.subject.state_or_province`** +: List of state or province names (ST, S, or P) + +type: keyword + +example: California + + +**`x509.version_number`** +: Version of x509 format. 
+ +type: keyword + +example: 3 + + diff --git a/docs/reference/auditbeat/exported-fields-file_integrity.md b/docs/reference/auditbeat/exported-fields-file_integrity.md new file mode 100644 index 000000000000..6b3ed764019e --- /dev/null +++ b/docs/reference/auditbeat/exported-fields-file_integrity.md @@ -0,0 +1,424 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/exported-fields-file_integrity.html +--- + +# File Integrity fields [exported-fields-file_integrity] + +These are the fields generated by the file_integrity module. + + +## file [_file_3] + +File attributes. + + +## elf [_elf_2] + +These fields contain Linux Executable Linkable Format (ELF) metadata. + +**`file.elf.go_imports`** +: List of imported Go language element names and types. + +type: flattened + + +**`file.elf.go_imports_names_entropy`** +: Shannon entropy calculation from the list of Go imports. + +type: long + +format: number + + +**`file.elf.go_imports_names_var_entropy`** +: Variance for Shannon entropy calculation from the list of Go imports. + +type: long + +format: number + + +**`file.elf.go_import_hash`** +: A hash of the Go language imports in an ELF file excluding standard library imports. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. The algorithm used to calculate the Go symbol hash and a reference implementation are available [here](https://github.com/elastic/toutoumomoma). + +type: keyword + +example: 10bddcb4cee42080f76c88d9ff964491 + + +**`file.elf.go_stripped`** +: Set to true if the file is a Go executable that has had its symbols stripped or obfuscated, and false if it is an unobfuscated Go executable. + +type: boolean + + +**`file.elf.imports_names_entropy`** +: Shannon entropy calculation from the list of imported element names and types. + +type: long + +format: number + + +**`file.elf.imports_names_var_entropy`** +: Variance for Shannon entropy calculation from the list of imported element names and types. + +type: long + +format: number + + +**`file.elf.import_hash`** +: A hash of the imports in an ELF file. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. This is an ELF implementation of the Windows PE imphash. + +type: keyword + +example: d41d8cd98f00b204e9800998ecf8427e + + +**`file.elf.sections.var_entropy`** +: Variance for Shannon entropy calculation from the section. + +type: long + +format: number + + + +## macho [_macho] + +These fields contain Mach object file Format (Mach-O) metadata. + +**`file.macho.go_imports`** +: List of imported Go language element names and types. + +type: flattened + + +**`file.macho.go_imports_names_entropy`** +: Shannon entropy calculation from the list of Go imports. + +type: long + +format: number + + +**`file.macho.go_imports_names_var_entropy`** +: Variance for Shannon entropy calculation from the list of Go imports. + +type: long + +format: number
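+Several of the `*_entropy` fields in this module record Shannon entropy, plus a variance, computed over lists of names. As a minimal sketch under stated assumptions (the exact byte-level normalization the module applies is not spelled out in this reference), one plausible reading is entropy over the byte frequencies of the joined name list:
+
+```python
+# Sketch only: Shannon entropy H = -sum(p * log2(p)) over byte frequencies of
+# the joined name list, and the variance of the surprisal -log2(p). The exact
+# normalization Auditbeat uses is an assumption here.
+import math
+from collections import Counter
+
+def entropy_stats(names):
+    data = "".join(names).encode()
+    if not data:
+        return 0.0, 0.0
+    probs = [n / len(data) for n in Counter(data).values()]
+    h = -sum(p * math.log2(p) for p in probs)                  # Shannon entropy
+    var = sum(p * math.log2(p) ** 2 for p in probs) - h ** 2   # Var(-log2 p)
+    return h, var
+```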
+ + +**`file.macho.go_import_hash`** +: A hash of the Go language imports in a Mach-O file excluding standard library imports. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. The algorithm used to calculate the Go symbol hash and a reference implementation are available [here](https://github.com/elastic/toutoumomoma). + +type: keyword + +example: 10bddcb4cee42080f76c88d9ff964491 + + +**`file.macho.go_stripped`** +: Set to true if the file is a Go executable that has had its symbols stripped or obfuscated, and false if it is an unobfuscated Go executable. + +type: boolean + + +**`file.macho.imports`** +: List of imported element names and types. + +type: flattened + + +**`file.macho.imports_names_entropy`** +: Shannon entropy calculation from the list of imported element names and types. + +type: long + +format: number + + +**`file.macho.imports_names_var_entropy`** +: Variance for Shannon entropy calculation from the list of imported element names and types. + +type: long + +format: number + + +**`file.macho.import_hash`** +: A hash of the imports in a Mach-O file. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. This is a synonym for symhash. + +type: keyword + +example: d3ccf195b62a9279c3c19af1080497ec + + +**`file.macho.sections`** +: An array containing an object for each section of the Mach-O file. The keys that should be present in these objects are defined by sub-fields underneath `macho.sections.*`. + +type: nested + + +**`file.macho.sections.entropy`** +: Shannon entropy calculation from the section. + +type: long + +format: number + + +**`file.macho.sections.var_entropy`** +: Variance for Shannon entropy calculation from the section. + +type: long + +format: number + + +**`file.macho.sections.name`** +: Mach-O Section List name. + +type: keyword + + +**`file.macho.sections.physical_size`** +: Mach-O Section List physical size. + +type: long + +format: string + + +**`file.macho.sections.virtual_size`** +: Mach-O Section List virtual size. + +type: long + +format: string + + +**`file.macho.symhash`** +: A hash of the imports in a Mach-O file. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. + +type: keyword + +example: d3ccf195b62a9279c3c19af1080497ec + + + +## pe [_pe_2] + +These fields contain Windows Portable Executable (PE) metadata. + +**`file.pe.go_imports`** +: List of imported Go language element names and types. + +type: flattened + + +**`file.pe.go_imports_names_entropy`** +: Shannon entropy calculation from the list of Go imports. + +type: long + +format: number + + +**`file.pe.go_imports_names_var_entropy`** +: Variance for Shannon entropy calculation from the list of Go imports. + +type: long + +format: number + + +**`file.pe.go_import_hash`** +: A hash of the Go language imports in a PE file excluding standard library imports. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. The algorithm used to calculate the Go symbol hash and a reference implementation are available [here](https://github.com/elastic/toutoumomoma). + +type: keyword + +example: 10bddcb4cee42080f76c88d9ff964491 + + +**`file.pe.go_stripped`** +: Set to true if the file is a Go executable that has had its symbols stripped or obfuscated, and false if it is an unobfuscated Go executable. + +type: boolean
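+The `*.import_hash` and `*.symhash` fields above all follow the Windows imphash idea: hash a normalized, ordered list of imported symbols so the value survives recompilation. A simplified sketch of that general scheme (the real imphash/symhash normalization rules, such as extension stripping and ordinal resolution, are more involved than shown):
+
+```python
+# Simplified sketch of the import-hash scheme: MD5 over ordered, normalized
+# "library.symbol" pairs, comma-joined. Not the exact imphash/symhash spec.
+import hashlib
+
+def import_hash(imports):
+    normalized = [f"{lib.lower()}.{sym.lower()}" for lib, sym in imports]
+    return hashlib.md5(",".join(normalized).encode()).hexdigest()
+
+print(import_hash([("KERNEL32.dll", "CreateFileW"), ("USER32.dll", "MessageBoxW")]))
+```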
+ +type: flattened + + +**`file.pe.imports_names_entropy`** +: Shannon entropy calculation from the list of imported element names and types. + +type: long + +format: number + + +**`file.pe.imports_names_var_entropy`** +: Variance for Shannon entropy calculation from the list of imported element names and types. + +type: long + +format: number + + +**`file.pe.import_hash`** +: A hash of the imports in a PE file. An import hash can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. This is a synonym for imphash. + +type: keyword + + +**`file.pe.sections`** +: An array containing an object for each section of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `pe.sections.*`. + +type: nested + + +**`file.pe.sections.entropy`** +: Shannon entropy calculation from the section. + +type: long + +format: number + + +**`file.pe.sections.var_entropy`** +: Variance for Shannon entropy calculation from the section. + +type: long + +format: number + + +**`file.pe.sections.name`** +: PE Section List name. + +type: keyword + + +**`file.pe.sections.physical_size`** +: PE Section List physical size. + +type: long + +format: string + + +**`file.pe.sections.virtual_size`** +: PE Section List virtual size. + +type: long + +format: string + + + +## hash [_hash_2] + +Hashes of the file. The keys are algorithm names and the values are the hex encoded digest values. + +**`hash.blake2b_256`** +: BLAKE2b-256 hash of the file. + +type: keyword + + +**`hash.blake2b_384`** +: BLAKE2b-384 hash of the file. + +type: keyword + + +**`hash.blake2b_512`** +: BLAKE2b-512 hash of the file. + +type: keyword + + +**`hash.md5`** +: MD5 hash of the file. + +type: keyword + + +**`hash.sha1`** +: SHA1 hash of the file. + +type: keyword + + +**`hash.sha224`** +: SHA224 hash of the file. + +type: keyword + + +**`hash.sha256`** +: SHA256 hash of the file. + +type: keyword + + +**`hash.sha384`** +: SHA384 hash of the file. + +type: keyword + + +**`hash.sha3_224`** +: SHA3_224 hash of the file. + +type: keyword + + +**`hash.sha3_256`** +: SHA3_256 hash of the file. + +type: keyword + + +**`hash.sha3_384`** +: SHA3_384 hash of the file. + +type: keyword + + +**`hash.sha3_512`** +: SHA3_512 hash of the file. + +type: keyword + + +**`hash.sha512`** +: SHA512 hash of the file. + +type: keyword + + +**`hash.sha512_224`** +: SHA512/224 hash of the file. + +type: keyword + + +**`hash.sha512_256`** +: SHA512/256 hash of the file. + +type: keyword + + +**`hash.xxh64`** +: XX64 hash of the file. + +type: keyword + + diff --git a/docs/reference/auditbeat/exported-fields-host-processor.md b/docs/reference/auditbeat/exported-fields-host-processor.md new file mode 100644 index 000000000000..000cd178de6c --- /dev/null +++ b/docs/reference/auditbeat/exported-fields-host-processor.md @@ -0,0 +1,31 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/exported-fields-host-processor.html +--- + +# Host fields [exported-fields-host-processor] + +Info collected for the host machine. + +**`host.containerized`** +: If the host is a container. + +type: boolean + + +**`host.os.build`** +: OS build information. + +type: keyword + +example: 18D109 + + +**`host.os.codename`** +: OS codename, if any. 
+ +type: keyword + +example: stretch + + diff --git a/docs/reference/auditbeat/exported-fields-jolokia-autodiscover.md b/docs/reference/auditbeat/exported-fields-jolokia-autodiscover.md new file mode 100644 index 000000000000..c6cb8d08936c --- /dev/null +++ b/docs/reference/auditbeat/exported-fields-jolokia-autodiscover.md @@ -0,0 +1,51 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/exported-fields-jolokia-autodiscover.html +--- + +# Jolokia Discovery autodiscover provider fields [exported-fields-jolokia-autodiscover] + +Metadata from Jolokia Discovery added by the jolokia provider. + +**`jolokia.agent.version`** +: Version number of the Jolokia agent. + +type: keyword + + +**`jolokia.agent.id`** +: Each agent has a unique ID, which can either be provided during startup of the agent in the form of a configuration parameter or be autodetected. If autodetected, the ID has several parts: the IP, the process ID, a hash code of the agent, and its type. + +type: keyword + + +**`jolokia.server.product`** +: The container product if detected. + +type: keyword + + +**`jolokia.server.version`** +: The container’s version (if detected). + +type: keyword + + +**`jolokia.server.vendor`** +: The vendor of the container the agent is running in. + +type: keyword + + +**`jolokia.url`** +: The URL under which this agent can be contacted. + +type: keyword + + +**`jolokia.secured`** +: Whether the agent was configured for authentication or not. + +type: boolean + + diff --git a/docs/reference/auditbeat/exported-fields-kubernetes-processor.md b/docs/reference/auditbeat/exported-fields-kubernetes-processor.md new file mode 100644 index 000000000000..e7e61358f7f0 --- /dev/null +++ b/docs/reference/auditbeat/exported-fields-kubernetes-processor.md @@ -0,0 +1,87 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/exported-fields-kubernetes-processor.html +--- + +# Kubernetes fields [exported-fields-kubernetes-processor] + +Kubernetes metadata added by the kubernetes processor. + +**`kubernetes.pod.name`** +: Kubernetes pod name + +type: keyword + + +**`kubernetes.pod.uid`** +: Kubernetes Pod UID + +type: keyword + + +**`kubernetes.pod.ip`** +: Kubernetes Pod IP + +type: ip + + +**`kubernetes.namespace`** +: Kubernetes namespace + +type: keyword + + +**`kubernetes.node.name`** +: Kubernetes node name + +type: keyword + + +**`kubernetes.node.hostname`** +: Kubernetes hostname as reported by the node’s kernel + +type: keyword + + +**`kubernetes.labels.*`** +: Kubernetes labels map + +type: object + + +**`kubernetes.annotations.*`** +: Kubernetes annotations map + +type: object + + +**`kubernetes.selectors.*`** +: Kubernetes selectors map + +type: object + + +**`kubernetes.replicaset.name`** +: Kubernetes replicaset name + +type: keyword + + +**`kubernetes.deployment.name`** +: Kubernetes deployment name + +type: keyword + + +**`kubernetes.statefulset.name`** +: Kubernetes statefulset name + +type: keyword + + +**`kubernetes.container.name`** +: Kubernetes container name (different from the name reported by the container runtime) + +type: keyword + + diff --git a/docs/reference/auditbeat/exported-fields-process.md b/docs/reference/auditbeat/exported-fields-process.md new file mode 100644 index 000000000000..91f014574da2 --- /dev/null +++ b/docs/reference/auditbeat/exported-fields-process.md @@ -0,0 +1,38 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/exported-fields-process.html +--- + +# Process fields [exported-fields-process] + +Process metadata
fields + +**`process.exe`** +: type: alias + +alias to: process.executable + + + +## owner [_owner] + +Process owner information. + +**`process.owner.id`** +: Unique identifier of the user. + +type: keyword + + +**`process.owner.name`** +: Short name or login of the user. + +type: keyword + +example: albert + + +**`process.owner.name.text`** +: type: text + + diff --git a/docs/reference/auditbeat/exported-fields-system.md b/docs/reference/auditbeat/exported-fields-system.md new file mode 100644 index 000000000000..913ca9a32c33 --- /dev/null +++ b/docs/reference/auditbeat/exported-fields-system.md @@ -0,0 +1,364 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/exported-fields-system.html +--- + +# System fields [exported-fields-system] + +These are the fields generated by the system module. + +**`event.origin`** +: Origin of the event. This can be a file path (e.g. `/var/log/log.1`), or the name of the system component that supplied the data (e.g. `netlink`). + +type: keyword + + +**`user.entity_id`** +: ID uniquely identifying the user on a host. It is computed as a SHA-256 hash of the host ID, user ID, and user name. + +type: keyword + + +**`user.terminal`** +: Terminal of the user. + +type: keyword + + +**`process.thread.capabilities.effective`** +: This is the set of capabilities used by the kernel to perform permission checks for the thread. + +type: keyword + +example: ["CAP_BPF", "CAP_SYS_ADMIN"] + + +**`process.thread.capabilities.permitted`** +: This is a limiting superset for the effective capabilities that the thread may assume. + +type: keyword + +example: ["CAP_BPF", "CAP_SYS_ADMIN"] + + + +## hash [_hash_3] + +Hashes of the executable. The keys are algorithm names and the values are the hex encoded digest values. + +**`process.hash.blake2b_256`** +: BLAKE2b-256 hash of the executable. + +type: keyword + + +**`process.hash.blake2b_384`** +: BLAKE2b-384 hash of the executable. + +type: keyword + + +**`process.hash.blake2b_512`** +: BLAKE2b-512 hash of the executable. + +type: keyword + + +**`process.hash.sha224`** +: SHA224 hash of the executable. + +type: keyword + + +**`process.hash.sha384`** +: SHA384 hash of the executable. + +type: keyword + + +**`process.hash.sha3_224`** +: SHA3_224 hash of the executable. + +type: keyword + + +**`process.hash.sha3_256`** +: SHA3_256 hash of the executable. + +type: keyword + + +**`process.hash.sha3_384`** +: SHA3_384 hash of the executable. + +type: keyword + + +**`process.hash.sha3_512`** +: SHA3_512 hash of the executable. + +type: keyword + + +**`process.hash.sha512_224`** +: SHA512/224 hash of the executable. + +type: keyword + + +**`process.hash.sha512_256`** +: SHA512/256 hash of the executable. + +type: keyword + + +**`process.hash.xxh64`** +: XX64 hash of the executable. + +type: keyword + + + +## system.audit [_system_audit] + + +## host [_host_2] + +`host` contains general host information. + +**`system.audit.host.uptime`** +: Uptime in nanoseconds. + +type: long + +format: duration + + +**`system.audit.host.boottime`** +: Boot time. + +type: date + + +**`system.audit.host.containerized`** +: Set if host is a container. + +type: boolean + + +**`system.audit.host.timezone.name`** +: Name of the timezone of the host, e.g. BST. + +type: keyword + + +**`system.audit.host.timezone.offset.sec`** +: Timezone offset in seconds. + +type: long + + +**`system.audit.host.hostname`** +: Hostname. + +type: keyword + + +**`system.audit.host.id`** +: Host ID. 
+ +type: keyword + + +**`system.audit.host.architecture`** +: Host architecture (e.g. x86_64). + +type: keyword + + +**`system.audit.host.mac`** +: MAC addresses. + +type: keyword + + +**`system.audit.host.ip`** +: IP addresses. + +type: ip + + + +## os [_os_2] + +`os` contains information about the operating system. + +**`system.audit.host.os.codename`** +: OS codename, if any (e.g. stretch). + +type: keyword + + +**`system.audit.host.os.platform`** +: OS platform (e.g. centos, ubuntu, windows). + +type: keyword + + +**`system.audit.host.os.name`** +: OS name (e.g. Mac OS X). + +type: keyword + + +**`system.audit.host.os.family`** +: OS family (e.g. redhat, debian, freebsd, windows). + +type: keyword + + +**`system.audit.host.os.version`** +: OS version. + +type: keyword + + +**`system.audit.host.os.kernel`** +: The operating system’s kernel version. + +type: keyword + + +**`system.audit.host.os.type`** +: OS type (see ECS os.type). + +type: keyword + + + +## package [_package_2] + +`package` contains information about an installed or removed package. + +**`system.audit.package.entity_id`** +: ID uniquely identifying the package. It is computed as a SHA-256 hash of the host ID, package name, and package version. + +type: keyword + + +**`system.audit.package.name`** +: Package name. + +type: keyword + + +**`system.audit.package.version`** +: Package version. + +type: keyword + + +**`system.audit.package.release`** +: Package release. + +type: keyword + + +**`system.audit.package.arch`** +: Package architecture. + +type: keyword + + +**`system.audit.package.license`** +: Package license. + +type: keyword + + +**`system.audit.package.installtime`** +: Package install time. + +type: date + + +**`system.audit.package.size`** +: Package size. + +type: long + + +**`system.audit.package.summary`** +: Package summary. + + +**`system.audit.package.url`** +: Package URL. + +type: keyword + + + +## user [_user_3] + +`user` contains information about the users on a system. + +**`system.audit.user.name`** +: User name. + +type: keyword + + +**`system.audit.user.uid`** +: User ID. + +type: keyword + + +**`system.audit.user.gid`** +: Group ID. + +type: keyword + + +**`system.audit.user.dir`** +: User’s home directory. + +type: keyword + + +**`system.audit.user.shell`** +: Program to run at login. + +type: keyword + + +**`system.audit.user.user_information`** +: General user information. On Linux, this is the gecos field. + +type: keyword + + +**`system.audit.user.group`** +: `group` contains information about any groups the user is part of (beyond the user’s primary group). + +type: object + + + +## password [_password_5] + +`password` contains information about a user’s password (not the password itself). + +**`system.audit.user.password.type`** +: A user’s password type. Possible values are `shadow_password` (the password hash is in the shadow file), `password_disabled`, `no_password` (this is dangerous as anyone can log in), and `crypt_password` (when the password field in /etc/passwd seems to contain an encrypted password). + +type: keyword + + +**`system.audit.user.password.last_changed`** +: The day the user’s password was last changed. 
+ +type: date + + diff --git a/docs/reference/auditbeat/exported-fields.md b/docs/reference/auditbeat/exported-fields.md new file mode 100644 index 000000000000..61e48cfd67aa --- /dev/null +++ b/docs/reference/auditbeat/exported-fields.md @@ -0,0 +1,22 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/exported-fields.html +--- + +# Exported fields [exported-fields] + +This document describes the fields that are exported by Auditbeat. They are grouped in the following categories: + +* [*Auditd fields*](/reference/auditbeat/exported-fields-auditd.md) +* [*Beat fields*](/reference/auditbeat/exported-fields-beat-common.md) +* [*Cloud provider metadata fields*](/reference/auditbeat/exported-fields-cloud.md) +* [*Common fields*](/reference/auditbeat/exported-fields-common.md) +* [*Docker fields*](/reference/auditbeat/exported-fields-docker-processor.md) +* [*ECS fields*](/reference/auditbeat/exported-fields-ecs.md) +* [*File Integrity fields*](/reference/auditbeat/exported-fields-file_integrity.md) +* [*Host fields*](/reference/auditbeat/exported-fields-host-processor.md) +* [*Jolokia Discovery autodiscover provider fields*](/reference/auditbeat/exported-fields-jolokia-autodiscover.md) +* [*Kubernetes fields*](/reference/auditbeat/exported-fields-kubernetes-processor.md) +* [*Process fields*](/reference/auditbeat/exported-fields-process.md) +* [*System fields*](/reference/auditbeat/exported-fields-system.md) + diff --git a/docs/reference/auditbeat/extract-array.md b/docs/reference/auditbeat/extract-array.md new file mode 100644 index 000000000000..fd8555430899 --- /dev/null +++ b/docs/reference/auditbeat/extract-array.md @@ -0,0 +1,46 @@ +--- +navigation_title: "extract_array" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/extract-array.html +--- + +# Extract array [extract-array] + + +::::{warning} +This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. +:::: + + +The `extract_array` processor populates fields with values read from an array field. The following example will populate `source.ip` with the first element of the `my_array` field, `destination.ip` with the second element, and `network.transport` with the third. + +```yaml +processors: + - extract_array: + field: my_array + mappings: + source.ip: 0 + destination.ip: 1 + network.transport: 2 +``` + +The following settings are supported: + +`field` +: The array field whose elements are to be extracted. + +`mappings` +: Maps each field name to an array index. Use 0 for the first element in the array. Multiple fields can be mapped to the same array element. + +`ignore_missing` +: (Optional) Whether to ignore events where the array field is missing. The default is `false`, which will fail processing of an event if the specified field does not exist. Set it to `true` to ignore this condition. + +`overwrite_keys` +: Whether the target fields specified in the mapping are overwritten if they already exist. The default is `false`, which will fail processing if a target field already exists. + +`fail_on_error` +: (Optional) If set to `true` and an error happens, changes to the event are reverted, and the original event is returned. If set to `false`, processing continues despite errors. Default is `true`. + +`omit_empty` +: (Optional) Whether empty values are extracted from the array. 
If set to `true`, instead of the target field being set to an empty value, it is left unset. The empty string (`""`), an empty array (`[]`) or an empty object (`{}`) are considered empty values. Default is `false`. + diff --git a/docs/reference/auditbeat/faq.md b/docs/reference/auditbeat/faq.md new file mode 100644 index 000000000000..1b8d8e77bd95 --- /dev/null +++ b/docs/reference/auditbeat/faq.md @@ -0,0 +1,21 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/faq.html +--- + +# Common problems [faq] + +This section describes common problems you might encounter with Auditbeat. Also check out the [Auditbeat discussion forum](https://discuss.elastic.co/c/beats/auditbeat). + + + + + + + + + + + + + diff --git a/docs/reference/auditbeat/feature-roles.md b/docs/reference/auditbeat/feature-roles.md new file mode 100644 index 000000000000..f0d161ba9415 --- /dev/null +++ b/docs/reference/auditbeat/feature-roles.md @@ -0,0 +1,25 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/feature-roles.html +--- + +# Grant users access to secured resources [feature-roles] + +You can use role-based access control to grant users access to secured resources. The roles that you set up depend on your organization’s security requirements and the minimum privileges required to use specific features. + +Typically you need to create the following separate roles: + +* [setup role](/reference/auditbeat/privileges-to-setup-beats.md) for setting up index templates and other dependencies +* [monitoring role](/reference/auditbeat/privileges-to-publish-monitoring.md) for sending monitoring information +* [writer role](/reference/auditbeat/privileges-to-publish-events.md) for publishing events collected by Auditbeat +* [reader role](/reference/auditbeat/kibana-user-privileges.md) for {{kib}} users who need to view and create visualizations that access Auditbeat data + +{{es-security-features}} provides [built-in roles](elasticsearch://reference/elasticsearch/roles.md) that grant a subset of the privileges needed by Auditbeat users. When possible, use the built-in roles to minimize the effect of future changes on your security strategy. + +Instead of using usernames and passwords, roles and privileges can be assigned to API keys to grant access to Elasticsearch resources. See [*Grant access using API keys*](/reference/auditbeat/beats-api-keys.md) for more information. + + + + + + diff --git a/docs/reference/auditbeat/file-output.md b/docs/reference/auditbeat/file-output.md new file mode 100644 index 000000000000..ff782532405f --- /dev/null +++ b/docs/reference/auditbeat/file-output.md @@ -0,0 +1,89 @@ +--- +navigation_title: "File" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/file-output.html +--- + +# Configure the File output [file-output] + + +The File output dumps the transactions into a file, where each transaction is in JSON format. Currently, this output is used for testing, but it can be used as input for Logstash. + +To use this output, edit the Auditbeat configuration file to disable the {{es}} output by commenting it out, and enable the file output by adding `output.file`.
+ +Example configuration: + +```yaml +output.file: + path: "/tmp/auditbeat" + filename: auditbeat + #rotate_every_kb: 10000 + #number_of_files: 7 + #permissions: 0600 + #rotate_on_startup: true +``` + +## Configuration options [_configuration_options_6] + +You can specify the following `output.file` options in the `auditbeat.yml` config file: + +### `enabled` [_enabled_5] + +The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled. + +The default value is `true`. + + +### `path` [path] + +The path to the directory where the generated files will be saved. This option is mandatory. + +The path may include the timestamp when the file output is initialized using the `+FORMAT` syntax where `FORMAT` is a valid [time format](https://github.com/elastic/beats/blob/main/libbeat/common/dtfmt/doc.go), and enclosed with expansion braces: `%{+FORMAT}`. For example: + +``` +path: 'fileoutput-%{+yyyy.MM.dd}' +``` + + +### `filename` [_filename] + +The name of the generated files. The default is set to the Beat name. For example, the files generated by default for Auditbeat would be `"auditbeat-{{datetime}}.ndjson"`, `"auditbeat-{{datetime}}-1.ndjson"`, `"auditbeat-{{datetime}}-2.ndjson"`, and so on. + + +### `rotate_every_kb` [_rotate_every_kb] + +The maximum size in kilobytes of each file. When this size is reached, the files are rotated. The default value is 10240 KB. + + +### `number_of_files` [_number_of_files] + +The maximum number of files to save under [`path`](#path). When this number of files is reached, the oldest file is deleted, and the rest of the files are shifted from last to first. The number of files must be between 2 and 1024. The default is 7. + + +### `permissions` [_permissions] + +Permissions to use for file creation. The default is 0600. + + +### `rotate_on_startup` [_rotate_on_startup] + +If the output file already exists on startup, immediately rotate it and start writing to a new file instead of appending to the existing one. Defaults to true. + + +### `codec` [_codec_3] + +Output codec configuration. If the `codec` section is missing, events will be json encoded. + +See [Change the output codec](/reference/auditbeat/configuration-output-codec.md) for more information. + + +### `queue` [_queue_5] + +Configuration options for internal queue. + +See [Internal queue](/reference/auditbeat/configuring-internal-queue.md) for more information. + +Note:`queue` options can be set under `auditbeat.yml` or the `output` section but not both. + + + diff --git a/docs/reference/auditbeat/filtering-enhancing-data.md b/docs/reference/auditbeat/filtering-enhancing-data.md new file mode 100644 index 000000000000..ca9313943148 --- /dev/null +++ b/docs/reference/auditbeat/filtering-enhancing-data.md @@ -0,0 +1,70 @@ +--- +navigation_title: "Processors" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/filtering-and-enhancing-data.html +--- + +# Filter and enhance data with processors [filtering-and-enhancing-data] + + +You can [define processors](/reference/auditbeat/defining-processors.md) in your configuration to process events before they are sent to the configured output. The libbeat library provides processors for: + +* reducing the number of exported fields +* enhancing events with additional metadata +* performing additional processing and decoding + +Each processor receives an event, applies a defined action to the event, and returns the event. 
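For example, a minimal sketch of such a chain, using two of the processors linked below (`add_fields` and `drop_fields`); the project name and dropped field are illustrative choices, not defaults:

```yaml
processors:
  # enrich every event with static metadata (illustrative value)
  - add_fields:
      target: project
      fields:
        name: audit-poc
  # then remove a field that is not needed downstream (illustrative choice)
  - drop_fields:
      fields: ["host.architecture"]
```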
If you define a list of processors, they are executed in the order they are defined in the Auditbeat configuration file. + +``` +event -> processor 1 -> event1 -> processor 2 -> event2 ... +``` + +::::{important} +It’s recommended to do all dropping and renaming of existing fields as the last step in a processor configuration. This is because dropping or renaming fields can remove data necessary for the next processor in the chain; for example, dropping the `source.ip` field would remove one of the fields necessary for the `community_id` processor to function. If it’s necessary to remove, rename, or overwrite an existing event field, please make sure it’s done by a corresponding processor ([`drop_fields`](/reference/auditbeat/drop-fields.md), [`rename`](/reference/auditbeat/rename-fields.md) or [`add_fields`](/reference/auditbeat/add-fields.md)) placed at the end of the processor list defined in the input configuration. +:::: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/reference/auditbeat/fingerprint.md b/docs/reference/auditbeat/fingerprint.md new file mode 100644 index 000000000000..eaefff14b269 --- /dev/null +++ b/docs/reference/auditbeat/fingerprint.md @@ -0,0 +1,38 @@ +--- +navigation_title: "fingerprint" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/fingerprint.html +--- + +# Generate a fingerprint of an event [fingerprint] + + +The `fingerprint` processor generates a fingerprint of an event based on a specified subset of its fields. + +The value that is hashed is constructed as a concatenation of the field name and field value separated by `|`. For example `|field1|value1|field2|value2|`. + +Nested fields are supported in the following format: `"field1.field2"`, e.g. `["log.path.file", "foo"]`. + +```yaml +processors: + - fingerprint: + fields: ["field1", "field2", ...] +``` + +The following settings are supported: + +`fields` +: List of fields to use as the source for the fingerprint. The list will be alphabetically sorted by the processor. + +`ignore_missing` +: (Optional) Whether to ignore missing fields. Default is `false`. + +`target_field` +: (Optional) Field in which the generated fingerprint should be stored. Default is `fingerprint`. + +`method` +: (Optional) Algorithm to use for computing the fingerprint. Must be one of: `md5`, `sha1`, `sha256`, `sha384`, `sha512`, `xxhash`. Default is `sha256`. + +`encoding` +: (Optional) Encoding to use on the fingerprint value. Must be one of `hex`, `base32`, or `base64`. Default is `hex`. + diff --git a/docs/reference/auditbeat/getting-help.md b/docs/reference/auditbeat/getting-help.md new file mode 100644 index 000000000000..a3253c386d4b --- /dev/null +++ b/docs/reference/auditbeat/getting-help.md @@ -0,0 +1,16 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/getting-help.html +--- + +# Get help [getting-help] + +Start by searching the [Auditbeat discussion forum](https://discuss.elastic.co/c/beats/auditbeat) for your issue. If you can’t find a resolution, open a new issue or add a comment to an existing one. Make sure you provide the following information, and we’ll help you troubleshoot the problem: + +* Auditbeat version +* Operating system +* Configuration +* Any supporting information, such as debugging output, that will help us diagnose your problem. See [*Debug*](/reference/auditbeat/enable-auditbeat-debugging.md) for more details.
If you’re sure you found a bug, you can open a ticket on [GitHub](https://github.com/elastic/beats/issues?state=open). Note, however, that we close GitHub issues containing questions or requests for help if they don’t indicate the presence of a bug. + diff --git a/docs/reference/auditbeat/howto-guides.md b/docs/reference/auditbeat/howto-guides.md new file mode 100644 index 000000000000..04b8239315c2 --- /dev/null +++ b/docs/reference/auditbeat/howto-guides.md @@ -0,0 +1,18 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/howto-guides.html +--- + +# How to guides [howto-guides] + +Learn how to perform common Auditbeat configuration tasks. + +* [*Load the {{es}} index template*](/reference/auditbeat/auditbeat-template.md) +* [*Change the index name*](/reference/auditbeat/change-index-name.md) +* [*Load {{kib}} dashboards*](/reference/auditbeat/load-kibana-dashboards.md) +* [*Enrich events with geoIP information*](/reference/auditbeat/auditbeat-geoip.md) +* [*Use environment variables in the configuration*](/reference/auditbeat/using-environ-vars.md) +* [*Parse data using an ingest pipeline*](/reference/auditbeat/configuring-ingest-node.md) +* [*Avoid YAML formatting problems*](/reference/auditbeat/yaml-tips.md) + diff --git a/docs/reference/auditbeat/http-endpoint.md b/docs/reference/auditbeat/http-endpoint.md new file mode 100644 index 000000000000..c6686b788ec1 --- /dev/null +++ b/docs/reference/auditbeat/http-endpoint.md @@ -0,0 +1,187 @@ +--- +navigation_title: "HTTP endpoint" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/http-endpoint.html +--- + +# Configure an HTTP endpoint for metrics [http-endpoint] + + +::::{warning} +This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. +:::: + + +Auditbeat can expose internal metrics through an HTTP endpoint. These are useful for monitoring the internal state of the Beat. For security reasons the endpoint is disabled by default, as you may want to avoid exposing this information. + +The HTTP endpoint has the following configuration settings: + +`http.enabled` +: (Optional) Enable the HTTP endpoint. Default is `false`. + +`http.host` +: (Optional) Bind to this hostname, IP address, unix socket (unix:///var/run/auditbeat.sock) or Windows named pipe (npipe:///auditbeat). It is recommended to use only localhost. Default is `localhost`. + +`http.port` +: (Optional) Port on which the HTTP endpoint will bind. Default is `5066`. + +`http.named_pipe.user` +: (Optional) User to use to create the named pipe. Only works on Windows. Defaults to the current user. + +`http.named_pipe.security_descriptor` +: (Optional) Windows security descriptor string defined in SDDL format. Defaults to read and write permissions for the current user. + +`http.pprof.enabled` +: (Optional) Enable the `/debug/pprof/` endpoints when serving HTTP. It is recommended that this is only enabled on localhost as these endpoints may leak data. Default is `false`. + +`http.pprof.block_profile_rate` +: (Optional) `block_profile_rate` controls the fraction of goroutine blocking events that are reported in the blocking profile available from `/debug/pprof/block`. The profiler aims to sample an average of one blocking event per rate nanoseconds spent blocked.
To include every blocking event in the profile, pass rate = 1. To turn off profiling entirely, pass rate <= 0. Defaults to 0. + +`http.pprof.mem_profile_rate` +: (Optional) `mem_profile_rate` controls the fraction of memory allocations that are recorded and reported in the memory profile available from `/debug/pprof/heap`. The profiler aims to sample an average of one allocation per `mem_profile_rate` bytes allocated. To include every allocated block in the profile, set `mem_profile_rate` to 1. To turn off profiling entirely, set `mem_profile_rate` to 0. Defaults to 524288. + +`http.pprof.mutex_profile_rate` +: (Optional) `mutex_profile_rate` controls the fraction of mutex contention events that are reported in the mutex profile available from `/debug/pprof/mutex`. On average 1/rate events are reported. To turn off profiling entirely, pass rate 0. The default value is 0. + +This is the list of paths you can access. For pretty JSON output append `?pretty` to the URL. + +You can query a Unix socket using the `cURL` command and the `--unix-socket` flag. + +```js +curl -XGET --unix-socket '/var/run/auditbeat.sock' 'http:/stats/?pretty' +``` + + +## Info [_info] + +`/` provides basic info from Auditbeat. Example: + +```js +curl -XGET 'localhost:5066/?pretty' +``` + +```js +{ + "beat": "auditbeat", + "hostname": "example.lan", + "name": "example.lan", + "uuid": "34f6c6e1-45a8-4b12-9125-11b3e6e89866", + "version": "9.0.0-beta1" +} +``` + + +## Stats [_stats] + +`/stats` reports internal metrics. Example: + +```js +curl -XGET 'localhost:5066/stats?pretty' +``` + +```js +{ + "beat": { + "cpu": { + "system": { + "ticks": 1710, + "time": { + "ms": 1712 + } + }, + "total": { + "ticks": 3420, + "time": { + "ms": 3424 + }, + "value": 3420 + }, + "user": { + "ticks": 1710, + "time": { + "ms": 1712 + } + } + }, + "info": { + "ephemeral_id": "ab4287c4-d907-4d9d-b074-d8c3cec4a577", + "uptime": { + "ms": 195547 + } + }, + "memstats": { + "gc_next": 17855152, + "memory_alloc": 9433384, + "memory_total": 492478864, + "rss": 50405376 + }, + "runtime": { + "goroutines": 22 + } + }, + "libbeat": { + "config": { + "module": { + "running": 0, + "starts": 0, + "stops": 0 + }, + "scans": 1, + "reloads": 1 + }, + "output": { + "events": { + "acked": 0, + "active": 0, + "batches": 0, + "dropped": 0, + "duplicates": 0, + "failed": 0, + "total": 0 + }, + "read": { + "bytes": 0, + "errors": 0 + }, + "type": "elasticsearch", + "write": { + "bytes": 0, + "errors": 0 + } + }, + "pipeline": { + "clients": 6, + "events": { + "active": 716, + "dropped": 0, + "failed": 0, + "filtered": 0, + "published": 716, + "retry": 278, + "total": 716 + }, + "queue": { + "acked": 0 + } + } + }, + "system": { + "cpu": { + "cores": 4 + }, + "load": { + "1": 2.22, + "15": 1.8, + "5": 1.74, + "norm": { + "1": 0.555, + "15": 0.45, + "5": 0.435 + } + } + } +} +``` + +The actual output may contain more metrics specific to Auditbeat. + diff --git a/docs/reference/auditbeat/ilm.md b/docs/reference/auditbeat/ilm.md new file mode 100644 index 000000000000..9fbb89a4d2de --- /dev/null +++ b/docs/reference/auditbeat/ilm.md @@ -0,0 +1,58 @@ +--- +navigation_title: "Index lifecycle management (ILM)" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/ilm.html +--- + +# Configure index lifecycle management [ilm] + + +Use the [index lifecycle management](docs-content://manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md) (ILM) feature in {{es}} to manage the backing indices of your Auditbeat data
streams as they age. Auditbeat loads the default policy automatically and applies it to any data streams created by Auditbeat. + +You can view and edit the policy in the **Index lifecycle policies** UI in {{kib}}. For more information about working with the UI, see [Index lifecycle policies](docs-content://manage-data/lifecycle/index-lifecycle-management.md). + +Example configuration: + +```yaml +setup.ilm.enabled: true +``` + +::::{warning} +If index lifecycle management is enabled (which is typically the default), `setup.template.name` and `setup.template.pattern` are ignored. +:::: + + + +## Configuration options [_configuration_options_10] + +You can specify the following settings in the `setup.ilm` section of the `auditbeat.yml` config file: + + +### `setup.ilm.enabled` [setup-ilm-option] + +Enables or disables index lifecycle management on any new indices created by Auditbeat. Valid values are `true` and `false`. + + +### `setup.ilm.policy_name` [setup-ilm-policy_name-option] + +The name to use for the lifecycle policy. The default is `auditbeat`. + + +### `setup.ilm.policy_file` [setup-ilm-policy_file-option] + +The path to a JSON file that contains a lifecycle policy configuration. Use this setting to load your own lifecycle policy. + +For more information about lifecycle policies, see [Set up index lifecycle management policy](docs-content://manage-data/lifecycle/index-lifecycle-management/configure-lifecycle-policy.md) in the *{{es}} Reference*. + + +### `setup.ilm.check_exists` [setup-ilm-check_exists-option] + +When set to `false`, disables the check for an existing lifecycle policy. The default is `true`. You need to disable this check if the Auditbeat user connecting to a secured cluster doesn’t have the `read_ilm` privilege. + +If you set this option to `false`, the lifecycle policy will not be installed, even if `setup.ilm.overwrite` is set to `true`. + + +### `setup.ilm.overwrite` [setup-ilm-overwrite-option] + +When set to `true`, the lifecycle policy is overwritten at startup. The default is `false`.
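Putting these options together, a sketch of loading a custom policy; the policy name and file path below are placeholders, not values shipped with Auditbeat:

```yaml
setup.ilm.enabled: true
setup.ilm.policy_name: "auditbeat-custom"                 # placeholder policy name
setup.ilm.policy_file: "/etc/auditbeat/ilm-policy.json"   # placeholder path to your JSON policy
setup.ilm.overwrite: true                                 # re-apply the policy from the file at startup
```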
+ diff --git a/auditbeat/docs/images/auditbeat-auditd-dashboard.png b/docs/reference/auditbeat/images/auditbeat-auditd-dashboard.png similarity index 100% rename from auditbeat/docs/images/auditbeat-auditd-dashboard.png rename to docs/reference/auditbeat/images/auditbeat-auditd-dashboard.png diff --git a/x-pack/auditbeat/docs/images/auditbeat-system-host-dashboard.png b/docs/reference/auditbeat/images/auditbeat-system-host-dashboard.png similarity index 100% rename from x-pack/auditbeat/docs/images/auditbeat-system-host-dashboard.png rename to docs/reference/auditbeat/images/auditbeat-system-host-dashboard.png diff --git a/x-pack/auditbeat/docs/images/auditbeat-system-login-dashboard.png b/docs/reference/auditbeat/images/auditbeat-system-login-dashboard.png similarity index 100% rename from x-pack/auditbeat/docs/images/auditbeat-system-login-dashboard.png rename to docs/reference/auditbeat/images/auditbeat-system-login-dashboard.png diff --git a/x-pack/auditbeat/docs/images/auditbeat-system-overview-dashboard.png b/docs/reference/auditbeat/images/auditbeat-system-overview-dashboard.png similarity index 100% rename from x-pack/auditbeat/docs/images/auditbeat-system-overview-dashboard.png rename to docs/reference/auditbeat/images/auditbeat-system-overview-dashboard.png diff --git a/x-pack/auditbeat/docs/images/auditbeat-system-package-dashboard.png b/docs/reference/auditbeat/images/auditbeat-system-package-dashboard.png similarity index 100% rename from x-pack/auditbeat/docs/images/auditbeat-system-package-dashboard.png rename to docs/reference/auditbeat/images/auditbeat-system-package-dashboard.png diff --git a/x-pack/auditbeat/docs/images/auditbeat-system-process-dashboard.png b/docs/reference/auditbeat/images/auditbeat-system-process-dashboard.png similarity index 100% rename from x-pack/auditbeat/docs/images/auditbeat-system-process-dashboard.png rename to docs/reference/auditbeat/images/auditbeat-system-process-dashboard.png diff --git a/x-pack/auditbeat/docs/images/auditbeat-system-user-dashboard.png b/docs/reference/auditbeat/images/auditbeat-system-user-dashboard.png similarity index 100% rename from x-pack/auditbeat/docs/images/auditbeat-system-user-dashboard.png rename to docs/reference/auditbeat/images/auditbeat-system-user-dashboard.png diff --git a/auditbeat/docs/images/coordinate-map.png b/docs/reference/auditbeat/images/coordinate-map.png similarity index 100% rename from auditbeat/docs/images/coordinate-map.png rename to docs/reference/auditbeat/images/coordinate-map.png diff --git a/docs/reference/auditbeat/include-fields.md b/docs/reference/auditbeat/include-fields.md new file mode 100644 index 000000000000..4d68d654b9c5 --- /dev/null +++ b/docs/reference/auditbeat/include-fields.md @@ -0,0 +1,28 @@ +--- +navigation_title: "include_fields" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/include-fields.html +--- + +# Keep fields from events [include-fields] + + +The `include_fields` processor specifies which fields to export if a certain condition is fulfilled. The condition is optional. If it’s missing, the specified fields are always exported. The `@timestamp`, `@metadata` and `type` fields are always exported, even if they are not defined in the `include_fields` list. + +```yaml +processors: + - include_fields: + when: + condition + fields: ["field1", "field2", ...] +``` + +See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions. 
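For instance, a sketch using the standard `equals` condition; the module name and field list are illustrative choices for trimming file integrity events down to a few fields:

```yaml
processors:
  - include_fields:
      when:
        equals:
          event.module: file_integrity   # illustrative: only trim file_integrity events
      fields: ["file.path", "file.hash.sha256"]
```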
+ +You can specify multiple `include_fields` processors under the `processors` section. + +::::{note} +If you define an empty list of fields under `include_fields`, then only the required fields, `@timestamp` and `type`, are exported. +:::: + + diff --git a/docs/reference/auditbeat/kafka-output.md b/docs/reference/auditbeat/kafka-output.md new file mode 100644 index 000000000000..1ba8f434e2e6 --- /dev/null +++ b/docs/reference/auditbeat/kafka-output.md @@ -0,0 +1,331 @@ +--- +navigation_title: "Kafka" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/kafka-output.html +--- + +# Configure the Kafka output [kafka-output] + + +The Kafka output sends events to Apache Kafka. + +To use this output, edit the Auditbeat configuration file to disable the {{es}} output by commenting it out, and enable the Kafka output by uncommenting the Kafka section. + +::::{note} +For Kafka version 0.10.0.0+ the message creation timestamp is set by Beats and equals the initial timestamp of the event. This affects the retention policy in Kafka: for example, if a Beat event was created 2 weeks ago, the retention policy is set to 7 days, and the message from Beats arrives in Kafka today, it’s going to be immediately discarded since the timestamp value is before the last 7 days. It’s possible to change this behavior by setting timestamps on message arrival instead, so the message is not discarded but kept for 7 more days. To do that, please set `log.message.timestamp.type` to `LogAppendTime` (default `CreateTime`) in the Kafka configuration. +:::: + + +Example configuration: + +```yaml +output.kafka: + # initial brokers for reading cluster metadata + hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"] + + # message topic selection + partitioning + topic: '%{[fields.log_topic]}' + partition.round_robin: + reachable_only: false + + required_acks: 1 + compression: gzip + max_message_bytes: 1000000 +``` + +::::{note} +Events bigger than [`max_message_bytes`](#kafka-max_message_bytes) will be dropped. To avoid this problem, make sure Auditbeat does not generate events bigger than [`max_message_bytes`](#kafka-max_message_bytes). +:::: + + +## Compatibility [kafka-compatibility] + +This output can connect to Kafka version 0.8.2.0 and later. Older versions might work as well, but are not supported. When using Kafka 4.0 and newer, the version must be set to at least `"2.1.0"`. + + +## Configuration options [_configuration_options_4] + +You can specify the following options in the `kafka` section of the `auditbeat.yml` config file: + +### `enabled` [_enabled_3] + +The `enabled` config is a boolean setting to enable or disable the output. If set to false, the output is disabled. + +The default value is `true`. + + +### `hosts` [_hosts] + +The list of Kafka broker addresses from where to fetch the cluster metadata. The cluster metadata contains the actual Kafka brokers that events are published to. + + +### `version` [_version] + +The Kafka protocol version that Auditbeat will request when connecting. Defaults to 2.1.0. When using Kafka 4.0 and newer, the version must be set to at least `"2.1.0"`. + +Valid values are all Kafka releases between `0.8.2.0` and `2.6.0`. + +The protocol version controls the Kafka client features available to Auditbeat; it does not prevent Auditbeat from connecting to Kafka versions newer than the protocol version. + +See [Compatibility](#kafka-compatibility) for information on supported versions. + + +### `username` [_username_2] + +The username for connecting to Kafka.
If username is configured, the password must be configured as well. + + +### `password` [_password_2] + +The password for connecting to Kafka. + + +### `sasl.mechanism` [_sasl_mechanism] + +The SASL mechanism to use when connecting to Kafka. It can be one of: + +* `PLAIN` for SASL/PLAIN. +* `SCRAM-SHA-256` for SCRAM-SHA-256. +* `SCRAM-SHA-512` for SCRAM-SHA-512. + +If `sasl.mechanism` is not set, `PLAIN` is used if `username` and `password` are provided. Otherwise, SASL authentication is disabled. + +To use `GSSAPI` mechanism to authenticate with Kerberos, you must leave this field empty, and use the [`kerberos`](#kerberos-option-kafka) options. + + +### `topic` [topic-option-kafka] + +The Kafka topic used for produced events. + +You can set the topic dynamically by using a format string to access any event field. For example, this configuration uses a custom field, `fields.log_topic`, to set the topic for each event: + +```yaml +topic: '%{[fields.log_topic]}' +``` + +::::{tip} +To learn how to add custom fields to events, see the [`fields`](/reference/auditbeat/configuration-general-options.md#libbeat-configuration-fields) option. +:::: + + +See the [`topics`](#topics-option-kafka) setting for other ways to set the topic dynamically. + + +### `topics` [topics-option-kafka] + +An array of topic selector rules. Each rule specifies the `topic` to use for events that match the rule. During publishing, Auditbeat sets the `topic` for each event based on the first matching rule in the array. Rules can contain conditionals, format string-based fields, and name mappings. If the `topics` setting is missing or no rule matches, the [`topic`](#topic-option-kafka) field is used. + +Rule settings: + +**`topic`** +: The topic format string to use. If this string contains field references, such as `%{[fields.name]}`, the fields must exist, or the rule fails. + +**`mappings`** +: A dictionary that takes the value returned by `topic` and maps it to a new name. + +**`default`** +: The default string value to use if `mappings` does not find a match. + +**`when`** +: A condition that must succeed in order to execute the current rule. All the [conditions](/reference/auditbeat/defining-processors.md#conditions) supported by processors are also supported here. + +The following example sets the topic based on whether the message field contains the specified string: + +```yaml +output.kafka: + hosts: ["localhost:9092"] + topic: "logs-%{[agent.version]}" + topics: + - topic: "critical-%{[agent.version]}" + when.contains: + message: "CRITICAL" + - topic: "error-%{[agent.version]}" + when.contains: + message: "ERR" +``` + +This configuration results in topics named `critical-9.0.0-beta1`, `error-9.0.0-beta1`, and `logs-9.0.0-beta1`. + + +### `key` [_key] + +Optional formatted string specifying the Kafka event key. If configured, the event key can be extracted from the event using a format string. + +See the Kafka documentation for the implications of a particular choice of key; by default, the key is chosen by the Kafka cluster. + + +### `partition` [_partition] + +Kafka output broker event partitioning strategy. Must be one of `random`, `round_robin`, or `hash`. By default the `hash` partitioner is used. + +**`random.group_events`**: Sets the number of events to be published to the same partition, before the partitioner selects a new partition by random. The default value is 1 meaning after each event a new partition is picked randomly. 
+ +**`round_robin.group_events`**: Sets the number of events to be published to the same partition, before the partitioner selects the next partition. The default value is 1, meaning the next partition is selected after each event. + +**`hash.hash`**: List of fields to compute the partitioning hash value from. If no field is configured, the event’s `key` value will be used. + +**`hash.random`**: Randomly distribute events if no hash or key value can be computed. + +All partitioners will try to publish events to all partitions by default. If a partition’s leader becomes unreachable for the Beat, the output might block. All partitioners support setting `reachable_only` to override this behavior. If `reachable_only` is set to `true`, events will be published to available partitions only. + +::::{note} +Publishing to a subset of available partitions potentially increases resource usage because events may become unevenly distributed. +:::: + + + +### `headers` [_headers_2] + +A header is a key-value pair, and multiple headers can be included with the same `key`. Only string values are supported. These headers will be included in each produced Kafka message. + +```yaml +output.kafka: + hosts: ["localhost:9092"] + topic: "logs-%{[agent.version]}" + headers: + - key: "some-key" + value: "some value" + - key: "another-key" + value: "another value" +``` + + +### `client_id` [_client_id] + +The configurable ClientID used for logging, debugging, and auditing purposes. The default is "beats". + + +### `codec` [_codec] + +Output codec configuration. If the `codec` section is missing, events will be JSON encoded. + +See [Change the output codec](/reference/auditbeat/configuration-output-codec.md) for more information. + + +### `metadata` [_metadata] + +Kafka metadata update settings. The metadata contains information about brokers, topics, partitions, and active leaders to use for publishing. + +**`refresh_frequency`** +: Metadata refresh interval. Defaults to 10 minutes. + +**`full`** +: Strategy to use when fetching metadata. When this option is `true`, the client will maintain a full set of metadata for all the available topics; if it is set to `false`, it will only refresh the metadata for the configured topics. The default is `false`. + +**`retry.max`** +: Total number of metadata update retries when the cluster is in the middle of a leader election. The default is 3. + +**`retry.backoff`** +: Waiting time between retries during leader elections. Default is 250ms. + + +### `max_retries` [_max_retries_3] + +The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped. + +Set `max_retries` to a value less than 0 to retry until all events are published. + +The default is 3. + + +### `backoff.init` [_backoff_init_2] + +The number of seconds to wait before trying to republish to Kafka after a network error. After waiting `backoff.init` seconds, Auditbeat tries to republish. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful publish, the backoff timer is reset. The default is 1s. + + +### `backoff.max` [_backoff_max_2] + +The maximum number of seconds to wait before attempting to republish to Kafka after a network error. The default is 60s. + + +### `bulk_max_size` [_bulk_max_size_2] + +The maximum number of events to bulk in a single Kafka request. The default is 2048.
+ + +### `bulk_flush_frequency` [_bulk_flush_frequency] + +Duration to wait before sending a bulk Kafka request. 0 means no delay. The default is 0. + + +### `timeout` [_timeout_3] + +The number of seconds to wait for responses from the Kafka brokers before timing out. The default is 30 (seconds). + + +### `broker_timeout` [_broker_timeout] + +The maximum duration a broker will wait for the number of required ACKs. The default is 10s. + + +### `channel_buffer_size` [_channel_buffer_size] + +The number of messages buffered in the output pipeline per Kafka broker. The default is 256. + + +### `keep_alive` [_keep_alive] + +The keep-alive period for an active network connection. If 0s, keep-alives are disabled. The default is 0 seconds. + + +### `compression` [_compression] + +Sets the output compression codec. Must be one of `none`, `snappy`, `lz4`, `gzip`, and `zstd`. The default is `gzip`. + +::::{admonition} Known issue with Azure Event Hub for Kafka +:class: important + +When targeting Azure Event Hub for Kafka, set `compression` to `none` as the provided codecs are not supported. + +:::: + + + +### `compression_level` [_compression_level_2] + +Sets the compression level used by gzip. Setting this value to 0 disables compression. The compression level must be in the range of 1 (best speed) to 9 (best compression). + +Increasing the compression level reduces network usage but increases CPU usage. + +The default value is 4. + + +### `max_message_bytes` [kafka-max_message_bytes] + +The maximum permitted size of JSON-encoded messages. Bigger messages will be dropped. The default value is 1000000 (bytes). This value should be equal to or less than the broker’s `message.max.bytes`. + + +### `required_acks` [_required_acks] + +The ACK reliability level required from the broker. 0=no response, 1=wait for local commit, -1=wait for all replicas to commit. The default is 1. + +Note: If set to 0, no ACKs are returned by Kafka. Messages might be lost silently on error. + + +### `ssl` [_ssl_3] + +Configuration options for SSL parameters like the root CA for Kafka connections. The Kafka host keystore should be created with the `-keyalg RSA` argument to ensure it uses a cipher supported by [the Kafka client library used by Beats](https://github.com/Shopify/sarama/wiki/Frequently-Asked-Questions#why-cant-sarama-connect-to-my-kafka-cluster-using-ssl). See [SSL](/reference/auditbeat/configuration-ssl.md) for more information. + + +### `kerberos` [kerberos-option-kafka] + +::::{warning} +This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. +:::: + + +Configuration options for Kerberos authentication. + +See [Kerberos](/reference/auditbeat/configuration-kerberos.md) for more information. + + +### `queue` [_queue_3] + +Configuration options for the internal queue. + +See [Internal queue](/reference/auditbeat/configuring-internal-queue.md) for more information. + +Note: `queue` options can be set under `auditbeat.yml` or the `output` section but not both.
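As a sketch of that note, assuming the nested form of the memory queue settings documented under [Internal queue](/reference/auditbeat/configuring-internal-queue.md), the queue can be tuned directly beneath the output; the broker address, topic, and sizes below are illustrative:

```yaml
output.kafka:
  hosts: ["kafka1:9092"]
  topic: "auditbeat"
  queue.mem:
    events: 4096           # illustrative queue size
    flush.min_events: 512  # illustrative batch threshold
    flush.timeout: 5s      # illustrative flush interval
```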
+ + + diff --git a/docs/reference/auditbeat/keystore.md b/docs/reference/auditbeat/keystore.md new file mode 100644 index 000000000000..2cf4fdebf453 --- /dev/null +++ b/docs/reference/auditbeat/keystore.md @@ -0,0 +1,87 @@ +--- +navigation_title: "Secrets keystore" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/keystore.html +--- + +# Secrets keystore for secure settings [keystore] + + +When you configure Auditbeat, you might need to specify sensitive settings, such as passwords. Rather than relying on file system permissions to protect these values, you can use the Auditbeat keystore to obfuscate stored secret values for use in configuration settings. + +After adding a key and its secret value to the keystore, you can use the key in place of the secret value when you configure sensitive settings. + +The syntax for referencing keys is identical to the syntax for environment variables: + +`${KEY}` + +Where KEY is the name of the key. + +For example, imagine that the keystore contains a key called `ES_PWD` with the value `yourelasticsearchpassword`: + +* In the configuration file, use `output.elasticsearch.password: "${ES_PWD}"` +* On the command line, use: `-E "output.elasticsearch.password=\${ES_PWD}"` + +When Auditbeat unpacks the configuration, it resolves keys before resolving environment variables and other variables. + +Notice that the Auditbeat keystore differs from the Elasticsearch keystore. Whereas the Elasticsearch keystore lets you store `elasticsearch.yml` values by name, the Auditbeat keystore lets you specify arbitrary names that you can reference in the Auditbeat configuration. + +To create and manage keys, use the `keystore` command. See the [command reference](/reference/auditbeat/command-line-options.md#keystore-command) for the full command syntax, including optional flags. + +::::{note} +The `keystore` command must be run by the same user who will run Auditbeat. +:::: + + + +## Create a keystore [creating-keystore] + +To create a secrets keystore, use: + +```sh +auditbeat keystore create +``` + +Auditbeat creates the keystore in the directory defined by the `path.data` configuration setting. + + +## Add keys [add-keys-to-keystore] + +To store sensitive values, such as authentication credentials for Elasticsearch, use the `keystore add` command: + +```sh +auditbeat keystore add ES_PWD +``` + +When prompted, enter a value for the key. + +To overwrite an existing key’s value, use the `--force` flag: + +```sh +auditbeat keystore add ES_PWD --force +``` + +To pass the value through stdin, use the `--stdin` flag. 
You can also use `--force`: + +```sh +cat /file/containing/setting/value | auditbeat keystore add ES_PWD --stdin --force +``` + + +## List keys [list-settings] + +To list the keys defined in the keystore, use: + +```sh +auditbeat keystore list +``` + + +## Remove keys [remove-settings] + +To remove a key from the keystore, use: + +```sh +auditbeat keystore remove ES_PWD +``` + diff --git a/docs/reference/auditbeat/kibana-user-privileges.md b/docs/reference/auditbeat/kibana-user-privileges.md new file mode 100644 index 000000000000..5ffa8ca6b401 --- /dev/null +++ b/docs/reference/auditbeat/kibana-user-privileges.md @@ -0,0 +1,27 @@ +--- +navigation_title: "Create a _reader_ user" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/kibana-user-privileges.html +--- + +# Grant privileges and roles needed to read Auditbeat data from {{kib}} [kibana-user-privileges] + + +{{kib}} users typically need to view dashboards and visualizations that contain Auditbeat data. These users might also need to create and edit dashboards and visualizations. + +To grant users the required privileges: + +1. Create a **reader role**, called something like `auditbeat_reader`, that has the following privilege: + + | Type | Privilege | Purpose | + | --- | --- | --- | + | Index | `read` on `auditbeat-*` indices | Read data indexed by Auditbeat | + | Spaces | `Read` or `All` on Dashboards, Visualize, and Discover | Allow the user to view, edit, and create dashboards, as well as browse data. | + +2. Assign the **reader role**, along with the following built-in roles, to users who need to read Auditbeat data: + + | Role | Purpose | + | --- | --- | + | `monitoring_user` | Allow users to monitor the health of Auditbeat itself. Only assign this role to users who manage Auditbeat. | + + diff --git a/docs/reference/auditbeat/learn-more-security.md b/docs/reference/auditbeat/learn-more-security.md new file mode 100644 index 000000000000..cfc24ca647f6 --- /dev/null +++ b/docs/reference/auditbeat/learn-more-security.md @@ -0,0 +1,12 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/learn-more-security.html +--- + +# Learn more about privileges, roles, and users [learn-more-security] + +Want to learn more about creating users and roles? See [Secure a cluster](docs-content://deploy-manage/security.md). Also see: + +* [Security privileges](elasticsearch://reference/elasticsearch/security-privileges.md) for a description of available privileges +* [Built-in roles](elasticsearch://reference/elasticsearch/roles.md) for a description of roles that you can assign to users + diff --git a/docs/reference/auditbeat/linux-seccomp.md b/docs/reference/auditbeat/linux-seccomp.md new file mode 100644 index 000000000000..e14effdcb388 --- /dev/null +++ b/docs/reference/auditbeat/linux-seccomp.md @@ -0,0 +1,75 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/linux-seccomp.html +--- + +# Use Linux Secure Computing Mode (seccomp) [linux-seccomp] + +::::{warning} +This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. +:::: + + +On Linux 3.17 and later, Auditbeat can take advantage of secure computing mode, also known as seccomp. Seccomp restricts the system calls that a process can issue. 
Specifically, Auditbeat can load a seccomp BPF filter at process start-up that drops its ability to invoke specific system calls. Once a filter is loaded by the process, it cannot be removed.
+
+The kernel exposes a large number of system calls that are not used by Auditbeat. By installing a seccomp filter, you can limit the total kernel surface exposed to Auditbeat (principle of least privilege). This minimizes the impact of unknown vulnerabilities that might be found in the process.
+
+The filter is expressed as a Berkeley Packet Filter (BPF) program. The BPF program is generated based on a policy defined by Auditbeat. The policy can also be customized through configuration.
+
+A seccomp policy is architecture-specific because system calls vary by architecture. Auditbeat includes a whitelist seccomp policy for the amd64 and 386 architectures. You can view those policies [here](https://github.com/elastic/beats/tree/master/libbeat/common/seccomp).
+
+
+## Seccomp Policy Configuration [seccomp-policy-config]
+
+The seccomp policy can be customized through the configuration. This is an example blacklist policy that prohibits the `execve`, `execveat`, `fork`, and `vfork` syscalls.
+
+```yaml
+seccomp:
+  default_action: allow <1>
+  syscalls:
+  - action: errno <2>
+    names: <3>
+    - execve
+    - execveat
+    - fork
+    - vfork
+```
+
+1. If the system call being invoked by the process does not match one of the names below, it will be allowed.
+2. If the system call being invoked matches one of the names below, an error will be returned to the caller. This is known as a blacklist policy.
+3. These are the system calls being prohibited.
+
+
+These are the configuration options for a seccomp policy.
+
+**`enabled`**
+: On Linux, this option is enabled by default. To disable seccomp filter loading, set this option to `false`.
+
+**`default_action`**
+: The default action to take when none of the defined system calls match. See [action](#seccomp-policy-config-action) for the full list of values. This is required.
+
+**`syscalls`**
+: Each object in this list must contain an `action` and a list of system call `names`. The list must contain at least one item.
+
+**`names`**
+: A list of system call names. The system call name must exist for the runtime architecture; otherwise, an error will be logged and the filter will not be installed. At least one system call must be defined.
+
+$$$seccomp-policy-config-action$$$
+
+**`action`**
+: The action to take when any of the system calls listed in `names` is executed. This is required. These are the available action values. The actions that are available depend on the kernel version.
+
+    * `errno` - The system call will return `EPERM` (permission denied) to the caller.
+    * `trace` - The kernel will notify a `ptrace` tracer. If no tracer is present then the system call fails with `ENOSYS` (function not implemented).
+    * `trap` - The kernel will send a `SIGSYS` signal to the calling thread and not execute the system call. The Go runtime will exit.
+    * `kill_thread` - The kernel will immediately terminate the thread. Other threads will continue to execute.
+    * `kill_process` - The kernel will terminate the process. Available in Linux 4.14 and later.
+    * `log` - The kernel will log the system call before executing it. Available in Linux 4.14 and later. (This does not go to the Beat’s log.)
+    * `allow` - The kernel will allow the system call to execute.
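+
+For contrast with the blacklist example above, the inverse approach is a whitelist-style policy: deny everything by default and allow only an explicit set of system calls. The sketch below is illustrative only (the `names` list is a hypothetical subset); a real whitelist must enumerate every call the process needs, or Auditbeat will fail at runtime.
+
+```yaml
+seccomp:
+  default_action: errno # any syscall not matched below returns EPERM to the caller
+  syscalls:
+  - action: allow
+    names: # illustrative subset; a working policy needs many more calls
+    - read
+    - write
+    - close
+    - exit_group
+```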
+ + + +## Auditbeat Reports Seccomp Violations [_auditbeat_reports_seccomp_violations] + +You can use Auditbeat to report any seccomp violations that occur on the system. The kernel generates an event for each violation and Auditbeat reports the event. The `event.action` value will be `violated-seccomp-policy` and the event will contain information about the process and system call. + diff --git a/docs/reference/auditbeat/load-kibana-dashboards.md b/docs/reference/auditbeat/load-kibana-dashboards.md new file mode 100644 index 000000000000..462bf83869b2 --- /dev/null +++ b/docs/reference/auditbeat/load-kibana-dashboards.md @@ -0,0 +1,146 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/load-kibana-dashboards.html +--- + +# Load Kibana dashboards [load-kibana-dashboards] + +Auditbeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Auditbeat data in Kibana. Before you can use the dashboards, you need to create the index pattern, `auditbeat-*`, and load the dashboards into Kibana. + +To do this, you can either run the `setup` command (as described here) or [configure dashboard loading](/reference/auditbeat/configuration-dashboards.md) in the `auditbeat.yml` config file. This requires a Kibana endpoint configuration. If you didn’t already configure a Kibana endpoint, see [{{kib}} endpoint](/reference/auditbeat/setup-kibana-endpoint.md). + + +## Load dashboards [load-dashboards] + +Make sure Kibana is running before you perform this step. If you are accessing a secured Kibana instance, make sure you’ve configured credentials as described in the [Quick start: installation and configuration](/reference/auditbeat/auditbeat-installation-configuration.md). + +To load the recommended index template for writing to {{es}} and deploy the sample dashboards for visualizing the data in {{kib}}, use the command that works with your system. + +:::::::{tab-set} + +::::::{tab-item} DEB +```sh +auditbeat setup --dashboards +``` +:::::: + +::::::{tab-item} RPM +```sh +auditbeat setup --dashboards +``` +:::::: + +::::::{tab-item} MacOS +```sh +./auditbeat setup --dashboards +``` +:::::: + +::::::{tab-item} Linux +```sh +./auditbeat setup --dashboards +``` +:::::: + +::::::{tab-item} Docker +```sh +docker run --rm --net="host" docker.elastic.co/beats/auditbeat:9.0.0-beta1 setup --dashboards +``` +:::::: + +::::::{tab-item} Windows +Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**). + +From the PowerShell prompt, change to the directory where you installed Auditbeat, and run: + +```sh +PS > .\auditbeat.exe setup --dashboards +``` +:::::: + +::::::: +For more options, such as loading customized dashboards, see [Importing Existing Beat Dashboards](http://www.elastic.co/guide/en/beats/devguide/master/import-dashboards.md). If you’ve configured the Logstash output, see [Load dashboards for Logstash output](#load-dashboards-logstash). + + +## Load dashboards for Logstash output [load-dashboards-logstash] + +During dashboard loading, Auditbeat connects to Elasticsearch to check version information. To load dashboards when the Logstash output is enabled, you need to temporarily disable the Logstash output and enable Elasticsearch. To connect to a secured Elasticsearch cluster, you also need to pass Elasticsearch credentials. + +::::{tip} +The example shows a hard-coded password, but you should store sensitive values in the [secrets keystore](/reference/auditbeat/keystore.md). 
+:::: + + +:::::::{tab-set} + +::::::{tab-item} DEB +```sh +auditbeat setup -e \ + -E output.logstash.enabled=false \ + -E output.elasticsearch.hosts=['localhost:9200'] \ + -E output.elasticsearch.username=auditbeat_internal \ + -E output.elasticsearch.password={pwd} \ + -E setup.kibana.host=localhost:5601 +``` +:::::: + +::::::{tab-item} RPM +```sh +auditbeat setup -e \ + -E output.logstash.enabled=false \ + -E output.elasticsearch.hosts=['localhost:9200'] \ + -E output.elasticsearch.username=auditbeat_internal \ + -E output.elasticsearch.password={pwd} \ + -E setup.kibana.host=localhost:5601 +``` +:::::: + +::::::{tab-item} MacOS +```sh +./auditbeat setup -e \ + -E output.logstash.enabled=false \ + -E output.elasticsearch.hosts=['localhost:9200'] \ + -E output.elasticsearch.username=auditbeat_internal \ + -E output.elasticsearch.password={pwd} \ + -E setup.kibana.host=localhost:5601 +``` +:::::: + +::::::{tab-item} Linux +```sh +./auditbeat setup -e \ + -E output.logstash.enabled=false \ + -E output.elasticsearch.hosts=['localhost:9200'] \ + -E output.elasticsearch.username=auditbeat_internal \ + -E output.elasticsearch.password={pwd} \ + -E setup.kibana.host=localhost:5601 +``` +:::::: + +::::::{tab-item} Docker +```sh +docker run --rm --net="host" docker.elastic.co/beats/auditbeat:9.0.0-beta1 setup -e \ + -E output.logstash.enabled=false \ + -E output.elasticsearch.hosts=['localhost:9200'] \ + -E output.elasticsearch.username=auditbeat_internal \ + -E output.elasticsearch.password={pwd} \ + -E setup.kibana.host=localhost:5601 +``` +:::::: + +::::::{tab-item} Windows +Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**). + +From the PowerShell prompt, change to the directory where you installed Auditbeat, and run: + +```sh +PS > .\auditbeat.exe setup -e ` + -E output.logstash.enabled=false ` + -E output.elasticsearch.hosts=['localhost:9200'] ` + -E output.elasticsearch.username=auditbeat_internal ` + -E output.elasticsearch.password={pwd} ` + -E setup.kibana.host=localhost:5601 +``` +:::::: + +::::::: diff --git a/docs/reference/auditbeat/logstash-output.md b/docs/reference/auditbeat/logstash-output.md new file mode 100644 index 000000000000..a065c513ebc8 --- /dev/null +++ b/docs/reference/auditbeat/logstash-output.md @@ -0,0 +1,245 @@ +--- +navigation_title: "Logstash" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/logstash-output.html +--- + +# Configure the Logstash output [logstash-output] + + +The {{ls}} output sends events directly to {{ls}} by using the lumberjack protocol, which runs over TCP. {{ls}} allows for additional processing and routing of generated events. + +::::{admonition} Prerequisite +:class: important + +To send events to {{ls}}, you also need to create a {{ls}} configuration pipeline that listens for incoming Beats connections and indexes the received events into {{es}}. For more information, see [Getting Started with {{ls}}](logstash://reference/getting-started-with-logstash.md). Also see the documentation for the [{{beats}} input](logstash://reference/plugins-inputs-beats.md) and [{{es}} output](logstash://reference/plugins-outputs-elasticsearch.md) plugins. +:::: + + +If you want to use {{ls}} to perform additional processing on the data collected by Auditbeat, you need to configure Auditbeat to use {{ls}}. 
+ +To do this, edit the Auditbeat configuration file to disable the {{es}} output by commenting it out and enable the {{ls}} output by uncommenting the {{ls}} section: + +```yaml +output.logstash: + hosts: ["127.0.0.1:5044"] +``` + +The `hosts` option specifies the {{ls}} server and the port (`5044`) where {{ls}} is configured to listen for incoming Beats connections. + +For this configuration, you must [load the index template into {{es}} manually](/reference/auditbeat/auditbeat-template.md#load-template-manually) because the options for auto loading the template are only available for the {{es}} output. + +## Accessing metadata fields [_accessing_metadata_fields] + +Every event sent to {{ls}} contains the following metadata fields that you can use in {{ls}} for indexing and filtering: + +```json +{ + ... + "@metadata": { <1> + "beat": "auditbeat", <2> + "version": "9.0.0-beta1" <3> + } +} +``` + +1. Auditbeat uses the `@metadata` field to send metadata to {{ls}}. See the [{{ls}} documentation](logstash://reference/event-dependent-configuration.md#metadata) for more about the `@metadata` field. +2. The default is auditbeat. To change this value, set the [`index`](#logstash-index) option in the Auditbeat config file. +3. The current version of Auditbeat. + + +You can access this metadata from within the {{ls}} config file to set values dynamically based on the contents of the metadata. + +For example, the following {{ls}} configuration file tells {{ls}} to use the index reported by Auditbeat for indexing events into {{es}}: + +```json +input { + beats { + port => 5044 + } +} + +output { + elasticsearch { + hosts => ["http://localhost:9200"] + index => "%{[@metadata][beat]}-%{[@metadata][version]}" <1> + action => "create" + } +} +``` + +1. `%{[@metadata][beat]}` sets the first part of the index name to the value of the `beat` metadata field and `%{[@metadata][version]}` sets the second part to the Beat’s version. For example: `auditbeat-9.0.0-beta1`. + + +Events indexed into {{es}} with the {{ls}} configuration shown here will be similar to events directly indexed by Auditbeat into {{es}}. + +::::{note} +If ILM is not being used, set `index` to `%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}` instead so {{ls}} creates an index per day, based on the `@timestamp` value of the events coming from Beats. +:::: + + + +## Compatibility [_compatibility_2] + +This output works with all compatible versions of {{ls}}. See the [Elastic Support Matrix](https://www.elastic.co/support/matrix#matrix_compatibility). + + +## Configuration options [_configuration_options_3] + +You can specify the following options in the `logstash` section of the `auditbeat.yml` config file: + +### `enabled` [_enabled_2] + +The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled. + +The default value is `true`. + + +### `hosts` [hosts] + +The list of known {{ls}} servers to connect to. If load balancing is disabled, but multiple hosts are configured, one host is selected randomly (there is no precedence). If one host becomes unreachable, another one is selected randomly. + +All entries in this list can contain a port number. The default port number 5044 will be used if no number is given. + + +### `compression_level` [_compression_level] + +The gzip compression level. Setting this value to 0 disables compression. The compression level must be in the range of 1 (best speed) to 9 (best compression). 
+
+Increasing the compression level will reduce the network usage but will increase the CPU usage.
+
+The default value is 3.
+
+
+### `escape_html` [_escape_html_2]
+
+Configure escaping of HTML in strings. Set to `true` to enable escaping.
+
+The default value is `false`.
+
+
+### `worker` or `workers` [_worker_or_workers]
+
+The number of workers per configured host publishing events to {{ls}}. This is best used with load balancing mode enabled. Example: If you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host).
+
+
+### `loadbalance` [loadbalance]
+
+When `loadbalance: true` is set, Auditbeat connects to all configured hosts and sends data through all connections in parallel. If a connection fails, data is sent to the remaining hosts until it can be reestablished. Data will still be sent as long as Auditbeat can connect to at least one of its configured hosts.
+
+When `loadbalance: false` is set, Auditbeat sends data to a single host at a time. The target host is chosen at random from the list of configured hosts, and all data is sent to that target until the connection fails, when a new target is selected. Data will still be sent as long as Auditbeat can connect to at least one of its configured hosts. To rotate through the list of configured hosts over time, use this option in conjunction with the `ttl` setting to close the connection at the configured interval and choose a new target host.
+
+The default value is `false`.
+
+```yaml
+output.logstash:
+  hosts: ["localhost:5044", "localhost:5045"]
+  loadbalance: true
+  index: auditbeat
+```
+
+
+### `ttl` [_ttl]
+
+Time to live for a connection to {{ls}} after which the connection will be re-established. This is useful when {{ls}} hosts represent load balancers. Because the connections to {{ls}} hosts are sticky, operating behind load balancers can lead to uneven load distribution between the instances. Specifying a TTL on the connection allows connections to be distributed evenly between the instances. Specifying a TTL of 0 disables this feature.
+
+The default value is 0. This setting accepts [duration](/reference/libbeat/config-file-format-type.md#_duration) data type values.
+
+::::{note}
+The `ttl` option is not yet supported on an async {{ls}} client (one with the `pipelining` option set).
+::::
+
+
+
+### `pipelining` [_pipelining]
+
+Configures the number of batches to be sent asynchronously to {{ls}} while waiting for an ACK from {{ls}}. The output only becomes blocking once the number of `pipelining` batches have been written. Pipelining is disabled if a value of 0 is configured. The default value is 2.
+
+
+### `proxy_url` [_proxy_url_2]
+
+The URL of the SOCKS5 proxy to use when connecting to the {{ls}} servers. The value must be a URL with a scheme of `socks5://`. The protocol used to communicate with {{ls}} is not based on HTTP, so a web proxy cannot be used.
+
+If the SOCKS5 proxy server requires client authentication, a username and password can be embedded in the URL as shown in the example.
+
+When using a proxy, hostnames are resolved on the proxy server instead of on the client. You can change this behavior by setting the [`proxy_use_local_resolver`](#logstash-proxy-use-local-resolver) option.
+
+```yaml
+output.logstash:
+  hosts: ["remote-host:5044"]
+  proxy_url: socks5://user:password@socks5-proxy:2233
+```
+
+
+### `proxy_use_local_resolver` [logstash-proxy-use-local-resolver]
+
+The `proxy_use_local_resolver` option determines if {{ls}} hostnames are resolved locally when using a proxy.
The default value is false, which means that when a proxy is used, name resolution occurs on the proxy server.
+
+
+### `index` [logstash-index]
+
+The index root name to write events to. The default is the Beat name. For example, `"auditbeat"` generates `"[auditbeat-]9.0.0-beta1-YYYY.MM.DD"` indices (for example, `"auditbeat-9.0.0-beta1-2017.04.26"`).
+
+::::{note}
+This parameter’s value will be assigned to the `metadata.beat` field. It can then be accessed in {{ls}}'s output section as `%{[@metadata][beat]}`.
+::::
+
+
+
+### `ssl` [_ssl_2]
+
+Configuration options for SSL parameters like the root CA for {{ls}} connections. See [SSL](/reference/auditbeat/configuration-ssl.md) for more information. To use SSL, you must also configure the [Beats input plugin for Logstash](logstash://reference/plugins-inputs-beats.md) to use SSL/TLS.
+
+
+### `timeout` [_timeout_2]
+
+The number of seconds to wait for responses from the {{ls}} server before timing out. The default is 30 (seconds).
+
+
+### `max_retries` [_max_retries_2]
+
+The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped.
+
+Set `max_retries` to a value less than 0 to retry until all events are published.
+
+The default is 3.
+
+
+### `bulk_max_size` [_bulk_max_size]
+
+The maximum number of events to bulk in a single {{ls}} request. The default is 2048.
+
+Events can be collected into batches. Auditbeat will split batches read from the queue which are larger than `bulk_max_size` into multiple batches.
+
+Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.
+
+Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.
+
+
+### `slow_start` [_slow_start]
+
+If enabled, only a subset of events in a batch of events is transferred per transaction. The number of events to be sent increases up to `bulk_max_size` if no error is encountered. On error, the number of events per transaction is reduced again.
+
+The default is `false`.
+
+
+### `backoff.init` [_backoff_init]
+
+The number of seconds to wait before trying to reconnect to {{ls}} after a network error. After waiting `backoff.init` seconds, Auditbeat tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is 1s.
+
+
+### `backoff.max` [_backoff_max]
+
+The maximum number of seconds to wait before attempting to connect to {{ls}} after a network error. The default is 60s.
+
+
+### `queue` [_queue_2]
+
+Configuration options for the internal queue.
+
+See [Internal queue](/reference/auditbeat/configuring-internal-queue.md) for more information.
+
+Note: `queue` options can be set under `auditbeat.yml` or the `output` section, but not both.
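+
+As a sketch of that second option, queue settings can sit directly under the output. The `queue.mem.events` value here is illustrative, not a tuning recommendation:
+
+```yaml
+output.logstash:
+  hosts: ["localhost:5044"]
+  queue:
+    mem:
+      events: 4096 # maximum number of events the memory queue can buffer
+```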
+
+
+
diff --git a/docs/reference/auditbeat/madvdontneed-rss.md b/docs/reference/auditbeat/madvdontneed-rss.md
new file mode 100644
index 000000000000..9af1b0ccce94
--- /dev/null
+++ b/docs/reference/auditbeat/madvdontneed-rss.md
@@ -0,0 +1,9 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/madvdontneed-rss.html
+---
+
+# High RSS memory usage due to MADV settings [madvdontneed-rss]
+
+In versions of Auditbeat prior to 7.10.2, the Go runtime defaults to `MADV_FREE`. In some cases, this can lead to high RSS memory usage while the kernel waits to reclaim any pages assigned to Auditbeat. On versions prior to 7.10.2, set the `GODEBUG="madvdontneed=1"` environment variable if you run into RSS usage issues.
+
diff --git a/docs/reference/auditbeat/metadata-missing.md b/docs/reference/auditbeat/metadata-missing.md
new file mode 100644
index 000000000000..de51069cb206
--- /dev/null
+++ b/docs/reference/auditbeat/metadata-missing.md
@@ -0,0 +1,14 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/metadata-missing.html
+---
+
+# @metadata is missing in Logstash [metadata-missing]
+
+{{ls}} outputs remove `@metadata` fields automatically. Therefore, if {{ls}} instances are chained directly or via some message queue (for example, Redis or Kafka), the `@metadata` field will not be available in the final {{ls}} instance.
+
+::::{tip}
+To preserve `@metadata` fields, use the {{ls}} mutate filter with the rename setting to rename the fields to non-internal fields.
+::::
+
+
diff --git a/docs/reference/auditbeat/monitoring-internal-collection.md b/docs/reference/auditbeat/monitoring-internal-collection.md
new file mode 100644
index 000000000000..806e9eb4245a
--- /dev/null
+++ b/docs/reference/auditbeat/monitoring-internal-collection.md
@@ -0,0 +1,73 @@
+---
+navigation_title: "Use internal collection"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/monitoring-internal-collection.html
+---
+
+# Use internal collection to send monitoring data [monitoring-internal-collection]
+
+
+Use internal collectors to send {{beats}} monitoring data directly to your monitoring cluster. Alternatively, use [{{metricbeat}} collection](/reference/auditbeat/monitoring-metricbeat-collection.md). The benefit of using internal collection instead of {{metricbeat}} is that you have fewer pieces of software to install and maintain.
+
+1. Create an API key or user that has appropriate authority to send system-level monitoring data to {{es}}. For example, you can use the built-in `beats_system` user or assign the built-in `beats_system` role to another user. For more information on the required privileges, see [Create a *monitoring* user](/reference/auditbeat/privileges-to-publish-monitoring.md). For more information on how to use API keys, see [*Grant access using API keys*](/reference/auditbeat/beats-api-keys.md).
+2. Add the `monitoring` settings in the Auditbeat configuration file. If you configured the {{es}} output and want to send Auditbeat monitoring events to the same {{es}} cluster, specify the following minimal configuration:
+
+    ```yaml
+    monitoring:
+      enabled: true
+      elasticsearch:
+        api_key: id:api_key <1>
+        username: beats_system
+        password: somepassword
+    ```
+
+    1. Specify one of `api_key` or `username`/`password`.
+
+
+    If you want to send monitoring events to an [{{ecloud}}](https://cloud.elastic.co/) monitoring cluster, you can use two simpler settings.
When defined, these settings overwrite settings from other parts in the configuration. For example: + + ```yaml + monitoring: + enabled: true + cloud.id: 'staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw==' + cloud.auth: 'elastic:{pwd}' + ``` + + If you configured a different output, such as {{ls}} or you want to send Auditbeat monitoring events to a separate {{es}} cluster (referred to as the *monitoring cluster*), you must specify additional configuration options. For example: + + ```yaml + monitoring: + enabled: true + cluster_uuid: PRODUCTION_ES_CLUSTER_UUID <1> + elasticsearch: + hosts: ["https://example.com:9200", "https://example2.com:9200"] <2> + api_key: id:api_key <3> + username: beats_system + password: somepassword + ``` + + 1. This setting identifies the {{es}} cluster under which the monitoring data for this Auditbeat instance will appear in the Stack Monitoring UI. To get a cluster’s `cluster_uuid`, call the `GET /` API against that production cluster. + 2. This setting identifies the hosts and port numbers of {{es}} nodes that are part of the monitoring cluster. + 3. Specify one of `api_key` or `username`/`password`. + + + If you want to use PKI authentication to send monitoring events to {{es}}, you must specify a different set of configuration options. For example: + + ```yaml + monitoring: + enabled: true + cluster_uuid: PRODUCTION_ES_CLUSTER_UUID + elasticsearch: + hosts: ["https://example.com:9200", "https://example2.com:9200"] + username: "" + ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] + ssl.certificate: "/etc/pki/client/cert.pem" + ssl.key: "/etc/pki/client/cert.key" + ``` + + You must specify the `username` as `""` explicitly so that the username from the client certificate (`CN`) is used. See [SSL](/reference/auditbeat/configuration-ssl.md) for more information about SSL settings. + +3. Start Auditbeat. +4. [View the monitoring data in {{kib}}](docs-content://deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md). + + diff --git a/docs/reference/auditbeat/monitoring-metricbeat-collection.md b/docs/reference/auditbeat/monitoring-metricbeat-collection.md new file mode 100644 index 000000000000..5b5b03831125 --- /dev/null +++ b/docs/reference/auditbeat/monitoring-metricbeat-collection.md @@ -0,0 +1,163 @@ +--- +navigation_title: "Use {{metricbeat}} collection" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/monitoring-metricbeat-collection.html +--- + +# Use {{metricbeat}} to send monitoring data [monitoring-metricbeat-collection] + + +In 7.3 and later, you can use {{metricbeat}} to collect data about Auditbeat and ship it to the monitoring cluster. The benefit of using {{metricbeat}} instead of internal collection is that the monitoring agent remains active even if the Auditbeat instance dies. + +To collect and ship monitoring data: + +1. [Configure the shipper you want to monitor](#configure-shipper) +2. [Install and configure {{metricbeat}} to collect monitoring data](#configure-metricbeat) + + +## Configure the shipper you want to monitor [configure-shipper] + +1. Enable the HTTP endpoint to allow external collection of monitoring data: + + Add the following setting in the Auditbeat configuration file (`auditbeat.yml`): + + ```yaml + http.enabled: true + ``` + + By default, metrics are exposed on port 5066. 
If you need to monitor multiple {{beats}} shippers running on the same server, set `http.port` to expose metrics for each shipper on a different port number: + + ```yaml + http.port: 5067 + ``` + +2. Disable the default collection of Auditbeat monitoring metrics.
+ + Add the following setting in the Auditbeat configuration file (`auditbeat.yml`): + + ```yaml + monitoring.enabled: false + ``` + + For more information, see [Monitoring configuration options](/reference/auditbeat/configuration-monitor.md). + +3. Configure host (optional).
+
+    If you intend to get metrics using {{metricbeat}} installed on another server, you need to bind Auditbeat to the host’s IP address:
+
+    ```yaml
+    http.host: xxx.xxx.xxx.xxx
+    ```
+
+4. Configure cluster UUID.
+ + The cluster UUID is necessary if you want to see {{beats}} monitoring in the {{kib}} stack monitoring view. The monitoring data will be grouped under the cluster for that UUID. To associate Auditbeat with the cluster UUID, set: + + ```yaml + monitoring.cluster_uuid: "cluster-uuid" + ``` + +5. Start Auditbeat. + + +## Install and configure {{metricbeat}} to collect monitoring data [configure-metricbeat] + +1. Install {{metricbeat}} on the same server as Auditbeat. To learn how, see [Get started with {{metricbeat}}](/reference/metricbeat/metricbeat-installation-configuration.md). If you already have {{metricbeat}} installed on the server, skip this step. +2. Enable the `beat-xpack` module in {{metricbeat}}.
+ + For example, to enable the default configuration in the `modules.d` directory, run the following command, using the correct command syntax for your OS: + + ```sh + metricbeat modules enable beat-xpack + ``` + + For more information, see [Configure modules](/reference/metricbeat/configuration-metricbeat.md) and [beat module](/reference/metricbeat/metricbeat-module-beat.md). + +3. Configure the `beat-xpack` module in {{metricbeat}}.
+
+    The `modules.d/beat-xpack.yml` file contains the following settings:
+
+    ```yaml
+    - module: beat
+      metricsets:
+        - stats
+        - state
+      period: 10s
+      hosts: ["http://localhost:5066"]
+      #username: "user"
+      #password: "secret"
+      xpack.enabled: true
+    ```
+
+    Set the `hosts`, `username`, and `password` settings as required by your environment. For other module settings, it’s recommended that you accept the defaults.
+
+    By default, the module collects Auditbeat monitoring data from `localhost:5066`. If you exposed the metrics on a different host or port when you enabled the HTTP endpoint, update the `hosts` setting.
+
+    To monitor multiple {{beats}} agents, specify a list of hosts, for example:
+
+    ```yaml
+    hosts: ["http://localhost:5066","http://localhost:5067","http://localhost:5068"]
+    ```
+
+    If you configured Auditbeat to use encrypted communications, you must access it via HTTPS. For example, use a `hosts` setting like `https://localhost:5066`.
+
+    If the Elastic {{security-features}} are enabled, you must also provide a user ID and password so that {{metricbeat}} can collect metrics successfully:
+
+    1. Create a user on the {{es}} cluster that has the `remote_monitoring_collector` [built-in role](elasticsearch://reference/elasticsearch/roles.md). Alternatively, if it’s available in your environment, use the `remote_monitoring_user` [built-in user](docs-content://deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md).
+    2. Add the `username` and `password` settings to the beat module configuration file.
+
+4. Optional: Disable the system module in {{metricbeat}}.
+
+    By default, the [system module](/reference/metricbeat/metricbeat-module-system.md) is enabled. The information it collects, however, is not shown on the **Stack Monitoring** page in {{kib}}. Unless you want to use that information for other purposes, run the following command:
+
+    ```sh
+    metricbeat modules disable system
+    ```
+
+5. Identify where to send the monitoring data.
+
+    ::::{tip}
+    In production environments, we strongly recommend using a separate cluster (referred to as the *monitoring cluster*) to store the data. Using a separate monitoring cluster prevents production cluster outages from impacting your ability to access your monitoring data. It also prevents monitoring activities from impacting the performance of your production cluster.
+    ::::
+
+
+    For example, specify the {{es}} output information in the {{metricbeat}} configuration file (`metricbeat.yml`):
+
+    ```yaml
+    output.elasticsearch:
+      # Array of hosts to connect to.
+      hosts: ["http://es-mon-1:9200", "http://es-mon-2:9200"] <1>
+
+      # Optional protocol and basic auth credentials.
+      #protocol: "https"
+      #api_key: "id:api_key" <2>
+      #username: "elastic"
+      #password: "changeme"
+    ```
+
+    1. In this example, the data is stored on a monitoring cluster with nodes `es-mon-1` and `es-mon-2`.
+    2. Specify one of `api_key` or `username`/`password`.
+
+
+    If you configured the monitoring cluster to use encrypted communications, you must access it via HTTPS. For example, use a `hosts` setting like `https://es-mon-1:9200`.
+
+    ::::{important}
+    The {{es}} {{monitor-features}} use ingest pipelines. The cluster that stores the monitoring data must have at least one node with the `ingest` role.
+    ::::
+
+
+    If the {{es}} {{security-features}} are enabled on the monitoring cluster, you must provide a valid user ID and password so that {{metricbeat}} can send metrics successfully:
+
+    1. Create a user on the monitoring cluster that has the `remote_monitoring_agent` [built-in role](elasticsearch://reference/elasticsearch/roles.md). Alternatively, if it’s available in your environment, use the `remote_monitoring_user` [built-in user](docs-content://deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md).
+
+        ::::{tip}
+        If you’re using index lifecycle management, the remote monitoring user requires additional privileges to create and read indices. For more information, see [*Grant users access to secured resources*](/reference/auditbeat/feature-roles.md).
+        ::::
+
+    2. Add the `username` and `password` settings to the {{es}} output information in the {{metricbeat}} configuration file.
+
+    For more information about these configuration options, see [Configure the {{es}} output](/reference/metricbeat/elasticsearch-output.md).
+
+6. [Start {{metricbeat}}](/reference/metricbeat/metricbeat-starting.md) to begin collecting monitoring data.
+7. [View the monitoring data in {{kib}}](docs-content://deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md).
+
diff --git a/docs/reference/auditbeat/monitoring-shows-fewer-than-expected-beats.md b/docs/reference/auditbeat/monitoring-shows-fewer-than-expected-beats.md
new file mode 100644
index 000000000000..f6e2ea279057
--- /dev/null
+++ b/docs/reference/auditbeat/monitoring-shows-fewer-than-expected-beats.md
@@ -0,0 +1,9 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/monitoring-shows-fewer-than-expected-beats.html
+---
+
+# Monitoring UI shows fewer Beats than expected [monitoring-shows-fewer-than-expected-beats]
+
+If you are running multiple Beat instances on the same host, make sure they each have a distinct `path.data` value.
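+
+For example, a minimal sketch of running a second instance with its own data path (the config name and path are illustrative):
+
+```sh
+# Give the second Auditbeat instance a dedicated data directory so the
+# two instances do not share the same registry and metadata.
+./auditbeat -c auditbeat2.yml -E path.data=/var/lib/auditbeat-2
+```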
+
diff --git a/docs/reference/auditbeat/monitoring.md b/docs/reference/auditbeat/monitoring.md
new file mode 100644
index 000000000000..b8207b724b55
--- /dev/null
+++ b/docs/reference/auditbeat/monitoring.md
@@ -0,0 +1,16 @@
+---
+navigation_title: "Monitor"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/monitoring.html
+---
+
+# Monitor Auditbeat [monitoring]
+
+
+You can use the {{stack}} {{monitor-features}} to gain insight into the health of Auditbeat instances running in your environment.
+
+To monitor Auditbeat, make sure monitoring is enabled on your {{es}} cluster, then configure the method used to collect Auditbeat metrics. You can use one of the following methods:
+
+* [Internal collection](/reference/auditbeat/monitoring-internal-collection.md) - Internal collectors send monitoring data directly to your monitoring cluster.
+* [{{metricbeat}} collection](/reference/auditbeat/monitoring-metricbeat-collection.md) - {{metricbeat}} collects monitoring data from your Auditbeat instance and sends it directly to your monitoring cluster.
+
diff --git a/docs/reference/auditbeat/move-fields.md b/docs/reference/auditbeat/move-fields.md
new file mode 100644
index 000000000000..fa7c4eacbc8d
--- /dev/null
+++ b/docs/reference/auditbeat/move-fields.md
@@ -0,0 +1,93 @@
+---
+navigation_title: "move_fields"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/move-fields.html
+---
+
+# Move fields [move-fields]
+
+
+The `move_fields` processor moves event fields from one object into another. It can also rearrange fields or add a prefix to fields.
+
+The processor extracts fields from `from`, then uses `fields` and `exclude` as filters to choose which fields to move into the `to` field.
+
+For example, given the following event:
+
+```json
+{
+  "app": {
+    "method": "a",
+    "elapsed_time": 100,
+    "user_id": 100,
+    "message": "i'm a message"
+  }
+}
+```
+
+To move `method` and `elapsed_time` into another object, use this configuration:
+
+```yaml
+processors:
+  - move_fields:
+      from: "app"
+      fields: ["method", "elapsed_time"]
+      to: "rpc."
+```
+
+Your final event will be:
+
+```json
+{
+  "app": {
+    "user_id": 100,
+    "message": "i'm a message",
+    "rpc": {
+      "method": "a",
+      "elapsed_time": 100
+    }
+  }
+}
+```
+
+To add a prefix to the whole event:
+
+```json
+{
+  "app": { "method": "a"},
+  "cost": 100
+}
+```
+
+Use this configuration:
+
+```yaml
+processors:
+  - move_fields:
+      to: "my_prefix_"
+```
+
+Your final event will be:
+
+```json
+{
+  "my_prefix_app": { "method": "a"},
+  "my_prefix_cost": 100
+}
+```
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `from` | no | | The field to extract other fields from. This field and any nested fields will be moved into `to` unless they are filtered out. If empty, the event root is used. |
+| `fields` | no | | Which fields to extract from `from` and move to `to`. An empty list indicates all fields. |
+| `ignore_missing` | no | false | Ignore "not found" errors when extracting fields. |
+| `exclude` | no | | A list of fields to exclude and not move. |
+| `to` | yes | | The destination field or field prefix that the extracted fields are moved into, relative to the event root. |
+
+```yaml
+processors:
+  - move_fields:
+      from: "app"
+      fields: [ "method", "elapsed_time" ]
+      to: "rpc."
+```
+
diff --git a/docs/reference/auditbeat/privileges-to-publish-events.md b/docs/reference/auditbeat/privileges-to-publish-events.md
new file mode 100644
index 000000000000..9f3dd37d0ba3
--- /dev/null
+++ b/docs/reference/auditbeat/privileges-to-publish-events.md
@@ -0,0 +1,37 @@
+---
+navigation_title: "Create a _publishing_ user"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/privileges-to-publish-events.html
+---
+
+# Grant privileges and roles needed for publishing [privileges-to-publish-events]
+
+
+Users who publish events to {{es}} need to create and write to Auditbeat indices. To minimize the privileges required by the writer role, use the [setup role](/reference/auditbeat/privileges-to-setup-beats.md) to pre-load dependencies. This section assumes that you’ve run the setup.
+
+When using ILM, turn off the ILM setup check in the Auditbeat config file before running Auditbeat to publish events:
+
+```yaml
+setup.ilm.check_exists: false
+```
+
+To grant the required privileges:
+
+1. Create a **writer role**, called something like `auditbeat_writer`, that has the following privileges:
+
+    ::::{note}
+    The `monitor` cluster privilege and the `create_doc` and `auto_configure` privileges on `auditbeat-*` indices are required in every configuration.
+    ::::
+
+
+    | Type | Privilege | Purpose |
+    | --- | --- | --- |
+    | Cluster | `monitor` | Retrieve cluster details (e.g. version) |
+    | Cluster | `read_ilm` | Read the ILM policy when connecting to clusters that support ILM. Not needed when `setup.ilm.check_exists` is `false`. |
+    | Index | `create_doc` on `auditbeat-*` indices | Write events into {{es}} |
+    | Index | `auto_configure` on `auditbeat-*` indices | Update the data stream mapping. Consider either disabling this entirely or adding the rule `-{{beat_default_index_prefix}}-*` to the [action.auto_create_index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create) cluster setting to prevent unwanted index creation from the agents. |
+
+    Omit any privileges that aren’t relevant in your environment.
+
+2. Assign the **writer role** to users who will index events into {{es}}.
+
diff --git a/docs/reference/auditbeat/privileges-to-publish-monitoring.md b/docs/reference/auditbeat/privileges-to-publish-monitoring.md
new file mode 100644
index 000000000000..fd04644d5f63
--- /dev/null
+++ b/docs/reference/auditbeat/privileges-to-publish-monitoring.md
@@ -0,0 +1,61 @@
+---
+navigation_title: "Create a _monitoring_ user"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/privileges-to-publish-monitoring.html
+---
+
+# Grant privileges and roles needed for monitoring [privileges-to-publish-monitoring]
+
+
+{{es-security-features}} provides built-in users and roles for monitoring. The privileges and roles needed depend on the method used to collect monitoring data.
+
+::::{admonition} Important note for {{ecloud}} users
+:class: important
+
+Built-in users are not available when running our [hosted {{ess}}](https://www.elastic.co/cloud/elasticsearch-service) on {{ecloud}}. To send monitoring data securely, create a monitoring user and grant it the roles described in the following sections.
+ +:::: + + +* If you’re using [internal collection](/reference/auditbeat/monitoring-internal-collection.md) to collect metrics about Auditbeat, {{es-security-features}} provides the `beats_system` [built-in user](docs-content://deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md) and `beats_system` [built-in role](elasticsearch://reference/elasticsearch/roles.md) to send monitoring information. You can use the built-in user, if it’s available in your environment, or create a user who has the privileges needed to send monitoring information. + + If you use the `beats_system` user, make sure you set the password. + + If you don’t use the `beats_system` user: + + 1. Create a **monitoring role**, called something like `auditbeat_monitoring`, that has the following privileges: + + | Type | Privilege | Purpose | + | --- | --- | --- | + | Cluster | `monitor` | Retrieve cluster details (e.g. version) | + | Index | `create_index` on `.monitoring-beats-*` indices | Create monitoring indices in {{es}} | + | Index | `create_doc` on `.monitoring-beats-*` indices | Write monitoring events into {{es}} | + + 2. Assign the **monitoring role**, along with the following built-in roles, to users who need to monitor Auditbeat: + + | Role | Purpose | + | --- | --- | + | `kibana_admin` | Use {{kib}} | + | `monitoring_user` | Use **Stack Monitoring** in {{kib}} to monitor Auditbeat | + +* If you’re [using {{metricbeat}}](/reference/auditbeat/monitoring-metricbeat-collection.md) to collect metrics about Auditbeat, {{es-security-features}} provides the `remote_monitoring_user` [built-in user](docs-content://deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md), and the `remote_monitoring_collector` and `remote_monitoring_agent` [built-in roles](elasticsearch://reference/elasticsearch/roles.md) for collecting and sending monitoring information. You can use the built-in user, if it’s available in your environment, or create a user who has the privileges needed to collect and send monitoring information. + + If you use the `remote_monitoring_user` user, make sure you set the password. + + If you don’t use the `remote_monitoring_user` user: + + 1. Create a user on the production cluster who will collect and send monitoring information. + 2. Assign the following roles to the user: + + | Role | Purpose | + | --- | --- | + | `remote_monitoring_collector` | Collect monitoring metrics from Auditbeat | + | `remote_monitoring_agent` | Send monitoring data to the monitoring cluster | + + 3. Assign the following role to users who will view the monitoring data in {{kib}}: + + | Role | Purpose | + | --- | --- | + | `monitoring_user` | Use **Stack Monitoring** in {{kib}} to monitor Auditbeat | + + diff --git a/docs/reference/auditbeat/privileges-to-setup-beats.md b/docs/reference/auditbeat/privileges-to-setup-beats.md new file mode 100644 index 000000000000..bd7676a93ee8 --- /dev/null +++ b/docs/reference/auditbeat/privileges-to-setup-beats.md @@ -0,0 +1,42 @@ +--- +navigation_title: "Create a _setup_ user" +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/privileges-to-setup-beats.html +--- + +# Grant privileges and roles needed for setup [privileges-to-setup-beats] + + +::::{important} +Setting up Auditbeat is an admin-level task that requires extra privileges. As a best practice, grant the setup role to administrators only, and use a more restrictive role for event publishing. 
+::::
+
+
+Administrators who set up Auditbeat typically need to load mappings, dashboards, and other objects used to index data into {{es}} and visualize it in {{kib}}.
+
+To grant users the required privileges:
+
+1. Create a **setup role**, called something like `auditbeat_setup`, that has the following privileges:
+
+    | Type | Privilege | Purpose |
+    | --- | --- | --- |
+    | Cluster | `monitor` | Retrieve cluster details (e.g. version) |
+    | Cluster | `manage_ilm` | Set up and manage index lifecycle management (ILM) policy |
+    | Index | `manage` on `auditbeat-*` indices | Load data stream |
+
+    Omit any privileges that aren’t relevant in your environment.
+
+    ::::{note}
+    These instructions assume that you are using the default name for Auditbeat indices. If `auditbeat-*` is not listed, or you are using a custom name, enter it manually and modify the privileges to match your index naming pattern.
+    ::::
+
+2. Assign the **setup role**, along with the following built-in roles, to users who need to set up Auditbeat:
+
+    | Role | Purpose |
+    | --- | --- |
+    | `kibana_admin` | Load dependencies, such as example dashboards, if available, into {{kib}} |
+    | `ingest_admin` | Set up index templates and, if available, ingest pipelines |
+
+    Omit any roles that aren’t relevant in your environment.
+
+
diff --git a/docs/reference/auditbeat/processor-dns.md b/docs/reference/auditbeat/processor-dns.md
new file mode 100644
index 000000000000..8f3587ddcfdb
--- /dev/null
+++ b/docs/reference/auditbeat/processor-dns.md
@@ -0,0 +1,102 @@
+---
+navigation_title: "dns"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/processor-dns.html
+---
+
+# DNS Reverse Lookup [processor-dns]
+
+
+The `dns` processor performs DNS queries. It caches the responses that it receives in accordance with the time-to-live (TTL) value contained in the response. It also caches failures that occur during lookups. Each instance of this processor maintains its own independent cache.
+
+The processor uses its own DNS resolver to send requests to nameservers and does not use the operating system’s resolver. It does not read any values contained in `/etc/hosts`.
+
+This processor can significantly slow down your pipeline’s throughput if you have a high latency network or a slow upstream nameserver. The cache will help with performance, but if the addresses being resolved have a high cardinality, the cache benefits will be diminished due to the high miss ratio.
+
+By way of example, if each DNS lookup takes 2 milliseconds, the maximum throughput you can achieve is 500 events per second (1000 milliseconds / 2 milliseconds). If you have a high cache hit ratio, your throughput can be higher.
+
+The processor can send the following query types:
+
+* `A` - IPv4 addresses
+* `AAAA` - IPv6 addresses
+* `TXT` - arbitrary human-readable text data
+* `PTR` - reverse IP address lookups
+
+The output value is a list of strings for all query types except `PTR`. For `PTR` queries the output value is a string.
+
+This is a minimal configuration example that resolves the IP addresses contained in two fields.
+
+```yaml
+processors:
+  - dns:
+      type: reverse
+      fields:
+        source.ip: source.domain
+        destination.ip: destination.domain
+```
+
+Next is a configuration example showing all options.
+
+```yaml
+processors:
+- dns:
+    type: reverse
+    action: append
+    transport: tls
+    fields:
+      server.ip: server.domain
+      client.ip: client.domain
+    success_cache:
+      capacity.initial: 1000
+      capacity.max: 10000
+      min_ttl: 1m
+    failure_cache:
+      capacity.initial: 1000
+      capacity.max: 10000
+      ttl: 1m
+    nameservers: ['192.0.2.1', '203.0.113.1']
+    timeout: 500ms
+    tag_on_failure: [_dns_reverse_lookup_failed]
+```
+
+The `dns` processor has the following configuration settings:
+
+`type`
+: The type of DNS query to perform. The supported types are `A`, `AAAA`, `PTR` (or `reverse`), and `TXT`.
+
+`action`
+: This defines the behavior of the processor when the target field already exists in the event. The options are `append` (default) and `replace`.
+
+`fields`
+: This is a mapping of source field names to target field names. The value of the source field will be used in the DNS query, and the result will be written to the target field.
+
+`success_cache.capacity.initial`
+: The initial number of items that the success cache will be allocated to hold. When initialized the processor will allocate the memory for this number of items. Default value is `1000`.
+
+`success_cache.capacity.max`
+: The maximum number of items that the success cache can hold. When the maximum capacity is reached a random item is evicted. Default value is `10000`.
+
+`success_cache.min_ttl`
+: The duration of the minimum alternative cache TTL for successful DNS responses. Ensures that `TTL=0` successful reverse DNS responses can be cached. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Default value is `1m`.
+
+`failure_cache.capacity.initial`
+: The initial number of items that the failure cache will be allocated to hold. When initialized the processor will allocate the memory for this number of items. Default value is `1000`.
+
+`failure_cache.capacity.max`
+: The maximum number of items that the failure cache can hold. When the maximum capacity is reached a random item is evicted. Default value is `10000`.
+
+`failure_cache.ttl`
+: The duration for which failures are cached. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Default value is `1m`.
+
+`nameservers`
+: A list of nameservers to query. If there are multiple servers, the resolver queries them in the order listed. If none are specified, it will read the nameservers listed in `/etc/resolv.conf` once at initialization. On Windows you must always supply at least one nameserver.
+
+`timeout`
+: The duration after which a DNS query will time out. This is the timeout for each DNS request, so if you have 2 nameservers, the total timeout will be 2 times this value. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Default value is `500ms`.
+
+`tag_on_failure`
+: A list of tags to add to the event when any lookup fails. The tags are only added once even if multiple lookups fail. By default, no tags are added upon failure.
+
+`transport`
+: The type of transport connection to use, either `tls` (DNS over TLS) or `udp`. Defaults to `udp`.
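+
+As a complementary sketch, a forward lookup works the same way with a non-`reverse` type. Here a hostname field is resolved to its IPv4 addresses (the field names and nameserver are illustrative); because the type is `A`, the target field receives a list of strings:
+
+```yaml
+processors:
+  - dns:
+      type: A
+      fields:
+        server.domain: server.ip_addresses # hostname -> list of resolved IPv4 addresses
+      nameservers: ['192.0.2.1']
+      timeout: 500ms
+```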
+
diff --git a/docs/reference/auditbeat/processor-registered-domain.md b/docs/reference/auditbeat/processor-registered-domain.md
new file mode 100644
index 000000000000..db232ba58986
--- /dev/null
+++ b/docs/reference/auditbeat/processor-registered-domain.md
@@ -0,0 +1,36 @@
+---
+navigation_title: "registered_domain"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/processor-registered-domain.html
+---
+
+# Registered Domain [processor-registered-domain]
+
+
+The `registered_domain` processor reads a field containing a hostname and then writes the "registered domain" contained in the hostname to the target field. For example, given `www.google.co.uk` the processor would output `google.co.uk`. In other words, the "registered domain" is the effective top-level domain (`co.uk`) plus one level (`google`). Optionally, it can store the rest of the domain, the subdomain, into another target field.
+
+This processor uses the Mozilla Public Suffix list to determine the value.
+
+```yaml
+processors:
+  - registered_domain:
+      field: dns.question.name
+      target_field: dns.question.registered_domain
+      target_etld_field: dns.question.top_level_domain
+      target_subdomain_field: dns.question.subdomain
+      ignore_missing: true
+      ignore_failure: true
+```
+
+The `registered_domain` processor has the following configuration settings:
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `field` | yes | | Source field containing a fully qualified domain name (FQDN). |
+| `target_field` | yes | | Target field for the registered domain value. |
+| `target_etld_field` | no | | Target field for the effective top-level domain value. |
+| `target_subdomain_field` | no | | Target field for the subdomain value. |
+| `ignore_missing` | no | false | Ignore errors when the source field is missing. |
+| `ignore_failure` | no | false | Ignore all errors produced by the processor. |
+| `id` | no | | An identifier for this processor instance. Useful for debugging. |
+
diff --git a/docs/reference/auditbeat/processor-translate-guid.md b/docs/reference/auditbeat/processor-translate-guid.md
new file mode 100644
index 000000000000..d6595dd7b246
--- /dev/null
+++ b/docs/reference/auditbeat/processor-translate-guid.md
@@ -0,0 +1,79 @@
+---
+navigation_title: "translate_ldap_attribute"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/processor-translate-guid.html
+---
+
+# Translate GUID [processor-translate-guid]
+
+
+The `translate_ldap_attribute` processor translates one LDAP attribute into another. It is typically used to translate Active Directory globally unique identifiers (GUIDs) into their common names.
+
+Every object on an Active Directory or an LDAP server is issued a GUID. Internal processes refer to an object’s GUID rather than its name, and these values sometimes appear in logs.
+
+If the search attribute is invalid (malformed) or does not map to any object on the domain, the processor returns an error unless `ignore_failure` is set.
+
+The result of this operation is an array of values, given that a single attribute can hold multiple values.
+
+Note: the search attribute is expected to map to a single object. If it doesn’t, no error will be returned, but only the results of the first entry will be added to the event.
+
+```yaml
+processors:
+  - translate_ldap_attribute:
+      field: winlog.event_data.ObjectGuid
+      ldap_address: "ldap://"
+      ldap_base_dn: "dc=example,dc=com"
+      ignore_missing: true
+      ignore_failure: true
+```
+
+The `translate_ldap_attribute` processor has the following configuration settings:
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `field` | yes | | Source field containing a GUID. |
+| `target_field` | no | | Target field for the mapped attribute value. If not set it will be replaced in place. |
+| `ldap_address` | yes | | LDAP server address. eg: `ldap://ds.example.com:389` |
+| `ldap_base_dn` | yes | | LDAP base DN. eg: `dc=example,dc=com` |
+| `ldap_bind_user` | no | | LDAP user. |
+| `ldap_bind_password` | no | | LDAP password. |
+| `ldap_search_attribute` | yes | `objectGUID` | LDAP attribute to search by. |
+| `ldap_mapped_attribute` | yes | `cn` | LDAP attribute to map to. |
+| `ldap_search_time_limit` | no | 30 | LDAP search time limit in seconds. |
+| `ldap_ssl`* | no | | LDAP TLS/SSL connection settings. |
+| `ignore_missing` | no | false | Ignore errors when the source field is missing. |
+| `ignore_failure` | no | false | Ignore all errors produced by the processor. |
+
+* Also see [SSL](/reference/auditbeat/configuration-ssl.md) for a full description of the `ldap_ssl` options.
+
+If the searches are slow or you expect a high number of distinct key attributes to be found, consider using a cache processor to speed up processing:
+
+```yaml
+processors:
+  - cache:
+      backend:
+        memory:
+          id: ldapguids
+      get:
+        key_field: winlog.event_data.ObjectGuid
+        target_field: winlog.common_name
+      ignore_missing: true
+  - if:
+      not:
+        has_fields: ['winlog.common_name']
+    then:
+      - translate_ldap_attribute:
+          field: winlog.event_data.ObjectGuid
+          target_field: winlog.common_name
+          ldap_address: "ldap://"
+          ldap_base_dn: "dc=example,dc=com"
+  - cache:
+      backend:
+        memory:
+          id: ldapguids
+      capacity: 10000
+      put:
+        key_field: winlog.event_data.ObjectGuid
+        value_field: winlog.common_name
+```
+
diff --git a/docs/reference/auditbeat/processor-translate-sid.md b/docs/reference/auditbeat/processor-translate-sid.md
new file mode 100644
index 000000000000..b6d1bef8e860
--- /dev/null
+++ b/docs/reference/auditbeat/processor-translate-sid.md
@@ -0,0 +1,38 @@
+---
+navigation_title: "translate_sid"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/processor-translate-sid.html
+---
+
+# Translate SID [processor-translate-sid]
+
+
+The `translate_sid` processor translates a Windows security identifier (SID) into an account name. It retrieves the name of the account associated with the SID, the first domain on which the SID is found, and the type of account. This is only available on Windows.
+
+Every account on a network is issued a unique SID when the account is first created. Internal processes in Windows refer to an account’s SID rather than the account’s user or group name, and these values sometimes appear in logs.
+
+If the SID is invalid (malformed) or does not map to any account on the local system or domain, the processor returns an error unless `ignore_failure` is set.
```yaml
processors:
  - translate_sid:
      field: winlog.event_data.MemberSid
      account_name_target: user.name
      domain_target: user.domain
      ignore_missing: true
      ignore_failure: true
```

The `translate_sid` processor has the following configuration settings:

| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `field` | yes | | Source field containing a Windows security identifier (SID). |
| `account_name_target` | yes* | | Target field for the account name value. |
| `account_type_target` | yes* | | Target field for the account type value. |
| `domain_target` | yes* | | Target field for the domain value. |
| `ignore_missing` | no | false | Ignore errors when the source field is missing. |
| `ignore_failure` | no | false | Ignore all errors produced by the processor. |

* At least one of `account_name_target`, `account_type_target`, and `domain_target` must be configured.

 diff --git a/docs/reference/auditbeat/publishing-ls-fails-connection-reset-by-peer.md b/docs/reference/auditbeat/publishing-ls-fails-connection-reset-by-peer.md new file mode 100644 index 000000000000..4384fac7f1a2 --- /dev/null +++ b/docs/reference/auditbeat/publishing-ls-fails-connection-reset-by-peer.md @@ -0,0 +1,18 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/publishing-ls-fails-connection-reset-by-peer.html
---

# Publishing to Logstash fails with "connection reset by peer" message [publishing-ls-fails-connection-reset-by-peer]

Auditbeat requires a persistent TCP connection to {{ls}}. If a firewall interferes with the connection, you might see errors like this:

```shell
Failed to publish events caused by: write tcp ... write: connection reset by peer
```

To solve the problem:

* make sure the firewall is not closing connections between Auditbeat and {{ls}}, or
* set the `ttl` value in the [{{ls}} output](/reference/auditbeat/logstash-output.md) to a value that’s lower than the maximum time allowed by the firewall, and set `pipelining` to 0 (pipelining cannot be enabled when `ttl` is used).

 diff --git a/docs/reference/auditbeat/rate-limit.md b/docs/reference/auditbeat/rate-limit.md new file mode 100644 index 000000000000..730f0715174b --- /dev/null +++ b/docs/reference/auditbeat/rate-limit.md @@ -0,0 +1,43 @@
---
navigation_title: "rate_limit"
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/rate-limit.html
---

# Rate limit the flow of events [rate-limit]


The `rate_limit` processor limits the throughput of events based on the specified configuration.

In the current implementation, rate-limited events are dropped. Future implementations may allow rate-limited events to be handled differently.

```yaml
processors:
- rate_limit:
    limit: "10000/m"
```

```yaml
processors:
- rate_limit:
    fields:
    - "cloudfoundry.org.name"
    limit: "400/s"
```

```yaml
processors:
- if.equals.cloudfoundry.org.name: "acme"
  then:
  - rate_limit:
      limit: "500/s"
```

The following settings are supported:

`limit`
: The rate limit. Supported time units for the rate are `s` (per second), `m` (per minute), and `h` (per hour).

`fields`
: (Optional) List of fields. The rate limit is applied to each distinct value derived by combining the values of these fields.
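To make the `fields` behavior concrete, here is a minimal sketch that keeps a separate 100-events-per-second budget for each distinct host/process combination (the field names are illustrative, not required by the processor):

```yaml
processors:
  - rate_limit:
      fields:
        - host.name      # budget tracked per host...
        - process.name   # ...and per process on that host
      limit: "100/s"
```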
+ diff --git a/docs/reference/auditbeat/redis-output.md b/docs/reference/auditbeat/redis-output.md new file mode 100644 index 000000000000..8c5ac0aebf5b --- /dev/null +++ b/docs/reference/auditbeat/redis-output.md @@ -0,0 +1,209 @@
---
navigation_title: "Redis"
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/redis-output.html
---

# Configure the Redis output [redis-output]


The Redis output inserts the events into a Redis list or a Redis channel. This output plugin is compatible with the [Redis input plugin](logstash://reference/plugins-inputs-redis.md) for Logstash.

To use this output, edit the Auditbeat configuration file to disable the {{es}} output by commenting it out, and enable the Redis output by adding `output.redis`.

Example configuration:

```yaml
output.redis:
  hosts: ["localhost"]
  password: "my_password"
  key: "auditbeat"
  db: 0
  timeout: 5
```

## Compatibility [_compatibility_3]

This output is expected to work with all Redis versions between 3.2.4 and 5.0.8. Other versions might work as well, but are not supported.


## Configuration options [_configuration_options_5]

You can specify the following `output.redis` options in the `auditbeat.yml` config file:

### `enabled` [_enabled_4]

The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled.

The default value is `true`.


### `hosts` [_hosts_2]

The list of Redis servers to connect to. If load balancing is enabled, the events are distributed to the servers in the list. If one server becomes unreachable, the events are distributed to the reachable servers only. You can define each Redis server by specifying `HOST` or `HOST:PORT` (for example, `"192.15.3.2"` or `"test.redis.io:12345"`); if you don’t specify a port number, the value configured by `port` is used. You can also define each server as a `URL` (for example, `redis://localhost:6379` or `rediss://localhost:6379`). URLs can include a server-specific password (for example, `redis://:password@localhost:6379`). The `redis` scheme disables the `ssl` settings for the host, while `rediss` enforces TLS. If `rediss` is specified and no `ssl` settings are configured, the output uses the system certificate store.


### `index` [_index]

The index name added to the events metadata for use by Logstash. The default is "auditbeat".


### `key` [key-option-redis]

The name of the Redis list or channel the events are published to. If not configured, the value of the `index` setting is used.

You can set the key dynamically by using a format string to access any event field. For example, this configuration uses a custom field, `fields.list`, to set the Redis list key. If `fields.list` is missing, `fallback` is used:

```yaml
output.redis:
  hosts: ["localhost"]
  key: "%{[fields.list]:fallback}"
```

::::{tip}
To learn how to add custom fields to events, see the [`fields`](/reference/auditbeat/configuration-general-options.md#libbeat-configuration-fields) option.
::::


See the [`keys`](#keys-option-redis) setting for other ways to set the key dynamically.


### `keys` [keys-option-redis]

An array of key selector rules. Each rule specifies the `key` to use for events that match the rule. During publishing, Auditbeat uses the first matching rule in the array. Rules can contain conditionals, format string-based fields, and name mappings.
If the `keys` setting is missing or no rule matches, the [`key`](#key-option-redis) setting is used.

Rule settings:

**`index`**
: The key format string to use. If this string contains field references, such as `%{[fields.name]}`, the fields must exist, or the rule fails.

**`mappings`**
: A dictionary that takes the value returned by `key` and maps it to a new name.

**`default`**
: The default string value to use if `mappings` does not find a match.

**`when`**
: A condition that must succeed in order to execute the current rule. All the [conditions](/reference/auditbeat/defining-processors.md#conditions) supported by processors are also supported here.

Example `keys` settings:

```yaml
output.redis:
  hosts: ["localhost"]
  key: "default_list"
  keys:
    - key: "info_list"   # send to info_list if `message` field contains INFO
      when.contains:
        message: "INFO"
    - key: "debug_list"  # send to debug_list if `message` field contains DEBUG
      when.contains:
        message: "DEBUG"
    - key: "%{[fields.list]}"
      mappings:
        http: "frontend_list"
        nginx: "frontend_list"
        mysql: "backend_list"
```


### `password` [_password_3]

The password to authenticate with. The default is no authentication.


### `db` [_db]

The Redis database number where the events are published. The default is 0.


### `datatype` [_datatype]

The Redis data type to use for publishing events. If the data type is `list`, the Redis `RPUSH` command is used and all events are added to the list with the key defined under `key`. If the data type is `channel`, the Redis `PUBLISH` command is used, meaning that all events are pushed to the pub/sub mechanism of Redis. The name of the channel is the one defined under `key`. The default value is `list`.


### `codec` [_codec_2]

Output codec configuration. If the `codec` section is missing, events are JSON encoded.

See [Change the output codec](/reference/auditbeat/configuration-output-codec.md) for more information.


### `worker` or `workers` [_worker_or_workers_2]

The number of workers to use for each host configured to publish events to Redis. Use this setting along with the `loadbalance` option. For example, if you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host).


### `loadbalance` [_loadbalance_2]

When `loadbalance: true` is set, Auditbeat connects to all configured hosts and sends data through all connections in parallel. If a connection fails, data is sent to the remaining hosts until it can be reestablished. Data will still be sent as long as Auditbeat can connect to at least one of its configured hosts.

When `loadbalance: false` is set, Auditbeat sends data to a single host at a time. The target host is chosen at random from the list of configured hosts, and all data is sent to that target until the connection fails, at which point a new target is selected. Data will still be sent as long as Auditbeat can connect to at least one of its configured hosts.

The default value is `true`.


### `timeout` [_timeout_4]

The Redis connection timeout in seconds. The default is 5 seconds.


### `backoff.init` [_backoff_init_3]

The number of seconds to wait before trying to reconnect to Redis after a network error. After waiting `backoff.init` seconds, Auditbeat tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is 1s.
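For example, here is a minimal sketch that makes the reconnection backoff explicit, combining `backoff.init` with the related `backoff.max` setting described next (the values shown are simply the defaults made explicit):

```yaml
output.redis:
  hosts: ["localhost"]
  backoff.init: 1s  # wait this long after the first network error
  backoff.max: 60s  # cap for the exponentially growing backoff timer
```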

### `backoff.max` [_backoff_max_3]

The maximum number of seconds to wait before attempting to connect to Redis after a network error. The default is 60s.


### `max_retries` [_max_retries_4]

The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped.

Set `max_retries` to a value less than 0 to retry until all events are published.

The default is 3.


### `bulk_max_size` [_bulk_max_size_3]

The maximum number of events to bulk in a single Redis request or pipeline. The default is 2048.

Events can be collected into batches. Auditbeat will split batches read from the queue that are larger than `bulk_max_size` into multiple batches.

Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.

Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.


### `ssl` [_ssl_4]

Configuration options for SSL parameters like the root CA for Redis connections guarded by SSL proxies (for example [stunnel](https://www.stunnel.org)). See [SSL](/reference/auditbeat/configuration-ssl.md) for more information.


### `proxy_url` [_proxy_url_3]

The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The value must be a URL with a scheme of `socks5://`. You cannot use a web proxy because the protocol used to communicate with Redis is not based on HTTP.

If the SOCKS5 proxy server requires client authentication, you can embed a username and password in the URL.

When using a proxy, hostnames are resolved on the proxy server instead of on the client. You can change this behavior by setting the [`proxy_use_local_resolver`](#redis-proxy-use-local-resolver) option.


### `proxy_use_local_resolver` [redis-proxy-use-local-resolver]

This option determines whether Redis hostnames are resolved locally when using a proxy. The default value is false, which means that name resolution occurs on the proxy server.


### `queue` [_queue_4]

Configuration options for the internal queue.

See [Internal queue](/reference/auditbeat/configuring-internal-queue.md) for more information.

Note: `queue` options can be set under `auditbeat.yml` or the `output` section but not both.



 diff --git a/docs/reference/auditbeat/regexp-support.md b/docs/reference/auditbeat/regexp-support.md new file mode 100644 index 000000000000..3243ae9a0685 --- /dev/null +++ b/docs/reference/auditbeat/regexp-support.md @@ -0,0 +1,111 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/regexp-support.html
---

# Regular expression support [regexp-support]

Auditbeat regular expression support is based on [RE2](https://godoc.org/regexp/syntax).

Before using a regular expression in the config file, refer to the documentation to verify that the option you are setting accepts a regular expression.

::::{note}
We recommend that you wrap regular expressions in single quotation marks to work around YAML’s string escaping rules. For example, `'^\[?[0-9][0-9]:?[0-9][0-9]|^[[:graph:]]+'`.
::::


For more examples of supported regexp patterns, see [Managing Multiline Messages](/reference/filebeat/multiline-examples.md). Although the examples pertain to Filebeat, the regexp patterns are applicable to other use cases.

The following patterns are supported:

* [Single Characters](#single-characters)
* [Composites](#composites)
* [Repetitions](#repetitions)
* [Groupings](#grouping)
* [Empty Strings](#empty-strings)
* [Escape Sequences](#escape-sequences)
* [ASCII Character Classes](#ascii-character-classes)
* [Perl Character Classes](#perl-character-classes)

| Pattern | Description |
| --- | --- |
| $$$single-characters$$$**Single Characters** | |
| `x` | single character |
| `.` | any character |
| `[xyz]` | character class |
| `[^xyz]` | negated character class |
| `[[:alpha:]]` | ASCII character class |
| `[[:^alpha:]]` | negated ASCII character class |
| `\d` | Perl character class |
| `\D` | negated Perl character class |
| `\pN` | Unicode character class (one-letter name) |
| `\p{Greek}` | Unicode character class |
| `\PN` | negated Unicode character class (one-letter name) |
| `\P{Greek}` | negated Unicode character class |
| $$$composites$$$**Composites** | |
| `xy` | `x` followed by `y` |
| `x\|y` | `x` or `y` (prefer `x`) |
| $$$repetitions$$$**Repetitions** | |
| `x*` | zero or more `x` |
| `x+` | one or more `x` |
| `x?` | zero or one `x` |
| `x{n,m}` | `n` or `n+1` or … or `m` `x`, prefer more |
| `x{n,}` | `n` or more `x`, prefer more |
| `x{n}` | exactly `n` `x` |
| `x*?` | zero or more `x`, prefer fewer |
| `x+?` | one or more `x`, prefer fewer |
| `x??` | zero or one `x`, prefer zero |
| `x{n,m}?` | `n` or `n+1` or … or `m` `x`, prefer fewer |
| `x{n,}?` | `n` or more `x`, prefer fewer |
| `x{n}?` | exactly `n` `x` |
| $$$grouping$$$**Grouping** | |
| `(re)` | numbered capturing group (submatch) |
| `(?P<name>re)` | named & numbered capturing group (submatch) |
| `(?:re)` | non-capturing group |
| `(?i)abc` | set flags within current group, non-capturing |
| `(?i:re)` | set flags during re, non-capturing |
| `(?i)PaTTeRN` | case-insensitive (default false) |
| `(?m)multiline` | multi-line mode: `^` and `$` match begin/end line in addition to begin/end text (default false) |
| `(?s)pattern.` | let `.` match `\n` (default false) |
| `(?U)x*abc` | ungreedy: swap meaning of `x*` and `x*?`, `x+` and `x+?`, etc (default false) |
| $$$empty-strings$$$**Empty Strings** | |
| `^` | at beginning of text or line (`m`=true) |
| `$` | at end of text (like `\z` not `\Z`) or line (`m`=true) |
| `\A` | at beginning of text |
| `\b` | at ASCII word boundary (`\w` on one side and `\W`, `\A`, or `\z` on the other) |
| `\B` | not at ASCII word boundary |
| `\z` | at end of text |
| $$$escape-sequences$$$**Escape Sequences** | |
| `\a` | bell (same as `\007`) |
| `\f` | form feed (same as `\014`) |
| `\t` | horizontal tab (same as `\011`) |
| `\n` | newline (same as `\012`) |
| `\r` | carriage return (same as `\015`) |
| `\v` | vertical tab character (same as `\013`) |
| `\*` | literal `*`, for any punctuation character `*` |
| `\123` | octal character code (up to three digits) |
| `\x7F` | two-digit hex character code |
| `\x{10FFFF}` | hex character code |
| `\Q...\E` | literal text `...` even if `...` has punctuation |
| $$$ascii-character-classes$$$**ASCII Character Classes** | |
| `[[:alnum:]]` | alphanumeric (same as `[0-9A-Za-z]`) |
| `[[:alpha:]]` | alphabetic (same as `[A-Za-z]`) |
| `[[:ascii:]]` | ASCII (same as `[\x00-\x7F]`) |
| `[[:blank:]]` | blank (same as `[\t ]`) |
| `[[:cntrl:]]` | control (same as `[\x00-\x1F\x7F]`) |
| `[[:digit:]]` | digits (same as `[0-9]`) |
| `[[:graph:]]` | graphical (same as `` [!-~] == [A-Za-z0-9!"#$%&'()*+,\-./:;<=>?@[\\\]^_`{\|}~] ``) |
| `[[:lower:]]` | lower case (same as `[a-z]`) |
| `[[:print:]]` | printable (same as `[ -~] == [ [:graph:]]`) |
| `[[:punct:]]` | punctuation (same as `` [!-/:-@[-`{-~] ``) |
| `[[:space:]]` | whitespace (same as `[\t\n\v\f\r ]`) |
| `[[:upper:]]` | upper case (same as `[A-Z]`) |
| `[[:word:]]` | word characters (same as `[0-9A-Za-z_]`) |
| `[[:xdigit:]]` | hex digit (same as `[0-9A-Fa-f]`) |
| $$$perl-character-classes$$$**Supported Perl Character Classes** | |
| `\d` | digits (same as `[0-9]`) |
| `\D` | not digits (same as `[^0-9]`) |
| `\s` | whitespace (same as `[\t\n\f\r ]`) |
| `\S` | not whitespace (same as `[^\t\n\f\r ]`) |
| `\w` | word characters (same as `[0-9A-Za-z_]`) |
| `\W` | not word characters (same as `[^0-9A-Za-z_]`) |

 diff --git a/docs/reference/auditbeat/rename-fields.md b/docs/reference/auditbeat/rename-fields.md new file mode 100644 index 000000000000..1f96667679d1 --- /dev/null +++ b/docs/reference/auditbeat/rename-fields.md @@ -0,0 +1,43 @@
---
navigation_title: "rename"
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/rename-fields.html
---

# Rename fields from events [rename-fields]


The `rename` processor specifies a list of fields to rename. Under the `fields` key, each entry contains a `from: old-key` and a `to: new-key` pair, where:

* `from` is the original field name. You can use the `@metadata.` prefix in `from` to rename keys in the event metadata instead of event fields.
* `to` is the target field name.

The `rename` processor cannot be used to overwrite fields. To overwrite a field, either rename the target field first, or use the `drop_fields` processor to drop the field and then rename the field.

::::{tip}
You can rename fields to resolve field name conflicts. For example, if an event has two fields, `c` and `c.b` (where `b` is a subfield of `c`), assigning scalar values results in an {{es}} error at ingest time. The assignment `{"c": 1, "c.b": 2}` would result in an error because `c` is an object and cannot be assigned a scalar value. To prevent this conflict, rename `c` to `c.value` before assigning values.
::::


```yaml
processors:
  - rename:
      fields:
        - from: "a.g"
          to: "e.d"
      ignore_missing: false
      fail_on_error: true
```

The `rename` processor has the following configuration settings:

`ignore_missing`
: (Optional) If set to `true`, no error is logged when a field to be renamed is missing. Default is `false`.

`fail_on_error`
: (Optional) If set to `true`, renaming stops on the first error and the original event is returned. If set to `false`, renaming continues even if an error occurs. Default is `true`.

See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions.

You can specify multiple `rename` processors under the `processors` section.
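For instance, here is a minimal sketch chaining two `rename` processors: the first applies the `c` to `c.value` conflict fix from the tip above, and the second renames a metadata key (`@metadata.source` is a hypothetical key used only for illustration):

```yaml
processors:
  - rename:
      fields:
        - from: "c"
          to: "c.value"
      ignore_missing: true
  - rename:
      fields:
        - from: "@metadata.source"
          to: "@metadata.origin"
      fail_on_error: false
```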
+ diff --git a/docs/reference/auditbeat/replace-fields.md b/docs/reference/auditbeat/replace-fields.md new file mode 100644 index 000000000000..9224357c02f0 --- /dev/null +++ b/docs/reference/auditbeat/replace-fields.md @@ -0,0 +1,44 @@
---
navigation_title: "replace"
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/replace-fields.html
---

# Replace fields from events [replace-fields]


The `replace` processor searches a list of fields for a value matching a regular expression pattern and replaces the match with a specified string.

The `replace` processor cannot be used to create a completely new value.

::::{tip}
You can use this processor to truncate a field value or replace it with a new string value. You can also use this processor to mask personally identifiable information (PII).
::::



## Example [_example]

The following example changes the path from `/usr/bin` to `/usr/local/bin`:

```yaml
  - replace:
      fields:
        - field: "file.path"
          pattern: "/usr/"
          replacement: "/usr/local/"
      ignore_missing: false
      fail_on_error: true
```


## Configuration settings [_configuration_settings]

| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `fields` | Yes | | List of one or more items. Each item contains a `field: field-name`, `pattern: regex-pattern`, and `replacement: replacement-string`, where:<br><br>* `field` is the original field name. You can use the `@metadata.` prefix in this field to replace values in the event metadata instead of event fields.<br>* `pattern` is the regex pattern to match the field’s value.<br>* `replacement` is the replacement string used to update the field’s value. |
| +| `ignore_missing` | No | `false` | Whether to ignore missing fields. If `true`, no error is logged if the specified field is missing. | +| `fail_on_error` | No | `true` | Whether to fail replacement of field values if an error occurs.If `true` and there’s an error, the replacement of field values is stopped, and the original event is returned.If `false`, replacement continues even if an error occurs during replacement. | + +See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions. + diff --git a/docs/reference/auditbeat/running-on-docker.md b/docs/reference/auditbeat/running-on-docker.md new file mode 100644 index 000000000000..c78cd78ac846 --- /dev/null +++ b/docs/reference/auditbeat/running-on-docker.md @@ -0,0 +1,164 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/running-on-docker.html +--- + +# Run Auditbeat on Docker [running-on-docker] + +Docker images for Auditbeat are available from the Elastic Docker registry. The base image is [centos:7](https://hub.docker.com/_/centos/). + +A list of all published Docker images and tags is available at [www.docker.elastic.co](https://www.docker.elastic.co). + +These images are free to use under the Elastic license. They contain open source and free commercial features and access to paid commercial features. [Start a 30-day trial](docs-content://deploy-manage/license/manage-your-license-in-self-managed-cluster.md) to try out all of the paid commercial features. See the [Subscriptions](https://www.elastic.co/subscriptions) page for information about Elastic license levels. + +## Pull the image [_pull_the_image] + +Obtaining Auditbeat for Docker is as simple as issuing a `docker pull` command against the Elastic Docker registry. + +::::{warning} +Version 9.0.0-beta1 of Auditbeat has not yet been released. No Docker image is currently available for Auditbeat 9.0.0-beta1. +:::: + + +```sh +docker pull docker.elastic.co/beats/auditbeat:9.0.0-beta1 +``` + +Alternatively, you can download other Docker images that contain only features available under the Apache 2.0 license. To download the images, go to [www.docker.elastic.co](https://www.docker.elastic.co). + +As another option, you can use the hardened [Wolfi](https://wolfi.dev/) image. Using Wolfi images requires Docker version 20.10.10 or higher. For details about why the Wolfi images have been introduced, refer to our article [Reducing CVEs in Elastic container images](https://www.elastic.co/blog/reducing-cves-in-elastic-container-images). + +```bash +docker pull docker.elastic.co/beats/auditbeat-wolfi:9.0.0-beta1 +``` + + +## Optional: Verify the image [_optional_verify_the_image] + +You can use the [Cosign application](https://docs.sigstore.dev/cosign/installation/) to verify the Auditbeat Docker image signature. + +::::{warning} +Version 9.0.0-beta1 of Auditbeat has not yet been released. No Docker image is currently available for Auditbeat 9.0.0-beta1. 
::::


```sh
wget https://artifacts.elastic.co/cosign.pub
cosign verify --key cosign.pub docker.elastic.co/beats/auditbeat:9.0.0-beta1
```

The `cosign` command prints the check results and the signature payload in JSON format:

```sh
Verification for docker.elastic.co/beats/auditbeat:9.0.0-beta1 --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - Existence of the claims in the transparency log was verified offline
  - The signatures were verified against the specified public key
```


## Run the Auditbeat setup [_run_the_auditbeat_setup]

::::{important}
A [known issue](https://github.com/elastic/beats/issues/42038) in version 8.17.0 prevents {{beats}} Docker images from starting when no options are provided. When running an image on that version, add an `--environment container` parameter to avoid the problem. This is planned to be addressed in issue [#42060](https://github.com/elastic/beats/pull/42060).
::::


Running Auditbeat with the setup command will create the index pattern and load visualizations, dashboards, and machine learning jobs. Run this command:

```sh
docker run --rm \
  --cap-add="AUDIT_CONTROL" \
  --cap-add="AUDIT_READ" \
  docker.elastic.co/beats/auditbeat:9.0.0-beta1 \
  setup -E setup.kibana.host=kibana:5601 \
  -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2>
```

1. Substitute your Kibana and Elasticsearch hosts and ports.
2. If you are using the hosted {{ess}} in {{ecloud}}, replace the `-E output.elasticsearch.hosts` line with the Cloud ID and elastic password using this syntax:


```shell
-E cloud.id= \
-E cloud.auth=elastic:
```


## Run Auditbeat on a read-only file system [_run_auditbeat_on_a_read_only_file_system]

If you’d like to run Auditbeat in a Docker container on a read-only file system, you can do so by specifying the `--read-only` option. Auditbeat requires a stateful directory to store application data, so with the `--read-only` option you also need to use the `--mount` option to specify a path to where that data can be stored.

For example:

```sh
docker run --rm \
  --mount type=bind,source=$(pwd)/data,destination=/usr/share/auditbeat/data \
  --read-only \
  docker.elastic.co/beats/auditbeat:9.0.0-beta1
```


## Configure Auditbeat on Docker [_configure_auditbeat_on_docker]

The Docker image provides several methods for configuring Auditbeat. The conventional approach is to provide a configuration file via a volume mount, but it’s also possible to create a custom image with your configuration included.

### Example configuration file [_example_configuration_file]

Download this example configuration file as a starting point:

```sh
curl -L -O https://raw.githubusercontent.com/elastic/beats/master/deploy/docker/auditbeat.docker.yml
```


### Volume-mounted configuration [_volume_mounted_configuration]

One way to configure Auditbeat on Docker is to provide `auditbeat.docker.yml` via a volume mount. With `docker run`, the volume mount can be specified like this:

```sh
docker run -d \
  --name=auditbeat \
  --user=root \
  --volume="$(pwd)/auditbeat.docker.yml:/usr/share/auditbeat/auditbeat.yml:ro" \
  --cap-add="AUDIT_CONTROL" \
  --cap-add="AUDIT_READ" \
  --pid=host \
  docker.elastic.co/beats/auditbeat:9.0.0-beta1 -e \
  --strict.perms=false \
  -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2>
```

1. Substitute your Elasticsearch hosts and ports.
2.
If you are using the hosted {{ess}} in {{ecloud}}, replace the `-E output.elasticsearch.hosts` line with the Cloud ID and elastic password using the syntax shown earlier.



### Customize your configuration [_customize_your_configuration]

The `auditbeat.docker.yml` file downloaded earlier should be customized for your environment. See [Configure](/reference/auditbeat/configuring-howto-auditbeat.md) for more details. Edit the configuration file to match your environment, then re-deploy your Auditbeat container.


### Custom image configuration [_custom_image_configuration]

It’s possible to embed your Auditbeat configuration in a custom image. Here is an example Dockerfile to achieve this:

```dockerfile
FROM docker.elastic.co/beats/auditbeat:9.0.0-beta1
COPY auditbeat.yml /usr/share/auditbeat/auditbeat.yml
```



## Special requirements [_special_requirements]

Under Docker, Auditbeat runs as a non-root user, but requires some privileged capabilities to operate correctly. Ensure that the `AUDIT_CONTROL` and `AUDIT_READ` capabilities are available to the container.

It is also essential to run Auditbeat in the host PID namespace.

```sh
docker run --cap-add=AUDIT_CONTROL --cap-add=AUDIT_READ --user=root --pid=host docker.elastic.co/beats/auditbeat:9.0.0-beta1
```


 diff --git a/docs/reference/auditbeat/running-on-kubernetes.md b/docs/reference/auditbeat/running-on-kubernetes.md new file mode 100644 index 000000000000..69e1e9efa761 --- /dev/null +++ b/docs/reference/auditbeat/running-on-kubernetes.md @@ -0,0 +1,87 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/running-on-kubernetes.html
---

# Running Auditbeat on Kubernetes [running-on-kubernetes]

Auditbeat [Docker images](/reference/auditbeat/running-on-docker.md) can be used on Kubernetes to check file integrity.

::::{tip}
Running {{ecloud}} on Kubernetes? See [Run {{beats}} on ECK](docs-content://deploy-manage/deploy/cloud-on-k8s/beats.md).
::::


However, version 9.0.0-beta1 of Auditbeat has not yet been released, so no Docker image is currently available for this version.


## Kubernetes deploy manifests [_kubernetes_deploy_manifests]

By deploying Auditbeat as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), we ensure we get a running instance on each node of the cluster.

Everything is deployed under the `kube-system` namespace. You can change that by updating the YAML file.

To get the manifests, just run:

```sh
curl -L -O https://raw.githubusercontent.com/elastic/beats/master/deploy/kubernetes/auditbeat-kubernetes.yaml
```

::::{warning}
If you are using Kubernetes 1.7 or earlier: Auditbeat uses a hostPath volume to persist internal data. It’s located under `/var/lib/auditbeat-data`. The manifest uses folder autocreation (`DirectoryOrCreate`), which was introduced in Kubernetes 1.8. You will need to remove `type: DirectoryOrCreate` from the manifest and create the host folder yourself.
::::



## Settings [_settings]

Some parameters are exposed in the manifest to configure the logs destination. By default they use an existing Elasticsearch deployment if one is present. You may want to change that behavior, so edit the YAML file and modify them:

```yaml
- name: ELASTICSEARCH_HOST
  value: elasticsearch
- name: ELASTICSEARCH_PORT
  value: "9200"
- name: ELASTICSEARCH_USERNAME
  value: elastic
- name: ELASTICSEARCH_PASSWORD
  value: changeme
```


### Running Auditbeat on control plane nodes [_running_auditbeat_on_control_plane_nodes]

Kubernetes control plane nodes can use [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to limit the workloads that can run on them. To run Auditbeat on control plane nodes, you may need to update the DaemonSet spec to include proper tolerations:

```yaml
spec:
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      effect: NoSchedule
```


## Deploy [_deploy]

To deploy Auditbeat to Kubernetes, just run:

```sh
kubectl create -f auditbeat-kubernetes.yaml
```

Then you should be able to check the status by running:

```sh
$ kubectl --namespace=kube-system get ds/auditbeat

NAME        DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
auditbeat   32        32        0         32           0           <none>          1m
```

::::{warning}
Auditbeat is able to monitor the file integrity of files in pods. To do that, the directories containing the container root file systems have to be mounted as volumes in the Auditbeat container. For example, containers executed with containerd have their root file systems under `/run/containerd`. The [reference manifest](https://raw.githubusercontent.com/elastic/beats/master/deploy/kubernetes/auditbeat-kubernetes.yaml) contains an example of this.
::::


 diff --git a/docs/reference/auditbeat/running-with-systemd.md b/docs/reference/auditbeat/running-with-systemd.md new file mode 100644 index 000000000000..728271ebcbff --- /dev/null +++ b/docs/reference/auditbeat/running-with-systemd.md @@ -0,0 +1,86 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/running-with-systemd.html
---

# Auditbeat and systemd [running-with-systemd]

The DEB and RPM packages include a service unit for Linux systems with systemd. On these systems, you can manage Auditbeat by using the usual systemd commands.

The service unit is configured with `UMask=0027`, which means the most permissive mask allowed for files created by Auditbeat is `0640`. All configured file permissions higher than `0640` will be ignored. Edit the unit file manually if you need to change that.

## Start and stop Auditbeat [_start_and_stop_auditbeat]

Use `systemctl` to start or stop Auditbeat:

```sh
sudo systemctl start auditbeat
```

```sh
sudo systemctl stop auditbeat
```

By default, the Auditbeat service starts automatically when the system boots. To enable or disable auto start, use:

```sh
sudo systemctl enable auditbeat
```

```sh
sudo systemctl disable auditbeat
```


## Auditbeat status and logs [_auditbeat_status_and_logs]

To get the service status, use `systemctl`:

```sh
systemctl status auditbeat
```

Logs are stored by default in journald. To view the logs, use `journalctl`:

```sh
journalctl -u auditbeat.service
```


## Customize systemd unit for Auditbeat [_customize_systemd_unit_for_auditbeat]

The systemd service unit file includes environment variables that you can override to change the default options.

| Variable | Description | Default value |
| --- | --- | --- |
| BEAT_LOG_OPTS | Log options | |
| BEAT_CONFIG_OPTS | Flags for configuration file path | ``-c /etc/auditbeat/auditbeat.yml`` |
| BEAT_PATH_OPTS | Other paths | ``--path.home /usr/share/auditbeat --path.config /etc/auditbeat --path.data /var/lib/auditbeat --path.logs /var/log/auditbeat`` |

::::{note}
You can use `BEAT_LOG_OPTS` to set debug selectors for logging. However, to configure logging behavior, set the logging options described in [Configure logging](/reference/auditbeat/configuration-logging.md).
::::


To override these variables, create a drop-in unit file in the `/etc/systemd/system/auditbeat.service.d` directory.

For example, a file with the following content placed in `/etc/systemd/system/auditbeat.service.d/debug.conf` would override `BEAT_LOG_OPTS` to enable debug logging for the Elasticsearch output.

```text
[Service]
Environment="BEAT_LOG_OPTS=-d elasticsearch"
```

To apply your changes, reload the systemd configuration and restart the service:

```sh
systemctl daemon-reload
systemctl restart auditbeat
```

::::{note}
It is recommended that you use a configuration management tool to include drop-in unit files. If you need to add a drop-in manually, use `systemctl edit auditbeat.service`.
::::



 diff --git a/docs/reference/auditbeat/securing-auditbeat.md b/docs/reference/auditbeat/securing-auditbeat.md new file mode 100644 index 000000000000..be5afc590c63 --- /dev/null +++ b/docs/reference/auditbeat/securing-auditbeat.md @@ -0,0 +1,25 @@
---
navigation_title: "Secure"
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/securing-auditbeat.html
---

# Secure Auditbeat [securing-auditbeat]


The following topics provide information about securing the Auditbeat process and connecting to a cluster that has {{security-features}} enabled.

You can use role-based access control and, optionally, API keys to grant Auditbeat users access to secured resources.

* [*Grant users access to secured resources*](/reference/auditbeat/feature-roles.md)
* [*Grant access using API keys*](/reference/auditbeat/beats-api-keys.md).

After privileged users have been created, use authentication to connect to a secured Elastic cluster.

* [*Secure communication with Elasticsearch*](/reference/auditbeat/securing-communication-elasticsearch.md)
* [*Secure communication with Logstash*](/reference/auditbeat/configuring-ssl-logstash.md)

On Linux, Auditbeat can take advantage of secure computing mode to restrict the system calls that a process can issue.

* [*Use Linux Secure Computing Mode (seccomp)*](/reference/auditbeat/linux-seccomp.md)

 diff --git a/docs/reference/auditbeat/securing-communication-elasticsearch.md b/docs/reference/auditbeat/securing-communication-elasticsearch.md new file mode 100644 index 000000000000..f6fda6096876 --- /dev/null +++ b/docs/reference/auditbeat/securing-communication-elasticsearch.md @@ -0,0 +1,104 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/securing-communication-elasticsearch.html
---

# Secure communication with Elasticsearch [securing-communication-elasticsearch]

When sending data to a secured cluster through the `elasticsearch` output, Auditbeat can use any of the following authentication methods:

* Basic authentication credentials (username and password).
* Token-based API authentication.
* A client certificate.
+ +Authentication is specified in the Auditbeat configuration file: + +* To use **basic authentication**, specify the `username` and `password` settings under `output.elasticsearch`. For example: + + ```yaml + output.elasticsearch: + hosts: ["https://myEShost:9200"] + username: "auditbeat_writer" <1> + password: "{pwd}" <2> + ``` + + 1. This user needs the privileges required to publish events to {{es}}. To create a user like this, see [Create a *publishing* user](/reference/auditbeat/privileges-to-publish-events.md). + 2. This example shows a hard-coded password, but you should store sensitive values in the [secrets keystore](/reference/auditbeat/keystore.md). + +* To use token-based **API key authentication**, specify the `api_key` under `output.elasticsearch`. For example: + + ```yaml + output.elasticsearch: + hosts: ["https://myEShost:9200"] + api_key: "ZCV7VnwBgnX0T19fN8Qe:KnR6yE41RrSowb0kQ0HWoA" <1> + ``` + + 1. This API key must have the privileges required to publish events to {{es}}. To create an API key like this, see [*Grant access using API keys*](/reference/auditbeat/beats-api-keys.md). + + +* To use **Public Key Infrastructure (PKI) certificates** to authenticate users, specify the `certificate` and `key` settings under `output.elasticsearch`. For example: + + ```yaml + output.elasticsearch: + hosts: ["https://myEShost:9200"] + ssl.certificate: "/etc/pki/client/cert.pem" <1> + ssl.key: "/etc/pki/client/cert.key" <2> + ``` + + 1. The path to the certificate for SSL client authentication + 2. The client certificate key + + + These settings assume that the distinguished name (DN) in the certificate is mapped to the appropriate roles in the `role_mapping.yml` file on each node in the {{es}} cluster. For more information, see [Using role mapping files](docs-content://deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md#mapping-roles-file). + + By default, Auditbeat uses the list of trusted certificate authorities (CA) from the operating system where Auditbeat is running. If the certificate authority that signed your node certificates is not in the host system’s trusted certificate authorities list, you need to add the path to the `.pem` file that contains your CA’s certificate to the Auditbeat configuration. This will configure Auditbeat to use a specific list of CA certificates instead of the default list from the OS. + + Here is an example configuration: + + ```yaml + output.elasticsearch: + hosts: ["https://myEShost:9200"] + ssl.certificate_authorities: <1> + - /etc/pki/my_root_ca.pem + - /etc/pki/my_other_ca.pem + ssl.certificate: "/etc/pki/client.pem" <2> + ssl.key: "/etc/pki/key.pem" <3> + ``` + + 1. Specify the path to the local `.pem` file that contains your Certificate Authority’s certificate. This is needed if you use your own CA to sign your node certificates. + 2. The path to the certificate for SSL client authentication + 3. The client certificate key + + + ::::{note} + For any given connection, the SSL/TLS certificates must have a subject that matches the value specified for `hosts`, or the SSL handshake fails. For example, if you specify `hosts: ["foobar:9200"]`, the certificate MUST include `foobar` in the subject (`CN=foobar`) or as a subject alternative name (SAN). Make sure the hostname resolves to the correct IP address. If no DNS is available, then you can associate the IP address with your hostname in `/etc/hosts` (on Unix) or `C:\Windows\System32\drivers\etc\hosts` (on Windows). 
    ::::



## Secure communication with the Kibana endpoint [securing-communication-kibana]

If you’ve configured the [{{kib}} endpoint](/reference/auditbeat/setup-kibana-endpoint.md), you can also specify credentials for authenticating with {{kib}} under `setup.kibana`. If no credentials are specified, Auditbeat reuses the authentication method configured for the {{es}} output.

For example, specify a unique username and password to connect to Kibana like this:

```yaml
setup.kibana:
  host: "mykibanahost:5601"
  username: "auditbeat_kib_setup" <1>
  password: "{pwd}" <2>
```

1. This user needs privileges required to set up dashboards. To create a user like this, see [Create a *setup* user](/reference/auditbeat/privileges-to-setup-beats.md).
2. This example shows a hard-coded password, but you should store sensitive values in the [secrets keystore](/reference/auditbeat/keystore.md).



## Learn more about secure communication [securing-communication-learn-more]

More information on sending data to a secured cluster is available in the configuration reference:

* [Elasticsearch](/reference/auditbeat/elasticsearch-output.md)
* [SSL](/reference/auditbeat/configuration-ssl.md)
* [{{kib}} endpoint](/reference/auditbeat/setup-kibana-endpoint.md)

 diff --git a/docs/reference/auditbeat/setting-up-running.md b/docs/reference/auditbeat/setting-up-running.md new file mode 100644 index 000000000000..f50b575571d4 --- /dev/null +++ b/docs/reference/auditbeat/setting-up-running.md @@ -0,0 +1,32 @@
---
navigation_title: "Set up and run"
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/setting-up-and-running.html
---

# Set up and run Auditbeat [setting-up-and-running]


Before reading this section, see [Quick start: installation and configuration](/reference/auditbeat/auditbeat-installation-configuration.md) for basic installation instructions to get you started.

This section includes additional information on how to install, set up, and run Auditbeat, including:

* [Directory layout](/reference/auditbeat/directory-layout.md)
* [Secrets keystore](/reference/auditbeat/keystore.md)
* [Command reference](/reference/auditbeat/command-line-options.md)
* [Repositories for APT and YUM](/reference/auditbeat/setup-repositories.md)
* [Run Auditbeat on Docker](/reference/auditbeat/running-on-docker.md)
* [Running Auditbeat on Kubernetes](/reference/auditbeat/running-on-kubernetes.md)
* [Auditbeat and systemd](/reference/auditbeat/running-with-systemd.md)
* [Start Auditbeat](/reference/auditbeat/auditbeat-starting.md)
* [Stop Auditbeat](/reference/auditbeat/shutdown.md)

 diff --git a/docs/reference/auditbeat/setup-kibana-endpoint.md b/docs/reference/auditbeat/setup-kibana-endpoint.md new file mode 100644 index 000000000000..075a517453ca --- /dev/null +++ b/docs/reference/auditbeat/setup-kibana-endpoint.md @@ -0,0 +1,96 @@
---
navigation_title: "{{kib}} endpoint"
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/setup-kibana-endpoint.html
---

# Configure the {{kib}} endpoint [setup-kibana-endpoint]


{{kib}} dashboards are loaded into {{kib}} via the {{kib}} API. This requires a {{kib}} endpoint configuration. For details on authenticating to the {{kib}} API, see [Authentication](https://www.elastic.co/docs/api/doc/kibana/authentication).

You configure the endpoint in the `setup.kibana` section of the `auditbeat.yml` config file.
Here is an example configuration:

```yaml
setup.kibana.host: "http://localhost:5601"
```


## Configuration options [_configuration_options_11]

You can specify the following options in the `setup.kibana` section of the `auditbeat.yml` config file:


### `setup.kibana.host` [_setup_kibana_host]

The {{kib}} host where the dashboards will be loaded. The default is `127.0.0.1:5601`. The value of `host` can be a `URL` or `IP:PORT`. For example: `http://192.15.3.2`, `192.15.3.2:5601`, or `http://192.15.3.2:6701/path`. If no port is specified, `5601` is used.

::::{note}
When a node is defined as an `IP:PORT`, the *scheme* and *path* are taken from the [setup.kibana.protocol](#kibana-protocol-option) and [setup.kibana.path](#kibana-path-option) config options.
::::


IPv6 addresses must be defined using the following format: `https://[2001:db8::1]:5601`.


### `setup.kibana.protocol` [kibana-protocol-option]

The name of the protocol {{kib}} is reachable on. The options are: `http` or `https`. The default is `http`. However, if you specify a URL for host, the value of `protocol` is overridden by whatever scheme you specify in the URL.

Example config:

```yaml
setup.kibana.host: "192.0.2.255:5601"
setup.kibana.protocol: "http"
setup.kibana.path: /kibana
```


### `setup.kibana.username` [_setup_kibana_username]

The basic authentication username for connecting to {{kib}}. If you don’t specify a value for this setting, Auditbeat uses the `username` specified for the {{es}} output.


### `setup.kibana.password` [_setup_kibana_password]

The basic authentication password for connecting to {{kib}}. If you don’t specify a value for this setting, Auditbeat uses the `password` specified for the {{es}} output.


### `setup.kibana.path` [kibana-path-option]

An HTTP path prefix that is prepended to the HTTP API calls. This is useful for the cases where {{kib}} listens behind an HTTP reverse proxy that exports the API under a custom prefix.


### `setup.kibana.space.id` [kibana-space-id-option]

The [Kibana space](docs-content://deploy-manage/manage-spaces.md) ID to use. If specified, Auditbeat loads {{kib}} assets into this {{kib}} space. Omit this option to use the default space.


### `setup.kibana.headers` [_setup_kibana_headers]

Custom HTTP headers to add to each request sent to {{kib}}. Example:

```yaml
setup.kibana.headers:
  X-My-Header: Header contents
```


### `setup.kibana.ssl.enabled` [_setup_kibana_ssl_enabled]

Enables Auditbeat to use SSL settings when connecting to {{kib}} via HTTPS. If you configure Auditbeat to connect over HTTPS, this setting defaults to `true` and Auditbeat uses the default SSL settings.

Example configuration:

```yaml
setup.kibana.host: "https://192.0.2.255:5601"
setup.kibana.ssl.enabled: true
setup.kibana.ssl.certificate_authorities: ["/etc/client/ca.pem"]
setup.kibana.ssl.certificate: "/etc/client/cert.pem"
setup.kibana.ssl.key: "/etc/client/cert.key"
```

See [SSL](/reference/auditbeat/configuration-ssl.md) for more information.
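Tying these options together, here is a hedged sketch of a fuller `setup.kibana` section; the host, username, and space ID are illustrative placeholders, and the password is assumed to be resolved from the secrets keystore:

```yaml
setup.kibana:
  host: "https://kibana.example.com:5601"   # hypothetical Kibana host
  username: "auditbeat_kib_setup"           # hypothetical setup user
  password: "${KIBANA_PASSWORD}"            # assumed to come from the keystore
  space.id: "security"                      # load assets into this space
  ssl.certificate_authorities: ["/etc/client/ca.pem"]
```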
 diff --git a/docs/reference/auditbeat/setup-repositories.md b/docs/reference/auditbeat/setup-repositories.md new file mode 100644 index 000000000000..73e59739fac8 --- /dev/null +++ b/docs/reference/auditbeat/setup-repositories.md @@ -0,0 +1,26 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/setup-repositories.html
---

# Repositories for APT and YUM [setup-repositories]

We have repositories available for APT and YUM-based distributions. Note that we provide binary packages, but no source packages.

We use the PGP key [D88E42B4](https://pgp.mit.edu/pks/lookup?op=vindex&search=0xD27D666CD88E42B4), Elasticsearch Signing Key, with fingerprint

```
4609 5ACC 8548 582C 1A26 99A9 D27D 666C D88E 42B4
```
to sign all our packages. It is available from [https://pgp.mit.edu](https://pgp.mit.edu).


## APT [_apt]

Version 9.0.0-beta1 of Beats has not yet been released.


## YUM [_yum]

Version 9.0.0-beta1 of Beats has not yet been released.

 diff --git a/docs/reference/auditbeat/shutdown.md b/docs/reference/auditbeat/shutdown.md new file mode 100644 index 000000000000..88db8de3b2e3 --- /dev/null +++ b/docs/reference/auditbeat/shutdown.md @@ -0,0 +1,13 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/shutdown.html
---

# Stop Auditbeat [shutdown]

An orderly shutdown of Auditbeat ensures that it has a chance to clean up and close outstanding resources. You can help ensure an orderly shutdown by stopping Auditbeat properly.

If you’re running Auditbeat as a service, you can stop it via the service management functionality provided by your installation.

If you’re running Auditbeat directly in the console, you can stop it by entering **Ctrl-C**. Alternatively, send SIGTERM to the Auditbeat process on a POSIX system.

 diff --git a/docs/reference/auditbeat/ssl-client-fails.md b/docs/reference/auditbeat/ssl-client-fails.md new file mode 100644 index 000000000000..64be8603c45e --- /dev/null +++ b/docs/reference/auditbeat/ssl-client-fails.md @@ -0,0 +1,71 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/ssl-client-fails.html
---

# SSL client fails to connect to Logstash [ssl-client-fails]

The host running {{ls}} might be unreachable or the certificate may not be valid. To resolve your issue:

* Make sure that {{ls}} is running and you can connect to it. First, try to ping the {{ls}} host to verify that you can reach it from the host running Auditbeat. Then use either `nc` or `telnet` to make sure that the port is available. For example:

    ```shell
    ping <hostname or IP>
    telnet <hostname or IP> 5044
    ```

* Verify that the certificate is valid and that the hostname and IP match.

    ::::{tip}
    For testing purposes only, you can set `verification_mode: none` to disable hostname checking (see the sketch after this list).
    ::::

* Use OpenSSL to test connectivity to the {{ls}} server and diagnose problems. See the [OpenSSL documentation](https://www.openssl.org/docs/manmaster/man1/openssl-s_client.html) for more info.
* Make sure that you have enabled SSL (set `ssl => true`) when configuring the [Beats input plugin for {{ls}}](logstash://reference/plugins-inputs-beats.md).
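For the hostname-checking test mentioned in the tip above, a minimal sketch of the relevant {{ls}} output settings might look like this (testing only; the host and CA path are placeholders):

```yaml
output.logstash:
  hosts: ["logstash.example.com:5044"]
  ssl.certificate_authorities: ["/etc/client/ca.pem"]
  ssl.verification_mode: none  # testing only: disables certificate/hostname verification
```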
## Common SSL-Related Errors and Resolutions [_common_ssl_related_errors_and_resolutions]

Here are some common errors and ways to fix them:

* [tls: failed to parse private key](#failed-to-parse-private-key)
* [x509: cannot validate certificate](#cannot-validate-certificate)
* [getsockopt: no route to host](#getsockopt-no-route-to-host)
* [getsockopt: connection refused](#getsockopt-connection-refused)
* [No connection could be made because the target machine actively refused it](#target-machine-refused-connection)

### tls: failed to parse private key [failed-to-parse-private-key]

This might occur for a few reasons:

* The encrypted file is not recognized as an encrypted PEM block. Auditbeat tries to use the encrypted content as the final key, which fails.
* The file is correctly encrypted in a PEM block, but the decrypted content is not a key in a format that Auditbeat recognizes. The key must be PEM encoded.
* The passphrase is missing or has an error.


### x509: cannot validate certificate for `<IP address>` because it doesn’t contain any IP SANs [cannot-validate-certificate]

This happens because your certificate is only valid for the hostname present in the Subject field.

To resolve this problem, try one of these solutions:

* Create a DNS entry for the hostname, mapping it to the server’s IP.
* Create an entry in `/etc/hosts` for the hostname. Or, on Windows, add an entry to `C:\Windows\System32\drivers\etc\hosts`.
* Re-create the server certificate and add a SubjectAltName (SAN) for the IP address of the server. This makes the server’s certificate valid for both the hostname and the IP address.


### getsockopt: no route to host [getsockopt-no-route-to-host]

This is not an SSL problem. It’s a networking problem. Make sure the two hosts can communicate.


### getsockopt: connection refused [getsockopt-connection-refused]

This is not an SSL problem. Make sure that {{ls}} is running and that there is no firewall blocking the traffic.


### No connection could be made because the target machine actively refused it [target-machine-refused-connection]

A firewall is refusing the connection. Check if a firewall is blocking the traffic on the client, the network, or the destination host.



 diff --git a/docs/reference/auditbeat/syslog.md b/docs/reference/auditbeat/syslog.md new file mode 100644 index 000000000000..ab6d781cc70a --- /dev/null +++ b/docs/reference/auditbeat/syslog.md @@ -0,0 +1,156 @@
---
navigation_title: "syslog"
mapped_pages:
  - https://www.elastic.co/guide/en/beats/auditbeat/current/syslog.html
---

# Syslog [syslog]


The syslog processor parses RFC 3164 and/or RFC 5424 formatted syslog messages that are stored in a field. The processor itself does not handle receiving syslog messages from external sources. This is done through an input, such as the TCP input. Certain integrations, when enabled through configuration, will embed the syslog processor to process syslog messages, such as Custom TCP Logs and Custom UDP Logs.


## Configuration [_configuration]

The `syslog` processor parses RFC 3164 and/or RFC 5424 formatted syslog messages that are stored under the `field` key.

The supported configuration options are:

`field`
: (Required) Source field containing the syslog message. Defaults to `message`.

`format`
: (Optional) The syslog format to use, `rfc3164` or `rfc5424`. To automatically detect the format from the log entries, set this option to `auto`. The default is `auto`.
`timezone`
: (Optional) IANA time zone name (e.g. `America/New_York`) or a fixed time offset (e.g. `+0200`) to use when parsing syslog timestamps that do not contain a time zone. `Local` may be specified to use the machine’s local time zone. Defaults to `Local`.

`overwrite_keys`
: (Optional) A boolean that specifies whether keys that already exist in the event are overwritten by keys from the syslog message. The default value is `true`.

`ignore_missing`
: (Optional) If `true` the processor will not return an error when a specified field does not exist. Defaults to `false`.

`ignore_failure`
: (Optional) Ignore all errors produced by the processor. Defaults to `false`.

`tag`
: (Optional) An identifier for this processor. Useful for debugging.

Example:

```yaml
processors:
  - syslog:
      field: message
```

```json
{
  "message": "<165>1 2022-01-11T22:14:15.003Z mymachine.example.com eventslog 1024 ID47 [exampleSDID@32473 iut=\"3\" eventSource=\"Application\" eventID=\"1011\"][examplePriority@32473 class=\"high\"] this is the message"
}
```

Will produce the following output:

```json
{
  "@timestamp": "2022-01-11T22:14:15.003Z",
  "log": {
    "syslog": {
      "priority": 165,
      "facility": {
        "code": 20,
        "name": "local4"
      },
      "severity": {
        "code": 5,
        "name": "Notice"
      },
      "hostname": "mymachine.example.com",
      "appname": "eventslog",
      "procid": "1024",
      "msgid": "ID47",
      "version": 1,
      "structured_data": {
        "exampleSDID@32473": {
          "iut": "3",
          "eventSource": "Application",
          "eventID": "1011"
        },
        "examplePriority@32473": {
          "class": "high"
        }
      }
    }
  },
  "message": "this is the message"
}
```


## Timestamps [_timestamps]

The RFC 3164 format accepts the following forms of timestamps:

* Local timestamp (`Mmm dd hh:mm:ss`):

    * `Jan 23 14:09:01`

* RFC-3339*:

    * `2003-10-11T22:14:15Z`
    * `2003-10-11T22:14:15.123456Z`
    * `2003-10-11T22:14:15-06:00`
    * `2003-10-11T22:14:15.123456-06:00`


**Note**: The local timestamp (for example, `Jan 23 14:09:01`) that accompanies an RFC 3164 message lacks year and time zone information. The time zone will be enriched using the `timezone` configuration option, and the year will be enriched using the Auditbeat system’s local time (accounting for time zones). Because of this, it is possible for messages to appear in the future. For example, logs generated on December 31, 2021 but ingested on January 1, 2022 would be enriched with the year 2022 instead of 2021.

The RFC 5424 format accepts the following forms of timestamps:

* RFC-3339:

    * `2003-10-11T22:14:15Z`
    * `2003-10-11T22:14:15.123456Z`
    * `2003-10-11T22:14:15-06:00`
    * `2003-10-11T22:14:15.123456-06:00`


Formats with an asterisk (*) are a non-standard allowance.


## Structured Data [_structured_data]

For RFC 5424-formatted logs, if the structured data cannot be parsed according to RFC standards, the original structured data text will be prepended to the message field, separated by a space.


## Metrics [_metrics]

Internal metrics are available to assist with debugging efforts. The metrics are served from the metrics HTTP endpoint (for example: `http://localhost:5066/stats`) and are found under `processor.syslog.[instance ID]` or `processor.syslog.[tag]-[instance ID]` if a **tag** is provided. See [HTTP endpoint](/reference/auditbeat/http-endpoint.md) for more information on configuring the metrics HTTP endpoint.
+
+For example, here are metrics from a processor with a **tag** of `log-input` and an **instance ID** of `1`:
+
+```json
+{
+  "processor": {
+    "syslog": {
+      "log-input-1": {
+        "failure": 10,
+        "missing": 0,
+        "success": 3
+      }
+    }
+  }
+}
+```
+
+`failure`
+: The number of messages that could not be parsed.
+
+`missing`
+: The number of events that were missing the required input field.
+
+`success`
+: The number of successfully parsed syslog messages.
+
diff --git a/docs/reference/auditbeat/troubleshooting.md b/docs/reference/auditbeat/troubleshooting.md
new file mode 100644
index 000000000000..897a8f875721
--- /dev/null
+++ b/docs/reference/auditbeat/troubleshooting.md
@@ -0,0 +1,14 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/troubleshooting.html
+---
+
+# Troubleshoot [troubleshooting]
+
+If you have issues installing or running Auditbeat, read the following tips:
+
+* [*Get Help*](/reference/auditbeat/getting-help.md)
+* [*Debug*](/reference/auditbeat/enable-auditbeat-debugging.md)
+* [Understand logged metrics](/reference/auditbeat/understand-auditbeat-logs.md)
+* [*Common problems*](/reference/auditbeat/faq.md)
+
diff --git a/docs/reference/auditbeat/truncate-fields.md b/docs/reference/auditbeat/truncate-fields.md
new file mode 100644
index 000000000000..66bec172b72e
--- /dev/null
+++ b/docs/reference/auditbeat/truncate-fields.md
@@ -0,0 +1,38 @@
+---
+navigation_title: "truncate_fields"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/truncate-fields.html
+---
+
+# Truncate fields [truncate-fields]
+
+
+The `truncate_fields` processor truncates a field to a given size. If the size of the field is smaller than the limit, the field is left as is.
+
+`fields`
+: The list of fields to truncate. You can use the `@metadata.` prefix to truncate values in the event metadata instead of event fields.
+
+`max_bytes`
+: Maximum number of bytes in a field. Mutually exclusive with `max_characters`.
+
+`max_characters`
+: Maximum number of characters in a field. Mutually exclusive with `max_bytes`.
+
+`fail_on_error`
+: (Optional) If set to `true`, any changes to the event are reverted when an error occurs, and the original event is returned. If set to `false`, processing continues even if an error occurs. Default is `true`.
+
+`ignore_missing`
+: (Optional) Whether to ignore events that lack the source field. The default is `false`, which causes processing of an event to fail if a field is missing.
+
+For example, this configuration truncates the field named `message` to 5 characters:
+
+```yaml
+processors:
+  - truncate_fields:
+      fields:
+        - message
+      max_characters: 5
+      fail_on_error: false
+      ignore_missing: true
+```
+
diff --git a/docs/reference/auditbeat/ulimit.md b/docs/reference/auditbeat/ulimit.md
new file mode 100644
index 000000000000..626cb10e4c20
--- /dev/null
+++ b/docs/reference/auditbeat/ulimit.md
@@ -0,0 +1,28 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/ulimit.html
+---
+
+# Auditbeat fails to watch folders because too many files are open [ulimit]
+
+Because of the way file monitoring is implemented on macOS, you may see a warning similar to the following:
+
+```shell
+eventreader_fsnotify.go:42: WARN [audit.file] Failed to watch /usr/bin: too many
+open files (check the max number of open files allowed with 'ulimit -a')
+```
+
+To resolve this issue, run Auditbeat with the `ulimit` set to a larger value, for example:
+
+```sh
+sudo sh -c 'ulimit -n 8192 && ./auditbeat -e'
+```
+
+Or:
+
+```sh
+sudo su
+ulimit -n 8192
+./auditbeat -e
+```
+
diff --git a/docs/reference/auditbeat/understand-auditbeat-logs.md b/docs/reference/auditbeat/understand-auditbeat-logs.md
new file mode 100644
index 000000000000..8d720213d925
--- /dev/null
+++ b/docs/reference/auditbeat/understand-auditbeat-logs.md
@@ -0,0 +1,210 @@
+---
+navigation_title: "Understand logged metrics"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/understand-auditbeat-logs.html
+---
+
+# Understand metrics in Auditbeat logs [understand-auditbeat-logs]
+
+
+Every 30 seconds (by default), Auditbeat collects a *snapshot* of metrics about itself. From this snapshot, Auditbeat computes a *delta snapshot*; this delta snapshot contains any metrics that have *changed* since the last snapshot. Note that the values of the metrics are the values when the snapshot is taken, *NOT* the *difference* in values from the last snapshot.
+
+If this delta snapshot contains *any* metrics (indicating at least one metric that has changed since the last snapshot), this delta snapshot is serialized as JSON and emitted in Auditbeat’s logs at the `INFO` log level. Most snapshot fields report the change in the metric since the last snapshot; however, some fields are *gauges*, which always report the current value.
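+
+The snapshot interval is controlled by the standard logging settings. As a small sketch, collecting and logging these metrics every 60 seconds instead of the default 30 would look like this:
+
+```yaml
+# Report internal metrics once per minute instead of every 30s
+logging.metrics.enabled: true
+logging.metrics.period: 60s
+```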
+Here is an example of such a log entry:
+
+```json
+{"log.level":"info","@timestamp":"2023-07-14T12:50:36.811Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":187},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":0}}}},"cpu":{"system":{"ticks":692690,"time":{"ms":60}},"total":{"ticks":3167250,"time":{"ms":150},"value":3167250},"user":{"ticks":2474560,"time":{"ms":90}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":32},"info":{"ephemeral_id":"2bab8688-34c0-4522-80af-db86948d547d","uptime":{"ms":617670096},"version":"8.6.2"},"memstats":{"gc_next":57189272,"memory_alloc":43589824,"memory_total":275281335792,"rss":183574528},"runtime":{"goroutines":212}},"filebeat":{"events":{"active":5,"added":52,"done":49},"harvester":{"open_files":6,"running":6,"started":1}},"libbeat":{"config":{"module":{"running":15}},"output":{"events":{"acked":48,"active":0,"batches":6,"total":48},"read":{"bytes":210},"write":{"bytes":26923}},"pipeline":{"clients":15,"events":{"active":5,"filtered":1,"published":51,"total":52},"queue":{"max_events":3500,"filled":{"events":5,"bytes":6425,"pct":0.0014},"added":{"events":52,"bytes":65702},"consumed":{"events":52,"bytes":65702},"removed":{"events":48,"bytes":59277},"acked":48}}},"registrar":{"states":{"current":14,"update":49},"writes":{"success":6,"total":6}},"system":{"load":{"1":0.91,"15":0.37,"5":0.4,"norm":{"1":0.1138,"15":0.0463,"5":0.05}}}},"ecs.version":"1.6.0"}}
+```
+
+
+## Details [_details]
+
+Focusing on the `.monitoring.metrics` field and formatting the JSON, its value is:
+
+```json
+{
+  "beat": {
+    "cgroup": {
+      "memory": {
+        "mem": {
+          "usage": {
+            "bytes": 0
+          }
+        }
+      }
+    },
+    "cpu": {
+      "system": {
+        "ticks": 692690,
+        "time": {
+          "ms": 60
+        }
+      },
+      "total": {
+        "ticks": 3167250,
+        "time": {
+          "ms": 150
+        },
+        "value": 3167250
+      },
+      "user": {
+        "ticks": 2474560,
+        "time": {
+          "ms": 90
+        }
+      }
+    },
+    "handles": {
+      "limit": {
+        "hard": 1048576,
+        "soft": 1048576
+      },
+      "open": 32
+    },
+    "info": {
+      "ephemeral_id": "2bab8688-34c0-4522-80af-db86948d547d",
+      "uptime": {
+        "ms": 617670096
+      },
+      "version": "8.6.2"
+    },
+    "memstats": {
+      "gc_next": 57189272,
+      "memory_alloc": 43589824,
+      "memory_total": 275281335792,
+      "rss": 183574528
+    },
+    "runtime": {
+      "goroutines": 212
+    }
+  },
+  "filebeat": {
+    "events": {
+      "active": 5,
+      "added": 52,
+      "done": 49
+    },
+    "harvester": {
+      "open_files": 6,
+      "running": 6,
+      "started": 1
+    }
+  },
+  "libbeat": {
+    "config": {
+      "module": {
+        "running": 15
+      }
+    },
+    "output": {
+      "events": {
+        "acked": 48,
+        "active": 0,
+        "batches": 6,
+        "total": 48
+      },
+      "read": {
+        "bytes": 210
+      },
+      "write": {
+        "bytes": 26923
+      }
+    },
+    "pipeline": {
+      "clients": 15,
+      "events": {
+        "active": 5,
+        "filtered": 1,
+        "published": 51,
+        "total": 52
+      },
+      "queue": {
+        "max_events": 3500,
+        "filled": {
+          "events": 5,
+          "bytes": 6425,
+          "pct": 0.0014
+        },
+        "added": {
+          "events": 52,
+          "bytes": 65702
+        },
+        "consumed": {
+          "events": 52,
+          "bytes": 65702
+        },
+        "removed": {
+          "events": 48,
+          "bytes": 59277
+        },
+        "acked": 48
+      }
+    }
+  },
+  "registrar": {
+    "states": {
+      "current": 14,
+      "update": 49
+    },
+    "writes": {
+      "success": 6,
+      "total": 6
+    }
+  },
+  "system": {
+    "load": {
+      "1": 0.91,
+      "15": 0.37,
+      "5": 0.4,
+      "norm": {
+        "1": 0.1138,
+        "15": 0.0463,
+        "5": 0.05
+      }
+    }
+  }
+}
+```
+
+The following tables explain the meaning of the most important fields under `.monitoring.metrics` and also provide hints that might be helpful in troubleshooting Auditbeat issues.
+
+| Field path (relative to `.monitoring.metrics`) | Type | Meaning | Troubleshooting hints |
+| --- | --- | --- | --- |
+| `.beat` | Object | Information that is common to all Beats, e.g. version, goroutines, file handles, CPU, memory | |
+| `.libbeat` | Object | Information about the publisher pipeline and output, also common to all Beats | |
+
+| Field path (relative to `.monitoring.metrics.beat`) | Type | Meaning | Troubleshooting hints |
+| --- | --- | --- | --- |
+| `.runtime.goroutines` | Integer | Number of goroutines running | If this number grows over time, it indicates a goroutine leak |
+
+| Field path (relative to `.monitoring.metrics.libbeat`) | Type | Meaning | Troubleshooting hints |
+| --- | --- | --- | --- |
+| `.pipeline.events.active` | Integer | Number of events currently in the libbeat publisher pipeline. | If this number grows over time, it may indicate that Auditbeat is producing events faster than the output can consume them. Consider increasing the number of output workers (if this setting is supported by the output; {{es}} and {{ls}} outputs support this setting). The pipeline includes events currently being processed as well as events in the queue. So this metric can sometimes end up slightly higher than the queue size. If this metric reaches the maximum queue size (`queue.mem.events` for the in-memory queue), it almost certainly indicates backpressure on Auditbeat, implying that Auditbeat may need to temporarily stop ingesting more events from the source until this backpressure is relieved. |
+| `.output.events.total` | Integer | Number of events currently being processed by the output. | If this number grows over time, it may indicate that the output destination (e.g. {{ls}} pipeline or {{es}} cluster) is not able to accept events at the same or faster rate than what Auditbeat is sending to it. |
+| `.output.events.acked` | Integer | Number of events acknowledged by the output destination. | Generally, we want this number to be the same as `.output.events.total` as this indicates that the output destination has reliably received all the events sent to it. |
+| `.output.events.failed` | Integer | Number of events that Auditbeat tried to send to the output destination, but the destination failed to receive them. | Generally, we want this field to be absent or its value to be zero. When the value is greater than zero, it’s useful to check Auditbeat’s logs right before this log entry’s `@timestamp` to see if there are any connectivity issues with the output destination. Note that failed events are not lost or dropped; they will be sent back to the publisher pipeline for retrying later. |
+| `.output.events.dropped` | Integer | Number of events that Auditbeat gave up sending to the output destination because of a permanent (non-retryable) error. | |
+| `.output.events.dead_letter` | Integer | Number of events that Auditbeat successfully sent to a configured dead letter index after they failed to ingest in the primary index. | |
+| `.output.write.latency` | Object | Reports statistics on the time to send an event to the connected output, in milliseconds. | |
+
+| Field path (relative to `.monitoring.metrics.libbeat.pipeline`) | Type | Meaning | Troubleshooting hints |
+| --- | --- | --- | --- |
+| `.queue.max_events` | Integer (gauge) | The queue’s maximum event count if it has one, otherwise zero. | |
+| `.queue.max_bytes` | Integer (gauge) | The queue’s maximum byte count if it has one, otherwise zero. | |
+| `.queue.filled.events` | Integer (gauge) | Number of events currently stored by the queue. | |
+| `.queue.filled.bytes` | Integer (gauge) | Number of bytes currently stored by the queue. | |
+| `.queue.filled.pct` | Float (gauge) | How full the queue is relative to its maximum size, as a fraction from 0 to 1. | Low throughput while `queue.filled.pct` is low means congestion in the input. Low throughput while `queue.filled.pct` is high means congestion in the output. |
+| `.queue.added.events` | Integer | Number of events added to the queue by input workers. | |
+| `.queue.added.bytes` | Integer | Number of bytes added to the queue by input workers. | |
+| `.queue.consumed.events` | Integer | Number of events sent to output workers. | |
+| `.queue.consumed.bytes` | Integer | Number of bytes sent to output workers. | |
+| `.queue.removed.events` | Integer | Number of events removed from the queue after being processed by output workers. | |
+| `.queue.removed.bytes` | Integer | Number of bytes removed from the queue after being processed by output workers. | |
+
+When using the memory queue, byte metrics are only set if the output supports them. Currently only the Elasticsearch output supports byte metrics.
+
+
+## Useful commands [_useful_commands_2]
+
+
+### Parse monitoring metrics from unstructured Auditbeat logs [_parse_monitoring_metrics_from_unstructured_auditbeat_logs]
+
+For Auditbeat versions that emit unstructured logs, the following script can be used to parse monitoring metrics from such logs: [https://github.com/elastic/beats/blob/main/script/metrics_from_log_file.sh](https://github.com/elastic/beats/blob/main/script/metrics_from_log_file.sh).
+
diff --git a/docs/reference/auditbeat/upgrading-auditbeat.md b/docs/reference/auditbeat/upgrading-auditbeat.md
new file mode 100644
index 000000000000..b82256334a22
--- /dev/null
+++ b/docs/reference/auditbeat/upgrading-auditbeat.md
@@ -0,0 +1,12 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/upgrading-auditbeat.html
+---
+
+# Upgrade Auditbeat [upgrading-auditbeat]
+
+For information about upgrading to a new version, see:
+
+* [Breaking Changes](/release-notes/breaking-changes.md)
+* [Upgrade](/reference/libbeat/upgrading.md)
+
diff --git a/docs/reference/auditbeat/urldecode.md b/docs/reference/auditbeat/urldecode.md
new file mode 100644
index 000000000000..82911d16c83f
--- /dev/null
+++ b/docs/reference/auditbeat/urldecode.md
@@ -0,0 +1,38 @@
+---
+navigation_title: "urldecode"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/auditbeat/current/urldecode.html
+---
+
+# URL Decode [urldecode]
+
+
+The `urldecode` processor specifies a list of fields to decode from URL encoded format. Under the `fields` key, each entry contains a `from: source-field` and a `to: target-field` pair, where:
+
+* `from` is the source field name
+* `to` is the target field name (defaults to the `from` value)
+
+```yaml
+processors:
+  - urldecode:
+      fields:
+        - from: "field1"
+          to: "field2"
+      ignore_missing: false
+      fail_on_error: true
+```
+
+In the example above:
+
+* the value of `field1` is URL-decoded and written to `field2`
+
+The `urldecode` processor has the following configuration settings:
+
+`ignore_missing`
+: (Optional) If set to `true`, no error is logged when a field that should be URL-decoded is missing. Default is `false`.
+
+`fail_on_error`
+: (Optional) If set to `true`, URL-decoding stops on the first error and the original event is returned. If set to `false`, decoding continues even if an error occurs. Default is `true`.
+ +See [Conditions](/reference/auditbeat/defining-processors.md#conditions) for a list of supported conditions. + diff --git a/docs/reference/auditbeat/using-environ-vars.md b/docs/reference/auditbeat/using-environ-vars.md new file mode 100644 index 000000000000..581b7e8febf0 --- /dev/null +++ b/docs/reference/auditbeat/using-environ-vars.md @@ -0,0 +1,80 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/using-environ-vars.html +--- + +# Use environment variables in the configuration [using-environ-vars] + +You can use environment variable references in the config file to set values that need to be configurable during deployment. To do this, use: + +`${VAR}` + +Where `VAR` is the name of the environment variable. + +Each variable reference is replaced at startup by the value of the environment variable. The replacement is case-sensitive and occurs before the YAML file is parsed. References to undefined variables are replaced by empty strings unless you specify a default value or custom error text. + +To specify a default value, use: + +`${VAR:default_value}` + +Where `default_value` is the value to use if the environment variable is undefined. + +To specify custom error text, use: + +`${VAR:?error_text}` + +Where `error_text` is custom text that will be prepended to the error message if the environment variable cannot be expanded. + +If you need to use a special character in your configuration file, use `$` to escape the expansion. For example, you can escape `${` or `}` with `$${` or `$}`. + +After changing the value of an environment variable, you need to restart Auditbeat to pick up the new value. + +::::{note} +You can also specify environment variables when you override a config setting from the command line by using the `-E` option. For example: + +`-E name=${NAME}` + +:::: + + + +## Examples [_examples] + +Here are some examples of configurations that use environment variables and what each configuration looks like after replacement: + +| Config source | Environment setting | Config after replacement | +| --- | --- | --- | +| `name: ${NAME}` | `export NAME=elastic` | `name: elastic` | +| `name: ${NAME}` | no setting | `name:` | +| `name: ${NAME:beats}` | no setting | `name: beats` | +| `name: ${NAME:beats}` | `export NAME=elastic` | `name: elastic` | +| `name: ${NAME:?You need to set the NAME environment variable}` | no setting | None. Returns an error message that’s prepended with the custom text. | +| `name: ${NAME:?You need to set the NAME environment variable}` | `export NAME=elastic` | `name: elastic` | + + +## Specify complex objects in environment variables [_specify_complex_objects_in_environment_variables] + +You can specify complex objects, such as lists or dictionaries, in environment variables by using a JSON-like syntax. + +As with JSON, dictionaries and lists are constructed using `{}` and `[]`. But unlike JSON, the syntax allows for trailing commas and slightly different string quotation rules. Strings can be unquoted, single-quoted, or double-quoted, as a convenience for simple settings and to make it easier for you to mix quotation usage in the shell. Arrays at the top-level do not require brackets (`[]`). 
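+
+As a quick illustration of this syntax (the variable name and values here are hypothetical), a dictionary can be written inline with a mix of quoting styles:
+
+```yaml
+DEFAULT_FIELDS="{app: my-app, env: 'staging'}"
+```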
+ +For example, the following environment variable is set to a list: + +```yaml +ES_HOSTS="10.45.3.2:9220,10.45.3.1:9230" +``` + +You can reference this variable in the config file: + +```yaml +output.elasticsearch: + hosts: '${ES_HOSTS}' +``` + +When Auditbeat loads the config file, it resolves the environment variable and replaces it with the specified list before reading the `hosts` setting. + +::::{note} +Do not use double-quotes (`"`) to wrap regular expressions, or the backslash (`\`) will be interpreted as an escape character. +:::: + + diff --git a/docs/reference/auditbeat/yaml-tips.md b/docs/reference/auditbeat/yaml-tips.md new file mode 100644 index 000000000000..1dc1b1b4c9ee --- /dev/null +++ b/docs/reference/auditbeat/yaml-tips.md @@ -0,0 +1,60 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/auditbeat/current/yaml-tips.html +--- + +# Avoid YAML formatting problems [yaml-tips] + +The configuration file uses [YAML](http://yaml.org/) for its syntax. When you edit the file to modify configuration settings, there are a few things that you should know. + + +## Use spaces for indentation [_use_spaces_for_indentation] + +Indentation is meaningful in YAML. Make sure that you use spaces, rather than tab characters, to indent sections. + +In the default configuration files and in all the examples in the documentation, we use 2 spaces per indentation level. We recommend you do the same. + + +## Look at the default config file for structure [_look_at_the_default_config_file_for_structure] + +The best way to understand where to define a configuration option is by looking at the provided sample configuration files. The configuration files contain most of the default configurations that are available for the Beat. To change a setting, simply uncomment the line and change the values. + + +## Test your config file [_test_your_config_file] + +You can test your configuration file to verify that the structure is valid. Simply change to the directory where the binary is installed, and run the Beat in the foreground with the `test config` command specified. For example: + +```shell +auditbeat test config -c auditbeat.yml +``` + +You’ll see a message if the Beat finds an error in the file. + + +## Wrap regular expressions in single quotation marks [_wrap_regular_expressions_in_single_quotation_marks] + +If you need to specify a regular expression in a YAML file, it’s a good idea to wrap the regular expression in single quotation marks to work around YAML’s tricky rules for string escaping. + +For more information about YAML, see [http://yaml.org/](http://yaml.org/). + + +## Wrap paths in single quotation marks [wrap-paths-in-quotes] + +Windows paths in particular sometimes contain spaces or characters, such as drive letters or triple dots, that may be misinterpreted by the YAML parser. + +To avoid this problem, it’s a good idea to wrap paths in single quotation marks. + + +## Avoid using leading zeros in numeric values [avoid-leading-zeros] + +If you use a leading zero (for example, `09`) in a numeric field without wrapping the value in single quotation marks, the value may be interpreted incorrectly by the YAML parser. If the value is a valid octal, it’s converted to an integer. If not, it’s converted to a float. + +To prevent unwanted type conversions, avoid using leading zeros in field values, or wrap the values in single quotation marks. 
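+
+As a small illustration (the setting names here are hypothetical):
+
+```yaml
+build_id: 09    # unquoted: "09" is not valid octal, so YAML converts it to a float
+build_tag: '09' # quoted: preserved as the string "09"
+```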
+
+
+## Avoid accidental template variable resolution [dollar-sign-strings]
+
+The templating engine that allows the config to resolve data from environment variables can result in errors in strings with `$` characters. For example, if a password field contains `$$`, the engine will resolve this to `$`.
+
+To work around this, either use the [Secrets keystore](/reference/auditbeat/keystore.md) or escape all instances of `$` with `$$`.
+
diff --git a/docs/reference/filebeat/_debugging_on_kibana.md b/docs/reference/filebeat/_debugging_on_kibana.md
new file mode 100644
index 000000000000..4ea4aacef52a
--- /dev/null
+++ b/docs/reference/filebeat/_debugging_on_kibana.md
@@ -0,0 +1,9 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/_debugging_on_kibana.html
+---
+
+# Debugging on Kibana [_debugging_on_kibana]
+
+Events produced by `filestream` with `take_over: true` contain a `take_over` tag. You can filter on this tag in Kibana to see the events that came from a filestream in "take over" mode.
+
diff --git a/docs/reference/filebeat/_if_something_went_wrong.md b/docs/reference/filebeat/_if_something_went_wrong.md
new file mode 100644
index 000000000000..90e994b8604a
--- /dev/null
+++ b/docs/reference/filebeat/_if_something_went_wrong.md
@@ -0,0 +1,21 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/_if_something_went_wrong.html
+---
+
+# If something went wrong [_if_something_went_wrong]
+
+If, for whatever reason, you’d like to revert to the old `log` inputs after running the migrated configuration and return the files that were taken over by `filestream` inputs, you need to do the following:
+
+1. Stop Filebeat as soon as possible
+2. Save its debug-level logs for further investigation
+3. Find your [`registry.path/filebeat` directory](/reference/filebeat/configuration-general-options.md#configuration-global-options)
+4. Find the created backup files; they have the `.bak` suffix. If you have multiple backups for the same file, choose the one with the most recent timestamp.
+5. Replace the files with their backups, e.g. `log.json` should be replaced by `log.json-1674152412247684000.bak`
+6. Run Filebeat with the old configuration (no `filestream` inputs with `take_over: true`).
+
+::::{note}
+Reverting to backups might cause some events to repeat, depending on how long the new configuration was running.
+::::
+
+
diff --git a/docs/reference/filebeat/_live_reloading.md b/docs/reference/filebeat/_live_reloading.md
new file mode 100644
index 000000000000..3ef988079520
--- /dev/null
+++ b/docs/reference/filebeat/_live_reloading.md
@@ -0,0 +1,37 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/_live_reloading.html
+---
+
+# Live reloading [_live_reloading]
+
+You can configure Filebeat to dynamically reload external configuration files when there are changes. This feature is available for input and module configurations that are loaded as [external configuration files](/reference/filebeat/filebeat-configuration-reloading.md). You cannot use this feature to reload the main `filebeat.yml` configuration file.
+
+To configure this feature, you specify a path ([Glob](https://golang.org/pkg/path/filepath/#Glob)) to watch for configuration changes. When the files found by the Glob change, new inputs and/or modules are started and stopped according to changes in the configuration files.
+
+This feature is especially useful in container environments where one container is used to tail logs for services running in other containers on the same host.
+
+To enable dynamic config reloading, you specify the `path` and `reload` options under the `filebeat.config.inputs` or `filebeat.config.modules` sections. For example:
+
+```yaml
+filebeat.config.inputs:
+  enabled: true
+  path: configs/*.yml
+  reload.enabled: true
+  reload.period: 10s
+```
+
+`path`
+: A Glob that defines the files to check for changes.
+
+`reload.enabled`
+: When set to `true`, enables dynamic config reload.
+
+`reload.period`
+: Specifies how often the files are checked for changes. Do not set the `period` to less than 1s because the modification time of files is often stored in seconds. Setting the `period` to less than 1s will result in unnecessary overhead.
+
+::::{note}
+On systems with POSIX file permissions, all Beats configuration files are subject to ownership and file permission checks. For more information, see [Config File Ownership and Permissions](/reference/libbeat/config-file-permissions.md).
+::::
+
+
diff --git a/docs/reference/filebeat/_set_up_the_oauth_app_in_the_salesforce_2.md b/docs/reference/filebeat/_set_up_the_oauth_app_in_the_salesforce_2.md
new file mode 100644
index 000000000000..5a20c4b3bd3a
--- /dev/null
+++ b/docs/reference/filebeat/_set_up_the_oauth_app_in_the_salesforce_2.md
@@ -0,0 +1,451 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/_set_up_the_oauth_app_in_the_salesforce_2.html
+---
+
+# Set up the OAuth App in Salesforce [_set_up_the_oauth_app_in_the_salesforce_2]
+
+In order to use this integration, users need to create a new Salesforce Application using OAuth. Follow the steps below to create a connected application in Salesforce:
+
+1. Log in to [Salesforce](https://login.salesforce.com/) with the same user credentials that the user wants to collect data with.
+2. Click Setup in the top right menu bar. On the Setup page, search for `App Manager` in the `Search Setup` search box at the top of the page, then select `App Manager`.
+3. Click *New Connected App*.
+4. Provide a name for the connected application. This will be displayed in the App Manager and on its App Launcher tile.
+5. Enter the API name. The default is a version of the name without spaces. Only letters, numbers, and underscores are allowed. If the original app name contains any other characters, edit the default name.
+6. Enter the contact email for Salesforce.
+7. Under the API (Enable OAuth Settings) section of the page, select *Enable OAuth Settings*.
+8. In the Callback URL, enter the Instance URL (refer to `Salesforce Instance URL`).
+9. Select the following OAuth scopes to apply to the connected app:
+
+    * Manage user data via APIs (api).
+    * Perform requests at any time (refresh_token, offline_access).
+    * (Optional) In case of data collection, if any permission issues arise, add the Full access (full) scope.
+
+10. Select *Require Secret for the Web Server Flow* to require the app’s client secret in exchange for an access token.
+11. Select *Require Secret for Refresh Token Flow* to require the app’s client secret in the authorization request of a refresh token and hybrid refresh token flow.
+12. Click Save. It may take approximately 10 minutes for the changes to take effect.
+13. Click Continue and then, under API details, click Manage Consumer Details. Verify the user account using the Verification Code.
+14. Copy `Consumer Key` and `Consumer Secret` from the Consumer Details section; these should be used as the values for Client ID and Client Secret, respectively, in the configuration.
+
+For more details on how to create a Connected App, refer to the Salesforce documentation [here](https://help.salesforce.com/apex/HTViewHelpDoc?id=connected_app_create.htm).
+
+::::{note}
+**Enabling real-time events**
+
+To get started with [real-time](https://developer.salesforce.com/blogs/2020/05/introduction-to-real-time-event-monitoring) events, go to Setup and search for *Event Manager* in the Quick Find box. Enterprise and Unlimited environments have access to the Logout Event by default, but access to the remaining events requires a [Shield Event Monitoring](https://help.salesforce.com/s/articleView?id=sf.salesforce_shield.htm&type=5) license.
+
+::::
+
+
+::::{tip}
+Read the [quick start](/reference/filebeat/filebeat-installation-configuration.md) to learn how to configure and run modules.
+::::
+
+
+
+## Configure the module [configuring-salesforce-module]
+
+You can further refine the behavior of the `salesforce` module by specifying [variable settings](#salesforce-settings) in the `modules.d/salesforce.yml` file, or overriding settings at the command line.
+
+You must enable at least one fileset in the module. **Filesets are disabled by default.**
+
+
+### Variable settings [salesforce-settings]
+
+Each fileset has separate variable settings for configuring the behavior of the module. If you don’t specify variable settings, the `salesforce` module uses the defaults.
+
+For advanced use cases, you can also override input settings. See [Override input settings](/reference/filebeat/advanced-settings.md).
+
+::::{tip}
+When you specify a setting at the command line, remember to prefix the setting with the module name, for example, `salesforce.login.var.paths` instead of `login.var.paths`.
+::::
+
+
+
+## Fileset settings [_fileset_settings]
+
+
+### `login` fileset [_login_fileset]
+
+Example config:
+
+```yaml
+- module: salesforce
+  login:
+    enabled: true
+    var.initial_interval: 1d
+    var.api_version: 56
+
+    var.authentication:
+      jwt_bearer_flow:
+        enabled: false
+        client.id: "my-client-id"
+        client.username: "my.email@here.com"
+        client.key_path: client_key.pem
+        url: https://login.salesforce.com
+      user_password_flow:
+        enabled: true
+        client.id: "my-client-id"
+        client.secret: "my-client-secret"
+        token_url: "https://login.salesforce.com"
+        username: "my.email@here.com"
+        password: "password"
+
+    var.url: "https://instance-url.salesforce.com"
+
+    var.event_log_file: true
+    var.elf_interval: 1h
+    var.log_file_interval: Hourly
+
+    var.real_time: true
+    var.real_time_interval: 5m
+```
+
+**`var.initial_interval`**
+: The time window for collecting historical data when the input starts. Expects a duration string (e.g. 12h or 7d).
+
+**`var.api_version`**
+: The API version of the Salesforce instance.
+
+**`var.authentication`**
+: Authentication config for connecting to the Salesforce API. Supports JWT or user-password auth flows.
+
+**`var.authentication.jwt_bearer_flow.enabled`**
+: Set to `true` to use JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.client.id`**
+: The client ID for JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.client.username`**
+: The username for JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.client.key_path`**
+: Path to the client key file for JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.url`**
+: The audience URL for JWT authentication.
+
+**`var.authentication.user_password_flow.enabled`**
+: Set to `true` to use user-password authentication.
+
+**`var.authentication.user_password_flow.client.id`**
+: The client ID for user-password authentication.
+
+**`var.authentication.user_password_flow.client.secret`**
+: The client secret for user-password authentication.
+
+**`var.authentication.user_password_flow.token_url`**
+: The Salesforce token URL for user-password authentication.
+
+**`var.authentication.user_password_flow.username`**
+: The Salesforce username for authentication.
+
+**`var.authentication.user_password_flow.password`**
+: The password for the Salesforce user.
+
+**`var.url`**
+: The URL of the Salesforce instance.
+
+**`var.event_log_file`**
+: Set to `true` to collect logs from EventLogFile (historical data).
+
+**`var.elf_interval`**
+: Interval for collecting EventLogFile logs, e.g. 1h or 5m.
+
+**`var.log_file_interval`**
+: Either "Hourly" or "Daily". The time interval of each log file from EventLogFile.
+
+**`var.real_time`**
+: Set to `true` to enable real-time data collection.
+
+**`var.real_time_interval`**
+: Interval for collecting real-time logs, e.g. 30s or 5m.
+
+
+### `logout` fileset [_logout_fileset]
+
+Example config:
+
+```yaml
+- module: salesforce
+  logout:
+    enabled: true
+    var.initial_interval: 1d
+    var.api_version: 56
+
+    var.authentication:
+      jwt_bearer_flow:
+        enabled: false
+        client.id: "my-client-id"
+        client.username: "my.email@here.com"
+        client.key_path: client_key.pem
+        url: https://login.salesforce.com
+      user_password_flow:
+        enabled: true
+        client.id: "my-client-id"
+        client.secret: "my-client-secret"
+        token_url: "https://login.salesforce.com"
+        username: "my.email@here.com"
+        password: "password"
+
+    var.url: "https://instance-url.salesforce.com"
+
+    var.event_log_file: true
+    var.elf_interval: 1h
+    var.log_file_interval: Hourly
+
+    var.real_time: true
+    var.real_time_interval: 5m
+```
+
+**`var.initial_interval`**
+: The time window for collecting historical data when the input starts. Expects a duration string (e.g. 12h or 7d).
+
+**`var.api_version`**
+: The API version of the Salesforce instance.
+
+**`var.authentication`**
+: Authentication config for connecting to the Salesforce API. Supports JWT or user-password auth flows.
+
+**`var.authentication.jwt_bearer_flow.enabled`**
+: Set to `true` to use JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.client.id`**
+: The client ID for JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.client.username`**
+: The username for JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.client.key_path`**
+: Path to the client key file for JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.url`**
+: The audience URL for JWT authentication.
+
+**`var.authentication.user_password_flow.enabled`**
+: Set to `true` to use user-password authentication.
+
+**`var.authentication.user_password_flow.client.id`**
+: The client ID for user-password authentication.
+
+**`var.authentication.user_password_flow.client.secret`**
+: The client secret for user-password authentication.
+
+**`var.authentication.user_password_flow.token_url`**
+: The Salesforce token URL for user-password authentication.
+
+**`var.authentication.user_password_flow.username`**
+: The Salesforce username for authentication.
+
+**`var.authentication.user_password_flow.password`**
+: The password for the Salesforce user.
+
+**`var.url`**
+: The URL of the Salesforce instance.
+
+**`var.event_log_file`**
+: Set to `true` to collect logs from EventLogFile (historical data).
+
+**`var.elf_interval`**
+: Interval for collecting EventLogFile logs, e.g. 1h or 5m.
+
+**`var.log_file_interval`**
+: Either "Hourly" or "Daily". The time interval of each log file from EventLogFile.
+
+**`var.real_time`**
+: Set to `true` to enable real-time data collection.
+
+**`var.real_time_interval`**
+: Interval for collecting real-time logs, e.g. 30s or 5m.
+
+
+### `setupaudittrail` fileset [_setupaudittrail_fileset]
+
+Example config:
+
+```yaml
+- module: salesforce
+  setupaudittrail:
+    enabled: true
+    var.initial_interval: 1d
+    var.api_version: 56
+
+    var.authentication:
+      jwt_bearer_flow:
+        enabled: false
+        client.id: "my-client-id"
+        client.username: "my.email@here.com"
+        client.key_path: client_key.pem
+        url: https://login.salesforce.com
+      user_password_flow:
+        enabled: true
+        client.id: "my-client-id"
+        client.secret: "my-client-secret"
+        token_url: "https://login.salesforce.com"
+        username: "my.email@here.com"
+        password: "password"
+
+    var.url: "https://instance-url.salesforce.com"
+
+    var.real_time: true
+    var.real_time_interval: 5m
+```
+
+**`var.initial_interval`**
+: The time window for collecting historical data when the input starts. Expects a duration string (e.g. 12h or 7d).
+
+**`var.api_version`**
+: The API version of the Salesforce instance.
+
+**`var.authentication`**
+: Authentication config for connecting to the Salesforce API. Supports JWT or user-password auth flows.
+
+**`var.authentication.jwt_bearer_flow.enabled`**
+: Set to `true` to use JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.client.id`**
+: The client ID for JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.client.username`**
+: The username for JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.client.key_path`**
+: Path to the client key file for JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.url`**
+: The audience URL for JWT authentication.
+
+**`var.authentication.user_password_flow.enabled`**
+: Set to `true` to use user-password authentication.
+
+**`var.authentication.user_password_flow.client.id`**
+: The client ID for user-password authentication.
+
+**`var.authentication.user_password_flow.client.secret`**
+: The client secret for user-password authentication.
+
+**`var.authentication.user_password_flow.token_url`**
+: The Salesforce token URL for user-password authentication.
+
+**`var.authentication.user_password_flow.username`**
+: The Salesforce username for authentication.
+
+**`var.authentication.user_password_flow.password`**
+: The password for the Salesforce user.
+
+**`var.url`**
+: The URL of the Salesforce instance.
+
+**`var.real_time`**
+: Set to `true` to enable real-time data collection.
+
+**`var.real_time_interval`**
+: Interval for collecting real-time logs, e.g. 30s or 5m.
+
+
+### `apex` fileset [_apex_fileset]
+
+Example config:
+
+```yaml
+- module: salesforce
+  apex:
+    enabled: true
+    var.initial_interval: 1d
+    var.api_version: 56
+
+    var.authentication:
+      jwt_bearer_flow:
+        enabled: false
+        client.id: "my-client-id"
+        client.username: "my.email@here.com"
+        client.key_path: client_key.pem
+        url: https://login.salesforce.com
+      user_password_flow:
+        enabled: true
+        client.id: "my-client-id"
+        client.secret: "my-client-secret"
+        token_url: "https://login.salesforce.com"
+        username: "my.email@here.com"
+        password: "password"
+
+    var.url: "https://instance-url.salesforce.com"
+
+    var.event_log_file: true
+    var.elf_interval: 1h
+    var.log_file_interval: Hourly
+```
+
+**`var.initial_interval`**
+: The time window for collecting historical data when the input starts. Expects a duration string (e.g. 12h or 7d).
+
+**`var.api_version`**
+: The API version of the Salesforce instance.
+
+**`var.authentication`**
+: Authentication config for connecting to the Salesforce API. Supports JWT or user-password auth flows.
+
+**`var.authentication.jwt_bearer_flow.enabled`**
+: Set to `true` to use JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.client.id`**
+: The client ID for JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.client.username`**
+: The username for JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.client.key_path`**
+: Path to the client key file for JWT authentication.
+
+**`var.authentication.jwt_bearer_flow.url`**
+: The audience URL for JWT authentication.
+
+**`var.authentication.user_password_flow.enabled`**
+: Set to `true` to use user-password authentication.
+
+**`var.authentication.user_password_flow.client.id`**
+: The client ID for user-password authentication.
+
+**`var.authentication.user_password_flow.client.secret`**
+: The client secret for user-password authentication.
+
+**`var.authentication.user_password_flow.token_url`**
+: The Salesforce token URL for user-password authentication.
+
+**`var.authentication.user_password_flow.username`**
+: The Salesforce username for authentication.
+
+**`var.authentication.user_password_flow.password`**
+: The password for the Salesforce user.
+
+**`var.url`**
+: The URL of the Salesforce instance.
+
+**`var.event_log_file`**
+: Set to `true` to collect logs from EventLogFile (historical data).
+
+**`var.elf_interval`**
+: Interval for collecting EventLogFile logs, e.g. 1h or 5m.
+
+**`var.log_file_interval`**
+: Either "Hourly" or "Daily". The time interval of each log file from EventLogFile.
+
+
+## Troubleshooting [_troubleshooting]
+
+Here are some common issues and how to resolve them:
+
+**Hitting Salesforce API limits**
+: Increase the values of `var.real_time_interval` and `var.elf_interval` to poll the API less frequently. Monitor the API usage in your Salesforce instance.
+
+**Connectivity issues**
+: Verify the `var.url` is correct. Check that the user credentials are valid and have the necessary permissions. Ensure network connectivity between the Elastic Agent and the Salesforce instance.
+
+**Not seeing any data**
+: Check the Elastic Agent logs for errors. Verify the module configuration is correct, the filesets are enabled, and the intervals are reasonable. Confirm there is log activity in Salesforce for the log types being collected.
+
+
+## Fields [_fields_47]
+
+For a description of each field in the module, see the [exported fields](/reference/filebeat/exported-fields-salesforce.md) section.
diff --git a/docs/reference/filebeat/_step_1_set_an_identifier_for_each_filestream_input.md b/docs/reference/filebeat/_step_1_set_an_identifier_for_each_filestream_input.md new file mode 100644 index 000000000000..6565f0077d84 --- /dev/null +++ b/docs/reference/filebeat/_step_1_set_an_identifier_for_each_filestream_input.md @@ -0,0 +1,35 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/_step_1_set_an_identifier_for_each_filestream_input.html +--- + +# Step 1: Set an identifier for each filestream input [_step_1_set_an_identifier_for_each_filestream_input] + +All `filestream` inputs require an ID. Ensure you set a unique identifier for every input. + +::::{important} +Never change the ID of an input, or you will end up with duplicate events. +:::: + + +```yaml +filebeat.inputs: +- type: filestream + enabled: true + id: my-java-collector + paths: + - /var/log/java-exceptions*.log + +- type: filestream + enabled: true + id: my-application-input + paths: + - /var/log/my-application*.json + +- type: filestream + enabled: true + id: my-old-files + paths: + - /var/log/my-old-files*.log +``` + diff --git a/docs/reference/filebeat/_step_2_enable_the_take_over_mode.md b/docs/reference/filebeat/_step_2_enable_the_take_over_mode.md new file mode 100644 index 000000000000..d951575d93c2 --- /dev/null +++ b/docs/reference/filebeat/_step_2_enable_the_take_over_mode.md @@ -0,0 +1,50 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/_step_2_enable_the_take_over_mode.html +--- + +# Step 2: Enable the take over mode [_step_2_enable_the_take_over_mode] + +Now, to indicate that the new `filestream` is supposed to take over the files from a previously defined `log` input, we need to add `take_over: true` to each new `filestream`. This will make sure that the new `filestream` inputs will continue ingesting files from the same offset where the `log` inputs stopped. + +::::{note} +It’s recommended to enable debug-level logs for Filebeat in order to follow the migration process. After the first run with `take_over: true` the setting can be removed. +:::: + + +::::{warning} +The `take over` mode is in beta. +:::: + + +::::{important} +If this parameter is not set, all the files will be re-ingested from the beginning and this will lead to data duplication. Please, double-check that this parameter is set. +:::: + + +```yaml +logging: + level: debug +filebeat.inputs: +- type: filestream + enabled: true + id: my-java-collector + take_over: true + paths: + - /var/log/java-exceptions*.log + +- type: filestream + enabled: true + id: my-application-input + take_over: true + paths: + - /var/log/my-application*.json + +- type: filestream + enabled: true + id: my-old-files + take_over: true + paths: + - /var/log/my-old-files*.log +``` + diff --git a/docs/reference/filebeat/_step_3_use_new_option_names.md b/docs/reference/filebeat/_step_3_use_new_option_names.md new file mode 100644 index 000000000000..fadb26d49082 --- /dev/null +++ b/docs/reference/filebeat/_step_3_use_new_option_names.md @@ -0,0 +1,68 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/_step_3_use_new_option_names.html +--- + +# Step 3: Use new option names [_step_3_use_new_option_names] + +Several options are renamed in `filestream`. You can find a table with all of the changed configuration names at the end of this guide. + +The most significant change you have to know about is in parsers. The configuration of `multiline`, `json`, and other parsers has changed. 
Now the ordering is configurable, so `filestream` expects a list of parsers. Furthermore, the `json` parser was renamed to `ndjson`.
+
+The example configuration shown earlier needs to be adjusted as well:
+
+```yaml
+- type: filestream
+  enabled: true
+  id: my-java-collector
+  take_over: true
+  paths:
+    - /var/log/java-exceptions*.log
+  parsers:
+    - multiline:
+        pattern: '^\['
+        negate: true
+        match: after
+  close.on_state_change.removed: true
+  close.on_state_change.renamed: true
+
+- type: filestream
+  enabled: true
+  id: my-application-input
+  take_over: true
+  paths:
+    - /var/log/my-application*.json
+  prospector.scanner.check_interval: 1m
+  parsers:
+    - ndjson:
+        keys_under_root: true
+
+- type: filestream
+  enabled: true
+  id: my-old-files
+  take_over: true
+  paths:
+    - /var/log/my-old-files*.log
+  ignore_inactive: since_last_start
+```
+
+| Option name in log input | Option name in filestream input |
+| --- | --- |
+| recursive_glob.enabled | prospector.scanner.recursive_glob |
+| harvester_buffer_size | buffer_size |
+| max_bytes | message_max_bytes |
+| json | parsers.n.ndjson |
+| multiline | parsers.n.multiline |
+| exclude_files | prospector.scanner.exclude_files |
+| close_inactive | close.on_state_change.inactive |
+| close_removed | close.on_state_change.removed |
+| close_eof | close.reader.on_eof |
+| close_timeout | close.reader.after_interval |
+| scan_frequency | prospector.scanner.check_interval |
+| tail_files | ignore_inactive.since_last_start |
+| symlinks | prospector.scanner.symlinks |
+| backoff | backoff.init |
+| backoff_max | backoff.max |
+
diff --git a/docs/reference/filebeat/_step_4.md b/docs/reference/filebeat/_step_4.md
new file mode 100644
index 000000000000..9bf4bc25c37a
--- /dev/null
+++ b/docs/reference/filebeat/_step_4.md
@@ -0,0 +1,11 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/_step_4.html
+---
+
+# Step 4 [_step_4]
+
+The events produced by a `filestream` input with `take_over: true` contain a `take_over` tag. You can filter on this tag in Kibana to see the events that came from a filestream in "take over" mode.
+
+Once you start receiving events with this tag, you can remove `take_over: true` and restart the input again.
+
diff --git a/docs/reference/filebeat/add-cached-metadata.md b/docs/reference/filebeat/add-cached-metadata.md
new file mode 100644
index 000000000000..bcc7195c0ac0
--- /dev/null
+++ b/docs/reference/filebeat/add-cached-metadata.md
@@ -0,0 +1,121 @@
+---
+navigation_title: "cache"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-cached-metadata.html
+---
+
+# Add cached metadata [add-cached-metadata]
+
+
+::::{warning}
+This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
+::::
+
+
+The `cache` processor enriches events with information from previously cached events.
+
+```yaml
+processors:
+  - cache:
+      backend:
+        memory:
+          id: cache_id
+      put:
+        key_field: join_key_field
+        value_field: source_field
+```
+
+```yaml
+processors:
+  - cache:
+      backend:
+        memory:
+          id: cache_id
+      get:
+        key_field: join_key_field
+        target_field: destination_field
+```
+
+```yaml
+processors:
+  - cache:
+      backend:
+        memory:
+          id: cache_id
+      delete:
+        key_field: join_key_field
+```
+
+The fields added to the target field will depend on what was stored in the cache.
+
+It has the following settings:
+
+One of `backend.memory.id` or `backend.file.id` must be provided.
+
+`backend.capacity`
+: The number of elements that can be stored in the cache. `put` operations that would cause the capacity to be exceeded will result in evictions of the oldest elements. Values at or below zero indicate no limit. The capacity should not be lower than the number of elements that are expected to be referenced when processing the input, as evicted elements are lost. The default is `0`, no limit.
+
+`backend.memory.id`
+: The ID of a memory-based cache. Use the same ID across instances to reference the same cache.
+
+`backend.file.id`
+: The ID of a file-based cache. Use the same ID across instances to reference the same cache.
+
+`backend.file.write_interval`
+: The interval between periodic cache writes to the backing file. Valid time units are h, m, s, ms, us/µs and ns. Periodic writes are only made if `backend.file.write_interval` is greater than zero. The contents are always written out to the backing file when the processor is closed. Default is zero, no periodic writes.
+
+One of `put`, `get` or `delete` must be provided.
+
+`put.key_field`
+: Name of the field containing the key to put into the cache. Required if `put` is present.
+
+`put.value_field`
+: Name of the field containing the value to put into the cache. Required if `put` is present.
+
+`put.ttl`
+: The TTL to associate with the cached key/value. Valid time units are h, m, s, ms, us/µs and ns. Required if `put` is present.
+
+`get.key_field`
+: Name of the field containing the key to get. Required if `get` is present.
+
+`get.target_field`
+: Name of the field to which the cached value will be written. Required if `get` is present.
+
+`delete.key_field`
+: Name of the field containing the key to delete. Required if `delete` is present.
+
+`ignore_missing`
+: (Optional) When set to `false`, events that don’t contain any of the fields in `match_keys` will be discarded and an error will be generated. By default, this condition is ignored.
+
+`overwrite_keys`
+: (Optional) By default, if a target field already exists, it will not be overwritten and an error will be logged. If `overwrite_keys` is set to `true`, this condition will be ignored.
+
+The `cache` processor can be used to perform joins within the Beat between documents within an event stream.
+
+```yaml
+processors:
+  - if:
+      contains:
+        log.file.path: fdrv2/aidmaster
+    then:
+      - cache:
+          backend:
+            memory:
+              id: aidmaster
+            capacity: 10000
+          put:
+            ttl: 168h
+            key_field: crowdstrike.aid
+            value_field: crowdstrike.metadata
+    else:
+      - cache:
+          backend:
+            memory:
+              id: aidmaster
+          get:
+            key_field: crowdstrike.aid
+            target_field: crowdstrike.metadata
+```
+
+This would enrich events that have `log.file.path` not equal to "fdrv2/aidmaster" with the `crowdstrike.metadata` fields from events with `log.file.path` equal to that value, where the `crowdstrike.aid` field matches between the source and destination documents. The capacity allows up to 10,000 metadata objects to be cached between `put` and `get` operations.
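+
+For a join that should survive Filebeat restarts, the same `put` side can use the file backend instead. This is a minimal sketch using the settings documented above, with the same hypothetical CrowdStrike field names as the example; `write_interval` controls how often the cache is flushed to disk:
+
+```yaml
+processors:
+  - cache:
+      backend:
+        capacity: 10000
+        file:
+          id: aidmaster
+          write_interval: 15s
+      put:
+        ttl: 168h
+        key_field: crowdstrike.aid
+        value_field: crowdstrike.metadata
+```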
+
diff --git a/docs/reference/filebeat/add-cloud-metadata.md b/docs/reference/filebeat/add-cloud-metadata.md
new file mode 100644
index 000000000000..be14896041bb
--- /dev/null
+++ b/docs/reference/filebeat/add-cloud-metadata.md
@@ -0,0 +1,205 @@
+---
+navigation_title: "add_cloud_metadata"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-cloud-metadata.html
+---
+
+# Add cloud metadata [add-cloud-metadata]
+
+
+The `add_cloud_metadata` processor enriches each event with instance metadata from the machine’s hosting provider. At startup it will query a list of hosting providers and cache the instance metadata.
+
+The following cloud providers are supported:
+
+* Amazon Web Services (AWS)
+* Digital Ocean
+* Google Compute Engine (GCE)
+* [Tencent Cloud](https://www.qcloud.com/?lang=en) (QCloud)
+* Alibaba Cloud (ECS)
+* Huawei Cloud (ECS)
+* Openstack Nova
+* Azure Virtual Machine
+* Hetzner Cloud
+
+
+## Special notes [_special_notes]
+
+`huawei` is an alias for `openstack`. Huawei Cloud runs on the OpenStack platform, and when viewed from a metadata API standpoint, it is impossible to differentiate it from OpenStack. If you know that your deployments run on Huawei Cloud exclusively, and you wish to have the `cloud.provider` value set to `huawei`, you can achieve this by overwriting the value using an `add_fields` processor.
+
+The Alibaba Cloud and Tencent Cloud providers are disabled by default, because they require access to a remote host. The `providers` setting allows users to select a list of default providers to query.
+
+Cloud providers tend to maintain metadata services compliant with other cloud providers. For example, Openstack supports an [EC2-compliant metadata service](https://docs.openstack.org/nova/latest/user/metadata.html#ec2-compatible-metadata). This makes it impossible to differentiate the cloud provider (the `cloud.provider` property) with auto discovery (when the `providers` configuration is omitted). The processor implementation incorporates a priority mechanism where priority is given to some providers over others when there are multiple successful metadata results. Currently, `aws/ec2` and `azure` have priority over any other provider, as their metadata retrieval relies on SDKs. The expectation here is that SDK methods should fail if run in an environment that is not configured accordingly (e.g. missing configuration or credentials).
+
+
+## Configurations [_configurations]
+
+The simple configuration below enables the processor.
+
+```yaml
+processors:
+  - add_cloud_metadata: ~
+```
+
+The `add_cloud_metadata` processor has three optional configuration settings. The first one is `timeout`, which specifies the maximum amount of time to wait for a successful response when detecting the hosting provider. The default timeout value is `3s`.
+
+If a timeout occurs then no instance metadata will be added to the events. This makes it possible to enable this processor for all your deployments (in the cloud or on-premises).
+
+The second optional setting is `providers`. The `providers` setting accepts a list of cloud provider names to be used. If `providers` is not configured, then all providers that do not access a remote endpoint are enabled by default. The list of providers may alternatively be configured with the environment variable `BEATS_ADD_CLOUD_METADATA_PROVIDERS`, by setting it to a comma-separated list of provider names.
+
+List of names the `providers` setting supports:
+
+* "alibaba" or "ecs" for the Alibaba Cloud provider (disabled by default).
+* "azure" for Azure Virtual Machine (enabled by default). If the virtual machine is part of an AKS managed cluster, the fields `orchestrator.cluster.name` and `orchestrator.cluster.id` can also be retrieved. The "TENANT_ID", "CLIENT_ID", and "CLIENT_SECRET" environment variables need to be set for authentication purposes. If they are not set, the processor falls back to [DefaultAzureCredential](https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authentication?tabs=bash#2-authenticate-with-azure) and the user can choose different authentication methods (e.g. workload identity).
+* "digitalocean" for Digital Ocean (enabled by default).
+* "aws" or "ec2" for Amazon Web Services (enabled by default).
+* "gcp" for Google Compute Engine (enabled by default).
+* "openstack", "nova", or "huawei" for Openstack Nova (enabled by default).
+* "openstack-ssl" or "nova-ssl" for Openstack Nova when SSL metadata APIs are enabled (enabled by default).
+* "tencent" or "qcloud" for Tencent Cloud (disabled by default).
+* "hetzner" for Hetzner Cloud (enabled by default).
+
+For example, the configuration below only uses the `aws` metadata retrieval mechanism:
+
+```yaml
+processors:
+  - add_cloud_metadata:
+      providers:
+        - aws
+```
+
+The third optional configuration setting is `overwrite`. When `overwrite` is `true`, `add_cloud_metadata` overwrites existing `cloud.*` fields (`false` by default).
+
+The `add_cloud_metadata` processor supports SSL options to configure the http client used to query cloud metadata. See [SSL](/reference/filebeat/configuration-ssl.md) for more information.
+
+
+## Provided metadata [_provided_metadata]
+
+The metadata that is added to events varies by hosting provider. Below are examples for each of the supported providers.
+
+*AWS*
+
+The metadata below is extracted from the [instance identity document](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html):
+
+```json
+{
+  "cloud": {
+    "account.id": "123456789012",
+    "availability_zone": "us-east-1c",
+    "instance.id": "i-4e123456",
+    "machine.type": "t2.medium",
+    "image.id": "ami-abcd1234",
+    "provider": "aws",
+    "region": "us-east-1"
+  }
+}
+```
+
+If the EC2 instance has IMDS enabled and if tags are allowed through the IMDS endpoint, the processor will also append the tags to the metadata. Refer to the official documentation on the [IMDS endpoint](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) for further details.
+
+```json
+{
+  "aws": {
+    "tags": {
+      "org": "myOrg",
+      "owner": "userID"
+    }
+  }
+}
+```
+
+*Digital Ocean*
+
+```json
+{
+  "cloud": {
+    "instance.id": "1234567",
+    "provider": "digitalocean",
+    "region": "nyc2"
+  }
+}
+```
+
+*GCP*
+
+```json
+{
+  "cloud": {
+    "availability_zone": "us-east1-b",
+    "instance.id": "1234556778987654321",
+    "machine.type": "f1-micro",
+    "project.id": "my-dev",
+    "provider": "gcp"
+  }
+}
+```
+
+*Tencent Cloud*
+
+```json
+{
+  "cloud": {
+    "availability_zone": "gz-azone2",
+    "instance.id": "ins-qcloudv5",
+    "provider": "qcloud",
+    "region": "china-south-gz"
+  }
+}
+```
+
+*Alibaba Cloud*
+
+This metadata is only available when VPC is selected as the network type of the ECS instance.
+
+```json
+{
+    "cloud": {
+        "availability_zone": "cn-shenzhen",
+        "instance.id": "i-wz9g2hqiikg0aliyun2b",
+        "provider": "ecs",
+        "region": "cn-shenzhen-a"
+    }
+}
+```
+
+*Azure Virtual Machine*
+
+```json
+{
+    "cloud": {
+        "provider": "azure",
+        "instance.id": "04ab04c3-63de-4709-a9f9-9ab8c0411d5e",
+        "instance.name": "test-az-vm",
+        "machine.type": "Standard_D3_v2",
+        "region": "eastus2"
+    }
+}
+```
+
+*Openstack Nova*
+
+```json
+{
+    "cloud": {
+        "instance.name": "test-998d932195.mycloud.tld",
+        "instance.id": "i-00011a84",
+        "availability_zone": "xxxx-az-c",
+        "provider": "openstack",
+        "machine.type": "m2.large"
+    }
+}
+```
+
+*Hetzner Cloud*
+
+```json
+{
+    "cloud": {
+        "availability_zone": "hel1-dc2",
+        "instance.name": "my-hetzner-instance",
+        "instance.id": "111111",
+        "provider": "hetzner",
+        "region": "eu-central"
+    }
+}
+```
+
diff --git a/docs/reference/filebeat/add-cloudfoundry-metadata.md b/docs/reference/filebeat/add-cloudfoundry-metadata.md
new file mode 100644
index 000000000000..522df68560ba
--- /dev/null
+++ b/docs/reference/filebeat/add-cloudfoundry-metadata.md
@@ -0,0 +1,70 @@
+---
+navigation_title: "add_cloudfoundry_metadata"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-cloudfoundry-metadata.html
+---
+
+# Add Cloud Foundry metadata [add-cloudfoundry-metadata]
+
+
+The `add_cloudfoundry_metadata` processor annotates each event with relevant metadata from Cloud Foundry applications. Events are annotated with Cloud Foundry metadata only if the event contains a reference to a Cloud Foundry application (in the field `cloudfoundry.app.id`) and the configured Cloud Foundry client is able to retrieve information for the application.
+
+Each event is annotated with:
+
+* Application Name
+* Space ID
+* Space Name
+* Organization ID
+* Organization Name
+
+::::{note}
+Pivotal Application Service and Tanzu Application Service include this metadata in all events from the firehose since version 2.8. In these cases the metadata in the events is used, and the `add_cloudfoundry_metadata` processor doesn’t modify these fields.
+::::
+
+
+For efficient annotation, application metadata retrieved by the Cloud Foundry client is stored in a persistent cache on the filesystem under the `path.data` directory. This is done so the metadata can persist across restarts of Filebeat. For control over this cache, use the `cache_duration` and `cache_retry_delay` settings.
+
+```yaml
+processors:
+  - add_cloudfoundry_metadata:
+      api_address: https://api.dev.cfdev.sh
+      client_id: uaa-filebeat
+      client_secret: verysecret
+      ssl:
+        verification_mode: none
+      # To connect to Cloud Foundry over verified TLS you can specify a client and CA certificate.
+      #ssl:
+      #  certificate_authorities: ["/etc/pki/cf/ca.pem"]
+      #  certificate: "/etc/pki/cf/cert.pem"
+      #  key: "/etc/pki/cf/cert.key"
+```
+
+It has the following settings:
+
+`api_address`
+: (Optional) The URL of the Cloud Foundry API. It uses `http://api.bosh-lite.com` by default.
+
+`doppler_address`
+: (Optional) The URL of the Cloud Foundry Doppler Websocket. It uses the value from `${api_address}/v2/info` by default.
+
+`uaa_address`
+: (Optional) The URL of the Cloud Foundry UAA API. It uses the value from `${api_address}/v2/info` by default.
+
+`rlp_address`
+: (Optional) The URL of the Cloud Foundry RLP Gateway. It uses the value from `${api_address}/v2/info` by default.
+
+`client_id`
+: Client ID to authenticate with Cloud Foundry.
+
+`client_secret`
+: Client Secret to authenticate with Cloud Foundry.
+
+`cache_duration`
+: (Optional) Maximum amount of time to cache an application’s metadata. Defaults to 120 seconds.
+
+`cache_retry_delay`
+: (Optional) Time to wait before trying to obtain an application’s metadata again in case of error. Defaults to 20 seconds.
+
+`ssl`
+: (Optional) SSL configuration to use when connecting to Cloud Foundry.
+
diff --git a/docs/reference/filebeat/add-docker-metadata.md b/docs/reference/filebeat/add-docker-metadata.md
new file mode 100644
index 000000000000..989bd27a83f9
--- /dev/null
+++ b/docs/reference/filebeat/add-docker-metadata.md
@@ -0,0 +1,80 @@
+---
+navigation_title: "add_docker_metadata"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-docker-metadata.html
+---
+
+# Add Docker metadata [add-docker-metadata]
+
+
+The `add_docker_metadata` processor annotates each event with relevant metadata from Docker containers. At startup it detects a Docker environment and caches the metadata. Events are annotated with Docker metadata only if a valid configuration is detected and the processor is able to reach the Docker API.
+
+Each event is annotated with:
+
+* Container ID
+* Name
+* Image
+* Labels
+
+::::{note}
+When running Filebeat in a container, you need to provide access to Docker’s unix socket in order for the `add_docker_metadata` processor to work. You can do this by mounting the socket inside the container. For example:
+
+`docker run -v /var/run/docker.sock:/var/run/docker.sock ...`
+
+To avoid privilege issues, you may also need to add `--user=root` to the `docker run` flags. Because the user must be part of the docker group in order to access `/var/run/docker.sock`, root access is required if Filebeat is running as non-root inside the container.
+
+If the Docker daemon is restarted, the mounted socket becomes invalid and metadata stops working. In these situations there are two options:
+
+* Restart Filebeat every time Docker is restarted
+* Mount the entire `/var/run` directory (instead of just the socket)
+
+::::
+
+
+```yaml
+processors:
+  - add_docker_metadata:
+      host: "unix:///var/run/docker.sock"
+      #match_fields: ["system.process.cgroup.id"]
+      #match_pids: ["process.pid", "process.parent.pid"]
+      #match_source: true
+      #match_source_index: 4
+      #match_short_id: true
+      #cleanup_timeout: 60
+      #labels.dedot: false
+      # To connect to Docker over TLS you must specify a client and CA certificate.
+      #ssl:
+      #  certificate_authority: "/etc/pki/root/ca.pem"
+      #  certificate: "/etc/pki/client/cert.pem"
+      #  key: "/etc/pki/client/cert.key"
+```
+
+It has the following settings:
+
+`host`
+: (Optional) Docker socket (UNIX or TCP socket). It uses `unix:///var/run/docker.sock` by default.
+
+`ssl`
+: (Optional) SSL configuration to use when connecting to the Docker socket.
+
+`match_fields`
+: (Optional) A list of fields to match a container ID; at least one of them should hold a container ID to get the event enriched.
+
+`match_pids`
+: (Optional) A list of fields that contain process IDs. If the process is running in Docker then the event will be enriched. The default value is `["process.pid", "process.parent.pid"]`.
+
+`match_source`
+: (Optional) Match container ID from a log path present in the `log.file.path` field. Enabled by default.
+
+`match_short_id`
+: (Optional) Match container short ID from a log path present in the `log.file.path` field. Disabled by default. This allows matching directory names that contain the first 12 characters of the container ID. For example, `/var/log/containers/b7e3460e2b21/*.log`.
+
+`match_source_index`
+: (Optional) Index in the source path split by `/` to look for the container ID. It defaults to 4 to match `/var/lib/docker/containers/<container_id>/*.log`.
+
+`cleanup_timeout`
+: (Optional) Time of inactivity after which the metadata for a container is cleaned up and forgotten. 60s by default.
+
+`labels.dedot`
+: (Optional) Defaults to `false`. If set to `true`, dots in labels are replaced with `_`.
+
diff --git a/docs/reference/filebeat/add-fields.md b/docs/reference/filebeat/add-fields.md
new file mode 100644
index 000000000000..2068d570f47b
--- /dev/null
+++ b/docs/reference/filebeat/add-fields.md
@@ -0,0 +1,51 @@
+---
+navigation_title: "add_fields"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-fields.html
+---
+
+# Add fields [add-fields]
+
+
+The `add_fields` processor adds additional fields to the event. Fields can be scalar values, arrays, dictionaries, or any nested combination of these. The `add_fields` processor will overwrite the target field if it already exists. By default the fields that you specify will be grouped under the `fields` sub-dictionary in the event. To group the fields under a different sub-dictionary, use the `target` setting. To store the fields as top-level fields, set `target: ''`.
+
+`target`
+: (Optional) Sub-dictionary to put all fields into. Defaults to `fields`. Setting this to `@metadata` will add values to the event metadata instead of fields.
+
+`fields`
+: Fields to be added.
+
+For example, this configuration:
+
+```yaml
+processors:
+  - add_fields:
+      target: project
+      fields:
+        name: myproject
+        id: '574734885120952459'
+```
+
+Adds these fields to any event:
+
+```json
+{
+  "project": {
+    "name": "myproject",
+    "id": "574734885120952459"
+  }
+}
+```
+
+This configuration will alter the event metadata:
+
+```yaml
+processors:
+  - add_fields:
+      target: '@metadata'
+      fields:
+        op_type: "index"
+```
+
+When the event is ingested (e.g. by Elasticsearch) the document will have `op_type: "index"` set as a metadata field.
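+
+Similarly, a minimal sketch that stores a field at the top level of the event (the `environment` field name and value here are illustrative):
+
+```yaml
+processors:
+  - add_fields:
+      target: ''
+      fields:
+        environment: staging
+```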
+
diff --git a/docs/reference/filebeat/add-host-metadata.md b/docs/reference/filebeat/add-host-metadata.md
new file mode 100644
index 000000000000..e4b8db929f94
--- /dev/null
+++ b/docs/reference/filebeat/add-host-metadata.md
@@ -0,0 +1,92 @@
+---
+navigation_title: "add_host_metadata"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-host-metadata.html
+---
+
+# Add Host metadata [add-host-metadata]
+
+
+```yaml
+processors:
+  - add_host_metadata:
+      cache.ttl: 5m
+      geo:
+        name: nyc-dc1-rack1
+        location: 40.7128, -74.0060
+        continent_name: North America
+        country_iso_code: US
+        region_name: New York
+        region_iso_code: NY
+        city_name: New York
+```
+
+It has the following settings:
+
+`netinfo.enabled`
+: (Optional) Default true. Include IP addresses and MAC addresses as the fields `host.ip` and `host.mac`.
+
+`cache.ttl`
+: (Optional) The processor uses an internal cache for the host metadata. This sets the cache expiration time. The default is `5m`; negative values disable caching altogether.
+
+`geo.name`
+: (Optional) User definable token to be used for identifying a discrete location. Frequently a datacenter, rack, or similar.
+
+`geo.location`
+: (Optional) Longitude and latitude in comma separated format.
+
+`geo.continent_name`
+: (Optional) Name of the continent.
+
+`geo.country_name`
+: (Optional) Name of the country.
+
+`geo.region_name`
+: (Optional) Name of the region.
+
+`geo.city_name`
+: (Optional) Name of the city.
+
+`geo.country_iso_code`
+: (Optional) ISO country code.
+
+`geo.region_iso_code`
+: (Optional) ISO region code.
+
+`replace_fields`
+: (Optional) Default true. If set to false, original host fields from the event will not be replaced by host fields from `add_host_metadata`.
+
+The `add_host_metadata` processor annotates each event with relevant metadata from the host machine. The fields added to the event look like the following:
+
+```json
+{
+   "host":{
+      "architecture":"x86_64",
+      "name":"example-host",
+      "id":"",
+      "os":{
+         "family":"darwin",
+         "type":"macos",
+         "build":"16G1212",
+         "platform":"darwin",
+         "version":"10.12.6",
+         "kernel":"16.7.0",
+         "name":"Mac OS X"
+      },
+      "ip": ["192.168.0.1", "10.0.0.1"],
+      "mac": ["00:25:96:12:34:56", "72:00:06:ff:79:f1"],
+      "geo": {
+         "continent_name": "North America",
+         "country_iso_code": "US",
+         "region_name": "New York",
+         "region_iso_code": "NY",
+         "city_name": "New York",
+         "name": "nyc-dc1-rack1",
+         "location": "40.7128, -74.0060"
+      }
+   }
+}
+```
+
+Note: by default (`replace_fields: true`), the `add_host_metadata` processor overwrites host fields if `host.*` fields already exist in the event. Please use `add_observer_metadata` if the Beat is being used to monitor external systems.
+
diff --git a/docs/reference/filebeat/add-id.md b/docs/reference/filebeat/add-id.md
new file mode 100644
index 000000000000..f9f672dd362b
--- /dev/null
+++ b/docs/reference/filebeat/add-id.md
@@ -0,0 +1,24 @@
+---
+navigation_title: "add_id"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-id.html
+---
+
+# Generate an ID for an event [add-id]
+
+
+The `add_id` processor generates a unique ID for an event.
+
+```yaml
+processors:
+  - add_id: ~
+```
+
+The following settings are supported:
+
+`target_field`
+: (Optional) Field where the generated ID will be stored. Default is `@metadata._id`.
+
+`type`
+: (Optional) Type of ID to generate. Currently only `elasticsearch` is supported and is the default. The `elasticsearch` type generates IDs using the same algorithm that Elasticsearch uses for auto-generating document IDs.
+
diff --git a/docs/reference/filebeat/add-kubernetes-metadata.md b/docs/reference/filebeat/add-kubernetes-metadata.md
new file mode 100644
index 000000000000..e6f0da382a01
--- /dev/null
+++ b/docs/reference/filebeat/add-kubernetes-metadata.md
@@ -0,0 +1,282 @@
+---
+navigation_title: "add_kubernetes_metadata"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-kubernetes-metadata.html
+---
+
+# Add Kubernetes metadata [add-kubernetes-metadata]
+
+
+The `add_kubernetes_metadata` processor annotates each event with relevant metadata based on which Kubernetes pod the event originated from. This processor only adds metadata to events that do not already have it.
+
+At startup, it detects an `in_cluster` environment and caches the Kubernetes-related metadata. Events are only annotated if a valid configuration is detected. If it’s not able to detect a valid Kubernetes configuration, the events are not annotated with Kubernetes-related metadata.
+
+Each event is annotated with:
+
+* Pod Name
+* Pod UID
+* Namespace
+* Labels
+
+In addition, the node and namespace metadata are added to the pod metadata.
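+
+An illustrative sketch of the resulting fields (all values here are hypothetical, and the exact set depends on the pod):
+
+```json
+{
+  "kubernetes": {
+    "namespace": "default",
+    "pod": {
+      "name": "my-app-5d87d9c4f5-xkb2f",
+      "uid": "0fc3e29d-4bb0-11ea-9fb4-0251e586a118"
+    },
+    "labels": {
+      "app": "my-app"
+    },
+    "node": {
+      "name": "node-1"
+    }
+  }
+}
+```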
+
+The `add_kubernetes_metadata` processor has two basic building blocks:
+
+* Indexers
+* Matchers
+
+Indexers use pod metadata to create unique identifiers for each one of the pods. These identifiers help to correlate the metadata of the observed pods with actual events. For example, the `ip_port` indexer can take a Kubernetes pod and create identifiers for it based on all its `pod_ip:container_port` combinations.
+
+Matchers use information in events to construct lookup keys that match the identifiers created by the indexers. For example, when the `fields` matcher takes `["metricset.host"]` as a lookup field, it would construct a lookup key with the value of the field `metricset.host`. When one of these lookup keys matches with one of the identifiers, the event is enriched with the metadata of the identified pod.
+
+When `add_kubernetes_metadata` is used with Filebeat, it uses the `container` indexer and the `logs_path` matcher. So events whose path in `log.file.path` contains a reference to a container ID are enriched with metadata of the pod of this container.
+
+This behaviour can be disabled by disabling default indexers and matchers in the configuration:
+
+```yaml
+processors:
+  - add_kubernetes_metadata:
+      default_indexers.enabled: false
+      default_matchers.enabled: false
+```
+
+You can find more information about the available indexers and matchers, and some examples in [Indexers and matchers](#kubernetes-indexers-and-matchers).
+
+The configuration below enables the processor when Filebeat is run as a pod in Kubernetes.
+
+```yaml
+processors:
+  - add_kubernetes_metadata:
+      #labels.dedot: true
+      #annotations.dedot: true
+```
+
+The configuration below enables the processor on a Beat running as a process on the Kubernetes node.
+
+```yaml
+processors:
+  - add_kubernetes_metadata:
+      host: <hostname>
+      # If kube_config is not set, KUBECONFIG environment variable will be checked
+      # and if not present it will fall back to InCluster
+      kube_config: ~/.kube/config
+      #labels.dedot: true
+      #annotations.dedot: true
+```
+
+The configuration below has the default indexers and matchers disabled and enables the ones that the user is interested in.
+
+```yaml
+processors:
+  - add_kubernetes_metadata:
+      host: <hostname>
+      # If kube_config is not set, KUBECONFIG environment variable will be checked
+      # and if not present it will fall back to InCluster
+      kube_config: ~/.kube/config
+      default_indexers.enabled: false
+      default_matchers.enabled: false
+      indexers:
+        - ip_port:
+      matchers:
+        - fields:
+            lookup_fields: ["metricset.host"]
+      #labels.dedot: true
+      #annotations.dedot: true
+```
+
+The `add_kubernetes_metadata` processor has the following configuration settings:
+
+`host`
+: (Optional) Specify the node to scope Filebeat to in case it cannot be accurately detected, as when running Filebeat in host network mode.
+
+`scope`
+: (Optional) Specify if the processor should have visibility at the node level or at the entire cluster level. Possible values are `node` and `cluster`. Scope is `node` by default.
+
+`namespace`
+: (Optional) Select the namespace from which to collect the metadata. If it is not set, the processor collects metadata from all namespaces. It is unset by default.
+
+`add_resource_metadata`
+: (Optional) Specify filters and configuration for the extra metadata that will be added to the event. Configuration parameters:
+
+    * `node` or `namespace`: Specify labels and annotations filters for the extra metadata coming from node and namespace. By default all labels are included while annotations are not. To change the default behaviour, `include_labels`, `exclude_labels` and `include_annotations` can be defined. Those settings are useful when storing labels and annotations that require special handling to avoid overloading the storage output. Note: wildcards are not supported for those settings. The enrichment of `node` or `namespace` metadata can be individually disabled by setting `enabled: false`.
+    * `deployment`: If the resource is `pod` and it is created from a `deployment`, by default the deployment name is added. This can be disabled by setting `deployment: false`.
+    * `cronjob`: If the resource is `pod` and it is created from a `cronjob`, by default the cronjob name is added. This can be disabled by setting `cronjob: false`.
+
+    Example:
+
+
+```yaml
+  add_resource_metadata:
+    namespace:
+      include_labels: ["namespacelabel1"]
+      #labels.dedot: true
+      #annotations.dedot: true
+    node:
+      include_labels: ["nodelabel2"]
+      include_annotations: ["nodeannotation1"]
+      #labels.dedot: true
+      #annotations.dedot: true
+    deployment: false
+    cronjob: false
+```
+
+`kube_config`
+: (Optional) Use the given config file as configuration for the Kubernetes client. It defaults to the `KUBECONFIG` environment variable if present.
+
+`use_kubeadm`
+: (Optional) Default true. By default, requests to the kubeadm config map are made in order to enrich the cluster name by requesting the `/api/v1/namespaces/kube-system/configmaps/kubeadm-config` API endpoint.
+
+`kube_client_options`
+: (Optional) Additional options can be configured for the Kubernetes client. Currently client QPS and burst are supported; if not set, the Kubernetes client’s [default QPS and burst](https://pkg.go.dev/k8s.io/client-go/rest#pkg-constants) will be used. Example:
+
+```yaml
+  kube_client_options:
+    qps: 5
+    burst: 10
+```
+
+`cleanup_timeout`
+: (Optional) Specify the time of inactivity before stopping the running configuration for a container. This is `60s` by default.
+
+`sync_period`
+: (Optional) Specify the timeout for listing historical resources.
+
+`default_indexers.enabled`
+: (Optional) Enable or disable default pod indexers when you want to specify your own.
+
+`default_matchers.enabled`
+: (Optional) Enable or disable default pod matchers when you want to specify your own.
+
+`labels.dedot`
+: (Optional) Defaults to `true`. If set to `true`, then `.` in labels will be replaced with `_`.
+
+`annotations.dedot`
+: (Optional) Defaults to `true`. If set to `true`, then `.` in annotations will be replaced with `_`.
+
+
+## Indexers and matchers [kubernetes-indexers-and-matchers]
+
+## Indexers [_indexers]
+
+Indexers use pod metadata to create unique identifiers for each one of the pods.
+
+Available indexers are:
+
+`container`
+: Identifies the pod metadata using the IDs of its containers.
+
+`ip_port`
+: Identifies the pod metadata using combinations of its IP and its exposed ports. When using this indexer, metadata is identified using the IP of the pods, and the combination of `ip:port` for each one of the ports exposed by its containers.
+
+`pod_name`
+: Identifies the pod metadata using its namespace and its name as `namespace/pod_name`.
+
+`pod_uid`
+: Identifies the pod metadata using the UID of the pod.
+
+
+## Matchers [_matchers]
+
+Matchers are used to construct the lookup keys that match with the identifiers created by indexers.
+
+### `field_format` [_field_format]
+
+Looks up pod metadata using a key created with a string format that can include event fields.
+
+This matcher has an option `format` to define the string format. This string format can contain placeholders for any field in the event.
+
+For example, the following configuration uses the `ip_port` indexer to identify the pod metadata by combinations of the pod IP and its exposed ports, and uses the destination IP and port in events as match keys:
+
+```yaml
+processors:
+- add_kubernetes_metadata:
+    ...
+    default_indexers.enabled: false
+    default_matchers.enabled: false
+    indexers:
+      - ip_port:
+    matchers:
+      - field_format:
+          format: '%{[destination.ip]}:%{[destination.port]}'
+```
+
+
+### `fields` [_fields]
+
+Looks up pod metadata using the value of some specific fields as the key. When multiple fields are defined, the first one included in the event is used.
+
+This matcher has an option `lookup_fields` to define the fields whose value will be used for lookup.
+
+For example, the following configuration uses the `ip_port` indexer to identify pods, and defines a matcher that uses the destination IP or the server IP for the lookup, whichever it finds first in the event:
+
+```yaml
+processors:
+- add_kubernetes_metadata:
+    ...
+    default_indexers.enabled: false
+    default_matchers.enabled: false
+    indexers:
+      - ip_port:
+    matchers:
+      - fields:
+          lookup_fields: ['destination.ip', 'server.ip']
+```
+
+It’s also possible to extract the matching key from fields using a regex pattern. The optional `regex_pattern` field can be used to set the pattern. The pattern **must** contain a capture group named `key`, whose value will be used as the matching key.
+
+For example, the following configuration uses the `container` indexer to identify containers by their ID, and extracts the matching key from the cgroup ID field added to system process metrics. This field has the form `cri-containerd-<container_id>.scope`, so we need a regex pattern to obtain the container ID.
+
+```yaml
+processors:
+  - add_kubernetes_metadata:
+      indexers:
+        - container:
+      matchers:
+        - fields:
+            lookup_fields: ['system.process.cgroup.id']
+            regex_pattern: 'cri-containerd-(?P<key>[0-9a-z]+)\.scope'
+```
+
+
+### `logs_path` [_logs_path]
+
+Looks up pod metadata using identifiers extracted from the log path stored in the `log.file.path` field.
+
+This matcher has the following configuration settings:
+
+`logs_path`
+: (Optional) Base path of container logs. If not specified, it uses the default logs path of the platform where Filebeat is running: for Linux - `/var/lib/docker/containers/`, Windows - `C:\\ProgramData\\Docker\\containers`. To change the default value: the container ID must follow right after the `logs_path` - `<path>/<container_id>`, where `container_id` is a 64-character-long hexadecimal string.
+
+`resource_type`
+: (Optional) Type of the resource to obtain the ID of. Valid `resource_type`:
+
+    * `pod`: to make the lookup based on the pod UID. When `resource_type` is set to `pod`, `logs_path` must be set as well. Supported paths in this case:
+
+        * `/var/lib/kubelet/pods/` used to read logs from volumes mounted into the pod; those logs end up under `/var/lib/kubelet/pods/<pod UID>/volumes/<volume name>/...` To use `/var/lib/kubelet/pods/` as a `log_path`, `/var/lib/kubelet/pods` must be mounted into the Filebeat pods.
+        * `/var/log/pods/` Note: when using `resource_type: 'pod'`, logs will be enriched only with pod metadata: pod ID, pod name, etc., not container metadata.
+
+    * `container`: to make the lookup based on the container ID, `logs_path` must be set to `/var/log/containers/`. It defaults to `container`.
+
+
+To be able to use the `logs_path` matcher, the Filebeat input path must be a subdirectory of the directory defined in the `logs_path` configuration setting.
+
+The default configuration is able to look up the metadata using the container ID when the logs are collected from the default docker logs path (`/var/lib/docker/containers/<container_id>/...` on Linux).
+
+For example, the following configuration would use the pod UID when the logs are collected from `/var/lib/kubelet/pods/<pod UID>/...`.
+
+```yaml
+processors:
+- add_kubernetes_metadata:
+    ...
+    default_indexers.enabled: false
+    default_matchers.enabled: false
+    indexers:
+      - pod_uid:
+    matchers:
+      - logs_path:
+          logs_path: '/var/lib/kubelet/pods'
+          resource_type: 'pod'
+```
+
+
+
diff --git a/docs/reference/filebeat/add-labels.md b/docs/reference/filebeat/add-labels.md
new file mode 100644
index 000000000000..ff01769c1df3
--- /dev/null
+++ b/docs/reference/filebeat/add-labels.md
@@ -0,0 +1,45 @@
+---
+navigation_title: "add_labels"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-labels.html
+---
+
+# Add labels [add-labels]
+
+
+The `add_labels` processor adds a set of key-value pairs to an event. The processor will flatten nested configuration objects like arrays or dictionaries into a fully qualified name by merging nested names with a `.`. Array entries create numeric names starting with 0. Labels are always stored under the Elastic Common Schema compliant `labels` sub-dictionary.
+
+`labels`
+: Dictionaries of labels to be added.
+
+For example, this configuration:
+
+```yaml
+processors:
+  - add_labels:
+      labels:
+        number: 1
+        with.dots: test
+        nested:
+          with.dots: nested
+        array:
+          - do
+          - re
+          - with.field: mi
+```
+
+Adds these fields to every event:
+
+```json
+{
+  "labels": {
+    "number": 1,
+    "with.dots": "test",
+    "nested.with.dots": "nested",
+    "array.0": "do",
+    "array.1": "re",
+    "array.2.with.field": "mi"
+  }
+}
+```
+
diff --git a/docs/reference/filebeat/add-locale.md b/docs/reference/filebeat/add-locale.md
new file mode 100644
index 000000000000..503e798c7454
--- /dev/null
+++ b/docs/reference/filebeat/add-locale.md
@@ -0,0 +1,31 @@
+---
+navigation_title: "add_locale"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-locale.html
+---
+
+# Add the local time zone [add-locale]
+
+
+The `add_locale` processor enriches each event with the machine’s time zone offset from UTC or with the name of the time zone. It supports one configuration option named `format` that controls whether an offset or time zone abbreviation is added to the event. The default format is `offset`. The processor adds an `event.timezone` value to each event.
+
+The configuration below enables the processor with the default settings.
+
+```yaml
+processors:
+  - add_locale: ~
+```
+
+This configuration enables the processor and configures it to add the time zone abbreviation to events.
+
+```yaml
+processors:
+  - add_locale:
+      format: abbreviation
+```
+
+::::{note}
+Please note that `add_locale` differentiates between daylight savings time (DST) and regular time. For example, `CEST` indicates DST and `CET` is regular time.
+::::
+
+
diff --git a/docs/reference/filebeat/add-network-direction.md b/docs/reference/filebeat/add-network-direction.md
new file mode 100644
index 000000000000..8cc396a71227
--- /dev/null
+++ b/docs/reference/filebeat/add-network-direction.md
@@ -0,0 +1,22 @@
+---
+navigation_title: "add_network_direction"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-network-direction.html
+---
+
+# Add network direction [add-network-direction]
+
+
+The `add_network_direction` processor attempts to compute the perimeter-based network direction given a source and destination IP address and a list of internal networks. The key `internal_networks` can contain either CIDR blocks or a list of special values enumerated in the network section of [Conditions](/reference/filebeat/defining-processors.md#conditions).
+
+```yaml
+processors:
+  - add_network_direction:
+      source: source.ip
+      destination: destination.ip
+      target: network.direction
+      internal_networks: [ private ]
+```
+
+See [Conditions](/reference/filebeat/defining-processors.md#conditions) for a list of supported conditions.
+
diff --git a/docs/reference/filebeat/add-nomad-metadata.md b/docs/reference/filebeat/add-nomad-metadata.md
new file mode 100644
index 000000000000..a7fab850eedc
--- /dev/null
+++ b/docs/reference/filebeat/add-nomad-metadata.md
@@ -0,0 +1,161 @@
+---
+navigation_title: "add_nomad_metadata"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-nomad-metadata.html
+---
+
+# Add Nomad metadata [add-nomad-metadata]
+
+
+::::{warning}
+This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
+::::
+
+
+The `add_nomad_metadata` processor adds fields with relevant metadata for applications deployed in Nomad.
+
+Each event is annotated with the following information:
+
+* Allocation name, identifier, and status.
+* Job name and type.
+* Namespace where the job is deployed.
+* Datacenter and region where the agent running the allocation is located.
+
+```yaml
+processors:
+  - add_nomad_metadata: ~
+```
+
+It has the following settings to configure the connection:
+
+`address`
+: (Optional) The URL of the agent API used to request the metadata. It uses `http://127.0.0.1:4646` by default.
+
+`namespace`
+: (Optional) Namespace to watch. If set, only events for allocations in this namespace will be annotated.
+
+`region`
+: (Optional) Region to watch. If set, only events for allocations in this region will be annotated.
+
+`secret_id`
+: (Optional) SecretID to use when connecting with the agent API. The following is an example ACL policy to apply to the token.
+
+```json
+namespace "*" {
+  policy = "read"
+}
+node {
+  policy = "read"
+}
+agent {
+  policy = "read"
+}
+```
+
+`refresh_interval`
+: (Optional) Interval used to update the cached metadata. It defaults to 30 seconds.
+
+`cleanup_timeout`
+: (Optional) After an allocation has been removed, time to wait before cleaning up its associated resources. This is useful if you expect to receive events after an allocation has been removed, which can happen when collecting logs. It defaults to 60 seconds.
+
+You can decide whether Filebeat should annotate events related to allocations on the local node or in the whole cluster by configuring the scope with the following settings:
+
+`scope`
+: (Optional) Scope of the resources to watch. It can be `node` to get metadata only for the allocations in a single agent, or `global`, to get metadata for allocations running on any agent. It defaults to `node`.
+
+`node`
+: (Optional) When using `scope: node`, use `node` to specify the name of the local node if it cannot be discovered automatically.
+
+For example, the following configuration could be used if Filebeat is collecting events from all the allocations in the cluster:
+
+```yaml
+processors:
+  - add_nomad_metadata:
+      scope: global
+```
+
+## Indexers and matchers [_indexers_and_matchers]
+
+Indexers and matchers are used to correlate fields in events with actual metadata. Filebeat uses this information to know what metadata to include in each event.
+
+### Indexers [_indexers_2]
+
+Indexers use allocation metadata to create unique identifiers for each one of the allocations.
+
+Available indexers are:
+
+`allocation_name`
+: Identifies allocations by their name and namespace (as `<namespace>/<name>`).
+
+`allocation_uuid`
+: Identifies allocations by their unique identifier.
+
+
+### Matchers [_matchers_2]
+
+Matchers are used to construct the lookup keys that match with the identifiers created by indexers.
+
+
+### `field_format` [_field_format_2]
+
+Looks up allocation metadata using a key created with a string format that can include event fields.
+
+This matcher has an option `format` to define the string format. This string format can contain placeholders for any field in the event.
+
+For example, the following configuration uses the `allocation_name` indexer to identify the allocation metadata by its name and namespace, and uses custom fields existing in the event as match keys:
+
+```yaml
+processors:
+- add_nomad_metadata:
+  ...
+  default_indexers.enabled: false
+  default_matchers.enabled: false
+  indexers:
+    - allocation_name:
+  matchers:
+    - field_format:
+        format: '%{[labels.nomad_namespace]}/%{[fields.nomad_alloc_name]}'
+```
+
+
+### `fields` [_fields_2]
+
+Looks up allocation metadata using the value of some specific fields as the key. When multiple fields are defined, the first one included in the event is used.
+
+This matcher has an option `lookup_fields` to define the fields whose value will be used for lookup.
+
+For example, the following configuration uses the `allocation_uuid` indexer to identify allocations, and defines a matcher that uses some fields where the allocation UUID can be found for lookup, whichever it finds first in the event:
+
+```yaml
+processors:
+- add_nomad_metadata:
+  ...
+  default_indexers.enabled: false
+  default_matchers.enabled: false
+  indexers:
+    - allocation_uuid:
+  matchers:
+    - fields:
+        lookup_fields: ['host.name', 'fields.nomad_alloc_uuid']
+```
+
+
+### `logs_path` [_logs_path_2]
+
+Looks up allocation metadata using identifiers extracted from the log path stored in the `log.file.path` field.
+
+This matcher has an optional `logs_path` option with the base path of the directory containing the logs for the local agent.
+
+The default configuration is able to look up the metadata using the allocation UUID when the logs are collected under `/var/lib/nomad`.
+
+For example, the following configuration would use the allocation UUID when the logs are collected from `/var/lib/NomadClient001/alloc/<alloc_id>/alloc/logs/...`.
+
+```yaml
+processors:
+- add_nomad_metadata:
+  ...
+  default_indexers.enabled: false
+  default_matchers.enabled: false
+  indexers:
+    - allocation_uuid:
+  matchers:
+    - logs_path:
+        logs_path: '/var/lib/NomadClient001'
+```
+
+
+
diff --git a/docs/reference/filebeat/add-observer-metadata.md b/docs/reference/filebeat/add-observer-metadata.md
new file mode 100644
index 000000000000..8a0f8ac92ea8
--- /dev/null
+++ b/docs/reference/filebeat/add-observer-metadata.md
@@ -0,0 +1,88 @@
+---
+navigation_title: "add_observer_metadata"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-observer-metadata.html
+---
+
+# Add Observer metadata [add-observer-metadata]
+
+
+::::{warning}
+This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+::::
+
+
+```yaml
+processors:
+  - add_observer_metadata:
+      cache.ttl: 5m
+      geo:
+        name: nyc-dc1-rack1
+        location: 40.7128, -74.0060
+        continent_name: North America
+        country_iso_code: US
+        region_name: New York
+        region_iso_code: NY
+        city_name: New York
+```
+
+It has the following settings:
+
+`netinfo.enabled`
+: (Optional) Default true. Include IP addresses and MAC addresses as the fields `observer.ip` and `observer.mac`.
+
+`cache.ttl`
+: (Optional) The processor uses an internal cache for the observer metadata. This sets the cache expiration time. The default is `5m`; negative values disable caching altogether.
+
+`geo.name`
+: (Optional) User definable token to be used for identifying a discrete location. Frequently a datacenter, rack, or similar.
+
+`geo.location`
+: (Optional) Longitude and latitude in comma separated format.
+
+`geo.continent_name`
+: (Optional) Name of the continent.
+
+`geo.country_name`
+: (Optional) Name of the country.
+
+`geo.region_name`
+: (Optional) Name of the region.
+
+`geo.city_name`
+: (Optional) Name of the city.
+
+`geo.country_iso_code`
+: (Optional) ISO country code.
+
+`geo.region_iso_code`
+: (Optional) ISO region code.
+
+The `add_observer_metadata` processor annotates each event with relevant metadata from the observer machine. The fields added to the event look like the following:
+
+```json
+{
+  "observer" : {
+    "hostname" : "avce",
+    "type" : "heartbeat",
+    "vendor" : "elastic",
+    "ip" : [
+      "192.168.1.251",
+      "fe80::64b2:c3ff:fe5b:b974"
+    ],
+    "mac" : [
+      "dc:c1:02:6f:1b:ed"
+    ],
+    "geo": {
+      "continent_name": "North America",
+      "country_iso_code": "US",
+      "region_name": "New York",
+      "region_iso_code": "NY",
+      "city_name": "New York",
+      "name": "nyc-dc1-rack1",
+      "location": "40.7128, -74.0060"
+    }
+  }
+}
+```
+
diff --git a/docs/reference/filebeat/add-process-metadata.md b/docs/reference/filebeat/add-process-metadata.md
new file mode 100644
index 000000000000..02e942a7e6ca
--- /dev/null
+++ b/docs/reference/filebeat/add-process-metadata.md
@@ -0,0 +1,94 @@
+---
+navigation_title: "add_process_metadata"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-process-metadata.html
+---
+
+# Add process metadata [add-process-metadata]
+
+
+The `add_process_metadata` processor enriches events with information from running processes, identified by their process ID (PID).
+
+```yaml
+processors:
+  - add_process_metadata:
+      match_pids:
+        - process.pid
+```
+
+The fields added to the event look as follows:
+
+```json
+{
+  "container": {
+    "id": "b5285682fba7449c86452b89a800609440ecc88a7ba5f2d38bedfb85409b30b1"
+  },
+  "process": {
+    "args": [
+      "/usr/lib/systemd/systemd",
+      "--switched-root",
+      "--system",
+      "--deserialize",
+      "22"
+    ],
+    "executable": "/usr/lib/systemd/systemd",
+    "name": "systemd",
+    "owner": {
+      "id": "0",
+      "name": "root"
+    },
+    "parent": {
+      "pid": 0
+    },
+    "pid": 1,
+    "start_time": "2018-08-22T08:44:50.684Z",
+    "title": "/usr/lib/systemd/systemd --switched-root --system --deserialize 22"
+  }
+}
+```
+
+Optionally, the process environment can be included, too:
+
+```json
+  ...
+  "env": {
+    "HOME": "/",
+    "TERM": "linux",
+    "BOOT_IMAGE": "/boot/vmlinuz-4.11.8-300.fc26.x86_64",
+    "LANG": "en_US.UTF-8",
+  }
+  ...
+```
+
+It has the following settings:
+
+`match_pids`
+: List of fields to look up for a PID. The processor will search the list sequentially until the field is found in the current event, and the PID lookup will be applied to the value of this field.
+
+`target`
+: (Optional) Destination prefix where the `process` object will be created. The default is the event’s root.
+
+`include_fields`
+: (Optional) List of fields to add. By default, the processor will add all the available fields except `process.env`.
+
+`ignore_missing`
+: (Optional) When set to `false`, events that don’t contain any of the fields in `match_pids` will be discarded and an error will be generated. By default, this condition is ignored.
+
+`overwrite_keys`
+: (Optional) By default, if a target field already exists, it will not be overwritten, and an error will be logged. If `overwrite_keys` is set to `true`, this condition will be ignored.
+
+`restricted_fields`
+: (Optional) By default, the `process.env` field is not output, to avoid leaking sensitive data. If `restricted_fields` is `true`, the field will be present in the output.
+
+`host_path`
+: (Optional) By default, the `host_path` field is set to the root directory of the host `/`. This is the path where `/proc` is mounted. For different runtime configurations of Kubernetes or Docker, the `host_path` can be set to overwrite the default.
+
+`cgroup_prefixes`
+: (Optional) List of prefixes that will be matched against cgroup paths. When a cgroup path begins with a prefix in the list, then the last element of the path is returned as the container ID. Only one of `cgroup_prefixes` and `cgroup_regex` should be configured. If neither are configured then a default `cgroup_regex` value is used that matches cgroup paths containing 64-character container IDs (like those from Docker, Kubernetes, and Podman).
+
+`cgroup_regex`
+: (Optional) A regular expression that will be matched against cgroup paths. It must contain one capturing group. When a cgroup path matches the regular expression then the value of the capturing group is returned as the container ID. Only one of `cgroup_prefixes` and `cgroup_regex` should be configured. If neither are configured then a default `cgroup_regex` value is used that matches cgroup paths containing 64-character container IDs (like those from Docker, Kubernetes, and Podman).
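+
+For example, a minimal sketch that uses `cgroup_prefixes` (the prefix values below are illustrative, not defaults):
+
+```yaml
+processors:
+  - add_process_metadata:
+      match_pids:
+        - process.pid
+      cgroup_prefixes: ["/kubepods", "/docker"]
+```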
+
+`cgroup_cache_expire_time`
+: (Optional) By default, the `cgroup_cache_expire_time` is set to 30 seconds. This is the length of time before cgroup cache elements expire in seconds. It can be set to 0 to disable the cgroup cache. In some container runtime technologies, such as runc, the container’s process is also a process in the host kernel and is affected by PID rollover/reuse. The expire time needs to be set smaller than the PID wraparound time to avoid wrong container IDs.
+
diff --git a/docs/reference/filebeat/add-tags.md b/docs/reference/filebeat/add-tags.md
new file mode 100644
index 000000000000..f99b05e2a4f7
--- /dev/null
+++ b/docs/reference/filebeat/add-tags.md
@@ -0,0 +1,34 @@
+---
+navigation_title: "add_tags"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/add-tags.html
+---
+
+# Add tags [add-tags]
+
+
+The `add_tags` processor adds tags to a list of tags. If the target field already exists, the tags are appended to the existing list of tags.
+
+`tags`
+: List of tags to add.
+
+`target`
+: (Optional) Field the tags will be added to. Defaults to `tags`. Setting tags in `@metadata` is not supported.
+
+For example, this configuration:
+
+```yaml
+processors:
+  - add_tags:
+      tags: [web, production]
+      target: "environment"
+```
+
+Adds the environment field to every event:
+
+```json
+{
+  "environment": ["web", "production"]
+}
+```
+
diff --git a/docs/reference/filebeat/advanced-settings.md b/docs/reference/filebeat/advanced-settings.md
new file mode 100644
index 000000000000..0d8170ea6dcb
--- /dev/null
+++ b/docs/reference/filebeat/advanced-settings.md
@@ -0,0 +1,34 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/advanced-settings.html
+---
+
+# Override input settings [advanced-settings]
+
+Behind the scenes, each module starts a Filebeat input. Advanced users can add or override any input settings. For example, you can set [close_eof](/reference/filebeat/filebeat-input-log.md#filebeat-input-log-close-eof) to `true` in the module configuration:
+
+```yaml
+- module: nginx
+  access:
+    input:
+      close_eof: true
+```
+
+Or at the command line when you run Filebeat:
+
+```sh
+-M "nginx.access.input.close_eof=true"
+```
+
+You can use wildcards to change variables or settings for multiple modules/filesets at once. For example, you can enable `close_eof` for all the filesets in the `nginx` module:
+
+```sh
+-M "nginx.*.input.close_eof=true"
+```
+
+You can also enable `close_eof` for all inputs created by any of the modules:
+
+```sh
+-M "*.*.input.close_eof=true"
+```
+
diff --git a/docs/reference/filebeat/append.md b/docs/reference/filebeat/append.md
new file mode 100644
index 000000000000..868c7721a626
--- /dev/null
+++ b/docs/reference/filebeat/append.md
@@ -0,0 +1,73 @@
+---
+navigation_title: "append"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/append.html
+---
+
+# Append Processor [append]
+
+
+The `append` processor appends one or more values to an existing array if the target field already exists and it is an array. It converts a scalar to an array and appends one or more values to it if the field exists and it is a scalar. The values can either be one or more static values or one or more values from the fields listed under the *fields* key.
+
+`target_field`
+: The field in which you want to append the data.
+
+`fields`
+: (Optional) List of fields from which you want to copy data. If the value is of a concrete type it will be appended directly to the target. However, if the value is an array, all the elements of the array are pushed individually to the target field.
+
+`values`
+: (Optional) List of static values you want to append to the target field.
+
+`ignore_empty_values`
+: (Optional) If set to `true`, all the `""` and `nil` values are omitted from being appended to the target field.
+
+`fail_on_error`
+: (Optional) If set to `true` and an error occurs, the changes are reverted and the original is returned. If set to `false`, processing continues if an error occurs. Default is `true`.
+
+`allow_duplicate`
+: (Optional) If set to `false`, the processor does not append values already present in the field. The default is `true`, which will append duplicate values in the array.
+
+`ignore_missing`
+: (Optional) Indicates whether to ignore events that lack the source field. The default is `false`, which will fail processing of an event if a field is missing.
+
+Note: If you want to use the `fields` parameter with fields under `message`, make sure you use `decode_json_fields` first with `target: ""`.
+
+For example, this configuration:
+
+```yaml
+processors:
+  - decode_json_fields:
+      fields: message
+      target: ""
+  - append:
+      target_field: target-field
+      fields:
+        - concrete.field
+        - array.one
+      values:
+        - static-value
+        - ""
+      ignore_missing: true
+      fail_on_error: true
+      ignore_empty_values: true
+```
+
+Copies the values of the `concrete.field` and `array.one` fields and the static values to `target-field`:
+
+```json
+{
+  "concrete": {
+    "field": "val0"
+  },
+  "array": {
+    "one": [ "val1", "val2" ]
+  },
+  "target-field": [
+    "val0",
+    "val1",
+    "val2",
+    "static-value"
+  ]
+}
+```
+
diff --git a/docs/reference/filebeat/bandwidth-throttling.md b/docs/reference/filebeat/bandwidth-throttling.md
new file mode 100644
index 000000000000..3b7bcaef6980
--- /dev/null
+++ b/docs/reference/filebeat/bandwidth-throttling.md
@@ -0,0 +1,20 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/bandwidth-throttling.html
+---
+
+# Filebeat uses too much bandwidth [bandwidth-throttling]
+
+If you need to limit bandwidth usage, we recommend that you configure the network stack on your OS to perform bandwidth throttling.
+
+For example, the following Linux commands cap the connection between Filebeat and Logstash by setting a limit of 50 kbps on TCP connections over port 5044:
+
+```shell
+tc qdisc add dev $DEV root handle 1: htb
+tc class add dev $DEV parent 1:1 classid 1:10 htb rate 50kbps ceil 50kbps
+tc filter add dev $DEV parent 1:0 prio 1 protocol ip handle 10 fw flowid 1:10
+iptables -A OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10
+```
+
+Using OS tools to perform bandwidth throttling gives you better control over policies. For example, you can use OS tools to cap bandwidth during the day, but not at night. Or you can leave the bandwidth uncapped, but assign a low priority to the traffic.
+
diff --git a/docs/reference/filebeat/beats-api-keys.md b/docs/reference/filebeat/beats-api-keys.md
new file mode 100644
index 000000000000..375d49d8e60d
--- /dev/null
+++ b/docs/reference/filebeat/beats-api-keys.md
@@ -0,0 +1,142 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/beats-api-keys.html
+---
+
+# Grant access using API keys [beats-api-keys]
+
+Instead of using usernames and passwords, you can use API keys to grant access to {{es}} resources. You can set API keys to expire at a certain time, and you can explicitly invalidate them. Any user with the `manage_api_key` or `manage_own_api_key` cluster privilege can create API keys.
+
+Filebeat instances typically send both collected data and monitoring information to {{es}}.
If you are sending both to the same cluster, you can use the same API key. For different clusters, you need to use an API key per cluster. + +::::{note} +For security reasons, we recommend using a unique API key per Filebeat instance. You can create as many API keys per user as necessary. +:::: + + +::::{important} +Review [*Grant users access to secured resources*](/reference/filebeat/feature-roles.md) before creating API keys for Filebeat. +:::: + + + +## Create an API key for publishing [beats-api-key-publish] + +To create an API key to use for writing data to {{es}}, use the [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key), for example: + +```console +POST /_security/api_key +{ + "name": "filebeat_host001", <1> + "role_descriptors": { + "filebeat_writer": { <2> + "cluster": ["monitor", "read_ilm", "read_pipeline"], + "index": [ + { + "names": ["filebeat-*"], + "privileges": ["view_index_metadata", "create_doc", "auto_configure"] + } + ] + } + } +} +``` + +1. Name of the API key +2. Granted privileges, see [*Grant users access to secured resources*](/reference/filebeat/feature-roles.md) + + +::::{note} +See [Create a *publishing* user](/reference/filebeat/privileges-to-publish-events.md) for the list of privileges required to publish events. +:::: + + +The return value will look something like this: + +```console-result +{ + "id":"TiNAGG4BaaMdaH1tRfuU", <1> + "name":"filebeat_host001", + "api_key":"KnR6yE41RrSowb0kQ0HWoA" <2> +} +``` + +1. Unique id for this API key +2. Generated API key + + +You can now use this API key in your `filebeat.yml` configuration file like this: + +```yaml +output.elasticsearch: + api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA <1> +``` + +1. Format is `id:api_key` (as returned by [Create API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key)) + + + +## Create an API key for monitoring [beats-api-key-monitor] + +To create an API key to use for sending monitoring data to {{es}}, use the [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key), for example: + +```console +POST /_security/api_key +{ + "name": "filebeat_host001", <1> + "role_descriptors": { + "filebeat_monitoring": { <2> + "cluster": ["monitor"], + "index": [ + { + "names": [".monitoring-beats-*"], + "privileges": ["create_index", "create"] + } + ] + } + } +} +``` + +1. Name of the API key +2. Granted privileges, see [*Grant users access to secured resources*](/reference/filebeat/feature-roles.md) + + +::::{note} +See [Create a *monitoring* user](/reference/filebeat/privileges-to-publish-monitoring.md) for the list of privileges required to send monitoring data. +:::: + + +The return value will look something like this: + +```console-result +{ + "id":"TiNAGG4BaaMdaH1tRfuU", <1> + "name":"filebeat_host001", + "api_key":"KnR6yE41RrSowb0kQ0HWoA" <2> +} +``` + +1. Unique id for this API key +2. Generated API key + + +You can now use this API key in your `filebeat.yml` configuration file like this: + +```yaml +monitoring.elasticsearch: + api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA <1> +``` + +1. 
Format is `id:api_key` (as returned by [Create API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key)) + + + +## Learn more about API keys [learn-more-api-keys] + +See the {{es}} API key documentation for more information: + +* [Create API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) +* [Get API key information](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-api-key) +* [Invalidate API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-invalidate-api-key) + diff --git a/docs/reference/filebeat/change-index-name.md b/docs/reference/filebeat/change-index-name.md new file mode 100644 index 000000000000..dc88f911f22d --- /dev/null +++ b/docs/reference/filebeat/change-index-name.md @@ -0,0 +1,23 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/change-index-name.html +--- + +# Change the index name [change-index-name] + +Filebeat uses data streams named `filebeat-9.0.0-beta1`. To use a different name, set the [`index`](/reference/filebeat/elasticsearch-output.md#index-option-es) option in the {{es}} output. You also need to configure the `setup.template.name` and `setup.template.pattern` options to match the new name. For example: + +```sh +output.elasticsearch.index: "customname-%{[agent.version]}" +setup.template.name: "customname-%{[agent.version]}" +setup.template.pattern: "customname-%{[agent.version]}" +``` + +If you’re using pre-built Kibana dashboards, also set the `setup.dashboards.index` option. For example: + +```yaml +setup.dashboards.index: "customname-*" +``` + +For a full list of template setup options, see [Elasticsearch index template](/reference/filebeat/configuration-template.md). + diff --git a/docs/reference/filebeat/command-line-options.md b/docs/reference/filebeat/command-line-options.md new file mode 100644 index 000000000000..fb820e55d4ae --- /dev/null +++ b/docs/reference/filebeat/command-line-options.md @@ -0,0 +1,443 @@ +--- +navigation_title: "Command reference" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/command-line-options.html +--- + +# Filebeat command reference [command-line-options] + + +Filebeat provides a command-line interface for starting Filebeat and performing common tasks, like testing configuration files and loading dashboards. + +The command-line also supports [global flags](#global-flags) for controlling global behaviors. + +::::{tip} +Use `sudo` to run the following commands if: + +* the config file is owned by `root`, or +* Filebeat is configured to capture data that requires `root` access + +:::: + + +Some of the features described here require an Elastic license. For more information, see [https://www.elastic.co/subscriptions](https://www.elastic.co/subscriptions) and [License Management](docs-content://deploy-manage/license/manage-your-license-in-self-managed-cluster.md). + +| Commands | | +| --- | --- | +| [`export`](#export-command) | Exports the configuration, index template, ILM policy, or a dashboard to stdout. | +| [`help`](#help-command) | Shows help for any command. | +| [`keystore`](#keystore-command) | Manages the [secrets keystore](/reference/filebeat/keystore.md). | +| [`modules`](#modules-command) | Manages configured modules. | +| [`run`](#run-command) | Runs Filebeat. This command is used by default if you start Filebeat without specifying a command. 
|
+| [`setup`](#setup-command) | Sets up the initial environment, including the index template, ILM policy and write alias, {{kib}} dashboards (when available), and machine learning jobs (when available). |
+| [`test`](#test-command) | Tests the configuration. |
+| [`version`](#version-command) | Shows information about the current version. |
+
+Also see [Global flags](#global-flags).
+
+## `export` command [export-command]
+
+Exports the configuration, index template, ILM policy, or a dashboard to stdout. You can use this command to quickly view your configuration, see the contents of the index template and the ILM policy, or export a dashboard from {{kib}}.
+
+**SYNOPSIS**
+
+```sh
+filebeat export SUBCOMMAND [FLAGS]
+```
+
+**SUBCOMMANDS**
+
+**`config`**
+: Exports the current configuration to stdout. If you use the `-c` flag, this command exports the configuration that’s defined in the specified file.
+
+$$$dashboard-subcommand$$$**`dashboard`**
+: Exports a dashboard. You can use this option to store a dashboard on disk in a module and load it automatically. For example, to export the dashboard to a JSON file, run:
+
+    ```shell
+    filebeat export dashboard --id="DASHBOARD_ID" > dashboard.json
+    ```
+
+    To find the `DASHBOARD_ID`, look at the URL for the dashboard in {{kib}}. By default, `export dashboard` writes the dashboard to stdout. The example shows how to write the dashboard to a JSON file so that you can import it later. The JSON file will contain the dashboard with all visualizations and searches. You must load the index pattern separately for Filebeat.
+
+    To load the dashboard, copy the generated `dashboard.json` file into the `kibana/6/dashboard` directory of Filebeat, and run `filebeat setup --dashboards` to import the dashboard.
+
+    If {{kib}} is not running on `localhost:5601`, you must also adjust the Filebeat configuration under `setup.kibana`.
+
+
+$$$template-subcommand$$$**`template`**
+: Exports the index template to stdout. You can specify the `--es.version` flag to further define what gets exported. Furthermore you can export the template to a file instead of `stdout` by defining a directory via `--dir`.
+
+$$$ilm-policy-subcommand$$$
+
+**`ilm-policy`**
+: Exports the index lifecycle management policy to stdout. You can specify the `--es.version` and a `--dir` to which the policy should be exported as a file rather than exporting to `stdout`.
+
+**FLAGS**
+
+**`--es.version VERSION`**
+: When used with [`template`](#template-subcommand), exports an index template that is compatible with the specified version. When used with [`ilm-policy`](#ilm-policy-subcommand), exports the ILM policy if the specified ES version is enabled for ILM.
+
+**`-h, --help`**
+: Shows help for the `export` command.
+
+**`--dir DIRNAME`**
+: Define a directory to which the template, pipelines, and ILM policy should be exported as files instead of printing them to `stdout`.
+
+**`--id DASHBOARD_ID`**
+: When used with [`dashboard`](#dashboard-subcommand), specifies the dashboard ID.
+
+Also see [Global flags](#global-flags).
+
+**EXAMPLES**
+
+```sh
+filebeat export config
+filebeat export template --es.version 9.0.0-beta1
+filebeat export dashboard --id="a7b35890-8baa-11e8-9676-ef67484126fb" > dashboard.json
+```
+
+
+## `help` command [help-command]
+
+Shows help for any command. If no command is specified, shows help for the `run` command.
+
+**SYNOPSIS**
+
+```sh
+filebeat help COMMAND_NAME [FLAGS]
+```
+
+**`COMMAND_NAME`**
+: Specifies the name of the command to show help for.
+ +**FLAGS** + +**`-h, --help`** +: Shows help for the `help` command. + +Also see [Global flags](#global-flags). + +**EXAMPLE** + +```sh +filebeat help export +``` + + +## `keystore` command [keystore-command] + +Manages the [secrets keystore](/reference/filebeat/keystore.md). + +**SYNOPSIS** + +```sh +filebeat keystore SUBCOMMAND [FLAGS] +``` + +**SUBCOMMANDS** + +**`add KEY`** +: Adds the specified key to the keystore. Use the `--force` flag to overwrite an existing key. Use the `--stdin` flag to pass the value through `stdin`. + +**`create`** +: Creates a keystore to hold secrets. Use the `--force` flag to overwrite the existing keystore. + +**`list`** +: Lists the keys in the keystore. + +**`remove KEY`** +: Removes the specified key from the keystore. + +**FLAGS** + +**`--force`** +: Valid with the `add` and `create` subcommands. When used with `add`, overwrites the specified key. When used with `create`, overwrites the keystore. + +**`--stdin`** +: When used with `add`, uses the stdin as the source of the key’s value. + +**`-h, --help`** +: Shows help for the `keystore` command. + +Also see [Global flags](#global-flags). + +**EXAMPLES** + +```sh +filebeat keystore create +filebeat keystore add ES_PWD +filebeat keystore remove ES_PWD +filebeat keystore list +``` + +See [Secrets keystore](/reference/filebeat/keystore.md) for more examples. + + +## `modules` command [modules-command] + +Manages configured modules. You can use this command to enable and disable specific module configurations defined in the `modules.d` directory. The changes you make with this command are persisted and used for subsequent runs of Filebeat. + +To see which modules are enabled and disabled, run the `list` subcommand. + +**SYNOPSIS** + +```sh +filebeat modules SUBCOMMAND [FLAGS] +``` + +**SUBCOMMANDS** + +**`disable MODULE_LIST`** +: Disables the modules specified in the space-separated list. + +**`enable MODULE_LIST`** +: Enables the modules specified in the space-separated list. + +**`list`** +: Lists the modules that are currently enabled and disabled. + +**FLAGS** + +**`-h, --help`** +: Shows help for the `modules` command. + +Also see [Global flags](#global-flags). + +**EXAMPLES** + +```sh +filebeat modules list +filebeat modules enable apache2 auditd mysql +``` + + +## `run` command [run-command] + +Runs Filebeat. This command is used by default if you start Filebeat without specifying a command. + +**SYNOPSIS** + +```sh +filebeat run [FLAGS] +``` + +Or: + +```sh +filebeat [FLAGS] +``` + +**FLAGS** + +**`-N, --N`** +: Disables publishing for testing purposes. This option disables all outputs except the [File output](/reference/filebeat/file-output.md). + +**`--cpuprofile FILE`** +: Writes CPU profile data to the specified file. This option is useful for troubleshooting Filebeat. + +**`-h, --help`** +: Shows help for the `run` command. + +**`--httpprof [HOST]:PORT`** +: Starts an http server for profiling. This option is useful for troubleshooting and profiling Filebeat. + +**`--memprofile FILE`** +: Writes memory profile data to the specified output file. This option is useful for troubleshooting Filebeat. + +**`--modules MODULE_LIST`** +: Specifies a comma-separated list of modules to run. For example: + + ```sh + filebeat run --modules nginx,mysql,system + ``` + + Rather than specifying the list of modules every time you run Filebeat, you can use the [`modules`](#modules-command) command to enable and disable specific modules. Then when you run Filebeat, it will run any modules that are enabled. 
+
+
+**`--once`**
+: When the `--once` flag is used, Filebeat starts all configured harvesters and inputs, and runs each input until the harvesters are closed. If you set the `--once` flag, you should also set `close_eof` so the harvester is closed when the end of the file is reached. By default, harvesters are closed after `close_inactive` is reached.
+
+    The `--once` option is not currently supported with the [`filestream`](/reference/filebeat/filebeat-input-filestream.md) input type.
+
+
+**`--system.hostfs MOUNT_POINT`**
+: Specifies the mount point of the host’s filesystem for use in monitoring a host. This flag is deprecated; specify an alternate hostfs via the `hostfs` module config value instead.
+
+Also see [Global flags](#global-flags).
+
+**EXAMPLE**
+
+```sh
+filebeat run -e
+```
+
+Or:
+
+```sh
+filebeat -e
+```
+
+
+## `setup` command [setup-command]
+
+Sets up the initial environment, including the index template, ILM policy and write alias, {{kib}} dashboards (when available), and machine learning jobs (when available).
+
+* The index template ensures that fields are mapped correctly in Elasticsearch. If index lifecycle management is enabled, it also ensures that the defined ILM policy and write alias are connected to the indices matching the index template. The ILM policy takes care of the lifecycle of an index: when to do a rollover, when to move an index from the hot phase to the next phase, and so on.
+* The {{kib}} dashboards make it easier for you to visualize Filebeat data in {{kib}}.
+* The machine learning jobs contain the configuration information and metadata necessary to analyze data for anomalies.
+
+This command sets up the environment without actually running Filebeat and ingesting data. Specify optional flags to set up a subset of assets.
+
+**SYNOPSIS**
+
+```sh
+filebeat setup [FLAGS]
+```
+
+**FLAGS**
+
+**`--dashboards`**
+: Sets up the {{kib}} dashboards (when available). This option loads the dashboards from the Filebeat package. For more options, such as loading customized dashboards, see [Importing Existing Beat Dashboards](http://www.elastic.co/guide/en/beats/devguide/master/import-dashboards.md) in the *Beats Developer Guide*.
+
+**`-h, --help`**
+: Shows help for the `setup` command.
+
+**`--modules MODULE_LIST`**
+: Specifies a comma-separated list of modules. Use this flag to avoid errors when there are no modules defined in the `filebeat.yml` file.
+
+**`--pipelines`**
+: Sets up ingest pipelines for configured filesets. Filebeat looks for enabled modules in the `filebeat.yml` file. If you used the [`modules`](#modules-command) command to enable modules in the `modules.d` directory, also specify the `--modules` flag.
+
+**`--enable-all-filesets`**
+: Enables all modules and filesets. This is useful with `--pipelines` if you want to load all ingest pipelines. Without this option you would have to list every module with the [`modules`](#modules-command) command and enable every fileset within each module with a `-M` option to load all of the ingest pipelines.
+
+**`--force-enable-module-filesets`**
+: Enables all filesets in the enabled modules. This is useful with `--pipelines` if you want to load those modules’ ingest pipelines. Without this option you would have to enable every fileset within a module with a `-M` option to load its ingest pipelines.
+
+**`--index-management`**
+: Sets up components related to Elasticsearch index management, including the template, ILM policy, and write alias (if supported and configured).
+
+Also see [Global flags](#global-flags).
+ +**EXAMPLES** + +```sh +filebeat setup --dashboards +filebeat setup --pipelines +filebeat setup --pipelines --modules system,nginx,mysql <1> +filebeat setup --index-management +``` + +1. If you used the [`modules`](#modules-command) command to enable modules in the `modules.d` directory, also specify the `--modules` flag to indicate which modules to load pipelines for. + + + +## `test` command [test-command] + +Tests the configuration. + +**SYNOPSIS** + +```sh +filebeat test SUBCOMMAND [FLAGS] +``` + +**SUBCOMMANDS** + +**`config`** +: Tests the configuration settings. + +**`output`** +: Tests that Filebeat can connect to the output by using the current settings. + +**FLAGS** + +**`-h, --help`** +: Shows help for the `test` command. + +Also see [Global flags](#global-flags). + +**EXAMPLE** + +```sh +filebeat test config +``` + + +## `version` command [version-command] + +Shows information about the current version. + +**SYNOPSIS** + +```sh +filebeat version [FLAGS] +``` + +**FLAGS** + +**`-h, --help`** +: Shows help for the `version` command. + +Also see [Global flags](#global-flags). + +**EXAMPLE** + +```sh +filebeat version +``` + + +## Global flags [global-flags] + +These global flags are available whenever you run Filebeat. + +**`-E, --E "SETTING_NAME=VALUE"`** +: Overrides a specific configuration setting. You can specify multiple overrides. For example: + + ```sh + filebeat -E "name=mybeat" -E "output.elasticsearch.hosts=['http://myhost:9200']" + ``` + + This setting is applied to the currently running Filebeat process. The Filebeat configuration file is not changed. + + +**`-M, --M "VAR_NAME=VALUE"`** +: Overrides the default configuration for a Filebeat module. You can specify multiple variable overrides. For example: + + ```sh + filebeat --modules=nginx -M "nginx.access.var.paths=['/var/log/nginx/access.log*']" -M "nginx.access.var.pipeline=no_plugins" + ``` + + +**`-c, --c FILE`** +: Specifies the configuration file to use for Filebeat. The file you specify here is relative to `path.config`. If the `-c` flag is not specified, the default config file, `filebeat.yml`, is used. + +**`-d, --d SELECTORS`** +: Enables debugging for the specified selectors. For the selectors, you can specify a comma-separated list of components, or you can use `-d "*"` to enable debugging for all components. For example, `-d "publisher"` displays all the publisher-related messages. + +**`-e, --e`** +: Logs to stderr and disables syslog/file output. + +**`--environment`** +: For logging purposes, specifies the environment that Filebeat is running in. This setting is used to select a default log output when no log output is configured. Supported values are: `systemd`, `container`, `macos_service`, and `windows_service`. If `systemd` or `container` is specified, Filebeat will log to stdout and stderr by default. + +**`--path.config`** +: Sets the path for configuration files. See the [Directory layout](/reference/filebeat/directory-layout.md) section for details. + +**`--path.data`** +: Sets the path for data files. See the [Directory layout](/reference/filebeat/directory-layout.md) section for details. + +**`--path.home`** +: Sets the path for miscellaneous files. See the [Directory layout](/reference/filebeat/directory-layout.md) section for details. + +**`--path.logs`** +: Sets the path for log files. See the [Directory layout](/reference/filebeat/directory-layout.md) section for details. + +**`--strict.perms`** +: Sets strict permission checking on configuration files. 
The default is `--strict.perms=true`. See [Config file ownership and permissions](/reference/libbeat/config-file-permissions.md) for more information.
+
+**`-v, --v`**
+: Logs INFO-level messages.
+
+
diff --git a/docs/reference/filebeat/community-id.md b/docs/reference/filebeat/community-id.md
new file mode 100644
index 000000000000..15e82552ab9e
--- /dev/null
+++ b/docs/reference/filebeat/community-id.md
@@ -0,0 +1,41 @@
+---
+navigation_title: "community_id"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/community-id.html
+---
+
+# Community ID Network Flow Hash [community-id]
+
+
+The `community_id` processor computes a network flow hash according to the [Community ID Flow Hash specification](https://github.com/corelight/community-id-spec).
+
+The flow hash is useful for correlating all network events related to a single flow. For example, you can filter on a community ID value and get back NetFlow records from multiple collectors as well as layer 7 protocol records from Packetbeat.
+
+By default, the processor is configured to read the flow parameters from the appropriate Elastic Common Schema (ECS) fields. If you are processing ECS data, no parameters are required.
+
+```yaml
+processors:
+  - community_id:
+```
+
+If the data does not conform to ECS, you can customize the field names that the processor reads from. You can also change the `target` field, which is where the computed hash is written.
+
+```yaml
+processors:
+  - community_id:
+      fields:
+        source_ip: my_source_ip
+        source_port: my_source_port
+        destination_ip: my_dest_ip
+        destination_port: my_dest_port
+        iana_number: my_iana_number
+        transport: my_transport
+        icmp_type: my_icmp_type
+        icmp_code: my_icmp_code
+      target: network.community_id
+```
+
+If the necessary fields are not present in the event, the processor silently continues without adding the target field.
+
+The processor also accepts an optional `seed` parameter that must be a 16-bit unsigned integer. This value is incorporated into all generated hashes.
+
diff --git a/docs/reference/filebeat/configuration-autodiscover-advanced.md b/docs/reference/filebeat/configuration-autodiscover-advanced.md
new file mode 100644
index 000000000000..d918e5cda96d
--- /dev/null
+++ b/docs/reference/filebeat/configuration-autodiscover-advanced.md
@@ -0,0 +1,32 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-advanced.html
+---
+
+# Advanced usage [configuration-autodiscover-advanced]
+
+
+## Appenders [_appenders]
+
+Appenders let you amend a configuration that has already been built with the help of templates or builders. An appender can be configured to apply only when a required condition is matched. The kind of configuration that is applied is specific to each appender.
+
+
+### Config [_config_2]
+
+The `config` appender applies additional settings on top of the configuration generated by templates or builders. The settings are applied whenever the provided condition is matched, or always, if no condition is provided.
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: kubernetes
+      templates:
+        ...
+      appenders:
+        - type: config
+          condition.equals:
+            kubernetes.namespace: "prometheus"
+          config:
+            fields:
+              type: monitoring
+```
+
diff --git a/docs/reference/filebeat/configuration-autodiscover-hints.md b/docs/reference/filebeat/configuration-autodiscover-hints.md
new file mode 100644
index 000000000000..bff6e319eb03
--- /dev/null
+++ b/docs/reference/filebeat/configuration-autodiscover-hints.md
@@ -0,0 +1,395 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html
+---
+
+# Hints based autodiscover [configuration-autodiscover-hints]
+
+Filebeat supports autodiscover based on hints from the provider. The hints system looks for hints in Kubernetes Pod annotations or Docker labels that have the prefix `co.elastic.logs`. As soon as the container starts, Filebeat will check if it contains any hints and launch the proper config for it. Hints tell Filebeat how to get logs for the given container. By default, logs are retrieved from the container using the `filestream` input. You can use hints to modify this behavior. This is the full list of supported hints:
+
+
+### `co.elastic.logs/enabled` [_co_elastic_logsenabled]
+
+Filebeat gets logs from all containers by default. You can set this hint to `false` to ignore the output of the container; Filebeat won’t read or send logs from it. If the default config is disabled, you can use this annotation to enable log retrieval only for containers with this hint set to `true`. If you are aiming to use this with Kubernetes, keep in mind that annotation values can only be of string type, so you will need to explicitly define this as `"true"` or `"false"` accordingly.
+
+
+### `co.elastic.logs/multiline.*` [_co_elastic_logsmultiline]
+
+Multiline settings. See [Multiline messages](/reference/filebeat/multiline-examples.md) for a full list of all supported options.
+
+
+### `co.elastic.logs/json.*` [_co_elastic_logsjson]
+
+JSON settings. See [`ndjson`](/reference/filebeat/filebeat-input-filestream.md#filebeat-input-filestream-ndjson) for a full list of all supported options.
+
+For example, the following hints with json options:
+
+```yaml
+co.elastic.logs/json.message_key: "log"
+co.elastic.logs/json.add_error_key: "true"
+```
+
+will lead to the following input configuration:
+
+`filestream`
+
+```yaml
+parsers:
+  - ndjson:
+      message_key: "log"
+      add_error_key: "true"
+```
+
+
+### `co.elastic.logs/include_lines` [_co_elastic_logsinclude_lines]
+
+A list of regular expressions to match the lines that you want Filebeat to include. See [Inputs](/reference/filebeat/configuration-filebeat-options.md) for more info.
+
+
+### `co.elastic.logs/exclude_lines` [_co_elastic_logsexclude_lines]
+
+A list of regular expressions to match the lines that you want Filebeat to exclude. See [Inputs](/reference/filebeat/configuration-filebeat-options.md) for more info.
+
+
+### `co.elastic.logs/module` [_co_elastic_logsmodule]
+
+Instead of using a raw input, specifies the module to use to parse logs from the container. See [Modules](/reference/filebeat/filebeat-modules.md) for the list of supported modules.
+
+
+### `co.elastic.logs/fileset` [_co_elastic_logsfileset]
+
+When a module is configured, this hint maps container logs to module filesets.
You can either configure a single fileset like this:
+
+```yaml
+co.elastic.logs/fileset: access
+```
+
+Or configure a fileset per stream in the container (stdout and stderr):
+
+```yaml
+co.elastic.logs/fileset.stdout: access
+co.elastic.logs/fileset.stderr: error
+```
+
+
+### `co.elastic.logs/raw` [_co_elastic_logsraw]
+
+When an entire input or module configuration needs to be set explicitly, use the `raw` hint. You can provide a stringified JSON of the input configuration. `raw` overrides every other hint and can be used to create either a single configuration or a list of configurations.
+
+```yaml
+co.elastic.logs/raw: "[{\"containers\":{\"ids\":[\"${data.container.id}\"]},\"multiline\":{\"negate\":\"true\",\"pattern\":\"^test\"},\"type\":\"docker\"}]"
+```
+
+
+### `co.elastic.logs/processors` [_co_elastic_logsprocessors]
+
+Define a processor to be added to the Filebeat input/module configuration. See [Processors](/reference/filebeat/filtering-enhancing-data.md) for the list of supported processors.
+
+If the processors configuration uses the list data structure, object fields must be enumerated. For example, hints for the `rename` processor configuration below
+
+```yaml
+processors:
+  - rename:
+      fields:
+        - from: "a.g"
+          to: "e.d"
+      fail_on_error: true
+```
+
+will look like:
+
+```yaml
+co.elastic.logs/processors.rename.fields.0.from: "a.g"
+co.elastic.logs/processors.rename.fields.1.to: "e.d"
+co.elastic.logs/processors.rename.fail_on_error: 'true'
+```
+
+If the processors configuration uses the map data structure, enumeration is not needed. For example, the equivalent to the `add_fields` configuration below
+
+```yaml
+processors:
+  - add_fields:
+      target: project
+      fields:
+        name: myproject
+```
+
+is
+
+```yaml
+co.elastic.logs/processors.1.add_fields.target: "project"
+co.elastic.logs/processors.1.add_fields.fields.name: "myproject"
+```
+
+To control the ordering of processor definitions, provide numbers. If you don’t, the hints builder applies an arbitrary ordering:
+
+```yaml
+co.elastic.logs/processors.1.dissect.tokenizer: "%{key1} %{key2}"
+co.elastic.logs/processors.dissect.tokenizer: "%{key2} %{key1}"
+```
+
+In the above sample, the processor definition tagged with `1` would be executed first.
+
+
+### `co.elastic.logs/pipeline` [_co_elastic_logspipeline]
+
+Define an ingest pipeline ID to be added to the Filebeat input/module configuration.
+
+```yaml
+co.elastic.logs/pipeline: custom-pipeline
+```
+
+When hints are used along with templates, hints are evaluated only if no template condition resolves to true. For example:
+
+```yaml
+filebeat.autodiscover.providers:
+  - type: docker
+    hints.enabled: true
+    hints.default_config:
+      type: container
+      paths:
+        - /var/lib/docker/containers/${data.container.id}/*.log
+    templates:
+      - condition:
+          equals:
+            docker.container.labels.type: "pipeline"
+        config:
+          - type: container
+            paths:
+              - "/var/lib/docker/containers/${data.docker.container.id}/*.log"
+            pipeline: my-pipeline
+```
+
+In this example, the condition `docker.container.labels.type: "pipeline"` is evaluated first. If it does not match, the hints are processed, and if there is still no valid config, the `hints.default_config` is used.
+
+
+## Kubernetes [_kubernetes_2]
+
+The Kubernetes autodiscover provider supports hints in Pod annotations.
To enable it just set `hints.enabled`: + +```yaml +filebeat.autodiscover: + providers: + - type: kubernetes + hints.enabled: true +``` + +You can configure the default config that will be launched when a new container is seen, like this: + +```yaml +filebeat.autodiscover: + providers: + - type: kubernetes + hints.enabled: true + hints.default_config: + type: container + paths: + - /var/log/containers/*-${data.container.id}.log # CRI path +``` + +You can also disable default settings entirely, so only Pods annotated like `co.elastic.logs/enabled: true` will be retrieved: + +```yaml +filebeat.autodiscover: + providers: + - type: kubernetes + hints.enabled: true + hints.default_config.enabled: false +``` + +You can annotate Kubernetes Pods with useful info to spin up Filebeat inputs or modules: + +```yaml +annotations: + co.elastic.logs/multiline.pattern: '^\[' + co.elastic.logs/multiline.negate: true + co.elastic.logs/multiline.match: after +``` + + +### Multiple containers [_multiple_containers] + +When a pod has multiple containers, the settings are shared unless you put the container name in the hint. For example, these hints configure multiline settings for all containers in the pod, but set a specific `exclude_lines` hint for the container called `sidecar`. + +```yaml +annotations: + co.elastic.logs/multiline.pattern: '^\[' + co.elastic.logs/multiline.negate: true + co.elastic.logs/multiline.match: after + co.elastic.logs.sidecar/exclude_lines: '^DBG' +``` + + +### Multiple sets of hints [_multiple_sets_of_hints] + +When a container needs multiple inputs to be defined on it, sets of annotations can be provided with numeric prefixes. If there are hints that don’t have a numeric prefix then they get grouped together into a single configuration. + +```yaml +annotations: + co.elastic.logs/exclude_lines: '^DBG' + co.elastic.logs/1.include_lines: '^DBG' + co.elastic.logs/1.processors.dissect.tokenizer: "%{key2} %{key1}" +``` + +The above configuration would generate two input configurations. The first input handles only debug logs and passes it through a dissect tokenizer. The second input handles everything but debug logs. + + +### Namespace Defaults [_namespace_defaults] + +Hints can be configured on the Namespace’s annotations as defaults to use when Pod level annotations are missing. The resultant hints are a combination of Pod annotations and Namespace annotations with the Pod’s taking precedence. To enable Namespace defaults configure the `add_resource_metadata` for Namespace objects as follows: + +```yaml +filebeat.autodiscover: + providers: + - type: kubernetes + hints.enabled: true + add_resource_metadata: + namespace: + include_annotations: ["nsannotation1"] +``` + + +## Docker [_docker_3] + +Docker autodiscover provider supports hints in labels. 
To enable it, just set `hints.enabled`:
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: docker
+      hints.enabled: true
+```
+
+You can configure the default config that will be launched when a new container is seen, like this:
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: docker
+      hints.enabled: true
+      hints.default_config:
+        type: container
+        paths:
+          - /var/log/containers/*-${data.container.id}.log  # CRI path
+```
+
+You can also disable default settings entirely, so only containers labeled with `co.elastic.logs/enabled: true` will be retrieved:
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: docker
+      hints.enabled: true
+      hints.default_config.enabled: false
+```
+
+You can label Docker containers with useful info to spin up Filebeat inputs, for example:
+
+```yaml
+  co.elastic.logs/module: nginx
+  co.elastic.logs/fileset.stdout: access
+  co.elastic.logs/fileset.stderr: error
+```
+
+The above labels configure Filebeat to use the Nginx module to harvest logs for this container. Access logs will be retrieved from the stdout stream, and error logs from the stderr stream.
+
+You can label Docker containers with useful info to decode logs structured as JSON messages, for example:
+
+```yaml
+  co.elastic.logs/json.keys_under_root: true
+  co.elastic.logs/json.add_error_key: true
+  co.elastic.logs/json.message_key: log
+```
+
+
+## Nomad [_nomad_2]
+
+The Nomad autodiscover provider supports hints using the [`meta` stanza](https://www.nomadproject.io/docs/job-specification/meta.html). To enable it, just set `hints.enabled`:
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: nomad
+      hints.enabled: true
+```
+
+You can configure the default config that will be launched when a new job is seen, like this:
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: nomad
+      hints.enabled: true
+      hints.default_config:
+        type: filestream
+        id: ${data.nomad.task.name}-${data.nomad.allocation.id} # unique ID required
+        paths:
+          - /opt/nomad/alloc/${data.nomad.allocation.id}/alloc/logs/${data.nomad.task.name}.*
+```
+
+You can also disable the default config such that only logs from jobs explicitly annotated with `"co.elastic.logs/enabled" = "true"` will be collected:
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: nomad
+      hints.enabled: true
+      hints.default_config:
+        enabled: false
+        type: filestream
+        id: ${data.nomad.task.name}-${data.nomad.allocation.id} # unique ID required
+        paths:
+          - /opt/nomad/alloc/${data.nomad.allocation.id}/alloc/logs/${data.nomad.task.name}.*
+```
+
+You can annotate Nomad Jobs using the `meta` stanza with useful info to spin up Filebeat inputs or modules:
+
+```json
+meta {
+  "co.elastic.logs/enabled" = "true"
+  "co.elastic.logs/multiline.pattern" = "^\\["
+  "co.elastic.logs/multiline.negate" = "true"
+  "co.elastic.logs/multiline.match" = "after"
+}
+```
+
+If you are using autodiscover, then in most cases you will want to use the [`add_nomad_metadata`](/reference/filebeat/add-nomad-metadata.md) processor to enrich events with Nomad metadata. This example configures Filebeat to connect to the local Nomad agent over HTTPS and adds the Nomad allocation ID to all events from the input. Later in the pipeline, the `add_nomad_metadata` processor will use that ID to enrich the event.
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: nomad
+      address: https://localhost:4646
+      hints.enabled: true
+      hints.default_config:
+        enabled: false <1>
+        type: filestream
+        id: ${data.nomad.task.name}-${data.nomad.allocation.id} <2>
+        paths:
+          - /opt/nomad/alloc/${data.nomad.allocation.id}/alloc/logs/${data.nomad.task.name}.*
+        processors:
+          - add_fields: <3>
+              target: nomad
+              fields:
+                allocation.id: ${data.nomad.allocation.id}
+
+processors:
+  - add_nomad_metadata: <4>
+      when.has_fields.fields: [nomad.allocation.id]
+      address: https://localhost:4646
+      default_indexers.enabled: false
+      default_matchers.enabled: false
+      indexers:
+        - allocation_uuid:
+      matchers:
+        - fields:
+            lookup_fields:
+              - 'nomad.allocation.id'
+```
+
+1. The default config is disabled, meaning any task without the `"co.elastic.logs/enabled" = "true"` metadata will be ignored.
+2. A unique ID is required.
+3. The `add_fields` processor populates the `nomad.allocation.id` field with the Nomad allocation UUID.
+4. The `add_nomad_metadata` processor is configured at the global level so that it is only instantiated one time, which saves resources.
+
+
diff --git a/docs/reference/filebeat/configuration-autodiscover.md b/docs/reference/filebeat/configuration-autodiscover.md
new file mode 100644
index 000000000000..fbe02fa7f36f
--- /dev/null
+++ b/docs/reference/filebeat/configuration-autodiscover.md
@@ -0,0 +1,570 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover.html
+---
+
+# Autodiscover [configuration-autodiscover]
+
+When you run applications on containers, they become moving targets to the monitoring system. Autodiscover allows you to track them and adapt settings as changes happen. By defining configuration templates, the autodiscover subsystem can monitor services as they start running.
+
+You define autodiscover settings in the `filebeat.autodiscover` section of the `filebeat.yml` config file. To enable autodiscover, you specify a list of providers.
+
+
+## Providers [_providers_2]
+
+Autodiscover providers work by watching for events on the system and translating those events into internal autodiscover events with a common format. When you configure the provider, you can optionally use fields from the autodiscover event to set conditions that, when met, launch specific configurations.
+
+On start, Filebeat will scan existing containers and launch the proper configs for them. Then it will watch for new start/stop events. This ensures you don’t need to worry about state, but only define your desired configs.
+
+
+#### Docker [_docker_2]
+
+The Docker autodiscover provider watches for Docker containers to start and stop.
+
+It has the following settings:
+
+`host`
+: (Optional) Docker socket (UNIX or TCP socket). It uses `unix:///var/run/docker.sock` by default.
+
+`ssl`
+: (Optional) SSL configuration to use when connecting to the Docker socket.
+
+`cleanup_timeout`
+: (Optional) Specify the time of inactivity before stopping the running configuration for a container, 60s by default.
+
+`labels.dedot`
+: (Optional) Defaults to `false`. If set to `true`, replaces dots in labels with `_`.
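+
+For example, the following provider configuration sets the Docker socket explicitly and waits two minutes before dropping the configuration of a stopped container. This is a minimal sketch; the socket path and timeout values are illustrative:
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: docker
+      host: unix:///var/run/docker.sock  # illustrative; this is also the default socket
+      cleanup_timeout: 120s              # keep the config for 2 minutes after the container stops
+```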
+
+These are the fields available within config templating. The `docker.*` fields will be available on each emitted event:
+
+* host
+* port
+* docker.container.id
+* docker.container.image
+* docker.container.name
+* docker.container.labels
+
+For example:
+
+```yaml
+{
+  "host": "10.4.15.9",
+  "port": 6379,
+  "docker": {
+    "container": {
+      "id": "382184ecdb385cfd5d1f1a65f78911054c8511ae009635300ac28b4fc357ce51",
+      "name": "redis",
+      "image": "redis:3.2.11",
+      "labels": {
+        "io.kubernetes.pod.namespace": "default"
+        ...
+      }
+    }
+  }
+}
+```
+
+You can define a set of configuration templates to be applied when the condition matches an event. Templates define a condition to match on autodiscover events, together with the list of configurations to launch when this condition happens.
+
+Conditions match events from the provider. Providers use the same format for [Conditions](/reference/filebeat/defining-processors.md#conditions) that processors use.
+
+Configuration templates can contain variables from the autodiscover event. They can be accessed under the `data` namespace. For example, with the example event above, "`${data.port}`" resolves to `6379`.
+
+Filebeat supports templates for inputs and modules.
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: docker
+      templates:
+        - condition:
+            contains:
+              docker.container.image: redis
+          config:
+            - type: container
+              paths:
+                - /var/lib/docker/containers/${data.docker.container.id}/*.log
+              exclude_lines: ["^\\s+[\\-`('.|_]"] # drop asciiart lines
+```
+
+This configuration launches a `container` logs input for all containers running an image with `redis` in the name. `labels.dedot` defaults to `true` for Docker autodiscover, which means dots in Docker labels are replaced with `_` by default.
+
+If you are using modules, you can override the default input and use the `container` input instead.
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: docker
+      templates:
+        - condition:
+            contains:
+              docker.container.image: redis
+          config:
+            - module: redis
+              log:
+                input:
+                  type: container
+                  paths:
+                    - /var/lib/docker/containers/${data.docker.container.id}/*.log
+```
+
+::::{warning}
+When using autodiscover, you have to be careful when defining config templates, especially if they are reading from places holding information for several containers. For instance, under this file structure:
+
+`/mnt/logs/<container_name>/*.log`
+
+You can define a config template like this:
+
+**Wrong settings**:
+
+```yaml
+autodiscover.providers:
+  - type: docker
+    templates:
+      - condition.contains:
+          docker.container.image: nginx
+        config:
+          - type: log
+            paths:
+              - "/mnt/logs/*/*.log"
+```
+
+That would read all the files under the given path several times (one per nginx container). What you really want is to scope your template to the container that matched the autodiscover condition.
+
+**Good settings**:
+
+```yaml
+autodiscover.providers:
+  - type: docker
+    templates:
+      - condition.contains:
+          docker.container.image: nginx
+        config:
+          - type: log
+            paths:
+              - "/mnt/logs/${data.docker.container.id}/*.log"
+```
+
+::::
+
+
+
+#### Kubernetes [_kubernetes]
+
+The Kubernetes autodiscover provider watches for Kubernetes nodes, pods, and services to start, update, and stop.
+
+The `kubernetes` autodiscover provider has the following configuration settings:
+
+`node`
+: (Optional) Specify the node to scope Filebeat to in case it cannot be accurately detected, as when running Filebeat in host network mode.
+
+`namespace`
+: (Optional) Select the namespace from which to collect the events from the resources. If it is not set, the provider collects them from all namespaces. It is unset by default. The namespace configuration only applies to Kubernetes resources that are namespace scoped and when the `unique` field is set to `false`.
+
+`cleanup_timeout`
+: (Optional) Specify the time of inactivity before stopping the running configuration for a container, 60s by default.
+
+`kube_config`
+: (Optional) Use the given config file as configuration for the Kubernetes client. If `kube_config` is not set, the `KUBECONFIG` environment variable will be checked, and if not present, it will fall back to InCluster.
+
+`kube_client_options`
+: (Optional) Additional options can be configured for the Kubernetes client. Currently client QPS and burst are supported; if not set, the Kubernetes client’s [default QPS and burst](https://pkg.go.dev/k8s.io/client-go/rest#pkg-constants) will be used. Example:
+
+```yaml
+      kube_client_options:
+        qps: 5
+        burst: 10
+```
+
+`resource`
+: (Optional) Select the resource to do discovery on. Currently supported Kubernetes resources are `pod`, `service` and `node`. If not configured, `resource` defaults to `pod`.
+
+`scope`
+: (Optional) Specify at what level autodiscover needs to be done. `scope` can take either `node` or `cluster` as values. `node` scope allows discovery of resources in the specified node. `cluster` scope allows cluster-wide discovery. Only `pod` and `node` resources can be discovered at node scope.
+
+`add_resource_metadata`
+: (Optional) Specify filters and configuration for the extra metadata that will be added to the event. Configuration parameters:
+
+    * `node` or `namespace`: Specify labels and annotations filters for the extra metadata coming from node and namespace. By default all labels are included while annotations are not. To change the default behavior, `include_labels`, `exclude_labels` and `include_annotations` can be defined. Those settings are useful when storing labels and annotations that require special handling to avoid overloading the storage output. Note: wildcards are not supported for those settings. The enrichment of `node` or `namespace` metadata can be individually disabled by setting `enabled: false`.
+    * `deployment`: If the resource is `pod` and it is created from a `deployment`, the deployment name isn’t added by default; this can be enabled by setting `deployment: true`.
+    * `cronjob`: If the resource is `pod` and it is created from a `cronjob`, the cronjob name isn’t added by default; this can be enabled by setting `cronjob: true`.
+
+    Example:
+
+```yaml
+      add_resource_metadata:
+        namespace:
+          include_labels: ["namespacelabel1"]
+        node:
+          include_labels: ["nodelabel2"]
+          include_annotations: ["nodeannotation1"]
+        # deployment: false
+        # cronjob: false
+```
+
+`unique`
+: (Optional) Defaults to `false`. Marking an autodiscover provider as unique means that the provider enables the provided templates only when it gains the leader lease. This setting can only be combined with `cluster` scope. When `unique` is enabled, the `resource` and `add_resource_metadata` settings are not taken into account.
+
+`leader_lease`
+: (Optional) Defaults to `filebeat-cluster-leader`. This is the name of the lock lease. You can monitor the status of the lease with `kubectl describe lease filebeat-cluster-leader`. Different Beats that refer to the same leader lease will be competitors in holding the lease, and only one will be elected as leader each time.
+
+`leader_leaseduration`
+: (Optional) Duration that non-leader candidates will wait to force-acquire the lease leadership. Defaults to `15s`.
+
+`leader_renewdeadline`
+: (Optional) Duration that the leader will retry refreshing its leadership before giving up. Defaults to `10s`.
+
+`leader_retryperiod`
+: (Optional) Duration that the Filebeat instances trying to acquire the lease should wait between attempts. Defaults to `2s`.
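+
+For example, a cluster-scoped provider that relies on leader election, so that only the Filebeat instance holding the lease enables its templates, might look like this. This is a minimal sketch; the lease name, input ID, and paths are illustrative:
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: kubernetes
+      scope: cluster
+      unique: true                      # enable templates only while holding the lease
+      leader_lease: my-cluster-leader   # illustrative; defaults to filebeat-cluster-leader
+      templates:
+        - config:
+            - type: filestream
+              id: cluster-wide-logs     # unique ID required for filestream inputs
+              paths:
+                - /var/log/containers/*.log
+```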
+
+Configuration templates can contain variables from the autodiscover event. These variables can be accessed under the `data` namespace, e.g. to access the Pod IP: `${data.kubernetes.pod.ip}`.
+
+These are the fields available within config templating. The `kubernetes.*` fields will be available on each emitted event:
+
+
+##### Generic fields: [_generic_fields]
+
+* host
+
+
+##### Pod specific: [_pod_specific]
+
+| Key | Type | Description |
+| --- | --- | --- |
+| `port` | `string` | Pod port. If the pod has multiple ports exposed, use `ports.<port_name>` instead | 
+| `kubernetes.namespace` | `string` | Namespace, where the Pod is running |
+| `kubernetes.namespace_uuid` | `string` | UUID of the Namespace, where the Pod is running |
+| `kubernetes.namespace_annotations.*` | `object` | Annotations of the Namespace, where the Pod is running. Annotations should be used in non-dedoted format, e.g. `kubernetes.namespace_annotations.app.kubernetes.io/name` |
+| `kubernetes.pod.name` | `string` | Name of the Pod |
+| `kubernetes.pod.uid` | `string` | UID of the Pod |
+| `kubernetes.pod.ip` | `string` | IP of the Pod |
+| `kubernetes.labels.*` | `object` | Object of the Pod labels. Labels should be used in non-dedoted format, e.g. `kubernetes.labels.app.kubernetes.io/name` |
+| `kubernetes.annotations.*` | `object` | Object of the Pod annotations. Annotations should be used in non-dedoted format, e.g. `kubernetes.annotations.test.io/test` |
+| `kubernetes.container.name` | `string` | Name of the container |
+| `kubernetes.container.runtime` | `string` | Runtime of the container |
+| `kubernetes.container.id` | `string` | ID of the container |
+| `kubernetes.container.image` | `string` | Image of the container |
+| `kubernetes.node.name` | `string` | Name of the Node |
+| `kubernetes.node.uid` | `string` | UID of the Node |
+| `kubernetes.node.hostname` | `string` | Hostname of the Node |
+
+
+##### Node specific: [_node_specific]
+
+| Key | Type | Description |
+| --- | --- | --- |
+| `kubernetes.labels.*` | `object` | Object of labels of the Node |
+| `kubernetes.annotations.*` | `object` | Object of annotations of the Node |
+| `kubernetes.node.name` | `string` | Name of the Node |
+| `kubernetes.node.uid` | `string` | UID of the Node |
+| `kubernetes.node.hostname` | `string` | Hostname of the Node |
+
+
+##### Service specific: [_service_specific]
+
+| Key | Type | Description |
+| --- | --- | --- |
+| `port` | `string` | Service port |
+| `kubernetes.namespace` | `string` | Namespace of the Service |
+| `kubernetes.namespace_uuid` | `string` | UUID of the Namespace of the Service |
+| `kubernetes.namespace_annotations.*` | `object` | Annotations of the Namespace of the Service. Annotations should be used in non-dedoted format, e.g. `kubernetes.namespace_annotations.app.kubernetes.io/name` |
+| `kubernetes.labels.*` | `object` | Object of the Service labels |
+| `kubernetes.annotations.*` | `object` | Object of the Service annotations |
+| `kubernetes.service.name` | `string` | Name of the Service |
+| `kubernetes.service.uid` | `string` | UID of the Service |
+
+If the `include_annotations` config is added to the provider config, then the annotations listed in the config are added to the event.
+
+If the `include_labels` config is added to the provider config, then the labels listed in the config are added to the event.
+
+If the `exclude_labels` config is added to the provider config, then the labels listed in the config are excluded from the event.
+
+If the `labels.dedot` config is set to `true` in the provider config, then `.` in labels will be replaced with `_`. By default it is `true`.
+
+If the `annotations.dedot` config is set to `true` in the provider config, then `.` in annotations will be replaced with `_`. By default it is `true`.
+
+::::{note}
+Starting from the 8.6 release, `kubernetes.labels.*` used in config templating are not dedoted, regardless of the `labels.dedot` value. This config parameter only affects the fields added in the final Elasticsearch document. For example, for a pod with the label `app.kubernetes.io/name=ingress-nginx`, the matching condition should be `condition.equals: kubernetes.labels.app.kubernetes.io/name: "ingress-nginx"`. If `labels.dedot` is set to `true` (the default value), the label will be stored in Elasticsearch as `kubernetes.labels.app_kubernetes_io/name`. The same applies for Kubernetes annotations.
+::::
+
+
+For example:
+
+```yaml
+{
+  "host": "172.17.0.21",
+  "port": 9090,
+  "kubernetes": {
+    "container": {
+      "id": "bb3a50625c01b16a88aa224779c39262a9ad14264c3034669a50cd9a90af1527",
+      "image": "prom/prometheus",
+      "name": "prometheus"
+    },
+    "labels": {
+      "project": "prometheus",
+      ...
+    },
+    "namespace": "default",
+    "node": {
+      "name": "minikube"
+    },
+    "pod": {
+      "name": "prometheus-2657348378-k1pnh"
+    }
+  }
+}
+```
+
+Filebeat supports templates for inputs and modules.
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: kubernetes
+      templates:
+        - condition:
+            equals:
+              kubernetes.namespace: kube-system
+          config:
+            - type: container
+              paths:
+                - /var/log/containers/*-${data.kubernetes.container.id}.log
+              exclude_lines: ["^\\s+[\\-`('.|_]"] # drop asciiart lines
+```
+
+This configuration launches a `container` logs input for all containers of pods running in the Kubernetes namespace `kube-system`.
+
+If you are using modules, you can override the default input and use the `container` input instead.
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: kubernetes
+      templates:
+        - condition:
+            equals:
+              kubernetes.container.image: "redis"
+          config:
+            - module: redis
+              log:
+                input:
+                  type: container
+                  paths:
+                    - /var/log/containers/*-${data.kubernetes.container.id}.log
+```
+
+
+#### Jolokia [_jolokia]
+
+The Jolokia autodiscover provider uses Jolokia Discovery to find agents running in your host or your network.
+
+The configuration of this provider consists of a set of network interfaces, as well as a set of templates as in other providers. The network interfaces will be the ones used for discovery probes. Each item of `interfaces` has these settings:
+
+`name`
+: The name of the interface (e.g. `br0`). It can contain a wildcard as suffix to apply the same settings to multiple network interfaces of the same type (e.g. `br*`).
+
+`interval`
+: Time between probes (defaults to 10s)
+
+`grace_period`
+: Time since the last reply to consider an instance stopped (defaults to 30s)
+
+`probe_timeout`
+: Max time to wait for responses since a probe is sent (defaults to 1s)
+
+The Jolokia Discovery mechanism is supported by any Jolokia agent since version 1.2.0. It is enabled by default when Jolokia is included in the application as a JVM agent, but disabled in other cases such as the OSGi or WAR (Java EE) agents. In any case, this feature is controlled with two properties:
+
+* `discoveryEnabled`, to enable the feature
+* `discoveryAgentUrl`, if set, this is the URL announced by the agent when being discovered; setting this parameter implicitly enables the feature
+
+There are multiple ways of setting these properties, and they can vary from application to application; please refer to the documentation of your application to find the most suitable way to set them in your case.
+
+Jolokia Discovery is based on UDP multicast requests. Agents join the multicast group 239.192.48.84, port 24884, and discovery is done by sending queries to this group. You have to take into account that UDP traffic between Filebeat and the Jolokia agents has to be allowed. Also notice that this multicast address is in the 239.0.0.0/8 range, which is reserved for private use within an organization, so it can only be used in private networks.
+
+These are the fields available within config templating. The `jolokia.*` fields will be available on each emitted event:
+
+* jolokia.agent.id
+* jolokia.agent.version
+* jolokia.secured
+* jolokia.server.product
+* jolokia.server.vendor
+* jolokia.server.version
+* jolokia.url
+
+Filebeat supports templates for inputs and modules:
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: jolokia
+      interfaces:
+        - name: lo
+      templates:
+        - condition:
+            contains:
+              jolokia.server.product: "kafka"
+          config:
+            - module: kafka
+              log:
+                enabled: true
+                var.paths:
+                  - /var/log/kafka/*.log
+```
+
+This configuration starts the `kafka` module, which collects Kafka logs if Kafka is running. Discovery probes are sent using the local interface.
+
+
+#### Nomad [_nomad]
+
+::::{warning}
+This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
+::::
+
+
+The Nomad autodiscover provider watches for Nomad jobs to start, update, and stop.
+
+The `nomad` autodiscover provider has the following configuration settings:
+
+`address`
+: (Optional) Specify the address of the Nomad agent. By default it will try to talk to a Nomad agent running locally (`http://127.0.0.1:4646`).
+
+`region`
+: (Optional) Region to use. If not provided, the default agent region is used.
+
+`namespace`
+: (Optional) Namespace to use. If not provided, the `default` namespace is used.
+
+`secret_id`
+: (Optional) SecretID to use if ACL is enabled in Nomad. This is an example ACL policy to apply to the token.
+
+```json
+namespace "*" {
+  policy = "read"
+}
+node {
+  policy = "read"
+}
+agent {
+  policy = "read"
+}
+```
+
+`node`
+: (Optional) Specify the node to scope Filebeat to in case it cannot be accurately detected when `node` scope is used.
+
+`scope`
+: (Optional) Specify at what level autodiscover needs to be done. `scope` can take either `node` or `cluster` as values. `node` scope allows discovery of resources in the specified node.
`cluster` scope allows cluster-wide discovery. Defaults to `node`.
+
+`wait_time`
+: (Optional) Limits how long a Watch will block. If not specified (or set to `0`), the default configuration from the agent will be used.
+
+`allow_stale`
+: (Optional) Allows any Nomad server (non-leader) to service a read. This normally means that the local node where Filebeat is allocated will service Filebeat’s requests. Defaults to `true`.
+
+The configuration of templates and conditions is similar to that of the Docker provider. Configuration templates can contain variables from the autodiscover event. They can be accessed under the `data` namespace.
+
+These are the fields available within config templating. The `nomad.*` fields will be available on each emitted event:
+
+* nomad.allocation.id
+* nomad.allocation.name
+* nomad.allocation.status
+* nomad.datacenter
+* nomad.job.name
+* nomad.job.type
+* nomad.namespace
+* nomad.region
+* nomad.task.name
+* nomad.task.service.canary_tags
+* nomad.task.service.name
+* nomad.task.service.tags
+
+If the `include_labels` config is added to the provider config, then the labels listed in the config are added to the event.
+
+If the `exclude_labels` config is added to the provider config, then the labels listed in the config are excluded from the event.
+
+If the `labels.dedot` config is set to `true` in the provider config, then `.` in labels will be replaced with `_`.
+
+For example:
+
+```yaml
+{
+  ...
+  "region": "europe",
+  "allocation": {
+    "name": "coffeshop.api[0]",
+    "id": "35eba07f-e5e4-20ac-6def-85117bee6efb",
+    "status": "running"
+  },
+  "datacenters": [
+    "europe-west4"
+  ],
+  "namespace": "default",
+  "job": {
+    "type": "service",
+    "name": "coffeshop"
+  },
+  "task": {
+    "service": {
+      "name": [
+        "coffeshop"
+      ],
+      "tags": [
+        "coffeshop",
+        "nginx"
+      ],
+      "canary_tags": [
+        "coffeshop"
+      ]
+    },
+    "name": "api"
+  },
+  ...
+}
+```
+
+Filebeat supports templates for inputs and modules.
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: nomad
+      node: nomad1
+      scope: local
+      hints.enabled: true
+      allow_stale: true
+      templates:
+        - condition:
+            equals:
+              nomad.namespace: web
+          config:
+            - type: filestream
+              id: ${data.nomad.task.name}-${data.nomad.allocation.id} # unique ID required
+              paths:
+                - /var/lib/nomad/alloc/${data.nomad.allocation.id}/alloc/logs/${data.nomad.task.name}.stderr.[0-9]*
+              exclude_lines: ["^\\s+[\\-`('.|_]"] # drop asciiart lines
+```
+
+This configuration launches a `filestream` input for all jobs under the `web` Nomad namespace.
+
+If you are using modules, you can override the default input and customize it to read from the `${data.nomad.task.name}.stdout` and/or `${data.nomad.task.name}.stderr` files.
+
+```yaml
+filebeat.autodiscover:
+  providers:
+    - type: nomad
+      templates:
+        - condition:
+            equals:
+              nomad.task.service.tags: "redis"
+          config:
+            - module: redis
+              log:
+                input:
+                  type: filestream
+                  id: ${data.nomad.task.name}-${data.nomad.allocation.id} # unique ID required
+                  paths:
+                    - /var/lib/nomad/alloc/${data.nomad.allocation.id}/alloc/logs/${data.nomad.task.name}.*
+```
+
+::::{warning}
+The `docker` input is currently not supported. Nomad doesn’t expose the container ID associated with the allocation. Without the container ID, there is no way of generating the proper path for reading the container’s logs.
+::::
+
diff --git a/docs/reference/filebeat/configuration-dashboards.md b/docs/reference/filebeat/configuration-dashboards.md
new file mode 100644
index 000000000000..519070bbbd64
--- /dev/null
+++ b/docs/reference/filebeat/configuration-dashboards.md
@@ -0,0 +1,103 @@
+---
+navigation_title: "Kibana dashboards"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-dashboards.html
+---
+
+# Configure Kibana dashboard loading [configuration-dashboards]
+
+
+Filebeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Filebeat data in Kibana.
+
+To load the dashboards, you can either enable dashboard loading in the `setup.dashboards` section of the `filebeat.yml` config file, or you can run the `setup` command. Dashboard loading is disabled by default.
+
+When dashboard loading is enabled, Filebeat uses the Kibana API to load the sample dashboards. Dashboard loading is only attempted when Filebeat starts up. If Kibana is not available at startup, Filebeat will stop with an error.
+
+To enable dashboard loading, add the following setting to the config file:
+
+```yaml
+setup.dashboards.enabled: true
+```
+
+
+## Configuration options [_configuration_options_35]
+
+You can specify the following options in the `setup.dashboards` section of the `filebeat.yml` config file:
+
+
+### `setup.dashboards.enabled` [_setup_dashboards_enabled]
+
+If this option is set to true, Filebeat loads the sample Kibana dashboards from the local `kibana` directory in the home path of the Filebeat installation.
+
+::::{note}
+Filebeat loads dashboards on startup if either `enabled` is set to `true` or the `setup.dashboards` section is included in the configuration.
+::::
+
+
+::::{note}
+When dashboard loading is enabled, Filebeat overwrites any existing dashboards that match the names of the dashboards you are loading. This happens every time Filebeat starts.
+::::
+
+
+If no other options are set, the dashboards are loaded from the local `kibana` directory in the home path of the Filebeat installation. To load dashboards from a different location, you can configure one of the following options: [`setup.dashboards.directory`](#directory-option), [`setup.dashboards.url`](#url-option), or [`setup.dashboards.file`](#file-option).
+
+
+### `setup.dashboards.directory` [directory-option]
+
+The directory that contains the dashboards to load. The default is the `kibana` folder in the home path.
+
+
+### `setup.dashboards.url` [url-option]
+
+The URL to use for downloading the dashboard archive. If this option is set, Filebeat downloads the dashboard archive from the specified URL instead of using the local directory.
+
+
+### `setup.dashboards.file` [file-option]
+
+The file archive (zip file) that contains the dashboards to load. If this option is set, Filebeat looks for a dashboard archive in the specified path instead of using the local directory.
+
+
+### `setup.dashboards.beat` [_setup_dashboards_beat]
+
+In case the archive contains the dashboards for multiple Beats, this setting lets you select the Beat for which you want to load dashboards. To load all the dashboards in the archive, set this option to an empty string. The default is `"filebeat"`.
+
+
+### `setup.dashboards.kibana_index` [_setup_dashboards_kibana_index]
+
+The name of the Kibana index to use for setting the configuration. The default is `".kibana"`.
+
+
+### `setup.dashboards.index` [_setup_dashboards_index]
+The Elasticsearch index name. This setting overwrites the index name defined in the dashboards and index pattern. Example: `"testbeat-*"`
+
+::::{note}
+This setting only works for Kibana 6.0 and newer.
+::::
+
+
+### `setup.dashboards.always_kibana` [_setup_dashboards_always_kibana]
+
+Force loading of dashboards using the Kibana API without querying Elasticsearch for the version. The default is `false`.
+
+
+### `setup.dashboards.retry.enabled` [_setup_dashboards_retry_enabled]
+
+If this option is set to true and Kibana is not reachable at the time when dashboards are loaded, Filebeat will retry connecting to Kibana instead of exiting with an error. Disabled by default.
+
+
+### `setup.dashboards.retry.interval` [_setup_dashboards_retry_interval]
+
+Duration interval between Kibana connection retries. Defaults to 1 second.
+
+
+### `setup.dashboards.retry.maximum` [_setup_dashboards_retry_maximum]
+
+Maximum number of retries before exiting with an error. Set to 0 for unlimited retrying. Default is unlimited.
+
+
+### `setup.dashboards.string_replacements` [_setup_dashboards_string_replacements]
+
+A map of needle and replacement strings, used to replace the needle strings in dashboards and in the contents of their references.
+
diff --git a/docs/reference/filebeat/configuration-feature-flags.md b/docs/reference/filebeat/configuration-feature-flags.md
new file mode 100644
index 000000000000..49376c262432
--- /dev/null
+++ b/docs/reference/filebeat/configuration-feature-flags.md
@@ -0,0 +1,54 @@
+---
+navigation_title: "Feature flags"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-feature-flags.html
+---
+
+# Configure feature flags [configuration-feature-flags]
+
+
+The Feature Flags section of the `filebeat.yml` config file contains settings in Filebeat that are disabled by default. These may include experimental features, changes to behaviors within Filebeat, or settings that could cause a breaking change. For example, a setting that changes the information included in events might be inconsistent with the naming pattern expected in your configured Filebeat output.
+
+To enable any of the settings listed on this page, change the associated `enabled` flag from `false` to `true`.
+
+```yaml
+features:
+  mysetting:
+    enabled: true
+```
+
+
+## Configuration options [_configuration_options_40]
+
+You can specify the following options in the `features` section of the `filebeat.yml` config file:
+
+
+### `fqdn` [_fqdn]
+
+Contains configuration for the FQDN reporting feature. When this feature is enabled, the fully-qualified domain name for the host is reported in the `host.name` field in events produced by Filebeat.
+
+::::{warning}
+This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
+::::
+
+
+For FQDN reporting to work as expected, the hostname of the current host must either:
+
+* Have a CNAME entry defined in DNS.
+* Have one of its corresponding IP addresses respond successfully to a reverse DNS lookup.
+
+If neither prerequisite is satisfied, `host.name` continues to report the hostname of the current host as if the FQDN feature flag were not enabled.
+
+Example configuration:
+
+```yaml
+features:
+  fqdn:
+    enabled: true
+```
+
+
+#### `enabled` [_enabled_39]
+
+Set to `true` to enable the FQDN reporting feature of Filebeat. Defaults to `false`.
+ diff --git a/docs/reference/filebeat/configuration-filebeat-modules.md b/docs/reference/filebeat/configuration-filebeat-modules.md new file mode 100644 index 000000000000..60c2b76664f8 --- /dev/null +++ b/docs/reference/filebeat/configuration-filebeat-modules.md @@ -0,0 +1,95 @@ +--- +navigation_title: "Modules" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-modules.html +--- + +# Configure modules [configuration-filebeat-modules] + + +::::{note} +Using Filebeat modules is optional. You may decide to [configure inputs manually](/reference/filebeat/configuration-filebeat-options.md) if you’re using a log type that isn’t supported, or you want to use a different setup. +:::: + + +Filebeat [modules](/reference/filebeat/filebeat-modules.md) provide a quick way to get started processing common log formats. They contain default configurations, {{es}} ingest pipeline definitions, and {{kib}} dashboards to help you implement and deploy a log monitoring solution. + +You can configure modules in the `modules.d` directory (recommended), or in the Filebeat configuration file. + +Before running Filebeat with modules enabled, make sure you also set up the environment to use {{kib}} dashboards. See [Quick start: installation and configuration](/reference/filebeat/filebeat-installation-configuration.md) for more information. + +::::{note} +On systems with POSIX file permissions, all Beats configuration files are subject to ownership and file permission checks. For more information, see [Config File Ownership and Permissions](/reference/libbeat/config-file-permissions.md). +:::: + + + +## Configure modules in the `modules.d` directory [configure-modules-d-configs] + +The `modules.d` directory contains default configurations for all the modules available in Filebeat. To enable or disable specific module configurations under `modules.d`, run the [`modules enable` or `modules disable`](/reference/filebeat/command-line-options.md#modules-command) command. For example: + +:::::::{tab-set} + +::::::{tab-item} DEB +```sh +filebeat modules enable nginx +``` +:::::: + +::::::{tab-item} RPM +```sh +filebeat modules enable nginx +``` +:::::: + +::::::{tab-item} MacOS +```sh +./filebeat modules enable nginx +``` +:::::: + +::::::{tab-item} Linux +```sh +./filebeat modules enable nginx +``` +:::::: + +::::::{tab-item} Windows +```sh +PS > .\filebeat.exe modules enable nginx +``` +:::::: + +::::::: +The default configurations assume that your data is in the location expected for your OS and that the behavior of the module is appropriate for your environment. To change the default behavior, configure variable settings. For a list of available settings, see the documentation under [Modules](/reference/filebeat/filebeat-modules.md). + +For advanced use cases, you can also [override input settings](/reference/filebeat/advanced-settings.md). + +::::{tip} +You can enable modules at runtime by using the [--modules flag](/reference/filebeat/filebeat-modules.md). This is useful if you’re getting started and want to try things out. Any modules specified at the command line are loaded along with any modules that are enabled in the configuration file or `modules.d` directory. If there’s a conflict, the configuration specified at the command line is used. +:::: + + + +## Configure modules in the `filebeat.yml` file [configure-modules-config-file] + +When possible, you should use the config files in the `modules.d` directory. 
+ +However, configuring [modules](/reference/filebeat/filebeat-modules.md) directly in the config file is a practical approach if you have upgraded from a previous version of Filebeat and don’t want to move your module configs to the `modules.d` directory. You can continue to configure modules in the `filebeat.yml` file, but you won’t be able to use the `modules` command to enable and disable configurations because the command requires the `modules.d` layout. + +To enable specific modules in the `filebeat.yml` config file, add entries to the `filebeat.modules` list. Each entry in the list begins with a dash (-) and is followed by settings for that module. + +The following example shows a configuration that runs the `nginx`,`mysql`, and `system` modules: + +```yaml +filebeat.modules: +- module: nginx + access: + error: +- module: mysql + slowlog: +- module: system + auth: +``` + + diff --git a/docs/reference/filebeat/configuration-filebeat-options.md b/docs/reference/filebeat/configuration-filebeat-options.md new file mode 100644 index 000000000000..55c249bd5f98 --- /dev/null +++ b/docs/reference/filebeat/configuration-filebeat-options.md @@ -0,0 +1,88 @@ +--- +navigation_title: "Inputs" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html +--- + +# Configure inputs [configuration-filebeat-options] + + +::::{tip} +[Filebeat modules](/reference/filebeat/filebeat-modules-overview.md) provide the fastest getting started experience for common log formats. See [Quick start: installation and configuration](/reference/filebeat/filebeat-installation-configuration.md) to learn how to get started. +:::: + + +To configure Filebeat manually (instead of using [modules](/reference/filebeat/filebeat-modules-overview.md)), you specify a list of inputs in the `filebeat.inputs` section of the `filebeat.yml`. Inputs specify how Filebeat locates and processes input data. + +The list is a [YAML](http://yaml.org/) array, so each input begins with a dash (`-`). You can specify multiple inputs, and you can specify the same input type more than once. For example: + +```yaml +filebeat.inputs: +- type: filestream + id: my-filestream-id <1> + paths: + - /var/log/system.log + - /var/log/wifi.log +- type: filestream + id: apache-filestream-id + paths: + - "/var/log/apache2/*" + fields: + apache: true + fields_under_root: true +``` + +1. Each filestream input must have a unique ID to allow tracking the state of files. + + +For the most basic configuration, define a single input with a single path. For example: + +```yaml +filebeat.inputs: +- type: filestream + id: my-filestream-id + paths: + - /var/log/*.log +``` + +The input in this example harvests all files in the path `/var/log/*.log`, which means that Filebeat will harvest all files in the directory `/var/log/` that end with `.log`. All patterns supported by [Go Glob](https://golang.org/pkg/path/filepath/#Glob) are also supported here. + +To fetch all files from a predefined level of subdirectories, use this pattern: `/var/log/*/*.log`. This fetches all `.log` files from the subfolders of `/var/log`. It does not fetch log files from the `/var/log` folder itself. Currently it is not possible to recursively fetch all files in all subdirectories of a directory. 
+ + +## Input types [filebeat-input-types] + +You can configure Filebeat to use the following inputs: + +* [AWS CloudWatch](/reference/filebeat/filebeat-input-aws-cloudwatch.md) +* [AWS S3](/reference/filebeat/filebeat-input-aws-s3.md) +* [Azure Event Hub](/reference/filebeat/filebeat-input-azure-eventhub.md) +* [Azure Blob Storage](/reference/filebeat/filebeat-input-azure-blob-storage.md) +* [Benchmark](/reference/filebeat/filebeat-input-benchmark.md) +* [CEL](/reference/filebeat/filebeat-input-cel.md) +* [Cloud Foundry](/reference/filebeat/filebeat-input-cloudfoundry.md) +* [CometD](/reference/filebeat/filebeat-input-cometd.md) +* [Container](/reference/filebeat/filebeat-input-container.md) +* [Entity Analytics](/reference/filebeat/filebeat-input-entity-analytics.md) +* [ETW](/reference/filebeat/filebeat-input-etw.md) +* [filestream](/reference/filebeat/filebeat-input-filestream.md) +* [GCP Pub/Sub](/reference/filebeat/filebeat-input-gcp-pubsub.md) +* [Google Cloud Storage](/reference/filebeat/filebeat-input-gcs.md) +* [HTTP Endpoint](/reference/filebeat/filebeat-input-http_endpoint.md) +* [HTTP JSON](/reference/filebeat/filebeat-input-httpjson.md) +* [journald](/reference/filebeat/filebeat-input-journald.md) +* [Kafka](/reference/filebeat/filebeat-input-kafka.md) +* [Log](/reference/filebeat/filebeat-input-log.md) (deprecated in 7.16.0, use [filestream](/reference/filebeat/filebeat-input-filestream.md)) +* [MQTT](/reference/filebeat/filebeat-input-mqtt.md) +* [NetFlow](/reference/filebeat/filebeat-input-netflow.md) +* [Office 365 Management Activity API](/reference/filebeat/filebeat-input-o365audit.md) +* [Redis](/reference/filebeat/filebeat-input-redis.md) +* [Salesforce](/reference/filebeat/filebeat-input-salesforce.md) +* [Stdin](/reference/filebeat/filebeat-input-stdin.md) +* [Streaming](/reference/filebeat/filebeat-input-streaming.md) +* [Syslog](/reference/filebeat/filebeat-input-syslog.md) +* [TCP](/reference/filebeat/filebeat-input-tcp.md) +* [UDP](/reference/filebeat/filebeat-input-udp.md) +* [Unified Logs](/reference/filebeat/filebeat-input-unifiedlogs.md) +* [Unix](/reference/filebeat/filebeat-input-unix.md) +* [winlog](/reference/filebeat/filebeat-input-winlog.md) diff --git a/docs/reference/filebeat/configuration-general-options.md b/docs/reference/filebeat/configuration-general-options.md new file mode 100644 index 000000000000..16afda2da579 --- /dev/null +++ b/docs/reference/filebeat/configuration-general-options.md @@ -0,0 +1,187 @@ +--- +navigation_title: "General settings" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-general-options.html +--- + +# Configure general settings [configuration-general-options] + + +You can specify settings in the `filebeat.yml` config file to control the general behavior of Filebeat. This includes: + +* [Global options](#configuration-global-options) that control things like publisher behavior and the location of some files. +* [General options](#configuration-general) that are supported by all Elastic Beats. + + +## Global Filebeat configuration options [configuration-global-options] + +These options are in the `filebeat` namespace. + + +### `registry.path` [_registry_path] + +The root path of the registry. If a relative path is used, it is considered relative to the data path. See the [Directory layout](/reference/filebeat/directory-layout.md) section for details. The default is `${path.data}/registry`. 
+ +```yaml +filebeat.registry.path: registry +``` + +::::{note} +The registry is only updated when new events are flushed and not on a predefined period. That means in case there are some states where the TTL expired, these are only removed when new events are processed. +:::: + + + +### `registry.file_permissions` [_registry_file_permissions] + +The permissions mask to apply on registry data file. The default value is 0600. The permissions option must be a valid Unix-style file permissions mask expressed in octal notation. In Go, numbers in octal notation must start with 0. + +The most permissive mask allowed is 0640. If a higher permissions mask is specified via this setting, it will be subject to an umask of 0027. + +This option is not supported on Windows. + +Examples: + +* 0640: give read and write access to the file owner, and read access to members of the group associated with the file. +* 0600: give read and write access to the file owner, and no access to all others. + +```yaml +filebeat.registry.file_permissions: 0600 +``` + + +### `registry.flush` [_registry_flush] + +The timeout value that controls when registry entries are written to disk (flushed). When an unwritten update exceeds this value, it triggers a write to disk. When `registry.flush` is set to 0s, the registry is written to disk after each batch of events has been published successfully. The default value is 1s. + +::::{note} +The registry is always updated when Filebeat shuts down normally. After an abnormal shutdown, the registry will not be up-to-date if the `registry.flush` value is >0s. Filebeat will send published events again (depending on values in the last updated registry file). +:::: + + +::::{note} +Filtering out a huge number of logs can cause many registry updates, slowing down processing. Setting `registry.flush` to a value >0s reduces write operations, helping Filebeat process more events. +:::: + + + +### `registry.migrate_file` [_registry_migrate_file] + +Prior to Filebeat 7.0 the registry is stored in a single file. When you upgrade to 7.0, Filebeat will automatically migrate the old Filebeat 6.x registry file to use the new directory format. Filebeat looks for the file in the location specified by `filebeat.registry.path`. If you changed the path while upgrading, set `filebeat.registry.migrate_file` to point to the old registry file. + +```yaml +filebeat.registry.path: ${path.data}/registry +filebeat.registry.migrate_file: /path/to/old/registry_file +``` + +The registry will be migrated to the new location only if a registry using the directory format does not already exist. + + +### `config_dir` [_config_dir] + +[6.0.0] + +The full path to the directory that contains additional input configuration files. Each configuration file must end with `.yml`. Each config file must also specify the full Filebeat config hierarchy even though only the `inputs` part of each file is processed. All global options, such as `registry_file`, are ignored. + +The `config_dir` option MUST point to a directory other than the directory where the main Filebeat config file resides. + +If the specified path is not absolute, it is considered relative to the configuration path. See the [Directory layout](/reference/filebeat/directory-layout.md) section for details. + +```yaml +filebeat.config_dir: path/to/configs +``` + + +### `shutdown_timeout` [shutdown-timeout] + +How long Filebeat waits on shutdown for the publisher to finish sending events before Filebeat shuts down. 
+ +By default, this option is disabled, and Filebeat does not wait for the publisher to finish sending events before shutting down. This means that any events sent to the output, but not acknowledged before Filebeat shuts down, are sent again when you restart Filebeat. For more details about how this works, see [How does Filebeat ensure at-least-once delivery?](/reference/filebeat/how-filebeat-works.md#at-least-once-delivery). + +You can configure the `shutdown_timeout` option to specify the maximum amount of time that Filebeat waits for the publisher to finish sending events before shutting down. If all events are acknowledged before `shutdown_timeout` is reached, Filebeat will shut down. + +There is no recommended setting for this option because determining the correct value for `shutdown_timeout` depends heavily on the environment in which Filebeat is running and the current state of the output. + +Example configuration: + +```yaml +filebeat.shutdown_timeout: 5s +``` + + +## General configuration options [configuration-general] + + +These options are supported by all Elastic Beats. Because they are common options, they are not namespaced. + +Here is an example configuration: + +```yaml +name: "my-shipper" +tags: ["service-X", "web-tier"] +``` + + +### `name` [_name_2] + +The name of the Beat. If this option is empty, the `hostname` of the server is used. The name is included as the `agent.name` field in each published transaction. You can use the name to group all transactions sent by a single Beat. + +Example: + +```yaml +name: "my-shipper" +``` + + +### `tags` [_tags_30] + +A list of tags that the Beat includes in the `tags` field of each published transaction. Tags make it easy to group servers by different logical properties. For example, if you have a cluster of web servers, you can add the "webservers" tag to the Beat on each server, and then use filters and queries in the Kibana web interface to get visualisations for the whole group of servers. + +Example: + +```yaml +tags: ["my-service", "hardware", "test"] +``` + + +### `fields` [libbeat-configuration-fields] + +Optional fields that you can specify to add additional information to the output. Fields can be scalar values, arrays, dictionaries, or any nested combination of these. By default, the fields that you specify here will be grouped under a `fields` sub-dictionary in the output document. To store the custom fields as top-level fields, set the `fields_under_root` option to true. + +Example: + +```yaml +fields: {project: "myproject", instance-id: "574734885120952459"} +``` + + +### `fields_under_root` [_fields_under_root_2] + +If this option is set to true, the custom [fields](#libbeat-configuration-fields) are stored as top-level fields in the output document instead of being grouped under a `fields` sub-dictionary. If the custom field names conflict with other field names, then the custom fields overwrite the other fields. + +Example: + +```yaml +fields_under_root: true +fields: + instance_id: i-10a64379 + region: us-east-1 +``` + + +### `processors` [_processors_30] + +A list of processors to apply to the data generated by the beat. + +See [Processors](/reference/filebeat/filtering-enhancing-data.md) for information about specifying processors in your config. + + +### `max_procs` [_max_procs] + +Sets the maximum number of CPUs that can be executing simultaneously. The default is the number of logical CPUs available in the system. 
+ + +### `timestamp.precision` [_timestamp_precision] + +Configure the precision of all timestamps. By default it is set to millisecond. Available options: millisecond, microsecond, nanosecond + diff --git a/docs/reference/filebeat/configuration-instrumentation.md b/docs/reference/filebeat/configuration-instrumentation.md new file mode 100644 index 000000000000..4f4e8aa53639 --- /dev/null +++ b/docs/reference/filebeat/configuration-instrumentation.md @@ -0,0 +1,87 @@ +--- +navigation_title: "Instrumentation" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-instrumentation.html +--- + +# Configure APM instrumentation [configuration-instrumentation] + + +Libbeat uses the Elastic APM Go Agent to instrument its publishing pipeline. Currently, only the Elasticsearch output is instrumented. To gain insight into the performance of Filebeat, you can enable this instrumentation and send trace data to the APM Integration. + +Example configuration with instrumentation enabled: + +```yaml +instrumentation: + enabled: true + environment: production + hosts: + - "http://localhost:8200" + api_key: L5ER6FEvjkmlfalBealQ3f3fLqf03fazfOV +``` + + +## Configuration options [_configuration_options_39] + +You can specify the following options in the `instrumentation` section of the `filebeat.yml` config file: + + +### `enabled` [_enabled_38] + +Set to `true` to enable instrumentation of Filebeat. Defaults to `false`. + + +### `environment` [_environment] + +Set the environment in which Filebeat is running, for example, `staging`, `production`, `dev`, etc. Environments can be filtered in the [APM app](docs-content://solutions/observability/apps/overviews.md). + + +### `hosts` [_hosts_4] + +The APM integration [host](docs-content://reference/ingestion-tools/observability/apm-settings.md) to report instrumentation data to. Defaults to `http://localhost:8200`. + + +### `api_key` [_api_key_2] + +The [API Key](docs-content://reference/ingestion-tools/observability/apm-settings.md) used to secure communication with the APM Integration. If `api_key` is set then `secret_token` will be ignored. + + +### `secret_token` [_secret_token] + +The [Secret token](docs-content://reference/ingestion-tools/observability/apm-settings.md) used to secure communication with the APM Integration. + + +### `profiling.cpu.enabled` [_profiling_cpu_enabled] + +Set to `true` to enable CPU profiling, where profile samples are recorded as events. + +This feature is experimental. + + +### `profiling.cpu.interval` [_profiling_cpu_interval] + +Configure the CPU profiling interval. Defaults to `60s`. + +This feature is experimental. + + +### `profiling.cpu.duration` [_profiling_cpu_duration] + +Configure the CPU profiling duration. Defaults to `10s`. + +This feature is experimental. + + +### `profiling.heap.enabled` [_profiling_heap_enabled] + +Set to `true` to enable heap profiling. + +This feature is experimental. + + +### `profiling.heap.interval` [_profiling_heap_interval] + +Configure the heap profiling interval. Defaults to `60s`. + +This feature is experimental. 
+ diff --git a/docs/reference/filebeat/configuration-kerberos.md b/docs/reference/filebeat/configuration-kerberos.md new file mode 100644 index 000000000000..d36ffc8c768e --- /dev/null +++ b/docs/reference/filebeat/configuration-kerberos.md @@ -0,0 +1,90 @@ +--- +navigation_title: "Kerberos" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-kerberos.html +--- + +# Configure Kerberos [configuration-kerberos] + + +You can specify Kerberos options with any output or input that supports Kerberos, like {{es}}. + +The following encryption types are supported: + +* aes128-cts-hmac-sha1-96 +* aes128-cts-hmac-sha256-128 +* aes256-cts-hmac-sha1-96 +* aes256-cts-hmac-sha384-192 +* des3-cbc-sha1-kd +* rc4-hmac + +Example output config with Kerberos password based authentication: + +```yaml +output.elasticsearch.hosts: ["http://my-elasticsearch.elastic.co:9200"] +output.elasticsearch.kerberos.auth_type: password +output.elasticsearch.kerberos.username: "elastic" +output.elasticsearch.kerberos.password: "changeme" +output.elasticsearch.kerberos.config_path: "/etc/krb5.conf" +output.elasticsearch.kerberos.realm: "ELASTIC.CO" +``` + +The service principal name for the Elasticsearch instance is contructed from these options. Based on this configuration it is going to be `HTTP/my-elasticsearch.elastic.co@ELASTIC.CO`. + + +## Configuration options [_configuration_options_32] + +You can specify the following options in the `kerberos` section of the `filebeat.yml` config file: + + +### `enabled` [_enabled_37] + +The `enabled` setting can be used to enable the kerberos configuration by setting it to `false`. The default value is `true`. + +::::{note} +Kerberos settings are disabled if either `enabled` is set to `false` or the `kerberos` section is missing. +:::: + + + +### `auth_type` [_auth_type] + +There are two options to authenticate with Kerberos KDC: `password` and `keytab`. + +`password` expects the principal name and its password. When choosing `keytab`, you have to specify a principal name and a path to a keytab. The keytab must contain the keys of the selected principal. Otherwise, authentication will fail. + + +### `config_path` [_config_path] + +You need to set the path to the `krb5.conf`, so Filebeat can find the Kerberos KDC to retrieve a ticket. + + +### `username` [_username_5] + +Name of the principal used to connect to the output. + + +### `password` [_password_6] + +If you configured `password` for `auth_type`, you have to provide a password for the selected principal. + + +### `keytab` [_keytab] + +If you configured `keytab` for `auth_type`, you have to provide the path to the keytab of the selected principal. + + +### `service_name` [_service_name] + +This option can only be configured for Kafka. It is the name of the Kafka service, usually `kafka`. + + +### `realm` [_realm] + +Name of the realm where the output resides. + + +### `enable_krb5_fast` [_enable_krb5_fast] + +Enable Kerberos FAST authentication. This may conflict with some Active Directory installations. The default is `false`. 
+ diff --git a/docs/reference/filebeat/configuration-logging.md b/docs/reference/filebeat/configuration-logging.md new file mode 100644 index 000000000000..b92c2d52b98d --- /dev/null +++ b/docs/reference/filebeat/configuration-logging.md @@ -0,0 +1,253 @@ +--- +navigation_title: "Logging" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-logging.html +--- + +# Configure logging [configuration-logging] + + +The `logging` section of the `filebeat.yml` config file contains options for configuring the logging output. The logging system can write logs to the syslog or rotate log files. If logging is not explicitly configured the file output is used. + +```yaml +logging.level: info +logging.to_files: true +logging.files: + path: /var/log/filebeat + name: filebeat + keepfiles: 7 + permissions: 0640 +``` + +::::{tip} +In addition to setting logging options in the config file, you can modify the logging output configuration from the command line. See [Command reference](/reference/filebeat/command-line-options.md). +:::: + + +::::{warning} +When Filebeat is running on a Linux system with systemd, it uses by default the `-e` command line option, that makes it write all the logging output to stderr so it can be captured by journald. Other outputs are disabled. See [Filebeat and systemd](/reference/filebeat/running-with-systemd.md) to know more and learn how to change this. +:::: + + + +## Configuration options [_configuration_options_38] + +You can specify the following options in the `logging` section of the `filebeat.yml` config file: + + +### `logging.to_stderr` [_logging_to_stderr] + +When true, writes all logging output to standard error output. This is equivalent to using the `-e` command line option. + + +### `logging.to_syslog` [_logging_to_syslog] + +When true, writes all logging output to the syslog. + +::::{note} +This option is not supported on Windows. +:::: + + + +### `logging.to_eventlog` [_logging_to_eventlog] + +When true, writes all logging output to the Windows Event Log. + + +### `logging.to_files` [_logging_to_files] + +When true, writes all logging output to files. The log files are automatically rotated when the log file size limit is reached. + +::::{note} +Filebeat only creates a log file if there is logging output. For example, if you set the log [`level`](#level) to `error` and there are no errors, there will be no log file in the directory specified for logs. +:::: + + + +### `logging.level` [level] + +Minimum log level. One of `debug`, `info`, `warning`, or `error`. The default log level is `info`. + +`debug` +: Logs debug messages, including a detailed printout of all events flushed. Also logs informational messages, warnings, errors, and critical errors. When the log level is `debug`, you can specify a list of [`selectors`](#selectors) to display debug messages for specific components. If no selectors are specified, the `*` selector is used to display debug messages for all components. + +`info` +: Logs informational messages, including the number of events that are published. Also logs any warnings, errors, or critical errors. + +`warning` +: Logs warnings, errors, and critical errors. + +`error` +: Logs errors and critical errors. + + +### `logging.selectors` [selectors] + +The list of debugging-only selector tags used by different Filebeat components. Use `*` to enable debug output for all components. Use `publisher` to display debug messages related to event publishing. 
+ +::::{tip} +The list of available selectors may change between releases, so avoid creating tests that depend on specific selectors. + +To see which selectors are available, run Filebeat in debug mode (set `logging.level: debug` in the configuration). The selector name appears after the log level and is enclosed in brackets. + +:::: + + +To configure multiple selectors, use the following [YAML list syntax](/reference/libbeat/config-file-format.md): + +```yaml +logging.selectors: [ harvester, input ] +``` + +To override selectors at the command line, use the `-d` global flag (`-d` also sets the debug log level). For more information, see [Command reference](/reference/filebeat/command-line-options.md). + + +### `logging.metrics.enabled` [_logging_metrics_enabled] + +By default, Filebeat periodically logs its internal metrics that have changed in the last period. For each metric that changed, the delta from the value at the beginning of the period is logged. Also, the total values for all non-zero internal metrics are logged on shutdown. Set this to false to disable this behavior. The default is true. + +Here is an example log line: + +```shell +2017-12-17T19:17:42.667-0500 INFO [metrics] log/log.go:110 Non-zero metrics in the last 30s: beat.info.uptime.ms=30004 beat.memstats.gc_next=5046416 +``` + +Note that we currently offer no backwards compatible guarantees for the internal metrics and for this reason they are also not documented. + + +### `logging.metrics.period` [_logging_metrics_period] + +The period after which to log the internal metrics. The default is 30s. + + +### `logging.metrics.namespaces` [_logging_metrics_namespaces] + +A list of metrics namespaces to report in the logs. Defaults to `[stats]`. `stats` contains general Beat metrics. `dataset` and `inputs` may be present in some Beats and contains module or input metrics. + + +### `logging.files.path` [_logging_files_path] + +The directory that log files are written to. The default is the logs path. See the [Directory layout](/reference/filebeat/directory-layout.md) section for details. + + +### `logging.files.name` [_logging_files_name] + +The name of the file that logs are written to. The default is *filebeat*. + + +### `logging.files.rotateeverybytes` [_logging_files_rotateeverybytes] + +The maximum size of a log file. If the limit is reached, a new log file is generated. The default size limit is 10485760 (10 MB). + + +### `logging.files.keepfiles` [_logging_files_keepfiles] + +The number of most recent rotated log files to keep on disk. Older files are deleted during log rotation. The default value is 7. The `keepfiles` options has to be in the range of 2 to 1024 files. + + +### `logging.files.permissions` [_logging_files_permissions] + +The permissions mask to apply when rotating log files. The default value is 0600. The `permissions` option must be a valid Unix-style file permissions mask expressed in octal notation. In Go, numbers in octal notation must start with *0*. + +The most permissive mask allowed is 0640. If a higher permissions mask is specified via this setting, it will be subject to an umask of 0027. + +This option is not supported on Windows. + +Examples: + +* 0640: give read and write access to the file owner, and read access to members of the group associated with the file. +* 0600: give read and write access to the file owner, and no access to all others. + + +### `logging.files.interval` [_logging_files_interval] + +Enable log file rotation on time intervals in addition to size-based rotation. 
Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h are boundary-aligned with minutes, hours, days, weeks, months, and years as reported by the local system clock. All other intervals are calculated from the unix epoch. Defaults to disabled. + + +### `logging.files.rotateonstartup` [_logging_files_rotateonstartup] + +If the log file already exists on startup, immediately rotate it and start writing to a new file instead of appending to the existing one. Defaults to true. + + +### `logging.files.redirect_stderr` [preview] [_logging_files_redirect_stderr] + +When true, diagnostic messages printed to Filebeat’s standard error output will also be logged to the log file. This can be helpful in situations were Filebeat terminates unexpectedly because an error has been detected by Go’s runtime but diagnostic information is not present in the log file. This feature is only available when logging to files (`logging.to_files` is true). Disabled by default. + + +## Logging format [_logging_format] + +The logging format is generally the same for each logging output. The one exception is with the syslog output where the timestamp is not included in the message because syslog adds its own timestamp. + +Each log message consists of the following parts: + +* Timestamp in ISO8601 format +* Level +* Logger name contained in brackets (Optional) +* File name and line number of the caller +* Message +* Structured data encoded in JSON (Optional) + +Below are some samples: + +`2017-12-17T18:54:16.241-0500 INFO logp/core_test.go:13 unnamed global logger` + +`2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:16 some message` + +`2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:19 some message {"x": 1}` + + +## Configuration options for event_data logger [_configuration_options_for_event_data_logger] + +Some outputs will log raw events on errors like indexing errors in the Elasticsearch output, to prevent logging raw events (that may contain sensitive information) together with other log messages, a different log file, only for log entries containing raw events, is used. It will use the same level, selectors and all other configurations from the default logger, but it will have it’s own file configuration. + +Having a different log file for raw events also prevents event data from drowning out the regular log files. + +::::{important} +No matter the default logger output configuration, raw events will **always** be logged to a file configured by `logging.event_data.files`. +:::: + + + +### `logging.event_data.files.path` [_logging_event_data_files_path] + +The directory that log files are written to. The default is the logs path. See the [Directory layout](/reference/filebeat/directory-layout.md) section for details. + + +### `logging.event_data.files.name` [_logging_event_data_files_name] + +The name of the file that logs are written to. The default is *filebeat*-events-data. + + +### `logging.event_data.files.rotateeverybytes` [_logging_event_data_files_rotateeverybytes] + +The maximum size of a log file. If the limit is reached, a new log file is generated. The default size limit is 5242880 (5 MB). + + +### `logging.event_data.files.keepfiles` [_logging_event_data_files_keepfiles] + +The number of most recent rotated log files to keep on disk. Older files are deleted during log rotation. The default value is 2. The `keepfiles` options has to be in the range of 2 to 1024 files. 
+ + +### `logging.event_data.files.permissions` [_logging_event_data_files_permissions] + +The permissions mask to apply when rotating log files. The default value is 0600. The `permissions` option must be a valid Unix-style file permissions mask expressed in octal notation. In Go, numbers in octal notation must start with *0*. + +The most permissive mask allowed is 0640. If a higher permissions mask is specified via this setting, it will be subject to an umask of 0027. + +This option is not supported on Windows. + +Examples: + +* 0640: give read and write access to the file owner, and read access to members of the group associated with the file. +* 0600: give read and write access to the file owner, and no access to all others. + + +### `logging.event_data.files.interval` [_logging_event_data_files_interval] + +Enable log file rotation on time intervals in addition to size-based rotation. Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h are boundary-aligned with minutes, hours, days, weeks, months, and years as reported by the local system clock. All other intervals are calculated from the unix epoch. Defaults to disabled. + + +### `logging.event_data.files.rotateonstartup` [_logging_event_data_files_rotateonstartup] + +If the log file already exists on startup, immediately rotate it and start writing to a new file instead of appending to the existing one. Defaults to false. diff --git a/docs/reference/filebeat/configuration-monitor.md b/docs/reference/filebeat/configuration-monitor.md new file mode 100644 index 000000000000..e811628350b8 --- /dev/null +++ b/docs/reference/filebeat/configuration-monitor.md @@ -0,0 +1,113 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-monitor.html +--- + +# Settings for internal collection [configuration-monitor] + +Use the following settings to configure internal collection when you are not using {{metricbeat}} to collect monitoring data. + +You specify these settings in the X-Pack monitoring section of the `filebeat.yml` config file: + +## `monitoring.enabled` [_monitoring_enabled] + +The `monitoring.enabled` config is a boolean setting to enable or disable {{monitoring}}. If set to `true`, monitoring is enabled. + +The default value is `false`. + + +## `monitoring.elasticsearch` [_monitoring_elasticsearch] + +The {{es}} instances that you want to ship your Filebeat metrics to. This configuration option contains the following fields: + + +## `monitoring.cluster_uuid` [_monitoring_cluster_uuid] + +The `monitoring.cluster_uuid` config identifies the {{es}} cluster under which the monitoring data will appear in the Stack Monitoring UI. + +### `api_key` [_api_key_3] + +The detail of the API key to be used to send monitoring information to {{es}}. See [*Grant access using API keys*](/reference/filebeat/beats-api-keys.md) for more information. + + +### `bulk_max_size` [_bulk_max_size_5] + +The maximum number of metrics to bulk in a single {{es}} bulk API index request. The default is `50`. For more information, see [Elasticsearch](/reference/filebeat/elasticsearch-output.md). + + +### `backoff.init` [_backoff_init_5] + +The number of seconds to wait before trying to reconnect to Elasticsearch after a network error. After waiting `backoff.init` seconds, Filebeat tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is 1s. 
+ + +### `backoff.max` [_backoff_max_5] + +The maximum number of seconds to wait before attempting to connect to Elasticsearch after a network error. The default is 60s. + + +### `compression_level` [_compression_level_3] + +The gzip compression level. Setting this value to `0` disables compression. The compression level must be in the range of `1` (best speed) to `9` (best compression). The default value is `0`. Increasing the compression level reduces the network usage but increases the CPU usage. + + +### `headers` [_headers_3] + +Custom HTTP headers to add to each request. For more information, see [Elasticsearch](/reference/filebeat/elasticsearch-output.md). + + +### `hosts` [_hosts_5] + +The list of {{es}} nodes to connect to. Monitoring metrics are distributed to these nodes in round robin order. For more information, see [Elasticsearch](/reference/filebeat/elasticsearch-output.md). + + +### `max_retries` [_max_retries_5] + +The number of times to retry sending the monitoring metrics after a failure. After the specified number of retries, the metrics are typically dropped. The default value is `3`. For more information, see [Elasticsearch](/reference/filebeat/elasticsearch-output.md). + + +### `parameters` [_parameters_2] + +Dictionary of HTTP parameters to pass within the url with index operations. + + +### `password` [_password_7] + +The password that Filebeat uses to authenticate with the {{es}} instances for shipping monitoring data. + + +### `metrics.period` [_metrics_period] + +The time interval (in seconds) when metrics are sent to the {{es}} cluster. A new snapshot of Filebeat metrics is generated and scheduled for publishing each period. The default value is 10 * time.Second. + + +### `state.period` [_state_period] + +The time interval (in seconds) when state information are sent to the {{es}} cluster. A new snapshot of Filebeat state is generated and scheduled for publishing each period. The default value is 60 * time.Second. + + +### `protocol` [_protocol] + +The name of the protocol to use when connecting to the {{es}} cluster. The options are: `http` or `https`. The default is `http`. If you specify a URL for `hosts`, however, the value of protocol is overridden by the scheme you specify in the URL. + + +### `proxy_url` [_proxy_url_5] + +The URL of the proxy to use when connecting to the {{es}} cluster. For more information, see [Elasticsearch](/reference/filebeat/elasticsearch-output.md). + + +### `timeout` [_timeout_6] + +The HTTP request timeout in seconds for the {{es}} request. The default is `90`. + + +### `ssl` [_ssl_9] + +Configuration options for Transport Layer Security (TLS) or Secure Sockets Layer (SSL) parameters like the certificate authority (CA) to use for HTTPS-based connections. If the `ssl` section is missing, the host CAs are used for HTTPS connections to {{es}}. For more information, see [SSL](/reference/filebeat/configuration-ssl.md). + + +### `username` [_username_6] + +The user ID that Filebeat uses to authenticate with the {{es}} instances for shipping monitoring data. 
+ + + diff --git a/docs/reference/filebeat/configuration-output-codec.md b/docs/reference/filebeat/configuration-output-codec.md new file mode 100644 index 000000000000..cd7a2be3ddcf --- /dev/null +++ b/docs/reference/filebeat/configuration-output-codec.md @@ -0,0 +1,32 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-output-codec.html +--- + +# Change the output codec [configuration-output-codec] + +For outputs that do not require a specific encoding, you can change the encoding by using the codec configuration. You can specify either the `json` or `format` codec. By default the `json` codec is used. + +**`json.pretty`**: If `pretty` is set to true, events will be nicely formatted. The default is false. + +**`json.escape_html`**: If `escape_html` is set to true, html symbols will be escaped in strings. The default is false. + +Example configuration that uses the `json` codec with pretty printing enabled to write events to the console: + +```yaml +output.console: + codec.json: + pretty: true + escape_html: false +``` + +**`format.string`**: Configurable format string used to create a custom formatted message. + +Example configurable that uses the `format` codec to print the events timestamp and message field to console: + +```yaml +output.console: + codec.format: + string: '%{[@timestamp]} %{[message]}' +``` + diff --git a/docs/reference/filebeat/configuration-path.md b/docs/reference/filebeat/configuration-path.md new file mode 100644 index 000000000000..6a98c88ce5fc --- /dev/null +++ b/docs/reference/filebeat/configuration-path.md @@ -0,0 +1,78 @@ +--- +navigation_title: "Project paths" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-path.html +--- + +# Configure project paths [configuration-path] + + +The `path` section of the `filebeat.yml` config file contains configuration options that define where Filebeat looks for its files. For example, Filebeat looks for the Elasticsearch template file in the configuration path and writes log files in the logs path. Filebeat looks for its registry files in the data path. + +Please see the [Directory layout](/reference/filebeat/directory-layout.md) section for more details. + +Here is an example configuration: + +```yaml +path.home: /usr/share/beat +path.config: /etc/beat +path.data: /var/lib/beat +path.logs: /var/log/ +``` + +Note that it is possible to override these options by using command line flags. + + +## Configuration options [_configuration_options_24] + +You can specify the following options in the `path` section of the `filebeat.yml` config file: + + +### `home` [_home] + +The home path for the Filebeat installation. This is the default base path for all other path settings and for miscellaneous files that come with the distribution (for example, the sample dashboards). If not set by a CLI flag or in the configuration file, the default for the home path is the location of the Filebeat binary. + +Example: + +```yaml +path.home: /usr/share/beats +``` + + +### `config` [_config] + +The configuration path for the Filebeat installation. This is the default base path for configuration files, including the main YAML configuration file and the Elasticsearch template file. If not set by a CLI flag or in the configuration file, the default for the configuration path is the home path. + +Example: + +```yaml +path.config: /usr/share/beats/config +``` + + +### `data` [_data] + +The data path for the Filebeat installation. 
This is the default base path for all the files in which Filebeat needs to store its data. If not set by a CLI flag or in the configuration file, the default for the data path is a `data` subdirectory inside the home path. + +Example: + +```yaml +path.data: /var/lib/beats +``` + +::::{tip} +When running multiple Filebeat instances on the same host, make sure they each have a distinct `path.data` value. +:::: + + + +### `logs` [_logs] + +The logs path for a Filebeat installation. This is the default location for Filebeat’s log files. If not set by a CLI flag or in the configuration file, the default for the logs path is a `logs` subdirectory inside the home path. + +Example: + +```yaml +path.logs: /var/log/beats +``` + diff --git a/docs/reference/filebeat/configuration-ssl.md b/docs/reference/filebeat/configuration-ssl.md new file mode 100644 index 000000000000..9b4285ec9390 --- /dev/null +++ b/docs/reference/filebeat/configuration-ssl.md @@ -0,0 +1,502 @@ +--- +navigation_title: "SSL" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-ssl.html +--- + +# Configure SSL [configuration-ssl] + + +You can specify SSL options when you configure: + +* [outputs](/reference/filebeat/configuring-output.md) that support SSL +* the [Kibana endpoint](/reference/filebeat/setup-kibana-endpoint.md) + +Example output config with SSL enabled: + +```yaml +output.elasticsearch.hosts: ["https://192.168.1.42:9200"] +output.elasticsearch.ssl.certificate_authorities: ["/etc/client/ca.pem"] +output.elasticsearch.ssl.certificate: "/etc/client/cert.pem" +output.elasticsearch.ssl.key: "/etc/client/cert.key" +``` + +Also see [*Secure communication with Logstash*](/reference/filebeat/configuring-ssl-logstash.md). + +Example Kibana endpoint config with SSL enabled: + +```yaml +setup.kibana.host: "https://192.0.2.255:5601" +setup.kibana.ssl.enabled: true +setup.kibana.ssl.certificate_authorities: ["/etc/client/ca.pem"] +setup.kibana.ssl.certificate: "/etc/client/cert.pem" +setup.kibana.ssl.key: "/etc/client/cert.key" +``` + +There are a number of SSL configuration options available to you: + +* [Common configuration options](#ssl-common-config) +* [Client configuration options](#ssl-client-config) +* [Server configuration options](#ssl-server-config) + + +## Common configuration options [ssl-common-config] + +Common SSL configuration options can be used in both client and server configurations. You can specify the following options in the `ssl` section of each subsystem that supports SSL. + + +### `enabled` [enabled] + +To disable SSL configuration, set the value to `false`. The default value is `true`. + +::::{note} +SSL settings are disabled if either `enabled` is set to `false` or the `ssl` section is missing. + +:::: + + + +### `supported_protocols` [supported-protocols] + +List of allowed SSL/TLS versions. If SSL/TLS server decides for protocol versions not configured, the connection will be dropped during or after the handshake. The setting is a list of allowed protocol versions: `TLSv1.1`, `TLSv1.2`, and `TLSv1.3`. + +The default value is `[TLSv1.2, TLSv1.3]`. + + +### `cipher_suites` [cipher-suites] + +The list of cipher suites to use. The first entry has the highest priority. If this option is omitted, the Go crypto library’s [default suites](https://golang.org/pkg/crypto/tls/) are used (recommended). 
+ +Note that if TLS 1.3 is enabled (which is true by default), then the default TLS 1.3 cipher suites are always included, because Go’s standard library adds them to all connections. In order to exclude the default TLS 1.3 ciphers, TLS 1.3 must also be disabled, e.g. with the setting `ssl.supported_protocols = [TLSv1.2]`. + +The following cipher suites are available: + +| Cypher | Notes | +| --- | --- | +| ECDHE-ECDSA-AES-128-CBC-SHA | | +| ECDHE-ECDSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default. | +| ECDHE-ECDSA-AES-128-GCM-SHA256 | TLS 1.2 only. | +| ECDHE-ECDSA-AES-256-CBC-SHA | | +| ECDHE-ECDSA-AES-256-GCM-SHA384 | TLS 1.2 only. | +| ECDHE-ECDSA-CHACHA20-POLY1305 | TLS 1.2 only. | +| ECDHE-ECDSA-RC4-128-SHA | Disabled by default. RC4 not recommended. | +| ECDHE-RSA-3DES-CBC3-SHA | | +| ECDHE-RSA-AES-128-CBC-SHA | | +| ECDHE-RSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default. | +| ECDHE-RSA-AES-128-GCM-SHA256 | TLS 1.2 only. | +| ECDHE-RSA-AES-256-CBC-SHA | | +| ECDHE-RSA-AES-256-GCM-SHA384 | TLS 1.2 only. | +| ECDHE-RSA-CHACHA20-POLY1205 | TLS 1.2 only. | +| ECDHE-RSA-RC4-128-SHA | Disabled by default. RC4 not recommended. | +| RSA-3DES-CBC3-SHA | | +| RSA-AES-128-CBC-SHA | | +| RSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default. | +| RSA-AES-128-GCM-SHA256 | TLS 1.2 only. | +| RSA-AES-256-CBC-SHA | | +| RSA-AES-256-GCM-SHA384 | TLS 1.2 only. | +| RSA-RC4-128-SHA | Disabled by default. RC4 not recommended. | + +Here is a list of acronyms used in defining the cipher suites: + +* 3DES: Cipher suites using triple DES +* AES-128/256: Cipher suites using AES with 128/256-bit keys. +* CBC: Cipher using Cipher Block Chaining as block cipher mode. +* ECDHE: Cipher suites using Elliptic Curve Diffie-Hellman (DH) ephemeral key exchange. +* ECDSA: Cipher suites using Elliptic Curve Digital Signature Algorithm for authentication. +* GCM: Galois/Counter mode is used for symmetric key cryptography. +* RC4: Cipher suites using RC4. +* RSA: Cipher suites using RSA. +* SHA, SHA256, SHA384: Cipher suites using SHA-1, SHA-256 or SHA-384. + + +### `curve_types` [curve-types] + +The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange). + +The following elliptic curve types are available: + +* P-256 +* P-384 +* P-521 +* X25519 + + +### `ca_sha256` [ca-sha256] + +This configures a certificate pin that you can use to ensure that a specific certificate is part of the verified chain. + +The pin is a base64 encoded string of the SHA-256 of the certificate. + +::::{note} +This check is not a replacement for the normal SSL validation, but it adds additional validation. If this option is used with `verification_mode` set to `none`, the check will always fail because it will not receive any verified chains. +:::: + + + +## Client configuration options [ssl-client-config] + +You can specify the following options in the `ssl` section of each subsystem that supports SSL. + + +### `certificate_authorities` [client-certificate-authorities] + +The list of root certificates for verifications is required. If `certificate_authorities` is empty or not set, the system keystore is used. If `certificate_authorities` is self-signed, the host system needs to trust that CA cert as well. 
+ +By default you can specify a list of files that `filebeat` will read, but you can also embed a certificate directly in the `YAML` configuration: + +```yaml +certificate_authorities: + - | + -----BEGIN CERTIFICATE----- + MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF + ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2 + MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB + BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n + fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl + 94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t + /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP + PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41 + CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O + BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux + 8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D + 874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw + 3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA + H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu + 8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0 + yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk + sxSmbIUfc2SGJGCJD4I= + -----END CERTIFICATE----- +``` + + +### `certificate: "/etc/client/cert.pem"` [client-certificate] + +The path to the certificate for SSL client authentication is only required if `client_authentication` is specified. If the certificate is not specified, client authentication is not available. The connection might fail if the server requests client authentication. If the SSL server does not require client authentication, the certificate will be loaded, but not requested or used by the server. + +When this option is configured, the [`key`](#client-key) option is also required. The certificate option support embedding of the certificate: + +```yaml +certificate: | + -----BEGIN CERTIFICATE----- + MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF + ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2 + MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB + BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n + fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl + 94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t + /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP + PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41 + CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O + BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux + 8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D + 874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw + 3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA + H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu + 8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0 + yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk + sxSmbIUfc2SGJGCJD4I= + -----END CERTIFICATE----- +``` + + +### `key: "/etc/client/cert.key"` [client-key] + +The client certificate key used for client authentication and is only required if `client_authentication` is configured. 
The key option support embedding of the private key: + +```yaml +key: | + -----BEGIN PRIVATE KEY----- + MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDXHufGPycpCOfI + sjl6cRn8NP4DLxdIVEAHFK0jMRDup32UQOPW+DleEsFpgN9/ebi9ngdjQfMvKnUP + Zrl1HTwVhOJfazGeoJn7vdDeQebhJfeDXHwX2DiotXyUPYu1ioU45UZDAoAZFj5F + KJLwWRUbfEbRe8yO+wUhKKxxkApPbfw+wUtBicn1RIX7W1nBRABt1UXKDIRe5FM2 + MKfqhEqK4hUWC3g1r+vGTrxu3qFpzz7L2UrRFRIpo7yuTUhEhEGvcVsiTppTil4Z + HcprXFHf5158elEwhYJ5IM0nU1leNQiOgemifbLwkyNkLqCKth8V/4sezr1tYblZ + nMh1cclBAgMBAAECggEBAKdP5jyOicqknoG9/G564RcDsDyRt64NuO7I6hBg7SZx + Jn7UKWDdFuFP/RYtoabn6QOxkVVlydp5Typ3Xu7zmfOyss479Q/HIXxmmbkD0Kp0 + eRm2KN3y0b6FySsS40KDRjKGQCuGGlNotW3crMw6vOvvsLTlcKgUHF054UVCHoK/ + Piz7igkDU7NjvJeha53vXL4hIjb10UtJNaGPxIyFLYRZdRPyyBJX7Yt3w8dgz8WM + epOPu0dq3bUrY3WQXcxKZo6sQjE1h7kdl4TNji5jaFlvD01Y8LnyG0oThOzf0tve + Gaw+kuy17gTGZGMIfGVcdeb+SlioXMAAfOps+mNIwTECgYEA/gTO8W0hgYpOQJzn + BpWkic3LAoBXWNpvsQkkC3uba8Fcps7iiEzotXGfwYcb5Ewf5O3Lrz1EwLj7GTW8 + VNhB3gb7bGOvuwI/6vYk2/dwo84bwW9qRWP5hqPhNZ2AWl8kxmZgHns6WTTxpkRU + zrfZ5eUrBDWjRU2R8uppgRImsxMCgYEA2MxuL/C/Ko0d7XsSX1kM4JHJiGpQDvb5 + GUrlKjP/qVyUysNF92B9xAZZHxxfPWpdfGGBynhw7X6s+YeIoxTzFPZVV9hlkpAA + 5igma0n8ZpZEqzttjVdpOQZK8o/Oni/Q2S10WGftQOOGw5Is8+LY30XnLvHBJhO7 + TKMurJ4KCNsCgYAe5TDSVmaj3dGEtFC5EUxQ4nHVnQyCpxa8npL+vor5wSvmsfUF + hO0s3GQE4sz2qHecnXuPldEd66HGwC1m2GKygYDk/v7prO1fQ47aHi9aDQB9N3Li + e7Vmtdn3bm+lDjtn0h3Qt0YygWj+wwLZnazn9EaWHXv9OuEMfYxVgYKpdwKBgEze + Zy8+WDm5IWRjn8cI5wT1DBT/RPWZYgcyxABrwXmGZwdhp3wnzU/kxFLAl5BKF22T + kRZ+D+RVZvVutebE9c937BiilJkb0AXLNJwT9pdVLnHcN2LHHHronUhV7vetkop+ + kGMMLlY0lkLfoGq1AxpfSbIea9KZam6o6VKxEnPDAoGAFDCJm+ZtsJK9nE5GEMav + NHy+PwkYsHhbrPl4dgStTNXLenJLIJ+Ke0Pcld4ZPfYdSyu/Tv4rNswZBNpNsW9K + 0NwJlyMBfayoPNcJKXrH/csJY7hbKviAHr1eYy9/8OL0dHf85FV+9uY5YndLcsDc + nygO9KTJuUiBrLr0AHEnqko= + -----END PRIVATE KEY----- +``` + + +### `key_passphrase` [client-key-passphrase] + +The passphrase used to decrypt an encrypted key stored in the configured `key` file. + + +### `verification_mode` [client-verification-mode] + +Controls the verification of server certificates. Valid values are: + +`full` +: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate. + +`strict` +: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate. If the Subject Alternative Name is empty, it returns an error. + +`certificate` +: Verifies that the provided certificate is signed by a trusted authority (CA), but does not perform any hostname verification. + +`none` +: Performs *no verification* of the server’s certificate. This mode disables many of the security benefits of SSL/TLS and should only be used after cautious consideration. It is primarily intended as a temporary diagnostic mechanism when attempting to resolve TLS errors; its use in production environments is strongly discouraged. + + The default value is `full`. + + + +### `ca_trusted_fingerprint` [ca_trusted_fingerprint] + +A HEX encoded SHA-256 of a CA certificate. If this certificate is present in the chain during the handshake, it will be added to the `certificate_authorities` list and the handshake will continue normaly. + +To get the fingerprint from a CA certificate on a Unix-like system, you can use the following command, where `ca.crt` is the certificate. 
+
+```shell
+openssl x509 -fingerprint -sha256 -noout -in ./ca.crt | awk --field-separator="=" '{print $2}' | sed 's/://g'
+```
+
+
+## Server configuration options [ssl-server-config]
+
+You can specify the following options in the `ssl` section of each subsystem that supports SSL.
+
+
+### `certificate_authorities` [server-certificate-authorities]
+
+The list of root certificates for client verification is only required if `client_authentication` is configured. If `certificate_authorities` is empty or not set, and `client_authentication` is configured, the system keystore is used.
+
+If a certificate in `certificate_authorities` is self-signed, the host system needs to trust that CA certificate as well. By default you can specify a list of files that `filebeat` will read, but you can also embed a certificate directly in the `YAML` configuration:
+
+```yaml
+certificate_authorities:
+  - |
+    -----BEGIN CERTIFICATE-----
+    MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF
+    ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2
+    MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB
+    BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n
+    fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl
+    94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t
+    /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP
+    PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41
+    CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O
+    BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux
+    8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D
+    874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw
+    3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA
+    H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu
+    8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0
+    yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk
+    sxSmbIUfc2SGJGCJD4I=
+    -----END CERTIFICATE-----
+```
+
+
+### `certificate: "/etc/server/cert.pem"` [server-certificate]
+
+The end-entity (leaf) certificate that the server uses to identify itself. If the certificate is signed by a certificate authority (CA), then it should include intermediate CA certificates, sorted from leaf to root. For servers, a `certificate` and [`key`](#server-key) must be specified.
+
+The certificate option supports embedding of the PEM certificate content. This example contains the leaf certificate followed by the issuer’s certificate. 
+ +```yaml +certificate: | + -----BEGIN CERTIFICATE----- + MIIF2jCCA8KgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBlMQswCQYDVQQGEwJVUzEW + MBQGA1UEBxMNU2FuIEZyYW5jaXNjbzEcMBoGA1UECRMTV2VzdCBFbCBDYW1pbm8g + UmVhbDEOMAwGA1UEERMFOTQwNDAxEDAOBgNVBAoTB0VsYXN0aWMwHhcNMjMxMDMw + MTkyMzU4WhcNMjMxMDMxMTkyMzU4WjB2MQswCQYDVQQGEwJVUzEWMBQGA1UEBxMN + U2FuIEZyYW5jaXNjbzEcMBoGA1UECRMTV2VzdCBFbCBDYW1pbm8gUmVhbDEOMAwG + A1UEERMFOTQwNDAxEDAOBgNVBAoTB0VsYXN0aWMxDzANBgNVBAMTBnNlcnZlcjCC + AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALW37cart7l0KE3LCStFbiGm + Rr/QSkuPv+Y+SXFT4zXrMFP3mOfUCVsR4lugv+jmql9qjbwR9jKsgKXA1kSvNXSZ + lLYWRcNnQ+QzwKxJf/jy246nSfqb2FKvVMs580lDwKHHxn/FSpHV93O4Goy5cLfF + ACE7BSdJdxl5DVAMmmkzd6gBGgN8dQIbcyJYuIZYQt44PqSYh/BomTyOXKrmvX4y + t7/pF+ldJjWZq/6SfCq6WE0jSrpI1P/42Qd9h5Tsnl6qsUGA2Tz5ZqKz2cyxaIlK + wL9tYDionfFIl+jZcxkGPF2a14O1TycCI0B/z+0VL+HR/8fKAB0NdP+QRLaPWOrn + DvraAO+bVKC6VrQyUYNUOwtd2gMUqm6Hzrf4s3wjP754eSJkvnSoSAB6l7ZmJKe5 + Pz5oDDOVPwKHv/MrhsCSMNFeXSEO+rq9TtYEAFQI5rFGHlURga8kA1T1pirHyEtS + 2o8GUSPSHVulaPdFnHg4xfTexfRYLCqya75ISJuY2/+2GblCie/re1GFitZCZ46/ + xiQQDOjgL96soDVZ+cTtMpXanslgDapTts9LPIJTd9FUJCY1omISGiSjABRuTlCV + 8054ja4BKVahSd5BqqtVkWyV64SCut6kce2ndwBkyFvlZ6cteLCW7KtzYvba4XBb + YIAs+H+9e/bZUVhws5mFAgMBAAGjgYMwgYAwDgYDVR0PAQH/BAQDAgeAMB0GA1Ud + JQQWMBQGCCsGAQUFBwMCBggrBgEFBQcDATAOBgNVHQ4EBwQFAQIDBAUwPwYDVR0R + BDgwNoIJbG9jYWxob3N0ghFiZWF0cy5leGFtcGxlLmNvbYcEfwAAAYcQAAAAAAAA + AAAAAAAAAAAAATANBgkqhkiG9w0BAQsFAAOCAgEAldSZOUi+OUR46ERQuINl1oED + mjNsQ9FNP/RDu8mPJaNb5v2sAbcpuZb9YdnScT+d+n0+LMd5uz2g67Qr73QCpXwL + 9YJIs56i7qMTKXlVvRQrvF9P/zP3sm5Zfd2I/x+8oXgEeYsxAWipJ8RsbnN1dtu8 + C4l+P0E58jjrjom11W90RiHYaT0SI2PPBTTRhYLz0HayThPZDMdFnIQqVxUYbQD5 + ybWu77hnsvC/g2C8/N2LAdQGJJ67owMa5T3YRneiaSvvOf3I45oeLE+olGAPdrSq + 5Sp0G7fcAKMRPxcwYeD7V5lfYMtb+RzECpYAHT8zHKLZl6/34q2k8P8EWEpAsD80 + +zSbCkdvNiU5lU90rV8E2baTKCg871k4O8sT48eUyDps6ZUCfT1dgefXeyOTV5bY + 864Zo6bWJhAJ7Qa2d4HJkqPzSbqsosHVobojgkOcMqkStLHd8sgtCoFmJMflbp7E + ghawl/RVFEkL9+TWy9fR8sJWRx13P8CUP6AL9kVmcU2c3gMNpvQfIii9QOnQrRsi + yZj9FKl+ZM49I6RQ6dY5JVgWtpVm/+GBVuy1Aj91JEjw7r1jAeir5K9LAXG8kEN9 + irndx1SK2MMTY79lGHFGQRv3vnQGI0Wzjtn31YJ7qIFNJ1WWbAZLR9FBtzmMeXM6 + puoJ9UYvfIcHUGPdZGU= + -----END CERTIFICATE----- + -----BEGIN CERTIFICATE----- + MIIFpjCCA46gAwIBAgIBATANBgkqhkiG9w0BAQsFADBlMQswCQYDVQQGEwJVUzEW + MBQGA1UEBxMNU2FuIEZyYW5jaXNjbzEcMBoGA1UECRMTV2VzdCBFbCBDYW1pbm8g + UmVhbDEOMAwGA1UEERMFOTQwNDAxEDAOBgNVBAoTB0VsYXN0aWMwHhcNMjMxMDMw + MTkyMzU2WhcNMjMxMDMxMTkyMzU2WjBlMQswCQYDVQQGEwJVUzEWMBQGA1UEBxMN + U2FuIEZyYW5jaXNjbzEcMBoGA1UECRMTV2VzdCBFbCBDYW1pbm8gUmVhbDEOMAwG + A1UEERMFOTQwNDAxEDAOBgNVBAoTB0VsYXN0aWMwggIiMA0GCSqGSIb3DQEBAQUA + A4ICDwAwggIKAoICAQDQP3hJt4jTIo+tBXB/R4RuBTvv6OOago9joxlNDm0abseJ + ehE0V8FDi0SSpa7ZiqwCGq/deu5OIWVNpFCLHeH5YBriNmB7oPkNRCleu50JsUrG + RjSTtBIJcu/CVpD7Q5XMbhbhYcPArrxrSreo3ox8a+2X7b8nA1xPgIcWqSCgs9iV + lwKHaQWNTUXYwwZG7b9WG4EJaki6t1+1QbDDJU0oWrZNg23wQEBvEVRDQs7kadvm + 9YtZLPULlSyV4Rk3yNW8dPXHjcz2wp3PBPIWIQe9mzYU608307TkUMVN2EEOImxl + Wm1RtXYvvVb1LiY0C2lYbN3jLZQzffK5RsS87ocqTQM+HvDBv/PupHDvW08wietu + RtRbdx/2cN0GLmOHnkWKx+GlYDZfAtIj958fTKl2hHyNqJ1pE7vksSYBwBxMFQem + eSGzw5pO53kmPcZO203YQ2qoJd7z1aLf7eAOqDn5zwlYNc00bZ6DwTZsyptGv9sZ + zcZuovppPgCN4f1I9ja/NPKep+sVKfQqR5HuOFOPFcr6oOioESJSgIvXXF9RhCVh + UMeZKWWSCNm1ea4h6q8OJdQfM7XXkXm+dEyF0TogC00CidZWuYMZcgXND5p/1Di5 + PkCKPUMllCoK0oaTfFioNW7qtNbDGQrW+spwDa4kjJNKYtDD0jjPgFMgSzQ2MwID + AQABo2EwXzAOBgNVHQ8BAf8EBAMCAoQwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsG + AQUFBwMBMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFImOXc9Tv+mgn9jOsPig + 9vlAUTa+MA0GCSqGSIb3DQEBCwUAA4ICAQBZ9tqU88Nmgf+vDgkKMKLmLMaRCRlV 
+ HcYrm7WoWLX+q6VSbmvf5eD5OrzzbAnnp16iXap8ivsAEFTo8XWh/bjl7G/2jetR + xZD2WHtzmAg3s4SVsEHIyFUF1ERwnjO2ndHjoIsx8ktUk1aNrmgPI6s07fkULDm+ + 2aXyBSZ9/oimZM/s3IqYJecxwE+yyS+FiS6mSDCCVIyQXdtVAbFHegyiBYv8EbwF + Xz70QiqQtxotGlfts/3uN1s+xnEoWz5E6S5DQn4xQh0xiKSXPizMXou9xKzypeSW + qtNdwtg62jKWDaVriBfrvoCnyjjCIjmcTcvA2VLmeZShyTuIucd0lkg2NKIGeM7I + o33hmdiKaop1fVtj8zqXvCRa3ecmlvcxPKX0otVFORFNOfaPjH/CjW0CnP0LByGK + YW19w0ncJZa9cc1SlNL28lnBhW+i1+ViR02wtjabH9XO+mtxuaEPDZ1hLhhjktqI + Y2oFUso4C5xiTU/hrH8+cFv0dn/+zyQoLfJEQbUX9biFeytt7T4Yynwhdy7jryqH + fdy/QM26YnsE8D7l4mv99z+zII0IRGnQOuLTuNAIyGJUf69hCDubZFDeHV/IB9hU + 6GA6lBpsJlTDgfJLbtKuAHxdn1DO+uGg0GxgwggH6Vh9x9yQK2E6BaepJisL/zNB + RQQmEyTn1hn/eA== + -----END CERTIFICATE----- +``` + + +### `key: "/etc/server/cert.key"` [server-key] + +The server certificate key used for authentication is required. The key option supports embedding of the private key: + +```yaml +key: | + -----BEGIN PRIVATE KEY----- + MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDXHufGPycpCOfI + sjl6cRn8NP4DLxdIVEAHFK0jMRDup32UQOPW+DleEsFpgN9/ebi9ngdjQfMvKnUP + Zrl1HTwVhOJfazGeoJn7vdDeQebhJfeDXHwX2DiotXyUPYu1ioU45UZDAoAZFj5F + KJLwWRUbfEbRe8yO+wUhKKxxkApPbfw+wUtBicn1RIX7W1nBRABt1UXKDIRe5FM2 + MKfqhEqK4hUWC3g1r+vGTrxu3qFpzz7L2UrRFRIpo7yuTUhEhEGvcVsiTppTil4Z + HcprXFHf5158elEwhYJ5IM0nU1leNQiOgemifbLwkyNkLqCKth8V/4sezr1tYblZ + nMh1cclBAgMBAAECggEBAKdP5jyOicqknoG9/G564RcDsDyRt64NuO7I6hBg7SZx + Jn7UKWDdFuFP/RYtoabn6QOxkVVlydp5Typ3Xu7zmfOyss479Q/HIXxmmbkD0Kp0 + eRm2KN3y0b6FySsS40KDRjKGQCuGGlNotW3crMw6vOvvsLTlcKgUHF054UVCHoK/ + Piz7igkDU7NjvJeha53vXL4hIjb10UtJNaGPxIyFLYRZdRPyyBJX7Yt3w8dgz8WM + epOPu0dq3bUrY3WQXcxKZo6sQjE1h7kdl4TNji5jaFlvD01Y8LnyG0oThOzf0tve + Gaw+kuy17gTGZGMIfGVcdeb+SlioXMAAfOps+mNIwTECgYEA/gTO8W0hgYpOQJzn + BpWkic3LAoBXWNpvsQkkC3uba8Fcps7iiEzotXGfwYcb5Ewf5O3Lrz1EwLj7GTW8 + VNhB3gb7bGOvuwI/6vYk2/dwo84bwW9qRWP5hqPhNZ2AWl8kxmZgHns6WTTxpkRU + zrfZ5eUrBDWjRU2R8uppgRImsxMCgYEA2MxuL/C/Ko0d7XsSX1kM4JHJiGpQDvb5 + GUrlKjP/qVyUysNF92B9xAZZHxxfPWpdfGGBynhw7X6s+YeIoxTzFPZVV9hlkpAA + 5igma0n8ZpZEqzttjVdpOQZK8o/Oni/Q2S10WGftQOOGw5Is8+LY30XnLvHBJhO7 + TKMurJ4KCNsCgYAe5TDSVmaj3dGEtFC5EUxQ4nHVnQyCpxa8npL+vor5wSvmsfUF + hO0s3GQE4sz2qHecnXuPldEd66HGwC1m2GKygYDk/v7prO1fQ47aHi9aDQB9N3Li + e7Vmtdn3bm+lDjtn0h3Qt0YygWj+wwLZnazn9EaWHXv9OuEMfYxVgYKpdwKBgEze + Zy8+WDm5IWRjn8cI5wT1DBT/RPWZYgcyxABrwXmGZwdhp3wnzU/kxFLAl5BKF22T + kRZ+D+RVZvVutebE9c937BiilJkb0AXLNJwT9pdVLnHcN2LHHHronUhV7vetkop+ + kGMMLlY0lkLfoGq1AxpfSbIea9KZam6o6VKxEnPDAoGAFDCJm+ZtsJK9nE5GEMav + NHy+PwkYsHhbrPl4dgStTNXLenJLIJ+Ke0Pcld4ZPfYdSyu/Tv4rNswZBNpNsW9K + 0NwJlyMBfayoPNcJKXrH/csJY7hbKviAHr1eYy9/8OL0dHf85FV+9uY5YndLcsDc + nygO9KTJuUiBrLr0AHEnqko= + -----END PRIVATE KEY----- +``` + + +### `key_passphrase` [server-key-passphrase] + +The passphrase is used to decrypt an encrypted key stored in the configured `key` file. + + +### `verification_mode` [server-verification-mode] + +Controls the verification of client certificates. Valid values are: + +`full` +: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate. + +`strict` +: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate. If the Subject Alternative Name is empty, it returns an error. 
+
+`certificate`
+: Verifies that the provided certificate is signed by a trusted authority (CA), but does not perform any hostname verification.
+
+`none`
+: Performs *no verification* of the client’s certificate. This mode disables many of the security benefits of SSL/TLS and should only be used after cautious consideration. It is primarily intended as a temporary diagnostic mechanism when attempting to resolve TLS errors; its use in production environments is strongly discouraged.
+
+    The default value is `full`.
+
+
+
+### `renegotiation` [server-renegotiation]
+
+This configures what types of TLS renegotiation are supported. The valid options are:
+
+`never`
+: Disables renegotiation.
+
+`once`
+: Allows a remote server to request renegotiation once per connection.
+
+`freely`
+: Allows a remote server to request renegotiation repeatedly.
+
+    The default value is `never`.
+
+
+
+### `restart_on_cert_change.enabled` [exit_on_cert_change_enabled]
+
+If set to `true`, Filebeat restarts if any file listed by `key`, `certificate`, or `certificate_authorities` is modified.
+
+::::{note}
+This feature is NOT supported on Windows. The default value is `false`.
+::::
+
+
+::::{note}
+This feature requires the `execve` system call to be enabled. If you have a custom seccomp policy in place, make sure to allow for `execve`.
+::::
+
+
+
+### `restart_on_cert_change.period` [restart_on_cert_change_period]
+
+Specifies how often the files are checked for changes. Do not set the period to less than `1s` because the modification time of files is often stored in seconds. Setting the period to less than `1s` results in a validation error, and Filebeat will not start. The default value is `1m`.
+
+
+### `client_authentication` [server-client-renegotiation]
+
+The type of client authentication mode. When `certificate_authorities` is set, it defaults to `required`. Otherwise, it defaults to `none`.
+
+The valid options are:
+
+`none`
+: Disables client authentication.
+
+`optional`
+: When a client certificate is supplied, the server will verify it.
+
+`required`
+: Requires clients to provide a valid certificate.
+
diff --git a/docs/reference/filebeat/configuration-template.md b/docs/reference/filebeat/configuration-template.md
new file mode 100644
index 000000000000..f16c1e95363d
--- /dev/null
+++ b/docs/reference/filebeat/configuration-template.md
@@ -0,0 +1,112 @@
+---
+navigation_title: "Elasticsearch index template"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/configuration-template.html
+---
+
+# Configure Elasticsearch index template loading [configuration-template]
+
+
+The `setup.template` section of the `filebeat.yml` config file specifies the [index template](docs-content://manage-data/data-store/templates.md) to use for setting mappings in Elasticsearch. If template loading is enabled (the default), Filebeat loads the index template automatically after successfully connecting to Elasticsearch.
+
+::::{note}
+A connection to Elasticsearch is required to load the index template. If the configured output is not Elasticsearch (or {{ess}}), you must [load the template manually](/reference/filebeat/filebeat-template.md#load-template-manually).
+::::
+
+
+You can adjust the following settings to load your own template or overwrite an existing one.
+
+**`setup.template.enabled`**
+: Set to false to disable template loading. If this is set to false, you must [load the template manually](/reference/filebeat/filebeat-template.md#load-template-manually). 
+ +**`setup.template.name`** +: The name of the template. The default is `filebeat`. The Filebeat version is always appended to the given name, so the final name is `filebeat-%{[agent.version]}`. + +**`setup.template.pattern`** +: The template pattern to apply to the default index settings. The default pattern is `filebeat`. The Filebeat version is always included in the pattern, so the final pattern is `filebeat-%{[agent.version]}`. + + Example: + + ```yaml + setup.template.name: "filebeat" + setup.template.pattern: "filebeat" + ``` + + +**`setup.template.fields`** +: The path to the YAML file describing the fields. The default is `fields.yml`. If a relative path is set, it is considered relative to the config path. See the [Directory layout](/reference/filebeat/directory-layout.md) section for details. + +**`setup.template.overwrite`** +: A boolean that specifies whether to overwrite the existing template. The default is false. Do not enable this option if you start more than one instance of Filebeat at the same time. It can overload {{es}} by sending too many template update requests. + +**`setup.template.settings`** +: A dictionary of settings to place into the `settings.index` dictionary of the Elasticsearch template. For more details about the available Elasticsearch mapping options, please see the Elasticsearch [mapping reference](docs-content://manage-data/data-store/mapping.md). + + Example: + + ```yaml + setup.template.name: "filebeat" + setup.template.fields: "fields.yml" + setup.template.overwrite: false + setup.template.settings: + index.number_of_shards: 1 + index.number_of_replicas: 1 + ``` + + +**`setup.template.settings._source`** +: A dictionary of settings for the `_source` field. For the available settings, please see the Elasticsearch [reference](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md). + + Example: + + ```yaml + setup.template.name: "filebeat" + setup.template.fields: "fields.yml" + setup.template.overwrite: false + setup.template.settings: + _source.enabled: false + ``` + + +**`setup.template.append_fields`** +: A list of fields to be added to the template and {{kib}} index pattern. This setting adds new fields. It does not overwrite or change existing fields. + + This setting is useful when your data contains fields that Filebeat doesn’t know about in advance. + + If `append_fields` is specified along with `overwrite: true`, Filebeat overwrites the existing template and applies the new template when creating new indices. Existing indices are not affected. If you’re running multiple instances of Filebeat with different `append_fields` settings, the last one writing the template takes precedence. + + Any changes to this setting also affect the {{kib}} index pattern. + + Example config: + + ```yaml + setup.template.overwrite: true + setup.template.append_fields: + - name: test.name + type: keyword + - name: test.hostname + type: long + ``` + + +**`setup.template.json.enabled`** +: Set to `true` to load a JSON-based template file. Specify the path to your {{es}} index template file and set the name of the template. + + ```yaml + setup.template.json.enabled: true + setup.template.json.path: "template.json" + setup.template.json.name: "template-name" + setup.template.json.data_stream: false + ``` + + +::::{note} +If the JSON template is used, the `fields.yml` is skipped for the template generation. +:::: + + +::::{note} +If the JSON template is a data stream, set `setup.template.json.data_stream`. 
+::::
+
+
diff --git a/docs/reference/filebeat/configure-cloud-id.md b/docs/reference/filebeat/configure-cloud-id.md
new file mode 100644
index 000000000000..059f0cac75d7
--- /dev/null
+++ b/docs/reference/filebeat/configure-cloud-id.md
@@ -0,0 +1,34 @@
+---
+navigation_title: "{{ess}}"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/configure-cloud-id.html
+---
+
+# Configure the output for {{ess}} on {{ecloud}} [configure-cloud-id]
+
+
+Filebeat comes with two settings that simplify the output configuration when used together with [{{ess}}](https://www.elastic.co/cloud/elasticsearch-service?page=docs&placement=docs-body). When defined, these settings overwrite settings from other parts of the configuration.
+
+Example:
+
+```yaml
+cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw=="
+cloud.auth: "elastic:{pwd}"
+```
+
+These settings can also be specified at the command line, like this:
+
+```sh
+filebeat -e -E cloud.id="<cloud-id>" -E cloud.auth="<cloud-auth>"
+```
+
+## `cloud.id` [_cloud_id]
+
+The Cloud ID, which can be found in the {{ess}} web console, is used by Filebeat to resolve the {{es}} and {{kib}} URLs. This setting overwrites the `output.elasticsearch.hosts` and `setup.kibana.host` settings. For more on locating and configuring the Cloud ID, see [Configure Beats and Logstash with Cloud ID](docs-content://deploy-manage/deploy/cloud-enterprise/find-cloud-id.md).
+
+
+## `cloud.auth` [_cloud_auth]
+
+When specified, `cloud.auth` overwrites the `output.elasticsearch.username` and `output.elasticsearch.password` settings. Because the Kibana settings inherit the username and password from the {{es}} output, this can also be used to set the `setup.kibana.username` and `setup.kibana.password` options.
+
+
diff --git a/docs/reference/filebeat/configuring-howto-filebeat.md b/docs/reference/filebeat/configuring-howto-filebeat.md
new file mode 100644
index 000000000000..be6a3e99194f
--- /dev/null
+++ b/docs/reference/filebeat/configuring-howto-filebeat.md
@@ -0,0 +1,46 @@
+---
+navigation_title: "Configure"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/configuring-howto-filebeat.html
+---
+
+# Configure Filebeat [configuring-howto-filebeat]
+
+
+::::{tip}
+To get started quickly, read [Quick start: installation and configuration](/reference/filebeat/filebeat-installation-configuration.md).
+::::
+
+
+To configure Filebeat, edit the configuration file. The default configuration file is called `filebeat.yml`. The location of the file varies by platform. To locate the file, see [Directory layout](/reference/filebeat/directory-layout.md).
+
+There’s also a full example configuration file called `filebeat.reference.yml` that shows all non-deprecated options.
+
+::::{tip}
+See the [Config File Format](/reference/libbeat/config-file-format.md) for more about the structure of the config file. 
+::::
+
+
+The following topics describe how to configure Filebeat:
+
+* [Inputs](/reference/filebeat/configuration-filebeat-options.md)
+* [Modules](/reference/filebeat/configuration-filebeat-modules.md)
+* [General settings](/reference/filebeat/configuration-general-options.md)
+* [Project paths](/reference/filebeat/configuration-path.md)
+* [Config file loading](/reference/filebeat/filebeat-configuration-reloading.md)
+* [Output](/reference/filebeat/configuring-output.md)
+* [SSL](/reference/filebeat/configuration-ssl.md)
+* [Index lifecycle management (ILM)](/reference/filebeat/ilm.md)
+* [Elasticsearch index template](/reference/filebeat/configuration-template.md)
+* [{{kib}} endpoint](/reference/filebeat/setup-kibana-endpoint.md)
+* [Kibana dashboards](/reference/filebeat/configuration-dashboards.md)
+* [Processors](/reference/filebeat/filtering-enhancing-data.md)
+* [*Autodiscover*](/reference/filebeat/configuration-autodiscover.md)
+* [Internal queue](/reference/filebeat/configuring-internal-queue.md)
+* [Logging](/reference/filebeat/configuration-logging.md)
+* [HTTP endpoint](/reference/filebeat/http-endpoint.md)
+* [Regular expression support](/reference/filebeat/regexp-support.md)
+* [Instrumentation](/reference/filebeat/configuration-instrumentation.md)
+* [Feature flags](/reference/filebeat/configuration-feature-flags.md)
+* [*filebeat.reference.yml*](/reference/filebeat/filebeat-reference-yml.md)
+
diff --git a/docs/reference/filebeat/configuring-ingest-node.md b/docs/reference/filebeat/configuring-ingest-node.md
new file mode 100644
index 000000000000..bba49947e74d
--- /dev/null
+++ b/docs/reference/filebeat/configuring-ingest-node.md
@@ -0,0 +1,50 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/configuring-ingest-node.html
+---
+
+# Parse data using an ingest pipeline [configuring-ingest-node]
+
+When you use {{es}} for output, you can configure Filebeat to use an [ingest pipeline](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md) to pre-process documents before the actual indexing takes place in {{es}}. An ingest pipeline is a convenient processing option when you want to do some extra processing on your data, but you do not require the full power of {{ls}}. For example, you can create an ingest pipeline in {{es}} that consists of one processor that removes a field in a document followed by another processor that renames a field.
+
+After defining the pipeline in {{es}}, you simply configure Filebeat to use the pipeline. To configure Filebeat, you specify the pipeline ID in the `pipeline` option under `output.elasticsearch` in the `filebeat.yml` file:
+
+```yaml
+output.elasticsearch:
+  hosts: ["localhost:9200"]
+  pipeline: my_pipeline_id
+```
+
+For example, let’s say that you’ve defined the following pipeline in a file named `pipeline.json`:
+
+```json
+{
+  "description": "Test pipeline",
+  "processors": [
+    {
+      "lowercase": {
+        "field": "agent.name"
+      }
+    }
+  ]
+}
+```
+
+To add the pipeline in {{es}}, you would run:
+
+```shell
+curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/test-pipeline' -d@pipeline.json
+```
+
+Then in the `filebeat.yml` file, you would specify:
+
+```yaml
+output.elasticsearch:
+  hosts: ["localhost:9200"]
+  pipeline: "test-pipeline"
+```
+
+When you run Filebeat, the value of `agent.name` is converted to lowercase before indexing. 
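+
+Before wiring the pipeline into Filebeat, you can also dry-run it with the standard {{es}} simulate API. This is a minimal sketch, assuming the `test-pipeline` created above and an {{es}} instance listening on `localhost`:
+
+```shell
+# Simulate the pipeline against a sample document without indexing anything
+curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_ingest/pipeline/test-pipeline/_simulate' -d '{
+  "docs": [
+    { "_source": { "agent": { "name": "My-Agent" } } }
+  ]
+}'
+```
+
+The response shows the transformed document (here, `agent.name` becomes `my-agent`), which makes it easy to verify the pipeline before pointing Filebeat at it.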
+ +For more information about defining a pre-processing pipeline, see the [ingest pipeline](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md) documentation. + diff --git a/docs/reference/filebeat/configuring-internal-queue.md b/docs/reference/filebeat/configuring-internal-queue.md new file mode 100644 index 000000000000..55dce103bf2d --- /dev/null +++ b/docs/reference/filebeat/configuring-internal-queue.md @@ -0,0 +1,144 @@ +--- +navigation_title: "Internal queue" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/configuring-internal-queue.html +--- + +# Configure the internal queue [configuring-internal-queue] + + +Filebeat uses an internal queue to store events before publishing them. The queue is responsible for buffering and combining events into batches that can be consumed by the outputs. The outputs will use bulk operations to send a batch of events in one transaction. + +You can configure the type and behavior of the internal queue by setting options in the `queue` section of the `filebeat.yml` config file or by setting options in the `queue` section of the output. Only one queue type can be configured. + +This sample configuration sets the memory queue to buffer up to 4096 events: + +```yaml +queue.mem: + events: 4096 +``` + + +## Configure the memory queue [configuration-internal-queue-memory] + +The memory queue keeps all events in memory. + +The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new events can be inserted into the memory queue. Only after the signal from the output will the queue free up space for more events to be accepted. + +The memory queue is controlled by the parameters `flush.min_events` and `flush.timeout`. `flush.min_events` gives a limit on the number of events that can be included in a single batch, and `flush.timeout` specifies how long the queue should wait to completely fill an event request. If the output supports a `bulk_max_size` parameter, the maximum batch size will be the smaller of `bulk_max_size` and `flush.min_events`. + +`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`. + +In synchronous mode, an event request is always filled as soon as events are available, even if there are not enough events to fill the requested batch. This is useful when latency must be minimized. To use synchronous mode, set `flush.timeout` to 0. + +For backwards compatibility, synchronous mode can also be activated by setting `flush.min_events` to 0 or 1. In this case, batch size will be capped at 1/2 the queue capacity. + +In asynchronous mode, an event request will wait up to the specified timeout to try and fill the requested batch completely. If the timeout expires, the queue returns a partial batch with all available events. To use asynchronous mode, set `flush.timeout` to a positive duration, e.g. `5s`. 
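+
+For comparison, this minimal sketch (the `events` value is only illustrative) enables the synchronous mode described above, so each batch is published as soon as any events are available:
+
+```yaml
+queue.mem:
+  events: 4096
+  flush.timeout: 0s
+```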
+ +This sample configuration forwards events to the output when there are enough events to fill the output’s request (usually controlled by `bulk_max_size`, and limited to at most 512 events by `flush.min_events`), or when events have been waiting for 5s without filling the requested size: + +```yaml +queue.mem: + events: 4096 + flush.min_events: 512 + flush.timeout: 5s +``` + + +## Configuration options [_configuration_options_37] + +You can specify the following options in the `queue.mem` section of the `filebeat.yml` config file: + + +#### `events` [queue-mem-events-option] + +Number of events the queue can store. + +The default value is 3200 events. + + +#### `flush.min_events` [queue-mem-flush-min-events-option] + +If greater than 1, specifies the maximum number of events per batch. In this case the output must wait for the queue to accumulate the requested number of events or for `flush.timeout` to expire before publishing. + +If 0 or 1, sets the maximum number of events per batch to half the queue size, and sets the queue to synchronous mode (equivalent to `flush.timeout` of 0). + +The default value is 1600. + + +#### `flush.timeout` [queue-mem-flush-timeout-option] + +Maximum wait time for event requests from the output to be fulfilled. If set to 0s, events are returned immediately. + +The default value is 10s. + + +## Configure the disk queue [configuration-internal-queue-disk] + +The disk queue stores pending events on the disk rather than main memory. This allows Beats to queue a larger number of events than is possible with the memory queue, and to save events when a Beat or device is restarted. This increased reliability comes with a performance tradeoff, as every incoming event must be written and read from the device’s disk. However, for setups where the disk is not the main bottleneck, the disk queue gives a simple and relatively low-overhead way to add a layer of robustness to incoming event data. + +To enable the disk queue with default settings, specify a maximum size: + +```yaml +queue.disk: + max_size: 10GB +``` + +The queue will use up to the specified maximum size on disk. It will only use as much space as required. For example, if the queue is only storing 1GB of events, then it will only occupy 1GB on disk no matter how high the maximum is. Queue data is deleted from disk after it has been successfully sent to the output. + + +### Configuration options [configuration-internal-queue-disk-reference] + +You can specify the following options in the `queue.disk` section of the `filebeat.yml` config file: + + +#### `path` [_path] + +The path to the directory where the disk queue should store its data files. The directory is created on startup if it doesn’t exist. + +The default value is `"${path.data}/diskqueue"`. + + +#### `max_size` (required) [_max_size_required] + +The maximum size the queue should use on disk. Events that exceed this maximum will either pause their input or be discarded, depending on the input’s configuration. + +A value of `0` means that no maximum size is enforced, and the queue can grow up to the amount of free space on the disk. This value should be used with caution, as completely filling a system’s main disk can make it inoperable. It is best to use this setting only with a dedicated data or backup partition that will not interfere with Filebeat or the rest of the host system. + +The default value is `10GB`. + + +#### `segment_size` [_segment_size] + +Data added to the queue is stored in segment files. 
Each segment contains some number of events waiting to be sent to the outputs, and is deleted when all its events are sent. By default, segment size is limited to 1/10 of the maximum queue size. Using a smaller size means that the queue will use more data files, but they will be deleted more quickly after use. Using a larger size means some data will take longer to delete, but the queue will use fewer auxiliary files. It is usually fine to leave this value unchanged. + +The default value is `max_size / 10`. + + +#### `read_ahead` [_read_ahead] + +The number of events that should be read from disk into memory while waiting for an output to request them. If you find outputs are slowing down because they can’t read as many events at a time, adjusting this setting upward may help, at the cost of higher memory usage. + +The default value is `512`. + + +#### `write_ahead` [_write_ahead] + +The number of events the queue should accept and store in memory while waiting for them to be written to disk. If you find the queue’s memory use is too high because events are waiting too long to be written to disk, adjusting this setting downward may help, at the cost of reduced event throughput. On the other hand, if inputs are waiting or discarding events because they are being produced faster than the disk can handle, adjusting this setting upward may help, at the cost of higher memory usage. + +The default value is `2048`. + + +#### `retry_interval` [_retry_interval] + +Some disk errors may block operation of the queue, for example a permission error writing to the data directory, or a disk full error while writing an event. In this case, the queue reports the error and retries after pausing for the time specified in `retry_interval`. + +The default value is `1s` (one second). + + +#### `max_retry_interval` [_max_retry_interval] + +When there are multiple consecutive errors writing to the disk, the queue increases the retry interval by factors of 2 up to a maximum of `max_retry_interval`. Increase this value if you are concerned about logging too many errors or overloading the host system if the target disk becomes unavailable for an extended time. + +The default value is `30s` (thirty seconds). + diff --git a/docs/reference/filebeat/configuring-output.md b/docs/reference/filebeat/configuring-output.md new file mode 100644 index 000000000000..a089d5b2f462 --- /dev/null +++ b/docs/reference/filebeat/configuring-output.md @@ -0,0 +1,31 @@ +--- +navigation_title: "Output" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/configuring-output.html +--- + +# Configure the output [configuring-output] + + +You configure Filebeat to write to a specific output by setting options in the Outputs section of the `filebeat.yml` config file. Only a single output may be defined. + +The following topics describe how to configure each supported output. If you’ve secured the {{stack}}, also read [Secure](/reference/filebeat/securing-filebeat.md) for more about security-related configuration options. 
+ +* [{{ess}}](/reference/filebeat/configure-cloud-id.md) +* [Elasticsearch](/reference/filebeat/elasticsearch-output.md) +* [Logstash](/reference/filebeat/logstash-output.md) +* [Kafka](/reference/filebeat/kafka-output.md) +* [Redis](/reference/filebeat/redis-output.md) +* [File](/reference/filebeat/file-output.md) +* [Console](/reference/filebeat/console-output.md) +* [Discard](/reference/filebeat/discard-output.md) + + + + + + + + + + diff --git a/docs/reference/filebeat/configuring-ssl-logstash.md b/docs/reference/filebeat/configuring-ssl-logstash.md new file mode 100644 index 000000000000..12345e1cfa48 --- /dev/null +++ b/docs/reference/filebeat/configuring-ssl-logstash.md @@ -0,0 +1,118 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/configuring-ssl-logstash.html +--- + +# Secure communication with Logstash [configuring-ssl-logstash] + +You can use SSL mutual authentication to secure connections between Filebeat and Logstash. This ensures that Filebeat sends encrypted data to trusted Logstash servers only, and that the Logstash server receives data from trusted Filebeat clients only. + +To use SSL mutual authentication: + +1. Create a certificate authority (CA) and use it to sign the certificates that you plan to use for Filebeat and Logstash. Creating a correct SSL/TLS infrastructure is outside the scope of this document. There are many online resources available that describe how to create certificates. + + ::::{tip} + If you are using {{security-features}}, you can use the [elasticsearch-certutil tool](elasticsearch://reference/elasticsearch/command-line-tools/certutil.md) to generate certificates. + :::: + +2. Configure Filebeat to use SSL. In the `filebeat.yml` config file, specify the following settings under `ssl`: + + * `certificate_authorities`: Configures Filebeat to trust any certificates signed by the specified CA. If `certificate_authorities` is empty or not set, the trusted certificate authorities of the host system are used. + * `certificate` and `key`: Specifies the certificate and key that Filebeat uses to authenticate with Logstash. + + For example: + + ```yaml + output.logstash: + hosts: ["logs.mycompany.com:5044"] + ssl.certificate_authorities: ["/etc/ca.crt"] + ssl.certificate: "/etc/client.crt" + ssl.key: "/etc/client.key" + ``` + + For more information about these configuration options, see [SSL](/reference/filebeat/configuration-ssl.md). + +3. Configure Logstash to use SSL. In the Logstash config file, specify the following settings for the [Beats input plugin for Logstash](logstash://reference/plugins-inputs-beats.md): + + * `ssl`: When set to true, enables Logstash to use SSL/TLS. + * `ssl_certificate_authorities`: Configures Logstash to trust any certificates signed by the specified CA. + * `ssl_certificate` and `ssl_key`: Specify the certificate and key that Logstash uses to authenticate with the client. + * `ssl_verify_mode`: Specifies whether the Logstash server verifies the client certificate against the CA. You need to specify either `peer` or `force_peer` to make the server ask for the certificate and validate it. If you specify `force_peer`, and Filebeat doesn’t provide a certificate, the Logstash connection will be closed. If you choose not to use [certutil](elasticsearch://reference/elasticsearch/command-line-tools/certutil.md), the certificates that you obtain must allow for both `clientAuth` and `serverAuth` if the extended key usage extension is present. 
+
+    For example:
+
+    ```json
+    input {
+      beats {
+        port => 5044
+        ssl => true
+        ssl_certificate_authorities => ["/etc/ca.crt"]
+        ssl_certificate => "/etc/server.crt"
+        ssl_key => "/etc/server.key"
+        ssl_verify_mode => "force_peer"
+      }
+    }
+    ```
+
+    For more information about these options, see the [documentation for the Beats input plugin](logstash://reference/plugins-inputs-beats.md).
+
+
+
+## Validate the Logstash server’s certificate [testing-ssl-logstash]
+
+Before running Filebeat, you should validate the Logstash server’s certificate. You can use `curl` to validate the certificate even though the protocol used to communicate with Logstash is not based on HTTP. For example:
+
+```shell
+curl -v --cacert ca.crt https://logs.mycompany.com:5044
+```
+
+If the test is successful, you’ll receive an empty response error:
+
+```shell
+* Rebuilt URL to: https://logs.mycompany.com:5044/
+* Trying 192.168.99.100...
+* Connected to logs.mycompany.com (192.168.99.100) port 5044 (#0)
+* TLS 1.2 connection using TLS_DHE_RSA_WITH_AES_256_CBC_SHA
+* Server certificate: logs.mycompany.com
+* Server certificate: mycompany.com
+> GET / HTTP/1.1
+> Host: logs.mycompany.com:5044
+> User-Agent: curl/7.43.0
+> Accept: */*
+>
+* Empty reply from server
+* Connection #0 to host logs.mycompany.com left intact
+curl: (52) Empty reply from server
+```
+
+The following example uses the IP address rather than the hostname to validate the certificate:
+
+```shell
+curl -v --cacert ca.crt https://192.168.99.100:5044
+```
+
+Validation for this test fails because the certificate is not valid for the specified IP address. It’s only valid for `logs.mycompany.com`, the hostname that appears in the Subject field of the certificate.
+
+```shell
+* Rebuilt URL to: https://192.168.99.100:5044/
+* Trying 192.168.99.100...
+* Connected to 192.168.99.100 (192.168.99.100) port 5044 (#0)
+* WARNING: using IP address, SNI is being disabled by the OS.
+* SSL: certificate verification failed (result: 5)
+* Closing connection 0
+curl: (51) SSL: certificate verification failed (result: 5)
+```
+
+See the [troubleshooting docs](/reference/filebeat/ssl-client-fails.md) for info about resolving this issue.
+
+
+## Test the Filebeat to Logstash connection [_test_the_filebeat_to_logstash_connection]
+
+If you have Filebeat running as a service, first stop the service. Then test your setup by running Filebeat in the foreground so you can quickly see any errors that occur:
+
+```sh
+filebeat -c filebeat.yml -e -v
+```
+
+Any errors will be printed to the console. See the [troubleshooting docs](/reference/filebeat/ssl-client-fails.md) for info about resolving common errors.
+
diff --git a/docs/reference/filebeat/connection-problem.md b/docs/reference/filebeat/connection-problem.md
new file mode 100644
index 000000000000..f84cb9519258
--- /dev/null
+++ b/docs/reference/filebeat/connection-problem.md
@@ -0,0 +1,20 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/connection-problem.html
+---
+
+# Logstash connection doesn’t work [connection-problem]
+
+You may have configured {{ls}} or Filebeat incorrectly. To resolve the issue:
+
+* Make sure that {{ls}} is running and you can connect to it. First, try to ping the {{ls}} host to verify that you can reach it from the host running Filebeat. Then use either `nc` or `telnet` to make sure that the port is available. 
For example:
+
+    ```shell
+    ping <logstash-host>
+    telnet <logstash-host> 5044
+    ```
+
+* Verify that the config file for Filebeat specifies the correct port where {{ls}} is running.
+* Make sure that the {{es}} output is commented out in the config file and the {{ls}} output is uncommented.
+* Confirm that the most recent [Beats input plugin for {{ls}}](logstash://reference/plugins-inputs-beats.md) is installed and configured. Note that Beats will not connect to the Lumberjack input plugin. To learn how to install and update plugins, see [Working with plugins](logstash://reference/working-with-plugins.md).
+
diff --git a/docs/reference/filebeat/console-output.md b/docs/reference/filebeat/console-output.md
new file mode 100644
index 000000000000..e0c26ede6ecf
--- /dev/null
+++ b/docs/reference/filebeat/console-output.md
@@ -0,0 +1,67 @@
+---
+navigation_title: "Console"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/console-output.html
+---
+
+# Configure the Console output [console-output]
+
+
+The Console output writes events in JSON format to stdout.
+
+::::{warning}
+The Console output should be used only for debugging issues as it can produce a large amount of logging data.
+::::
+
+
+To use this output, edit the Filebeat configuration file to disable the {{es}} output by commenting it out, and enable the console output by adding `output.console`.
+
+Example configuration:
+
+```yaml
+output.console:
+  pretty: true
+```
+
+## Configuration options [_configuration_options_30]
+
+You can specify the following `output.console` options in the `filebeat.yml` config file:
+
+### `enabled` [_enabled_35]
+
+The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled.
+
+The default value is `true`.
+
+
+### `pretty` [_pretty]
+
+If `pretty` is set to true, events written to stdout will be nicely formatted. The default is false.
+
+
+### `codec` [_codec_4]
+
+Output codec configuration. If the `codec` section is missing, events will be JSON encoded using the `pretty` option.
+
+See [Change the output codec](/reference/filebeat/configuration-output-codec.md) for more information.
+
+
+### `bulk_max_size` [_bulk_max_size_4]
+
+The maximum number of events to buffer internally during publishing. The default is 2048.
+
+Specifying a larger batch size may add some latency and buffering during publishing. However, for Console output, this setting does not affect how events are published.
+
+Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.
+
+
+### `queue` [_queue_6]
+
+Configuration options for internal queue.
+
+See [Internal queue](/reference/filebeat/configuring-internal-queue.md) for more information.
+
+Note: `queue` options can be set under `filebeat.yml` or the `output` section but not both.
+
+
+
diff --git a/docs/reference/filebeat/contributing-to-beats.md b/docs/reference/filebeat/contributing-to-beats.md
new file mode 100644
index 000000000000..7938a0e5a9ad
--- /dev/null
+++ b/docs/reference/filebeat/contributing-to-beats.md
@@ -0,0 +1,13 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/contributing-to-beats.html
---
+
+# Contribute to Beats [contributing-to-beats]
+
+The Beats are open source and we love to receive contributions from our community — you! 
+
+There are many ways to contribute, from writing tutorials or blog posts, improving the documentation, submitting bug reports and feature requests, or writing code that implements a whole new protocol, module, or Beat.
+
+The [Beats Developer Guide](http://www.elastic.co/guide/en/beats/devguide/master/index.md) is your one-stop shop for everything related to developing code for the Beats project.
+
diff --git a/docs/reference/filebeat/convert.md b/docs/reference/filebeat/convert.md
new file mode 100644
index 000000000000..e474c21188ad
--- /dev/null
+++ b/docs/reference/filebeat/convert.md
@@ -0,0 +1,42 @@
+---
+navigation_title: "convert"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/convert.html
+---
+
+# Convert [convert]
+
+
+The `convert` processor converts a field in the event to a different type, such as converting a string to an integer.
+
+The supported types include: `integer`, `long`, `float`, `double`, `string`, `boolean`, and `ip`.
+
+The `ip` type is effectively an alias for `string`, but with an added validation that the value is an IPv4 or IPv6 address.
+
+```yaml
+processors:
+  - convert:
+      fields:
+        - {from: "src_ip", to: "source.ip", type: "ip"}
+        - {from: "src_port", to: "source.port", type: "integer"}
+      ignore_missing: true
+      fail_on_error: false
+```
+
+The `convert` processor has the following configuration settings:
+
+`fields`
+: (Required) This is the list of fields to convert. At least one item must be contained in the list. Each item in the list must have a `from` key that specifies the source field. The `to` key is optional and specifies where to assign the converted value. If `to` is omitted then the `from` field is updated in-place. The `type` key specifies the data type to convert the value to. If `type` is omitted then the processor copies or renames the field without any type conversion.
+
+`ignore_missing`
+: (Optional) If `true`, the processor continues to the next field when the `from` key is not found in the event. If `false`, the processor returns an error and does not process the remaining fields. Default is `false`.
+
+`fail_on_error`
+: (Optional) If `false`, type conversion failures are ignored and the processor continues to the next field. Default is `true`.
+
+`tag`
+: (Optional) An identifier for this processor. Useful for debugging.
+
+`mode`
+: (Optional) When both `from` and `to` are defined for a field then `mode` controls whether to `copy` or `rename` the field when the type conversion is successful. Default is `copy`.
+
diff --git a/docs/reference/filebeat/copy-fields.md b/docs/reference/filebeat/copy-fields.md
new file mode 100644
index 000000000000..47d9aca409a8
--- /dev/null
+++ b/docs/reference/filebeat/copy-fields.md
@@ -0,0 +1,45 @@
+---
+navigation_title: "copy_fields"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/copy-fields.html
+---
+
+# Copy fields [copy-fields]
+
+
+The `copy_fields` processor takes the value of a field and copies it to a new field.
+
+You cannot use this processor to replace an existing field. If the target field already exists, you must [drop](/reference/filebeat/drop-fields.md) or [rename](/reference/filebeat/rename-fields.md) the field before using `copy_fields`.
+
+`fields`
+: List of `from` and `to` pairs to copy from and to. The `@metadata.` prefix is supported for both `from` and `to`, so values can be copied to and from the event metadata as well as the event fields. 
+ +`fail_on_error` +: (Optional) If set to `true` and an error occurs, the changes are reverted and the original is returned. If set to `false`, processing continues if an error occurs. Default is `true`. + +`ignore_missing` +: (Optional) Indicates whether to ignore events that lack the source field. The default is `false`, which will fail processing of an event if a field is missing. + +For example, this configuration: + +```yaml +processors: + - copy_fields: + fields: + - from: message + to: event.original + fail_on_error: false + ignore_missing: true +``` + +Copies the original `message` field to `event.original`: + +```json +{ + "message": "my-interesting-message", + "event": { + "original": "my-interesting-message" + } +} +``` + diff --git a/docs/reference/filebeat/could-not-locate-index-pattern.md b/docs/reference/filebeat/could-not-locate-index-pattern.md new file mode 100644 index 000000000000..7ec1ec01ffd0 --- /dev/null +++ b/docs/reference/filebeat/could-not-locate-index-pattern.md @@ -0,0 +1,20 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/could-not-locate-index-pattern.html +--- + +# Dashboard could not locate the index-pattern [could-not-locate-index-pattern] + +Typically Filebeat sets up the index pattern automatically when it loads the index template. However, if for some reason Filebeat loads the index template, but the index pattern does not get created correctly, you’ll see a "could not locate that index-pattern" error. To resolve this problem: + +1. Try running the `setup` command again. For example: `./filebeat setup`. +2. If that doesn’t work, go to the Management app in {{kib}}, and under **Index Patterns**, look for the pattern. + + 1. If the pattern doesn’t exist, create it manually. + + * Set the **Time filter field name** to `@timestamp`. + * Set the **Custom index pattern ID** advanced option. For example, if your custom index name is `filebeat-customname`, set the custom index pattern ID to `filebeat-customname-*`. + + +For more information, see [Creating an index pattern](docs-content://explore-analyze/find-and-organize/data-views.md) in the {{kib}} docs. + diff --git a/docs/reference/filebeat/dashboard-fields-incorrect-filebeat.md b/docs/reference/filebeat/dashboard-fields-incorrect-filebeat.md new file mode 100644 index 000000000000..d05db3fed0c1 --- /dev/null +++ b/docs/reference/filebeat/dashboard-fields-incorrect-filebeat.md @@ -0,0 +1,9 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/dashboard-fields-incorrect-filebeat.html +--- + +# Dashboard in Kibana is breaking up data fields incorrectly [dashboard-fields-incorrect-filebeat] + +The index template might not be loaded correctly. See [*Load the {{es}} index template*](/reference/filebeat/filebeat-template.md). + diff --git a/docs/reference/filebeat/decode-base64-field.md b/docs/reference/filebeat/decode-base64-field.md new file mode 100644 index 000000000000..d900eea58ef2 --- /dev/null +++ b/docs/reference/filebeat/decode-base64-field.md @@ -0,0 +1,35 @@ +--- +navigation_title: "decode_base64_field" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/decode-base64-field.html +--- + +# Decode Base64 fields [decode-base64-field] + + +The `decode_base64_field` processor specifies a field to base64 decode. The `field` key contains a `from: old-key` and a `to: new-key` pair. `from` is the origin and `to` the target name of the field. 
+
+To overwrite a field, either rename the target field first or use the `drop_fields` processor to drop the field and then rename it.
+
+```yaml
+processors:
+  - decode_base64_field:
+      field:
+        from: "field1"
+        to: "field2"
+      ignore_missing: false
+      fail_on_error: true
+```
+
+In the example above, `field1` is decoded into `field2`.
+
+The `decode_base64_field` processor has the following configuration settings:
+
+`ignore_missing`
+: (Optional) If set to `true`, no error is logged when a key that should be base64 decoded is missing. Default is `false`.
+
+`fail_on_error`
+: (Optional) If set to `true`, the base64 decoding of fields is stopped if an error occurs, and the original event is returned. If set to `false`, decoding continues even if an error happens during decoding. Default is `true`.
+
+See [Conditions](/reference/filebeat/defining-processors.md#conditions) for a list of supported conditions.
+
diff --git a/docs/reference/filebeat/decode-csv-fields.md b/docs/reference/filebeat/decode-csv-fields.md
new file mode 100644
index 000000000000..8df815bb8d0f
--- /dev/null
+++ b/docs/reference/filebeat/decode-csv-fields.md
@@ -0,0 +1,48 @@
+---
+navigation_title: "decode_csv_fields"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/decode-csv-fields.html
+---
+
+# Decode CSV fields [decode-csv-fields]
+
+
+::::{warning}
+This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
+::::
+
+
+The `decode_csv_fields` processor decodes fields containing records in comma-separated format (CSV). It will output the values as an array of strings. This processor is available for Filebeat.
+
+```yaml
+processors:
+  - decode_csv_fields:
+      fields:
+        message: decoded.csv
+      separator: ","
+      ignore_missing: false
+      overwrite_keys: true
+      trim_leading_space: false
+      fail_on_error: true
+```
+
+The `decode_csv_fields` has the following settings:
+
+`fields`
+: This is a mapping from the source field containing the CSV data to the destination field to which the decoded array will be written.
+
+`separator`
+: (Optional) Character to be used as a column separator. The default is the comma character. To use a TAB character, set it to `"\t"`.
+
+`ignore_missing`
+: (Optional) Whether to ignore events which lack the source field. The default is `false`, which will fail processing of an event if a field is missing.
+
+`overwrite_keys`
+: Whether the target field is overwritten if it already exists. The default is `false`, which will fail processing of an event when `target` already exists.
+
+`trim_leading_space`
+: Whether extra space after the separator is trimmed from values. This works even if the separator is also a space. The default is `false`.
+
+`fail_on_error`
+: (Optional) If set to `true`, the changes to the event are reverted and the original event is returned if an error occurs. If set to `false`, processing continues even if an error happens. Default is `true`. 
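+
+As a concrete illustration (the event content here is hypothetical), with the `message: decoded.csv` mapping shown above, an event whose `message` field contains `42,host-a,error` would gain a `decoded.csv` array of strings:
+
+```json
+{
+  "message": "42,host-a,error",
+  "decoded": {
+    "csv": ["42", "host-a", "error"]
+  }
+}
+```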
+
diff --git a/docs/reference/filebeat/decode-duration.md b/docs/reference/filebeat/decode-duration.md
new file mode 100644
index 000000000000..1e8bb3d50a53
--- /dev/null
+++ b/docs/reference/filebeat/decode-duration.md
@@ -0,0 +1,25 @@
+---
+navigation_title: "decode_duration"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/decode-duration.html
+---
+
+# Decode duration [decode-duration]
+
+
+The `decode_duration` processor decodes a Go-style duration string into a specific `format`.
+
+For more information about the Go `time.Duration` string style, refer to the [Go documentation](https://pkg.go.dev/time#Duration).
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `field` | yes | | Which field of the event needs to be decoded as `time.Duration` |
+| `format` | yes | `milliseconds` | Supported formats: `milliseconds`/`seconds`/`minutes`/`hours` |
+
+```yaml
+processors:
+  - decode_duration:
+      field: "app.rpc.cost"
+      format: "milliseconds"
+```
+
diff --git a/docs/reference/filebeat/decode-json-fields.md b/docs/reference/filebeat/decode-json-fields.md
new file mode 100644
index 000000000000..89be9851d52c
--- /dev/null
+++ b/docs/reference/filebeat/decode-json-fields.md
@@ -0,0 +1,48 @@
+---
+navigation_title: "decode_json_fields"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/decode-json-fields.html
+---
+
+# Decode JSON fields [decode-json-fields]
+
+
+The `decode_json_fields` processor decodes fields containing JSON strings and replaces the strings with valid JSON objects.
+
+```yaml
+processors:
+  - decode_json_fields:
+      fields: ["field1", "field2", ...]
+      process_array: false
+      max_depth: 1
+      target: ""
+      overwrite_keys: false
+      add_error_key: true
+```
+
+The `decode_json_fields` processor has the following configuration settings:
+
+`fields`
+: The fields containing JSON strings to decode.
+
+`process_array`
+: (Optional) A Boolean value that specifies whether to process arrays. The default is `false`.
+
+`max_depth`
+: (Optional) The maximum parsing depth. A value of `1` will decode the JSON objects in fields indicated in `fields`, a value of `2` will also decode the objects embedded in the fields of these parsed documents. The default is `1`.
+
+`target`
+: (Optional) The field under which the decoded JSON will be written. By default, the decoded JSON object replaces the string field from which it was read. To merge the decoded JSON fields into the root of the event, specify `target` with an empty string (`target: ""`). Note that the `null` value (`target:`) is treated as if the field was not set.
+
+`overwrite_keys`
+: (Optional) A Boolean value that specifies whether existing keys in the event are overwritten by keys from the decoded JSON object. The default value is `false`.
+
+`expand_keys`
+: (Optional) A Boolean value that specifies whether keys in the decoded JSON should be recursively de-dotted and expanded into a hierarchical object structure. For example, `{"a.b.c": 123}` would be expanded into `{"a":{"b":{"c":123}}}`.
+
+`add_error_key`
+: (Optional) If set to `true` and an error occurs while decoding JSON keys, the `error` field will become a part of the event with the error message. If set to `false`, there will not be any error in the event’s field. The default value is `false`.
+
+`document_id`
+: (Optional) JSON key that’s used as the document ID. 
If configured, the field will be removed from the original JSON document and stored in `@metadata._id`.
+
diff --git a/docs/reference/filebeat/decode-xml-wineventlog.md b/docs/reference/filebeat/decode-xml-wineventlog.md
new file mode 100644
index 000000000000..4f2ebce14121
--- /dev/null
+++ b/docs/reference/filebeat/decode-xml-wineventlog.md
@@ -0,0 +1,162 @@
+---
+navigation_title: "decode_xml_wineventlog"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/decode-xml-wineventlog.html
+---
+
+# Decode XML Wineventlog [decode-xml-wineventlog]
+
+
+::::{warning}
+This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
+::::
+
+
+The `decode_xml_wineventlog` processor decodes Windows Event Log data in XML format that is stored under the `field` key. It outputs the result into the `target_field`.
+
+The output fields will be the same as the [winlogbeat winlog fields](/reference/winlogbeat/exported-fields-winlog.md#_winlog).
+
+The supported configuration options are:
+
+`field`
+: (Required) Source field containing the XML. Defaults to `message`.
+
+`target_field`
+: (Required) The field under which the decoded XML will be written. To merge the decoded XML fields into the root of the event, specify `target_field` with an empty string (`target_field: ""`). The default value is `winlog`.
+
+`overwrite_keys`
+: (Optional) A boolean that specifies whether keys that already exist in the event are overwritten by keys from the decoded XML object. The default value is `true`.
+
+`map_ecs_fields`
+: (Optional) A boolean that specifies whether to map additional ECS fields when possible. Note that ECS field keys are placed outside of `target_field`. The default value is `true`.
+
+`ignore_missing`
+: (Optional) If `true`, the processor will not return an error when a specified field does not exist. Defaults to `false`.
+
+`ignore_failure`
+: (Optional) Ignore all errors produced by the processor. Defaults to `false`.
+
+`language`
+: (Optional) The language ID the events will be rendered in. The language will be forced regardless of the system language. Forwarded events will ignore this setting. A complete list of language IDs can be found [here](https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-lcid/a9eac961-e77d-41a6-90a5-ce1a8b0cdb9c). It defaults to `0`, which indicates to use the system language. 
+ +Example: + +```yaml +processors: + - decode_xml_wineventlog: + field: event.original + target_field: winlog +``` + +```json +{ + "event": { + "original": "4672001254800x802000000000000011303SecurityvagrantS-1-5-18SYSTEMNT AUTHORITY0x3e7SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeSpecial privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeInformationSpecial LogonInfoSecurityMicrosoft Windows security auditing.Audit Success" + } +} +``` + +Will produce the following output: + +```json +{ + "event": { + "original": "4672001254800x802000000000000011303SecurityvagrantS-1-5-18SYSTEMNT AUTHORITY0x3e7SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeSpecial privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeInformationSpecial LogonInfoSecurityMicrosoft Windows security auditing.Audit Success", + "action": "Special Logon", + "code": "4672", + "kind": "event", + "outcome": "success", + "provider": "Microsoft-Windows-Security-Auditing", + }, + "host": { + "name": "vagrant", + }, + "log": { + "level": "information", + }, + "winlog": { + "channel": "Security", + "outcome": "success", + "activity_id": "{ffb23523-1f32-0000-c335-b2ff321fd701}", + "level": "information", + "event_id": 4672, + "provider_name": "Microsoft-Windows-Security-Auditing", + "record_id": 11303, + "computer_name": "vagrant", + "keywords_raw": 9232379236109516800, + "opcode": "Info", + "provider_guid": "{54849625-5478-4994-a5ba-3e3b0328c30d}", + "event_data": { + "SubjectUserSid": "S-1-5-18", + "SubjectUserName": "SYSTEM", + "SubjectDomainName": "NT AUTHORITY", + "SubjectLogonId": "0x3e7", + "PrivilegeList": "SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilege" + }, + "task": "Special Logon", + 
"keywords": [ + "Audit Success" + ], + "message": "Special privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilege", + "process": { + "pid": 652, + "thread": { + "id": 4660 + } + } + } +} +``` + +See [Conditions](/reference/filebeat/defining-processors.md#conditions) for a list of supported conditions. + +The field mappings are as follows: + +| Event Field | Source XML Element | Notes | +| --- | --- | --- | +| `winlog.channel` | `` | | +| `winlog.event_id` | `` | | +| `winlog.provider_name` | `` | `Name` attribute | +| `winlog.record_id` | `` | | +| `winlog.task` | `` | | +| `winlog.computer_name` | `` | | +| `winlog.keywords` | `` | list of each `Keyword` | +| `winlog.opcodes` | `` | | +| `winlog.provider_guid` | `` | `Guid` attribute | +| `winlog.version` | `` | | +| `winlog.time_created` | `` | `SystemTime` attribute | +| `winlog.outcome` | `` | "success" if bit 0x20000000000000 is set, "failure" if 0x10000000000000 is set | +| `winlog.level` | `` | converted to lowercase | +| `winlog.message` | `` | line endings removed | +| `winlog.user.identifier` | `` | | +| `winlog.user.domain` | `` | | +| `winlog.user.name` | `` | | +| `winlog.user.type` | `` | converted from integer to String | +| `winlog.event_data` | `` | map where `Name` attribute in Data element is key, and value is the value of the Data element | +| `winlog.user_data` | `` | map where `Name` attribute in Data element is key, and value is the value of the Data element | +| `winlog.activity_id` | `` | | +| `winlog.related_activity_id` | `` | | +| `winlog.kernel_time` | `` | | +| `winlog.process.pid` | `` | | +| `winlog.process.thread.id` | `` | | +| `winlog.processor_id` | `` | | +| `winlog.processor_time` | `` | | +| `winlog.session_id` | `` | | +| `winlog.user_time` | `` | | +| `winlog.error.code` | `` | | + +If `map_ecs_fields` is enabled then the following field mappings are also performed: + +| Event Field | Source XML or other field | Notes | +| --- | --- | --- | +| `event.code` | `winlog.event_id` | | +| `event.kind` | `"event"` | | +| `event.provider` | `` | `Name` attribute | +| `event.action` | `` | | +| `event.host.name` | `` | | +| `event.outcome` | `winlog.outcome` | | +| `log.level` | `winlog.level` | | +| `message` | `winlog.message` | | +| `error.code` | `winlog.error.code` | | +| `error.message` | `winlog.error.message` | | + diff --git a/docs/reference/filebeat/decode-xml.md b/docs/reference/filebeat/decode-xml.md new file mode 100644 index 000000000000..f29f6c6f7d89 --- /dev/null +++ b/docs/reference/filebeat/decode-xml.md @@ -0,0 +1,96 @@ +--- +navigation_title: "decode_xml" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/decode-xml.html +--- + +# Decode XML [decode-xml] + + +The `decode_xml` processor decodes XML data that is stored under the `field` key. It outputs the result into the `target_field`. + +This example demonstrates how to decode an XML string contained in the `message` field and write the resulting fields into the root of the document. Any fields that already exist will be overwritten. 
```yaml
processors:
  - decode_xml:
      field: message
      target_field: ""
      overwrite_keys: true
```

By default, any decoding errors will stop the processing chain, and the error will be added to the `error.message` field. To ignore all errors and continue to the next processor, set `ignore_failure: true`. To specifically ignore failures caused by `field` not existing, set `ignore_missing: true`.

```yaml
processors:
  - decode_xml:
      field: example
      target_field: xml
      ignore_missing: true
      ignore_failure: true
```

By default, all keys converted from XML are lowercased. To disable this behavior, set `to_lower: false`, as in this example:

```yaml
processors:
  - decode_xml:
      field: message
      target_field: xml
      to_lower: false
```

Example XML input:

```xml
<catalog>
  <book seq="1">
    <author>William H. Gaddis</author>
    <title>The Recognitions</title>
    <review>One of the great seminal American novels of the 20th century.</review>
  </book>
</catalog>
```

Will produce the following output:

```json
{
  "xml": {
    "catalog": {
      "book": {
        "author": "William H. Gaddis",
        "review": "One of the great seminal American novels of the 20th century.",
        "seq": "1",
        "title": "The Recognitions"
      }
    }
  }
}
```

The supported configuration options are:

`field`
: (Required) Source field containing the XML. Defaults to `message`.

`target_field`
: (Optional) The field under which the decoded XML will be written. By default, the decoded XML object replaces the field from which it was read. To merge the decoded XML fields into the root of the event, specify `target_field` with an empty string (`target_field: ""`). Note that the `null` value (`target_field:`) is treated as if the field was not set at all.

`overwrite_keys`
: (Optional) A boolean that specifies whether keys that already exist in the event are overwritten by keys from the decoded XML object. The default value is `true`.

`to_lower`
: (Optional) Converts all keys to lowercase. Accepts either `true` or `false`. The default value is `true`.

`document_id`
: (Optional) XML key to use as the document ID. If configured, the field will be removed from the original XML document and stored in `@metadata._id`.

`ignore_missing`
: (Optional) If `true` the processor will not return an error when a specified field does not exist. Defaults to `false`.

`ignore_failure`
: (Optional) Ignore all errors produced by the processor. Defaults to `false`.

See [Conditions](/reference/filebeat/defining-processors.md#conditions) for a list of supported conditions.

diff --git a/docs/reference/filebeat/decompress-gzip-field.md b/docs/reference/filebeat/decompress-gzip-field.md
new file mode 100644
index 000000000000..7ea90012c0cd
--- /dev/null
+++ b/docs/reference/filebeat/decompress-gzip-field.md
@@ -0,0 +1,35 @@
---
navigation_title: "decompress_gzip_field"
mapped_pages:
  - https://www.elastic.co/guide/en/beats/filebeat/current/decompress-gzip-field.html
---

# Decompress gzip fields [decompress-gzip-field]


The `decompress_gzip_field` processor specifies a field to decompress using gzip. The `field` key contains a `from: old-key` and a `to: new-key` pair, where `from` is the origin field and `to` is the target name of the field.

To overwrite fields, either rename the target field first, or use the `drop_fields` processor to drop the field and then decompress it.
```yaml
processors:
  - decompress_gzip_field:
      field:
        from: "field1"
        to: "field2"
      ignore_missing: false
      fail_on_error: true
```

In the example above, the gzip-compressed content of `field1` is decompressed into `field2`.

The `decompress_gzip_field` processor has the following configuration settings:

`ignore_missing`
: (Optional) If set to `true`, no error is logged in case a key which should be decompressed is missing. Default is `false`.

`fail_on_error`
: (Optional) If set to `true`, decompression stops at the first error and the original event is returned. If set to `false`, decompression continues even if an error occurs during decoding. Default is `true`.

See [Conditions](/reference/filebeat/defining-processors.md#conditions) for a list of supported conditions.

diff --git a/docs/reference/filebeat/defining-processors.md b/docs/reference/filebeat/defining-processors.md
new file mode 100644
index 000000000000..800c5f876562
--- /dev/null
+++ b/docs/reference/filebeat/defining-processors.md
@@ -0,0 +1,335 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/beats/filebeat/current/defining-processors.html
---

# Define processors [defining-processors]

You can use processors to filter and enhance data before sending it to the configured output. To define a processor, you specify the processor name, an optional condition, and a set of parameters:

```yaml
processors:
  - <processor_name>:
      when:
        <condition>
      <parameters>

  - <processor_name>:
      when:
        <condition>
      <parameters>
...
```

Where:

* `<processor_name>` specifies a [processor](#processors) that performs some kind of action, such as selecting the fields that are exported or adding metadata to the event.
* `<condition>` specifies an optional [condition](#conditions). If the condition is present, then the action is executed only if the condition is fulfilled. If no condition is set, then the action is always executed.
* `<parameters>` is the list of parameters to pass to the processor.

More complex conditional processing can be accomplished by using the if-then-else processor configuration. This allows multiple processors to be executed based on a single condition.

```yaml
processors:
  - if:
      <condition>
    then: <1>
      - <processor_name>:
          <parameters>
      - <processor_name>:
          <parameters>
      ...
    else: <2>
      - <processor_name>:
          <parameters>
      - <processor_name>:
          <parameters>
      ...
```

1. `then` must contain a single processor or a list of one or more processors to execute when the condition evaluates to true.
2. `else` is optional. It can contain a single processor or a list of processors to execute when the condition evaluates to false.


## Where are processors valid? [where-valid]

Processors are valid:

* At the top-level in the configuration. The processor is applied to all data collected by Filebeat.
* Under a specific input. The processor is applied to the data collected for that input.

  ```yaml
  - type: <input_type>
    processors:
      - <processor_name>:
          when:
            <condition>
          <parameters>
  ...
  ```

  Similarly, for Filebeat modules, you can define processors under the `input` section of the module definition, as in the sketch below.
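For example, a minimal sketch of the module case (the `nginx` module, its `access` fileset, and the `add_locale` processor are illustrative choices, not requirements):

```yaml
- module: nginx
  access:
    enabled: true
    input:
      processors:
        - add_locale: ~   # annotate each access-log event with the machine's timezone
```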
+ + + +## Processors [processors] + +The supported processors are: + +* [`add_cloud_metadata`](/reference/filebeat/add-cloud-metadata.md) +* [`add_cloudfoundry_metadata`](/reference/filebeat/add-cloudfoundry-metadata.md) +* [`add_docker_metadata`](/reference/filebeat/add-docker-metadata.md) +* [`add_fields`](/reference/filebeat/add-fields.md) +* [`add_host_metadata`](/reference/filebeat/add-host-metadata.md) +* [`add_id`](/reference/filebeat/add-id.md) +* [`add_kubernetes_metadata`](/reference/filebeat/add-kubernetes-metadata.md) +* [`add_labels`](/reference/filebeat/add-labels.md) +* [`add_locale`](/reference/filebeat/add-locale.md) +* [`add_nomad_metadata`](/reference/filebeat/add-nomad-metadata.md) +* [`add_observer_metadata`](/reference/filebeat/add-observer-metadata.md) +* [`add_process_metadata`](/reference/filebeat/add-process-metadata.md) +* [`add_tags`](/reference/filebeat/add-tags.md) +* [`append`](/reference/filebeat/append.md) +* [`community_id`](/reference/filebeat/community-id.md) +* [`convert`](/reference/filebeat/convert.md) +* [`copy_fields`](/reference/filebeat/copy-fields.md) +* [`decode_base64_field`](/reference/filebeat/decode-base64-field.md) +* [`decode_cef`](/reference/filebeat/processor-decode-cef.md) +* [`decode_csv_fields`](/reference/filebeat/decode-csv-fields.md) +* [`decode_duration`](/reference/filebeat/decode-duration.md) +* [`decode_json_fields`](/reference/filebeat/decode-json-fields.md) +* [`decode_xml`](/reference/filebeat/decode-xml.md) +* [`decode_xml_wineventlog`](/reference/filebeat/decode-xml-wineventlog.md) +* [`decompress_gzip_field`](/reference/filebeat/decompress-gzip-field.md) +* [`detect_mime_type`](/reference/filebeat/detect-mime-type.md) +* [`dissect`](/reference/filebeat/dissect.md) +* [`dns`](/reference/filebeat/processor-dns.md) +* [`drop_event`](/reference/filebeat/drop-event.md) +* [`drop_fields`](/reference/filebeat/drop-fields.md) +* [`extract_array`](/reference/filebeat/extract-array.md) +* [`fingerprint`](/reference/filebeat/fingerprint.md) +* [`include_fields`](/reference/filebeat/include-fields.md) +* [`move-fields`](/reference/filebeat/move-fields.md) +* [`parse_aws_vpc_flow_log`](/reference/filebeat/processor-parse-aws-vpc-flow-log.md) +* [`rate_limit`](/reference/filebeat/rate-limit.md) +* [`registered_domain`](/reference/filebeat/processor-registered-domain.md) +* [`rename`](/reference/filebeat/rename-fields.md) +* [`replace`](/reference/filebeat/replace-fields.md) +* [`script`](/reference/filebeat/processor-script.md) +* [`syslog`](/reference/filebeat/syslog.md) +* [`timestamp`](/reference/filebeat/processor-timestamp.md) +* [`translate_ldap_attribute`](/reference/filebeat/processor-translate-guid.md) +* [`translate_sid`](/reference/filebeat/processor-translate-sid.md) +* [`truncate_fields`](/reference/filebeat/truncate-fields.md) +* [`urldecode`](/reference/filebeat/urldecode.md) + + +## Conditions [conditions] + +Each condition receives a field to compare. You can specify multiple fields under the same condition by using `AND` between the fields (for example, `field1 AND field2`). + +For each field, you can specify a simple field name or a nested map, for example `dns.question.name`. + +See [Exported fields](/reference/filebeat/exported-fields.md) for a list of all the fields that are exported by Filebeat. 
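For example, the following sketch (the field names are illustrative) puts two fields under a single `equals` condition; both must match, which is the `field1 AND field2` behavior described above:

```yaml
equals:
  http.response.code: 200
  http.request.method: "GET"
```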
+ +The supported conditions are: + +* [`equals`](#condition-equals) +* [`contains`](#condition-contains) +* [`regexp`](#condition-regexp) +* [`range`](#condition-range) +* [`network`](#condition-network) +* [`has_fields`](#condition-has_fields) +* [`or`](#condition-or) +* [`and`](#condition-and) +* [`not`](#condition-not) + + +#### `equals` [condition-equals] + +With the `equals` condition, you can compare if a field has a certain value. The condition accepts only an integer or a string value. + +For example, the following condition checks if the response code of the HTTP transaction is 200: + +```yaml +equals: + http.response.code: 200 +``` + + +#### `contains` [condition-contains] + +The `contains` condition checks if a value is part of a field. The field can be a string or an array of strings. The condition accepts only a string value. + +For example, the following condition checks if an error is part of the transaction status: + +```yaml +contains: + status: "Specific error" +``` + + +#### `regexp` [condition-regexp] + +The `regexp` condition checks the field against a regular expression. The condition accepts only strings. + +For example, the following condition checks if the process name starts with `foo`: + +```yaml +regexp: + system.process.name: "^foo.*" +``` + + +#### `range` [condition-range] + +The `range` condition checks if the field is in a certain range of values. The condition supports `lt`, `lte`, `gt` and `gte`. The condition accepts only integer, float, or strings that can be converted to either of these as values. + +For example, the following condition checks for failed HTTP transactions by comparing the `http.response.code` field with 400. + +```yaml +range: + http.response.code: + gte: 400 +``` + +This can also be written as: + +```yaml +range: + http.response.code.gte: 400 +``` + +The following condition checks if the CPU usage in percentage has a value between 0.5 and 0.8. + +```yaml +range: + system.cpu.user.pct.gte: 0.5 + system.cpu.user.pct.lt: 0.8 +``` + + +#### `network` [condition-network] + +The `network` condition checks whether a field’s value falls within a specified IP network range. If multiple fields are provided, each field value must match its corresponding network range. You can specify multiple network ranges for a single field, and a match occurs if any one of the ranges matches. If the field value is an array of IPs, it will match if any of the IPs fall within any of the given ranges. Both IPv4 and IPv6 addresses are supported. + +The network range may be specified using CIDR notation, like "192.0.2.0/24" or "2001:db8::/32", or by using one of these named ranges: + +* `loopback` - Matches loopback addresses in the range of `127.0.0.0/8` or `::1/128`. +* `unicast` - Matches global unicast addresses defined in RFC 1122, RFC 4632, and RFC 4291 with the exception of the IPv4 broadcast address (`255.255.255.255`). This includes private address ranges. +* `multicast` - Matches multicast addresses. +* `interface_local_multicast` - Matches IPv6 interface-local multicast addresses. +* `link_local_unicast` - Matches link-local unicast addresses. +* `link_local_multicast` - Matches link-local multicast addresses. +* `private` - Matches private address ranges defined in RFC 1918 (IPv4) and RFC 4193 (IPv6). +* `public` - Matches addresses that are not loopback, unspecified, IPv4 broadcast, link local unicast, link local multicast, interface local multicast, or private. 
* `unspecified` - Matches unspecified addresses (either the IPv4 address "0.0.0.0" or the IPv6 address "::").

The following condition returns true if the `source.ip` value is within the private address space.

```yaml
network:
  source.ip: private
```

This condition returns true if the `destination.ip` value is within the IPv4 range of `192.168.1.0` - `192.168.1.255`.

```yaml
network:
  destination.ip: '192.168.1.0/24'
```

And this condition returns true when `destination.ip` is within any of the given subnets.

```yaml
network:
  destination.ip: ['192.168.1.0/24', '10.0.0.0/8', loopback]
```


#### `has_fields` [condition-has_fields]

The `has_fields` condition checks if all the given fields exist in the event. The condition accepts a list of string values denoting the field names.

For example, the following condition checks if the `http.response.code` field is present in the event.

```yaml
has_fields: ['http.response.code']
```


#### `or` [condition-or]

The `or` operator receives a list of conditions.

```yaml
or:
  - <condition1>
  - <condition2>
  - <condition3>
  ...
```

For example, to configure the condition `http.response.code = 304 OR http.response.code = 404`:

```yaml
or:
  - equals:
      http.response.code: 304
  - equals:
      http.response.code: 404
```


#### `and` [condition-and]

The `and` operator receives a list of conditions.

```yaml
and:
  - <condition1>
  - <condition2>
  - <condition3>
  ...
```

For example, to configure the condition `http.response.code = 200 AND status = OK`:

```yaml
and:
  - equals:
      http.response.code: 200
  - equals:
      status: OK
```

To configure a condition like `<condition1> OR <condition2> AND <condition3>`:

```yaml
or:
  - <condition1>
  - and:
      - <condition2>
      - <condition3>
```


#### `not` [condition-not]

The `not` operator receives the condition to negate.

```yaml
not:
  <condition>
```

For example, to configure the condition `NOT status = OK`:

```yaml
not:
  equals:
    status: OK
```


diff --git a/docs/reference/filebeat/detect-mime-type.md b/docs/reference/filebeat/detect-mime-type.md
new file mode 100644
index 000000000000..d2430ff391e9
--- /dev/null
+++ b/docs/reference/filebeat/detect-mime-type.md
@@ -0,0 +1,22 @@
---
navigation_title: "detect_mime_type"
mapped_pages:
  - https://www.elastic.co/guide/en/beats/filebeat/current/detect-mime-type.html
---

# Detect mime type [detect-mime-type]


The `detect_mime_type` processor attempts to detect a MIME type for a field that contains a given stream of bytes. The `field` key contains the field used as the data source, and the `target` key contains the field to populate with the detected type. You can use the `@metadata.` prefix in `target` to set the value in the event metadata instead of in the fields.

```yaml
processors:
  - detect_mime_type:
      field: http.request.body.content
      target: http.request.mime_type
```

In the example above, `http.request.body.content` is used as the source, and `http.request.mime_type` is set to the detected MIME type.

See [Conditions](/reference/filebeat/defining-processors.md#conditions) for a list of supported conditions.
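A variant sketch of the same example, using the `@metadata.` prefix mentioned above so that the detected type is stored in the event metadata rather than in the fields:

```yaml
processors:
  - detect_mime_type:
      field: http.request.body.content
      target: "@metadata.mime_type"  # kept in event metadata; not part of the published document
```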
+ diff --git a/docs/reference/filebeat/diff-logstash-beats.md b/docs/reference/filebeat/diff-logstash-beats.md new file mode 100644 index 000000000000..214708605b3f --- /dev/null +++ b/docs/reference/filebeat/diff-logstash-beats.md @@ -0,0 +1,13 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/diff-logstash-beats.html +--- + +# Not sure whether to use Logstash or Beats [diff-logstash-beats] + +Beats are lightweight data shippers that you install as agents on your servers to send specific types of operational data to {{es}}. Beats have a small footprint and use fewer system resources than {{ls}}. + +{{ls}} has a larger footprint, but provides a broad array of input, filter, and output plugins for collecting, enriching, and transforming data from a variety of sources. + +For more information, see the [{{ls}} Introduction](logstash://reference/index.md) and the [Beats Overview](/reference/index.md). + diff --git a/docs/reference/filebeat/directory-layout.md b/docs/reference/filebeat/directory-layout.md new file mode 100644 index 000000000000..1c50acf899c0 --- /dev/null +++ b/docs/reference/filebeat/directory-layout.md @@ -0,0 +1,70 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/directory-layout.html +--- + +# Directory layout [directory-layout] + +The directory layout of an installation is as follows: + +::::{tip} +Archive installation has a different layout. See [zip, tar.gz, or tgz](#directory-layout-archive). +:::: + + +| Type | Description | Default Location | Config Option | +| --- | --- | --- | --- | +| home | Home of the Filebeat installation. | | `path.home` | +| bin | The location for the binary files. | `{path.home}/bin` | | +| config | The location for configuration files. | `{path.home}` | `path.config` | +| data | The location for persistent data files. | `{path.home}/data` | `path.data` | +| logs | The location for the logs created by Filebeat. | `{path.home}/logs` | `path.logs` | + +You can change these settings by using CLI flags or setting [path options](/reference/filebeat/configuration-path.md) in the configuration file. + +## Default paths [_default_paths] + +Filebeat uses the following default paths unless you explicitly change them. + + +#### deb and rpm [_deb_and_rpm] + +| Type | Description | Location | +| --- | --- | --- | +| home | Home of the Filebeat installation. | `/usr/share/filebeat` | +| bin | The location for the binary files. | `/usr/share/filebeat/bin` | +| config | The location for configuration files. | `/etc/filebeat` | +| data | The location for persistent data files. | `/var/lib/filebeat` | +| logs | The location for the logs created by Filebeat. | `/var/log/filebeat` | + +For the deb and rpm distributions, these paths are set in the init script or in the systemd unit file. Make sure that you start the Filebeat service by using the preferred operating system method (init scripts or `systemctl`). Otherwise the paths might be set incorrectly. + + +#### docker [_docker] + +| Type | Description | Location | +| --- | --- | --- | +| home | Home of the Filebeat installation. | `/usr/share/filebeat` | +| bin | The location for the binary files. | `/usr/share/filebeat` | +| config | The location for configuration files. | `/usr/share/filebeat` | +| data | The location for persistent data files. | `/usr/share/filebeat/data` | +| logs | The location for the logs created by Filebeat. 
| `/usr/share/filebeat/logs` | + + +#### zip, tar.gz, or tgz [directory-layout-archive] + +| Type | Description | Location | +| --- | --- | --- | +| home | Home of the Filebeat installation. | `{extract.path}` | +| bin | The location for the binary files. | `{extract.path}` | +| config | The location for configuration files. | `{extract.path}` | +| data | The location for persistent data files. | `{extract.path}/data` | +| logs | The location for the logs created by Filebeat. | `{extract.path}/logs` | + +For the zip, tar.gz, or tgz distributions, these paths are based on the location of the extracted binary file. This means that if you start Filebeat with the following simple command, all paths are set correctly: + +```sh +./filebeat +``` + + diff --git a/docs/reference/filebeat/discard-output.md b/docs/reference/filebeat/discard-output.md new file mode 100644 index 000000000000..69a26343d49e --- /dev/null +++ b/docs/reference/filebeat/discard-output.md @@ -0,0 +1,37 @@ +--- +navigation_title: "Discard" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/discard-output.html +--- + +# Configure the Discard output [discard-output] + + +The Discard output throws away data. + +::::{warning} +The Discard output should be used only for development or debugging issues. Data is lost. +:::: + + +This can be useful if you want to work on your input configuration without needing to configure an output. It can also be useful to test how changes in input and processor configuration affect performance. + +Example configuration: + +```yaml +output.discard: + enabled: true +``` + +## Configuration options [_configuration_options_31] + +You can specify the following `output.discard` options in the `filebeat.yml` config file: + +### `enabled` [_enabled_36] + +The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled. + +The default value is `true`. + + + diff --git a/docs/reference/filebeat/dissect.md b/docs/reference/filebeat/dissect.md new file mode 100644 index 000000000000..8603d041a692 --- /dev/null +++ b/docs/reference/filebeat/dissect.md @@ -0,0 +1,95 @@ +--- +navigation_title: "dissect" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/dissect.html +--- + +# Dissect strings [dissect] + + +The `dissect` processor tokenizes incoming strings using defined patterns. + +```yaml +processors: + - dissect: + tokenizer: "%{key1} %{key2} %{key3|convert_datatype}" + field: "message" + target_prefix: "dissect" +``` + +The `dissect` processor has the following configuration settings: + +`tokenizer` +: The field used to define the **dissection** pattern. Optional convert datatype can be provided after the key using `|` as separator to convert the value from string to integer, long, float, double, boolean or ip. + +`field` +: (Optional) The event field to tokenize. Default is `message`. + +`target_prefix` +: (Optional) The name of the field where the values will be extracted. When an empty string is defined, the processor will create the keys at the root of the event. Default is `dissect`. When the target key already exists in the event, the processor won’t replace it and log an error; you need to either drop or rename the key before using dissect, or enable the `overwrite_keys` flag. + +`ignore_failure` +: (Optional) Flag to control whether the processor returns an error if the tokenizer fails to match the message field. 
If set to `true`, the processor silently restores the original event, allowing execution of subsequent processors (if any). If set to `false` (the default), the processor logs an error, preventing execution of other processors.

`overwrite_keys`
: (Optional) When set to `true`, the processor will overwrite existing keys in the event. The default is `false`, which causes the processor to fail when a key already exists.

`trim_values`
: (Optional) Enables the trimming of the extracted values. Useful to remove leading and/or trailing spaces. Possible values are:

    * `none`: (default) no trimming is performed.
    * `left`: values are trimmed on the left (leading).
    * `right`: values are trimmed on the right (trailing).
    * `all`: values are trimmed for leading and trailing.


`trim_chars`
: (Optional) Set of characters to trim from values, when trimming is enabled. The default is to trim the space character (`" "`). To trim multiple characters, simply set it to a string containing all characters to trim. For example, `trim_chars: " \t"` will trim spaces and/or tabs.

For tokenization to be successful, all keys must be found and extracted. If any of them cannot be found, an error is logged and no modification is done on the original event.

::::{note}
A key can contain any characters except reserved suffix or prefix modifiers: `/`,`&`, `+`, `#` and `?`.
::::


See [Conditions](/reference/filebeat/defining-processors.md#conditions) for a list of supported conditions.

## Dissect example [dissect-example]

For this example, imagine that an application generates the following messages:

```sh
"321 - App01 - WebServer is starting"
"321 - App01 - WebServer is up and running"
"321 - App01 - WebServer is scaling 2 pods"
"789 - App02 - Database will be restarted in 5 minutes"
"789 - App02 - Database is up and running"
"789 - App02 - Database is refreshing tables"
```

Use the `dissect` processor to split each message into three fields, for example, `service.pid`, `service.name` and `service.status`:

```yaml
processors:
  - dissect:
      tokenizer: '"%{service.pid|integer} - %{service.name} - %{service.status}"'
      field: "message"
      target_prefix: ""
```

This configuration produces fields like:

```json
"service": {
  "pid": 321,
  "name": "App01",
  "status": "WebServer is up and running"
},
```

`service.name` is an ECS [keyword field](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md), which means that you can use it in {{es}} for filtering, sorting, and aggregations.

When possible, use ECS-compatible field names. For more information, see the [Elastic Common Schema](ecs://reference/index.md) documentation.


diff --git a/docs/reference/filebeat/drop-event.md b/docs/reference/filebeat/drop-event.md
new file mode 100644
index 000000000000..06f39ed06764
--- /dev/null
+++ b/docs/reference/filebeat/drop-event.md
@@ -0,0 +1,20 @@
---
navigation_title: "drop_event"
mapped_pages:
  - https://www.elastic.co/guide/en/beats/filebeat/current/drop-event.html
---

# Drop events [drop-event]


The `drop_event` processor drops the entire event if the associated condition is fulfilled. The condition is mandatory, because without one, all the events are dropped.

```yaml
processors:
  - drop_event:
      when:
        condition
```

See [Conditions](/reference/filebeat/defining-processors.md#conditions) for a list of supported conditions.
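As a concrete sketch (the field and value are illustrative), the following drops every event whose HTTP response code is 200:

```yaml
processors:
  - drop_event:
      when:
        equals:
          http.response.code: 200
```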
+ diff --git a/docs/reference/filebeat/drop-fields.md b/docs/reference/filebeat/drop-fields.md new file mode 100644 index 000000000000..51558ad4550d --- /dev/null +++ b/docs/reference/filebeat/drop-fields.md @@ -0,0 +1,35 @@ +--- +navigation_title: "drop_fields" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/drop-fields.html +--- + +# Drop fields from events [drop-fields] + + +The `drop_fields` processor specifies which fields to drop if a certain condition is fulfilled. The condition is optional. If it’s missing, the specified fields are always dropped. The `@timestamp` and `type` fields cannot be dropped, even if they show up in the `drop_fields` list. + +```yaml +processors: + - drop_fields: + when: + condition + fields: ["field1", "field2", ...] + ignore_missing: false +``` + +See [Conditions](/reference/filebeat/defining-processors.md#conditions) for a list of supported conditions. + +::::{note} +If you define an empty list of fields under `drop_fields`, then no fields are dropped. +:::: + + +The `drop_fields` processor has the following configuration settings: + +`fields` +: If non-empty, a list of matching field names will be removed. Any element in array can contain a regular expression delimited by two slashes (*/reg_exp/*), in order to match (name) and remove more than one field. + +`ignore_missing` +: (Optional) If `true` the processor will not return an error when a specified field does not exist. Defaults to `false`. + diff --git a/docs/reference/filebeat/elasticsearch-output.md b/docs/reference/filebeat/elasticsearch-output.md new file mode 100644 index 000000000000..803fd05da3e5 --- /dev/null +++ b/docs/reference/filebeat/elasticsearch-output.md @@ -0,0 +1,516 @@ +--- +navigation_title: "Elasticsearch" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html +--- + +# Configure the Elasticsearch output [elasticsearch-output] + + +The Elasticsearch output sends events directly to Elasticsearch using the Elasticsearch HTTP API. + +Example configuration: + +```yaml +output.elasticsearch: + hosts: ["https://myEShost:9200"] <1> +``` + +1. To enable SSL, add `https` to all URLs defined under *hosts*. + + +When sending data to a secured cluster through the `elasticsearch` output, Filebeat can use any of the following authentication methods: + +* Basic authentication credentials (username and password). +* Token-based (API key) authentication. +* Public Key Infrastructure (PKI) certificates. + +**Basic authentication:** + +```yaml +output.elasticsearch: + hosts: ["https://myEShost:9200"] + username: "filebeat_writer" + password: "{pwd}" +``` + +**API key authentication:** + +```yaml +output.elasticsearch: + hosts: ["https://myEShost:9200"] + api_key: "ZCV7VnwBgnX0T19fN8Qe:KnR6yE41RrSowb0kQ0HWoA" +``` + +**PKI certificate authentication:** + +```yaml +output.elasticsearch: + hosts: ["https://myEShost:9200"] + ssl.certificate: "/etc/pki/client/cert.pem" + ssl.key: "/etc/pki/client/cert.key" +``` + +See [*Secure communication with Elasticsearch*](/reference/filebeat/securing-communication-elasticsearch.md) for details on each authentication method. + +## Compatibility [_compatibility] + +This output works with all compatible versions of Elasticsearch. See the [Elastic Support Matrix](https://www.elastic.co/support/matrix#matrix_compatibility). + +Optionally, you can set Filebeat to only connect to instances that are at least on the same version as the Beat. 
The check can be enabled by setting `output.elasticsearch.allow_older_versions` to `false`. Leaving the setting at its default value of `true` avoids an issue where Filebeat cannot connect to {{es}} after it has been upgraded to a version higher than the {{stack}}.


## Configuration options [_configuration_options_25]

You can specify the following options in the `elasticsearch` section of the `filebeat.yml` config file:

### `enabled` [_enabled_30]

The enabled config is a boolean setting to enable or disable the output. If set to `false`, the output is disabled.

The default value is `true`.


### `hosts` [hosts-option]

The list of Elasticsearch nodes to connect to. The events are distributed to these nodes in round-robin order. If one node becomes unreachable, the event is automatically sent to another node. Each Elasticsearch node can be defined as a `URL` or `IP:PORT`. For example: `http://192.15.3.2`, `https://es.found.io:9230` or `192.24.3.2:9300`. If no port is specified, `9200` is used.

::::{note}
When a node is defined as an `IP:PORT`, the *scheme* and *path* are taken from the [`protocol`](#protocol-option) and [`path`](#path-option) config options.
::::


```yaml
output.elasticsearch:
  hosts: ["10.45.3.2:9220", "10.45.3.1:9230"]
  protocol: https
  path: /elasticsearch
```

In the previous example, the Elasticsearch nodes are available at `https://10.45.3.2:9220/elasticsearch` and `https://10.45.3.1:9230/elasticsearch`.


### `compression_level` [compression-level-option]

The gzip compression level. Setting this value to `0` disables compression. The compression level must be in the range of `1` (best speed) to `9` (best compression).

Increasing the compression level will reduce the network usage but will increase the CPU usage.

The default value is `1`.


### `escape_html` [_escape_html]

Configure escaping of HTML in strings. Set to `true` to enable escaping.

The default value is `false`.


### `worker` or `workers` [worker-option]

The number of workers per configured host publishing events to Elasticsearch. This is best used with load balancing mode enabled. Example: If you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host); a combined sketch appears after the `api_key` option below.

The default value is `1`.


### `loadbalance` [_loadbalance]

When `loadbalance: true` is set, Filebeat connects to all configured hosts and sends data through all connections in parallel. If a connection fails, data is sent to the remaining hosts until it can be reestablished. Data will still be sent as long as Filebeat can connect to at least one of its configured hosts.

When `loadbalance: false` is set, Filebeat sends data to a single host at a time. The target host is chosen at random from the list of configured hosts, and all data is sent to that target until the connection fails, when a new target is selected. Data will still be sent as long as Filebeat can connect to at least one of its configured hosts.

The default value is `true`.

```yaml
output.elasticsearch:
  hosts: ["localhost:9200", "localhost:9201"]
  loadbalance: true
```


### `api_key` [_api_key]

Instead of using a username and password, you can use API keys to secure communication with {{es}}. The value must be the ID of the API key and the API key joined by a colon: `id:api_key`.

See [*Grant access using API keys*](/reference/filebeat/beats-api-keys.md) for more information.
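As the combined sketch promised above (the host addresses are placeholders), two nodes with three workers each, load-balanced, with default compression:

```yaml
output.elasticsearch:
  hosts: ["10.45.3.2:9220", "10.45.3.1:9230"]
  loadbalance: true
  worker: 3             # 2 hosts x 3 workers = 6 publishing workers in total
  compression_level: 1  # gzip level 1: best speed
```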
+ + +### `username` [_username_3] + +The basic authentication username for connecting to Elasticsearch. + +This user needs the privileges required to publish events to {{es}}. To create a user like this, see [Create a *publishing* user](/reference/filebeat/privileges-to-publish-events.md). + + +### `password` [_password_3] + +The basic authentication password for connecting to Elasticsearch. + + +### `parameters` [_parameters] + +Dictionary of HTTP parameters to pass within the url with index operations. + + +### `protocol` [protocol-option] + +The name of the protocol Elasticsearch is reachable on. The options are: `http` or `https`. The default is `http`. However, if you specify a URL for [`hosts`](#hosts-option), the value of `protocol` is overridden by whatever scheme you specify in the URL. + + +### `path` [path-option] + +An HTTP path prefix that is prepended to the HTTP API calls. This is useful for the cases where Elasticsearch listens behind an HTTP reverse proxy that exports the API under a custom prefix. + + +### `headers` [_headers] + +Custom HTTP headers to add to each request created by the Elasticsearch output. Example: + +```yaml +output.elasticsearch.headers: + X-My-Header: Header contents +``` + +It is possible to specify multiple header values for the same header name by separating them with a comma. + + +### `proxy_disable` [_proxy_disable] + +If set to `true` all proxy settings, including `HTTP_PROXY` and `HTTPS_PROXY` variables are ignored. + + +### `proxy_url` [_proxy_url_2] + +The URL of the proxy to use when connecting to the Elasticsearch servers. The value must be a complete URL. If a value is not specified through the configuration file then proxy environment variables are used. See the [Go documentation](https://golang.org/pkg/net/http/#ProxyFromEnvironment) for more information about the environment variables. + + +### `proxy_headers` [_proxy_headers_2] + +Additional headers to send to proxies during CONNECT requests. + + +### `index` [index-option-es] + +The indexing target to write events to. Can point to an [index](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-mgmt.html), [alias](docs-content://manage-data/data-store/aliases.md), or [data stream](docs-content://manage-data/data-store/data-streams.md). When using daily indices, this will be the index name. The default is `"filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"`, for example, `"filebeat-9.0.0-beta1-2025-01-30"`. If you change this setting, you also need to configure the `setup.template.name` and `setup.template.pattern` options (see [Elasticsearch index template](/reference/filebeat/configuration-template.md)). + +If you are using the pre-built Kibana dashboards, you also need to set the `setup.dashboards.index` option (see [Kibana dashboards](/reference/filebeat/configuration-dashboards.md)). + +When [index lifecycle management (ILM)](/reference/filebeat/ilm.md) is enabled, the default `index` is `"filebeat-%{[agent.version]}-%{+yyyy.MM.dd}-%{{index_num}}"`, for example, `"filebeat-9.0.0-beta1-2025-01-30-000001"`. Custom `index` settings are ignored when ILM is enabled. If you’re sending events to a cluster that supports index lifecycle management, see [Index lifecycle management (ILM)](/reference/filebeat/ilm.md) to learn how to change the index name. + +You can set the index dynamically by using a format string to access any event field. 
For example, this configuration uses a custom field, `fields.log_type`, to set the index: + +```yaml +output.elasticsearch: + hosts: ["http://localhost:9200"] + index: "%{[fields.log_type]}-%{[agent.version]}-%{+yyyy.MM.dd}" <1> +``` + +1. We recommend including `agent.version` in the name to avoid mapping issues when you upgrade. + + +With this configuration, all events with `log_type: normal` are sent to an index named `normal-9.0.0-beta1-2025-01-30`, and all events with `log_type: critical` are sent to an index named `critical-9.0.0-beta1-2025-01-30`. + +::::{tip} +To learn how to add custom fields to events, see the [`fields`](/reference/filebeat/configuration-general-options.md#libbeat-configuration-fields) option. +:::: + + +See the [`indices`](#indices-option-es) setting for other ways to set the index dynamically. + + +### `indices` [indices-option-es] + +An array of index selector rules. Each rule specifies the index to use for events that match the rule. During publishing, Filebeat uses the first matching rule in the array. Rules can contain conditionals, format string-based fields, and name mappings. If the `indices` setting is missing or no rule matches, the [`index`](#index-option-es) setting is used. + +Similar to `index`, defining custom `indices` will disable [Index lifecycle management (ILM)](/reference/filebeat/ilm.md). + +Rule settings: + +**`index`** +: The index format string to use. If this string contains field references, such as `%{[fields.name]}`, the fields must exist, or the rule fails. + +**`mappings`** +: A dictionary that takes the value returned by `index` and maps it to a new name. + +**`default`** +: The default string value to use if `mappings` does not find a match. + +**`when`** +: A condition that must succeed in order to execute the current rule. All the [conditions](/reference/filebeat/defining-processors.md#conditions) supported by processors are also supported here. + +The following example sets the index based on whether the `message` field contains the specified string: + +```yaml +output.elasticsearch: + hosts: ["http://localhost:9200"] + indices: + - index: "warning-%{[agent.version]}-%{+yyyy.MM.dd}" + when.contains: + message: "WARN" + - index: "error-%{[agent.version]}-%{+yyyy.MM.dd}" + when.contains: + message: "ERR" +``` + +This configuration results in indices named `warning-9.0.0-beta1-2025-01-30` and `error-9.0.0-beta1-2025-01-30` (plus the default index if no matches are found). + +The following example sets the index by taking the name returned by the `index` format string and mapping it to a new name that’s used for the index: + +```yaml +output.elasticsearch: + hosts: ["http://localhost:9200"] + indices: + - index: "%{[fields.log_type]}" + mappings: + critical: "sev1" + normal: "sev2" + default: "sev3" +``` + +This configuration results in indices named `sev1`, `sev2`, and `sev3`. + +The `mappings` setting simplifies the configuration, but is limited to string values. You cannot specify format strings within the mapping pairs. + + +### `ilm` [ilm-es] + +Configuration options for index lifecycle management. + +See [Index lifecycle management (ILM)](/reference/filebeat/ilm.md) for more information. + + +### `pipeline` [pipeline-option-es] + +A format string value that specifies the ingest pipeline to write events to. + +```yaml +output.elasticsearch: + hosts: ["http://localhost:9200"] + pipeline: my_pipeline_id +``` + +::::{important} +The `pipeline` is always lowercased. 
If `pipeline: Foo-Bar`, then the pipeline name in {{es}} needs to be defined as `foo-bar`. +:::: + + +For more information, see [*Parse data using an ingest pipeline*](/reference/filebeat/configuring-ingest-node.md). + +You can set the ingest pipeline dynamically by using a format string to access any event field. For example, this configuration uses a custom field, `fields.log_type`, to set the pipeline for each event: + +```yaml +output.elasticsearch: + hosts: ["http://localhost:9200"] + pipeline: "%{[fields.log_type]}_pipeline" +``` + +With this configuration, all events with `log_type: normal` are sent to a pipeline named `normal_pipeline`, and all events with `log_type: critical` are sent to a pipeline named `critical_pipeline`. + +::::{tip} +To learn how to add custom fields to events, see the [`fields`](/reference/filebeat/configuration-general-options.md#libbeat-configuration-fields) option. +:::: + + +See the [`pipelines`](#pipelines-option-es) setting for other ways to set the ingest pipeline dynamically. + + +### `pipelines` [pipelines-option-es] + +An array of pipeline selector rules. Each rule specifies the ingest pipeline to use for events that match the rule. During publishing, Filebeat uses the first matching rule in the array. Rules can contain conditionals, format string-based fields, and name mappings. If the `pipelines` setting is missing or no rule matches, the [`pipeline`](#pipeline-option-es) setting is used. + +Rule settings: + +**`pipeline`** +: The pipeline format string to use. If this string contains field references, such as `%{[fields.name]}`, the fields must exist, or the rule fails. + +**`mappings`** +: A dictionary that takes the value returned by `pipeline` and maps it to a new name. + +**`default`** +: The default string value to use if `mappings` does not find a match. + +**`when`** +: A condition that must succeed in order to execute the current rule. All the [conditions](/reference/filebeat/defining-processors.md#conditions) supported by processors are also supported here. + +The following example sends events to a specific pipeline based on whether the `message` field contains the specified string: + +```yaml +output.elasticsearch: + hosts: ["http://localhost:9200"] + pipelines: + - pipeline: "warning_pipeline" + when.contains: + message: "WARN" + - pipeline: "error_pipeline" + when.contains: + message: "ERR" +``` + +The following example sets the pipeline by taking the name returned by the `pipeline` format string and mapping it to a new name that’s used for the pipeline: + +```yaml +output.elasticsearch: + hosts: ["http://localhost:9200"] + pipelines: + - pipeline: "%{[fields.log_type]}" + mappings: + critical: "sev1_pipeline" + normal: "sev2_pipeline" + default: "sev3_pipeline" +``` + +With this configuration, all events with `log_type: critical` are sent to `sev1_pipeline`, all events with `log_type: normal` are sent to a `sev2_pipeline`, and all other events are sent to `sev3_pipeline`. + +For more information about ingest pipelines, see [*Parse data using an ingest pipeline*](/reference/filebeat/configuring-ingest-node.md). + + +### `max_retries` [_max_retries] + +Filebeat ignores the `max_retries` setting and retries indefinitely. + + +### `bulk_max_size` [bulk-max-size-option] + +The maximum number of events to bulk in a single Elasticsearch bulk API index request. The default is 1600. + +Events can be collected into batches. Filebeat will split batches read from the queue which are larger than `bulk_max_size` into multiple batches. 
Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.

Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.


### `backoff.init` [backoff-init-option]

The number of seconds to wait before trying to reconnect to Elasticsearch after a network error. After waiting `backoff.init` seconds, Filebeat tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is `1s`.


### `backoff.max` [backoff-max-option]

The maximum number of seconds to wait before attempting to connect to Elasticsearch after a network error. The default is `60s`.


### `idle_connection_timeout` [idle-connection-timeout-option]

The maximum amount of time an idle connection will remain idle before closing itself. Zero means no limit. The format is a Go language duration (for example, `60s` is 60 seconds). The default is `3s`.


### `timeout` [_timeout_2]

The HTTP request timeout in seconds for the Elasticsearch request. The default is `90`.


### `allow_older_versions` [_allow_older_versions]

By default, Filebeat expects the Elasticsearch instance to be on the same or a newer version to provide an optimal experience. We suggest you connect to the same version to make sure all features Filebeat is using are available in your Elasticsearch instance.

You can disable the check, for example while updating the Elastic Stack, so that data collection can go on.


### `ssl` [_ssl_4]

Configuration options for SSL parameters like the certificate authority to use for HTTPS-based connections. If the `ssl` section is missing, the host CAs are used for HTTPS connections to Elasticsearch.

See the [secure communication with {{es}}](/reference/filebeat/securing-communication-elasticsearch.md) guide or [SSL configuration reference](/reference/filebeat/configuration-ssl.md) for more information.


### `kerberos` [_kerberos_2]

Configuration options for Kerberos authentication.

See [Kerberos](/reference/filebeat/configuration-kerberos.md) for more information.


### `queue` [_queue]

Configuration options for the internal queue.

See [Internal queue](/reference/filebeat/configuring-internal-queue.md) for more information.

Note: `queue` options can be set under `filebeat.yml` or the `output` section, but not both.

### `non_indexable_policy` [_non_indexable_policy]

Specifies the behavior when the Elasticsearch cluster explicitly rejects documents, for example on mapping conflicts.

#### `drop` [_drop]

The default behavior: when an event is explicitly rejected by Elasticsearch, it is dropped.

```yaml
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  non_indexable_policy.drop: ~
```


#### `dead_letter_index` [_dead_letter_index]

::::{warning}
This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
::::


On an explicit rejection, this policy will retry the event in the next batch. However, the target index will change to the index specified.
In addition, the structure of the event will be changed to the following fields:

`message`
: Contains the escaped JSON of the original event.

`error.type`
: Contains the status code.

`error.message`
: Contains the status returned by Elasticsearch, describing the reason.

`index`
: The index to send rejected events to.

```yaml
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  non_indexable_policy.dead_letter_index:
    index: "my-dead-letter-index"
```



### `preset` [_preset]

The performance preset to apply to the output configuration.

```yaml
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  preset: balanced
```

Performance presets apply a set of configuration overrides based on a desired performance goal. If set, a performance preset will override other configuration flags to match the recommended settings for that preset. If a preset doesn't set a value for a particular field, the user-specified value will be used if present, otherwise the default. Valid options are:

* `balanced`: good starting point for general efficiency
* `throughput`: good for high data volumes, may increase CPU and memory requirements
* `scale`: reduces ambient resource use in large low-throughput deployments
* `latency`: minimize the time for fresh data to become visible in Elasticsearch
* `custom`: apply user configuration directly with no overrides

The default if unspecified is `custom`.

Presets represent current recommendations based on the intended goal; their effect may change between versions to better suit those goals. Currently the presets have the following effects:

| preset | balanced | throughput | scale | latency |
| --- | --- | --- | --- | --- |
| [`bulk_max_size`](#bulk-max-size-option) | 1600 | 1600 | 1600 | 50 |
| [`worker`](#worker-option) | 1 | 4 | 1 | 1 |
| [`queue.mem.events`](/reference/filebeat/configuring-internal-queue.md#queue-mem-events-option) | 3200 | 12800 | 3200 | 4100 |
| [`queue.mem.flush.min_events`](/reference/filebeat/configuring-internal-queue.md#queue-mem-flush-min-events-option) | 1600 | 1600 | 1600 | 2050 |
| [`queue.mem.flush.timeout`](/reference/filebeat/configuring-internal-queue.md#queue-mem-flush-timeout-option) | `10s` | `5s` | `20s` | `1s` |
| [`compression_level`](#compression-level-option) | 1 | 1 | 1 | 1 |
| [`idle_connection_timeout`](#idle-connection-timeout-option) | `3s` | `15s` | `1s` | `60s` |
| [`backoff.init`](#backoff-init-option) | none | none | `5s` | none |
| [`backoff.max`](#backoff-max-option) | none | none | `300s` | none |



## Elasticsearch APIs [es-apis]

Filebeat uses the `_bulk` API of {{es}}. Events are sent in the order they arrive at the publishing pipeline, and a single `_bulk` request may contain events from different inputs and modules. Temporary failures are retried.

The status code for each event is checked and handled as:

* `< 300`: The event is counted as `events.acked`
* `409` (Conflict): The event is counted as `events.duplicates`
* `429` (Too Many Requests): The event is counted as `events.toomany`
* `> 399 and < 500`: The `non_indexable_policy` is applied.
+ + diff --git a/docs/reference/filebeat/enable-filebeat-debugging.md b/docs/reference/filebeat/enable-filebeat-debugging.md new file mode 100644 index 000000000000..d15d09a65f31 --- /dev/null +++ b/docs/reference/filebeat/enable-filebeat-debugging.md @@ -0,0 +1,31 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/enable-filebeat-debugging.html +--- + +# Debug [enable-filebeat-debugging] + +By default, Filebeat sends all its output to syslog. When you run Filebeat in the foreground, you can use the `-e` command line flag to redirect the output to standard error instead. For example: + +```sh +filebeat -e +``` + +The default configuration file is filebeat.yml (the location of the file varies by platform). You can use a different configuration file by specifying the `-c` flag. For example: + +```sh +filebeat -e -c myfilebeatconfig.yml +``` + +You can increase the verbosity of debug messages by enabling one or more debug selectors. For example, to view publisher-related messages, start Filebeat with the `publisher` selector: + +```sh +filebeat -e -d "publisher" +``` + +If you want all the debugging output (fair warning, it’s quite a lot), you can use `*`, like this: + +```sh +filebeat -e -d "*" +``` + diff --git a/docs/reference/filebeat/error-found-unexpected-character.md b/docs/reference/filebeat/error-found-unexpected-character.md new file mode 100644 index 000000000000..6ca9214f3605 --- /dev/null +++ b/docs/reference/filebeat/error-found-unexpected-character.md @@ -0,0 +1,13 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/error-found-unexpected-character.html +--- + +# Found unexpected or unknown characters [error-found-unexpected-character] + +Either there is a problem with the structure of your config file, or you have used a path or expression that the YAML parser cannot resolve because the config file contains characters that aren’t properly escaped. + +If the YAML file contains paths with spaces or unusual characters, wrap the paths in single quotation marks (see [Wrap paths in single quotation marks](/reference/filebeat/yaml-tips.md#wrap-paths-in-quotes)). + +Also see the general advice under [*Avoid YAML formatting problems*](/reference/filebeat/yaml-tips.md). + diff --git a/docs/reference/filebeat/error-loading-config.md b/docs/reference/filebeat/error-loading-config.md new file mode 100644 index 000000000000..4eadd353fbab --- /dev/null +++ b/docs/reference/filebeat/error-loading-config.md @@ -0,0 +1,14 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/error-loading-config.html +--- + +# Error loading config file [error-loading-config] + +You may encounter errors loading the config file on POSIX operating systems if: + +* an unauthorized user tries to load the config file, or +* the config file has the wrong permissions. + +See [Config File Ownership and Permissions](/reference/libbeat/config-file-permissions.md) for more about resolving these errors. + diff --git a/docs/reference/filebeat/exported-fields-activemq.md b/docs/reference/filebeat/exported-fields-activemq.md new file mode 100644 index 000000000000..71b9572ca970 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-activemq.md @@ -0,0 +1,44 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-activemq.html +--- + +# ActiveMQ fields [exported-fields-activemq] + +Module for parsing ActiveMQ log files. 
+ + +## activemq [_activemq] + +**`activemq.caller`** +: Name of the caller issuing the logging request (class or resource). + +type: keyword + + +**`activemq.thread`** +: Thread that generated the logging event. + +type: keyword + + +**`activemq.user`** +: User that generated the logging event. + +type: keyword + + + +## audit [_audit] + +Fields from ActiveMQ audit logs. + + +## log [_log] + +Fields from ActiveMQ application logs. + +**`activemq.log.stack_trace`** +: type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-apache.md b/docs/reference/filebeat/exported-fields-apache.md new file mode 100644 index 000000000000..259bb6fee646 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-apache.md @@ -0,0 +1,42 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-apache.html +--- + +# Apache fields [exported-fields-apache] + +Apache Module + + +## apache [_apache] + +Apache fields. + + +## access [_access] + +Contains fields for the Apache HTTP Server access logs. + +**`apache.access.ssl.protocol`** +: SSL protocol version. + +type: keyword + + +**`apache.access.ssl.cipher`** +: SSL cipher name. + +type: keyword + + + +## error [_error] + +Fields from the Apache error logs. + +**`apache.error.module`** +: The module producing the logged message. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-auditd.md b/docs/reference/filebeat/exported-fields-auditd.md new file mode 100644 index 000000000000..86db1a334e57 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-auditd.md @@ -0,0 +1,363 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-auditd.html +--- + +# Auditd fields [exported-fields-auditd] + +Module for parsing auditd logs. + +**`user.terminal`** +: Terminal or tty device on which the user is performing the observed activity. + +type: keyword + + +**`user.audit.id`** +: One or multiple unique identifiers of the user. + +type: keyword + + +**`user.audit.name`** +: Short name or login of the user. + +type: keyword + +example: albert + + +**`user.audit.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`user.audit.group.name`** +: Name of the group. + +type: keyword + + +**`user.filesystem.id`** +: One or multiple unique identifiers of the user. + +type: keyword + + +**`user.filesystem.name`** +: Short name or login of the user. + +type: keyword + +example: albert + + +**`user.filesystem.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`user.filesystem.group.name`** +: Name of the group. + +type: keyword + + +**`user.owner.id`** +: One or multiple unique identifiers of the user. + +type: keyword + + +**`user.owner.name`** +: Short name or login of the user. + +type: keyword + +example: albert + + +**`user.owner.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`user.owner.group.name`** +: Name of the group. + +type: keyword + + +**`user.saved.id`** +: One or multiple unique identifiers of the user. + +type: keyword + + +**`user.saved.name`** +: Short name or login of the user. + +type: keyword + +example: albert + + +**`user.saved.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`user.saved.group.name`** +: Name of the group. + +type: keyword + + + +## auditd [_auditd] + +Fields from the auditd logs. + + +## log [_log_2] + +Fields from the Linux audit log. 
Not all fields are documented here because they are dynamic and vary by audit event type. + +**`auditd.log.old_auid`** +: For login events this is the old audit ID used for the user prior to this login. + + +**`auditd.log.new_auid`** +: For login events this is the new audit ID. The audit ID can be used to trace future events to the user even if their identity changes (like becoming root). + + +**`auditd.log.old_ses`** +: For login events this is the old session ID used for the user prior to this login. + + +**`auditd.log.new_ses`** +: For login events this is the new session ID. It can be used to tie a user to future events by session ID. + + +**`auditd.log.sequence`** +: The audit event sequence number. + +type: long + + +**`auditd.log.items`** +: The number of items in an event. + + +**`auditd.log.item`** +: The item field indicates which item out of the total number of items. This number is zero-based; a value of 0 means it is the first item. + + +**`auditd.log.tty`** +: type: keyword + + +**`auditd.log.a0`** +: The first argument to the system call. + + +**`auditd.log.addr`** +: type: ip + + +**`auditd.log.rport`** +: type: long + + +**`auditd.log.laddr`** +: type: ip + + +**`auditd.log.lport`** +: type: long + + +**`auditd.log.acct`** +: type: alias + +alias to: user.name + + +**`auditd.log.pid`** +: type: alias + +alias to: process.pid + + +**`auditd.log.ppid`** +: type: alias + +alias to: process.parent.pid + + +**`auditd.log.res`** +: type: alias + +alias to: event.outcome + + +**`auditd.log.record_type`** +: type: alias + +alias to: event.action + + +**`auditd.log.geoip.continent_name`** +: type: alias + +alias to: source.geo.continent_name + + +**`auditd.log.geoip.country_iso_code`** +: type: alias + +alias to: source.geo.country_iso_code + + +**`auditd.log.geoip.location`** +: type: alias + +alias to: source.geo.location + + +**`auditd.log.geoip.region_name`** +: type: alias + +alias to: source.geo.region_name + + +**`auditd.log.geoip.city_name`** +: type: alias + +alias to: source.geo.city_name + + +**`auditd.log.geoip.region_iso_code`** +: type: alias + +alias to: source.geo.region_iso_code + + +**`auditd.log.arch`** +: type: alias + +alias to: host.architecture + + +**`auditd.log.gid`** +: type: alias + +alias to: user.group.id + + +**`auditd.log.uid`** +: type: alias + +alias to: user.id + + +**`auditd.log.agid`** +: type: alias + +alias to: user.audit.group.id + + +**`auditd.log.auid`** +: type: alias + +alias to: user.audit.id + + +**`auditd.log.fsgid`** +: type: alias + +alias to: user.filesystem.group.id + + +**`auditd.log.fsuid`** +: type: alias + +alias to: user.filesystem.id + + +**`auditd.log.egid`** +: type: alias + +alias to: user.effective.group.id + + +**`auditd.log.euid`** +: type: alias + +alias to: user.effective.id + + +**`auditd.log.sgid`** +: type: alias + +alias to: user.saved.group.id + + +**`auditd.log.suid`** +: type: alias + +alias to: user.saved.id + + +**`auditd.log.ogid`** +: type: alias + +alias to: user.owner.group.id + + +**`auditd.log.ouid`** +: type: alias + +alias to: user.owner.id + + +**`auditd.log.comm`** +: type: alias + +alias to: process.name + + +**`auditd.log.exe`** +: type: alias + +alias to: process.executable + + +**`auditd.log.terminal`** +: type: alias + +alias to: user.terminal + + +**`auditd.log.msg`** +: type: alias + +alias to: message + + +**`auditd.log.src`** +: type: alias + +alias to: source.address + + +**`auditd.log.dst`** +: type: alias + +alias to: destination.address + + diff --git 
a/docs/reference/filebeat/exported-fields-aws-cloudwatch.md b/docs/reference/filebeat/exported-fields-aws-cloudwatch.md new file mode 100644 index 000000000000..847bb00da8eb --- /dev/null +++ b/docs/reference/filebeat/exported-fields-aws-cloudwatch.md @@ -0,0 +1,32 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-aws-cloudwatch.html +--- + +# AWS CloudWatch fields [exported-fields-aws-cloudwatch] + +Fields from AWS CloudWatch logs. + + +## aws.cloudwatch [_aws_cloudwatch] + +Fields from AWS CloudWatch logs. + +**`aws.cloudwatch.log_group`** +: The name of the log group to which this event belongs. + +type: keyword + + +**`aws.cloudwatch.log_stream`** +: The name of the log stream to which this event belongs. + +type: keyword + + +**`aws.cloudwatch.ingestion_time`** +: The time the event was ingested in AWS CloudWatch. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-aws.md b/docs/reference/filebeat/exported-fields-aws.md new file mode 100644 index 000000000000..6d22d7f397c8 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-aws.md @@ -0,0 +1,788 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-aws.html +--- + +# AWS fields [exported-fields-aws] + +Module for handling logs from AWS. + + +## aws [_aws] + +Fields from AWS logs. + + +## cloudtrail [_cloudtrail] + +Fields for AWS CloudTrail logs. + +**`aws.cloudtrail.event_version`** +: The CloudTrail version of the log event format. + +type: keyword + + + +## user_identity [_user_identity] + +The userIdentity element contains details about the type of IAM identity that made the request, and which credentials were used. If temporary credentials were used, the element shows how the credentials were obtained. + +**`aws.cloudtrail.user_identity.type`** +: The type of the identity + +type: keyword + + +**`aws.cloudtrail.user_identity.arn`** +: The Amazon Resource Name (ARN) of the principal that made the call. + +type: keyword + + +**`aws.cloudtrail.user_identity.access_key_id`** +: The access key ID that was used to sign the request. + +type: keyword + + + +## session_context [_session_context] + +If the request was made with temporary security credentials, an element that provides information about the session that was created for those credentials + +**`aws.cloudtrail.user_identity.session_context.mfa_authenticated`** +: The value is true if the root user or IAM user whose credentials were used for the request also was authenticated with an MFA device; otherwise, false. + +type: keyword + + +**`aws.cloudtrail.user_identity.session_context.creation_date`** +: The date and time when the temporary security credentials were issued. + +type: date + + + +## session_issuer [_session_issuer] + +If the request was made with temporary security credentials, an element that provides information about how the credentials were obtained. + +**`aws.cloudtrail.user_identity.session_context.session_issuer.type`** +: The source of the temporary security credentials, such as Root, IAMUser, or Role. + +type: keyword + + +**`aws.cloudtrail.user_identity.session_context.session_issuer.principal_id`** +: The internal ID of the entity that was used to get credentials. + +type: keyword + + +**`aws.cloudtrail.user_identity.session_context.session_issuer.arn`** +: The ARN of the source (account, IAM user, or role) that was used to get temporary security credentials. 
+ +type: keyword + + +**`aws.cloudtrail.user_identity.session_context.session_issuer.account_id`** +: The account that owns the entity that was used to get credentials. + +type: keyword + + +**`aws.cloudtrail.user_identity.invoked_by`** +: The name of the AWS service that made the request, such as Amazon EC2 Auto Scaling or AWS Elastic Beanstalk. + +type: keyword + + +**`aws.cloudtrail.error_code`** +: The AWS service error if the request returns an error. + +type: keyword + + +**`aws.cloudtrail.error_message`** +: If the request returns an error, the description of the error. + +type: keyword + + +**`aws.cloudtrail.request_parameters`** +: The parameters, if any, that were sent with the request. + +type: keyword + + +**`aws.cloudtrail.request_parameters.text`** +: type: text + + +**`aws.cloudtrail.response_elements`** +: The response element for actions that make changes (create, update, or delete actions). + +type: keyword + + +**`aws.cloudtrail.response_elements.text`** +: type: text + + +**`aws.cloudtrail.additional_eventdata`** +: Additional data about the event that was not part of the request or response. + +type: keyword + + +**`aws.cloudtrail.additional_eventdata.text`** +: type: text + + +**`aws.cloudtrail.request_id`** +: The value that identifies the request. The service being called generates this value. + +type: keyword + + +**`aws.cloudtrail.event_type`** +: Identifies the type of event that generated the event record. + +type: keyword + + +**`aws.cloudtrail.api_version`** +: Identifies the API version associated with the AwsApiCall eventType value. + +type: keyword + + +**`aws.cloudtrail.management_event`** +: A Boolean value that identifies whether the event is a management event. + +type: keyword + + +**`aws.cloudtrail.read_only`** +: Identifies whether this operation is a read-only operation. + +type: keyword + + + +## resources [_resources] + +A list of resources accessed in the event. + +**`aws.cloudtrail.resources.arn`** +: Resource ARNs + +type: keyword + + +**`aws.cloudtrail.resources.account_id`** +: Account ID of the resource owner + +type: keyword + + +**`aws.cloudtrail.resources.type`** +: Resource type identifier in the format: AWS::aws-service-name::data-type-name + +type: keyword + + +**`aws.cloudtrail.recipient_account_id`** +: Represents the account ID that received this event. + +type: keyword + + +**`aws.cloudtrail.service_event_details`** +: Identifies the service event, including what triggered the event and the result. + +type: keyword + + +**`aws.cloudtrail.service_event_details.text`** +: type: text + + +**`aws.cloudtrail.shared_event_id`** +: GUID generated by CloudTrail to uniquely identify CloudTrail events from the same AWS action that is sent to different AWS accounts. + +type: keyword + + +**`aws.cloudtrail.vpc_endpoint_id`** +: Identifies the VPC endpoint in which requests were made from a VPC to another AWS service, such as Amazon S3. + +type: keyword + + +**`aws.cloudtrail.event_category`** +: Shows the event category that is used in LookupEvents calls. + +* For management events, the value is management. +* For data events, the value is data. +* For Insights events, the value is insight. 
+ +type: keyword + + + +## console_login [_console_login] + +Fields specific to ConsoleLogin events + + +## additional_eventdata [_additional_eventdata] + +Additional Event Data for ConsoleLogin events + +**`aws.cloudtrail.console_login.additional_eventdata.mobile_version`** +: Identifies whether ConsoleLogin was from mobile version + +type: boolean + + +**`aws.cloudtrail.console_login.additional_eventdata.login_to`** +: URL for ConsoleLogin + +type: keyword + + +**`aws.cloudtrail.console_login.additional_eventdata.mfa_used`** +: Identifies whether multi factor authentication was used during ConsoleLogin + +type: boolean + + + +## flattened [_flattened] + +ES flattened datatype for objects where the subfields aren’t known in advance. + +**`aws.cloudtrail.flattened.additional_eventdata`** +: Additional data about the event that was not part of the request or response. + +type: flattened + + +**`aws.cloudtrail.flattened.request_parameters`** +: The parameters, if any, that were sent with the request. + +type: flattened + + +**`aws.cloudtrail.flattened.response_elements`** +: The response element for actions that make changes (create, update, or delete actions). + +type: flattened + + +**`aws.cloudtrail.flattened.service_event_details`** +: Identifies the service event, including what triggered the event and the result. + +type: flattened + + + +## digest [_digest] + +Fields from Cloudtrail Digest Logs + +**`aws.cloudtrail.digest.log_files`** +: A list of Logfiles contained in the digest. + +type: nested + + +**`aws.cloudtrail.digest.start_time`** +: The starting UTC time range that the digest file covers, taking as a reference the time in which log files have been delivered by CloudTrail. + +type: date + + +**`aws.cloudtrail.digest.end_time`** +: The ending UTC time range that the digest file covers, taking as a reference the time in which log files have been delivered by CloudTrail. + +type: date + + +**`aws.cloudtrail.digest.s3_bucket`** +: The name of the Amazon S3 bucket to which the current digest file has been delivered. + +type: keyword + + +**`aws.cloudtrail.digest.s3_object`** +: The Amazon S3 object key (that is, the Amazon S3 bucket location) of the current digest file. + +type: keyword + + +**`aws.cloudtrail.digest.newest_event_time`** +: The UTC time of the most recent event among all of the events in the log files in the digest. + +type: date + + +**`aws.cloudtrail.digest.oldest_event_time`** +: The UTC time of the oldest event among all of the events in the log files in the digest. + +type: date + + +**`aws.cloudtrail.digest.previous_s3_bucket`** +: The Amazon S3 bucket to which the previous digest file was delivered. + +type: keyword + + +**`aws.cloudtrail.digest.previous_hash_algorithm`** +: The name of the hash algorithm that was used to hash the previous digest file. + +type: keyword + + +**`aws.cloudtrail.digest.public_key_fingerprint`** +: The hexadecimal encoded fingerprint of the public key that matches the private key used to sign this digest file. + +type: keyword + + +**`aws.cloudtrail.digest.signature_algorithm`** +: The algorithm used to sign the digest file. + +type: keyword + + +**`aws.cloudtrail.insight_details`** +: Shows information about the underlying triggers of an Insights event, such as event source, user agent, statistics, API name, and whether the event is the start or end of the Insights event. + +type: flattened + + + +## cloudwatch [_cloudwatch] + +Fields for AWS CloudWatch logs. + +**`aws.cloudwatch.message`** +: CloudWatch log message. 
+
+type: text
+
+
+
+## ec2 [_ec2]
+
+Fields for AWS EC2 logs in CloudWatch.
+
+**`aws.ec2.ip_address`**
+: The internet address of the requester.
+
+type: keyword
+
+
+
+## elb [_elb]
+
+Fields for AWS ELB logs.
+
+**`aws.elb.name`**
+: The name of the load balancer.
+
+type: keyword
+
+
+**`aws.elb.type`**
+: The type of the load balancer for v2 Load Balancers.
+
+type: keyword
+
+
+**`aws.elb.target_group.arn`**
+: The ARN of the target group handling the request.
+
+type: keyword
+
+
+**`aws.elb.listener`**
+: The ELB listener that received the connection.
+
+type: keyword
+
+
+**`aws.elb.protocol`**
+: The protocol of the load balancer (http or tcp).
+
+type: keyword
+
+
+**`aws.elb.request_processing_time.sec`**
+: The total time in seconds since the connection or request is received until it is sent to a registered backend.
+
+type: float
+
+
+**`aws.elb.backend_processing_time.sec`**
+: The total time in seconds since the connection is sent to the backend till the backend starts responding.
+
+type: float
+
+
+**`aws.elb.response_processing_time.sec`**
+: The total time in seconds since the response is received from the backend till it is sent to the client.
+
+type: float
+
+
+**`aws.elb.connection_time.ms`**
+: The total time of the connection in milliseconds, since it is opened till it is closed.
+
+type: long
+
+
+**`aws.elb.tls_handshake_time.ms`**
+: The total time for the TLS handshake to complete in milliseconds once the connection has been established.
+
+type: long
+
+
+**`aws.elb.backend.ip`**
+: The IP address of the backend processing this connection.
+
+type: keyword
+
+
+**`aws.elb.backend.port`**
+: The port in the backend processing this connection.
+
+type: keyword
+
+
+**`aws.elb.backend.http.response.status_code`**
+: The status code from the backend (the status code sent to the client from ELB is stored in `http.response.status_code`).
+
+type: keyword
+
+
+**`aws.elb.ssl_cipher`**
+: The SSL cipher used in TLS/SSL connections.
+
+type: keyword
+
+
+**`aws.elb.ssl_protocol`**
+: The SSL protocol used in TLS/SSL connections.
+
+type: keyword
+
+
+**`aws.elb.chosen_cert.arn`**
+: The ARN of the chosen certificate presented to the client in TLS/SSL connections.
+
+type: keyword
+
+
+**`aws.elb.chosen_cert.serial`**
+: The serial number of the chosen certificate presented to the client in TLS/SSL connections.
+
+type: keyword
+
+
+**`aws.elb.incoming_tls_alert`**
+: The integer value of TLS alerts received by the load balancer from the client, if present.
+
+type: keyword
+
+
+**`aws.elb.tls_named_group`**
+: The TLS named group.
+
+type: keyword
+
+
+**`aws.elb.trace_id`**
+: The contents of the `X-Amzn-Trace-Id` header.
+
+type: keyword
+
+
+**`aws.elb.matched_rule_priority`**
+: The priority value of the rule that matched the request, if a rule matched.
+
+type: keyword
+
+
+**`aws.elb.action_executed`**
+: The action executed when processing the request (forward, fixed-response, authenticate, and so on). It can contain several values.
+
+type: keyword
+
+
+**`aws.elb.redirect_url`**
+: The URL used if a redirection action was executed.
+
+type: keyword
+
+
+**`aws.elb.error.reason`**
+: The error reason if the executed action failed.
+
+type: keyword
+
+
+**`aws.elb.target_port`**
+: List of IP addresses and ports for the targets that processed this request.
+
+type: keyword
+
+
+**`aws.elb.target_status_code`**
+: List of status codes from the responses of the targets.
+
+type: keyword
+
+
+**`aws.elb.classification`**
+: The classification for desync mitigation.
+ +type: keyword + + +**`aws.elb.classification_reason`** +: The classification reason code. + +type: keyword + + + +## s3access [_s3access] + +Fields for AWS S3 server access logs. + +**`aws.s3access.bucket_owner`** +: The canonical user ID of the owner of the source bucket. + +type: keyword + + +**`aws.s3access.bucket`** +: The name of the bucket that the request was processed against. + +type: keyword + + +**`aws.s3access.remote_ip`** +: The apparent internet address of the requester. + +type: ip + + +**`aws.s3access.requester`** +: The canonical user ID of the requester, or a - for unauthenticated requests. + +type: keyword + + +**`aws.s3access.request_id`** +: A string generated by Amazon S3 to uniquely identify each request. + +type: keyword + + +**`aws.s3access.operation`** +: The operation listed here is declared as SOAP.operation, REST.HTTP_method.resource_type, WEBSITE.HTTP_method.resource_type, or BATCH.DELETE.OBJECT. + +type: keyword + + +**`aws.s3access.key`** +: The "key" part of the request, URL encoded, or "-" if the operation does not take a key parameter. + +type: keyword + + +**`aws.s3access.request_uri`** +: The Request-URI part of the HTTP request message. + +type: keyword + + +**`aws.s3access.http_status`** +: The numeric HTTP status code of the response. + +type: long + + +**`aws.s3access.error_code`** +: The Amazon S3 Error Code, or "-" if no error occurred. + +type: keyword + + +**`aws.s3access.bytes_sent`** +: The number of response bytes sent, excluding HTTP protocol overhead, or "-" if zero. + +type: long + + +**`aws.s3access.object_size`** +: The total size of the object in question. + +type: long + + +**`aws.s3access.total_time`** +: The number of milliseconds the request was in flight from the server’s perspective. + +type: long + + +**`aws.s3access.turn_around_time`** +: The number of milliseconds that Amazon S3 spent processing your request. + +type: long + + +**`aws.s3access.referrer`** +: The value of the HTTP Referrer header, if present. + +type: keyword + + +**`aws.s3access.user_agent`** +: The value of the HTTP User-Agent header. + +type: keyword + + +**`aws.s3access.version_id`** +: The version ID in the request, or "-" if the operation does not take a versionId parameter. + +type: keyword + + +**`aws.s3access.host_id`** +: The x-amz-id-2 or Amazon S3 extended request ID. + +type: keyword + + +**`aws.s3access.signature_version`** +: The signature version, SigV2 or SigV4, that was used to authenticate the request or a - for unauthenticated requests. + +type: keyword + + +**`aws.s3access.cipher_suite`** +: The Secure Sockets Layer (SSL) cipher that was negotiated for HTTPS request or a - for HTTP. + +type: keyword + + +**`aws.s3access.authentication_type`** +: The type of request authentication used, AuthHeader for authentication headers, QueryString for query string (pre-signed URL) or a - for unauthenticated requests. + +type: keyword + + +**`aws.s3access.host_header`** +: The endpoint used to connect to Amazon S3. + +type: keyword + + +**`aws.s3access.tls_version`** +: The Transport Layer Security (TLS) version negotiated by the client. + +type: keyword + + + +## vpcflow [_vpcflow] + +Fields for AWS VPC flow logs. + +**`aws.vpcflow.version`** +: The VPC Flow Logs version. If you use the default format, the version is 2. If you specify a custom format, the version is 3. + +type: keyword + + +**`aws.vpcflow.account_id`** +: The AWS account ID for the flow log. 
+ +type: keyword + + +**`aws.vpcflow.interface_id`** +: The ID of the network interface for which the traffic is recorded. + +type: keyword + + +**`aws.vpcflow.action`** +: The action that is associated with the traffic, ACCEPT or REJECT. + +type: keyword + + +**`aws.vpcflow.log_status`** +: The logging status of the flow log, OK, NODATA or SKIPDATA. + +type: keyword + + +**`aws.vpcflow.instance_id`** +: The ID of the instance that’s associated with network interface for which the traffic is recorded, if the instance is owned by you. + +type: keyword + + +**`aws.vpcflow.pkt_srcaddr`** +: The packet-level (original) source IP address of the traffic. + +type: ip + + +**`aws.vpcflow.pkt_dstaddr`** +: The packet-level (original) destination IP address for the traffic. + +type: ip + + +**`aws.vpcflow.vpc_id`** +: The ID of the VPC that contains the network interface for which the traffic is recorded. + +type: keyword + + +**`aws.vpcflow.subnet_id`** +: The ID of the subnet that contains the network interface for which the traffic is recorded. + +type: keyword + + +**`aws.vpcflow.tcp_flags`** +: The bitmask value for the following TCP flags: 2=SYN,18=SYN-ACK,1=FIN,4=RST + +type: keyword + + +**`aws.vpcflow.tcp_flags_array`** +: List of TCP flags: *fin, syn, rst, psh, ack, urg* + +type: keyword + + +**`aws.vpcflow.type`** +: The type of traffic: IPv4, IPv6, or EFA. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-awsfargate.md b/docs/reference/filebeat/exported-fields-awsfargate.md new file mode 100644 index 000000000000..2f5da54155bc --- /dev/null +++ b/docs/reference/filebeat/exported-fields-awsfargate.md @@ -0,0 +1,19 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-awsfargate.html +--- + +# AWS Fargate fields [exported-fields-awsfargate] + +Module for collecting container logs from Amazon ECS Fargate. + + +## awsfargate [_awsfargate] + +Fields from Amazon ECS Fargate logs. + + +## log [_log_3] + +Fields for Amazon Fargate container logs. + diff --git a/docs/reference/filebeat/exported-fields-azure.md b/docs/reference/filebeat/exported-fields-azure.md new file mode 100644 index 000000000000..c21190a4acbb --- /dev/null +++ b/docs/reference/filebeat/exported-fields-azure.md @@ -0,0 +1,895 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-azure.html +--- + +# Azure fields [exported-fields-azure] + +Azure Module + + +## azure [_azure] + +**`azure.subscription_id`** +: Azure subscription ID + +type: keyword + + +**`azure.correlation_id`** +: Correlation ID + +type: keyword + + +**`azure.tenant_id`** +: tenant ID + +type: keyword + + + +## resource [_resource] + +Resource + +**`azure.resource.id`** +: Resource ID + +type: keyword + + +**`azure.resource.group`** +: Resource group + +type: keyword + + +**`azure.resource.provider`** +: Resource type/namespace + +type: keyword + + +**`azure.resource.namespace`** +: Resource type/namespace + +type: keyword + + +**`azure.resource.name`** +: Name + +type: keyword + + +**`azure.resource.authorization_rule`** +: Authorization rule + +type: keyword + + + +## activitylogs [_activitylogs] + +Fields for Azure activity logs. 
+ +**`azure.activitylogs.identity_name`** +: identity name + +type: keyword + + + +## identity [_identity] + +Identity + + +## claims_initiated_by_user [_claims_initiated_by_user] + +Claims initiated by user + +**`azure.activitylogs.identity.claims_initiated_by_user.name`** +: Name + +type: keyword + + +**`azure.activitylogs.identity.claims_initiated_by_user.givenname`** +: Givenname + +type: keyword + + +**`azure.activitylogs.identity.claims_initiated_by_user.surname`** +: Surname + +type: keyword + + +**`azure.activitylogs.identity.claims_initiated_by_user.fullname`** +: Fullname + +type: keyword + + +**`azure.activitylogs.identity.claims_initiated_by_user.schema`** +: Schema + +type: keyword + + +**`azure.activitylogs.identity.claims.*`** +: Claims + +type: object + + + +## authorization [_authorization] + +Authorization + +**`azure.activitylogs.identity.authorization.scope`** +: Scope + +type: keyword + + +**`azure.activitylogs.identity.authorization.action`** +: Action + +type: keyword + + + +## evidence [_evidence] + +Evidence + +**`azure.activitylogs.identity.authorization.evidence.role_assignment_scope`** +: Role assignment scope + +type: keyword + + +**`azure.activitylogs.identity.authorization.evidence.role_definition_id`** +: Role definition ID + +type: keyword + + +**`azure.activitylogs.identity.authorization.evidence.role`** +: Role + +type: keyword + + +**`azure.activitylogs.identity.authorization.evidence.role_assignment_id`** +: Role assignment ID + +type: keyword + + +**`azure.activitylogs.identity.authorization.evidence.principal_id`** +: Principal ID + +type: keyword + + +**`azure.activitylogs.identity.authorization.evidence.principal_type`** +: Principal type + +type: keyword + + +**`azure.activitylogs.tenant_id`** +: Tenant ID + +type: keyword + + +**`azure.activitylogs.level`** +: Level + +type: long + + +**`azure.activitylogs.operation_version`** +: Operation version + +type: keyword + + +**`azure.activitylogs.operation_name`** +: Operation name + +type: keyword + + +**`azure.activitylogs.result_type`** +: Result type + +type: keyword + + +**`azure.activitylogs.result_signature`** +: Result signature + +type: keyword + + +**`azure.activitylogs.category`** +: Category + +type: keyword + + +**`azure.activitylogs.event_category`** +: Event Category + +type: keyword + + +**`azure.activitylogs.properties`** +: Properties + +type: flattened + + + +## auditlogs [_auditlogs] + +Fields for Azure audit logs. + +**`azure.auditlogs.category`** +: The category of the operation. Currently, Audit is the only supported value. 
+ +type: keyword + + +**`azure.auditlogs.operation_name`** +: The operation name + +type: keyword + + +**`azure.auditlogs.operation_version`** +: The operation version + +type: keyword + + +**`azure.auditlogs.identity`** +: Identity + +type: keyword + + +**`azure.auditlogs.tenant_id`** +: Tenant ID + +type: keyword + + +**`azure.auditlogs.result_signature`** +: Result signature + +type: keyword + + + +## properties [_properties] + +The audit log properties + +**`azure.auditlogs.properties.result`** +: Log result + +type: keyword + + +**`azure.auditlogs.properties.activity_display_name`** +: Activity display name + +type: keyword + + +**`azure.auditlogs.properties.result_reason`** +: Reason for the log result + +type: keyword + + +**`azure.auditlogs.properties.correlation_id`** +: Correlation ID + +type: keyword + + +**`azure.auditlogs.properties.logged_by_service`** +: Logged by service + +type: keyword + + +**`azure.auditlogs.properties.operation_type`** +: Operation type + +type: keyword + + +**`azure.auditlogs.properties.id`** +: ID + +type: keyword + + +**`azure.auditlogs.properties.activity_datetime`** +: Activity timestamp + +type: date + + +**`azure.auditlogs.properties.category`** +: category + +type: keyword + + + +## target_resources.* [_target_resources] + +Target resources + +**`azure.auditlogs.properties.target_resources.*.display_name`** +: Display name + +type: keyword + + +**`azure.auditlogs.properties.target_resources.*.id`** +: ID + +type: keyword + + +**`azure.auditlogs.properties.target_resources.*.type`** +: Type + +type: keyword + + +**`azure.auditlogs.properties.target_resources.*.ip_address`** +: ip Address + +type: keyword + + +**`azure.auditlogs.properties.target_resources.*.user_principal_name`** +: User principal name + +type: keyword + + + +## modified_properties.* [_modified_properties] + +Modified properties + +**`azure.auditlogs.properties.target_resources.*.modified_properties.*.new_value`** +: New value + +type: keyword + + +**`azure.auditlogs.properties.target_resources.*.modified_properties.*.display_name`** +: Display value + +type: keyword + + +**`azure.auditlogs.properties.target_resources.*.modified_properties.*.old_value`** +: Old value + +type: keyword + + + +## initiated_by [_initiated_by] + +Information regarding the initiator + + +## app [_app] + +App + +**`azure.auditlogs.properties.initiated_by.app.servicePrincipalName`** +: Service principal name + +type: keyword + + +**`azure.auditlogs.properties.initiated_by.app.displayName`** +: Display name + +type: keyword + + +**`azure.auditlogs.properties.initiated_by.app.appId`** +: App ID + +type: keyword + + +**`azure.auditlogs.properties.initiated_by.app.servicePrincipalId`** +: Service principal ID + +type: keyword + + + +## user [_user] + +User + +**`azure.auditlogs.properties.initiated_by.user.userPrincipalName`** +: User principal name + +type: keyword + + +**`azure.auditlogs.properties.initiated_by.user.displayName`** +: Display name + +type: keyword + + +**`azure.auditlogs.properties.initiated_by.user.id`** +: ID + +type: keyword + + +**`azure.auditlogs.properties.initiated_by.user.ipAddress`** +: ip Address + +type: keyword + + + +## platformlogs [_platformlogs] + +Fields for Azure platform logs. 
+ +**`azure.platformlogs.operation_name`** +: Operation name + +type: keyword + + +**`azure.platformlogs.result_type`** +: Result type + +type: keyword + + +**`azure.platformlogs.result_signature`** +: Result signature + +type: keyword + + +**`azure.platformlogs.category`** +: Category + +type: keyword + + +**`azure.platformlogs.event_category`** +: Event Category + +type: keyword + + +**`azure.platformlogs.status`** +: Status + +type: keyword + + +**`azure.platformlogs.ccpNamespace`** +: ccpNamespace + +type: keyword + + +**`azure.platformlogs.Cloud`** +: Cloud + +type: keyword + + +**`azure.platformlogs.Environment`** +: Environment + +type: keyword + + +**`azure.platformlogs.EventTimeString`** +: EventTimeString + +type: keyword + + +**`azure.platformlogs.Caller`** +: Caller + +type: keyword + + +**`azure.platformlogs.ScaleUnit`** +: ScaleUnit + +type: keyword + + +**`azure.platformlogs.ActivityId`** +: ActivityId + +type: keyword + + +**`azure.platformlogs.identity_name`** +: Identity name + +type: keyword + + +**`azure.platformlogs.properties`** +: Event inner properties + +type: flattened + + + +## signinlogs [_signinlogs] + +Fields for Azure sign-in logs. + +**`azure.signinlogs.operation_name`** +: The operation name + +type: keyword + + +**`azure.signinlogs.operation_version`** +: The operation version + +type: keyword + + +**`azure.signinlogs.tenant_id`** +: Tenant ID + +type: keyword + + +**`azure.signinlogs.result_signature`** +: Result signature + +type: keyword + + +**`azure.signinlogs.result_description`** +: Result description + +type: keyword + + +**`azure.signinlogs.result_type`** +: Result type + +type: keyword + + +**`azure.signinlogs.identity`** +: Identity + +type: keyword + + +**`azure.signinlogs.category`** +: Category + +type: keyword + + +**`azure.signinlogs.properties.id`** +: Unique ID representing the sign-in activity. + +type: keyword + + +**`azure.signinlogs.properties.created_at`** +: Date and time (UTC) the sign-in was initiated. + +type: date + + +**`azure.signinlogs.properties.user_display_name`** +: User display name + +type: keyword + + +**`azure.signinlogs.properties.correlation_id`** +: Correlation ID + +type: keyword + + +**`azure.signinlogs.properties.user_principal_name`** +: User principal name + +type: keyword + + +**`azure.signinlogs.properties.user_id`** +: User ID + +type: keyword + + +**`azure.signinlogs.properties.app_id`** +: App ID + +type: keyword + + +**`azure.signinlogs.properties.app_display_name`** +: App display name + +type: keyword + + +**`azure.signinlogs.properties.autonomous_system_number`** +: Autonomous system number. 
+ +type: long + + +**`azure.signinlogs.properties.client_app_used`** +: Client app used + +type: keyword + + +**`azure.signinlogs.properties.conditional_access_status`** +: Conditional access status + +type: keyword + + +**`azure.signinlogs.properties.original_request_id`** +: Original request ID + +type: keyword + + +**`azure.signinlogs.properties.is_interactive`** +: Is interactive + +type: boolean + + +**`azure.signinlogs.properties.token_issuer_name`** +: Token issuer name + +type: keyword + + +**`azure.signinlogs.properties.token_issuer_type`** +: Token issuer type + +type: keyword + + +**`azure.signinlogs.properties.processing_time_ms`** +: Processing time in milliseconds + +type: float + + +**`azure.signinlogs.properties.risk_detail`** +: Risk detail + +type: keyword + + +**`azure.signinlogs.properties.risk_level_aggregated`** +: Risk level aggregated + +type: keyword + + +**`azure.signinlogs.properties.risk_level_during_signin`** +: Risk level during signIn + +type: keyword + + +**`azure.signinlogs.properties.risk_state`** +: Risk state + +type: keyword + + +**`azure.signinlogs.properties.resource_display_name`** +: Resource display name + +type: keyword + + +**`azure.signinlogs.properties.status.error_code`** +: Error code + +type: long + + +**`azure.signinlogs.properties.device_detail.device_id`** +: Device ID + +type: keyword + + +**`azure.signinlogs.properties.device_detail.operating_system`** +: Operating system + +type: keyword + + +**`azure.signinlogs.properties.device_detail.browser`** +: Browser + +type: keyword + + +**`azure.signinlogs.properties.device_detail.display_name`** +: Display name + +type: keyword + + +**`azure.signinlogs.properties.device_detail.trust_type`** +: Trust type + +type: keyword + + +**`azure.signinlogs.properties.device_detail.is_compliant`** +: If the device is compliant + +type: boolean + + +**`azure.signinlogs.properties.device_detail.is_managed`** +: If the device is managed + +type: boolean + + +**`azure.signinlogs.properties.applied_conditional_access_policies`** +: A list of conditional access policies that are triggered by the corresponding sign-in activity. + +type: array + + +**`azure.signinlogs.properties.authentication_details`** +: The result of the authentication attempt and additional details on the authentication method. + +type: array + + +**`azure.signinlogs.properties.authentication_processing_details`** +: Additional authentication processing details, such as the agent name in case of PTA/PHS or Server/farm name in case of federated authentication. + +type: flattened + + +**`azure.signinlogs.properties.authentication_protocol`** +: Authentication protocol type. + +type: keyword + + +**`azure.signinlogs.properties.incoming_token_type`** +: Incoming token type. + +type: keyword + + +**`azure.signinlogs.properties.unique_token_identifier`** +: Unique token identifier for the request. + +type: keyword + + +**`azure.signinlogs.properties.authentication_requirement`** +: This holds the highest level of authentication needed through all the sign-in steps, for sign-in to succeed. 
+ +type: keyword + + +**`azure.signinlogs.properties.authentication_requirement_policies`** +: Set of CA policies that apply to this sign-in, each as CA: policy name, and/or MFA: Per-user + +type: flattened + + +**`azure.signinlogs.properties.flagged_for_review`** +: type: boolean + + +**`azure.signinlogs.properties.home_tenant_id`** +: type: keyword + + +**`azure.signinlogs.properties.network_location_details`** +: The network location details including the type of network used and its names. + +type: array + + +**`azure.signinlogs.properties.resource_id`** +: The identifier of the resource that the user signed in to. + +type: keyword + + +**`azure.signinlogs.properties.resource_tenant_id`** +: type: keyword + + +**`azure.signinlogs.properties.risk_event_types`** +: The list of risk event types associated with the sign-in. Possible values: unlikelyTravel, anonymizedIPAddress, maliciousIPAddress, unfamiliarFeatures, malwareInfectedIPAddress, suspiciousIPAddress, leakedCredentials, investigationsThreatIntelligence, generic, or unknownFutureValue. + +type: keyword + + +**`azure.signinlogs.properties.risk_event_types_v2`** +: The list of risk event types associated with the sign-in. Possible values: unlikelyTravel, anonymizedIPAddress, maliciousIPAddress, unfamiliarFeatures, malwareInfectedIPAddress, suspiciousIPAddress, leakedCredentials, investigationsThreatIntelligence, generic, or unknownFutureValue. + +type: keyword + + +**`azure.signinlogs.properties.service_principal_name`** +: The application name used for sign-in. This field is populated when you are signing in using an application. + +type: keyword + + +**`azure.signinlogs.properties.user_type`** +: type: keyword + + +**`azure.signinlogs.properties.service_principal_id`** +: The application identifier used for sign-in. This field is populated when you are signing in using an application. + +type: keyword + + +**`azure.signinlogs.properties.cross_tenant_access_type`** +: type: keyword + + +**`azure.signinlogs.properties.is_tenant_restricted`** +: type: boolean + + +**`azure.signinlogs.properties.sso_extension_version`** +: type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-beat-common.md b/docs/reference/filebeat/exported-fields-beat-common.md new file mode 100644 index 000000000000..a08765d86088 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-beat-common.md @@ -0,0 +1,47 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-beat-common.html +--- + +# Beat fields [exported-fields-beat-common] + +Contains common beat fields available in all event types. + +**`agent.hostname`** +: Deprecated - use agent.name or agent.id to identify an agent. + +type: alias + +alias to: agent.name + + +**`beat.timezone`** +: type: alias + +alias to: event.timezone + + +**`fields`** +: Contains user configurable fields. 
+ +type: object + + +**`beat.name`** +: type: alias + +alias to: host.name + + +**`beat.hostname`** +: type: alias + +alias to: agent.name + + +**`timeseries.instance`** +: Time series instance id + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-cef-module.md b/docs/reference/filebeat/exported-fields-cef-module.md new file mode 100644 index 000000000000..e1262cd78bfd --- /dev/null +++ b/docs/reference/filebeat/exported-fields-cef-module.md @@ -0,0 +1,360 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-cef-module.html +--- + +# CEF fields [exported-fields-cef-module] + +Module for receiving CEF logs over Syslog. The module adds vendor specific fields in addition to the fields the decode_cef processor provides. + + +## forcepoint [_forcepoint] + +Fields for Forcepoint Custom String mappings + +**`forcepoint.virus_id`** +: Virus ID + +type: keyword + + + +## checkpoint [_checkpoint] + +Fields for Check Point custom string mappings. + +**`checkpoint.app_risk`** +: Application risk. + +type: keyword + + +**`checkpoint.app_severity`** +: Application threat severity. + +type: keyword + + +**`checkpoint.app_sig_id`** +: The signature ID which the application was detected by. + +type: keyword + + +**`checkpoint.auth_method`** +: Password authentication protocol used. + +type: keyword + + +**`checkpoint.category`** +: Category. + +type: keyword + + +**`checkpoint.confidence_level`** +: Confidence level determined. + +type: integer + + +**`checkpoint.connectivity_state`** +: Connectivity state. + +type: keyword + + +**`checkpoint.cookie`** +: IKE cookie. + +type: keyword + + +**`checkpoint.dst_phone_number`** +: Destination IP-Phone. + +type: keyword + + +**`checkpoint.email_control`** +: Engine name. + +type: keyword + + +**`checkpoint.email_id`** +: Internal email ID. + +type: keyword + + +**`checkpoint.email_recipients_num`** +: Number of recipients. + +type: long + + +**`checkpoint.email_session_id`** +: Internal email session ID. + +type: keyword + + +**`checkpoint.email_spool_id`** +: Internal email spool ID. + +type: keyword + + +**`checkpoint.email_subject`** +: Email subject. + +type: keyword + + +**`checkpoint.event_count`** +: Number of events associated with the log. + +type: long + + +**`checkpoint.frequency`** +: Scan frequency. + +type: keyword + + +**`checkpoint.icmp_type`** +: ICMP type. + +type: long + + +**`checkpoint.icmp_code`** +: ICMP code. + +type: long + + +**`checkpoint.identity_type`** +: Identity type. + +type: keyword + + +**`checkpoint.incident_extension`** +: Format of original data. + +type: keyword + + +**`checkpoint.integrity_av_invoke_type`** +: Scan invoke type. + +type: keyword + + +**`checkpoint.malware_family`** +: Malware family. + +type: keyword + + +**`checkpoint.peer_gateway`** +: Main IP of the peer Security Gateway. + +type: ip + + +**`checkpoint.performance_impact`** +: Protection performance impact. + +type: integer + + +**`checkpoint.protection_id`** +: Protection malware ID. + +type: keyword + + +**`checkpoint.protection_name`** +: Specific signature name of the attack. + +type: keyword + + +**`checkpoint.protection_type`** +: Type of protection used to detect the attack. + +type: keyword + + +**`checkpoint.scan_result`** +: Scan result. + +type: keyword + + +**`checkpoint.sensor_mode`** +: Sensor mode. + +type: keyword + + +**`checkpoint.severity`** +: Threat severity. + +type: keyword + + +**`checkpoint.spyware_name`** +: Spyware name. 
+ +type: keyword + + +**`checkpoint.spyware_status`** +: Spyware status. + +type: keyword + + +**`checkpoint.subs_exp`** +: The expiration date of the subscription. + +type: date + + +**`checkpoint.tcp_flags`** +: TCP packet flags. + +type: keyword + + +**`checkpoint.termination_reason`** +: Termination reason. + +type: keyword + + +**`checkpoint.update_status`** +: Update status. + +type: keyword + + +**`checkpoint.user_status`** +: User response. + +type: keyword + + +**`checkpoint.uuid`** +: External ID. + +type: keyword + + +**`checkpoint.virus_name`** +: Virus name. + +type: keyword + + +**`checkpoint.voip_log_type`** +: VoIP log types. + +type: keyword + + + +## cef.extensions [_cef_extensions] + +Extra vendor-specific extensions. + +**`cef.extensions.cp_app_risk`** +: type: keyword + + +**`cef.extensions.cp_severity`** +: type: keyword + + +**`cef.extensions.ifname`** +: type: keyword + + +**`cef.extensions.inzone`** +: type: keyword + + +**`cef.extensions.layer_uuid`** +: type: keyword + + +**`cef.extensions.layer_name`** +: type: keyword + + +**`cef.extensions.logid`** +: type: keyword + + +**`cef.extensions.loguid`** +: type: keyword + + +**`cef.extensions.match_id`** +: type: keyword + + +**`cef.extensions.nat_addtnl_rulenum`** +: type: keyword + + +**`cef.extensions.nat_rulenum`** +: type: keyword + + +**`cef.extensions.origin`** +: type: keyword + + +**`cef.extensions.originsicname`** +: type: keyword + + +**`cef.extensions.outzone`** +: type: keyword + + +**`cef.extensions.parent_rule`** +: type: keyword + + +**`cef.extensions.product`** +: type: keyword + + +**`cef.extensions.rule_action`** +: type: keyword + + +**`cef.extensions.rule_uid`** +: type: keyword + + +**`cef.extensions.sequencenum`** +: type: keyword + + +**`cef.extensions.service_id`** +: type: keyword + + +**`cef.extensions.version`** +: type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-cef.md b/docs/reference/filebeat/exported-fields-cef.md new file mode 100644 index 000000000000..cb260b32f100 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-cef.md @@ -0,0 +1,1109 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-cef.html +--- + +# Decode CEF processor fields fields [exported-fields-cef] + +Common Event Format (CEF) data. + + +## cef [_cef] + +By default the `decode_cef` processor writes all data from the CEF message to this `cef` object. It contains the CEF header fields and the extension data. + +**`cef.version`** +: Version of the CEF specification used by the message. + +type: keyword + + +**`cef.device.vendor`** +: Vendor of the device that produced the message. + +type: keyword + + +**`cef.device.product`** +: Product of the device that produced the message. + +type: keyword + + +**`cef.device.version`** +: Version of the product that produced the message. + +type: keyword + + +**`cef.device.event_class_id`** +: Unique identifier of the event type. + +type: keyword + + +**`cef.severity`** +: Importance of the event. The valid string values are Unknown, Low, Medium, High, and Very-High. The valid integer values are 0-3=Low, 4-6=Medium, 7- 8=High, and 9-10=Very-High. + +type: keyword + +example: Very-High + + +**`cef.name`** +: Short description of the event. + +type: keyword + + + +## extensions [_extensions] + +Collection of key-value pairs carried in the CEF extension field. + +**`cef.extensions.agentAddress`** +: The IP address of the ArcSight connector that processed the event. 
+
+type: ip
+
+
+**`cef.extensions.agentDnsDomain`**
+: The DNS domain name of the ArcSight connector that processed the event.
+
+type: keyword
+
+
+**`cef.extensions.agentHostName`**
+: The hostname of the ArcSight connector that processed the event.
+
+type: keyword
+
+
+**`cef.extensions.agentId`**
+: The agent ID of the ArcSight connector that processed the event.
+
+type: keyword
+
+
+**`cef.extensions.agentMacAddress`**
+: The MAC address of the ArcSight connector that processed the event.
+
+type: keyword
+
+
+**`cef.extensions.agentNtDomain`**
+: None
+
+type: keyword
+
+
+**`cef.extensions.agentReceiptTime`**
+: The time at which information about the event was received by the ArcSight connector.
+
+type: date
+
+
+**`cef.extensions.agentTimeZone`**
+: The agent time zone of the ArcSight connector that processed the event.
+
+type: keyword
+
+
+**`cef.extensions.agentTranslatedAddress`**
+: None
+
+type: ip
+
+
+**`cef.extensions.agentTranslatedZoneExternalID`**
+: None
+
+type: keyword
+
+
+**`cef.extensions.agentTranslatedZoneURI`**
+: None
+
+type: keyword
+
+
+**`cef.extensions.agentType`**
+: The agent type of the ArcSight connector that processed the event.
+
+type: keyword
+
+
+**`cef.extensions.agentVersion`**
+: The version of the ArcSight connector that processed the event.
+
+type: keyword
+
+
+**`cef.extensions.agentZoneExternalID`**
+: None
+
+type: keyword
+
+
+**`cef.extensions.agentZoneURI`**
+: None
+
+type: keyword
+
+
+**`cef.extensions.applicationProtocol`**
+: Application level protocol; example values are HTTP, HTTPS, SSHv2, Telnet, POP, IMAP, IMAPS, and so on.
+
+type: keyword
+
+
+**`cef.extensions.baseEventCount`**
+: A count associated with this event. How many times was this same event observed? Count can be omitted if it is 1.
+
+type: long
+
+
+**`cef.extensions.bytesIn`**
+: Number of bytes transferred inbound, relative to the source to destination relationship, meaning that data was flowing from source to destination.
+
+type: long
+
+
+**`cef.extensions.bytesOut`**
+: Number of bytes transferred outbound relative to the source to destination relationship. For example, the byte number of data flowing from the destination to the source.
+
+type: long
+
+
+**`cef.extensions.customerExternalID`**
+: None
+
+type: keyword
+
+
+**`cef.extensions.customerURI`**
+: None
+
+type: keyword
+
+
+**`cef.extensions.destinationAddress`**
+: Identifies the destination address that the event refers to in an IP network. The format is an IPv4 address.
+
+type: ip
+
+
+**`cef.extensions.destinationDnsDomain`**
+: The DNS domain part of the complete fully qualified domain name (FQDN).
+
+type: keyword
+
+
+**`cef.extensions.destinationGeoLatitude`**
+: The latitudinal value from which the destination’s IP address belongs.
+
+type: double
+
+
+**`cef.extensions.destinationGeoLongitude`**
+: The longitudinal value from which the destination’s IP address belongs.
+
+type: double
+
+
+**`cef.extensions.destinationHostName`**
+: Identifies the destination that an event refers to in an IP network. The format should be a fully qualified domain name (FQDN) associated with the destination node, when a node is available.
+
+type: keyword
+
+
+**`cef.extensions.destinationMacAddress`**
+: Six colon-separated hexadecimal numbers.
+
+type: keyword
+
+
+**`cef.extensions.destinationNtDomain`**
+: The Windows domain name of the destination address.
+
+type: keyword
+
+
+**`cef.extensions.destinationPort`**
+: The valid port numbers are between 0 and 65535.
+
+type: long
+
+
+**`cef.extensions.destinationProcessId`**
+: Provides the ID of the destination process associated with the event. For example, if an event contains process ID 105, "105" is the process ID.
+
+type: long
+
+
+**`cef.extensions.destinationProcessName`**
+: The name of the event’s destination process.
+
+type: keyword
+
+
+**`cef.extensions.destinationServiceName`**
+: The service targeted by this event.
+
+type: keyword
+
+
+**`cef.extensions.destinationTranslatedAddress`**
+: Identifies the translated destination that the event refers to in an IP network.
+
+type: ip
+
+
+**`cef.extensions.destinationTranslatedPort`**
+: Port after it was translated, for example, by a firewall. Valid port numbers are 0 to 65535.
+
+type: long
+
+
+**`cef.extensions.destinationTranslatedZoneExternalID`**
+: None
+
+type: keyword
+
+
+**`cef.extensions.destinationTranslatedZoneURI`**
+: The URI for the Translated Zone that the destination asset has been assigned to in ArcSight.
+
+type: keyword
+
+
+**`cef.extensions.destinationUserId`**
+: Identifies the destination user by ID. For example, in UNIX, the root user is generally associated with user ID 0.
+
+type: keyword
+
+
+**`cef.extensions.destinationUserName`**
+: Identifies the destination user by name. This is the user associated with the event’s destination. Email addresses are often mapped into the UserName fields. The recipient is a candidate to put into this field.
+
+type: keyword
+
+
+**`cef.extensions.destinationUserPrivileges`**
+: The typical values are "Administrator", "User", and "Guest". This identifies the destination user’s privileges. In UNIX, for example, activity executed on the root user would be identified with destinationUserPrivileges of "Administrator".
+
+type: keyword
+
+
+**`cef.extensions.destinationZoneExternalID`**
+: None
+
+type: keyword
+
+
+**`cef.extensions.destinationZoneURI`**
+: The URI for the Zone that the destination asset has been assigned to in ArcSight.
+
+type: keyword
+
+
+**`cef.extensions.deviceAction`**
+: Action taken by the device.
+
+type: keyword
+
+
+**`cef.extensions.deviceAddress`**
+: Identifies the device address that an event refers to in an IP network.
+
+type: ip
+
+
+**`cef.extensions.deviceCustomFloatingPoint1Label`**
+: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field.
+
+type: keyword
+
+
+**`cef.extensions.deviceCustomFloatingPoint3Label`**
+: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field.
+
+type: keyword
+
+
+**`cef.extensions.deviceCustomFloatingPoint4Label`**
+: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field.
+
+type: keyword
+
+
+**`cef.extensions.deviceCustomDate1`**
+: One of two timestamp fields available to map fields that do not apply to any other in this dictionary.
+
+type: date
+
+
+**`cef.extensions.deviceCustomDate1Label`**
+: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field.
+
+type: keyword
+
+
+**`cef.extensions.deviceCustomDate2`**
+: One of two timestamp fields available to map fields that do not apply to any other in this dictionary.
+
+type: date
+
+
+**`cef.extensions.deviceCustomDate2Label`**
+: All custom fields have a corresponding label field.
Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceCustomFloatingPoint1`** +: One of four floating point fields available to map fields that do not apply to any other in this dictionary. + +type: double + + +**`cef.extensions.deviceCustomFloatingPoint2`** +: One of four floating point fields available to map fields that do not apply to any other in this dictionary. + +type: double + + +**`cef.extensions.deviceCustomFloatingPoint2Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceCustomFloatingPoint3`** +: One of four floating point fields available to map fields that do not apply to any other in this dictionary. + +type: double + + +**`cef.extensions.deviceCustomFloatingPoint4`** +: One of four floating point fields available to map fields that do not apply to any other in this dictionary. + +type: double + + +**`cef.extensions.deviceCustomIPv6Address1`** +: One of four IPv6 address fields available to map fields that do not apply to any other in this dictionary. + +type: ip + + +**`cef.extensions.deviceCustomIPv6Address1Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceCustomIPv6Address2`** +: One of four IPv6 address fields available to map fields that do not apply to any other in this dictionary. + +type: ip + + +**`cef.extensions.deviceCustomIPv6Address2Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceCustomIPv6Address3`** +: One of four IPv6 address fields available to map fields that do not apply to any other in this dictionary. + +type: ip + + +**`cef.extensions.deviceCustomIPv6Address3Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceCustomIPv6Address4`** +: One of four IPv6 address fields available to map fields that do not apply to any other in this dictionary. + +type: ip + + +**`cef.extensions.deviceCustomIPv6Address4Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceCustomNumber1`** +: One of three number fields available to map fields that do not apply to any other in this dictionary. Use sparingly and seek a more specific, dictionary supplied field when possible. + +type: long + + +**`cef.extensions.deviceCustomNumber1Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceCustomNumber2`** +: One of three number fields available to map fields that do not apply to any other in this dictionary. Use sparingly and seek a more specific, dictionary supplied field when possible. + +type: long + + +**`cef.extensions.deviceCustomNumber2Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. 
+ +type: keyword + + +**`cef.extensions.deviceCustomNumber3`** +: One of three number fields available to map fields that do not apply to any other in this dictionary. Use sparingly and seek a more specific, dictionary supplied field when possible. + +type: long + + +**`cef.extensions.deviceCustomNumber3Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceCustomString1`** +: One of six strings available to map fields that do not apply to any other in this dictionary. Use sparingly and seek a more specific, dictionary supplied field when possible. + +type: keyword + + +**`cef.extensions.deviceCustomString1Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceCustomString2`** +: One of six strings available to map fields that do not apply to any other in this dictionary. Use sparingly and seek a more specific, dictionary supplied field when possible. + +type: keyword + + +**`cef.extensions.deviceCustomString2Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceCustomString3`** +: One of six strings available to map fields that do not apply to any other in this dictionary. Use sparingly and seek a more specific, dictionary supplied field when possible. + +type: keyword + + +**`cef.extensions.deviceCustomString3Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceCustomString4`** +: One of six strings available to map fields that do not apply to any other in this dictionary. Use sparingly and seek a more specific, dictionary supplied field when possible. + +type: keyword + + +**`cef.extensions.deviceCustomString4Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceCustomString5`** +: One of six strings available to map fields that do not apply to any other in this dictionary. Use sparingly and seek a more specific, dictionary supplied field when possible. + +type: keyword + + +**`cef.extensions.deviceCustomString5Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceCustomString6`** +: One of six strings available to map fields that do not apply to any other in this dictionary. Use sparingly and seek a more specific, dictionary supplied field when possible. + +type: keyword + + +**`cef.extensions.deviceCustomString6Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceDirection`** +: Any information about what direction the observed communication has taken. The following values are supported - "0" for inbound or "1" for outbound. + +type: long + + +**`cef.extensions.deviceDnsDomain`** +: The DNS domain part of the complete fully qualified domain name (FQDN). 
+ +type: keyword + + +**`cef.extensions.deviceEventCategory`** +: Represents the category assigned by the originating device. Devices often use their own categorization schema to classify events. Example "/Monitor/Disk/Read". + +type: keyword + + +**`cef.extensions.deviceExternalId`** +: A name that uniquely identifies the device generating this event. + +type: keyword + + +**`cef.extensions.deviceFacility`** +: The facility generating this event. For example, Syslog has an explicit facility associated with every event. + +type: keyword + + +**`cef.extensions.deviceFlexNumber1`** +: One of two alternative number fields available to map fields that do not apply to any other in this dictionary. Use sparingly and seek a more specific, dictionary supplied field when possible. + +type: long + + +**`cef.extensions.deviceFlexNumber1Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceFlexNumber2`** +: One of two alternative number fields available to map fields that do not apply to any other in this dictionary. Use sparingly and seek a more specific, dictionary supplied field when possible. + +type: long + + +**`cef.extensions.deviceFlexNumber2Label`** +: All custom fields have a corresponding label field. Each of these fields is a string and describes the purpose of the custom field. + +type: keyword + + +**`cef.extensions.deviceHostName`** +: The format should be a fully qualified domain name (FQDN) associated with the device node, when a node is available. + +type: keyword + + +**`cef.extensions.deviceInboundInterface`** +: Interface on which the packet or data entered the device. + +type: keyword + + +**`cef.extensions.deviceMacAddress`** +: Six colon-separated hexadecimal numbers. + +type: keyword + + +**`cef.extensions.deviceNtDomain`** +: The Windows domain name of the device address. + +type: keyword + + +**`cef.extensions.deviceOutboundInterface`** +: Interface on which the packet or data left the device. + +type: keyword + + +**`cef.extensions.devicePayloadId`** +: Unique identifier for the payload associated with the event. + +type: keyword + + +**`cef.extensions.deviceProcessId`** +: Provides the ID of the process on the device generating the event. + +type: long + + +**`cef.extensions.deviceProcessName`** +: Process name associated with the event. An example might be the process generating the syslog entry in UNIX. + +type: keyword + + +**`cef.extensions.deviceReceiptTime`** +: The time at which the event related to the activity was received. The format is MMM dd yyyy HH:mm:ss or milliseconds since epoch (Jan 1st 1970). + +type: date + + +**`cef.extensions.deviceTimeZone`** +: The time zone for the device generating the event. + +type: keyword + + +**`cef.extensions.deviceTranslatedAddress`** +: Identifies the translated device address that the event refers to in an IP network. + +type: ip + + +**`cef.extensions.deviceTranslatedZoneExternalID`** +: None + +type: keyword + + +**`cef.extensions.deviceTranslatedZoneURI`** +: The URI for the Translated Zone that the device asset has been assigned to in ArcSight. + +type: keyword + + +**`cef.extensions.deviceZoneExternalID`** +: None + +type: keyword + + +**`cef.extensions.deviceZoneURI`** +: The URI for the Zone that the device asset has been assigned to in ArcSight. + +type: keyword + + +**`cef.extensions.endTime`** +: The time at which the activity related to the event ended.
The format is MMM dd yyyy HH:mm:ss or milliseconds since epoch (Jan 1st 1970). An example would be reporting the end of a session. + +type: date + + +**`cef.extensions.eventId`** +: This is a unique ID that ArcSight assigns to each event. + +type: long + + +**`cef.extensions.eventOutcome`** +: Displays the outcome, usually as *success* or *failure*. + +type: keyword + + +**`cef.extensions.externalId`** +: The ID used by an originating device. They are usually increasing numbers, associated with events. + +type: keyword + + +**`cef.extensions.fileCreateTime`** +: Time when the file was created. + +type: date + + +**`cef.extensions.fileHash`** +: Hash of a file. + +type: keyword + + +**`cef.extensions.fileId`** +: An ID associated with a file, such as the inode. + +type: keyword + + +**`cef.extensions.fileModificationTime`** +: Time when the file was last modified. + +type: date + + +**`cef.extensions.filename`** +: Name of the file only (without its path). + +type: keyword + + +**`cef.extensions.filePath`** +: Full path to the file, including file name itself. + +type: keyword + + +**`cef.extensions.filePermission`** +: Permissions of the file. + +type: keyword + + +**`cef.extensions.fileSize`** +: Size of the file. + +type: long + + +**`cef.extensions.fileType`** +: Type of file (pipe, socket, etc.) + +type: keyword + + +**`cef.extensions.flexDate1`** +: A timestamp field available to map a timestamp that does not apply to any other defined timestamp field in this dictionary. Use all flex fields sparingly and seek a more specific, dictionary supplied field when possible. These fields are typically reserved for customer use and should not be set by vendors unless necessary. + +type: date + + +**`cef.extensions.flexDate1Label`** +: The label field is a string and describes the purpose of the flex field. + +type: keyword + + +**`cef.extensions.flexString1`** +: One of two string fields available to map fields that do not apply to any other in this dictionary. Use sparingly and seek a more specific, dictionary supplied field when possible. These fields are typically reserved for customer use and should not be set by vendors unless necessary. + +type: keyword + + +**`cef.extensions.flexString2`** +: One of two string fields available to map fields that do not apply to any other in this dictionary. Use sparingly and seek a more specific, dictionary supplied field when possible. These fields are typically reserved for customer use and should not be set by vendors unless necessary. + +type: keyword + + +**`cef.extensions.flexString1Label`** +: The label field is a string and describes the purpose of the flex field. + +type: keyword + + +**`cef.extensions.flexString2Label`** +: The label field is a string and describes the purpose of the flex field. + +type: keyword + + +**`cef.extensions.message`** +: An arbitrary message giving more details about the event. Multi-line entries can be produced by using \n as the new line separator. + +type: keyword + + +**`cef.extensions.oldFileCreateTime`** +: Time when old file was created. + +type: date + + +**`cef.extensions.oldFileHash`** +: Hash of the old file. + +type: keyword + + +**`cef.extensions.oldFileId`** +: An ID associated with the old file, such as the inode. + +type: keyword + + +**`cef.extensions.oldFileModificationTime`** +: Time when old file was last modified. + +type: date + + +**`cef.extensions.oldFileName`** +: Name of the old file.
+ +type: keyword + + +**`cef.extensions.oldFilePath`** +: Full path to the old file, including the file name itself. + +type: keyword + + +**`cef.extensions.oldFilePermission`** +: Permissions of the old file. + +type: keyword + + +**`cef.extensions.oldFileSize`** +: Size of the old file. + +type: long + + +**`cef.extensions.oldFileType`** +: Type of the old file (pipe, socket, etc.) + +type: keyword + + +**`cef.extensions.rawEvent`** +: None + +type: keyword + + +**`cef.extensions.Reason`** +: The reason an audit event was generated. For example "bad password" or "unknown user". This could also be an error or return code. Example "0x1234". + +type: keyword + + +**`cef.extensions.requestClientApplication`** +: The User-Agent associated with the request. + +type: keyword + + +**`cef.extensions.requestContext`** +: Description of the content from which the request originated (for example, HTTP Referrer). + +type: keyword + + +**`cef.extensions.requestCookies`** +: Cookies associated with the request. + +type: keyword + + +**`cef.extensions.requestMethod`** +: The HTTP method used to access a URL. + +type: keyword + + +**`cef.extensions.requestUrl`** +: In the case of an HTTP request, this field contains the URL accessed. The URL should contain the protocol as well. + +type: keyword + + +**`cef.extensions.sourceAddress`** +: Identifies the source that an event refers to in an IP network. + +type: ip + + +**`cef.extensions.sourceDnsDomain`** +: The DNS domain part of the complete fully qualified domain name (FQDN). + +type: keyword + + +**`cef.extensions.sourceGeoLatitude`** +: None + +type: double + + +**`cef.extensions.sourceGeoLongitude`** +: None + +type: double + + +**`cef.extensions.sourceHostName`** +: Identifies the source that an event refers to in an IP network. The format should be a fully qualified domain name (FQDN) associated with the source node, when a node is available. Examples: *host* or *host.domain.com*. + +type: keyword + + +**`cef.extensions.sourceMacAddress`** +: Six colon-separated hexadecimal numbers. + +type: keyword + +example: 00:0d:60:af:1b:61 + + +**`cef.extensions.sourceNtDomain`** +: The Windows domain name for the source address. + +type: keyword + + +**`cef.extensions.sourcePort`** +: The valid port numbers are 0 to 65535. + +type: long + + +**`cef.extensions.sourceProcessId`** +: The ID of the source process associated with the event. + +type: long + + +**`cef.extensions.sourceProcessName`** +: The name of the event’s source process. + +type: keyword + + +**`cef.extensions.sourceServiceName`** +: The service that is responsible for generating this event. + +type: keyword + + +**`cef.extensions.sourceTranslatedAddress`** +: Identifies the translated source that the event refers to in an IP network. + +type: ip + + +**`cef.extensions.sourceTranslatedPort`** +: A port number after being translated by, for example, a firewall. Valid port numbers are 0 to 65535. + +type: long + + +**`cef.extensions.sourceTranslatedZoneExternalID`** +: None + +type: keyword + + +**`cef.extensions.sourceTranslatedZoneURI`** +: The URI for the Translated Zone that the source asset has been assigned to in ArcSight. + +type: keyword + + +**`cef.extensions.sourceUserId`** +: Identifies the source user by ID. This is the user associated with the source of the event. For example, in UNIX, the root user is generally associated with user ID 0. + +type: keyword + + +**`cef.extensions.sourceUserName`** +: Identifies the source user by name.
Email addresses are also mapped into the UserName fields. The sender is a candidate to put into this field. + +type: keyword + + +**`cef.extensions.sourceUserPrivileges`** +: The typical values are "Administrator", "User", and "Guest". It identifies the source user’s privileges. In UNIX, for example, activity executed by the root user would be identified with "Administrator". + +type: keyword + + +**`cef.extensions.sourceZoneExternalID`** +: None + +type: keyword + + +**`cef.extensions.sourceZoneURI`** +: The URI for the Zone that the source asset has been assigned to in ArcSight. + +type: keyword + + +**`cef.extensions.startTime`** +: The time when the activity the event referred to started. The format is MMM dd yyyy HH:mm:ss or milliseconds since epoch (Jan 1st 1970). + +type: date + + +**`cef.extensions.transportProtocol`** +: Identifies the Layer-4 protocol used. The possible values are protocols such as TCP or UDP. + +type: keyword + + +**`cef.extensions.type`** +: 0 means base event, 1 means aggregated, 2 means correlation, and 3 means action. This field can be omitted for base events (type 0). + +type: long + + +**`cef.extensions.categoryDeviceType`** +: Device type. Examples - Proxy, IDS, Web Server. + +type: keyword + + +**`cef.extensions.categoryObject`** +: Object that the event is about. For example it can be an operating system, database, file, etc. + +type: keyword + + +**`cef.extensions.categoryBehavior`** +: Action or a behavior associated with an event. It’s what is being done to the object. + +type: keyword + + +**`cef.extensions.categoryTechnique`** +: Technique being used (e.g. /DoS). + +type: keyword + + +**`cef.extensions.categoryDeviceGroup`** +: General device group like Firewall. + +type: keyword + + +**`cef.extensions.categorySignificance`** +: Characterization of the importance of the event. + +type: keyword + + +**`cef.extensions.categoryOutcome`** +: Outcome of the event (e.g. success, failure, or attempt). + +type: keyword + + +**`cef.extensions.managerReceiptTime`** +: When the ArcSight ESM received the event. + +type: date + + +**`source.service.name`** +: Service that is the source of the event. + +type: keyword + + +**`destination.service.name`** +: Service that is the target of the event. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-checkpoint.md b/docs/reference/filebeat/exported-fields-checkpoint.md new file mode 100644 index 000000000000..5905e0056e24 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-checkpoint.md @@ -0,0 +1,2478 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-checkpoint.html +--- + +# Checkpoint fields [exported-fields-checkpoint] + +Fields from the Check Point module. + + +## checkpoint [_checkpoint_2] + +Module for parsing Check Point syslog. + +**`checkpoint.confidence_level`** +: Confidence level determined by ThreatCloud. + +type: integer + + +**`checkpoint.calc_desc`** +: Log description. + +type: keyword + + +**`checkpoint.dst_country`** +: Destination country. + +type: keyword + + +**`checkpoint.dst_user_name`** +: Connected user name on the destination IP. + +type: keyword + + +**`checkpoint.email_id`** +: Email number in the SMTP connection. + +type: keyword + + +**`checkpoint.email_subject`** +: Original email subject. + +type: keyword + + +**`checkpoint.email_session_id`** +: Connection uuid. + +type: keyword + + +**`checkpoint.event_count`** +: Number of events associated with the log.
+ +type: long + + +**`checkpoint.sys_message`** +: System messages. + +type: keyword + + +**`checkpoint.logid`** +: System messages. + +type: keyword + + +**`checkpoint.failure_impact`** +: The impact of update service failure. + +type: keyword + + +**`checkpoint.id`** +: Override application ID. + +type: integer + + +**`checkpoint.identity_src`** +: The source for authentication identity information. + +type: keyword + + +**`checkpoint.information`** +: Policy installation status for a specific blade. + +type: keyword + + +**`checkpoint.layer_name`** +: Layer name. + +type: keyword + + +**`checkpoint.layer_uuid`** +: Layer UUID. + +type: keyword + + +**`checkpoint.log_id`** +: Unique identity for logs. + +type: integer + + +**`checkpoint.malware_family`** +: Additional information on the protection. + +type: keyword + + +**`checkpoint.origin_sic_name`** +: Machine SIC. + +type: keyword + + +**`checkpoint.policy_mgmt`** +: Name of the Management Server that manages this Security Gateway. + +type: keyword + + +**`checkpoint.policy_name`** +: Name of the last policy that this Security Gateway fetched. + +type: keyword + + +**`checkpoint.protection_id`** +: Protection malware ID. + +type: keyword + + +**`checkpoint.protection_name`** +: Specific signature name of the attack. + +type: keyword + + +**`checkpoint.protection_type`** +: Type of protection used to detect the attack. + +type: keyword + + +**`checkpoint.protocol`** +: Protocol detected on the connection. + +type: keyword + + +**`checkpoint.proxy_src_ip`** +: Sender source IP (even when using proxy). + +type: ip + + +**`checkpoint.rule`** +: Matched rule number. + +type: integer + + +**`checkpoint.rule_action`** +: Action of the matched rule in the access policy. + +type: keyword + + +**`checkpoint.scan_direction`** +: Scan direction. + +type: keyword + + +**`checkpoint.session_id`** +: Log uuid. + +type: keyword + + +**`checkpoint.source_os`** +: OS which generated the attack. + +type: keyword + + +**`checkpoint.src_country`** +: Country name, derived from the connection source IP address. + +type: keyword + + +**`checkpoint.src_user_name`** +: User name connected to the source IP. + +type: keyword + + +**`checkpoint.ticket_id`** +: Unique ID per file. + +type: keyword + + +**`checkpoint.tls_server_host_name`** +: SNI/CN from the encrypted TLS connection, used by URLF for categorization. + +type: keyword + + +**`checkpoint.verdict`** +: TE engine verdict. Possible values: Malicious/Benign/Error. + +type: keyword + + +**`checkpoint.user`** +: Source user name. + +type: keyword + + +**`checkpoint.vendor_list`** +: The vendor name that provided the verdict for a malicious URL. + +type: keyword + + +**`checkpoint.web_server_type`** +: Web server detected in the HTTP response. + +type: keyword + + +**`checkpoint.client_name`** +: Client Application or Software Blade that detected the event. + +type: keyword + + +**`checkpoint.client_version`** +: Build version of the SandBlast Agent client installed on the computer. + +type: keyword + + +**`checkpoint.extension_version`** +: Build version of the SandBlast Agent browser extension. + +type: keyword + + +**`checkpoint.host_time`** +: Local time on the endpoint computer. + +type: keyword + + +**`checkpoint.installed_products`** +: List of installed Endpoint Software Blades. + +type: keyword + + +**`checkpoint.cc`** +: The Carbon Copy address of the email. + +type: keyword + + +**`checkpoint.parent_process_username`** +: Owner username of the parent process of the process that triggered the attack.
+ +type: keyword + + +**`checkpoint.process_username`** +: Owner username of the process that triggered the attack. + +type: keyword + + +**`checkpoint.audit_status`** +: Audit Status. Can be Success or Failure. + +type: keyword + + +**`checkpoint.objecttable`** +: Table of affected objects. + +type: keyword + + +**`checkpoint.objecttype`** +: The type of the affected object. + +type: keyword + + +**`checkpoint.operation_number`** +: The operation number. + +type: keyword + + +**`checkpoint.email_recipients_num`** +: Number of recipients to whom the mail was sent. + +type: integer + + +**`checkpoint.suppressed_logs`** +: Aggregated connections for five minutes on the same source, destination and port. + +type: integer + + +**`checkpoint.blade_name`** +: Blade name. + +type: keyword + + +**`checkpoint.status`** +: Ok/Warning/Error. + +type: keyword + + +**`checkpoint.short_desc`** +: Short description of the process that was executed. + +type: keyword + + +**`checkpoint.long_desc`** +: More information on the process (usually describing error reason in failure). + +type: keyword + + +**`checkpoint.scan_hosts_hour`** +: Number of unique hosts during the last hour. + +type: integer + + +**`checkpoint.scan_hosts_day`** +: Number of unique hosts during the last day. + +type: integer + + +**`checkpoint.scan_hosts_week`** +: Number of unique hosts during the last week. + +type: integer + + +**`checkpoint.unique_detected_hour`** +: Detected virus for a specific host during the last hour. + +type: integer + + +**`checkpoint.unique_detected_day`** +: Detected virus for a specific host during the last day. + +type: integer + + +**`checkpoint.unique_detected_week`** +: Detected virus for a specific host during the last week. + +type: integer + + +**`checkpoint.scan_mail`** +: Number of emails that were scanned by the "AB malicious activity" engine. + +type: integer + + +**`checkpoint.additional_ip`** +: DNS host name. + +type: keyword + + +**`checkpoint.description`** +: Additional explanation of how the security gateway enforced the connection. + +type: keyword + + +**`checkpoint.email_spam_category`** +: Email categories. Possible values: spam/not spam/phishing. + +type: keyword + + +**`checkpoint.email_control_analysis`** +: Message classification, received from the spam vendor engine. + +type: keyword + + +**`checkpoint.scan_results`** +: "Infected"/description of a failure. + +type: keyword + + +**`checkpoint.original_queue_id`** +: Original postfix email queue id. + +type: keyword + + +**`checkpoint.risk`** +: Risk level we got from the engine. + +type: keyword + + +**`checkpoint.roles`** +: The role of identity. + +type: keyword + + +**`checkpoint.observable_name`** +: IOC observable signature name. + +type: keyword + + +**`checkpoint.observable_id`** +: IOC observable signature id. + +type: keyword + + +**`checkpoint.observable_comment`** +: IOC observable signature description. + +type: keyword + + +**`checkpoint.indicator_name`** +: IOC indicator name. + +type: keyword + + +**`checkpoint.indicator_description`** +: IOC indicator description. + +type: keyword + + +**`checkpoint.indicator_reference`** +: IOC indicator reference. + +type: keyword + + +**`checkpoint.indicator_uuid`** +: IOC indicator uuid. + +type: keyword + + +**`checkpoint.app_desc`** +: Application description. + +type: keyword + + +**`checkpoint.app_id`** +: Application ID. + +type: integer + + +**`checkpoint.app_sig_id`** +: IOC indicator description.
+ +type: keyword + + +**`checkpoint.certificate_resource`** +: HTTPS resource. Possible values: SNI or domain name (DN). + +type: keyword + + +**`checkpoint.certificate_validation`** +: Precise error, describing HTTPS certificate failure under the "HTTPS categorize websites" feature. + +type: keyword + + +**`checkpoint.browse_time`** +: Application session browse time. + +type: keyword + + +**`checkpoint.limit_requested`** +: Indicates whether a data limit was requested for the session. + +type: integer + + +**`checkpoint.limit_applied`** +: Indicates whether the session was actually data limited. + +type: integer + + +**`checkpoint.dropped_total`** +: Amount of dropped packets (both incoming and outgoing). + +type: integer + + +**`checkpoint.client_type_os`** +: Client OS detected in the HTTP request. + +type: keyword + + +**`checkpoint.name`** +: Application name. + +type: keyword + + +**`checkpoint.properties`** +: Application categories. + +type: keyword + + +**`checkpoint.sig_id`** +: The application’s signature ID, by which it was detected. + +type: keyword + + +**`checkpoint.desc`** +: Override application description. + +type: keyword + + +**`checkpoint.referrer_self_uid`** +: UUID of the current log. + +type: keyword + + +**`checkpoint.referrer_parent_uid`** +: Log UUID of the referring application. + +type: keyword + + +**`checkpoint.needs_browse_time`** +: Browse time required for the connection. + +type: integer + + +**`checkpoint.cluster_info`** +: Cluster information. Possible options: Failover reason/cluster state changes/CP cluster or 3rd party. + +type: keyword + + +**`checkpoint.sync`** +: Sync status and the reason (stable, at risk). + +type: keyword + + +**`checkpoint.file_direction`** +: File direction. Possible options: upload/download. + +type: keyword + + +**`checkpoint.invalid_file_size`** +: The file_size field is valid only if this field is set to 0. + +type: integer + + +**`checkpoint.top_archive_file_name`** +: In case of archive file: the file that was sent/received. + +type: keyword + + +**`checkpoint.data_type_name`** +: Data type in rulebase that was matched. + +type: keyword + + +**`checkpoint.specific_data_type_name`** +: Compound/Group scenario, data type that was matched. + +type: keyword + + +**`checkpoint.word_list`** +: Words matched by data type. + +type: keyword + + +**`checkpoint.info`** +: Special log message. + +type: keyword + + +**`checkpoint.outgoing_url`** +: URL related to this log (for HTTP). + +type: keyword + + +**`checkpoint.dlp_rule_name`** +: Matched rule name. + +type: keyword + + +**`checkpoint.dlp_recipients`** +: Mail recipients. + +type: keyword + + +**`checkpoint.dlp_subject`** +: Mail subject. + +type: keyword + + +**`checkpoint.dlp_word_list`** +: Phrases matched by data type. + +type: keyword + + +**`checkpoint.dlp_template_score`** +: Template data type match score. + +type: keyword + + +**`checkpoint.message_size`** +: Mail/post size. + +type: integer + + +**`checkpoint.dlp_incident_uid`** +: Unique ID of the matched rule. + +type: keyword + + +**`checkpoint.dlp_related_incident_uid`** +: Other ID related to this one. + +type: keyword + + +**`checkpoint.dlp_data_type_name`** +: Matched data type. + +type: keyword + + +**`checkpoint.dlp_data_type_uid`** +: Unique ID of the matched data type. + +type: keyword + + +**`checkpoint.dlp_violation_description`** +: Violation descriptions described in the rulebase.
+ +type: keyword + + +**`checkpoint.dlp_relevant_data_types`** +: In case of Compound/Group: the inner data types that were matched. + +type: keyword + + +**`checkpoint.dlp_action_reason`** +: Reason the action was chosen. + +type: keyword + + +**`checkpoint.dlp_categories`** +: Data type category. + +type: keyword + + +**`checkpoint.dlp_transint`** +: HTTP/SMTP/FTP. + +type: keyword + + +**`checkpoint.duplicate`** +: Log marked as duplicated, when mail is split and the Security Gateway sees it twice. + +type: keyword + + +**`checkpoint.incident_extension`** +: Matched data type. + +type: keyword + + +**`checkpoint.matched_file`** +: Unique ID of the matched data type. + +type: keyword + + +**`checkpoint.matched_file_text_segments`** +: Fingerprint: number of text segments matched by this traffic. + +type: integer + + +**`checkpoint.matched_file_percentage`** +: Fingerprint: match percentage of the traffic. + +type: integer + + +**`checkpoint.dlp_additional_action`** +: Watermark/None. + +type: keyword + + +**`checkpoint.dlp_watermark_profile`** +: Watermark which was applied. + +type: keyword + + +**`checkpoint.dlp_repository_id`** +: ID of the scanned repository. + +type: keyword + + +**`checkpoint.dlp_repository_root_path`** +: Repository path. + +type: keyword + + +**`checkpoint.scan_id`** +: Sequential number of the scan. + +type: keyword + + +**`checkpoint.special_properties`** +: If this field is set to *1* the log will not be shown (in use for monitoring scan progress). + +type: integer + + +**`checkpoint.dlp_repository_total_size`** +: Repository size. + +type: integer + + +**`checkpoint.dlp_repository_files_number`** +: Number of files in the repository. + +type: integer + + +**`checkpoint.dlp_repository_scanned_files_number`** +: Number of scanned files in the repository. + +type: integer + + +**`checkpoint.duration`** +: Scan duration. + +type: keyword + + +**`checkpoint.dlp_fingerprint_long_status`** +: Scan status - long format. + +type: keyword + + +**`checkpoint.dlp_fingerprint_short_status`** +: Scan status - short format. + +type: keyword + + +**`checkpoint.dlp_repository_directories_number`** +: Number of directories in the repository. + +type: integer + + +**`checkpoint.dlp_repository_unreachable_directories_number`** +: Number of directories the Security Gateway was unable to read. + +type: integer + + +**`checkpoint.dlp_fingerprint_files_number`** +: Number of successfully scanned files in the repository. + +type: integer + + +**`checkpoint.dlp_repository_skipped_files_number`** +: Number of files skipped because of the configuration. + +type: integer + + +**`checkpoint.dlp_repository_scanned_directories_number`** +: Amount of directories scanned. + +type: integer + + +**`checkpoint.number_of_errors`** +: Number of files that were not scanned due to an error. + +type: integer + + +**`checkpoint.next_scheduled_scan_date`** +: Next scan scheduled time according to time object. + +type: keyword + + +**`checkpoint.dlp_repository_scanned_total_size`** +: Size scanned. + +type: integer + + +**`checkpoint.dlp_repository_reached_directories_number`** +: Number of scanned directories in the repository. + +type: integer + + +**`checkpoint.dlp_repository_not_scanned_directories_percentage`** +: Percentage of directories the Security Gateway was unable to read. + +type: integer + + +**`checkpoint.speed`** +: Current scan speed. + +type: integer + + +**`checkpoint.dlp_repository_scan_progress`** +: Scan percentage. + +type: integer + + +**`checkpoint.sub_policy_name`** +: Layer name.
+ +type: keyword + + +**`checkpoint.sub_policy_uid`** +: Layer uid. + +type: keyword + + +**`checkpoint.fw_message`** +: Used for various firewall errors. + +type: keyword + + +**`checkpoint.message`** +: ISP link has failed. + +type: keyword + + +**`checkpoint.isp_link`** +: Name of ISP link. + +type: keyword + + +**`checkpoint.fw_subproduct`** +: Can be vpn/non vpn. + +type: keyword + + +**`checkpoint.sctp_error`** +: Error information describing what caused SCTP to fail on out_of_state. + +type: keyword + + +**`checkpoint.chunk_type`** +: Chunk of the SCTP stream. + +type: keyword + + +**`checkpoint.sctp_association_state`** +: The bad state you were trying to update to. + +type: keyword + + +**`checkpoint.tcp_packet_out_of_state`** +: State violation. + +type: keyword + + +**`checkpoint.tcp_flags`** +: TCP packet flags (SYN, ACK, etc.). + +type: keyword + + +**`checkpoint.connectivity_level`** +: Log for a new connection in wire mode. + +type: keyword + + +**`checkpoint.ip_option`** +: IP option that was dropped. + +type: integer + + +**`checkpoint.tcp_state`** +: Log representing a TCP state change. + +type: keyword + + +**`checkpoint.expire_time`** +: Connection closing time. + +type: keyword + + +**`checkpoint.icmp_type`** +: In case a connection is ICMP, type info will be added to the log. + +type: integer + + +**`checkpoint.icmp_code`** +: In case a connection is ICMP, code info will be added to the log. + +type: integer + + +**`checkpoint.rpc_prog`** +: Log for new RPC state - prog values. + +type: integer + + +**`checkpoint.dce-rpc_interface_uuid`** +: Log for new RPC state - UUID values. + +type: keyword + + +**`checkpoint.elapsed`** +: Time passed since start time. + +type: keyword + + +**`checkpoint.icmp`** +: Number of packets received by the client. + +type: keyword + + +**`checkpoint.capture_uuid`** +: UUID generated for the capture. Used when enabling the capture when logging. + +type: keyword + + +**`checkpoint.diameter_app_ID`** +: The ID of the diameter application. + +type: integer + + +**`checkpoint.diameter_cmd_code`** +: Diameter not allowed application command id. + +type: integer + + +**`checkpoint.diameter_msg_type`** +: Diameter message type. + +type: keyword + + +**`checkpoint.cp_message`** +: Used to log a general message. + +type: integer + + +**`checkpoint.log_delay`** +: Time left before deleting template. + +type: integer + + +**`checkpoint.attack_status`** +: In case of a malicious event on an endpoint computer, the status of the attack. + +type: keyword + + +**`checkpoint.impacted_files`** +: In case of an infection on an endpoint computer, the list of files that the malware impacted. + +type: keyword + + +**`checkpoint.remediated_files`** +: In case of an infection and a successful cleaning of that infection, this is a list of remediated files on the computer. + +type: keyword + + +**`checkpoint.triggered_by`** +: The name of the mechanism that triggered the Software Blade to enforce a protection. + +type: keyword + + +**`checkpoint.https_inspection_rule_id`** +: ID of the matched rule. + +type: keyword + + +**`checkpoint.https_inspection_rule_name`** +: Name of the matched rule. + +type: keyword + + +**`checkpoint.app_properties`** +: List of all found categories. + +type: keyword + + +**`checkpoint.https_validation`** +: Precise error, describing HTTPS inspection failure. + +type: keyword + + +**`checkpoint.https_inspection_action`** +: HTTPS inspection action (Inspect/Bypass/Error).
+ +type: keyword + + +**`checkpoint.icap_service_id`** +: Service ID, can work with multiple servers, treated as services. + +type: integer + + +**`checkpoint.icap_server_name`** +: Server name. + +type: keyword + + +**`checkpoint.internal_error`** +: Internal error, for troubleshooting. + +type: keyword + + +**`checkpoint.icap_more_info`** +: Free text for verdict. + +type: integer + + +**`checkpoint.reply_status`** +: ICAP reply status code, e.g. 200 or 204. + +type: integer + + +**`checkpoint.icap_server_service`** +: Service name, as given in the ICAP URI. + +type: keyword + + +**`checkpoint.mirror_and_decrypt_type`** +: Information about decrypt and forward. Possible values: Mirror only, Decrypt and mirror, Partial mirroring (HTTPS inspection Bypass). + +type: keyword + + +**`checkpoint.interface_name`** +: Designated interface for Mirror and Decrypt. + +type: keyword + + +**`checkpoint.session_uid`** +: HTTP session-id. + +type: keyword + + +**`checkpoint.broker_publisher`** +: IP address of the broker publisher who shared the session information. + +type: ip + + +**`checkpoint.src_user_dn`** +: User distinguished name connected to source IP. + +type: keyword + + +**`checkpoint.proxy_user_name`** +: User name connected to proxy IP. + +type: keyword + + +**`checkpoint.proxy_machine_name`** +: Machine name connected to proxy IP. + +type: integer + + +**`checkpoint.proxy_user_dn`** +: User distinguished name connected to proxy IP. + +type: keyword + + +**`checkpoint.query`** +: DNS query. + +type: keyword + + +**`checkpoint.dns_query`** +: DNS query. + +type: keyword + + +**`checkpoint.inspection_item`** +: Blade element that performed the inspection. + +type: keyword + + +**`checkpoint.performance_impact`** +: Protection performance impact. + +type: integer + + +**`checkpoint.inspection_category`** +: Inspection category: protocol anomaly, signature, etc. + +type: keyword + + +**`checkpoint.inspection_profile`** +: Profile which the activated protection belongs to. + +type: keyword + + +**`checkpoint.summary`** +: Summary message of non-compliant DNS traffic drops or detects. + +type: keyword + + +**`checkpoint.question_rdata`** +: List of question records domains. + +type: keyword + + +**`checkpoint.answer_rdata`** +: List of answer resource records to the questioned domains. + +type: keyword + + +**`checkpoint.authority_rdata`** +: List of authoritative servers. + +type: keyword + + +**`checkpoint.additional_rdata`** +: List of additional resource records. + +type: keyword + + +**`checkpoint.files_names`** +: List of files requested by FTP. + +type: keyword + + +**`checkpoint.ftp_user`** +: FTP username. + +type: keyword + + +**`checkpoint.mime_from`** +: Sender’s address. + +type: keyword + + +**`checkpoint.mime_to`** +: List of receiver addresses. + +type: keyword + + +**`checkpoint.bcc`** +: List of BCC addresses. + +type: keyword + + +**`checkpoint.content_type`** +: Mail content type. Possible values: application/msword, text/html, image/gif etc. + +type: keyword + + +**`checkpoint.user_agent`** +: String identifying the requesting software user agent. + +type: keyword + + +**`checkpoint.referrer`** +: Referrer HTTP request header, previous web page address. + +type: keyword + + +**`checkpoint.http_location`** +: Response header, indicates the URL to redirect a page to. + +type: keyword + + +**`checkpoint.content_disposition`** +: Indicates how the content is expected to be displayed inline in the browser.
+ +type: keyword + + +**`checkpoint.via`** +: The Via header is added by proxies for tracking purposes, to avoid sending requests in a loop. + +type: keyword + + +**`checkpoint.http_server`** +: Server HTTP header value, contains information about the software used by the origin server, which handles the request. + +type: keyword + + +**`checkpoint.content_length`** +: Indicates the size of the entity-body of the HTTP header. + +type: keyword + + +**`checkpoint.authorization`** +: Authorization HTTP header value. + +type: keyword + + +**`checkpoint.http_host`** +: Domain name of the server that the HTTP request is sent to. + +type: keyword + + +**`checkpoint.inspection_settings_log`** +: Indicates that the log was released by inspection settings. + +type: keyword + + +**`checkpoint.cvpn_resource`** +: Mobile Access application. + +type: keyword + + +**`checkpoint.cvpn_category`** +: Mobile Access application type. + +type: keyword + + +**`checkpoint.url`** +: Translated URL. + +type: keyword + + +**`checkpoint.reject_id`** +: A reject ID that corresponds to the one presented in the Mobile Access error page. + +type: keyword + + +**`checkpoint.fs-proto`** +: The file share protocol used in the Mobile Access file share application. + +type: keyword + + +**`checkpoint.app_package`** +: Unique identifier of the application on the protected mobile device. + +type: keyword + + +**`checkpoint.appi_name`** +: Name of the application downloaded on the protected mobile device. + +type: keyword + + +**`checkpoint.app_repackaged`** +: Indicates whether the original application was repackaged by someone other than the official developer. + +type: keyword + + +**`checkpoint.app_sid_id`** +: Unique SHA identifier of a mobile application. + +type: keyword + + +**`checkpoint.app_version`** +: Version of the application downloaded on the protected mobile device. + +type: keyword + + +**`checkpoint.developer_certificate_name`** +: Name of the developer’s certificate that was used to sign the mobile application. + +type: keyword + + +**`checkpoint.email_control`** +: Engine name. + +type: keyword + + +**`checkpoint.email_message_id`** +: Email session id (unique ID of the mail). + +type: keyword + + +**`checkpoint.email_queue_id`** +: Postfix email queue id. + +type: keyword + + +**`checkpoint.email_queue_name`** +: Postfix email queue name. + +type: keyword + + +**`checkpoint.file_name`** +: Malicious file name. + +type: keyword + + +**`checkpoint.failure_reason`** +: MTA failure description. + +type: keyword + + +**`checkpoint.email_headers`** +: String containing all the email headers. + +type: keyword + + +**`checkpoint.arrival_time`** +: Email arrival timestamp. + +type: keyword + + +**`checkpoint.email_status`** +: Describes the email’s state. Possible options: delivered, deferred, skipped, bounced, hold, new, scan_started, scan_ended. + +type: keyword + + +**`checkpoint.status_update`** +: Last time the log was updated. + +type: keyword + + +**`checkpoint.delivery_time`** +: Timestamp of when the email was delivered (MTA finished handling the email). + +type: keyword + + +**`checkpoint.links_num`** +: Number of links in the mail. + +type: integer + + +**`checkpoint.attachments_num`** +: Number of attachments in the mail. + +type: integer + + +**`checkpoint.email_content`** +: Mail contents. Possible options: attachments/links & attachments/links/text only. + +type: keyword + + +**`checkpoint.allocated_ports`** +: Amount of allocated ports. + +type: integer + + +**`checkpoint.capacity`** +: Capacity of the ports.
+ +type: integer + + +**`checkpoint.ports_usage`** +: Percentage of allocated ports. + +type: integer + + +**`checkpoint.nat_exhausted_pool`** +: 4-tuple of an exhausted pool. + +type: keyword + + +**`checkpoint.nat_rulenum`** +: NAT rulebase first matched rule. + +type: integer + + +**`checkpoint.nat_addtnl_rulenum`** +: When two automatic rules match, the second rule match is shown; otherwise the field is 0. + +type: integer + + +**`checkpoint.message_info`** +: Used for information messages, for example: NAT connection has ended. + +type: keyword + + +**`checkpoint.nat46`** +: NAT 46 status, in most cases "enabled". + +type: keyword + + +**`checkpoint.end_time`** +: TCP connection end time. + +type: keyword + + +**`checkpoint.tcp_end_reason`** +: Reason for TCP connection closure. + +type: keyword + + +**`checkpoint.cgnet`** +: Describes NAT allocation for a specific subscriber. + +type: keyword + + +**`checkpoint.subscriber`** +: Source IP before CGNAT. + +type: ip + + +**`checkpoint.hide_ip`** +: Source IP which will be used after CGNAT. + +type: ip + + +**`checkpoint.int_start`** +: Subscriber start int which will be used for NAT. + +type: integer + + +**`checkpoint.int_end`** +: Subscriber end int which will be used for NAT. + +type: integer + + +**`checkpoint.packet_amount`** +: Amount of packets dropped. + +type: integer + + +**`checkpoint.monitor_reason`** +: Aggregated logs of monitored packets. + +type: keyword + + +**`checkpoint.drops_amount`** +: Amount of multicast packets dropped. + +type: integer + + +**`checkpoint.securexl_message`** +: Two options for a SecureXL message: 1. Missed accounting records after heavy load on the logging system. 2. FW log message regarding a packet drop. + +type: keyword + + +**`checkpoint.conns_amount`** +: Connections amount of aggregated log info. + +type: integer + + +**`checkpoint.scope`** +: IP related to the attack. + +type: keyword + + +**`checkpoint.analyzed_on`** +: Check Point ThreatCloud / emulator name. + +type: keyword + + +**`checkpoint.detected_on`** +: System and applications version the file was emulated on. + +type: keyword + + +**`checkpoint.dropped_file_name`** +: List of names dropped from the original file. + +type: keyword + + +**`checkpoint.dropped_file_type`** +: List of file types dropped from the original file. + +type: keyword + + +**`checkpoint.dropped_file_hash`** +: List of file hashes dropped from the original file. + +type: keyword + + +**`checkpoint.dropped_file_verdict`** +: List of file verdicts dropped from the original file. + +type: keyword + + +**`checkpoint.emulated_on`** +: Images the files were emulated on. + +type: keyword + + +**`checkpoint.extracted_file_type`** +: Types of extracted files in case of an archive. + +type: keyword + + +**`checkpoint.extracted_file_names`** +: Names of extracted files in case of an archive. + +type: keyword + + +**`checkpoint.extracted_file_hash`** +: Archive hash in case of extracted files. + +type: keyword + + +**`checkpoint.extracted_file_verdict`** +: Verdict of extracted files in case of an archive. + +type: keyword + + +**`checkpoint.extracted_file_uid`** +: UID of extracted files in case of an archive. + +type: keyword + + +**`checkpoint.mitre_initial_access`** +: The adversary is trying to break into your network. + +type: keyword + + +**`checkpoint.mitre_execution`** +: The adversary is trying to run malicious code. + +type: keyword + + +**`checkpoint.mitre_persistence`** +: The adversary is trying to maintain their foothold.
+ +type: keyword + + +**`checkpoint.mitre_privilege_escalation`** +: The adversary is trying to gain higher-level permissions. + +type: keyword + + +**`checkpoint.mitre_defense_evasion`** +: The adversary is trying to avoid being detected. + +type: keyword + + +**`checkpoint.mitre_credential_access`** +: The adversary is trying to steal account names and passwords. + +type: keyword + + +**`checkpoint.mitre_discovery`** +: The adversary is trying to expose information about your environment. + +type: keyword + + +**`checkpoint.mitre_lateral_movement`** +: The adversary is trying to explore your environment. + +type: keyword + + +**`checkpoint.mitre_collection`** +: The adversary is trying to collect data of interest to achieve their goal. + +type: keyword + + +**`checkpoint.mitre_command_and_control`** +: The adversary is trying to communicate with compromised systems in order to control them. + +type: keyword + + +**`checkpoint.mitre_exfiltration`** +: The adversary is trying to steal data. + +type: keyword + + +**`checkpoint.mitre_impact`** +: The adversary is trying to manipulate, interrupt, or destroy your systems and data. + +type: keyword + + +**`checkpoint.parent_file_hash`** +: Archive’s hash in case of extracted files. + +type: keyword + + +**`checkpoint.parent_file_name`** +: Archive’s name in case of extracted files. + +type: keyword + + +**`checkpoint.parent_file_uid`** +: Archive’s UID in case of extracted files. + +type: keyword + + +**`checkpoint.similiar_iocs`** +: Other IoCs similar to the ones found, related to the malicious file. + +type: keyword + + +**`checkpoint.similar_hashes`** +: Hashes found similar to the malicious file. + +type: keyword + + +**`checkpoint.similar_strings`** +: Strings found similar to the malicious file. + +type: keyword + + +**`checkpoint.similar_communication`** +: Network action found similar to the malicious file. + +type: keyword + + +**`checkpoint.te_verdict_determined_by`** +: The emulator that determined the file’s verdict. + +type: keyword + + +**`checkpoint.packet_capture_unique_id`** +: Identifier of the packet capture files. + +type: keyword + + +**`checkpoint.total_attachments`** +: The number of attachments in an email. + +type: integer + + +**`checkpoint.additional_info`** +: ID of the original file/mail which was sent by the admin. + +type: keyword + + +**`checkpoint.content_risk`** +: File risk. + +type: integer + + +**`checkpoint.operation`** +: Operation made by Threat Extraction. + +type: keyword + + +**`checkpoint.scrubbed_content`** +: Active content that was found. + +type: keyword + + +**`checkpoint.scrub_time`** +: Extraction process duration. + +type: keyword + + +**`checkpoint.scrub_download_time`** +: File download time from resource. + +type: keyword + + +**`checkpoint.scrub_total_time`** +: Threat Extraction total file handling time. + +type: keyword + + +**`checkpoint.scrub_activity`** +: The result of the extraction. + +type: keyword + + +**`checkpoint.watermark`** +: Reports whether a watermark was added to the cleaned file. + +type: keyword + + +**`checkpoint.snid`** +: The Check Point session ID. + +type: keyword + + +**`checkpoint.source_object`** +: Matched object name on the source column. + +type: keyword + + +**`checkpoint.destination_object`** +: Matched object name on the destination column. + +type: keyword + + +**`checkpoint.drop_reason`** +: Drop reason description. + +type: keyword + + +**`checkpoint.hit`** +: Number of hits on a rule. + +type: integer + + +**`checkpoint.rulebase_id`** +: Layer number.
+ +type: integer + + +**`checkpoint.first_hit_time`** +: First hit time in the current interval. + +type: integer + + +**`checkpoint.last_hit_time`** +: Last hit time in the current interval. + +type: integer + + +**`checkpoint.rematch_info`** +: Information sent when old connections cannot be matched during policy installation. + +type: keyword + + +**`checkpoint.last_rematch_time`** +: Connection rematched time. + +type: keyword + + +**`checkpoint.action_reason`** +: Connection drop reason. + +type: integer + + +**`checkpoint.action_reason_msg`** +: Connection drop reason message. + +type: keyword + + +**`checkpoint.c_bytes`** +: Boolean value indicating whether bytes sent from the client side are used. + +type: integer + + +**`checkpoint.context_num`** +: Serial number of the log for a specific connection. + +type: integer + + +**`checkpoint.match_id`** +: Private key of the rule. + +type: integer + + +**`checkpoint.alert`** +: Alert level of the matched rule (for connection logs). + +type: keyword + + +**`checkpoint.parent_rule`** +: Parent rule number, in case of an inline layer. + +type: integer + + +**`checkpoint.match_fk`** +: Rule number. + +type: integer + + +**`checkpoint.dropped_outgoing`** +: Number of outgoing bytes dropped when using the UP-limit feature. + +type: integer + + +**`checkpoint.dropped_incoming`** +: Number of incoming bytes dropped when using the UP-limit feature. + +type: integer + + +**`checkpoint.media_type`** +: Media used (audio, video, etc.) + +type: keyword + + +**`checkpoint.sip_reason`** +: Explains why *source_ip* isn’t allowed to redirect (handover). + +type: keyword + + +**`checkpoint.voip_method`** +: Registration request. + +type: keyword + + +**`checkpoint.registered_ip-phones`** +: Registered IP-Phones. + +type: keyword + + +**`checkpoint.voip_reg_user_type`** +: Registered IP-Phone type. + +type: keyword + + +**`checkpoint.voip_call_id`** +: Call-ID. + +type: keyword + + +**`checkpoint.voip_reg_int`** +: Registration port. + +type: integer + + +**`checkpoint.voip_reg_ipp`** +: Registration IP protocol. + +type: integer + + +**`checkpoint.voip_reg_period`** +: Registration period. + +type: integer + + +**`checkpoint.voip_log_type`** +: VoIP log types. Possible values: reject, call, registration. + +type: keyword + + +**`checkpoint.src_phone_number`** +: Source IP-Phone. + +type: keyword + + +**`checkpoint.voip_from_user_type`** +: Source IP-Phone type. + +type: keyword + + +**`checkpoint.dst_phone_number`** +: Destination IP-Phone. + +type: keyword + + +**`checkpoint.voip_to_user_type`** +: Destination IP-Phone type. + +type: keyword + + +**`checkpoint.voip_call_dir`** +: Call direction: in/out. + +type: keyword + + +**`checkpoint.voip_call_state`** +: Call state. Possible values: in/out. + +type: keyword + + +**`checkpoint.voip_call_term_time`** +: Call termination time stamp. + +type: keyword + + +**`checkpoint.voip_duration`** +: Call duration (seconds). + +type: keyword + + +**`checkpoint.voip_media_port`** +: Media int. + +type: keyword + + +**`checkpoint.voip_media_ipp`** +: Media IP protocol. + +type: keyword + + +**`checkpoint.voip_est_codec`** +: Estimated codec. + +type: keyword + + +**`checkpoint.voip_exp`** +: Expiration. + +type: integer + + +**`checkpoint.voip_attach_sz`** +: Attachment size. + +type: integer + + +**`checkpoint.voip_attach_action_info`** +: Attachment action info. + +type: keyword + + +**`checkpoint.voip_media_codec`** +: Estimated codec. + +type: keyword + + +**`checkpoint.voip_reject_reason`** +: Reject reason.
+ +type: keyword + + +**`checkpoint.voip_reason_info`** +: Information. + +type: keyword + + +**`checkpoint.voip_config`** +: Configuration. + +type: keyword + + +**`checkpoint.voip_reg_server`** +: Registrar server IP address. + +type: ip + + +**`checkpoint.scv_user`** +: Username whose packets are dropped on SCV. + +type: keyword + + +**`checkpoint.scv_message_info`** +: Drop reason. + +type: keyword + + +**`checkpoint.ppp`** +: Authentication status. + +type: keyword + + +**`checkpoint.scheme`** +: Describes the scheme used for the log. + +type: keyword + + +**`checkpoint.auth_method`** +: Password authentication protocol used (PAP or EAP). + +type: keyword + + +**`checkpoint.auth_status`** +: The authentication status for an event. + +type: keyword + + +**`checkpoint.machine`** +: The L2TP machine that triggered the log and that the log refers to. + +type: keyword + + +**`checkpoint.vpn_feature_name`** +: L2TP / IKE / Link Selection. + +type: keyword + + +**`checkpoint.reject_category`** +: Authentication failure reason. + +type: keyword + + +**`checkpoint.peer_ip_probing_status_update`** +: IP address response status. + +type: keyword + + +**`checkpoint.peer_ip`** +: IP address which the client connects to. + +type: keyword + + +**`checkpoint.peer_gateway`** +: Main IP of the peer Security Gateway. + +type: ip + + +**`checkpoint.link_probing_status_update`** +: IP address response status. + +type: keyword + + +**`checkpoint.source_interface`** +: External interface name for the source interface, or Null if not found. + +type: keyword + + +**`checkpoint.next_hop_ip`** +: Next hop IP address. + +type: keyword + + +**`checkpoint.srckeyid`** +: Initiator Spi ID. + +type: keyword + + +**`checkpoint.dstkeyid`** +: Responder Spi ID. + +type: keyword + + +**`checkpoint.encryption_failure`** +: Message indicating why the encryption failed. + +type: keyword + + +**`checkpoint.ike_ids`** +: All QM ids. + +type: keyword + + +**`checkpoint.community`** +: Community name for the IPsec key and the IKE version used. + +type: keyword + + +**`checkpoint.ike`** +: IKE mode (PHASE1, PHASE2, etc.). + +type: keyword + + +**`checkpoint.cookieI`** +: Initiator cookie. + +type: keyword + + +**`checkpoint.cookieR`** +: Responder cookie. + +type: keyword + + +**`checkpoint.msgid`** +: Message ID. + +type: keyword + + +**`checkpoint.methods`** +: IPsec methods. + +type: keyword + + +**`checkpoint.connection_uid`** +: MD5 of the IP and user name, used as the UID. + +type: keyword + + +**`checkpoint.site_name`** +: Site name. + +type: keyword + + +**`checkpoint.esod_rule_name`** +: Unknown rule name. + +type: keyword + + +**`checkpoint.esod_rule_action`** +: Unknown rule action. + +type: keyword + + +**`checkpoint.esod_rule_type`** +: Unknown rule type. + +type: keyword + + +**`checkpoint.esod_noncompliance_reason`** +: Non-compliance reason. + +type: keyword + + +**`checkpoint.esod_associated_policies`** +: Associated policies. + +type: keyword + + +**`checkpoint.spyware_name`** +: Spyware name. + +type: keyword + + +**`checkpoint.spyware_type`** +: Spyware type. + +type: keyword + + +**`checkpoint.anti_virus_type`** +: Anti virus type. + +type: keyword + + +**`checkpoint.end_user_firewall_type`** +: End user firewall type. + +type: keyword + + +**`checkpoint.esod_scan_status`** +: Scan failed. + +type: keyword + + +**`checkpoint.esod_access_status`** +: Access denied. + +type: keyword + + +**`checkpoint.client_type`** +: Endpoint Connect. + +type: keyword + + +**`checkpoint.precise_error`** +: HTTP parser error.
+**`checkpoint.method`** +: HTTP method. + +type: keyword + + +**`checkpoint.trusted_domain`** +: In case of a phishing event, the domain that the attacker was impersonating. + +type: keyword + + +**`checkpoint.comment`** +: type: keyword + + +**`checkpoint.conn_direction`** +: Connection direction. + +type: keyword + + +**`checkpoint.db_ver`** +: Database version. + +type: keyword + + +**`checkpoint.update_status`** +: Status of the database update. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-cisco.md b/docs/reference/filebeat/exported-fields-cisco.md new file mode 100644 index 000000000000..c2683a3fcc9c --- /dev/null +++ b/docs/reference/filebeat/exported-fields-cisco.md @@ -0,0 +1,850 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-cisco.html +--- + +# Cisco fields [exported-fields-cisco] + +Module for handling Cisco network device logs. + + +## cisco.amp [_cisco_amp] + +Module for parsing Cisco AMP logs. + +**`cisco.amp.timestamp_nanoseconds`** +: The timestamp in Epoch nanoseconds. + +type: date + + +**`cisco.amp.event_type_id`** +: A sub ID of the event, depending on event type. + +type: keyword + + +**`cisco.amp.detection`** +: The name of the malware detected. + +type: keyword + + +**`cisco.amp.detection_id`** +: The ID of the detection. + +type: keyword + + +**`cisco.amp.connector_guid`** +: The GUID of the connector sending information to AMP. + +type: keyword + + +**`cisco.amp.group_guids`** +: An array of group GUIDs related to the connector sending information to AMP. + +type: keyword + + +**`cisco.amp.vulnerabilities`** +: An array of vulnerabilities related to the malicious event. + +type: flattened + + +**`cisco.amp.scan.description`** +: Description of an event related to a scan being initiated, for example the specific directory name. + +type: keyword + + +**`cisco.amp.scan.clean`** +: Boolean value indicating whether a scanned file was clean or not. + +type: boolean + + +**`cisco.amp.scan.scanned_files`** +: Count of files scanned in a directory. + +type: long + + +**`cisco.amp.scan.scanned_processes`** +: Count of processes scanned related to a single scan event. + +type: long + + +**`cisco.amp.scan.scanned_paths`** +: Count of different directories scanned related to a single scan event. + +type: long + + +**`cisco.amp.scan.malicious_detections`** +: Count of malicious files or documents detected related to a single scan event. + +type: long + + +**`cisco.amp.computer.connector_guid`** +: The GUID of the connector, similar to the top-level connector_guid, but unique if multiple connectors are involved. + +type: keyword + + +**`cisco.amp.computer.external_ip`** +: The external IP of the related host. + +type: ip + + +**`cisco.amp.computer.active`** +: Whether the current endpoint is active or not. + +type: boolean + + +**`cisco.amp.computer.network_addresses`** +: All network interface information on the related host. + +type: flattened + + +**`cisco.amp.file.disposition`** +: Categorization of the file, for example "Malicious" or "Clean". + +type: keyword + + +**`cisco.amp.network_info.disposition`** +: Categorization of a network event related to a file, for example "Malicious" or "Clean". + +type: keyword + + +**`cisco.amp.network_info.nfm.direction`** +: The current direction based on source and destination IP. + +type: keyword + + +**`cisco.amp.related.mac`** +: An array of all related MAC addresses. + +type: keyword + + +**`cisco.amp.related.cve`** +: An array of all related CVEs. + +type: keyword + +
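+The `cisco.amp.related.*` arrays above are convenience fields collected from elsewhere in the event. A minimal sketch of how `cisco.amp.related.mac` could be derived from the flattened `cisco.amp.computer.network_addresses` data (the event shape is a hypothetical simplification, not the module's actual pipeline):
+
+```python
+# Illustrative only: gather unique MAC addresses from the flattened
+# network-interface objects into the related.mac keyword array.
+event = {
+    "cisco.amp.computer.network_addresses": [
+        {"mac": "00:00:5e:00:53:01", "ip": "10.0.0.5"},
+        {"mac": "00:00:5e:00:53:02", "ip": "10.0.0.6"},
+    ]
+}
+
+event["cisco.amp.related.mac"] = sorted({
+    iface["mac"]
+    for iface in event["cisco.amp.computer.network_addresses"]
+    if iface.get("mac")
+})
+print(event["cisco.amp.related.mac"])
+# ['00:00:5e:00:53:01', '00:00:5e:00:53:02']
+```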
+**`cisco.amp.cloud_ioc.description`** +: Description of the related IOC for specific IOC events from AMP. + +type: keyword + + +**`cisco.amp.cloud_ioc.short_description`** +: Short description of the related IOC for specific IOC events from AMP. + +type: keyword + + +**`cisco.amp.network_info.parent.disposition`** +: Categorization of an IOC, for example "Malicious" or "Clean". + +type: keyword + + +**`cisco.amp.network_info.parent.identity.md5`** +: MD5 hash of the related IOC. + +type: keyword + + +**`cisco.amp.network_info.parent.identity.sha1`** +: SHA1 hash of the related IOC. + +type: keyword + + +**`cisco.amp.network_info.parent.identify.sha256`** +: SHA256 hash of the related IOC. + +type: keyword + + +**`cisco.amp.file.archived_file.disposition`** +: Categorization of a file archive related to a file, for example "Malicious" or "Clean". + +type: keyword + + +**`cisco.amp.file.archived_file.identity.md5`** +: MD5 hash of the archived file related to the malicious event. + +type: keyword + + +**`cisco.amp.file.archived_file.identity.sha1`** +: SHA1 hash of the archived file related to the malicious event. + +type: keyword + + +**`cisco.amp.file.archived_file.identity.sha256`** +: SHA256 hash of the archived file related to the malicious event. + +type: keyword + + +**`cisco.amp.file.attack_details.application`** +: The application name related to Exploit Prevention events. + +type: keyword + + +**`cisco.amp.file.attack_details.attacked_module`** +: Path to the executable or dll that was attacked and detected by Exploit Prevention. + +type: keyword + + +**`cisco.amp.file.attack_details.base_address`** +: The base memory address related to the exploit detected. + +type: keyword + + +**`cisco.amp.file.attack_details.suspicious_files`** +: An array of related files when an attack is detected by Exploit Prevention. + +type: keyword + + +**`cisco.amp.file.parent.disposition`** +: Categorization of the parent, for example "Malicious" or "Clean". + +type: keyword + + +**`cisco.amp.error.description`** +: Description of an endpoint error event. + +type: keyword + + +**`cisco.amp.error.error_code`** +: The error code describing the related error event. + +type: keyword + + +**`cisco.amp.threat_hunting.severity`** +: Severity result of the threat hunt registered to the malicious event. Can be Low-Critical. + +type: keyword + + +**`cisco.amp.threat_hunting.incident_report_guid`** +: The GUID of the related threat hunting report. + +type: keyword + + +**`cisco.amp.threat_hunting.incident_hunt_guid`** +: The GUID of the related investigation tracking issue. + +type: keyword + + +**`cisco.amp.threat_hunting.incident_title`** +: Title of the incident related to the threat hunting activity. + +type: keyword + + +**`cisco.amp.threat_hunting.incident_summary`** +: Summary of the outcome on the threat hunting activity. + +type: keyword + + +**`cisco.amp.threat_hunting.incident_remediation`** +: Recommendations to resolve the vulnerability or exploited host. + +type: keyword + + +**`cisco.amp.threat_hunting.incident_id`** +: The ID of the related incident for the threat hunting activity. + +type: keyword + + +**`cisco.amp.threat_hunting.incident_end_time`** +: When the threat hunt was finalized or closed. + +type: date + + +**`cisco.amp.threat_hunting.incident_start_time`** +: When the threat hunt was initiated. + +type: date + + +**`cisco.amp.file.attack_details.indicators`** +: Different indicator types that match the detected exploit, for example different MITRE tactics. + +type: flattened + +
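+Several of the fields just below come in pairs: a flattened list of MITRE tactic or technique objects plus a keyword array holding only their IDs. A minimal sketch of deriving the ID array (the object shape with an `external_id` key is an assumption for illustration):
+
+```python
+# Illustrative only: project flattened MITRE tactic objects down to the
+# keyword array of tactic IDs (as in cisco.amp.mitre_tactics below).
+tactics = [
+    {"external_id": "TA0002", "name": "Execution"},
+    {"external_id": "TA0005", "name": "Defense Evasion"},
+]
+
+mitre_tactics = [t["external_id"] for t in tactics if t.get("external_id")]
+print(mitre_tactics)  # ['TA0002', 'TA0005']
+```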
+**`cisco.amp.threat_hunting.tactics`** +: List of all MITRE tactics related to the incident found. + +type: flattened + + +**`cisco.amp.threat_hunting.techniques`** +: List of all MITRE techniques related to the incident found. + +type: flattened + + +**`cisco.amp.tactics`** +: List of all MITRE tactics related to the incident found. + +type: flattened + + +**`cisco.amp.mitre_tactics`** +: Array of all related MITRE tactic IDs. + +type: keyword + + +**`cisco.amp.techniques`** +: List of all MITRE techniques related to the incident found. + +type: flattened + + +**`cisco.amp.mitre_techniques`** +: Array of all related MITRE technique IDs. + +type: keyword + + +**`cisco.amp.command_line.arguments`** +: The CLI arguments related to the Cloud Threat IOC reported by Cisco. + +type: keyword + + +**`cisco.amp.bp_data`** +: Endpoint isolation information. + +type: flattened + + + +## cisco.asa [_cisco_asa] + +Fields for Cisco ASA Firewall. + +**`cisco.asa.message_id`** +: The Cisco ASA message identifier. + +type: keyword + + +**`cisco.asa.suffix`** +: Optional suffix after %ASA identifier. + +type: keyword + +example: session + + +**`cisco.asa.source_interface`** +: Source interface for the flow or event. + +type: keyword + + +**`cisco.asa.destination_interface`** +: Destination interface for the flow or event. + +type: keyword + + +**`cisco.asa.rule_name`** +: Name of the Access Control List rule that matched this event. + +type: keyword + + +**`cisco.asa.source_username`** +: Name of the user that is the source for this event. + +type: keyword + + +**`cisco.asa.source_user_security_group_tag`** +: The Security Group Tag for the source user. Security Group Tags are 16-bit identifiers used to represent logical group privilege. + +type: long + + +**`cisco.asa.destination_username`** +: Name of the user that is the destination for this event. + +type: keyword + + +**`cisco.asa.destination_user_security_group_tag`** +: The Security Group Tag for the destination user. Security Group Tags are 16-bit identifiers used to represent logical group privilege. + +type: long + + +**`cisco.asa.mapped_source_ip`** +: The translated source IP address. + +type: ip + + +**`cisco.asa.mapped_source_host`** +: The translated source host. + +type: keyword + + +**`cisco.asa.mapped_source_port`** +: The translated source port. + +type: long + + +**`cisco.asa.mapped_destination_ip`** +: The translated destination IP address. + +type: ip + + +**`cisco.asa.mapped_destination_host`** +: The translated destination host. + +type: keyword + + +**`cisco.asa.mapped_destination_port`** +: The translated destination port. + +type: long + + +**`cisco.asa.threat_level`** +: Threat level for malware / botnet traffic. One of very-low, low, moderate, high or very-high. + +type: keyword + + +**`cisco.asa.threat_category`** +: Category for the malware / botnet traffic. For example: virus, botnet, trojan, etc. + +type: keyword + + +**`cisco.asa.connection_id`** +: Unique identifier for a flow. + +type: keyword + + +**`cisco.asa.icmp_type`** +: ICMP type. + +type: short + + +**`cisco.asa.icmp_code`** +: ICMP code. + +type: short + +
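+`cisco.asa.message_id` is taken from the `%ASA-<severity>-<message_id>` tag that prefixes ASA syslog lines. A minimal sketch of that extraction (simplified relative to the module's actual ingest pipeline; the sample line is illustrative):
+
+```python
+import re
+
+# Illustrative only: pull the syslog severity and the ASA message ID out
+# of the %ASA-<severity>-<message_id> tag.
+ASA_TAG = re.compile(r"%ASA-(?P<severity>\d)-(?P<message_id>\d+):")
+
+line = "%ASA-6-302013: Built outbound TCP connection 366 for outside:192.0.2.1/443"
+match = ASA_TAG.search(line)
+if match:
+    print({
+        "cisco.asa.message_id": match["message_id"],
+        "syslog_severity": int(match["severity"]),
+    })
+# {'cisco.asa.message_id': '302013', 'syslog_severity': 6}
+```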
+**`cisco.asa.connection_type`** +: The VPN connection type. + +type: keyword + + +**`cisco.asa.dap_records`** +: The assigned DAP records. + +type: keyword + + +**`cisco.asa.command_line_arguments`** +: The command line arguments logged by the local audit log. + +type: keyword + + +**`cisco.asa.assigned_ip`** +: The IP address assigned to a VPN client successfully connecting. + +type: ip + + +**`cisco.asa.privilege.old`** +: When a user's privilege is changed, this is the old value. + +type: keyword + + +**`cisco.asa.privilege.new`** +: When a user's privilege is changed, this is the new value. + +type: keyword + + +**`cisco.asa.burst.object`** +: The related object for burst warnings. + +type: keyword + + +**`cisco.asa.burst.id`** +: The related rate ID for burst warnings. + +type: keyword + + +**`cisco.asa.burst.current_rate`** +: The current burst rate seen. + +type: keyword + + +**`cisco.asa.burst.configured_rate`** +: The current configured burst rate. + +type: keyword + + +**`cisco.asa.burst.avg_rate`** +: The current average burst rate seen. + +type: keyword + + +**`cisco.asa.burst.configured_avg_rate`** +: The current configured average burst rate allowed. + +type: keyword + + +**`cisco.asa.burst.cumulative_count`** +: The total count of burst rate hits since the object was created or cleared. + +type: keyword + + +**`cisco.asa.termination_user`** +: AAA name of the user requesting termination. + +type: keyword + + +**`cisco.asa.webvpn.group_name`** +: The WebVPN group name the user belongs to. + +type: keyword + + +**`cisco.asa.termination_initiator`** +: Interface name of the side that initiated the teardown. + +type: keyword + + +**`cisco.asa.tunnel_type`** +: SA type (remote access or L2L). + +type: keyword + + +**`cisco.asa.session_type`** +: Session type (for example, IPsec or UDP). + +type: keyword + + + +## cisco.ftd [_cisco_ftd] + +Fields for Cisco Firepower Threat Defense Firewall. + +**`cisco.ftd.message_id`** +: The Cisco FTD message identifier. + +type: keyword + + +**`cisco.ftd.suffix`** +: Optional suffix after %FTD identifier. + +type: keyword + +example: session + + +**`cisco.ftd.source_interface`** +: Source interface for the flow or event. + +type: keyword + + +**`cisco.ftd.destination_interface`** +: Destination interface for the flow or event. + +type: keyword + + +**`cisco.ftd.rule_name`** +: Name of the Access Control List rule that matched this event. + +type: keyword + + +**`cisco.ftd.source_username`** +: Name of the user that is the source for this event. + +type: keyword + + +**`cisco.ftd.destination_username`** +: Name of the user that is the destination for this event. + +type: keyword + + +**`cisco.ftd.mapped_source_ip`** +: The translated source IP address. Use ECS source.nat.ip. + +type: ip + + +**`cisco.ftd.mapped_source_host`** +: The translated source host. + +type: keyword + + +**`cisco.ftd.mapped_source_port`** +: The translated source port. Use ECS source.nat.port. + +type: long + + +**`cisco.ftd.mapped_destination_ip`** +: The translated destination IP address. Use ECS destination.nat.ip. + +type: ip + + +**`cisco.ftd.mapped_destination_host`** +: The translated destination host. + +type: keyword + + +**`cisco.ftd.mapped_destination_port`** +: The translated destination port. Use ECS destination.nat.port. + +type: long + + +**`cisco.ftd.threat_level`** +: Threat level for malware / botnet traffic. One of very-low, low, moderate, high or very-high. + +type: keyword + + +**`cisco.ftd.threat_category`** +: Category for the malware / botnet traffic.
For example: virus, botnet, trojan, etc. + +type: keyword + + +**`cisco.ftd.connection_id`** +: Unique identifier for a flow. + +type: keyword + + +**`cisco.ftd.icmp_type`** +: ICMP type. + +type: short + + +**`cisco.ftd.icmp_code`** +: ICMP code. + +type: short + + +**`cisco.ftd.security`** +: Raw fields for Security Events. + +type: object + + +**`cisco.ftd.connection_type`** +: The VPN connection type + +type: keyword + + +**`cisco.ftd.dap_records`** +: The assigned DAP records + +type: keyword + + +**`cisco.ftd.termination_user`** +: AAA name of user requesting termination + +type: keyword + + +**`cisco.ftd.webvpn.group_name`** +: The WebVPN group name the user belongs to + +type: keyword + + +**`cisco.ftd.termination_initiator`** +: Interface name of the side that initiated the teardown + +type: keyword + + + +## cisco.ios [_cisco_ios] + +Fields for Cisco IOS logs. + +**`cisco.ios.access_list`** +: Name of the IP access list. + +type: keyword + + +**`cisco.ios.facility`** +: The facility to which the message refers (for example, SNMP, SYS, and so forth). A facility can be a hardware device, a protocol, or a module of the system software. It denotes the source or the cause of the system message. + +type: keyword + +example: SEC + + + +## cisco.umbrella [_cisco_umbrella] + +Fields for Cisco Umbrella. + +**`cisco.umbrella.identities`** +: An array of the different identities related to the event. + +type: keyword + + +**`cisco.umbrella.categories`** +: The security or content categories that the destination matches. + +type: keyword + + +**`cisco.umbrella.policy_identity_type`** +: The first identity type matched with this request. Available in version 3 and above. + +type: keyword + + +**`cisco.umbrella.identity_types`** +: The type of identity that made the request. For example, Roaming Computer or Network. + +type: keyword + + +**`cisco.umbrella.blocked_categories`** +: The categories that resulted in the destination being blocked. Available in version 4 and above. + +type: keyword + + +**`cisco.umbrella.content_type`** +: The type of web content, typically text/html. + +type: keyword + + +**`cisco.umbrella.sha_sha256`** +: Hex digest of the response content. + +type: keyword + + +**`cisco.umbrella.av_detections`** +: The detection name according to the antivirus engine used in file inspection. + +type: keyword + + +**`cisco.umbrella.puas`** +: A list of all potentially unwanted application (PUA) results for the proxied file as returned by the antivirus scanner. + +type: keyword + + +**`cisco.umbrella.amp_disposition`** +: The status of the files proxied and scanned by Cisco Advanced Malware Protection (AMP) as part of the Umbrella File Inspection feature; can be Clean, Malicious or Unknown. + +type: keyword + + +**`cisco.umbrella.amp_malware_name`** +: If Malicious, the name of the malware according to AMP. + +type: keyword + + +**`cisco.umbrella.amp_score`** +: The score of the malware from AMP. This field is not currently used and will be blank. + +type: keyword + + +**`cisco.umbrella.datacenter`** +: The name of the Umbrella Data Center that processed the user-generated traffic. + +type: keyword + + +**`cisco.umbrella.origin_id`** +: The unique identity of the network tunnel. 
+ +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-cloud.md b/docs/reference/filebeat/exported-fields-cloud.md new file mode 100644 index 000000000000..31d6f8a522cf --- /dev/null +++ b/docs/reference/filebeat/exported-fields-cloud.md @@ -0,0 +1,57 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-cloud.html +--- + +# Cloud provider metadata fields [exported-fields-cloud] + +Metadata from cloud providers added by the add_cloud_metadata processor. + +**`cloud.image.id`** +: Image ID for the cloud instance. + +example: ami-abcd1234 + + +**`meta.cloud.provider`** +: type: alias + +alias to: cloud.provider + + +**`meta.cloud.instance_id`** +: type: alias + +alias to: cloud.instance.id + + +**`meta.cloud.instance_name`** +: type: alias + +alias to: cloud.instance.name + + +**`meta.cloud.machine_type`** +: type: alias + +alias to: cloud.machine.type + + +**`meta.cloud.availability_zone`** +: type: alias + +alias to: cloud.availability_zone + + +**`meta.cloud.project_id`** +: type: alias + +alias to: cloud.project.id + + +**`meta.cloud.region`** +: type: alias + +alias to: cloud.region + + diff --git a/docs/reference/filebeat/exported-fields-coredns.md b/docs/reference/filebeat/exported-fields-coredns.md new file mode 100644 index 000000000000..6acf696e4d90 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-coredns.md @@ -0,0 +1,30 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-coredns.html +--- + +# Coredns fields [exported-fields-coredns] + +Module for handling logs produced by coredns + + +## coredns [_coredns] + +coredns fields after normalization + +**`coredns.query.size`** +: size of the DNS query + +type: integer + +format: bytes + + +**`coredns.response.size`** +: size of the DNS response + +type: integer + +format: bytes + + diff --git a/docs/reference/filebeat/exported-fields-crowdstrike.md b/docs/reference/filebeat/exported-fields-crowdstrike.md new file mode 100644 index 000000000000..525f2c933d02 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-crowdstrike.md @@ -0,0 +1,558 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-crowdstrike.html +--- + +# Crowdstrike fields [exported-fields-crowdstrike] + +Module for collecting Crowdstrike events. + + +## crowdstrike [_crowdstrike] + +Fields for Crowdstrike Falcon event and alert data. + + +## metadata [_metadata_2] + +Meta data fields for each event that include type and timestamp. + +**`crowdstrike.metadata.eventType`** +: DetectionSummaryEvent, FirewallMatchEvent, IncidentSummaryEvent, RemoteResponseSessionStartEvent, RemoteResponseSessionEndEvent, AuthActivityAuditEvent, or UserActivityAuditEvent + +type: keyword + + +**`crowdstrike.metadata.eventCreationTime`** +: The time this event occurred on the endpoint in UTC UNIX_MS format. + +type: date + + +**`crowdstrike.metadata.offset`** +: Offset number that tracks the location of the event in stream. This is used to identify unique detection events. + +type: integer + + +**`crowdstrike.metadata.customerIDString`** +: Customer identifier + +type: keyword + + +**`crowdstrike.metadata.version`** +: Schema version + +type: keyword + + + +## event [_event] + +Event data fields for each event and alert. + +**`crowdstrike.event.ProcessStartTime`** +: The process start time in UTC UNIX_MS format. + +type: date + + +**`crowdstrike.event.ProcessEndTime`** +: The process termination time in UTC UNIX_MS format. 
+ +type: date + + +**`crowdstrike.event.ProcessId`** +: Process ID related to the detection. + +type: integer + + +**`crowdstrike.event.ParentProcessId`** +: Parent process ID related to the detection. + +type: integer + + +**`crowdstrike.event.ComputerName`** +: Name of the computer where the detection occurred. + +type: keyword + + +**`crowdstrike.event.UserName`** +: User name associated with the detection. + +type: keyword + + +**`crowdstrike.event.DetectName`** +: Name of the detection. + +type: keyword + + +**`crowdstrike.event.DetectDescription`** +: Description of the detection. + +type: keyword + + +**`crowdstrike.event.Severity`** +: Severity score of the detection. + +type: integer + + +**`crowdstrike.event.SeverityName`** +: Severity score text. + +type: keyword + + +**`crowdstrike.event.FileName`** +: File name of the associated process for the detection. + +type: keyword + + +**`crowdstrike.event.FilePath`** +: Path of the executable associated with the detection. + +type: keyword + + +**`crowdstrike.event.CommandLine`** +: Executable path with command line arguments. + +type: keyword + + +**`crowdstrike.event.SHA1String`** +: SHA1 sum of the executable associated with the detection. + +type: keyword + + +**`crowdstrike.event.SHA256String`** +: SHA256 sum of the executable associated with the detection. + +type: keyword + + +**`crowdstrike.event.MD5String`** +: MD5 sum of the executable associated with the detection. + +type: keyword + + +**`crowdstrike.event.MachineDomain`** +: Domain for the machine associated with the detection. + +type: keyword + + +**`crowdstrike.event.FalconHostLink`** +: URL to view the detection in Falcon. + +type: keyword + + +**`crowdstrike.event.SensorId`** +: Unique ID associated with the Falcon sensor. + +type: keyword + + +**`crowdstrike.event.DetectId`** +: Unique ID associated with the detection. + +type: keyword + + +**`crowdstrike.event.LocalIP`** +: IP address of the host associated with the detection. + +type: keyword + + +**`crowdstrike.event.MACAddress`** +: MAC address of the host associated with the detection. + +type: keyword + + +**`crowdstrike.event.Tactic`** +: MITRE tactic category of the detection. + +type: keyword + + +**`crowdstrike.event.Technique`** +: MITRE technique category of the detection. + +type: keyword + + +**`crowdstrike.event.Objective`** +: Method of detection. + +type: keyword + + +**`crowdstrike.event.PatternDispositionDescription`** +: Action taken by Falcon. + +type: keyword + + +**`crowdstrike.event.PatternDispositionValue`** +: Unique ID associated with action taken. + +type: integer + + +**`crowdstrike.event.PatternDispositionFlags`** +: Flags indicating actions taken. + +type: object + + +**`crowdstrike.event.State`** +: Whether the incident summary is open and ongoing or closed. + +type: keyword + + +**`crowdstrike.event.IncidentStartTime`** +: Start time for the incident in UTC UNIX format. + +type: date + + +**`crowdstrike.event.IncidentEndTime`** +: End time for the incident in UTC UNIX format. + +type: date + + +**`crowdstrike.event.FineScore`** +: Score for incident. + +type: float + + +**`crowdstrike.event.UserId`** +: Email address or user ID associated with the event. + +type: keyword + + +**`crowdstrike.event.UserIp`** +: IP address associated with the user. + +type: keyword + + +**`crowdstrike.event.OperationName`** +: Event subtype. + +type: keyword + + +**`crowdstrike.event.ServiceName`** +: Service associated with this event. 
+ +type: keyword + + +**`crowdstrike.event.Success`** +: Indicator of whether or not this event was successful. + +type: boolean + + +**`crowdstrike.event.UTCTimestamp`** +: Timestamp associated with this event in UTC UNIX format. + +type: date + + +**`crowdstrike.event.AuditKeyValues`** +: Fields that were changed in this event. + +type: nested + + +**`crowdstrike.event.ExecutablesWritten`** +: Detected executables written to disk by a process. + +type: nested + + +**`crowdstrike.event.SessionId`** +: Session ID of the remote response session. + +type: keyword + + +**`crowdstrike.event.HostnameField`** +: Host name of the machine for the remote session. + +type: keyword + + +**`crowdstrike.event.StartTimestamp`** +: Start time for the remote session in UTC UNIX format. + +type: date + + +**`crowdstrike.event.EndTimestamp`** +: End time for the remote session in UTC UNIX format. + +type: date + + +**`crowdstrike.event.LateralMovement`** +: Lateral movement field for incident. + +type: long + + +**`crowdstrike.event.ParentImageFileName`** +: Path to the parent process. + +type: keyword + + +**`crowdstrike.event.ParentCommandLine`** +: Parent process command line arguments. + +type: keyword + + +**`crowdstrike.event.GrandparentImageFileName`** +: Path to the grandparent process. + +type: keyword + + +**`crowdstrike.event.GrandparentCommandLine`** +: Grandparent process command line arguments. + +type: keyword + + +**`crowdstrike.event.IOCType`** +: CrowdStrike type for indicator of compromise. + +type: keyword + + +**`crowdstrike.event.IOCValue`** +: CrowdStrike value for indicator of compromise. + +type: keyword + + +**`crowdstrike.event.CustomerId`** +: Customer identifier. + +type: keyword + + +**`crowdstrike.event.DeviceId`** +: Device on which the event occurred. + +type: keyword + + +**`crowdstrike.event.Ipv`** +: Protocol for network request. + +type: keyword + + +**`crowdstrike.event.ConnectionDirection`** +: Direction for network connection. + +type: keyword + + +**`crowdstrike.event.EventType`** +: CrowdStrike provided event type. + +type: keyword + + +**`crowdstrike.event.HostName`** +: Host name of the local machine. + +type: keyword + + +**`crowdstrike.event.ICMPCode`** +: RFC2780 ICMP Code field. + +type: keyword + + +**`crowdstrike.event.ICMPType`** +: RFC2780 ICMP Type field. + +type: keyword + + +**`crowdstrike.event.ImageFileName`** +: File name of the associated process for the detection. + +type: keyword + + +**`crowdstrike.event.PID`** +: Associated process id for the detection. + +type: long + + +**`crowdstrike.event.LocalAddress`** +: IP address of local machine. + +type: ip + + +**`crowdstrike.event.LocalPort`** +: Port of local machine. + +type: long + + +**`crowdstrike.event.RemoteAddress`** +: IP address of remote machine. + +type: ip + + +**`crowdstrike.event.RemotePort`** +: Port of remote machine. + +type: long + + +**`crowdstrike.event.RuleAction`** +: Firewall rule action. + +type: keyword + + +**`crowdstrike.event.RuleDescription`** +: Firewall rule description. + +type: keyword + + +**`crowdstrike.event.RuleFamilyID`** +: Firewall rule family id. + +type: keyword + + +**`crowdstrike.event.RuleGroupName`** +: Firewall rule group name. + +type: keyword + + +**`crowdstrike.event.RuleName`** +: Firewall rule name. + +type: keyword + + +**`crowdstrike.event.RuleId`** +: Firewall rule id. + +type: keyword + + +**`crowdstrike.event.MatchCount`** +: Number of firewall rule matches. 
+ +type: long + + +**`crowdstrike.event.MatchCountSinceLastReport`** +: Number of firewall rule matches since the last report. + +type: long + + +**`crowdstrike.event.Timestamp`** +: Firewall rule triggered timestamp. + +type: date + + +**`crowdstrike.event.Flags.Audit`** +: CrowdStrike audit flag. + +type: boolean + + +**`crowdstrike.event.Flags.Log`** +: CrowdStrike log flag. + +type: boolean + + +**`crowdstrike.event.Flags.Monitor`** +: CrowdStrike monitor flag. + +type: boolean + + +**`crowdstrike.event.Protocol`** +: CrowdStrike provided protocol. + +type: keyword + + +**`crowdstrike.event.NetworkProfile`** +: CrowdStrike network profile. + +type: keyword + + +**`crowdstrike.event.PolicyName`** +: CrowdStrike policy name. + +type: keyword + + +**`crowdstrike.event.PolicyID`** +: CrowdStrike policy id. + +type: keyword + + +**`crowdstrike.event.Status`** +: CrowdStrike status. + +type: keyword + + +**`crowdstrike.event.TreeID`** +: CrowdStrike tree id. + +type: keyword + + +**`crowdstrike.event.Commands`** +: Commands run in a remote session. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-cyberarkpas.md b/docs/reference/filebeat/exported-fields-cyberarkpas.md new file mode 100644 index 000000000000..6bc21632e06a --- /dev/null +++ b/docs/reference/filebeat/exported-fields-cyberarkpas.md @@ -0,0 +1,364 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-cyberarkpas.html +--- + +# CyberArk PAS fields [exported-fields-cyberarkpas] + +cyberarkpas fields. + + +## audit [_audit_2] + +Cyberark Privileged Access Security Audit fields. + +**`cyberarkpas.audit.action`** +: A description of the audit record. + +type: keyword + + + +## ca_properties [_ca_properties] + +Account metadata. 
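+The keys under `ca_properties` are lower_snake_case renderings of account property names reported by the Vault. As a rough illustration (the CamelCase source names here are assumptions, not an official list):
+
+```python
+import re
+
+# Illustrative only: convert CamelCase property names such as "CPMStatus"
+# or "PolicyID" to the lower_snake_case keys listed below.
+def snake_case(name: str) -> str:
+    words = re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z][a-z]+|[A-Z]+|[a-z]+|\d+", name)
+    return "_".join(w.lower() for w in words)
+
+for name in ("CPMStatus", "PolicyID", "LastSuccessChange", "DeviceType"):
+    print(name, "->", snake_case(name))
+# CPMStatus -> cpm_status
+# PolicyID -> policy_id
+# LastSuccessChange -> last_success_change
+# DeviceType -> device_type
+```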
+ +**`cyberarkpas.audit.ca_properties.address`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.cpm_disabled`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.cpm_error_details`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.cpm_status`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.creation_method`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.customer`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.database`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.device_type`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.dual_account_status`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.group_name`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.in_process`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.index`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.last_fail_date`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.last_success_change`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.last_success_reconciliation`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.last_success_verification`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.last_task`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.logon_domain`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.policy_id`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.port`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.privcloud`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.reset_immediately`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.retries_count`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.sequence_id`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.tags`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.user_dn`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.user_name`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.virtual_username`** +: type: keyword + + +**`cyberarkpas.audit.ca_properties.other`** +: type: flattened + + +**`cyberarkpas.audit.category`** +: The category name (for category-related operations). + +type: keyword + + +**`cyberarkpas.audit.desc`** +: A static value that displays a description of the audit codes. + +type: keyword + + + +## extra_details [_extra_details] + +Specific extra details of the audit records. 
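+Judging by the flattened `other` field at the end of this list, keys that are not mapped to one of the concrete fields below are preserved under `extra_details.other`. A minimal sketch of that routing (the known-key set here is illustrative, not the module's full mapping):
+
+```python
+# Illustrative only: route known keys to concrete extra_details fields
+# and everything else into the flattened "other" object.
+KNOWN = {"command", "protocol", "session_id", "src_host", "dst_host", "username"}
+
+def split_extra_details(raw: dict) -> dict:
+    details = {"other": {}}
+    for key, value in raw.items():
+        if key in KNOWN:
+            details[key] = value
+        else:
+            details["other"][key] = value
+    return details
+
+print(split_extra_details({"command": "ls", "shell": "/bin/bash"}))
+# {'other': {'shell': '/bin/bash'}, 'command': 'ls'}
+```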
+ +**`cyberarkpas.audit.extra_details.ad_process_id`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.ad_process_name`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.application_type`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.command`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.connection_component_id`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.dst_host`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.logon_account`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.managed_account`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.process_id`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.process_name`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.protocol`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.psmid`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.session_duration`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.session_id`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.src_host`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.username`** +: type: keyword + + +**`cyberarkpas.audit.extra_details.other`** +: type: flattened + + +**`cyberarkpas.audit.file`** +: The name of the target file. + +type: keyword + + +**`cyberarkpas.audit.gateway_station`** +: The IP of the web application machine (PVWA). + +type: ip + + +**`cyberarkpas.audit.hostname`** +: The hostname, in upper case. + +type: keyword + +example: MY-COMPUTER + + +**`cyberarkpas.audit.iso_timestamp`** +: The timestamp, in ISO Timestamp format (RFC 3339). + +type: date + +example: 2013-06-25 10:47:19+00:00 + + +**`cyberarkpas.audit.issuer`** +: The Vault user who wrote the audit. This is usually the user who performed the operation. + +type: keyword + + +**`cyberarkpas.audit.location`** +: The target Location (for Location operations). + +type: keyword + +Field is not indexed. + + +**`cyberarkpas.audit.message`** +: A description of the audit records (same information as in the Desc field). + +type: keyword + + +**`cyberarkpas.audit.message_id`** +: The code ID of the audit records. + +type: keyword + + +**`cyberarkpas.audit.product`** +: A static value that represents the product. + +type: keyword + + +**`cyberarkpas.audit.pvwa_details`** +: Specific details of the PVWA audit records. + +type: flattened + + +**`cyberarkpas.audit.raw`** +: Raw XML for the original audit record. Only present when XSLT file has debugging enabled. + +type: keyword + +Field is not indexed. + + +**`cyberarkpas.audit.reason`** +: The reason entered by the user. + +type: text + + +**`cyberarkpas.audit.rfc5424`** +: Whether the syslog format complies with RFC5424. + +type: boolean + +example: True + + +**`cyberarkpas.audit.safe`** +: The name of the target Safe. + +type: keyword + + +**`cyberarkpas.audit.severity`** +: The severity of the audit records. + +type: keyword + + +**`cyberarkpas.audit.source_user`** +: The name of the Vault user who performed the operation. + +type: keyword + + +**`cyberarkpas.audit.station`** +: The IP from where the operation was performed. For PVWA sessions, this will be the real client machine IP. + +type: ip + + +**`cyberarkpas.audit.target_user`** +: The name of the Vault user on which the operation was performed. + +type: keyword + + +**`cyberarkpas.audit.timestamp`** +: The timestamp, in MMM DD HH:MM:SS format. + +type: keyword + +example: Jun 25 10:47:19 + + +**`cyberarkpas.audit.vendor`** +: A static value that represents the vendor. 
+ +type: keyword + + +**`cyberarkpas.audit.version`** +: A static value that represents the version of the Vault. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-docker-processor.md b/docs/reference/filebeat/exported-fields-docker-processor.md new file mode 100644 index 000000000000..81cfd82e4f21 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-docker-processor.md @@ -0,0 +1,33 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-docker-processor.html +--- + +# Docker fields [exported-fields-docker-processor] + +Docker stats collected from Docker. + +**`docker.container.id`** +: type: alias + +alias to: container.id + + +**`docker.container.image`** +: type: alias + +alias to: container.image.name + + +**`docker.container.name`** +: type: alias + +alias to: container.name + + +**`docker.container.labels`** +: Image labels. + +type: object + + diff --git a/docs/reference/filebeat/exported-fields-ecs.md b/docs/reference/filebeat/exported-fields-ecs.md new file mode 100644 index 000000000000..65768bc8b85a --- /dev/null +++ b/docs/reference/filebeat/exported-fields-ecs.md @@ -0,0 +1,10423 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-ecs.html +--- + +# ECS fields [exported-fields-ecs] + +This section defines Elastic Common Schema (ECS) fields—a common set of fields to be used when storing event data in {{es}}. + +This is an exhaustive list, and fields listed here are not necessarily used by Filebeat. The goal of ECS is to enable and encourage users of {{es}} to normalize their event data, so that they can better analyze, visualize, and correlate the data represented in their events. + +See the [ECS reference](ecs://reference/index.md) for more information. + +**`@timestamp`** +: Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events. + +type: date + +example: 2016-05-23T08:05:34.853Z + +required: True + + +**`labels`** +: Custom key/value pairs. Can be used to add meta information to events. Should not contain nested objects. All values are stored as keyword. Example: `docker` and `k8s` labels. + +type: object + +example: {"application": "foo-bar", "env": "production"} + + +**`message`** +: For log events the message field contains the log message, optimized for viewing in a log viewer. For structured logs without an original message field, other fields can be concatenated to form a human-readable summary of the event. If multiple messages exist, they can be combined into one message. + +type: match_only_text + +example: Hello World + + +**`tags`** +: List of keywords used to tag each event. + +type: keyword + +example: ["production", "env2"] + + + +## agent [_agent] + +The agent fields contain the data about the software entity, if any, that collects, detects, or observes events on a host, or takes measurements on a host. Examples include Beats. Agents may also run on observers. ECS agent.* fields shall be populated with details of the agent running on the host or observer where the event happened or the measurement was taken. + +**`agent.build.original`** +: Extended build information for the agent. 
This field is intended to contain any build information that a data source may provide, no specific formatting is required. + +type: keyword + +example: metricbeat version 7.6.0 (amd64), libbeat 7.6.0 [6a23e8f8f30f5001ba344e4e54d8d9cb82cb107c built 2020-02-05 23:10:10 +0000 UTC] + + +**`agent.ephemeral_id`** +: Ephemeral identifier of this agent (if one exists). This id normally changes across restarts, but `agent.id` does not. + +type: keyword + +example: 8a4f500f + + +**`agent.id`** +: Unique identifier of this agent (if one exists). Example: For Beats this would be beat.id. + +type: keyword + +example: 8a4f500d + + +**`agent.name`** +: Custom name of the agent. This is a name that can be given to an agent. This can be helpful if for example two Filebeat instances are running on the same host but a human readable separation is needed on which Filebeat instance data is coming from. If no name is given, the name is often left empty. + +type: keyword + +example: foo + + +**`agent.type`** +: Type of the agent. The agent type always stays the same and should be given by the agent used. In case of Filebeat the agent would always be Filebeat also if two Filebeat instances are run on the same machine. + +type: keyword + +example: filebeat + + +**`agent.version`** +: Version of the agent. + +type: keyword + +example: 6.0.0-rc2 + + + +## as [_as] + +An autonomous system (AS) is a collection of connected Internet Protocol (IP) routing prefixes under the control of one or more network operators on behalf of a single administrative entity or domain that presents a common, clearly defined routing policy to the internet. + +**`as.number`** +: Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. + +type: long + +example: 15169 + + +**`as.organization.name`** +: Organization name. + +type: keyword + +example: Google LLC + + +**`as.organization.name.text`** +: type: match_only_text + + + +## client [_client] + +A client is defined as the initiator of a network connection for events regarding sessions, connections, or bidirectional flow records. For TCP events, the client is the initiator of the TCP connection that sends the SYN packet(s). For other protocols, the client is generally the initiator or requestor in the network transaction. Some systems use the term "originator" to refer the client in TCP connections. The client fields describe details about the system acting as the client in the network event. Client fields are usually populated in conjunction with server fields. Client fields are generally not populated for packet-level events. Client / server representations can add semantic context to an exchange, which is helpful to visualize the data in certain situations. If your context falls in that category, you should still ensure that source and destination are filled appropriately. + +**`client.address`** +: Some event client addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. + +type: keyword + + +**`client.as.number`** +: Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. + +type: long + +example: 15169 + + +**`client.as.organization.name`** +: Organization name. 
+ +type: keyword + +example: Google LLC + + +**`client.as.organization.name.text`** +: type: match_only_text + + +**`client.bytes`** +: Bytes sent from the client to the server. + +type: long + +example: 184 + +format: bytes + + +**`client.domain`** +: The domain name of the client system. This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment. + +type: keyword + +example: foo.example.com + + +**`client.geo.city_name`** +: City name. + +type: keyword + +example: Montreal + + +**`client.geo.continent_code`** +: Two-letter code representing continent’s name. + +type: keyword + +example: NA + + +**`client.geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`client.geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`client.geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`client.geo.location`** +: Longitude and latitude. + +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`client.geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation. + +type: keyword + +example: boston-dc + + +**`client.geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`client.geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`client.geo.region_name`** +: Region name. + +type: keyword + +example: Quebec + + +**`client.geo.timezone`** +: The time zone of the location, such as IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + +**`client.ip`** +: IP address of the client (IPv4 or IPv6). + +type: ip + + +**`client.mac`** +: MAC address of the client. The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. + +type: keyword + +example: 00-00-5E-00-53-23 + + +**`client.nat.ip`** +: Translated IP of source based NAT sessions (e.g. internal client to internet). Typically connections traversing load balancers, firewalls, or routers. + +type: ip + + +**`client.nat.port`** +: Translated port of source based NAT sessions (e.g. internal client to internet). Typically connections traversing load balancers, firewalls, or routers. + +type: long + +format: string + + +**`client.packets`** +: Packets sent from the client to the server. + +type: long + +example: 12 + + +**`client.port`** +: Port of the client. + +type: long + +format: string + + +**`client.registered_domain`** +: The highest registered client domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". 
+ +type: keyword + +example: example.com + + +**`client.subdomain`** +: The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. + +type: keyword + +example: east + + +**`client.top_level_domain`** +: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". + +type: keyword + +example: co.uk + + +**`client.user.domain`** +: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`client.user.email`** +: User email address. + +type: keyword + + +**`client.user.full_name`** +: User’s full name, if available. + +type: keyword + +example: Albert Einstein + + +**`client.user.full_name.text`** +: type: match_only_text + + +**`client.user.group.domain`** +: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`client.user.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`client.user.group.name`** +: Name of the group. + +type: keyword + + +**`client.user.hash`** +: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used. + +type: keyword + + +**`client.user.id`** +: Unique identifier of the user. + +type: keyword + +example: S-1-5-21-202424912787-2692429404-2351956786-1000 + + +**`client.user.name`** +: Short name or login of the user. + +type: keyword + +example: a.einstein + + +**`client.user.name.text`** +: type: match_only_text + + +**`client.user.roles`** +: Array of user roles at the time of the event. + +type: keyword + +example: ["kibana_admin", "reporting_user"] + + + +## cloud [_cloud] + +Fields related to the cloud or infrastructure the events are coming from. + +**`cloud.account.id`** +: The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. + +type: keyword + +example: 666777888999 + + +**`cloud.account.name`** +: The cloud account name or alias used to identify different entities in a multi-tenant environment. Examples: AWS account name, Google Cloud ORG display name. + +type: keyword + +example: elastic-dev + + +**`cloud.availability_zone`** +: Availability zone in which this host, resource, or service is located. + +type: keyword + +example: us-east-1c + + +**`cloud.instance.id`** +: Instance ID of the host machine. + +type: keyword + +example: i-1234567890abcdef0 + + +**`cloud.instance.name`** +: Instance name of the host machine. + +type: keyword + + +**`cloud.machine.type`** +: Machine type of the host machine. 
+ +type: keyword + +example: t2.medium + + +**`cloud.origin.account.id`** +: The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. + +type: keyword + +example: 666777888999 + + +**`cloud.origin.account.name`** +: The cloud account name or alias used to identify different entities in a multi-tenant environment. Examples: AWS account name, Google Cloud ORG display name. + +type: keyword + +example: elastic-dev + + +**`cloud.origin.availability_zone`** +: Availability zone in which this host, resource, or service is located. + +type: keyword + +example: us-east-1c + + +**`cloud.origin.instance.id`** +: Instance ID of the host machine. + +type: keyword + +example: i-1234567890abcdef0 + + +**`cloud.origin.instance.name`** +: Instance name of the host machine. + +type: keyword + + +**`cloud.origin.machine.type`** +: Machine type of the host machine. + +type: keyword + +example: t2.medium + + +**`cloud.origin.project.id`** +: The cloud project identifier. Examples: Google Cloud Project id, Azure Project id. + +type: keyword + +example: my-project + + +**`cloud.origin.project.name`** +: The cloud project name. Examples: Google Cloud Project name, Azure Project name. + +type: keyword + +example: my project + + +**`cloud.origin.provider`** +: Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. + +type: keyword + +example: aws + + +**`cloud.origin.region`** +: Region in which this host, resource, or service is located. + +type: keyword + +example: us-east-1 + + +**`cloud.origin.service.name`** +: The cloud service name is intended to distinguish services running on different platforms within a provider, eg AWS EC2 vs Lambda, GCP GCE vs App Engine, Azure VM vs App Server. Examples: app engine, app service, cloud run, fargate, lambda. + +type: keyword + +example: lambda + + +**`cloud.project.id`** +: The cloud project identifier. Examples: Google Cloud Project id, Azure Project id. + +type: keyword + +example: my-project + + +**`cloud.project.name`** +: The cloud project name. Examples: Google Cloud Project name, Azure Project name. + +type: keyword + +example: my project + + +**`cloud.provider`** +: Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. + +type: keyword + +example: aws + + +**`cloud.region`** +: Region in which this host, resource, or service is located. + +type: keyword + +example: us-east-1 + + +**`cloud.service.name`** +: The cloud service name is intended to distinguish services running on different platforms within a provider, eg AWS EC2 vs Lambda, GCP GCE vs App Engine, Azure VM vs App Server. Examples: app engine, app service, cloud run, fargate, lambda. + +type: keyword + +example: lambda + + +**`cloud.target.account.id`** +: The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. + +type: keyword + +example: 666777888999 + + +**`cloud.target.account.name`** +: The cloud account name or alias used to identify different entities in a multi-tenant environment. Examples: AWS account name, Google Cloud ORG display name. + +type: keyword + +example: elastic-dev + + +**`cloud.target.availability_zone`** +: Availability zone in which this host, resource, or service is located. + +type: keyword + +example: us-east-1c + + +**`cloud.target.instance.id`** +: Instance ID of the host machine. 
+ +type: keyword + +example: i-1234567890abcdef0 + + +**`cloud.target.instance.name`** +: Instance name of the host machine. + +type: keyword + + +**`cloud.target.machine.type`** +: Machine type of the host machine. + +type: keyword + +example: t2.medium + + +**`cloud.target.project.id`** +: The cloud project identifier. Examples: Google Cloud Project id, Azure Project id. + +type: keyword + +example: my-project + + +**`cloud.target.project.name`** +: The cloud project name. Examples: Google Cloud Project name, Azure Project name. + +type: keyword + +example: my project + + +**`cloud.target.provider`** +: Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. + +type: keyword + +example: aws + + +**`cloud.target.region`** +: Region in which this host, resource, or service is located. + +type: keyword + +example: us-east-1 + + +**`cloud.target.service.name`** +: The cloud service name is intended to distinguish services running on different platforms within a provider, eg AWS EC2 vs Lambda, GCP GCE vs App Engine, Azure VM vs App Server. Examples: app engine, app service, cloud run, fargate, lambda. + +type: keyword + +example: lambda + + + +## code_signature [_code_signature] + +These fields contain information about binary code signatures. + +**`code_signature.digest_algorithm`** +: The hashing algorithm used to sign the process. This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. + +type: keyword + +example: sha256 + + +**`code_signature.exists`** +: Boolean to capture if a signature is present. + +type: boolean + +example: true + + +**`code_signature.signing_id`** +: The identifier used to sign the process. This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. + +type: keyword + +example: com.apple.xpc.proxy + + +**`code_signature.status`** +: Additional information about the certificate status. This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. + +type: keyword + +example: ERROR_UNTRUSTED_ROOT + + +**`code_signature.subject_name`** +: Subject name of the code signer + +type: keyword + +example: Microsoft Corporation + + +**`code_signature.team_id`** +: The team identifier used to sign the process. This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. + +type: keyword + +example: EQHXZ8M8AV + + +**`code_signature.timestamp`** +: Date and time when the code signature was generated and signed. + +type: date + +example: 2021-01-01T12:10:30Z + + +**`code_signature.trusted`** +: Stores the trust status of the certificate chain. Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. + +type: boolean + +example: true + + +**`code_signature.valid`** +: Boolean to capture if the digital signature is verified against the binary content. Leave unpopulated if a certificate was unchecked. + +type: boolean + +example: true + + + +## container [_container_2] + +Container fields are used for meta information about the specific container that is the source of information. These fields help correlate data based containers from any runtime. + +**`container.cpu.usage`** +: Percent CPU used which is normalized by the number of CPU cores and it ranges from 0 to 1. 
Scaling factor: 1000. + +type: scaled_float + + +**`container.disk.read.bytes`** +: The total number of bytes (gauge) read successfully (aggregated from all disks) since the last metric collection. + +type: long + + +**`container.disk.write.bytes`** +: The total number of bytes (gauge) written successfully (aggregated from all disks) since the last metric collection. + +type: long + + +**`container.id`** +: Unique container id. + +type: keyword + + +**`container.image.name`** +: Name of the image the container was built on. + +type: keyword + + +**`container.image.tag`** +: Container image tags. + +type: keyword + + +**`container.labels`** +: Image labels. + +type: object + + +**`container.memory.usage`** +: Memory usage percentage and it ranges from 0 to 1. Scaling factor: 1000. + +type: scaled_float + + +**`container.name`** +: Container name. + +type: keyword + + +**`container.network.egress.bytes`** +: The number of bytes (gauge) sent out on all network interfaces by the container since the last metric collection. + +type: long + + +**`container.network.ingress.bytes`** +: The number of bytes received (gauge) on all network interfaces by the container since the last metric collection. + +type: long + + +**`container.runtime`** +: Runtime managing this container. + +type: keyword + +example: docker + + + +## data_stream [_data_stream] + +The data_stream fields take part in defining the new data stream naming scheme. In the new data stream naming scheme the value of the data stream fields combine to the name of the actual data stream in the following manner: `{data_stream.type}-{data_stream.dataset}-{data_stream.namespace}`. This means the fields can only contain characters that are valid as part of names of data streams. More details about this can be found in this [blog post](https://www.elastic.co/blog/an-introduction-to-the-elastic-data-stream-naming-scheme). An Elasticsearch data stream consists of one or more backing indices, and a data stream name forms part of the backing indices names. Due to this convention, data streams must also follow index naming restrictions. For example, data stream names cannot include `\`, `/`, `*`, `?`, `"`, `<`, `>`, `|`, ` ` (space character), `,`, or `#`. Please see the Elasticsearch reference for additional [restrictions](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create). + +**`data_stream.dataset`** +: The field can contain anything that makes sense to signify the source of the data. Examples include `nginx.access`, `prometheus`, `endpoint` etc. For data streams that otherwise fit, but that do not have dataset set we use the value "generic" for the dataset value. `event.dataset` should have the same value as `data_stream.dataset`. Beyond the Elasticsearch data stream naming criteria noted above, the `dataset` value has additional restrictions: * Must not contain `-` * No longer than 100 characters + +type: constant_keyword + +example: nginx.access + + +**`data_stream.namespace`** +: A user defined namespace. Namespaces are useful to allow grouping of data. Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with `default`. If no value is used, it falls back to `default`. 
## destination [_destination] + +Destination fields capture details about the receiver of a network exchange/packet. These fields are populated from a network event, packet, or other event containing details of a network transaction. Destination fields are usually populated in conjunction with source fields. The source and destination fields are considered the baseline and should always be filled if an event contains source and destination details from a network transaction. If the event also contains identification of the client and server roles, then the client and server fields should also be populated. + +**`destination.address`** +: Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain, or a Unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. + +type: keyword + + +**`destination.as.number`** +: Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. + +type: long + +example: 15169 + + +**`destination.as.organization.name`** +: Organization name. + +type: keyword + +example: Google LLC + + +**`destination.as.organization.name.text`** +: type: match_only_text + + +**`destination.bytes`** +: Bytes sent from the destination to the source. + +type: long + +example: 184 + +format: bytes + + +**`destination.domain`** +: The domain name of the destination system. This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment. + +type: keyword + +example: foo.example.com + + +**`destination.geo.city_name`** +: City name. + +type: keyword + +example: Montreal + + +**`destination.geo.continent_code`** +: Two-letter code representing continent’s name. + +type: keyword + +example: NA + + +**`destination.geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`destination.geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`destination.geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`destination.geo.location`** +: Longitude and latitude. + +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`destination.geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation. + +type: keyword + +example: boston-dc + + +**`destination.geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`destination.geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`destination.geo.region_name`** +: Region name. + +type: keyword + +example: Quebec + + +**`destination.geo.timezone`** +: The time zone of the location, such as IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + +**`destination.ip`** +: IP address of the destination (IPv4 or IPv6). + +type: ip + + +**`destination.mac`** +: MAC address of the destination. The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. + +type: keyword + +example: 00-00-5E-00-53-23 + + +**`destination.nat.ip`** +: Translated IP of the destination, based on NAT sessions (e.g. internet to private DMZ). Typically used with load balancers, firewalls, or routers. + +type: ip + + +**`destination.nat.port`** +: Port the source session is translated to by the NAT device. Typically used with load balancers, firewalls, or routers. + +type: long + +format: string + + +**`destination.packets`** +: Packets sent from the destination to the source. + +type: long + +example: 12 + + +**`destination.port`** +: Port of the destination. + +type: long + +format: string + + +**`destination.registered_domain`** +: The highest registered destination domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". + +type: keyword + +example: example.com + + +**`destination.subdomain`** +: The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. + +type: keyword + +example: east + + +**`destination.top_level_domain`** +: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". + +type: keyword + +example: co.uk
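As the three domain-related entries above note, these values cannot be approximated by counting labels; they need a public suffix list. A minimal sketch of one way to derive them, using the third-party `tldextract` package (an assumption, not something Beats uses), applied to the subdomain example above:

```python
import tldextract  # third-party package that ships a public suffix list

ext = tldextract.extract("www.east.mydomain.co.uk")

registered_domain = ext.registered_domain  # "mydomain.co.uk"
top_level_domain = ext.suffix              # "co.uk"
# tldextract returns every label left of the registered domain ("www.east");
# the ECS definition excludes the host name itself, so drop the leftmost label.
subdomain = ".".join(ext.subdomain.split(".")[1:])  # "east"
```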
**`destination.user.domain`** +: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`destination.user.email`** +: User email address. + +type: keyword + + +**`destination.user.full_name`** +: User’s full name, if available. + +type: keyword + +example: Albert Einstein + + +**`destination.user.full_name.text`** +: type: match_only_text + + +**`destination.user.group.domain`** +: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`destination.user.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`destination.user.group.name`** +: Name of the group.
+ +type: keyword + + +**`destination.user.hash`** +: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used. + +type: keyword + + +**`destination.user.id`** +: Unique identifier of the user. + +type: keyword + +example: S-1-5-21-202424912787-2692429404-2351956786-1000 + + +**`destination.user.name`** +: Short name or login of the user. + +type: keyword + +example: a.einstein + + +**`destination.user.name.text`** +: type: match_only_text + + +**`destination.user.roles`** +: Array of user roles at the time of the event. + +type: keyword + +example: ["kibana_admin", "reporting_user"] + + + +## dll [_dll] + +These fields contain information about code libraries dynamically loaded into processes. + +Many operating systems refer to "shared code libraries" with different names, but this field set refers to all of the following: * Dynamic-link library (`.dll`) commonly used on Windows * Shared Object (`.so`) commonly used on Unix-like operating systems * Dynamic library (`.dylib`) commonly used on macOS + +**`dll.code_signature.digest_algorithm`** +: The hashing algorithm used to sign the process. This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. + +type: keyword + +example: sha256 + + +**`dll.code_signature.exists`** +: Boolean to capture if a signature is present. + +type: boolean + +example: true + + +**`dll.code_signature.signing_id`** +: The identifier used to sign the process. This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. + +type: keyword + +example: com.apple.xpc.proxy + + +**`dll.code_signature.status`** +: Additional information about the certificate status. This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. + +type: keyword + +example: ERROR_UNTRUSTED_ROOT + + +**`dll.code_signature.subject_name`** +: Subject name of the code signer + +type: keyword + +example: Microsoft Corporation + + +**`dll.code_signature.team_id`** +: The team identifier used to sign the process. This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. + +type: keyword + +example: EQHXZ8M8AV + + +**`dll.code_signature.timestamp`** +: Date and time when the code signature was generated and signed. + +type: date + +example: 2021-01-01T12:10:30Z + + +**`dll.code_signature.trusted`** +: Stores the trust status of the certificate chain. Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. + +type: boolean + +example: true + + +**`dll.code_signature.valid`** +: Boolean to capture if the digital signature is verified against the binary content. Leave unpopulated if a certificate was unchecked. + +type: boolean + +example: true + + +**`dll.hash.md5`** +: MD5 hash. + +type: keyword + + +**`dll.hash.sha1`** +: SHA1 hash. + +type: keyword + + +**`dll.hash.sha256`** +: SHA256 hash. + +type: keyword + + +**`dll.hash.sha512`** +: SHA512 hash. + +type: keyword + + +**`dll.hash.ssdeep`** +: SSDEEP hash. + +type: keyword + + +**`dll.name`** +: Name of the library. This generally maps to the name of the file on disk. + +type: keyword + +example: kernel32.dll + + +**`dll.path`** +: Full file path of the library. 
+ +type: keyword + +example: C:\Windows\System32\kernel32.dll + + +**`dll.pe.architecture`** +: CPU architecture target for the file. + +type: keyword + +example: x64 + + +**`dll.pe.company`** +: Internal company name of the file, provided at compile-time. + +type: keyword + +example: Microsoft Corporation + + +**`dll.pe.description`** +: Internal description of the file, provided at compile-time. + +type: keyword + +example: Paint + + +**`dll.pe.file_version`** +: Internal version of the file, provided at compile-time. + +type: keyword + +example: 6.3.9600.17415 + + +**`dll.pe.imphash`** +: A hash of the imports in a PE file. An imphash — or import hash — can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. Learn more at [https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html](https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html). + +type: keyword + +example: 0c6803c4e922103c4dca5963aad36ddf + + +**`dll.pe.original_file_name`** +: Internal name of the file, provided at compile-time. + +type: keyword + +example: MSPAINT.EXE + + +**`dll.pe.product`** +: Internal product name of the file, provided at compile-time. + +type: keyword + +example: Microsoft® Windows® Operating System + + + +## dns [_dns] + +Fields describing DNS queries and answers. DNS events should either represent a single DNS query prior to getting answers (`dns.type:query`) or they should represent a full exchange and contain the query details as well as all of the answers that were provided for this query (`dns.type:answer`). + +**`dns.answers`** +: An array containing an object for each answer section returned by the server. The main keys that should be present in these objects are defined by ECS. Records that have more information may contain more keys than what ECS defines. Not all DNS data sources give all details about DNS answers. At minimum, answer objects must contain the `data` key. If more information is available, map as much of it to ECS as possible, and add any additional fields to the answer objects as custom fields. + +type: object + + +**`dns.answers.class`** +: The class of DNS data contained in this resource record. + +type: keyword + +example: IN + + +**`dns.answers.data`** +: The data describing the resource. The meaning of this data depends on the type and class of the resource record. + +type: keyword + +example: 10.10.10.10 + + +**`dns.answers.name`** +: The domain name to which this resource record pertains. If a chain of CNAME is being resolved, each answer’s `name` should be the one that corresponds with the answer’s `data`. It should not simply be the original `question.name` repeated. + +type: keyword + +example: www.example.com + + +**`dns.answers.ttl`** +: The time interval in seconds that this resource record may be cached before it should be discarded. Zero values mean that the data should not be cached. + +type: long + +example: 180 + + +**`dns.answers.type`** +: The type of data contained in this resource record. + +type: keyword + +example: CNAME + + +**`dns.header_flags`** +: Array of 2 letter DNS header flags. Expected values are: AA, TC, RD, RA, AD, CD, DO. + +type: keyword + +example: ["RD", "RA"] + + +**`dns.id`** +: The DNS packet identifier assigned by the program that generated the query. The identifier is copied to the response. 
+ +type: keyword + +example: 62111 + + +**`dns.op_code`** +: The DNS operation code that specifies the kind of query in the message. This value is set by the originator of a query and copied into the response. + +type: keyword + +example: QUERY + + +**`dns.question.class`** +: The class of records being queried. + +type: keyword + +example: IN + + +**`dns.question.name`** +: The name being queried. If the name field contains non-printable characters (below 32 or above 126), those characters should be represented as escaped base 10 integers (\DDD). Back slashes and quotes should be escaped. Tabs, carriage returns, and line feeds should be converted to \t, \r, and \n respectively. + +type: keyword + +example: www.example.com + + +**`dns.question.registered_domain`** +: The highest registered domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". + +type: keyword + +example: example.com + + +**`dns.question.subdomain`** +: The subdomain is all of the labels under the registered_domain. If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. + +type: keyword + +example: www + + +**`dns.question.top_level_domain`** +: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". + +type: keyword + +example: co.uk + + +**`dns.question.type`** +: The type of record being queried. + +type: keyword + +example: AAAA + + +**`dns.resolved_ip`** +: Array containing all IPs seen in `answers.data`. The `answers` array can be difficult to use, because of the variety of data formats it can contain. Extracting all IP addresses seen in there to `dns.resolved_ip` makes it possible to index them as IP addresses, and makes them easier to visualize and query for. + +type: ip + +example: ["10.10.10.10", "10.10.10.11"] + + +**`dns.response_code`** +: The DNS response code. + +type: keyword + +example: NOERROR + + +**`dns.type`** +: The type of DNS event captured, query or answer. If your source of DNS events only gives you DNS queries, you should only create dns events of type `dns.type:query`. If your source of DNS events gives you answers as well, you should create one event per query (optionally as soon as the query is seen). And a second event containing all query details as well as an array of answers. + +type: keyword + +example: answer + + + +## ecs [_ecs_2] + +Meta-information specific to ECS. + +**`ecs.version`** +: ECS version this event conforms to. `ecs.version` is a required field and must exist in all events. When querying across multiple indices — which may conform to slightly different ECS versions — this field lets integrations adjust to the schema version of the events. + +type: keyword + +example: 1.0.0 + +required: True + + + +## elf [_elf] + +These fields contain Linux Executable Linkable Format (ELF) metadata. 
+ +**`elf.architecture`** +: Machine architecture of the ELF file. + +type: keyword + +example: x86-64 + + +**`elf.byte_order`** +: Byte sequence of ELF file. + +type: keyword + +example: Little Endian + + +**`elf.cpu_type`** +: CPU type of the ELF file. + +type: keyword + +example: Intel + + +**`elf.creation_date`** +: Extracted when possible from the file’s metadata. Indicates when it was built or compiled. It can also be faked by malware creators. + +type: date + + +**`elf.exports`** +: List of exported element names and types. + +type: flattened + + +**`elf.header.abi_version`** +: Version of the ELF Application Binary Interface (ABI). + +type: keyword + + +**`elf.header.class`** +: Header class of the ELF file. + +type: keyword + + +**`elf.header.data`** +: Data table of the ELF header. + +type: keyword + + +**`elf.header.entrypoint`** +: Header entrypoint of the ELF file. + +type: long + +format: string + + +**`elf.header.object_version`** +: "0x1" for original ELF files. + +type: keyword + + +**`elf.header.os_abi`** +: Application Binary Interface (ABI) of the Linux OS. + +type: keyword + + +**`elf.header.type`** +: Header type of the ELF file. + +type: keyword + + +**`elf.header.version`** +: Version of the ELF header. + +type: keyword + + +**`elf.imports`** +: List of imported element names and types. + +type: flattened + + +**`elf.sections`** +: An array containing an object for each section of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. + +type: nested + + +**`elf.sections.chi2`** +: Chi-square probability distribution of the section. + +type: long + +format: number + + +**`elf.sections.entropy`** +: Shannon entropy calculation from the section. + +type: long + +format: number + + +**`elf.sections.flags`** +: ELF Section List flags. + +type: keyword + + +**`elf.sections.name`** +: ELF Section List name. + +type: keyword + + +**`elf.sections.physical_offset`** +: ELF Section List offset. + +type: keyword + + +**`elf.sections.physical_size`** +: ELF Section List physical size. + +type: long + +format: bytes + + +**`elf.sections.type`** +: ELF Section List type. + +type: keyword + + +**`elf.sections.virtual_address`** +: ELF Section List virtual address. + +type: long + +format: string + + +**`elf.sections.virtual_size`** +: ELF Section List virtual size. + +type: long + +format: string + + +**`elf.segments`** +: An array containing an object for each segment of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. + +type: nested + + +**`elf.segments.sections`** +: ELF object segment sections. + +type: keyword + + +**`elf.segments.type`** +: ELF object segment type. + +type: keyword + + +**`elf.shared_libraries`** +: List of shared libraries used by this ELF object. + +type: keyword + + +**`elf.telfhash`** +: telfhash symbol hash for ELF file. + +type: keyword + + + +## error [_error_2] + +These fields can represent errors of any kind. Use them for errors that happen while fetching events or in cases where the event itself contains an error. + +**`error.code`** +: Error code describing the error. + +type: keyword + + +**`error.id`** +: Unique identifier for the error. + +type: keyword + + +**`error.message`** +: Error message. + +type: match_only_text + + +**`error.stack_trace`** +: The stack trace of this error in plain text. 
+ +type: wildcard + + +**`error.stack_trace.text`** +: type: match_only_text + + +**`error.type`** +: The type of the error, for example the class name of the exception. + +type: keyword + +example: java.lang.NullPointerException + + + +## event [_event_2] + +The event fields are used for context information about the log or metric event itself. A log is defined as an event containing details of something that happened. Log events must include the time at which the thing happened. Examples of log events include a process starting on a host, a network packet being sent from a source to a destination, or a network connection between a client and a server being initiated or closed. A metric is defined as an event containing one or more numerical measurements and the time at which the measurement was taken. Examples of metric events include memory pressure measured on a host and device temperature. See the `event.kind` definition in this section for additional details about metric and state events. + +**`event.action`** +: The action captured by the event. This describes the information in the event. It is more specific than `event.category`. Examples are `group-add`, `process-started`, `file-created`. The value is normally defined by the implementer. + +type: keyword + +example: user-password-change + + +**`event.agent_id_status`** +: Agents are normally responsible for populating the `agent.id` field value. If the system receiving events is capable of validating the value based on authentication information for the client, then this field can be used to reflect the outcome of that validation. For example, if the agent’s connection is authenticated with mTLS and the client cert contains the ID of the agent to which the cert was issued, then the `agent.id` value in events can be checked against the certificate. If the values match, then `event.agent_id_status: verified` is added to the event, otherwise one of the other allowed values should be used. If no validation is performed, then the field should be omitted. The allowed values are: `verified` - The `agent.id` field value matches the expected value obtained from auth metadata. `mismatch` - The `agent.id` field value does not match the expected value obtained from auth metadata. `missing` - There was no `agent.id` field in the event to validate. `auth_metadata_missing` - There was no auth metadata or it was missing information about the agent ID. + +type: keyword + +example: verified
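The validation described above reduces to a simple comparison. A hedged sketch (names are illustrative; this is not Beats code) that maps the comparison outcome onto the allowed values:

```python
from typing import Optional

def agent_id_status(event: dict, expected_id: Optional[str]) -> str:
    """Return the appropriate event.agent_id_status value."""
    claimed_id = event.get("agent", {}).get("id")
    if expected_id is None:
        return "auth_metadata_missing"  # no auth metadata to compare against
    if claimed_id is None:
        return "missing"                # nothing in the event to validate
    return "verified" if claimed_id == expected_id else "mismatch"

event = {"agent": {"id": "ab12cd34"}}
print(agent_id_status(event, "ab12cd34"))  # verified
```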
**`event.category`** +: This is one of four ECS Categorization Fields, and indicates the second level in the ECS category hierarchy. `event.category` represents the "big buckets" of ECS categories. For example, filtering on `event.category:process` yields all events relating to process activity. This field is closely related to `event.type`, which is used as a subcategory. This field is an array. This will allow proper categorization of some events that fall in multiple categories. + +type: keyword + +example: authentication + + +**`event.code`** +: Identification code for this event, if one exists. Some event sources use event codes to identify messages unambiguously, regardless of message language or wording adjustments over time. An example of this is the Windows Event ID. + +type: keyword + +example: 4648 + + +**`event.created`** +: event.created contains the date/time when the event was first read by an agent, or by your pipeline. This field is distinct from @timestamp in that @timestamp typically contains the time extracted from the original event. In most situations, these two timestamps will be slightly different. The difference can be used to calculate the delay between your source generating an event, and the time when your agent first processed it. This can be used to monitor your agent’s or pipeline’s ability to keep up with your event source. In case the two timestamps are identical, @timestamp should be used. + +type: date + +example: 2016-05-23T08:05:34.857Z + + +**`event.dataset`** +: Name of the dataset. If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. It’s recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name. + +type: keyword + +example: apache.access + + +**`event.duration`** +: Duration of the event in nanoseconds. If event.start and event.end are known, this value should be the difference between the end and start time. + +type: long + +format: duration + + +**`event.end`** +: event.end contains the date when the event ended or when the activity was last observed. + +type: date + + +**`event.hash`** +: Hash (perhaps logstash fingerprint) of raw field to be able to demonstrate log integrity. + +type: keyword + +example: 123456789012345678901234567890ABCD + + +**`event.id`** +: Unique ID to describe the event. + +type: keyword + +example: 8a4f500d + + +**`event.ingested`** +: Timestamp when an event arrived in the central data store. This is different from `@timestamp`, which is when the event originally occurred. It’s also different from `event.created`, which is meant to capture the first time an agent saw the event. In normal conditions, assuming no tampering, the timestamps should chronologically look like this: `@timestamp` < `event.created` < `event.ingested`. + +type: date + +example: 2016-05-23T08:05:35.101Z + + +**`event.kind`** +: This is one of four ECS Categorization Fields, and indicates the highest level in the ECS category hierarchy. `event.kind` gives high-level information about what type of information the event contains, without being specific to the contents of the event. For example, values of this field distinguish alert events from metric events. The value of this field can be used to inform how these kinds of events should be handled. They may warrant different retention or different access control, and it may also help understand whether the data is coming in at a regular interval or not. + +type: keyword + +example: alert + + +**`event.module`** +: Name of the module this data is coming from. If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), `event.module` should contain the name of this module. + +type: keyword + +example: apache + + +**`event.original`** +: Raw text message of entire event. Used to demonstrate log integrity or where the full log message (before splitting it up in multiple parts) may be required, e.g. for reindex. This field is not indexed and doc_values are disabled. It cannot be searched, but it can be retrieved from `_source`. If users wish to override this and index this field, please see `Field data types` in the `Elasticsearch Reference`. + +type: keyword + +example: Sep 19 08:26:10 host CEF:0|Security| threatmanager|1.0|100| worm successfully stopped|10|src=10.0.0.1 dst=2.1.2.2spt=1232 + +Field is not indexed.
+ + +**`event.outcome`** +: This is one of four ECS Categorization Fields, and indicates the lowest level in the ECS category hierarchy. `event.outcome` simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. Note that when a single transaction is described in multiple events, each event may populate different values of `event.outcome`, according to their perspective. Also note that in the case of a compound event (a single event that contains multiple logical events), this field should be populated with the value that best captures the overall success or failure from the perspective of the event producer. Further note that not all events will have an associated outcome. For example, this field is generally not populated for metric events, events with `event.type:info`, or any events for which an outcome does not make logical sense. + +type: keyword + +example: success + + +**`event.provider`** +: Source of the event. Event transports such as Syslog or the Windows Event Log typically mention the source of an event. It can be the name of the software that generated the event (e.g. Sysmon, httpd), or of a subsystem of the operating system (kernel, Microsoft-Windows-Security-Auditing). + +type: keyword + +example: kernel + + +**`event.reason`** +: Reason why this event happened, according to the source. This describes the why of a particular action or outcome captured in the event. Where `event.action` captures the action from the event, `event.reason` describes why that action was taken. For example, a web proxy with an `event.action` which denied the request may also populate `event.reason` with the reason why (e.g. `blocked site`). + +type: keyword + +example: Terminated an unexpected process + + +**`event.reference`** +: Reference URL linking to additional information about this event. This URL links to a static definition of this event. Alert events, indicated by `event.kind:alert`, are a common use case for this field. + +type: keyword + +example: [https://system.example.com/event/#0001234](https://system.example.com/event/#0001234) + + +**`event.risk_score`** +: Risk score or priority of the event (e.g. security solutions). Use your system’s original value here. + +type: float + + +**`event.risk_score_norm`** +: Normalized risk score or priority of the event, on a scale of 0 to 100. This is mainly useful if you use more than one system that assigns risk scores, and you want to see a normalized value across all systems. + +type: float + + +**`event.sequence`** +: Sequence number of the event. The sequence number is a value published by some event sources, to make the exact ordering of events unambiguous, regardless of the timestamp precision. + +type: long + +format: string + + +**`event.severity`** +: The numeric severity of the event according to your event source. What the different severity values mean can be different between sources and use cases. It’s up to the implementer to make sure severities are consistent across events from the same source. The Syslog severity belongs in `log.syslog.severity.code`. `event.severity` is meant to represent the severity according to the event source (e.g. firewall, IDS). If the event source does not publish its own severity, you may optionally copy the `log.syslog.severity.code` to `event.severity`. + +type: long + +example: 7 + +format: string + + +**`event.start`** +: event.start contains the date when the event started or when the activity was first observed. 
+ +type: date + + +**`event.timezone`** +: This field should be populated when the event’s timestamp does not include timezone information already (e.g. default Syslog timestamps). It’s optional otherwise. Acceptable timezone formats are: a canonical ID (e.g. "Europe/Amsterdam"), abbreviated (e.g. "EST") or an HH:mm differential (e.g. "-05:00"). + +type: keyword + + +**`event.type`** +: This is one of four ECS Categorization Fields, and indicates the third level in the ECS category hierarchy. `event.type` represents a categorization "sub-bucket" that, when used along with the `event.category` field values, enables filtering events down to a level appropriate for single visualization. This field is an array. This will allow proper categorization of some events that fall in multiple event types. + +type: keyword + + +**`event.url`** +: URL linking to an external system to continue investigation of this event. This URL links to another system where in-depth investigation of the specific occurrence of this event can take place. Alert events, indicated by `event.kind:alert`, are a common use case for this field. + +type: keyword + +example: [https://mysystem.example.com/alert/5271dedb-f5b0-4218-87f0-4ac4870a38fe](https://mysystem.example.com/alert/5271dedb-f5b0-4218-87f0-4ac4870a38fe) + + + +## faas [_faas] + +The faas fields describe information about the function as a service (FaaS) that is relevant to the event. + +**`faas.coldstart`** +: Boolean value indicating a cold start of a function. + +type: boolean + + +**`faas.execution`** +: The execution ID of the current function execution. + +type: keyword + +example: af9d5aa4-a685-4c5f-a22b-444f80b3cc28 + + +**`faas.trigger`** +: Details about the function trigger. + +type: nested + + +**`faas.trigger.request_id`** +: The ID of the trigger request, message, event, etc. + +type: keyword + +example: 123456789 + + +**`faas.trigger.type`** +: The trigger for the function execution. Expected values are: * http * pubsub * datasource * timer * other + +type: keyword + +example: http + + + +## file [_file_2] + +A file is defined as a set of information that has been created on, or has existed on, a filesystem. File objects can be associated with host events, network events, and/or file events (e.g., those produced by File Integrity Monitoring [FIM] products or services). File fields provide details about the affected file associated with the event or metric. + +**`file.accessed`** +: Last time the file was accessed. Note that not all filesystems keep track of access time. + +type: date + + +**`file.attributes`** +: Array of file attributes. Attribute names will vary by platform. Here’s a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write. + +type: keyword + +example: ["readonly", "system"] + + +**`file.code_signature.digest_algorithm`** +: The hashing algorithm used to sign the process. This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. + +type: keyword + +example: sha256 + + +**`file.code_signature.exists`** +: Boolean to capture if a signature is present. + +type: boolean + +example: true + + +**`file.code_signature.signing_id`** +: The identifier used to sign the process. This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only.
+ +type: keyword + +example: com.apple.xpc.proxy + + +**`file.code_signature.status`** +: Additional information about the certificate status. This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. + +type: keyword + +example: ERROR_UNTRUSTED_ROOT + + +**`file.code_signature.subject_name`** +: Subject name of the code signer + +type: keyword + +example: Microsoft Corporation + + +**`file.code_signature.team_id`** +: The team identifier used to sign the process. This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. + +type: keyword + +example: EQHXZ8M8AV + + +**`file.code_signature.timestamp`** +: Date and time when the code signature was generated and signed. + +type: date + +example: 2021-01-01T12:10:30Z + + +**`file.code_signature.trusted`** +: Stores the trust status of the certificate chain. Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. + +type: boolean + +example: true + + +**`file.code_signature.valid`** +: Boolean to capture if the digital signature is verified against the binary content. Leave unpopulated if a certificate was unchecked. + +type: boolean + +example: true + + +**`file.created`** +: File creation time. Note that not all filesystems store the creation time. + +type: date + + +**`file.ctime`** +: Last time the file attributes or metadata changed. Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file. + +type: date + + +**`file.device`** +: Device that is the source of the file. + +type: keyword + +example: sda + + +**`file.directory`** +: Directory where the file is located. It should include the drive letter, when appropriate. + +type: keyword + +example: /home/alice + + +**`file.drive_letter`** +: Drive letter where the file is located. This field is only relevant on Windows. The value should be uppercase, and not include the colon. + +type: keyword + +example: C + + +**`file.elf.architecture`** +: Machine architecture of the ELF file. + +type: keyword + +example: x86-64 + + +**`file.elf.byte_order`** +: Byte sequence of ELF file. + +type: keyword + +example: Little Endian + + +**`file.elf.cpu_type`** +: CPU type of the ELF file. + +type: keyword + +example: Intel + + +**`file.elf.creation_date`** +: Extracted when possible from the file’s metadata. Indicates when it was built or compiled. It can also be faked by malware creators. + +type: date + + +**`file.elf.exports`** +: List of exported element names and types. + +type: flattened + + +**`file.elf.header.abi_version`** +: Version of the ELF Application Binary Interface (ABI). + +type: keyword + + +**`file.elf.header.class`** +: Header class of the ELF file. + +type: keyword + + +**`file.elf.header.data`** +: Data table of the ELF header. + +type: keyword + + +**`file.elf.header.entrypoint`** +: Header entrypoint of the ELF file. + +type: long + +format: string + + +**`file.elf.header.object_version`** +: "0x1" for original ELF files. + +type: keyword + + +**`file.elf.header.os_abi`** +: Application Binary Interface (ABI) of the Linux OS. + +type: keyword + + +**`file.elf.header.type`** +: Header type of the ELF file. + +type: keyword + + +**`file.elf.header.version`** +: Version of the ELF header. 
+ +type: keyword + + +**`file.elf.imports`** +: List of imported element names and types. + +type: flattened + + +**`file.elf.sections`** +: An array containing an object for each section of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. + +type: nested + + +**`file.elf.sections.chi2`** +: Chi-square probability distribution of the section. + +type: long + +format: number + + +**`file.elf.sections.entropy`** +: Shannon entropy calculation from the section. + +type: long + +format: number + + +**`file.elf.sections.flags`** +: ELF Section List flags. + +type: keyword + + +**`file.elf.sections.name`** +: ELF Section List name. + +type: keyword + + +**`file.elf.sections.physical_offset`** +: ELF Section List offset. + +type: keyword + + +**`file.elf.sections.physical_size`** +: ELF Section List physical size. + +type: long + +format: bytes + + +**`file.elf.sections.type`** +: ELF Section List type. + +type: keyword + + +**`file.elf.sections.virtual_address`** +: ELF Section List virtual address. + +type: long + +format: string + + +**`file.elf.sections.virtual_size`** +: ELF Section List virtual size. + +type: long + +format: string + + +**`file.elf.segments`** +: An array containing an object for each segment of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. + +type: nested + + +**`file.elf.segments.sections`** +: ELF object segment sections. + +type: keyword + + +**`file.elf.segments.type`** +: ELF object segment type. + +type: keyword + + +**`file.elf.shared_libraries`** +: List of shared libraries used by this ELF object. + +type: keyword + + +**`file.elf.telfhash`** +: telfhash symbol hash for ELF file. + +type: keyword + + +**`file.extension`** +: File extension, excluding the leading dot. Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). + +type: keyword + +example: png + + +**`file.fork_name`** +: A fork is additional data associated with a filesystem object. On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist. On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name. + +type: keyword + +example: Zone.Identifier
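The `path:stream` convention described above splits cleanly at the last colon. A hedged sketch (illustrative only; it assumes a fork is present, since the drive-letter colon would otherwise be matched) of mapping an ADS path onto the file fields:

```python
import ntpath

full_path = r"C:\path\to\example.pdf:Zone.Identifier"  # assumes the path:stream form

path, _, fork_name = full_path.rpartition(":")  # split at the last colon
name = ntpath.basename(path)                    # "example.pdf"
extension = name.rsplit(".", 1)[-1]             # "pdf" (last extension only)

file_fields = {
    "file.path": full_path,        # the full path keeps the fork name
    "file.name": name,
    "file.extension": extension,
    "file.fork_name": fork_name,   # "Zone.Identifier"
}
```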
**`file.gid`** +: Primary group ID (GID) of the file. + +type: keyword + +example: 1001 + + +**`file.group`** +: Primary group name of the file. + +type: keyword + +example: alice + + +**`file.hash.md5`** +: MD5 hash. + +type: keyword + + +**`file.hash.sha1`** +: SHA1 hash. + +type: keyword + + +**`file.hash.sha256`** +: SHA256 hash. + +type: keyword + + +**`file.hash.sha512`** +: SHA512 hash. + +type: keyword + + +**`file.hash.ssdeep`** +: SSDEEP hash. + +type: keyword + + +**`file.inode`** +: Inode representing the file in the filesystem. + +type: keyword + +example: 256383 + + +**`file.mime_type`** +: MIME type should identify the format of the file or stream of bytes using [IANA official types](https://www.iana.org/assignments/media-types/media-types.xhtml), where possible. When more than one type is applicable, the most specific type should be used. + +type: keyword + + +**`file.mode`** +: Mode of the file in octal representation. + +type: keyword + +example: 0640 + + +**`file.mtime`** +: Last time the file content was modified. + +type: date + + +**`file.name`** +: Name of the file including the extension, without the directory. + +type: keyword + +example: example.png + + +**`file.owner`** +: File owner’s username. + +type: keyword + +example: alice + + +**`file.path`** +: Full path to the file, including the file name. It should include the drive letter, when appropriate. + +type: keyword + +example: /home/alice/example.png + + +**`file.path.text`** +: type: match_only_text + + +**`file.pe.architecture`** +: CPU architecture target for the file. + +type: keyword + +example: x64 + + +**`file.pe.company`** +: Internal company name of the file, provided at compile-time. + +type: keyword + +example: Microsoft Corporation + + +**`file.pe.description`** +: Internal description of the file, provided at compile-time. + +type: keyword + +example: Paint + + +**`file.pe.file_version`** +: Internal version of the file, provided at compile-time. + +type: keyword + +example: 6.3.9600.17415 + + +**`file.pe.imphash`** +: A hash of the imports in a PE file. An imphash — or import hash — can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. Learn more at [https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html](https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html). + +type: keyword + +example: 0c6803c4e922103c4dca5963aad36ddf + + +**`file.pe.original_file_name`** +: Internal name of the file, provided at compile-time. + +type: keyword + +example: MSPAINT.EXE + + +**`file.pe.product`** +: Internal product name of the file, provided at compile-time. + +type: keyword + +example: Microsoft® Windows® Operating System + + +**`file.size`** +: File size in bytes. Only relevant when `file.type` is "file". + +type: long + +example: 16384 + + +**`file.target_path`** +: Target path for symlinks. + +type: keyword + + +**`file.target_path.text`** +: type: match_only_text + + +**`file.type`** +: File type (file, dir, or symlink). + +type: keyword + +example: file + + +**`file.uid`** +: The user ID (UID) or security identifier (SID) of the file owner. + +type: keyword + +example: 1001 + + +**`file.x509.alternative_names`** +: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. + +type: keyword + +example: *.elastic.co + + +**`file.x509.issuer.common_name`** +: List of common names (CN) of the issuing certificate authority. + +type: keyword + +example: Example SHA2 High Assurance Server CA + + +**`file.x509.issuer.country`** +: List of country (C) codes. + +type: keyword + +example: US + + +**`file.x509.issuer.distinguished_name`** +: Distinguished name (DN) of the issuing certificate authority. + +type: keyword + +example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA + + +**`file.x509.issuer.locality`** +: List of locality names (L). + +type: keyword + +example: Mountain View + + +**`file.x509.issuer.organization`** +: List of organizations (O) of the issuing certificate authority. + +type: keyword + +example: Example Inc + + +**`file.x509.issuer.organizational_unit`** +: List of organizational units (OU) of the issuing certificate authority. + +type: keyword + +example: www.example.com + + +**`file.x509.issuer.state_or_province`** +: List of state or province names (ST, S, or P). + +type: keyword + +example: California + + +**`file.x509.not_after`** +: Time at which the certificate is no longer considered valid. + +type: date + +example: 2020-07-16 03:15:39+00:00 + + +**`file.x509.not_before`** +: Time at which the certificate is first considered valid. + +type: date + +example: 2019-08-16 01:40:25+00:00 + + +**`file.x509.public_key_algorithm`** +: Algorithm used to generate the public key. + +type: keyword + +example: RSA + + +**`file.x509.public_key_curve`** +: The curve used by the elliptic curve public key algorithm. This is algorithm specific. + +type: keyword + +example: nistp521 + + +**`file.x509.public_key_exponent`** +: Exponent used to derive the public key. This is algorithm specific. + +type: long + +example: 65537 + +Field is not indexed. + + +**`file.x509.public_key_size`** +: The size of the public key space in bits. + +type: long + +example: 2048 + + +**`file.x509.serial_number`** +: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and with uppercase characters. + +type: keyword + +example: 55FBB9C7DEBF09809D12CCAA
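The serial-number convention above (strip colons, uppercase the hex digits) is a one-line normalization. A small sketch, reproducing the example value shown:

```python
def normalize_serial(serial: str) -> str:
    """Strip colons and uppercase the hex digits, per the convention above."""
    return serial.replace(":", "").upper()

print(normalize_serial("55:fb:b9:c7:de:bf:09:80:9d:12:cc:aa"))
# -> 55FBB9C7DEBF09809D12CCAA
```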
**`file.x509.signature_algorithm`** +: Identifier for the certificate signature algorithm. We recommend using names found in the Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353). + +type: keyword + +example: SHA256-RSA + + +**`file.x509.subject.common_name`** +: List of common names (CN) of the subject. + +type: keyword + +example: shared.global.example.net + + +**`file.x509.subject.country`** +: List of country (C) codes. + +type: keyword + +example: US + + +**`file.x509.subject.distinguished_name`** +: Distinguished name (DN) of the certificate subject entity. + +type: keyword + +example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net + + +**`file.x509.subject.locality`** +: List of locality names (L). + +type: keyword + +example: San Francisco + + +**`file.x509.subject.organization`** +: List of organizations (O) of the subject. + +type: keyword + +example: Example, Inc. + + +**`file.x509.subject.organizational_unit`** +: List of organizational units (OU) of the subject. + +type: keyword + + +**`file.x509.subject.state_or_province`** +: List of state or province names (ST, S, or P). + +type: keyword + +example: California + + +**`file.x509.version_number`** +: Version of x509 format. + +type: keyword + +example: 3 + + + +## geo [_geo] + +Geo fields can carry data about a specific location related to an event. This geolocation information can be derived from techniques such as Geo IP, or be user-supplied. + +**`geo.city_name`** +: City name. + +type: keyword + +example: Montreal + + +**`geo.continent_code`** +: Two-letter code representing continent’s name.
+ +type: keyword + +example: NA + + +**`geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`geo.location`** +: Longitude and latitude. + +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation. + +type: keyword + +example: boston-dc + + +**`geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`geo.region_name`** +: Region name. + +type: keyword + +example: Quebec + + +**`geo.timezone`** +: The time zone of the location, such as IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + + +## group [_group] + +The group fields are meant to represent groups that are relevant to the event. + +**`group.domain`** +: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`group.name`** +: Name of the group. + +type: keyword + + + +## hash [_hash] + +The hash fields represent different bitwise hash algorithms and their values. Field names for common hashes (e.g. MD5, SHA1) are predefined. Add fields for other hashes by lowercasing the hash algorithm name and using underscore separators as appropriate (snake case, e.g. sha3_512). Note that this fieldset is used for common hashes that may be computed over a range of generic bytes. Entity-specific hashes such as ja3 or imphash are placed in the fieldsets to which they relate (tls and pe, respectively). + +**`hash.md5`** +: MD5 hash. + +type: keyword + + +**`hash.sha1`** +: SHA1 hash. + +type: keyword + + +**`hash.sha256`** +: SHA256 hash. + +type: keyword + + +**`hash.sha512`** +: SHA512 hash. + +type: keyword + + +**`hash.ssdeep`** +: SSDEEP hash. + +type: keyword + + + +## host [_host] + +A host is defined as a general computing instance. ECS host.* fields should be populated with details about the host on which the event happened, or from which the measurement was taken. Host types include hardware, virtual machines, Docker containers, and Kubernetes nodes. + +**`host.architecture`** +: Operating system architecture. + +type: keyword + +example: x86_64 + + +**`host.cpu.usage`** +: Percent CPU used, normalized by the number of CPU cores, ranging from 0 to 1. Scaling factor: 1000. For example, on a two core host, this value should be the average of the two cores, between 0 and 1. + +type: scaled_float + + +**`host.disk.read.bytes`** +: The total number of bytes (gauge) read successfully (aggregated from all disks) since the last metric collection. + +type: long + + +**`host.disk.write.bytes`** +: The total number of bytes (gauge) written successfully (aggregated from all disks) since the last metric collection. + +type: long + + +**`host.domain`** +: Name of the domain of which the host is a member.
For example, on Windows this could be the host’s Active Directory domain or NetBIOS domain name. For Linux this could be the domain of the host’s LDAP provider. + +type: keyword + +example: CONTOSO + + +**`host.geo.city_name`** +: City name. + +type: keyword + +example: Montreal + + +**`host.geo.continent_code`** +: Two-letter code representing continent’s name. + +type: keyword + +example: NA + + +**`host.geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`host.geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`host.geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`host.geo.location`** +: Longitude and latitude. + +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`host.geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation. + +type: keyword + +example: boston-dc + + +**`host.geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`host.geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`host.geo.region_name`** +: Region name. + +type: keyword + +example: Quebec + + +**`host.geo.timezone`** +: The time zone of the location, such as IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + +**`host.hostname`** +: Hostname of the host. It normally contains what the `hostname` command returns on the host machine. + +type: keyword + + +**`host.id`** +: Unique host id. As hostname is not always unique, use values that are meaningful in your environment. Example: The current usage of `beat.name`. + +type: keyword + + +**`host.ip`** +: Host ip addresses. + +type: ip + + +**`host.mac`** +: Host MAC addresses. The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. + +type: keyword + +example: ["00-00-5E-00-53-23", "00-00-5E-00-53-24"] + + +**`host.name`** +: Name of the host. It can contain what `hostname` returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use. + +type: keyword + + +**`host.network.egress.bytes`** +: The number of bytes (gauge) sent out on all network interfaces by the host since the last metric collection. + +type: long + + +**`host.network.egress.packets`** +: The number of packets (gauge) sent out on all network interfaces by the host since the last metric collection. + +type: long + + +**`host.network.ingress.bytes`** +: The number of bytes received (gauge) on all network interfaces by the host since the last metric collection. + +type: long + + +**`host.network.ingress.packets`** +: The number of packets (gauge) received on all network interfaces by the host since the last metric collection. + +type: long + + +**`host.os.family`** +: OS family (such as redhat, debian, freebsd, windows). + +type: keyword + +example: debian + + +**`host.os.full`** +: Operating system name, including the version or code name. 
+ +type: keyword + +example: Mac OS Mojave + + +**`host.os.full.text`** +: type: match_only_text + + +**`host.os.kernel`** +: Operating system kernel version as a raw string. + +type: keyword + +example: 4.4.0-112-generic + + +**`host.os.name`** +: Operating system name, without the version. + +type: keyword + +example: Mac OS X + + +**`host.os.name.text`** +: type: match_only_text + + +**`host.os.platform`** +: Operating system platform (such as centos, ubuntu, windows). + +type: keyword + +example: darwin + + +**`host.os.type`** +: Use the `os.type` field to categorize the operating system into one of the broad commercial families. One of the following values should be used (lowercase): linux, macos, unix, windows. If the OS you’re dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. + +type: keyword + +example: macos + + +**`host.os.version`** +: Operating system version as a raw string. + +type: keyword + +example: 10.14.1 + + +**`host.type`** +: Type of host. For Cloud providers this can be the machine type like `t2.medium`. If vm, this could be the container, for example, or other information meaningful in your environment. + +type: keyword + + +**`host.uptime`** +: Seconds the host has been up. + +type: long + +example: 1325 + + + +## http [_http] + +Fields related to HTTP activity. Use the `url` field set to store the URL of the request. + +**`http.request.body.bytes`** +: Size in bytes of the request body. + +type: long + +example: 887 + +format: bytes + + +**`http.request.body.content`** +: The full HTTP request body. + +type: wildcard + +example: Hello world + + +**`http.request.body.content.text`** +: type: match_only_text + + +**`http.request.bytes`** +: Total size in bytes of the request (body and headers). + +type: long + +example: 1437 + +format: bytes + + +**`http.request.id`** +: A unique identifier for each HTTP request to correlate logs between clients and servers in transactions. The id may be contained in a non-standard HTTP header, such as `X-Request-ID` or `X-Correlation-ID`. + +type: keyword + +example: 123e4567-e89b-12d3-a456-426614174000 + + +**`http.request.method`** +: HTTP request method. The value should retain its casing from the original event. For example, `GET`, `get`, and `GeT` are all considered valid values for this field. + +type: keyword + +example: POST + + +**`http.request.mime_type`** +: Mime type of the body of the request. This value must only be populated based on the content of the request body, not on the `Content-Type` header. Comparing the mime type of a request with the request’s Content-Type header can be helpful in detecting threats or misconfigured clients. + +type: keyword + +example: image/gif
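The comparison suggested above requires sniffing the mime type from the body itself. A hedged sketch using the third-party `python-magic` package (an assumption; any content sniffer would do):

```python
import magic  # third-party python-magic package, a libmagic binding

def mime_mismatch(body: bytes, content_type: str) -> bool:
    """True when the sniffed body type differs from the declared header type."""
    sniffed = magic.from_buffer(body, mime=True)           # based on content only
    declared = content_type.split(";")[0].strip().lower()  # drop charset etc.
    return sniffed != declared

gif_body = b"GIF89a" + b"\x00" * 32
print(mime_mismatch(gif_body, "text/plain"))  # True: the body is image/gif
```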
This value must only be populated based on the content of the response body, not on the `Content-Type` header. Comparing the mime type of a response with the response’s Content-Type header can be helpful in detecting misconfigured servers. + +type: keyword + +example: image/gif + + +**`http.response.status_code`** +: HTTP response status code. + +type: long + +example: 404 + +format: string + + +**`http.version`** +: HTTP version. + +type: keyword + +example: 1.1 + + + +## interface [_interface] + +The interface fields are used to record ingress and egress interface information when reported by an observer (e.g. firewall, router, load balancer) in the context of the observer handling a network connection. In the case of a single observer interface (e.g. network sensor on a span port) only the observer.ingress information should be populated. + +**`interface.alias`** +: Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming. + +type: keyword + +example: outside + + +**`interface.id`** +: Interface ID as reported by an observer (typically SNMP interface ID). + +type: keyword + +example: 10 + + +**`interface.name`** +: Interface name as reported by the system. + +type: keyword + +example: eth0 + + + +## log [_log_4] + +Details about the event’s logging mechanism or logging transport. The log.* fields are typically populated with details about the logging mechanism used to create and/or transport the event. For example, syslog details belong under `log.syslog.*`. The details specific to your event source are typically not logged under `log.*`, but rather in `event.*` or in other ECS fields. + +**`log.file.path`** +: Full path to the log file this event came from, including the file name. It should include the drive letter, when appropriate. If the event wasn’t read from a log file, do not populate this field. + +type: keyword + +example: /var/log/fun-times.log + + +**`log.level`** +: Original log level of the log event. If the source of the event provides a log level or textual severity, this is the one that goes in `log.level`. If your source doesn’t specify one, you may put your event transport’s severity here (e.g. Syslog severity). Some examples are `warn`, `err`, `i`, `informational`. + +type: keyword + +example: error + + +**`log.logger`** +: The name of the logger inside an application. This is usually the name of the class which initialized the logger, or can be a custom name. + +type: keyword + +example: org.elasticsearch.bootstrap.Bootstrap + + +**`log.origin.file.line`** +: The line number of the file containing the source code which originated the log event. + +type: long + +example: 42 + + +**`log.origin.file.name`** +: The name of the file containing the source code which originated the log event. Note that this field is not meant to capture the log file. The correct field to capture the log file is `log.file.path`. + +type: keyword + +example: Bootstrap.java + + +**`log.origin.function`** +: The name of the function or method which originated the log event. + +type: keyword + +example: init + + +**`log.syslog`** +: The Syslog metadata of the event, if the event was transmitted via Syslog. Please see RFCs 5424 or 3164. + +type: object + + +**`log.syslog.facility.code`** +: The Syslog numeric facility of the log event, if available. According to RFCs 5424 and 3164, this value should be an integer between 0 and 23. 
+ +type: long + +example: 23 + +format: string + + +**`log.syslog.facility.name`** +: The Syslog text-based facility of the log event, if available. + +type: keyword + +example: local7 + + +**`log.syslog.priority`** +: Syslog numeric priority of the event, if available. According to RFCs 5424 and 3164, the priority is 8 * facility + severity. This number is therefore expected to contain a value between 0 and 191. + +type: long + +example: 135 + +format: string + + +**`log.syslog.severity.code`** +: The Syslog numeric severity of the log event, if available. If the event source publishing via Syslog provides a different numeric severity value (e.g. firewall, IDS), your source’s numeric severity should go to `event.severity`. If the event source does not specify a distinct severity, you can optionally copy the Syslog severity to `event.severity`. + +type: long + +example: 3 + + +**`log.syslog.severity.name`** +: The Syslog text-based severity of the log event, if available. If the event source publishing via Syslog provides a different severity value (e.g. firewall, IDS), your source’s text severity should go to `log.level`. If the event source does not specify a distinct severity, you can optionally copy the Syslog severity to `log.level`. + +type: keyword + +example: Error
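Since `log.syslog.priority` is defined above as 8 * facility + severity, both parts can be recovered from a raw PRI value with integer arithmetic, as in this short sketch:

```python
def split_syslog_priority(priority: int) -> tuple[int, int]:
    """Decode a syslog priority (PRI) into (facility, severity), using
    priority = 8 * facility + severity per RFCs 5424 and 3164."""
    if not 0 <= priority <= 191:
        raise ValueError("syslog priority must be between 0 and 191")
    return divmod(priority, 8)

facility, severity = split_syslog_priority(135)
print(facility, severity)  # 16 7, i.e. local0.debug
```

## network [_network] + +The network is defined as the communication path over which a host or network event happens. The network.* fields should be populated with details about the network activity associated with an event. + +**`network.application`** +: When a specific application or service is identified from network connection details (source/dest IPs, ports, certificates, or wire format), this field captures the application’s or service’s name. For example, the original event identifies the network connection being from a specific web service in a `https` network connection, like `facebook` or `twitter`. The field value must be normalized to lowercase for querying. + +type: keyword + +example: aim + + +**`network.bytes`** +: Total bytes transferred in both directions. If `source.bytes` and `destination.bytes` are known, `network.bytes` is their sum. + +type: long + +example: 368 + +format: bytes + + +**`network.community_id`** +: A hash of source and destination IPs and ports, as well as the protocol used in a communication. This is a tool-agnostic standard to identify flows. Learn more at [https://github.com/corelight/community-id-spec](https://github.com/corelight/community-id-spec). + +type: keyword + +example: 1:hO+sN4H+MG5MY/8hIrXPqc4ZQz0= + + +**`network.direction`** +: Direction of the network traffic. Recommended values are: * ingress * egress * inbound * outbound * internal * external * unknown + +When mapping events from a host-based monitoring context, populate this field from the host’s point of view, using the values "ingress" or "egress". When mapping events from a network or perimeter-based monitoring context, populate this field from the point of view of the network perimeter, using the values "inbound", "outbound", "internal" or "external". Note that "internal" is not crossing perimeter boundaries, and is meant to describe communication between two hosts within the perimeter. Note also that "external" is meant to describe traffic between two hosts that are external to the perimeter. This could for example be useful for ISPs or VPN service providers. + +type: keyword + +example: inbound + + +**`network.forwarded_ip`** +: Host IP address when the source IP address is the proxy.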
+ +type: ip + +example: 192.1.1.2 + + +**`network.iana_number`** +: IANA Protocol Number ([https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml](https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml)). Standardized list of protocols. This aligns well with NetFlow and sFlow related logs which use the IANA Protocol Number. + +type: keyword + +example: 6 + + +**`network.inner`** +: Network.inner fields are added in addition to network.vlan fields to describe the innermost VLAN when q-in-q VLAN tagging is present. Allowed fields include vlan.id and vlan.name. Inner vlan fields are typically used when sending traffic with multiple 802.1q encapsulations to a network sensor (e.g. Zeek, Wireshark). + +type: object + + +**`network.inner.vlan.id`** +: VLAN ID as reported by the observer. + +type: keyword + +example: 10 + + +**`network.inner.vlan.name`** +: Optional VLAN name as reported by the observer. + +type: keyword + +example: outside + + +**`network.name`** +: Name given by operators to sections of their network. + +type: keyword + +example: Guest Wifi + + +**`network.packets`** +: Total packets transferred in both directions. If `source.packets` and `destination.packets` are known, `network.packets` is their sum. + +type: long + +example: 24 + + +**`network.protocol`** +: In the OSI Model this would be the Application Layer protocol. For example, `http`, `dns`, or `ssh`. The field value must be normalized to lowercase for querying. + +type: keyword + +example: http + + +**`network.transport`** +: Same as network.iana_number, but instead using the Keyword name of the transport layer (udp, tcp, ipv6-icmp, etc.). The field value must be normalized to lowercase for querying. + +type: keyword + +example: tcp + + +**`network.type`** +: In the OSI Model this would be the Network Layer: ipv4, ipv6, ipsec, pim, etc. The field value must be normalized to lowercase for querying. + +type: keyword + +example: ipv4 + + +**`network.vlan.id`** +: VLAN ID as reported by the observer. + +type: keyword + +example: 10 + + +**`network.vlan.name`** +: Optional VLAN name as reported by the observer. + +type: keyword + +example: outside
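For `network.community_id` above, the hash is rarely computed by hand. This sketch assumes the open-source `communityid` Python package from Corelight is installed; any implementation compliant with the linked spec produces the same value:

```python
# Sketch assuming the third-party `communityid` package
# (https://github.com/corelight/pycommunityid).
import communityid

cid = communityid.CommunityID()
tpl = communityid.FlowTuple.make_tcp("128.232.110.120", "66.35.250.204", 34855, 80)
print(cid.calc(tpl))  # a value of the same form as the example above, e.g. 1:...
```

## observer [_observer] + +An observer is defined as a special network, security, or application device used to detect, observe, or create network, security, or application-related events and metrics. This could be a custom hardware appliance or a server that has been configured to run special network, security, or application software. Examples include firewalls, web proxies, intrusion detection/prevention systems, network monitoring sensors, web application firewalls, data loss prevention systems, and APM servers. The observer.* fields shall be populated with details of the system, if any, that detects, observes and/or creates a network, security, or application event or metric. Message queues and ETL components used in processing events or metrics are not considered observers in ECS. + +**`observer.egress`** +: Observer.egress holds information like interface number and name, vlan, and zone information to classify egress traffic. Single armed monitoring such as a network sensor on a span port should only use observer.ingress to categorize traffic. + +type: object + + +**`observer.egress.interface.alias`** +: Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming.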
+ +type: keyword + +example: outside + + +**`observer.egress.interface.id`** +: Interface ID as reported by an observer (typically SNMP interface ID). + +type: keyword + +example: 10 + + +**`observer.egress.interface.name`** +: Interface name as reported by the system. + +type: keyword + +example: eth0 + + +**`observer.egress.vlan.id`** +: VLAN ID as reported by the observer. + +type: keyword + +example: 10 + + +**`observer.egress.vlan.name`** +: Optional VLAN name as reported by the observer. + +type: keyword + +example: outside + + +**`observer.egress.zone`** +: Network zone of outbound traffic as reported by the observer to categorize the destination area of egress traffic, e.g. Internal, External, DMZ, HR, Legal, etc. + +type: keyword + +example: Public_Internet + + +**`observer.geo.city_name`** +: City name. + +type: keyword + +example: Montreal + + +**`observer.geo.continent_code`** +: Two-letter code representing continent’s name. + +type: keyword + +example: NA + + +**`observer.geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`observer.geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`observer.geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`observer.geo.location`** +: Longitude and latitude. + +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`observer.geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation. + +type: keyword + +example: boston-dc + + +**`observer.geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`observer.geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`observer.geo.region_name`** +: Region name. + +type: keyword + +example: Quebec + + +**`observer.geo.timezone`** +: The time zone of the location, such as IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + +**`observer.hostname`** +: Hostname of the observer. + +type: keyword + + +**`observer.ingress`** +: Observer.ingress holds information like interface number and name, vlan, and zone information to classify ingress traffic. Single armed monitoring such as a network sensor on a span port should only use observer.ingress to categorize traffic. + +type: object + + +**`observer.ingress.interface.alias`** +: Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming. + +type: keyword + +example: outside + + +**`observer.ingress.interface.id`** +: Interface ID as reported by an observer (typically SNMP interface ID). + +type: keyword + +example: 10 + + +**`observer.ingress.interface.name`** +: Interface name as reported by the system. + +type: keyword + +example: eth0 + + +**`observer.ingress.vlan.id`** +: VLAN ID as reported by the observer. + +type: keyword + +example: 10 + + +**`observer.ingress.vlan.name`** +: Optional VLAN name as reported by the observer. + +type: keyword + +example: outside + + +**`observer.ingress.zone`** +: Network zone of incoming traffic as reported by the observer to categorize the source area of ingress traffic. e.g. 
Internal, External, DMZ, HR, Legal, etc. + +type: keyword + +example: DMZ + + +**`observer.ip`** +: IP addresses of the observer. + +type: ip + + +**`observer.mac`** +: MAC addresses of the observer. The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. + +type: keyword + +example: ["00-00-5E-00-53-23", "00-00-5E-00-53-24"] + + +**`observer.name`** +: Custom name of the observer. This is a name that can be given to an observer. This can be helpful for example if multiple firewalls of the same model are used in an organization. If no custom name is needed, the field can be left empty. + +type: keyword + +example: 1_proxySG + + +**`observer.os.family`** +: OS family (such as redhat, debian, freebsd, windows). + +type: keyword + +example: debian + + +**`observer.os.full`** +: Operating system name, including the version or code name. + +type: keyword + +example: Mac OS Mojave + + +**`observer.os.full.text`** +: type: match_only_text + + +**`observer.os.kernel`** +: Operating system kernel version as a raw string. + +type: keyword + +example: 4.4.0-112-generic + + +**`observer.os.name`** +: Operating system name, without the version. + +type: keyword + +example: Mac OS X + + +**`observer.os.name.text`** +: type: match_only_text + + +**`observer.os.platform`** +: Operating system platform (such as centos, ubuntu, windows). + +type: keyword + +example: darwin + + +**`observer.os.type`** +: Use the `os.type` field to categorize the operating system into one of the broad commercial families. One of the following values should be used (lowercase): linux, macos, unix, windows. If the OS you’re dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. + +type: keyword + +example: macos + + +**`observer.os.version`** +: Operating system version as a raw string. + +type: keyword + +example: 10.14.1 + + +**`observer.product`** +: The product name of the observer. + +type: keyword + +example: s200 + + +**`observer.serial_number`** +: Observer serial number. + +type: keyword + + +**`observer.type`** +: The type of the observer the data is coming from. There is no predefined list of observer types. Some examples are `forwarder`, `firewall`, `ids`, `ips`, `proxy`, `poller`, `sensor`, `APM server`. + +type: keyword + +example: firewall + + +**`observer.vendor`** +: Vendor name of the observer. + +type: keyword + +example: Symantec + + +**`observer.version`** +: Observer version. + +type: keyword + + + +## orchestrator [_orchestrator] + +Fields that describe the resources which container orchestrators manage or act upon. + +**`orchestrator.api_version`** +: API version being used to carry out the action. + +type: keyword + +example: v1beta1 + + +**`orchestrator.cluster.name`** +: Name of the cluster. + +type: keyword + + +**`orchestrator.cluster.url`** +: URL of the API used to manage the cluster. + +type: keyword + + +**`orchestrator.cluster.version`** +: The version of the cluster. + +type: keyword + + +**`orchestrator.namespace`** +: Namespace in which the action is taking place. + +type: keyword + +example: kube-system + + +**`orchestrator.organization`** +: Organization affected by the event (for multi-tenant orchestrator setups). + +type: keyword + +example: elastic + + +**`orchestrator.resource.name`** +: Name of the resource being acted upon.
+ +type: keyword + +example: test-pod-cdcws + + +**`orchestrator.resource.type`** +: Type of resource being acted upon. + +type: keyword + +example: service + + +**`orchestrator.type`** +: Orchestrator cluster type (e.g. kubernetes, nomad or cloudfoundry). + +type: keyword + +example: kubernetes + + + +## organization [_organization] + +The organization fields enrich data with information about the company or entity the data is associated with. These fields help you arrange or filter data stored in an index by one or multiple organizations. + +**`organization.id`** +: Unique identifier for the organization. + +type: keyword + + +**`organization.name`** +: Organization name. + +type: keyword + + +**`organization.name.text`** +: type: match_only_text + + + +## os [_os] + +The OS fields contain information about the operating system. + +**`os.family`** +: OS family (such as redhat, debian, freebsd, windows). + +type: keyword + +example: debian + + +**`os.full`** +: Operating system name, including the version or code name. + +type: keyword + +example: Mac OS Mojave + + +**`os.full.text`** +: type: match_only_text + + +**`os.kernel`** +: Operating system kernel version as a raw string. + +type: keyword + +example: 4.4.0-112-generic + + +**`os.name`** +: Operating system name, without the version. + +type: keyword + +example: Mac OS X + + +**`os.name.text`** +: type: match_only_text + + +**`os.platform`** +: Operating system platform (such as centos, ubuntu, windows). + +type: keyword + +example: darwin + + +**`os.type`** +: Use the `os.type` field to categorize the operating system into one of the broad commercial families. One of the following values should be used (lowercase): linux, macos, unix, windows. If the OS you’re dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. + +type: keyword + +example: macos + + +**`os.version`** +: Operating system version as a raw string. + +type: keyword + +example: 10.14.1
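Because `os.type` only allows the four lowercase values listed above, deriving it from the finer-grained `os.platform` is a common pattern. A sketch under that assumption; the mapping table is illustrative, not exhaustive:

```python
# Illustrative platform -> os.type mapping; platforms not covered
# should leave os.type unpopulated, per the guidance above.
PLATFORM_TO_TYPE = {
    "centos": "linux", "ubuntu": "linux", "debian": "linux", "redhat": "linux",
    "darwin": "macos",
    "freebsd": "unix", "openbsd": "unix",
    "windows": "windows",
}

def os_type(platform: str) -> str | None:
    return PLATFORM_TO_TYPE.get(platform.lower())

print(os_type("darwin"))  # macos
print(os_type("plan9"))   # None -> do not populate os.type
```

## package [_package] + +These fields contain information about an installed software package. It contains general information about a package, such as name, version or size. It also contains installation details, such as time or location. + +**`package.architecture`** +: Package architecture. + +type: keyword + +example: x86_64 + + +**`package.build_version`** +: Additional information about the build version of the installed package. For example use the commit SHA of a non-released package. + +type: keyword + +example: 36f4f7e89dd61b0988b12ee000b98966867710cd + + +**`package.checksum`** +: Checksum of the installed package for verification. + +type: keyword + +example: 68b329da9893e34099c7d8ad5cb9c940 + + +**`package.description`** +: Description of the package. + +type: keyword + +example: Open source programming language to build simple/reliable/efficient software. + + +**`package.install_scope`** +: Indicates how the package was installed, e.g. user-local, global. + +type: keyword + +example: global + + +**`package.installed`** +: Time when package was installed. + +type: date + + +**`package.license`** +: License under which the package was released. Use a short name, e.g. the license identifier from SPDX License List where possible ([https://spdx.org/licenses/](https://spdx.org/licenses/)). + +type: keyword + +example: Apache License 2.0 + + +**`package.name`** +: Package name. + +type: keyword + +example: go + + +**`package.path`** +: Path where the package is installed.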
+ +type: keyword + +example: /usr/local/Cellar/go/1.12.9/ + + +**`package.reference`** +: Home page or reference URL of the software in this package, if available. + +type: keyword + +example: [https://golang.org](https://golang.org) + + +**`package.size`** +: Package size in bytes. + +type: long + +example: 62231 + +format: string + + +**`package.type`** +: Type of package. This should contain the package file type, rather than the package manager name. Examples: rpm, dpkg, brew, npm, gem, nupkg, jar. + +type: keyword + +example: rpm + + +**`package.version`** +: Package version. + +type: keyword + +example: 1.12.9 + + + +## pe [_pe] + +These fields contain Windows Portable Executable (PE) metadata. + +**`pe.architecture`** +: CPU architecture target for the file. + +type: keyword + +example: x64 + + +**`pe.company`** +: Internal company name of the file, provided at compile-time. + +type: keyword + +example: Microsoft Corporation + + +**`pe.description`** +: Internal description of the file, provided at compile-time. + +type: keyword + +example: Paint + + +**`pe.file_version`** +: Internal version of the file, provided at compile-time. + +type: keyword + +example: 6.3.9600.17415 + + +**`pe.imphash`** +: A hash of the imports in a PE file. An imphash — or import hash — can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. Learn more at [https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html](https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html). + +type: keyword + +example: 0c6803c4e922103c4dca5963aad36ddf + + +**`pe.original_file_name`** +: Internal name of the file, provided at compile-time. + +type: keyword + +example: MSPAINT.EXE + + +**`pe.product`** +: Internal product name of the file, provided at compile-time. + +type: keyword + +example: Microsoft® Windows® Operating System
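The imphash described under `pe.imphash` above can be computed offline. This sketch assumes the third-party `pefile` Python package; any imphash-capable tool produces an equivalent digest:

```python
# Sketch assuming the third-party `pefile` package is installed.
import pefile

pe = pefile.PE("MSPAINT.EXE")   # path to the PE binary being examined
print(pe.get_imphash())        # hex digest suitable for `pe.imphash`
```

## process [_process_2] + +These fields contain information about a process. These fields can help you correlate metrics information with a process id/name from a log message. The `process.pid` often stays in the metric itself and is copied to the global field for correlation. + +**`process.args`** +: Array of process arguments, starting with the absolute path to the executable. May be filtered to protect sensitive information. + +type: keyword + +example: ["/usr/bin/ssh", "-l", "user", "10.0.0.16"] + + +**`process.args_count`** +: Length of the process.args array. This field can be useful for querying or performing bucket analysis on how many arguments were provided to start a process. More arguments may be an indication of suspicious activity. + +type: long + +example: 4 + + +**`process.code_signature.digest_algorithm`** +: The hashing algorithm used to sign the process. This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. + +type: keyword + +example: sha256 + + +**`process.code_signature.exists`** +: Boolean to capture if a signature is present. + +type: boolean + +example: true + + +**`process.code_signature.signing_id`** +: The identifier used to sign the process. This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. + +type: keyword + +example: com.apple.xpc.proxy + + +**`process.code_signature.status`** +: Additional information about the certificate status.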
This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. + +type: keyword + +example: ERROR_UNTRUSTED_ROOT + + +**`process.code_signature.subject_name`** +: Subject name of the code signer + +type: keyword + +example: Microsoft Corporation + + +**`process.code_signature.team_id`** +: The team identifier used to sign the process. This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. + +type: keyword + +example: EQHXZ8M8AV + + +**`process.code_signature.timestamp`** +: Date and time when the code signature was generated and signed. + +type: date + +example: 2021-01-01T12:10:30Z + + +**`process.code_signature.trusted`** +: Stores the trust status of the certificate chain. Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. + +type: boolean + +example: true + + +**`process.code_signature.valid`** +: Boolean to capture if the digital signature is verified against the binary content. Leave unpopulated if a certificate was unchecked. + +type: boolean + +example: true + + +**`process.command_line`** +: Full command line that started the process, including the absolute path to the executable, and all arguments. Some arguments may be filtered to protect sensitive information. + +type: wildcard + +example: /usr/bin/ssh -l user 10.0.0.16 + + +**`process.command_line.text`** +: type: match_only_text + + +**`process.elf.architecture`** +: Machine architecture of the ELF file. + +type: keyword + +example: x86-64 + + +**`process.elf.byte_order`** +: Byte sequence of ELF file. + +type: keyword + +example: Little Endian + + +**`process.elf.cpu_type`** +: CPU type of the ELF file. + +type: keyword + +example: Intel + + +**`process.elf.creation_date`** +: Extracted when possible from the file’s metadata. Indicates when it was built or compiled. It can also be faked by malware creators. + +type: date + + +**`process.elf.exports`** +: List of exported element names and types. + +type: flattened + + +**`process.elf.header.abi_version`** +: Version of the ELF Application Binary Interface (ABI). + +type: keyword + + +**`process.elf.header.class`** +: Header class of the ELF file. + +type: keyword + + +**`process.elf.header.data`** +: Data table of the ELF header. + +type: keyword + + +**`process.elf.header.entrypoint`** +: Header entrypoint of the ELF file. + +type: long + +format: string + + +**`process.elf.header.object_version`** +: "0x1" for original ELF files. + +type: keyword + + +**`process.elf.header.os_abi`** +: Application Binary Interface (ABI) of the Linux OS. + +type: keyword + + +**`process.elf.header.type`** +: Header type of the ELF file. + +type: keyword + + +**`process.elf.header.version`** +: Version of the ELF header. + +type: keyword + + +**`process.elf.imports`** +: List of imported element names and types. + +type: flattened + + +**`process.elf.sections`** +: An array containing an object for each section of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. + +type: nested + + +**`process.elf.sections.chi2`** +: Chi-square probability distribution of the section. + +type: long + +format: number + + +**`process.elf.sections.entropy`** +: Shannon entropy calculation from the section. 
+ +type: long + +format: number + + +**`process.elf.sections.flags`** +: ELF Section List flags. + +type: keyword + + +**`process.elf.sections.name`** +: ELF Section List name. + +type: keyword + + +**`process.elf.sections.physical_offset`** +: ELF Section List offset. + +type: keyword + + +**`process.elf.sections.physical_size`** +: ELF Section List physical size. + +type: long + +format: bytes + + +**`process.elf.sections.type`** +: ELF Section List type. + +type: keyword + + +**`process.elf.sections.virtual_address`** +: ELF Section List virtual address. + +type: long + +format: string + + +**`process.elf.sections.virtual_size`** +: ELF Section List virtual size. + +type: long + +format: string + + +**`process.elf.segments`** +: An array containing an object for each segment of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. + +type: nested + + +**`process.elf.segments.sections`** +: ELF object segment sections. + +type: keyword + + +**`process.elf.segments.type`** +: ELF object segment type. + +type: keyword + + +**`process.elf.shared_libraries`** +: List of shared libraries used by this ELF object. + +type: keyword + + +**`process.elf.telfhash`** +: telfhash symbol hash for ELF file. + +type: keyword + + +**`process.end`** +: The time the process ended. + +type: date + +example: 2016-05-23T08:05:34.853Z + + +**`process.entity_id`** +: Unique identifier for the process. The implementation of this is specified by the data source, but some examples of what could be used here are a process-generated UUID, Sysmon Process GUIDs, or a hash of some uniquely identifying components of a process. Constructing a globally unique identifier is a common practice to mitigate PID reuse as well as to identify a specific process over time, across multiple monitored hosts. + +type: keyword + +example: c2c455d9f99375d + + +**`process.executable`** +: Absolute path to the process executable. + +type: keyword + +example: /usr/bin/ssh + + +**`process.executable.text`** +: type: match_only_text + + +**`process.exit_code`** +: The exit code of the process, if this is a termination event. The field should be absent if there is no exit code for the event (e.g. process start). + +type: long + +example: 137 + + +**`process.hash.md5`** +: MD5 hash. + +type: keyword + + +**`process.hash.sha1`** +: SHA1 hash. + +type: keyword + + +**`process.hash.sha256`** +: SHA256 hash. + +type: keyword + + +**`process.hash.sha512`** +: SHA512 hash. + +type: keyword + + +**`process.hash.ssdeep`** +: SSDEEP hash. + +type: keyword + + +**`process.name`** +: Process name. Sometimes called program name or similar. + +type: keyword + +example: ssh + + +**`process.name.text`** +: type: match_only_text
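`process.entity_id` above deliberately leaves the construction scheme to the data source. One possible sketch, hashing components that stay stable for the lifetime of a process so that PID reuse on the same host maps to a different identifier (the helper name and inputs are illustrative, not any Beat's actual scheme):

```python
import hashlib

def make_entity_id(host_id: str, pid: int, start_time: str) -> str:
    """One way to build a `process.entity_id`: hash host id, PID, and
    process start time, which together survive PID reuse."""
    raw = f"{host_id}|{pid}|{start_time}".encode()
    return hashlib.sha256(raw).hexdigest()[:16]

print(make_entity_id("9c7f", 4242, "2016-05-23T08:05:34.853Z"))
```

**`process.parent.args`** +: Array of process arguments, starting with the absolute path to the executable. May be filtered to protect sensitive information. + +type: keyword + +example: ["/usr/bin/ssh", "-l", "user", "10.0.0.16"] + + +**`process.parent.args_count`** +: Length of the process.args array. This field can be useful for querying or performing bucket analysis on how many arguments were provided to start a process. More arguments may be an indication of suspicious activity. + +type: long + +example: 4 + + +**`process.parent.code_signature.digest_algorithm`** +: The hashing algorithm used to sign the process. This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm.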
+ +type: keyword + +example: sha256 + + +**`process.parent.code_signature.exists`** +: Boolean to capture if a signature is present. + +type: boolean + +example: true + + +**`process.parent.code_signature.signing_id`** +: The identifier used to sign the process. This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. + +type: keyword + +example: com.apple.xpc.proxy + + +**`process.parent.code_signature.status`** +: Additional information about the certificate status. This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. + +type: keyword + +example: ERROR_UNTRUSTED_ROOT + + +**`process.parent.code_signature.subject_name`** +: Subject name of the code signer + +type: keyword + +example: Microsoft Corporation + + +**`process.parent.code_signature.team_id`** +: The team identifier used to sign the process. This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. + +type: keyword + +example: EQHXZ8M8AV + + +**`process.parent.code_signature.timestamp`** +: Date and time when the code signature was generated and signed. + +type: date + +example: 2021-01-01T12:10:30Z + + +**`process.parent.code_signature.trusted`** +: Stores the trust status of the certificate chain. Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. + +type: boolean + +example: true + + +**`process.parent.code_signature.valid`** +: Boolean to capture if the digital signature is verified against the binary content. Leave unpopulated if a certificate was unchecked. + +type: boolean + +example: true + + +**`process.parent.command_line`** +: Full command line that started the process, including the absolute path to the executable, and all arguments. Some arguments may be filtered to protect sensitive information. + +type: wildcard + +example: /usr/bin/ssh -l user 10.0.0.16 + + +**`process.parent.command_line.text`** +: type: match_only_text + + +**`process.parent.elf.architecture`** +: Machine architecture of the ELF file. + +type: keyword + +example: x86-64 + + +**`process.parent.elf.byte_order`** +: Byte sequence of ELF file. + +type: keyword + +example: Little Endian + + +**`process.parent.elf.cpu_type`** +: CPU type of the ELF file. + +type: keyword + +example: Intel + + +**`process.parent.elf.creation_date`** +: Extracted when possible from the file’s metadata. Indicates when it was built or compiled. It can also be faked by malware creators. + +type: date + + +**`process.parent.elf.exports`** +: List of exported element names and types. + +type: flattened + + +**`process.parent.elf.header.abi_version`** +: Version of the ELF Application Binary Interface (ABI). + +type: keyword + + +**`process.parent.elf.header.class`** +: Header class of the ELF file. + +type: keyword + + +**`process.parent.elf.header.data`** +: Data table of the ELF header. + +type: keyword + + +**`process.parent.elf.header.entrypoint`** +: Header entrypoint of the ELF file. + +type: long + +format: string + + +**`process.parent.elf.header.object_version`** +: "0x1" for original ELF files. + +type: keyword + + +**`process.parent.elf.header.os_abi`** +: Application Binary Interface (ABI) of the Linux OS. + +type: keyword + + +**`process.parent.elf.header.type`** +: Header type of the ELF file. 
+ +type: keyword + + +**`process.parent.elf.header.version`** +: Version of the ELF header. + +type: keyword + + +**`process.parent.elf.imports`** +: List of imported element names and types. + +type: flattened + + +**`process.parent.elf.sections`** +: An array containing an object for each section of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. + +type: nested + + +**`process.parent.elf.sections.chi2`** +: Chi-square probability distribution of the section. + +type: long + +format: number + + +**`process.parent.elf.sections.entropy`** +: Shannon entropy calculation from the section. + +type: long + +format: number + + +**`process.parent.elf.sections.flags`** +: ELF Section List flags. + +type: keyword + + +**`process.parent.elf.sections.name`** +: ELF Section List name. + +type: keyword + + +**`process.parent.elf.sections.physical_offset`** +: ELF Section List offset. + +type: keyword + + +**`process.parent.elf.sections.physical_size`** +: ELF Section List physical size. + +type: long + +format: bytes + + +**`process.parent.elf.sections.type`** +: ELF Section List type. + +type: keyword + + +**`process.parent.elf.sections.virtual_address`** +: ELF Section List virtual address. + +type: long + +format: string + + +**`process.parent.elf.sections.virtual_size`** +: ELF Section List virtual size. + +type: long + +format: string + + +**`process.parent.elf.segments`** +: An array containing an object for each segment of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. + +type: nested + + +**`process.parent.elf.segments.sections`** +: ELF object segment sections. + +type: keyword + + +**`process.parent.elf.segments.type`** +: ELF object segment type. + +type: keyword + + +**`process.parent.elf.shared_libraries`** +: List of shared libraries used by this ELF object. + +type: keyword + + +**`process.parent.elf.telfhash`** +: telfhash symbol hash for ELF file. + +type: keyword + + +**`process.parent.end`** +: The time the process ended. + +type: date + +example: 2016-05-23T08:05:34.853Z + + +**`process.parent.entity_id`** +: Unique identifier for the process. The implementation of this is specified by the data source, but some examples of what could be used here are a process-generated UUID, Sysmon Process GUIDs, or a hash of some uniquely identifying components of a process. Constructing a globally unique identifier is a common practice to mitigate PID reuse as well as to identify a specific process over time, across multiple monitored hosts. + +type: keyword + +example: c2c455d9f99375d + + +**`process.parent.executable`** +: Absolute path to the process executable. + +type: keyword + +example: /usr/bin/ssh + + +**`process.parent.executable.text`** +: type: match_only_text + + +**`process.parent.exit_code`** +: The exit code of the process, if this is a termination event. The field should be absent if there is no exit code for the event (e.g. process start). + +type: long + +example: 137 + + +**`process.parent.hash.md5`** +: MD5 hash. + +type: keyword + + +**`process.parent.hash.sha1`** +: SHA1 hash. + +type: keyword + + +**`process.parent.hash.sha256`** +: SHA256 hash. + +type: keyword + + +**`process.parent.hash.sha512`** +: SHA512 hash. + +type: keyword + + +**`process.parent.hash.ssdeep`** +: SSDEEP hash. + +type: keyword + + +**`process.parent.name`** +: Process name. Sometimes called program name or similar. 
+ +type: keyword + +example: ssh + + +**`process.parent.name.text`** +: type: match_only_text + + +**`process.parent.pe.architecture`** +: CPU architecture target for the file. + +type: keyword + +example: x64 + + +**`process.parent.pe.company`** +: Internal company name of the file, provided at compile-time. + +type: keyword + +example: Microsoft Corporation + + +**`process.parent.pe.description`** +: Internal description of the file, provided at compile-time. + +type: keyword + +example: Paint + + +**`process.parent.pe.file_version`** +: Internal version of the file, provided at compile-time. + +type: keyword + +example: 6.3.9600.17415 + + +**`process.parent.pe.imphash`** +: A hash of the imports in a PE file. An imphash — or import hash — can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. Learn more at [https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html](https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html). + +type: keyword + +example: 0c6803c4e922103c4dca5963aad36ddf + + +**`process.parent.pe.original_file_name`** +: Internal name of the file, provided at compile-time. + +type: keyword + +example: MSPAINT.EXE + + +**`process.parent.pe.product`** +: Internal product name of the file, provided at compile-time. + +type: keyword + +example: Microsoft® Windows® Operating System + + +**`process.parent.pgid`** +: Identifier of the group of processes the process belongs to. + +type: long + +format: string + + +**`process.parent.pid`** +: Process id. + +type: long + +example: 4242 + +format: string + + +**`process.parent.start`** +: The time the process started. + +type: date + +example: 2016-05-23T08:05:34.853Z + + +**`process.parent.thread.id`** +: Thread ID. + +type: long + +example: 4242 + +format: string + + +**`process.parent.thread.name`** +: Thread name. + +type: keyword + +example: thread-0 + + +**`process.parent.title`** +: Process title. The proctitle, sometimes the same as process name. Can also be different: for example a browser setting its title to the web page currently opened. + +type: keyword + + +**`process.parent.title.text`** +: type: match_only_text + + +**`process.parent.uptime`** +: Seconds the process has been up. + +type: long + +example: 1325 + + +**`process.parent.working_directory`** +: The working directory of the process. + +type: keyword + +example: /home/alice + + +**`process.parent.working_directory.text`** +: type: match_only_text + + +**`process.pe.architecture`** +: CPU architecture target for the file. + +type: keyword + +example: x64 + + +**`process.pe.company`** +: Internal company name of the file, provided at compile-time. + +type: keyword + +example: Microsoft Corporation + + +**`process.pe.description`** +: Internal description of the file, provided at compile-time. + +type: keyword + +example: Paint + + +**`process.pe.file_version`** +: Internal version of the file, provided at compile-time. + +type: keyword + +example: 6.3.9600.17415 + + +**`process.pe.imphash`** +: A hash of the imports in a PE file. An imphash — or import hash — can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
Learn more at [https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html](https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html). + +type: keyword + +example: 0c6803c4e922103c4dca5963aad36ddf + + +**`process.pe.original_file_name`** +: Internal name of the file, provided at compile-time. + +type: keyword + +example: MSPAINT.EXE + + +**`process.pe.product`** +: Internal product name of the file, provided at compile-time. + +type: keyword + +example: Microsoft® Windows® Operating System + + +**`process.pgid`** +: Identifier of the group of processes the process belongs to. + +type: long + +format: string + + +**`process.pid`** +: Process id. + +type: long + +example: 4242 + +format: string + + +**`process.start`** +: The time the process started. + +type: date + +example: 2016-05-23T08:05:34.853Z + + +**`process.thread.id`** +: Thread ID. + +type: long + +example: 4242 + +format: string + + +**`process.thread.name`** +: Thread name. + +type: keyword + +example: thread-0 + + +**`process.title`** +: Process title. The proctitle, sometimes the same as process name. Can also be different: for example a browser setting its title to the web page currently opened. + +type: keyword + + +**`process.title.text`** +: type: match_only_text + + +**`process.uptime`** +: Seconds the process has been up. + +type: long + +example: 1325 + + +**`process.working_directory`** +: The working directory of the process. + +type: keyword + +example: /home/alice + + +**`process.working_directory.text`** +: type: match_only_text + + + +## registry [_registry] + +Fields related to Windows Registry operations. + +**`registry.data.bytes`** +: Original bytes written with base64 encoding. For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values. + +type: keyword + +example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA= + + +**`registry.data.strings`** +: Content when writing string types. Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of string with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g. `"1"`). + +type: wildcard + +example: ["C:\rta\red_ttp\bin\myapp.exe"] + + +**`registry.data.type`** +: Standard registry type for encoding contents. + +type: keyword + +example: REG_SZ + + +**`registry.hive`** +: Abbreviated name for the hive. + +type: keyword + +example: HKLM + + +**`registry.key`** +: Hive-relative path of keys. + +type: keyword + +example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe + + +**`registry.path`** +: Full path, including hive, key and value. + +type: keyword + +example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger + + +**`registry.value`** +: Name of the value written. + +type: keyword + +example: Debugger
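The registry fields above are related: `registry.path` is the concatenation of hive, key, and value name. A sketch that decomposes a full path into the other fields, assuming the final path component is the value name, as in the examples above:

```python
def split_registry_path(path: str) -> dict:
    """Decompose a full `registry.path` into hive, key, and value name."""
    hive, _, rest = path.partition("\\")   # leading component is the hive
    key, _, value = rest.rpartition("\\")  # trailing component is the value
    return {"registry": {"hive": hive, "key": key, "value": value, "path": path}}

doc = split_registry_path(
    "HKLM\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\"
    "Image File Execution Options\\winword.exe\\Debugger"
)
print(doc["registry"]["hive"], doc["registry"]["value"])  # HKLM Debugger
```

## related [_related] + +This field set is meant to facilitate pivoting around a piece of data. Some pieces of information can be seen in many places in an ECS event. To facilitate searching for them, store an array of all seen values to their corresponding field in `related.`.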
A concrete example is IP addresses, which can be under host, observer, source, destination, client, server, and network.forwarded_ip. If you append all IPs to `related.ip`, you can then search for a given IP trivially, no matter where it appeared, by querying `related.ip:192.0.2.15`. + +**`related.hash`** +: All the hashes seen on your event. Populating this field, then using it to search for hashes can help in situations where you’re unsure what the hash algorithm is (and therefore which key name to search). + +type: keyword + + +**`related.hosts`** +: All hostnames or other host identifiers seen on your event. Example identifiers include FQDNs, domain names, workstation names, or aliases. + +type: keyword + + +**`related.ip`** +: All of the IPs seen on your event. + +type: ip + + +**`related.user`** +: All the user names or other user identifiers seen on the event. + +type: keyword
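A minimal sketch of the pivoting pattern just described, appending every IP an event mentions to `related.ip` (the helper name is ours):

```python
def add_related_ip(event: dict, ip: str) -> None:
    """Append an IP to `related.ip`, de-duplicated, so any address can be
    found with a single query such as `related.ip:192.0.2.15`."""
    ips = event.setdefault("related", {}).setdefault("ip", [])
    if ip not in ips:
        ips.append(ip)

event = {"source": {"ip": "192.0.2.15"}, "destination": {"ip": "198.51.100.7"}}
for field in ("source", "destination"):
    add_related_ip(event, event[field]["ip"])
print(event["related"]["ip"])  # ['192.0.2.15', '198.51.100.7']
```

## rule [_rule] + +Rule fields are used to capture the specifics of any observer or agent rules that generate alerts or other notable events. Examples of data sources that would populate the rule fields include: network admission control platforms, network or host IDS/IPS, network firewalls, web application firewalls, url filters, endpoint detection and response (EDR) systems, etc. + +**`rule.author`** +: Name, organization, or pseudonym of the author or authors who created the rule used to generate this event. + +type: keyword + +example: ["Star-Lord"] + + +**`rule.category`** +: A categorization value keyword used by the entity using the rule for detection of this event. + +type: keyword + +example: Attempted Information Leak + + +**`rule.description`** +: The description of the rule generating the event. + +type: keyword + +example: Block requests to public DNS over HTTPS / TLS protocols + + +**`rule.id`** +: A rule ID that is unique within the scope of an agent, observer, or other entity using the rule for detection of this event. + +type: keyword + +example: 101 + + +**`rule.license`** +: Name of the license under which the rule used to generate this event is made available. + +type: keyword + +example: Apache 2.0 + + +**`rule.name`** +: The name of the rule or signature generating the event. + +type: keyword + +example: BLOCK_DNS_over_TLS + + +**`rule.reference`** +: Reference URL to additional information about the rule used to generate this event. The URL can point to the vendor’s documentation about the rule. If that’s not available, it can also be a link to a more general page describing this type of alert. + +type: keyword + +example: [https://en.wikipedia.org/wiki/DNS_over_TLS](https://en.wikipedia.org/wiki/DNS_over_TLS) + + +**`rule.ruleset`** +: Name of the ruleset, policy, group, or parent category in which the rule used to generate this event is a member. + +type: keyword + +example: Standard_Protocol_Filters + + +**`rule.uuid`** +: A rule ID that is unique within the scope of a set or group of agents, observers, or other entities using the rule for detection of this event. + +type: keyword + +example: 1100110011 + + +**`rule.version`** +: The version / revision of the rule being used for analysis. + +type: keyword + +example: 1.1 + + + +## server [_server] + +A Server is defined as the responder in a network connection for events regarding sessions, connections, or bidirectional flow records. For TCP events, the server is the receiver of the initial SYN packet(s) of the TCP connection. For other protocols, the server is generally the responder in the network transaction.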
Some systems actually use the term "responder" to refer to the server in TCP connections. The server fields describe details about the system acting as the server in the network event. Server fields are usually populated in conjunction with client fields. Server fields are generally not populated for packet-level events. Client / server representations can add semantic context to an exchange, which is helpful to visualize the data in certain situations. If your context falls in that category, you should still ensure that source and destination are filled appropriately. + +**`server.address`** +: Some event server addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. + +type: keyword + + +**`server.as.number`** +: Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. + +type: long + +example: 15169 + + +**`server.as.organization.name`** +: Organization name. + +type: keyword + +example: Google LLC + + +**`server.as.organization.name.text`** +: type: match_only_text + + +**`server.bytes`** +: Bytes sent from the server to the client. + +type: long + +example: 184 + +format: bytes + + +**`server.domain`** +: The domain name of the server system. This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment. + +type: keyword + +example: foo.example.com + + +**`server.geo.city_name`** +: City name. + +type: keyword + +example: Montreal + + +**`server.geo.continent_code`** +: Two-letter code representing continent’s name. + +type: keyword + +example: NA + + +**`server.geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`server.geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`server.geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`server.geo.location`** +: Longitude and latitude. + +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`server.geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation. + +type: keyword + +example: boston-dc + + +**`server.geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`server.geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`server.geo.region_name`** +: Region name. + +type: keyword + +example: Quebec + + +**`server.geo.timezone`** +: The time zone of the location, such as IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + +**`server.ip`** +: IP address of the server (IPv4 or IPv6). + +type: ip + + +**`server.mac`** +: MAC address of the server. The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen.
+ +type: keyword + +example: 00-00-5E-00-53-23 + + +**`server.nat.ip`** +: Translated IP of destination-based NAT sessions (e.g. internet to private DMZ). Typically used with load balancers, firewalls, or routers. + +type: ip + + +**`server.nat.port`** +: Translated port of destination-based NAT sessions (e.g. internet to private DMZ). Typically used with load balancers, firewalls, or routers. + +type: long + +format: string + + +**`server.packets`** +: Packets sent from the server to the client. + +type: long + +example: 12 + + +**`server.port`** +: Port of the server. + +type: long + +format: string + + +**`server.registered_domain`** +: The highest registered server domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". + +type: keyword + +example: example.com + + +**`server.subdomain`** +: The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. + +type: keyword + +example: east + + +**`server.top_level_domain`** +: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". + +type: keyword + +example: co.uk
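As the `server.registered_domain` and `server.top_level_domain` entries above note, naive label-splitting fails for suffixes like "co.uk"; libraries backed by the public suffix list handle this correctly. A sketch assuming the third-party `tldextract` package (which fetches and caches the public suffix list on first use):

```python
# Sketch assuming the third-party `tldextract` package is installed.
import tldextract

parts = tldextract.extract("www.east.mydomain.co.uk")
print(parts.suffix)                      # co.uk -> server.top_level_domain
print(f"{parts.domain}.{parts.suffix}")  # mydomain.co.uk -> server.registered_domain
```

Note that `tldextract` reports the entire prefix ("www.east") as the subdomain; per the `server.subdomain` definition above, the host name label may still need to be stripped separately.

**`server.user.domain`** +: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`server.user.email`** +: User email address. + +type: keyword + + +**`server.user.full_name`** +: User’s full name, if available. + +type: keyword + +example: Albert Einstein + + +**`server.user.full_name.text`** +: type: match_only_text + + +**`server.user.group.domain`** +: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`server.user.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`server.user.group.name`** +: Name of the group. + +type: keyword + + +**`server.user.hash`** +: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used. + +type: keyword + + +**`server.user.id`** +: Unique identifier of the user. + +type: keyword + +example: S-1-5-21-202424912787-2692429404-2351956786-1000 + + +**`server.user.name`** +: Short name or login of the user. + +type: keyword + +example: a.einstein + + +**`server.user.name.text`** +: type: match_only_text + + +**`server.user.roles`** +: Array of user roles at the time of the event.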
+ +type: keyword + +example: ["kibana_admin", "reporting_user"] + + + +## service [_service] + +The service fields describe the service for or from which the data was collected. These fields help you find and correlate logs for a specific service and version. + +**`service.address`** +: Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). + +type: keyword + +example: 172.26.0.2:5432 + + +**`service.environment`** +: Identifies the environment where the service is running. If the same service runs in different environments (production, staging, QA, development, etc.), the environment can identify other instances of the same service. Can also group services and applications from the same environment. + +type: keyword + +example: production + + +**`service.ephemeral_id`** +: Ephemeral identifier of this service (if one exists). This id normally changes across restarts, but `service.id` does not. + +type: keyword + +example: 8a4f500f + + +**`service.id`** +: Unique identifier of the running service. If the service is comprised of many nodes, the `service.id` should be the same for all nodes. This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event. Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead. + +type: keyword + +example: d37e5ebfe0ae6c4972dbe9f0174a1637bb8247f6 + + +**`service.name`** +: Name of the service data is collected from. The name of the service is normally user given. This allows for distributed services that run on multiple hosts to correlate the related instances based on the name. In the case of Elasticsearch the `service.name` could contain the cluster name. For Beats the `service.name` is by default a copy of the `service.type` field if no name is specified. + +type: keyword + +example: elasticsearch-metrics + + +**`service.node.name`** +: Name of a service node. This allows for two nodes of the same service running on the same host to be differentiated. Therefore, `service.node.name` should typically be unique across nodes of a given service. In the case of Elasticsearch, the `service.node.name` could contain the unique node name within the Elasticsearch cluster. In cases where the service doesn’t have the concept of a node name, the host name or container name can be used to distinguish running instances that make up this service. If those do not provide uniqueness (e.g. multiple instances of the service running on the same host) - the node name can be manually set. + +type: keyword + +example: instance-0000000016 + + +**`service.origin.address`** +: Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). + +type: keyword + +example: 172.26.0.2:5432 + + +**`service.origin.environment`** +: Identifies the environment where the service is running. If the same service runs in different environments (production, staging, QA, development, etc.), the environment can identify other instances of the same service. Can also group services and applications from the same environment. + +type: keyword + +example: production + + +**`service.origin.ephemeral_id`** +: Ephemeral identifier of this service (if one exists). This id normally changes across restarts, but `service.id` does not. 
+ +type: keyword + +example: 8a4f500f + + +**`service.origin.id`** +: Unique identifier of the running service. If the service is comprised of many nodes, the `service.id` should be the same for all nodes. This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event. Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead. + +type: keyword + +example: d37e5ebfe0ae6c4972dbe9f0174a1637bb8247f6 + + +**`service.origin.name`** +: Name of the service data is collected from. The name of the service is normally user given. This allows for distributed services that run on multiple hosts to correlate the related instances based on the name. In the case of Elasticsearch the `service.name` could contain the cluster name. For Beats the `service.name` is by default a copy of the `service.type` field if no name is specified. + +type: keyword + +example: elasticsearch-metrics + + +**`service.origin.node.name`** +: Name of a service node. This allows for two nodes of the same service running on the same host to be differentiated. Therefore, `service.node.name` should typically be unique across nodes of a given service. In the case of Elasticsearch, the `service.node.name` could contain the unique node name within the Elasticsearch cluster. In cases where the service doesn’t have the concept of a node name, the host name or container name can be used to distinguish running instances that make up this service. If those do not provide uniqueness (e.g. multiple instances of the service running on the same host) - the node name can be manually set. + +type: keyword + +example: instance-0000000016 + + +**`service.origin.state`** +: Current state of the service. + +type: keyword + + +**`service.origin.type`** +: The type of the service data is collected from. The type can be used to group and correlate logs and metrics from one service type. Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`. + +type: keyword + +example: elasticsearch + + +**`service.origin.version`** +: Version of the service the data was collected from. This allows you to look at a data set only for a specific version of a service. + +type: keyword + +example: 3.2.4 + + +**`service.state`** +: Current state of the service. + +type: keyword + + +**`service.target.address`** +: Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). + +type: keyword + +example: 172.26.0.2:5432 + + +**`service.target.environment`** +: Identifies the environment where the service is running. If the same service runs in different environments (production, staging, QA, development, etc.), the environment can identify other instances of the same service. Can also group services and applications from the same environment. + +type: keyword + +example: production + + +**`service.target.ephemeral_id`** +: Ephemeral identifier of this service (if one exists). This id normally changes across restarts, but `service.id` does not. + +type: keyword + +example: 8a4f500f + + +**`service.target.id`** +: Unique identifier of the running service. If the service is comprised of many nodes, the `service.id` should be the same for all nodes. This id should uniquely identify the service.
+ This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event. Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead. + +type: keyword + +example: d37e5ebfe0ae6c4972dbe9f0174a1637bb8247f6 + + +**`service.target.name`** +: Name of the service data is collected from. The name of the service is normally user given. This allows for distributed services that run on multiple hosts to correlate the related instances based on the name. In the case of Elasticsearch the `service.name` could contain the cluster name. For Beats the `service.name` is by default a copy of the `service.type` field if no name is specified. + +type: keyword + +example: elasticsearch-metrics + + +**`service.target.node.name`** +: Name of a service node. This allows for two nodes of the same service running on the same host to be differentiated. Therefore, `service.node.name` should typically be unique across nodes of a given service. In the case of Elasticsearch, the `service.node.name` could contain the unique node name within the Elasticsearch cluster. In cases where the service doesn’t have the concept of a node name, the host name or container name can be used to distinguish running instances that make up this service. If those do not provide uniqueness (e.g. multiple instances of the service running on the same host) - the node name can be manually set. + +type: keyword + +example: instance-0000000016 + + +**`service.target.state`** +: Current state of the service. + +type: keyword + + +**`service.target.type`** +: The type of the service data is collected from. The type can be used to group and correlate logs and metrics from one service type. Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`. + +type: keyword + +example: elasticsearch + + +**`service.target.version`** +: Version of the service the data was collected from. This allows you to look at a data set only for a specific version of a service. + +type: keyword + +example: 3.2.4 + + +**`service.type`** +: The type of the service data is collected from. The type can be used to group and correlate logs and metrics from one service type. Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`. + +type: keyword + +example: elasticsearch + + +**`service.version`** +: Version of the service the data was collected from. This allows you to look at a data set only for a specific version of a service. + +type: keyword + +example: 3.2.4 + + + +## source [_source_2] + +Source fields capture details about the sender of a network exchange/packet. These fields are populated from a network event, packet, or other event containing details of a network transaction. Source fields are usually populated in conjunction with destination fields. The source and destination fields are considered the baseline and should always be filled if an event contains source and destination details from a network transaction. If the event also contains identification of the client and server roles, then the client and server fields should also be populated.
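+
+Several fields in this group ask you to store a raw value in `.address` and duplicate it into `.ip` or `.domain`, as described for `source.address` just below. A minimal sketch of that duplication step (the `splitAddress` helper is hypothetical and uses only the Go standard library):
+
+```go
+package main
+
+import (
+	"fmt"
+	"net"
+)
+
+// splitAddress duplicates a raw ECS .address value into .ip or .domain.
+// Anything that does not parse as an IP is treated as a domain here;
+// a real pipeline may special-case unix sockets and other forms.
+func splitAddress(raw string) (ip, domain string) {
+	if parsed := net.ParseIP(raw); parsed != nil {
+		return parsed.String(), ""
+	}
+	return "", raw
+}
+
+func main() {
+	fmt.Println(splitAddress("192.0.2.1"))       // -> source.ip
+	fmt.Println(splitAddress("foo.example.com")) // -> source.domain
+}
+```
+
+**`source.address`**
+: Some event source addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is.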
+ +type: keyword + + +**`source.as.number`** +: Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. + +type: long + +example: 15169 + + +**`source.as.organization.name`** +: Organization name. + +type: keyword + +example: Google LLC + + +**`source.as.organization.name.text`** +: type: match_only_text + + +**`source.bytes`** +: Bytes sent from the source to the destination. + +type: long + +example: 184 + +format: bytes + + +**`source.domain`** +: The domain name of the source system. This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment. + +type: keyword + +example: foo.example.com + + +**`source.geo.city_name`** +: City name. + +type: keyword + +example: Montreal + + +**`source.geo.continent_code`** +: Two-letter code representing continent’s name. + +type: keyword + +example: NA + + +**`source.geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`source.geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`source.geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`source.geo.location`** +: Longitude and latitude. + +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`source.geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation. + +type: keyword + +example: boston-dc + + +**`source.geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`source.geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`source.geo.region_name`** +: Region name. + +type: keyword + +example: Quebec + + +**`source.geo.timezone`** +: The time zone of the location, such as IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + +**`source.ip`** +: IP address of the source (IPv4 or IPv6). + +type: ip + + +**`source.mac`** +: MAC address of the source. The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. + +type: keyword + +example: 00-00-5E-00-53-23 + + +**`source.nat.ip`** +: Translated IP of source-based NAT sessions (e.g. internal client to internet). Typically used for connections traversing load balancers, firewalls, or routers. + +type: ip + + +**`source.nat.port`** +: Translated port of source-based NAT sessions (e.g. internal client to internet). Typically used with load balancers, firewalls, or routers. + +type: long + +format: string + + +**`source.packets`** +: Packets sent from the source to the destination. + +type: long + +example: 12 + + +**`source.port`** +: Port of the source. + +type: long + +format: string + + +**`source.registered_domain`** +: The highest registered source domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com".
+ This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". + +type: keyword + +example: example.com + + +**`source.subdomain`** +: The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. + +type: keyword + +example: east + + +**`source.top_level_domain`** +: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". + +type: keyword + +example: co.uk + + +**`source.user.domain`** +: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`source.user.email`** +: User email address. + +type: keyword + + +**`source.user.full_name`** +: User’s full name, if available. + +type: keyword + +example: Albert Einstein + + +**`source.user.full_name.text`** +: type: match_only_text + + +**`source.user.group.domain`** +: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name. + +type: keyword + + +**`source.user.group.id`** +: Unique identifier for the group on the system/platform. + +type: keyword + + +**`source.user.group.name`** +: Name of the group. + +type: keyword + + +**`source.user.hash`** +: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used. + +type: keyword + + +**`source.user.id`** +: Unique identifier of the user. + +type: keyword + +example: S-1-5-21-202424912787-2692429404-2351956786-1000 + + +**`source.user.name`** +: Short name or login of the user. + +type: keyword + +example: a.einstein + + +**`source.user.name.text`** +: type: match_only_text + + +**`source.user.roles`** +: Array of user roles at the time of the event. + +type: keyword + +example: ["kibana_admin", "reporting_user"] + + + +## threat [_threat] + +Fields to classify events and alerts according to a threat taxonomy such as the MITRE ATT&CK® framework. These fields are for users to classify alerts from all of their sources (e.g. IDS, NGFW, etc.) within a common taxonomy. The threat.tactic.* fields are meant to capture the high level category of the threat (e.g. "impact"). The threat.technique.* fields are meant to capture which kind of approach is used by this detected threat, to accomplish the goal (e.g. "endpoint denial of service"). + +**`threat.enrichments`** +: A list of associated indicator objects enriching the event, and the context of that association/enrichment. + +type: nested + + +**`threat.enrichments.indicator`** +: Object containing associated indicators enriching the event.
+ +type: object + + +**`threat.enrichments.indicator.as.number`** +: Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. + +type: long + +example: 15169 + + +**`threat.enrichments.indicator.as.organization.name`** +: Organization name. + +type: keyword + +example: Google LLC + + +**`threat.enrichments.indicator.as.organization.name.text`** +: type: match_only_text + + +**`threat.enrichments.indicator.confidence`** +: Identifies the vendor-neutral confidence rating using the None/Low/Medium/High scale defined in Appendix A of the STIX 2.1 framework. Vendor-specific confidence scales may be added as custom fields. Expected values are: * Not Specified * None * Low * Medium * High + +type: keyword + +example: Medium + + +**`threat.enrichments.indicator.description`** +: Describes the type of action conducted by the threat. + +type: keyword + +example: IP x.x.x.x was observed delivering the Angler EK. + + +**`threat.enrichments.indicator.email.address`** +: Identifies a threat indicator as an email address (irrespective of direction). + +type: keyword + +example: `phish@example.com` + + +**`threat.enrichments.indicator.file.accessed`** +: Last time the file was accessed. Note that not all filesystems keep track of access time. + +type: date + + +**`threat.enrichments.indicator.file.attributes`** +: Array of file attributes. Attributes names will vary by platform. Here’s a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write. + +type: keyword + +example: ["readonly", "system"] + + +**`threat.enrichments.indicator.file.code_signature.digest_algorithm`** +: The hashing algorithm used to sign the process. This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. + +type: keyword + +example: sha256 + + +**`threat.enrichments.indicator.file.code_signature.exists`** +: Boolean to capture if a signature is present. + +type: boolean + +example: true + + +**`threat.enrichments.indicator.file.code_signature.signing_id`** +: The identifier used to sign the process. This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. + +type: keyword + +example: com.apple.xpc.proxy + + +**`threat.enrichments.indicator.file.code_signature.status`** +: Additional information about the certificate status. This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. + +type: keyword + +example: ERROR_UNTRUSTED_ROOT + + +**`threat.enrichments.indicator.file.code_signature.subject_name`** +: Subject name of the code signer + +type: keyword + +example: Microsoft Corporation + + +**`threat.enrichments.indicator.file.code_signature.team_id`** +: The team identifier used to sign the process. This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. + +type: keyword + +example: EQHXZ8M8AV + + +**`threat.enrichments.indicator.file.code_signature.timestamp`** +: Date and time when the code signature was generated and signed. + +type: date + +example: 2021-01-01T12:10:30Z + + +**`threat.enrichments.indicator.file.code_signature.trusted`** +: Stores the trust status of the certificate chain. 
Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. + +type: boolean + +example: true + + +**`threat.enrichments.indicator.file.code_signature.valid`** +: Boolean to capture if the digital signature is verified against the binary content. Leave unpopulated if a certificate was unchecked. + +type: boolean + +example: true + + +**`threat.enrichments.indicator.file.created`** +: File creation time. Note that not all filesystems store the creation time. + +type: date + + +**`threat.enrichments.indicator.file.ctime`** +: Last time the file attributes or metadata changed. Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file. + +type: date + + +**`threat.enrichments.indicator.file.device`** +: Device that is the source of the file. + +type: keyword + +example: sda + + +**`threat.enrichments.indicator.file.directory`** +: Directory where the file is located. It should include the drive letter, when appropriate. + +type: keyword + +example: /home/alice + + +**`threat.enrichments.indicator.file.drive_letter`** +: Drive letter where the file is located. This field is only relevant on Windows. The value should be uppercase, and not include the colon. + +type: keyword + +example: C + + +**`threat.enrichments.indicator.file.elf.architecture`** +: Machine architecture of the ELF file. + +type: keyword + +example: x86-64 + + +**`threat.enrichments.indicator.file.elf.byte_order`** +: Byte sequence of ELF file. + +type: keyword + +example: Little Endian + + +**`threat.enrichments.indicator.file.elf.cpu_type`** +: CPU type of the ELF file. + +type: keyword + +example: Intel + + +**`threat.enrichments.indicator.file.elf.creation_date`** +: Extracted when possible from the file’s metadata. Indicates when it was built or compiled. It can also be faked by malware creators. + +type: date + + +**`threat.enrichments.indicator.file.elf.exports`** +: List of exported element names and types. + +type: flattened + + +**`threat.enrichments.indicator.file.elf.header.abi_version`** +: Version of the ELF Application Binary Interface (ABI). + +type: keyword + + +**`threat.enrichments.indicator.file.elf.header.class`** +: Header class of the ELF file. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.header.data`** +: Data table of the ELF header. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.header.entrypoint`** +: Header entrypoint of the ELF file. + +type: long + +format: string + + +**`threat.enrichments.indicator.file.elf.header.object_version`** +: "0x1" for original ELF files. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.header.os_abi`** +: Application Binary Interface (ABI) of the Linux OS. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.header.type`** +: Header type of the ELF file. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.header.version`** +: Version of the ELF header. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.imports`** +: List of imported element names and types. + +type: flattened + + +**`threat.enrichments.indicator.file.elf.sections`** +: An array containing an object for each section of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. 
+ +type: nested + + +**`threat.enrichments.indicator.file.elf.sections.chi2`** +: Chi-square probability distribution of the section. + +type: long + +format: number + + +**`threat.enrichments.indicator.file.elf.sections.entropy`** +: Shannon entropy calculation from the section. A worked sketch appears after the hash fields below. + +type: long + +format: number + + +**`threat.enrichments.indicator.file.elf.sections.flags`** +: ELF Section List flags. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.sections.name`** +: ELF Section List name. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.sections.physical_offset`** +: ELF Section List offset. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.sections.physical_size`** +: ELF Section List physical size. + +type: long + +format: bytes + + +**`threat.enrichments.indicator.file.elf.sections.type`** +: ELF Section List type. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.sections.virtual_address`** +: ELF Section List virtual address. + +type: long + +format: string + + +**`threat.enrichments.indicator.file.elf.sections.virtual_size`** +: ELF Section List virtual size. + +type: long + +format: string + + +**`threat.enrichments.indicator.file.elf.segments`** +: An array containing an object for each segment of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. + +type: nested + + +**`threat.enrichments.indicator.file.elf.segments.sections`** +: ELF object segment sections. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.segments.type`** +: ELF object segment type. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.shared_libraries`** +: List of shared libraries used by this ELF object. + +type: keyword + + +**`threat.enrichments.indicator.file.elf.telfhash`** +: telfhash symbol hash for ELF file. + +type: keyword + + +**`threat.enrichments.indicator.file.extension`** +: File extension, excluding the leading dot. Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). + +type: keyword + +example: png + + +**`threat.enrichments.indicator.file.fork_name`** +: A fork is additional data associated with a filesystem object. On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist. On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name. + +type: keyword + +example: Zone.Identifier + + +**`threat.enrichments.indicator.file.gid`** +: Primary group ID (GID) of the file. + +type: keyword + +example: 1001 + + +**`threat.enrichments.indicator.file.group`** +: Primary group name of the file. + +type: keyword + +example: alice + + +**`threat.enrichments.indicator.file.hash.md5`** +: MD5 hash. + +type: keyword + + +**`threat.enrichments.indicator.file.hash.sha1`** +: SHA1 hash. + +type: keyword
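+
+The `elf.sections.entropy` field above stores a Shannon entropy value computed over a section's bytes. As an illustrative sketch (the `shannonEntropy` helper is hypothetical; producers decide any rounding when mapping the result into the field):
+
+```go
+package main
+
+import (
+	"fmt"
+	"math"
+)
+
+// shannonEntropy returns the Shannon entropy of data in bits per byte,
+// ranging from 0 (constant input) to 8 (uniformly random bytes).
+func shannonEntropy(data []byte) float64 {
+	if len(data) == 0 {
+		return 0
+	}
+	var counts [256]int
+	for _, b := range data {
+		counts[b]++
+	}
+	var h float64
+	n := float64(len(data))
+	for _, c := range counts {
+		if c == 0 {
+			continue
+		}
+		p := float64(c) / n
+		h -= p * math.Log2(p)
+	}
+	return h
+}
+
+func main() {
+	fmt.Println(shannonEntropy([]byte("aaaa")))     // 0
+	fmt.Println(shannonEntropy([]byte{0, 1, 2, 3})) // 2
+}
+```
+
+
+**`threat.enrichments.indicator.file.hash.sha256`**
+: SHA256 hash.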
+ +type: keyword + + +**`threat.enrichments.indicator.file.hash.sha512`** +: SHA512 hash. + +type: keyword + + +**`threat.enrichments.indicator.file.hash.ssdeep`** +: SSDEEP hash. + +type: keyword + + +**`threat.enrichments.indicator.file.inode`** +: Inode representing the file in the filesystem. + +type: keyword + +example: 256383 + + +**`threat.enrichments.indicator.file.mime_type`** +: MIME type should identify the format of the file or stream of bytes using [IANA official types](https://www.iana.org/assignments/media-types/media-types.xhtml), where possible. When more than one type is applicable, the most specific type should be used. + +type: keyword + + +**`threat.enrichments.indicator.file.mode`** +: Mode of the file in octal representation. + +type: keyword + +example: 0640 + + +**`threat.enrichments.indicator.file.mtime`** +: Last time the file content was modified. + +type: date + + +**`threat.enrichments.indicator.file.name`** +: Name of the file including the extension, without the directory. + +type: keyword + +example: example.png + + +**`threat.enrichments.indicator.file.owner`** +: File owner’s username. + +type: keyword + +example: alice + + +**`threat.enrichments.indicator.file.path`** +: Full path to the file, including the file name. It should include the drive letter, when appropriate. + +type: keyword + +example: /home/alice/example.png + + +**`threat.enrichments.indicator.file.path.text`** +: type: match_only_text + + +**`threat.enrichments.indicator.file.pe.architecture`** +: CPU architecture target for the file. + +type: keyword + +example: x64 + + +**`threat.enrichments.indicator.file.pe.company`** +: Internal company name of the file, provided at compile-time. + +type: keyword + +example: Microsoft Corporation + + +**`threat.enrichments.indicator.file.pe.description`** +: Internal description of the file, provided at compile-time. + +type: keyword + +example: Paint + + +**`threat.enrichments.indicator.file.pe.file_version`** +: Internal version of the file, provided at compile-time. + +type: keyword + +example: 6.3.9600.17415 + + +**`threat.enrichments.indicator.file.pe.imphash`** +: A hash of the imports in a PE file. An imphash — or import hash — can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. Learn more at [https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html](https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html). A minimal sketch of the construction follows this field group. + +type: keyword + +example: 0c6803c4e922103c4dca5963aad36ddf + + +**`threat.enrichments.indicator.file.pe.original_file_name`** +: Internal name of the file, provided at compile-time. + +type: keyword + +example: MSPAINT.EXE + + +**`threat.enrichments.indicator.file.pe.product`** +: Internal product name of the file, provided at compile-time. + +type: keyword + +example: Microsoft® Windows® Operating System + + +**`threat.enrichments.indicator.file.size`** +: File size in bytes. Only relevant when `file.type` is "file". + +type: long + +example: 16384 + + +**`threat.enrichments.indicator.file.target_path`** +: Target path for symlinks. + +type: keyword + + +**`threat.enrichments.indicator.file.target_path.text`** +: type: match_only_text
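+
+As referenced under `file.pe.imphash` above, an import hash is conventionally built by joining the lowercased "module.function" import pairs with commas and hashing the result with MD5. A minimal sketch of that construction (the `imphash` helper is hypothetical; real implementations also strip module extensions and resolve ordinal imports before hashing):
+
+```go
+package main
+
+import (
+	"crypto/md5"
+	"fmt"
+	"strings"
+)
+
+// imphash hashes an ordered list of "module.function" import pairs.
+func imphash(imports []string) string {
+	joined := strings.ToLower(strings.Join(imports, ","))
+	return fmt.Sprintf("%x", md5.Sum([]byte(joined)))
+}
+
+func main() {
+	fmt.Println(imphash([]string{"kernel32.createfilea", "advapi32.regopenkeyexa"}))
+}
+```
+
+
+**`threat.enrichments.indicator.file.type`**
+: File type (file, dir, or symlink).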
+ +type: keyword + +example: file + + +**`threat.enrichments.indicator.file.uid`** +: The user ID (UID) or security identifier (SID) of the file owner. + +type: keyword + +example: 1001 + + +**`threat.enrichments.indicator.file.x509.alternative_names`** +: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. + +type: keyword + +example: *.elastic.co + + +**`threat.enrichments.indicator.file.x509.issuer.common_name`** +: List of common name (CN) of issuing certificate authority. + +type: keyword + +example: Example SHA2 High Assurance Server CA + + +**`threat.enrichments.indicator.file.x509.issuer.country`** +: List of country (C) codes. + +type: keyword + +example: US + + +**`threat.enrichments.indicator.file.x509.issuer.distinguished_name`** +: Distinguished name (DN) of issuing certificate authority. + +type: keyword + +example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA + + +**`threat.enrichments.indicator.file.x509.issuer.locality`** +: List of locality names (L). + +type: keyword + +example: Mountain View + + +**`threat.enrichments.indicator.file.x509.issuer.organization`** +: List of organizations (O) of issuing certificate authority. + +type: keyword + +example: Example Inc + + +**`threat.enrichments.indicator.file.x509.issuer.organizational_unit`** +: List of organizational units (OU) of issuing certificate authority. + +type: keyword + +example: www.example.com + + +**`threat.enrichments.indicator.file.x509.issuer.state_or_province`** +: List of state or province names (ST, S, or P). + +type: keyword + +example: California + + +**`threat.enrichments.indicator.file.x509.not_after`** +: Time at which the certificate is no longer considered valid. + +type: date + +example: 2020-07-16 03:15:39+00:00 + + +**`threat.enrichments.indicator.file.x509.not_before`** +: Time at which the certificate is first considered valid. + +type: date + +example: 2019-08-16 01:40:25+00:00 + + +**`threat.enrichments.indicator.file.x509.public_key_algorithm`** +: Algorithm used to generate the public key. + +type: keyword + +example: RSA + + +**`threat.enrichments.indicator.file.x509.public_key_curve`** +: The curve used by the elliptic curve public key algorithm. This is algorithm specific. + +type: keyword + +example: nistp521 + + +**`threat.enrichments.indicator.file.x509.public_key_exponent`** +: Exponent used to derive the public key. This is algorithm specific. + +type: long + +example: 65537 + +Field is not indexed. + + +**`threat.enrichments.indicator.file.x509.public_key_size`** +: The size of the public key space in bits. + +type: long + +example: 2048 + + +**`threat.enrichments.indicator.file.x509.serial_number`** +: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. + +type: keyword + +example: 55FBB9C7DEBF09809D12CCAA + + +**`threat.enrichments.indicator.file.x509.signature_algorithm`** +: Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353). + +type: keyword + +example: SHA256-RSA
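+
+The x509.* fields above correspond closely to what Go's standard `crypto/x509` package exposes when parsing a certificate. An illustrative extraction (the `cert.pem` file name is a placeholder):
+
+```go
+package main
+
+import (
+	"crypto/x509"
+	"encoding/pem"
+	"fmt"
+	"os"
+)
+
+func main() {
+	pemBytes, err := os.ReadFile("cert.pem") // placeholder input
+	if err != nil {
+		panic(err)
+	}
+	block, _ := pem.Decode(pemBytes)
+	if block == nil {
+		panic("no PEM block found")
+	}
+	cert, err := x509.ParseCertificate(block.Bytes)
+	if err != nil {
+		panic(err)
+	}
+
+	fmt.Println(cert.Issuer.String())             // x509.issuer.distinguished_name
+	fmt.Println(cert.Subject.CommonName)          // x509.subject.common_name
+	fmt.Println(cert.NotBefore, cert.NotAfter)    // x509.not_before / x509.not_after
+	fmt.Printf("%X\n", cert.SerialNumber)         // x509.serial_number
+	fmt.Println(cert.SignatureAlgorithm.String()) // x509.signature_algorithm, e.g. "SHA256-RSA"
+}
+```
+
+
+**`threat.enrichments.indicator.file.x509.subject.common_name`**
+: List of common names (CN) of subject.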
+ +type: keyword + +example: shared.global.example.net + + +**`threat.enrichments.indicator.file.x509.subject.country`** +: List of country (C) codes. + +type: keyword + +example: US + + +**`threat.enrichments.indicator.file.x509.subject.distinguished_name`** +: Distinguished name (DN) of the certificate subject entity. + +type: keyword + +example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net + + +**`threat.enrichments.indicator.file.x509.subject.locality`** +: List of locality names (L). + +type: keyword + +example: San Francisco + + +**`threat.enrichments.indicator.file.x509.subject.organization`** +: List of organizations (O) of subject. + +type: keyword + +example: Example, Inc. + + +**`threat.enrichments.indicator.file.x509.subject.organizational_unit`** +: List of organizational units (OU) of subject. + +type: keyword + + +**`threat.enrichments.indicator.file.x509.subject.state_or_province`** +: List of state or province names (ST, S, or P). + +type: keyword + +example: California + + +**`threat.enrichments.indicator.file.x509.version_number`** +: Version of x509 format. + +type: keyword + +example: 3 + + +**`threat.enrichments.indicator.first_seen`** +: The date and time when intelligence source first reported sighting this indicator. + +type: date + +example: 2020-11-05T17:25:47.000Z + + +**`threat.enrichments.indicator.geo.city_name`** +: City name. + +type: keyword + +example: Montreal + + +**`threat.enrichments.indicator.geo.continent_code`** +: Two-letter code representing continent’s name. + +type: keyword + +example: NA + + +**`threat.enrichments.indicator.geo.continent_name`** +: Name of the continent. + +type: keyword + +example: North America + + +**`threat.enrichments.indicator.geo.country_iso_code`** +: Country ISO code. + +type: keyword + +example: CA + + +**`threat.enrichments.indicator.geo.country_name`** +: Country name. + +type: keyword + +example: Canada + + +**`threat.enrichments.indicator.geo.location`** +: Longitude and latitude. + +type: geo_point + +example: { "lon": -73.614830, "lat": 45.505918 } + + +**`threat.enrichments.indicator.geo.name`** +: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation. + +type: keyword + +example: boston-dc + + +**`threat.enrichments.indicator.geo.postal_code`** +: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. + +type: keyword + +example: 94040 + + +**`threat.enrichments.indicator.geo.region_iso_code`** +: Region ISO code. + +type: keyword + +example: CA-QC + + +**`threat.enrichments.indicator.geo.region_name`** +: Region name. + +type: keyword + +example: Quebec + + +**`threat.enrichments.indicator.geo.timezone`** +: The time zone of the location, such as IANA time zone name. + +type: keyword + +example: America/Argentina/Buenos_Aires + + +**`threat.enrichments.indicator.ip`** +: Identifies a threat indicator as an IP address (irrespective of direction). + +type: ip + +example: 1.2.3.4 + + +**`threat.enrichments.indicator.last_seen`** +: The date and time when intelligence source last reported sighting this indicator. + +type: date + +example: 2020-11-05T17:25:47.000Z + + +**`threat.enrichments.indicator.marking.tlp`** +: Traffic Light Protocol sharing markings.
+ Recommended values are: * WHITE * GREEN * AMBER * RED + +type: keyword + +example: WHITE + + +**`threat.enrichments.indicator.modified_at`** +: The date and time when intelligence source last modified information for this indicator. + +type: date + +example: 2020-11-05T17:25:47.000Z + + +**`threat.enrichments.indicator.port`** +: Identifies a threat indicator as a port number (irrespective of direction). + +type: long + +example: 443 + + +**`threat.enrichments.indicator.provider`** +: The name of the indicator’s provider. + +type: keyword + +example: lrz_urlhaus + + +**`threat.enrichments.indicator.reference`** +: Reference URL linking to additional information about this indicator. + +type: keyword + +example: [https://system.example.com/indicator/0001234](https://system.example.com/indicator/0001234) + + +**`threat.enrichments.indicator.registry.data.bytes`** +: Original bytes written with base64 encoding. For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values. + +type: keyword + +example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA= + + +**`threat.enrichments.indicator.registry.data.strings`** +: Content when writing string types. Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of string with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g. `"1"`). + +type: wildcard + +example: ["C:\rta\red_ttp\bin\myapp.exe"] + + +**`threat.enrichments.indicator.registry.data.type`** +: Standard registry type for encoding contents. + +type: keyword + +example: REG_SZ + + +**`threat.enrichments.indicator.registry.hive`** +: Abbreviated name for the hive. + +type: keyword + +example: HKLM + + +**`threat.enrichments.indicator.registry.key`** +: Hive-relative path of keys. + +type: keyword + +example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe + + +**`threat.enrichments.indicator.registry.path`** +: Full path, including hive, key, and value. + +type: keyword + +example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger + + +**`threat.enrichments.indicator.registry.value`** +: Name of the value written. + +type: keyword + +example: Debugger + + +**`threat.enrichments.indicator.scanner_stats`** +: Count of AV/EDR vendors that successfully detected the malicious file or URL. + +type: long + +example: 4 + + +**`threat.enrichments.indicator.sightings`** +: Number of times this indicator was observed conducting threat activity. + +type: long + +example: 20 + + +**`threat.enrichments.indicator.type`** +: Type of indicator as represented by Cyber Observable in STIX 2.0. Recommended values: * autonomous-system * artifact * directory * domain-name * email-addr * file * ipv4-addr * ipv6-addr * mac-addr * mutex * port * process * software * url * user-account * windows-registry-key * x509-certificate + +type: keyword + +example: ipv4-addr + + +**`threat.enrichments.indicator.url.domain`** +: Domain of the url, such as "www.elastic.co". In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field.
If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field. + +type: keyword + +example: www.elastic.co + + +**`threat.enrichments.indicator.url.extension`** +: The field contains the file extension from the original request url, excluding the leading dot. The file extension is only set if it exists, as not every url has a file extension. The leading period must not be included. For example, the value must be "png", not ".png". Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). + +type: keyword + +example: png + + +**`threat.enrichments.indicator.url.fragment`** +: Portion of the url after the `#`, such as "top". The `#` is not part of the fragment. + +type: keyword + + +**`threat.enrichments.indicator.url.full`** +: If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source. + +type: wildcard + +example: [https://www.elastic.co:443/search?q=elasticsearch#top](https://www.elastic.co:443/search?q=elasticsearch#top) + + +**`threat.enrichments.indicator.url.full.text`** +: type: match_only_text + + +**`threat.enrichments.indicator.url.original`** +: Unmodified original url as seen in the event source. Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path. This field is meant to represent the URL as it was observed, complete or not. + +type: wildcard + +example: [https://www.elastic.co:443/search?q=elasticsearch#top](https://www.elastic.co:443/search?q=elasticsearch#top) or /search?q=elasticsearch + + +**`threat.enrichments.indicator.url.original.text`** +: type: match_only_text + + +**`threat.enrichments.indicator.url.password`** +: Password of the request. + +type: keyword + + +**`threat.enrichments.indicator.url.path`** +: Path of the request, such as "/search". + +type: wildcard + + +**`threat.enrichments.indicator.url.port`** +: Port of the request, such as 443. + +type: long + +example: 443 + +format: string + + +**`threat.enrichments.indicator.url.query`** +: The query field describes the query string of the request, such as "q=elasticsearch". The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases. + +type: keyword + + +**`threat.enrichments.indicator.url.registered_domain`** +: The highest registered url domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". + +type: keyword + +example: example.com + + +**`threat.enrichments.indicator.url.scheme`** +: Scheme of the request, such as "https". Note: The `:` is not part of the scheme. + +type: keyword + +example: https + + +**`threat.enrichments.indicator.url.subdomain`** +: The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. 
+ In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. + +type: keyword + +example: east + + +**`threat.enrichments.indicator.url.top_level_domain`** +: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". + +type: keyword + +example: co.uk + + +**`threat.enrichments.indicator.url.username`** +: Username of the request. + +type: keyword + + +**`threat.enrichments.indicator.x509.alternative_names`** +: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. + +type: keyword + +example: *.elastic.co + + +**`threat.enrichments.indicator.x509.issuer.common_name`** +: List of common name (CN) of issuing certificate authority. + +type: keyword + +example: Example SHA2 High Assurance Server CA + + +**`threat.enrichments.indicator.x509.issuer.country`** +: List of country (C) codes. + +type: keyword + +example: US + + +**`threat.enrichments.indicator.x509.issuer.distinguished_name`** +: Distinguished name (DN) of issuing certificate authority. + +type: keyword + +example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA + + +**`threat.enrichments.indicator.x509.issuer.locality`** +: List of locality names (L). + +type: keyword + +example: Mountain View + + +**`threat.enrichments.indicator.x509.issuer.organization`** +: List of organizations (O) of issuing certificate authority. + +type: keyword + +example: Example Inc + + +**`threat.enrichments.indicator.x509.issuer.organizational_unit`** +: List of organizational units (OU) of issuing certificate authority. + +type: keyword + +example: www.example.com + + +**`threat.enrichments.indicator.x509.issuer.state_or_province`** +: List of state or province names (ST, S, or P). + +type: keyword + +example: California + + +**`threat.enrichments.indicator.x509.not_after`** +: Time at which the certificate is no longer considered valid. + +type: date + +example: 2020-07-16 03:15:39+00:00 + + +**`threat.enrichments.indicator.x509.not_before`** +: Time at which the certificate is first considered valid. + +type: date + +example: 2019-08-16 01:40:25+00:00 + + +**`threat.enrichments.indicator.x509.public_key_algorithm`** +: Algorithm used to generate the public key. + +type: keyword + +example: RSA + + +**`threat.enrichments.indicator.x509.public_key_curve`** +: The curve used by the elliptic curve public key algorithm. This is algorithm specific. + +type: keyword + +example: nistp521 + + +**`threat.enrichments.indicator.x509.public_key_exponent`** +: Exponent used to derive the public key. This is algorithm specific. + +type: long + +example: 65537 + +Field is not indexed. + + +**`threat.enrichments.indicator.x509.public_key_size`** +: The size of the public key space in bits.
+ +type: long + +example: 2048 + + +**`threat.enrichments.indicator.x509.serial_number`** +: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. + +type: keyword + +example: 55FBB9C7DEBF09809D12CCAA + + +**`threat.enrichments.indicator.x509.signature_algorithm`** +: Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353). + +type: keyword + +example: SHA256-RSA + + +**`threat.enrichments.indicator.x509.subject.common_name`** +: List of common names (CN) of subject. + +type: keyword + +example: shared.global.example.net + + +**`threat.enrichments.indicator.x509.subject.country`** +: List of country (C) codes. + +type: keyword + +example: US + + +**`threat.enrichments.indicator.x509.subject.distinguished_name`** +: Distinguished name (DN) of the certificate subject entity. + +type: keyword + +example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net + + +**`threat.enrichments.indicator.x509.subject.locality`** +: List of locality names (L). + +type: keyword + +example: San Francisco + + +**`threat.enrichments.indicator.x509.subject.organization`** +: List of organizations (O) of subject. + +type: keyword + +example: Example, Inc. + + +**`threat.enrichments.indicator.x509.subject.organizational_unit`** +: List of organizational units (OU) of subject. + +type: keyword + + +**`threat.enrichments.indicator.x509.subject.state_or_province`** +: List of state or province names (ST, S, or P). + +type: keyword + +example: California + + +**`threat.enrichments.indicator.x509.version_number`** +: Version of x509 format. + +type: keyword + +example: 3 + + +**`threat.enrichments.matched.atomic`** +: Identifies the atomic indicator value that matched a local environment endpoint or network event. + +type: keyword + +example: bad-domain.com + + +**`threat.enrichments.matched.field`** +: Identifies the field of the atomic indicator that matched a local environment endpoint or network event. + +type: keyword + +example: file.hash.sha256 + + +**`threat.enrichments.matched.id`** +: Identifies the _id of the indicator document enriching the event. + +type: keyword + +example: ff93aee5-86a1-4a61-b0e6-0cdc313d01b5 + + +**`threat.enrichments.matched.index`** +: Identifies the _index of the indicator document enriching the event. + +type: keyword + +example: filebeat-8.0.0-2021.05.23-000011 + + +**`threat.enrichments.matched.type`** +: Identifies the type of match that caused the event to be enriched with the given indicator. + +type: keyword + +example: indicator_match_rule + + +**`threat.framework`** +: Name of the threat framework used to further categorize and classify the tactic and technique of the reported threat. Framework classification can be provided by detecting systems, evaluated at ingest time, or retrospectively tagged to events. + +type: keyword + +example: MITRE ATT&CK + + +**`threat.group.alias`** +: The alias(es) of the group for a set of related intrusion activity that are tracked by a common name in the security community. While not required, you can use a MITRE ATT&CK® group alias(es).
+ +type: keyword + +example: [ "Magecart Group 6" ] + + +**`threat.group.id`** +: The id of the group for a set of related intrusion activity that are tracked by a common name in the security community. While not required, you can use a MITRE ATT&CK® group id. + +type: keyword + +example: G0037 + + +**`threat.group.name`** +: The name of the group for a set of related intrusion activity that are tracked by a common name in the security community. While not required, you can use a MITRE ATT&CK® group name. + +type: keyword + +example: FIN6 + + +**`threat.group.reference`** +: The reference URL of the group for a set of related intrusion activity that are tracked by a common name in the security community. While not required, you can use a MITRE ATT&CK® group reference URL. + +type: keyword + +example: [https://attack.mitre.org/groups/G0037/](https://attack.mitre.org/groups/G0037/) + + +**`threat.indicator.as.number`** +: Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. + +type: long + +example: 15169 + + +**`threat.indicator.as.organization.name`** +: Organization name. + +type: keyword + +example: Google LLC + + +**`threat.indicator.as.organization.name.text`** +: type: match_only_text + + +**`threat.indicator.confidence`** +: Identifies the vendor-neutral confidence rating using the None/Low/Medium/High scale defined in Appendix A of the STIX 2.1 framework. Vendor-specific confidence scales may be added as custom fields. Expected values are: * Not Specified * None * Low * Medium * High + +type: keyword + +example: Medium + + +**`threat.indicator.description`** +: Describes the type of action conducted by the threat. + +type: keyword + +example: IP x.x.x.x was observed delivering the Angler EK. + + +**`threat.indicator.email.address`** +: Identifies a threat indicator as an email address (irrespective of direction). + +type: keyword + +example: `phish@example.com` + + +**`threat.indicator.file.accessed`** +: Last time the file was accessed. Note that not all filesystems keep track of access time. + +type: date + + +**`threat.indicator.file.attributes`** +: Array of file attributes. Attributes names will vary by platform. Here’s a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write. + +type: keyword + +example: ["readonly", "system"] + + +**`threat.indicator.file.code_signature.digest_algorithm`** +: The hashing algorithm used to sign the process. This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. + +type: keyword + +example: sha256 + + +**`threat.indicator.file.code_signature.exists`** +: Boolean to capture if a signature is present. + +type: boolean + +example: true + + +**`threat.indicator.file.code_signature.signing_id`** +: The identifier used to sign the process. This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. + +type: keyword + +example: com.apple.xpc.proxy + + +**`threat.indicator.file.code_signature.status`** +: Additional information about the certificate status. This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. 
+ +type: keyword + +example: ERROR_UNTRUSTED_ROOT + + +**`threat.indicator.file.code_signature.subject_name`** +: Subject name of the code signer + +type: keyword + +example: Microsoft Corporation + + +**`threat.indicator.file.code_signature.team_id`** +: The team identifier used to sign the process. This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. + +type: keyword + +example: EQHXZ8M8AV + + +**`threat.indicator.file.code_signature.timestamp`** +: Date and time when the code signature was generated and signed. + +type: date + +example: 2021-01-01T12:10:30Z + + +**`threat.indicator.file.code_signature.trusted`** +: Stores the trust status of the certificate chain. Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. + +type: boolean + +example: true + + +**`threat.indicator.file.code_signature.valid`** +: Boolean to capture if the digital signature is verified against the binary content. Leave unpopulated if a certificate was unchecked. + +type: boolean + +example: true + + +**`threat.indicator.file.created`** +: File creation time. Note that not all filesystems store the creation time. + +type: date + + +**`threat.indicator.file.ctime`** +: Last time the file attributes or metadata changed. Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file. + +type: date + + +**`threat.indicator.file.device`** +: Device that is the source of the file. + +type: keyword + +example: sda + + +**`threat.indicator.file.directory`** +: Directory where the file is located. It should include the drive letter, when appropriate. + +type: keyword + +example: /home/alice + + +**`threat.indicator.file.drive_letter`** +: Drive letter where the file is located. This field is only relevant on Windows. The value should be uppercase, and not include the colon. + +type: keyword + +example: C + + +**`threat.indicator.file.elf.architecture`** +: Machine architecture of the ELF file. + +type: keyword + +example: x86-64 + + +**`threat.indicator.file.elf.byte_order`** +: Byte sequence of ELF file. + +type: keyword + +example: Little Endian + + +**`threat.indicator.file.elf.cpu_type`** +: CPU type of the ELF file. + +type: keyword + +example: Intel + + +**`threat.indicator.file.elf.creation_date`** +: Extracted when possible from the file’s metadata. Indicates when it was built or compiled. It can also be faked by malware creators. + +type: date + + +**`threat.indicator.file.elf.exports`** +: List of exported element names and types. + +type: flattened + + +**`threat.indicator.file.elf.header.abi_version`** +: Version of the ELF Application Binary Interface (ABI). + +type: keyword + + +**`threat.indicator.file.elf.header.class`** +: Header class of the ELF file. + +type: keyword + + +**`threat.indicator.file.elf.header.data`** +: Data table of the ELF header. + +type: keyword + + +**`threat.indicator.file.elf.header.entrypoint`** +: Header entrypoint of the ELF file. + +type: long + +format: string + + +**`threat.indicator.file.elf.header.object_version`** +: "0x1" for original ELF files. + +type: keyword + + +**`threat.indicator.file.elf.header.os_abi`** +: Application Binary Interface (ABI) of the Linux OS. + +type: keyword + + +**`threat.indicator.file.elf.header.type`** +: Header type of the ELF file. 
+ +type: keyword + + +**`threat.indicator.file.elf.header.version`** +: Version of the ELF header. + +type: keyword + + +**`threat.indicator.file.elf.imports`** +: List of imported element names and types. + +type: flattened + + +**`threat.indicator.file.elf.sections`** +: An array containing an object for each section of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. + +type: nested + + +**`threat.indicator.file.elf.sections.chi2`** +: Chi-square probability distribution of the section. + +type: long + +format: number + + +**`threat.indicator.file.elf.sections.entropy`** +: Shannon entropy calculation from the section. + +type: long + +format: number + + +**`threat.indicator.file.elf.sections.flags`** +: ELF Section List flags. + +type: keyword + + +**`threat.indicator.file.elf.sections.name`** +: ELF Section List name. + +type: keyword + + +**`threat.indicator.file.elf.sections.physical_offset`** +: ELF Section List offset. + +type: keyword + + +**`threat.indicator.file.elf.sections.physical_size`** +: ELF Section List physical size. + +type: long + +format: bytes + + +**`threat.indicator.file.elf.sections.type`** +: ELF Section List type. + +type: keyword + + +**`threat.indicator.file.elf.sections.virtual_address`** +: ELF Section List virtual address. + +type: long + +format: string + + +**`threat.indicator.file.elf.sections.virtual_size`** +: ELF Section List virtual size. + +type: long + +format: string + + +**`threat.indicator.file.elf.segments`** +: An array containing an object for each segment of the ELF file. The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. + +type: nested + + +**`threat.indicator.file.elf.segments.sections`** +: ELF object segment sections. + +type: keyword + + +**`threat.indicator.file.elf.segments.type`** +: ELF object segment type. + +type: keyword + + +**`threat.indicator.file.elf.shared_libraries`** +: List of shared libraries used by this ELF object. + +type: keyword + + +**`threat.indicator.file.elf.telfhash`** +: telfhash symbol hash for ELF file. + +type: keyword + + +**`threat.indicator.file.extension`** +: File extension, excluding the leading dot. Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). + +type: keyword + +example: png + + +**`threat.indicator.file.fork_name`** +: A fork is additional data associated with a filesystem object. On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist. On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name. + +type: keyword + +example: Zone.Identifier + + +**`threat.indicator.file.gid`** +: Primary group ID (GID) of the file. + +type: keyword + +example: 1001 + + +**`threat.indicator.file.group`** +: Primary group name of the file. + +type: keyword + +example: alice + + +**`threat.indicator.file.hash.md5`** +: MD5 hash.
+
+type: keyword
+
+
+**`threat.indicator.file.hash.sha1`**
+: SHA1 hash.
+
+type: keyword
+
+
+**`threat.indicator.file.hash.sha256`**
+: SHA256 hash.
+
+type: keyword
+
+
+**`threat.indicator.file.hash.sha512`**
+: SHA512 hash.
+
+type: keyword
+
+
+**`threat.indicator.file.hash.ssdeep`**
+: SSDEEP hash.
+
+type: keyword
+
+
+**`threat.indicator.file.inode`**
+: Inode representing the file in the filesystem.
+
+type: keyword
+
+example: 256383
+
+
+**`threat.indicator.file.mime_type`**
+: MIME type should identify the format of the file or stream of bytes using [IANA official types](https://www.iana.org/assignments/media-types/media-types.xhtml), where possible. When more than one type is applicable, the most specific type should be used.
+
+type: keyword
+
+
+**`threat.indicator.file.mode`**
+: Mode of the file in octal representation.
+
+type: keyword
+
+example: 0640
+
+
+**`threat.indicator.file.mtime`**
+: Last time the file content was modified.
+
+type: date
+
+
+**`threat.indicator.file.name`**
+: Name of the file including the extension, without the directory.
+
+type: keyword
+
+example: example.png
+
+
+**`threat.indicator.file.owner`**
+: File owner’s username.
+
+type: keyword
+
+example: alice
+
+
+**`threat.indicator.file.path`**
+: Full path to the file, including the file name. It should include the drive letter, when appropriate.
+
+type: keyword
+
+example: /home/alice/example.png
+
+
+**`threat.indicator.file.path.text`**
+: type: match_only_text
+
+
+**`threat.indicator.file.pe.architecture`**
+: CPU architecture target for the file.
+
+type: keyword
+
+example: x64
+
+
+**`threat.indicator.file.pe.company`**
+: Internal company name of the file, provided at compile-time.
+
+type: keyword
+
+example: Microsoft Corporation
+
+
+**`threat.indicator.file.pe.description`**
+: Internal description of the file, provided at compile-time.
+
+type: keyword
+
+example: Paint
+
+
+**`threat.indicator.file.pe.file_version`**
+: Internal version of the file, provided at compile-time.
+
+type: keyword
+
+example: 6.3.9600.17415
+
+
+**`threat.indicator.file.pe.imphash`**
+: A hash of the imports in a PE file. An imphash — or import hash — can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. Learn more at [https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html](https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html).
+
+type: keyword
+
+example: 0c6803c4e922103c4dca5963aad36ddf
+
+
+**`threat.indicator.file.pe.original_file_name`**
+: Internal name of the file, provided at compile-time.
+
+type: keyword
+
+example: MSPAINT.EXE
+
+
+**`threat.indicator.file.pe.product`**
+: Internal product name of the file, provided at compile-time.
+
+type: keyword
+
+example: Microsoft® Windows® Operating System
+
+
+**`threat.indicator.file.size`**
+: File size in bytes. Only relevant when `file.type` is "file".
+
+type: long
+
+example: 16384
+
+
+**`threat.indicator.file.target_path`**
+: Target path for symlinks.
+
+type: keyword
+
+
+**`threat.indicator.file.target_path.text`**
+: type: match_only_text
+
+
+**`threat.indicator.file.type`**
+: File type (file, dir, or symlink).
+
+type: keyword
+
+example: file
+
+
+**`threat.indicator.file.uid`**
+: The user ID (UID) or security identifier (SID) of the file owner.
+
+type: keyword
+
+example: 1001
+
+
+**`threat.indicator.file.x509.alternative_names`**
+: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
+
+type: keyword
+
+example: *.elastic.co
+
+
+**`threat.indicator.file.x509.issuer.common_name`**
+: List of common name (CN) of issuing certificate authority.
+
+type: keyword
+
+example: Example SHA2 High Assurance Server CA
+
+
+**`threat.indicator.file.x509.issuer.country`**
+: List of country (C) codes
+
+type: keyword
+
+example: US
+
+
+**`threat.indicator.file.x509.issuer.distinguished_name`**
+: Distinguished name (DN) of issuing certificate authority.
+
+type: keyword
+
+example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
+
+
+**`threat.indicator.file.x509.issuer.locality`**
+: List of locality names (L)
+
+type: keyword
+
+example: Mountain View
+
+
+**`threat.indicator.file.x509.issuer.organization`**
+: List of organizations (O) of issuing certificate authority.
+
+type: keyword
+
+example: Example Inc
+
+
+**`threat.indicator.file.x509.issuer.organizational_unit`**
+: List of organizational units (OU) of issuing certificate authority.
+
+type: keyword
+
+example: www.example.com
+
+
+**`threat.indicator.file.x509.issuer.state_or_province`**
+: List of state or province names (ST, S, or P)
+
+type: keyword
+
+example: California
+
+
+**`threat.indicator.file.x509.not_after`**
+: Time at which the certificate is no longer considered valid.
+
+type: date
+
+example: 2020-07-16 03:15:39+00:00
+
+
+**`threat.indicator.file.x509.not_before`**
+: Time at which the certificate is first considered valid.
+
+type: date
+
+example: 2019-08-16 01:40:25+00:00
+
+
+**`threat.indicator.file.x509.public_key_algorithm`**
+: Algorithm used to generate the public key.
+
+type: keyword
+
+example: RSA
+
+
+**`threat.indicator.file.x509.public_key_curve`**
+: The curve used by the elliptic curve public key algorithm. This is algorithm specific.
+
+type: keyword
+
+example: nistp521
+
+
+**`threat.indicator.file.x509.public_key_exponent`**
+: Exponent used to derive the public key. This is algorithm specific.
+
+type: long
+
+example: 65537
+
+Field is not indexed.
+
+
+**`threat.indicator.file.x509.public_key_size`**
+: The size of the public key space in bits.
+
+type: long
+
+example: 2048
+
+
+**`threat.indicator.file.x509.serial_number`**
+: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and with uppercase characters.
+
+type: keyword
+
+example: 55FBB9C7DEBF09809D12CCAA
+
+
+**`threat.indicator.file.x509.signature_algorithm`**
+: Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353).
+
+type: keyword
+
+example: SHA256-RSA
+
+
+**`threat.indicator.file.x509.subject.common_name`**
+: List of common names (CN) of subject.
+
+type: keyword
+
+example: shared.global.example.net
+
+
+**`threat.indicator.file.x509.subject.country`**
+: List of country (C) code
+
+type: keyword
+
+example: US
+
+
+**`threat.indicator.file.x509.subject.distinguished_name`**
+: Distinguished name (DN) of the certificate subject entity.
+
+type: keyword
+
+example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
+
+
+**`threat.indicator.file.x509.subject.locality`**
+: List of locality names (L)
+
+type: keyword
+
+example: San Francisco
+
+
+**`threat.indicator.file.x509.subject.organization`**
+: List of organizations (O) of subject.
+
+type: keyword
+
+example: Example, Inc.
+
+
+**`threat.indicator.file.x509.subject.organizational_unit`**
+: List of organizational units (OU) of subject.
+
+type: keyword
+
+
+**`threat.indicator.file.x509.subject.state_or_province`**
+: List of state or province names (ST, S, or P)
+
+type: keyword
+
+example: California
+
+
+**`threat.indicator.file.x509.version_number`**
+: Version of x509 format.
+
+type: keyword
+
+example: 3
+
+
+**`threat.indicator.first_seen`**
+: The date and time when intelligence source first reported sighting this indicator.
+
+type: date
+
+example: 2020-11-05T17:25:47.000Z
+
+
+**`threat.indicator.geo.city_name`**
+: City name.
+
+type: keyword
+
+example: Montreal
+
+
+**`threat.indicator.geo.continent_code`**
+: Two-letter code representing continent’s name.
+
+type: keyword
+
+example: NA
+
+
+**`threat.indicator.geo.continent_name`**
+: Name of the continent.
+
+type: keyword
+
+example: North America
+
+
+**`threat.indicator.geo.country_iso_code`**
+: Country ISO code.
+
+type: keyword
+
+example: CA
+
+
+**`threat.indicator.geo.country_name`**
+: Country name.
+
+type: keyword
+
+example: Canada
+
+
+**`threat.indicator.geo.location`**
+: Longitude and latitude.
+
+type: geo_point
+
+example: { "lon": -73.614830, "lat": 45.505918 }
+
+
+**`threat.indicator.geo.name`**
+: User-defined description of a location, at the level of granularity they care about. Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. Not typically used in automated geolocation.
+
+type: keyword
+
+example: boston-dc
+
+
+**`threat.indicator.geo.postal_code`**
+: Postal code associated with the location. Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
+
+type: keyword
+
+example: 94040
+
+
+**`threat.indicator.geo.region_iso_code`**
+: Region ISO code.
+
+type: keyword
+
+example: CA-QC
+
+
+**`threat.indicator.geo.region_name`**
+: Region name.
+
+type: keyword
+
+example: Quebec
+
+
+**`threat.indicator.geo.timezone`**
+: The time zone of the location, such as IANA time zone name.
+
+type: keyword
+
+example: America/Argentina/Buenos_Aires
+
+
+**`threat.indicator.ip`**
+: Identifies a threat indicator as an IP address (irrespective of direction).
+
+type: ip
+
+example: 1.2.3.4
+
+
+**`threat.indicator.last_seen`**
+: The date and time when intelligence source last reported sighting this indicator.
+
+type: date
+
+example: 2020-11-05T17:25:47.000Z
+
+
+**`threat.indicator.marking.tlp`**
+: Traffic Light Protocol sharing markings. Recommended values are: * WHITE * GREEN * AMBER * RED
+
+type: keyword
+
+example: WHITE
+
+
+**`threat.indicator.modified_at`**
+: The date and time when intelligence source last modified information for this indicator.
+
+type: date
+
+example: 2020-11-05T17:25:47.000Z
+
+
+**`threat.indicator.port`**
+: Identifies a threat indicator as a port number (irrespective of direction).
+
+type: long
+
+example: 443
+
+
+**`threat.indicator.provider`**
+: The name of the indicator’s provider.
+
+type: keyword
+
+example: lrz_urlhaus
+
+
+**`threat.indicator.reference`**
+: Reference URL linking to additional information about this indicator.
+
+type: keyword
+
+example: [https://system.example.com/indicator/0001234](https://system.example.com/indicator/0001234)
+
+
+**`threat.indicator.registry.data.bytes`**
+: Original bytes written with base64 encoding. For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values.
+
+type: keyword
+
+example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA=
+
+
+**`threat.indicator.registry.data.strings`**
+: Content when writing string types. Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of string with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g. `"1"`).
+
+type: wildcard
+
+example: ["C:\rta\red_ttp\bin\myapp.exe"]
+
+
+**`threat.indicator.registry.data.type`**
+: Standard registry type for encoding contents
+
+type: keyword
+
+example: REG_SZ
+
+
+**`threat.indicator.registry.hive`**
+: Abbreviated name for the hive.
+
+type: keyword
+
+example: HKLM
+
+
+**`threat.indicator.registry.key`**
+: Hive-relative path of keys.
+
+type: keyword
+
+example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe
+
+
+**`threat.indicator.registry.path`**
+: Full path, including hive, key and value
+
+type: keyword
+
+example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger
+
+
+**`threat.indicator.registry.value`**
+: Name of the value written.
+
+type: keyword
+
+example: Debugger
+
+
+**`threat.indicator.scanner_stats`**
+: Count of AV/EDR vendors that successfully detected malicious file or URL.
+
+type: long
+
+example: 4
+
+
+**`threat.indicator.sightings`**
+: Number of times this indicator was observed conducting threat activity.
+
+type: long
+
+example: 20
+
+
+**`threat.indicator.type`**
+: Type of indicator as represented by Cyber Observable in STIX 2.0. Recommended values: * autonomous-system * artifact * directory * domain-name * email-addr * file * ipv4-addr * ipv6-addr * mac-addr * mutex * port * process * software * url * user-account * windows-registry-key * x509-certificate
+
+type: keyword
+
+example: ipv4-addr
+
+
+**`threat.indicator.url.domain`**
+: Domain of the url, such as "www.elastic.co". In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field. If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field.
+
+type: keyword
+
+example: www.elastic.co
+
+
+**`threat.indicator.url.extension`**
+: The field contains the file extension from the original request url, excluding the leading dot. The file extension is only set if it exists, as not every url has a file extension. The leading period must not be included. For example, the value must be "png", not ".png". Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz").
+
+type: keyword
+
+example: png
+
+
+**`threat.indicator.url.fragment`**
+: Portion of the url after the `#`, such as "top". The `#` is not part of the fragment.
+
+type: keyword
+
+
+**`threat.indicator.url.full`**
+: If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source.
+
+type: wildcard
+
+example: [https://www.elastic.co:443/search?q=elasticsearch#top](https://www.elastic.co:443/search?q=elasticsearch#top)
+
+
+**`threat.indicator.url.full.text`**
+: type: match_only_text
+
+
+**`threat.indicator.url.original`**
+: Unmodified original url as seen in the event source. Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path. This field is meant to represent the URL as it was observed, complete or not.
+
+type: wildcard
+
+example: [https://www.elastic.co:443/search?q=elasticsearch#top](https://www.elastic.co:443/search?q=elasticsearch#top) or /search?q=elasticsearch
+
+
+**`threat.indicator.url.original.text`**
+: type: match_only_text
+
+
+**`threat.indicator.url.password`**
+: Password of the request.
+
+type: keyword
+
+
+**`threat.indicator.url.path`**
+: Path of the request, such as "/search".
+
+type: wildcard
+
+
+**`threat.indicator.url.port`**
+: Port of the request, such as 443.
+
+type: long
+
+example: 443
+
+format: string
+
+
+**`threat.indicator.url.query`**
+: The query field describes the query string of the request, such as "q=elasticsearch". The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases.
+
+type: keyword
+
+
+**`threat.indicator.url.registered_domain`**
+: The highest registered url domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
+
+type: keyword
+
+example: example.com
+
+
+**`threat.indicator.url.scheme`**
+: Scheme of the request, such as "https". Note: The `:` is not part of the scheme.
+
+type: keyword
+
+example: https
+
+
+**`threat.indicator.url.subdomain`**
+: The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
+
+type: keyword
+
+example: east
+
+
+**`threat.indicator.url.top_level_domain`**
+: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
+
+type: keyword
+
+example: co.uk
+
+
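+The registered domain, top-level domain, and subdomain fields above cannot be derived by naive string splitting; they need a public suffix list lookup. A minimal sketch, assuming the third-party `tldextract` package (which bundles the public suffix list and is not part of Beats):
+
+```python
+# Hypothetical helper: derive suffix-based url fields from a hostname.
+import tldextract
+
+def suffix_fields(hostname: str) -> dict:
+    ext = tldextract.extract(hostname)
+    return {
+        "threat.indicator.url.registered_domain": ext.registered_domain,  # "mydomain.co.uk"
+        "threat.indicator.url.top_level_domain": ext.suffix,              # "co.uk"
+        # Caveat: tldextract reports "www.east" here, while the subdomain
+        # definition above excludes the host name and expects "east".
+        "threat.indicator.url.subdomain": ext.subdomain,
+    }
+
+print(suffix_fields("www.east.mydomain.co.uk"))
+```
+
+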
+**`threat.indicator.url.username`**
+: Username of the request.
+
+type: keyword
+
+
+**`threat.indicator.x509.alternative_names`**
+: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
+
+type: keyword
+
+example: *.elastic.co
+
+
+**`threat.indicator.x509.issuer.common_name`**
+: List of common name (CN) of issuing certificate authority.
+
+type: keyword
+
+example: Example SHA2 High Assurance Server CA
+
+
+**`threat.indicator.x509.issuer.country`**
+: List of country (C) codes
+
+type: keyword
+
+example: US
+
+
+**`threat.indicator.x509.issuer.distinguished_name`**
+: Distinguished name (DN) of issuing certificate authority.
+
+type: keyword
+
+example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
+
+
+**`threat.indicator.x509.issuer.locality`**
+: List of locality names (L)
+
+type: keyword
+
+example: Mountain View
+
+
+**`threat.indicator.x509.issuer.organization`**
+: List of organizations (O) of issuing certificate authority.
+
+type: keyword
+
+example: Example Inc
+
+
+**`threat.indicator.x509.issuer.organizational_unit`**
+: List of organizational units (OU) of issuing certificate authority.
+
+type: keyword
+
+example: www.example.com
+
+
+**`threat.indicator.x509.issuer.state_or_province`**
+: List of state or province names (ST, S, or P)
+
+type: keyword
+
+example: California
+
+
+**`threat.indicator.x509.not_after`**
+: Time at which the certificate is no longer considered valid.
+
+type: date
+
+example: 2020-07-16 03:15:39+00:00
+
+
+**`threat.indicator.x509.not_before`**
+: Time at which the certificate is first considered valid.
+
+type: date
+
+example: 2019-08-16 01:40:25+00:00
+
+
+**`threat.indicator.x509.public_key_algorithm`**
+: Algorithm used to generate the public key.
+
+type: keyword
+
+example: RSA
+
+
+**`threat.indicator.x509.public_key_curve`**
+: The curve used by the elliptic curve public key algorithm. This is algorithm specific.
+
+type: keyword
+
+example: nistp521
+
+
+**`threat.indicator.x509.public_key_exponent`**
+: Exponent used to derive the public key. This is algorithm specific.
+
+type: long
+
+example: 65537
+
+Field is not indexed.
+
+
+**`threat.indicator.x509.public_key_size`**
+: The size of the public key space in bits.
+
+type: long
+
+example: 2048
+
+
+**`threat.indicator.x509.serial_number`**
+: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and with uppercase characters.
+
+type: keyword
+
+example: 55FBB9C7DEBF09809D12CCAA
+
+
+**`threat.indicator.x509.signature_algorithm`**
+: Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353).
+
+type: keyword
+
+example: SHA256-RSA
+
+
+**`threat.indicator.x509.subject.common_name`**
+: List of common names (CN) of subject.
+
+type: keyword
+
+example: shared.global.example.net
+
+
+**`threat.indicator.x509.subject.country`**
+: List of country (C) code
+
+type: keyword
+
+example: US
+
+
+**`threat.indicator.x509.subject.distinguished_name`**
+: Distinguished name (DN) of the certificate subject entity.
+
+type: keyword
+
+example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
+
+
+**`threat.indicator.x509.subject.locality`**
+: List of locality names (L)
+
+type: keyword
+
+example: San Francisco
+
+
+**`threat.indicator.x509.subject.organization`**
+: List of organizations (O) of subject.
+
+type: keyword
+
+example: Example, Inc.
+
+
+**`threat.indicator.x509.subject.organizational_unit`**
+: List of organizational units (OU) of subject.
+
+type: keyword
+
+
+**`threat.indicator.x509.subject.state_or_province`**
+: List of state or province names (ST, S, or P)
+
+type: keyword
+
+example: California
+
+
+**`threat.indicator.x509.version_number`**
+: Version of x509 format.
+
+type: keyword
+
+example: 3
+
+
+**`threat.software.alias`**
+: The alias(es) of the software for a set of related intrusion activity that are tracked by a common name in the security community. While not required, you can use a MITRE ATT&CK® associated software description.
+
+type: keyword
+
+example: [ "X-Agent" ]
+
+
+**`threat.software.id`**
+: The id of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. While not required, you can use a MITRE ATT&CK® software id.
+
+type: keyword
+
+example: S0552
+
+
+**`threat.software.name`**
+: The name of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. While not required, you can use a MITRE ATT&CK® software name.
+
+type: keyword
+
+example: AdFind
+
+
+**`threat.software.platforms`**
+: The platforms of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. Recommended Values: * AWS * Azure * Azure AD * GCP * Linux * macOS * Network * Office 365 * SaaS * Windows
+
+While not required, you can use MITRE ATT&CK® software platforms.
+
+type: keyword
+
+example: [ "Windows" ]
+
+
+**`threat.software.reference`**
+: The reference URL of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. While not required, you can use a MITRE ATT&CK® software reference URL.
+
+type: keyword
+
+example: [https://attack.mitre.org/software/S0552/](https://attack.mitre.org/software/S0552/)
+
+
+**`threat.software.type`**
+: The type of software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. Recommended values: * Malware * Tool
+
+While not required, you can use a MITRE ATT&CK® software type.
+
+type: keyword
+
+example: Tool
+
+
+**`threat.tactic.id`**
+: The id of tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example. (ex. [https://attack.mitre.org/tactics/TA0002/](https://attack.mitre.org/tactics/TA0002/))
+
+type: keyword
+
+example: TA0002
+
+
+**`threat.tactic.name`**
+: Name of the type of tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example. (ex. [https://attack.mitre.org/tactics/TA0002/](https://attack.mitre.org/tactics/TA0002/))
+
+type: keyword
+
+example: Execution
+
+
+**`threat.tactic.reference`**
+: The reference url of tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example. (ex. [https://attack.mitre.org/tactics/TA0002/](https://attack.mitre.org/tactics/TA0002/))
+
+type: keyword
+
+example: [https://attack.mitre.org/tactics/TA0002/](https://attack.mitre.org/tactics/TA0002/)
+
+
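+The tactic fields above combine with the technique and subtechnique fields that follow to describe one branch of the ATT&CK matrix. An illustrative event excerpt, using only the example values from this section:
+
+```python
+# Illustrative only: threat.tactic.* and threat.technique.* populated together.
+threat = {
+    "tactic": {
+        "id": "TA0002",
+        "name": "Execution",
+        "reference": "https://attack.mitre.org/tactics/TA0002/",
+    },
+    "technique": {
+        "id": "T1059",
+        "name": "Command and Scripting Interpreter",
+        "reference": "https://attack.mitre.org/techniques/T1059/",
+        "subtechnique": {
+            "id": "T1059.001",
+            "name": "PowerShell",
+            "reference": "https://attack.mitre.org/techniques/T1059/001/",
+        },
+    },
+}
+```
+
+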
+**`threat.technique.id`**
+: The id of technique used by this threat. You can use a MITRE ATT&CK® technique, for example. (ex. [https://attack.mitre.org/techniques/T1059/](https://attack.mitre.org/techniques/T1059/))
+
+type: keyword
+
+example: T1059
+
+
+**`threat.technique.name`**
+: The name of technique used by this threat. You can use a MITRE ATT&CK® technique, for example. (ex. [https://attack.mitre.org/techniques/T1059/](https://attack.mitre.org/techniques/T1059/))
+
+type: keyword
+
+example: Command and Scripting Interpreter
+
+
+**`threat.technique.name.text`**
+: type: match_only_text
+
+
+**`threat.technique.reference`**
+: The reference url of technique used by this threat. You can use a MITRE ATT&CK® technique, for example. (ex. [https://attack.mitre.org/techniques/T1059/](https://attack.mitre.org/techniques/T1059/))
+
+type: keyword
+
+example: [https://attack.mitre.org/techniques/T1059/](https://attack.mitre.org/techniques/T1059/)
+
+
+**`threat.technique.subtechnique.id`**
+: The full id of subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example. (ex. [https://attack.mitre.org/techniques/T1059/001/](https://attack.mitre.org/techniques/T1059/001/))
+
+type: keyword
+
+example: T1059.001
+
+
+**`threat.technique.subtechnique.name`**
+: The name of subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example. (ex. [https://attack.mitre.org/techniques/T1059/001/](https://attack.mitre.org/techniques/T1059/001/))
+
+type: keyword
+
+example: PowerShell
+
+
+**`threat.technique.subtechnique.name.text`**
+: type: match_only_text
+
+
+**`threat.technique.subtechnique.reference`**
+: The reference url of subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example. (ex. [https://attack.mitre.org/techniques/T1059/001/](https://attack.mitre.org/techniques/T1059/001/))
+
+type: keyword
+
+example: [https://attack.mitre.org/techniques/T1059/001/](https://attack.mitre.org/techniques/T1059/001/)
+
+
+
+## tls [_tls]
+
+Fields related to a TLS connection. These fields focus on the TLS protocol itself and intentionally avoid in-depth analysis of the related x.509 certificate files.
+
+**`tls.cipher`**
+: String indicating the cipher used during the current connection.
+
+type: keyword
+
+example: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+
+
+**`tls.client.certificate`**
+: PEM-encoded stand-alone certificate offered by the client. This is usually mutually-exclusive of `client.certificate_chain` since this value also exists in that list.
+
+type: keyword
+
+example: MII…
+
+
+**`tls.client.certificate_chain`**
+: Array of PEM-encoded certificates that make up the certificate chain offered by the client. This is usually mutually-exclusive of `client.certificate` since that value should be the first certificate in the chain.
+
+type: keyword
+
+example: ["MII…", "MII…"]
+
+
+**`tls.client.hash.md5`**
+: Certificate fingerprint using the MD5 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash.
+
+type: keyword
+
+example: 0F76C7F2C55BFD7D8E8B8F4BFBF0C9EC
+
+
+**`tls.client.hash.sha1`**
+: Certificate fingerprint using the SHA1 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash.
+
+type: keyword
+
+example: 9E393D93138888D288266C2D915214D1D1CCEB2A
+
+
+**`tls.client.hash.sha256`**
+: Certificate fingerprint using the SHA256 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash.
+
+type: keyword
+
+example: 0687F666A054EF17A08E2F2162EAB4CBC0D265E1D7875BE74BF3C712CA92DAF0
+
+
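+The three `tls.client.hash.*` fields above are digests of the same DER bytes. A minimal sketch using only the Python standard library; `pem_cert` is a placeholder for a certificate captured from the handshake:
+
+```python
+import hashlib
+import ssl
+
+def cert_fingerprints(pem_cert: str) -> dict:
+    # The fingerprints hash the DER encoding, not the PEM text.
+    der = ssl.PEM_cert_to_DER_cert(pem_cert)
+    return {
+        "tls.client.hash.md5": hashlib.md5(der).hexdigest().upper(),
+        "tls.client.hash.sha1": hashlib.sha1(der).hexdigest().upper(),
+        "tls.client.hash.sha256": hashlib.sha256(der).hexdigest().upper(),
+    }
+```
+
+The `.upper()` calls follow the field descriptions above, which ask for uppercase fingerprints.
+
+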
+**`tls.client.issuer`**
+: Distinguished name of subject of the issuer of the x.509 certificate presented by the client.
+
+type: keyword
+
+example: CN=Example Root CA, OU=Infrastructure Team, DC=example, DC=com
+
+
+**`tls.client.ja3`**
+: A hash that identifies clients based on how they perform an SSL/TLS handshake.
+
+type: keyword
+
+example: d4e5b18d6b55c71272893221c96ba240
+
+
+**`tls.client.not_after`**
+: Date/Time indicating when client certificate is no longer considered valid.
+
+type: date
+
+example: 2021-01-01T00:00:00.000Z
+
+
+**`tls.client.not_before`**
+: Date/Time indicating when client certificate is first considered valid.
+
+type: date
+
+example: 1970-01-01T00:00:00.000Z
+
+
+**`tls.client.server_name`**
+: Also called an SNI, this tells the server the hostname to which the client is attempting to connect. When this value is available, it should get copied to `destination.domain`.
+
+type: keyword
+
+example: www.elastic.co
+
+
+**`tls.client.subject`**
+: Distinguished name of subject of the x.509 certificate presented by the client.
+
+type: keyword
+
+example: CN=myclient, OU=Documentation Team, DC=example, DC=com
+
+
+**`tls.client.supported_ciphers`**
+: Array of ciphers offered by the client during the client hello.
+
+type: keyword
+
+example: ["TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "…"]
+
+
+**`tls.client.x509.alternative_names`**
+: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
+
+type: keyword
+
+example: *.elastic.co
+
+
+**`tls.client.x509.issuer.common_name`**
+: List of common name (CN) of issuing certificate authority.
+
+type: keyword
+
+example: Example SHA2 High Assurance Server CA
+
+
+**`tls.client.x509.issuer.country`**
+: List of country (C) codes
+
+type: keyword
+
+example: US
+
+
+**`tls.client.x509.issuer.distinguished_name`**
+: Distinguished name (DN) of issuing certificate authority.
+
+type: keyword
+
+example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
+
+
+**`tls.client.x509.issuer.locality`**
+: List of locality names (L)
+
+type: keyword
+
+example: Mountain View
+
+
+**`tls.client.x509.issuer.organization`**
+: List of organizations (O) of issuing certificate authority.
+
+type: keyword
+
+example: Example Inc
+
+
+**`tls.client.x509.issuer.organizational_unit`**
+: List of organizational units (OU) of issuing certificate authority.
+
+type: keyword
+
+example: www.example.com
+
+
+**`tls.client.x509.issuer.state_or_province`**
+: List of state or province names (ST, S, or P)
+
+type: keyword
+
+example: California
+
+
+**`tls.client.x509.not_after`**
+: Time at which the certificate is no longer considered valid.
+
+type: date
+
+example: 2020-07-16 03:15:39+00:00
+
+
+**`tls.client.x509.not_before`**
+: Time at which the certificate is first considered valid.
+
+type: date
+
+example: 2019-08-16 01:40:25+00:00
+
+
+**`tls.client.x509.public_key_algorithm`**
+: Algorithm used to generate the public key.
+
+type: keyword
+
+example: RSA
+
+
+**`tls.client.x509.public_key_curve`**
+: The curve used by the elliptic curve public key algorithm. This is algorithm specific.
+
+type: keyword
+
+example: nistp521
+
+
+**`tls.client.x509.public_key_exponent`**
+: Exponent used to derive the public key. This is algorithm specific.
+
+type: long
+
+example: 65537
+
+Field is not indexed.
+
+
+**`tls.client.x509.public_key_size`**
+: The size of the public key space in bits.
+
+type: long
+
+example: 2048
+
+
+**`tls.client.x509.serial_number`**
+: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and with uppercase characters.
+
+type: keyword
+
+example: 55FBB9C7DEBF09809D12CCAA
+
+
+**`tls.client.x509.signature_algorithm`**
+: Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353).
+
+type: keyword
+
+example: SHA256-RSA
+
+
+**`tls.client.x509.subject.common_name`**
+: List of common names (CN) of subject.
+
+type: keyword
+
+example: shared.global.example.net
+
+
+**`tls.client.x509.subject.country`**
+: List of country (C) code
+
+type: keyword
+
+example: US
+
+
+**`tls.client.x509.subject.distinguished_name`**
+: Distinguished name (DN) of the certificate subject entity.
+
+type: keyword
+
+example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
+
+
+**`tls.client.x509.subject.locality`**
+: List of locality names (L)
+
+type: keyword
+
+example: San Francisco
+
+
+**`tls.client.x509.subject.organization`**
+: List of organizations (O) of subject.
+
+type: keyword
+
+example: Example, Inc.
+
+
+**`tls.client.x509.subject.organizational_unit`**
+: List of organizational units (OU) of subject.
+
+type: keyword
+
+
+**`tls.client.x509.subject.state_or_province`**
+: List of state or province names (ST, S, or P)
+
+type: keyword
+
+example: California
+
+
+**`tls.client.x509.version_number`**
+: Version of x509 format.
+
+type: keyword
+
+example: 3
+
+
+**`tls.curve`**
+: String indicating the curve used for the given cipher, when applicable.
+
+type: keyword
+
+example: secp256r1
+
+
+**`tls.established`**
+: Boolean flag indicating if the TLS negotiation was successful and transitioned to an encrypted tunnel.
+
+type: boolean
+
+
+**`tls.next_protocol`**
+: String indicating the protocol being tunneled. Per the values in the IANA registry ([https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids](https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids)), this string should be lower case.
+
+type: keyword
+
+example: http/1.1
+
+
+**`tls.resumed`**
+: Boolean flag indicating if this TLS connection was resumed from an existing TLS negotiation.
+
+type: boolean
+
+
+**`tls.server.certificate`**
+: PEM-encoded stand-alone certificate offered by the server. This is usually mutually-exclusive of `server.certificate_chain` since this value also exists in that list.
+
+type: keyword
+
+example: MII…
+
+
+**`tls.server.certificate_chain`**
+: Array of PEM-encoded certificates that make up the certificate chain offered by the server. This is usually mutually-exclusive of `server.certificate` since that value should be the first certificate in the chain.
+
+type: keyword
+
+example: ["MII…", "MII…"]
+
+
+**`tls.server.hash.md5`**
+: Certificate fingerprint using the MD5 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash.
+
+type: keyword
+
+example: 0F76C7F2C55BFD7D8E8B8F4BFBF0C9EC
+
+
+**`tls.server.hash.sha1`**
+: Certificate fingerprint using the SHA1 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash.
+
+type: keyword
+
+example: 9E393D93138888D288266C2D915214D1D1CCEB2A
+
+
+**`tls.server.hash.sha256`**
+: Certificate fingerprint using the SHA256 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash.
+
+type: keyword
+
+example: 0687F666A054EF17A08E2F2162EAB4CBC0D265E1D7875BE74BF3C712CA92DAF0
+
+
+**`tls.server.issuer`**
+: Subject of the issuer of the x.509 certificate presented by the server.
+
+type: keyword
+
+example: CN=Example Root CA, OU=Infrastructure Team, DC=example, DC=com
+
+
+**`tls.server.ja3s`**
+: A hash that identifies servers based on how they perform an SSL/TLS handshake.
+
+type: keyword
+
+example: 394441ab65754e2207b1e1b457b3641d
+
+
+**`tls.server.not_after`**
+: Timestamp indicating when server certificate is no longer considered valid.
+
+type: date
+
+example: 2021-01-01T00:00:00.000Z
+
+
+**`tls.server.not_before`**
+: Timestamp indicating when server certificate is first considered valid.
+
+type: date
+
+example: 1970-01-01T00:00:00.000Z
+
+
+**`tls.server.subject`**
+: Subject of the x.509 certificate presented by the server.
+
+type: keyword
+
+example: CN=www.example.com, OU=Infrastructure Team, DC=example, DC=com
+
+
+**`tls.server.x509.alternative_names`**
+: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
+
+type: keyword
+
+example: *.elastic.co
+
+
+**`tls.server.x509.issuer.common_name`**
+: List of common name (CN) of issuing certificate authority.
+
+type: keyword
+
+example: Example SHA2 High Assurance Server CA
+
+
+**`tls.server.x509.issuer.country`**
+: List of country (C) codes
+
+type: keyword
+
+example: US
+
+
+**`tls.server.x509.issuer.distinguished_name`**
+: Distinguished name (DN) of issuing certificate authority.
+
+type: keyword
+
+example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
+
+
+**`tls.server.x509.issuer.locality`**
+: List of locality names (L)
+
+type: keyword
+
+example: Mountain View
+
+
+**`tls.server.x509.issuer.organization`**
+: List of organizations (O) of issuing certificate authority.
+
+type: keyword
+
+example: Example Inc
+
+
+**`tls.server.x509.issuer.organizational_unit`**
+: List of organizational units (OU) of issuing certificate authority.
+
+type: keyword
+
+example: www.example.com
+
+
+**`tls.server.x509.issuer.state_or_province`**
+: List of state or province names (ST, S, or P)
+
+type: keyword
+
+example: California
+
+
+**`tls.server.x509.not_after`**
+: Time at which the certificate is no longer considered valid.
+
+type: date
+
+example: 2020-07-16 03:15:39+00:00
+
+
+**`tls.server.x509.not_before`**
+: Time at which the certificate is first considered valid.
+
+type: date
+
+example: 2019-08-16 01:40:25+00:00
+
+
+**`tls.server.x509.public_key_algorithm`**
+: Algorithm used to generate the public key.
+
+type: keyword
+
+example: RSA
+
+
+**`tls.server.x509.public_key_curve`**
+: The curve used by the elliptic curve public key algorithm. This is algorithm specific.
+
+type: keyword
+
+example: nistp521
+
+
+**`tls.server.x509.public_key_exponent`**
+: Exponent used to derive the public key. This is algorithm specific.
+
+type: long
+
+example: 65537
+
+Field is not indexed.
+
+
+**`tls.server.x509.public_key_size`**
+: The size of the public key space in bits.
+
+type: long
+
+example: 2048
+
+
+**`tls.server.x509.serial_number`**
+: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and with uppercase characters.
+
+type: keyword
+
+example: 55FBB9C7DEBF09809D12CCAA
+
+
+**`tls.server.x509.signature_algorithm`**
+: Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353).
+
+type: keyword
+
+example: SHA256-RSA
+
+
+**`tls.server.x509.subject.common_name`**
+: List of common names (CN) of subject.
+
+type: keyword
+
+example: shared.global.example.net
+
+
+**`tls.server.x509.subject.country`**
+: List of country (C) code
+
+type: keyword
+
+example: US
+
+
+**`tls.server.x509.subject.distinguished_name`**
+: Distinguished name (DN) of the certificate subject entity.
+
+type: keyword
+
+example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
+
+
+**`tls.server.x509.subject.locality`**
+: List of locality names (L)
+
+type: keyword
+
+example: San Francisco
+
+
+**`tls.server.x509.subject.organization`**
+: List of organizations (O) of subject.
+
+type: keyword
+
+example: Example, Inc.
+
+
+**`tls.server.x509.subject.organizational_unit`**
+: List of organizational units (OU) of subject.
+
+type: keyword
+
+
+**`tls.server.x509.subject.state_or_province`**
+: List of state or province names (ST, S, or P)
+
+type: keyword
+
+example: California
+
+
+**`tls.server.x509.version_number`**
+: Version of x509 format.
+
+type: keyword
+
+example: 3
+
+
+**`tls.version`**
+: Numeric part of the version parsed from the original string.
+
+type: keyword
+
+example: 1.2
+
+
+**`tls.version_protocol`**
+: Normalized lowercase protocol name parsed from original string.
+
+type: keyword
+
+example: tls
+
+
+**`span.id`**
+: Unique identifier of the span within the scope of its trace. A span represents an operation within a transaction, such as a request to another service, or a database query.
+
+type: keyword
+
+example: 3ff9a8981b7ccd5a
+
+
+**`trace.id`**
+: Unique identifier of the trace. A trace groups multiple events like transactions that belong together. For example, a user request handled by multiple inter-connected services.
+
+type: keyword
+
+example: 4bf92f3577b34da6a3ce929d0e0e4736
+
+
+**`transaction.id`**
+: Unique identifier of the transaction within the scope of its trace. A transaction is the highest level of work measured within a service, such as a request to a server.
+
+type: keyword
+
+example: 00f067aa0ba902b7
+
+
+
+## url [_url_3]
+
+URL fields provide support for complete or partial URLs, and support breaking them down into scheme, domain, path, and so on.
+
+
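+A sketch of this breakdown using the Python standard library; note that `urlsplit().hostname` lowercases the host and strips the brackets from literal IPv6 addresses, which the `url.domain` definition below wants preserved:
+
+```python
+from urllib.parse import urlsplit
+
+def to_ecs_url(original: str) -> dict:
+    parts = urlsplit(original)
+    return {
+        "url.original": original,
+        "url.scheme": parts.scheme,      # ":" is not part of the scheme
+        "url.domain": parts.hostname,
+        "url.port": parts.port,
+        "url.path": parts.path,
+        "url.query": parts.query,        # "?" is excluded
+        "url.fragment": parts.fragment,  # "#" is excluded
+        "url.username": parts.username,
+        "url.password": parts.password,
+    }
+
+print(to_ecs_url("https://www.elastic.co:443/search?q=elasticsearch#top"))
+```
+
+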
+**`url.domain`**
+: Domain of the url, such as "www.elastic.co". In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field. If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field.
+
+type: keyword
+
+example: www.elastic.co
+
+
+**`url.extension`**
+: The field contains the file extension from the original request url, excluding the leading dot. The file extension is only set if it exists, as not every url has a file extension. The leading period must not be included. For example, the value must be "png", not ".png". Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz").
+
+type: keyword
+
+example: png
+
+
+**`url.fragment`**
+: Portion of the url after the `#`, such as "top". The `#` is not part of the fragment.
+
+type: keyword
+
+
+**`url.full`**
+: If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source.
+
+type: wildcard
+
+example: [https://www.elastic.co:443/search?q=elasticsearch#top](https://www.elastic.co:443/search?q=elasticsearch#top)
+
+
+**`url.full.text`**
+: type: match_only_text
+
+
+**`url.original`**
+: Unmodified original url as seen in the event source. Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path. This field is meant to represent the URL as it was observed, complete or not.
+
+type: wildcard
+
+example: [https://www.elastic.co:443/search?q=elasticsearch#top](https://www.elastic.co:443/search?q=elasticsearch#top) or /search?q=elasticsearch
+
+
+**`url.original.text`**
+: type: match_only_text
+
+
+**`url.password`**
+: Password of the request.
+
+type: keyword
+
+
+**`url.path`**
+: Path of the request, such as "/search".
+
+type: wildcard
+
+
+**`url.port`**
+: Port of the request, such as 443.
+
+type: long
+
+example: 443
+
+format: string
+
+
+**`url.query`**
+: The query field describes the query string of the request, such as "q=elasticsearch". The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases.
+
+type: keyword
+
+
+**`url.registered_domain`**
+: The highest registered url domain, stripped of the subdomain. For example, the registered domain for "foo.example.com" is "example.com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
+
+type: keyword
+
+example: example.com
+
+
+**`url.scheme`**
+: Scheme of the request, such as "https". Note: The `:` is not part of the scheme.
+
+type: keyword
+
+example: https
+
+
+**`url.subdomain`**
+: The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
+
+type: keyword
+
+example: east
+
+
+**`url.top_level_domain`**
+: The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". This value can be determined precisely with a list like the public suffix list ([http://publicsuffix.org](http://publicsuffix.org)). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
+
+type: keyword
+
+example: co.uk
+
+
+**`url.username`**
+: Username of the request.
+
+type: keyword
+
+
+
+## user [_user_2]
+
+The user fields describe information about the user that is relevant to the event. Fields can have one entry or multiple entries. If a user has more than one id, provide an array that includes all of them.
+
+**`user.changes.domain`**
+: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name.
+
+type: keyword
+
+
+**`user.changes.email`**
+: User email address.
+
+type: keyword
+
+
+**`user.changes.full_name`**
+: User’s full name, if available.
+
+type: keyword
+
+example: Albert Einstein
+
+
+**`user.changes.full_name.text`**
+: type: match_only_text
+
+
+**`user.changes.group.domain`**
+: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name.
+
+type: keyword
+
+
+**`user.changes.group.id`**
+: Unique identifier for the group on the system/platform.
+
+type: keyword
+
+
+**`user.changes.group.name`**
+: Name of the group.
+
+type: keyword
+
+
+**`user.changes.hash`**
+: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used.
+
+type: keyword
+
+
+**`user.changes.id`**
+: Unique identifier of the user.
+
+type: keyword
+
+example: S-1-5-21-202424912787-2692429404-2351956786-1000
+
+
+**`user.changes.name`**
+: Short name or login of the user.
+
+type: keyword
+
+example: a.einstein
+
+
+**`user.changes.name.text`**
+: type: match_only_text
+
+
+**`user.changes.roles`**
+: Array of user roles at the time of the event.
+
+type: keyword
+
+example: ["kibana_admin", "reporting_user"]
+
+
+**`user.domain`**
+: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name.
+
+type: keyword
+
+
+**`user.effective.domain`**
+: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name.
+
+type: keyword
+
+
+**`user.effective.email`**
+: User email address.
+
+type: keyword
+
+
+**`user.effective.full_name`**
+: User’s full name, if available.
+
+type: keyword
+
+example: Albert Einstein
+
+
+**`user.effective.full_name.text`**
+: type: match_only_text
+
+
+**`user.effective.group.domain`**
+: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name.
+
+type: keyword
+
+
+**`user.effective.group.id`**
+: Unique identifier for the group on the system/platform.
+
+type: keyword
+
+
+**`user.effective.group.name`**
+: Name of the group.
+
+type: keyword
+
+
+**`user.effective.hash`**
+: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used.
+
+type: keyword
+
+
+**`user.effective.id`**
+: Unique identifier of the user.
+
+type: keyword
+
+example: S-1-5-21-202424912787-2692429404-2351956786-1000
+
+
+**`user.effective.name`**
+: Short name or login of the user.
+
+type: keyword
+
+example: a.einstein
+
+
+**`user.effective.name.text`**
+: type: match_only_text
+
+
+**`user.effective.roles`**
+: Array of user roles at the time of the event.
+
+type: keyword
+
+example: ["kibana_admin", "reporting_user"]
+
+
+**`user.email`**
+: User email address.
+
+type: keyword
+
+
+**`user.full_name`**
+: User’s full name, if available.
+
+type: keyword
+
+example: Albert Einstein
+
+
+**`user.full_name.text`**
+: type: match_only_text
+
+
+**`user.group.domain`**
+: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name.
+
+type: keyword
+
+
+**`user.group.id`**
+: Unique identifier for the group on the system/platform.
+
+type: keyword
+
+
+**`user.group.name`**
+: Name of the group.
+
+type: keyword
+
+
+**`user.hash`**
+: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used.
+
+type: keyword
+
+
+**`user.id`**
+: Unique identifier of the user.
+
+type: keyword
+
+example: S-1-5-21-202424912787-2692429404-2351956786-1000
+
+
+**`user.name`**
+: Short name or login of the user.
+
+type: keyword
+
+example: a.einstein
+
+
+**`user.name.text`**
+: type: match_only_text
+
+
+**`user.roles`**
+: Array of user roles at the time of the event.
+
+type: keyword
+
+example: ["kibana_admin", "reporting_user"]
+
+
+**`user.target.domain`**
+: Name of the directory the user is a member of. For example, an LDAP or Active Directory domain name.
+
+type: keyword
+
+
+**`user.target.email`**
+: User email address.
+
+type: keyword
+
+
+**`user.target.full_name`**
+: User’s full name, if available.
+
+type: keyword
+
+example: Albert Einstein
+
+
+**`user.target.full_name.text`**
+: type: match_only_text
+
+
+**`user.target.group.domain`**
+: Name of the directory the group is a member of. For example, an LDAP or Active Directory domain name.
+
+type: keyword
+
+
+**`user.target.group.id`**
+: Unique identifier for the group on the system/platform.
+
+type: keyword
+
+
+**`user.target.group.name`**
+: Name of the group.
+
+type: keyword
+
+
+**`user.target.hash`**
+: Unique user hash to correlate information for a user in anonymized form. Useful if `user.id` or `user.name` contain confidential information and cannot be used.
+
+type: keyword
+
+
+**`user.target.id`**
+: Unique identifier of the user.
+
+type: keyword
+
+example: S-1-5-21-202424912787-2692429404-2351956786-1000
+
+
+**`user.target.name`**
+: Short name or login of the user.
+
+type: keyword
+
+example: a.einstein
+
+
+**`user.target.name.text`**
+: type: match_only_text
+
+
+**`user.target.roles`**
+: Array of user roles at the time of the event.
+
+type: keyword
+
+example: ["kibana_admin", "reporting_user"]
+
+
+
+## user_agent [_user_agent]
+
+The user_agent fields normally come from a browser request. They often show up in web service logs coming from the parsed user agent string.
+
+**`user_agent.device.name`**
+: Name of the device.
+
+type: keyword
+
+example: iPhone
+
+
+**`user_agent.name`**
+: Name of the user agent.
+
+type: keyword
+
+example: Safari
+
+
+**`user_agent.original`**
+: Unparsed user_agent string.
+
+type: keyword
+
+example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1
+
+
+**`user_agent.original.text`**
+: type: match_only_text
+
+
+**`user_agent.os.family`**
+: OS family (such as redhat, debian, freebsd, windows).
+
+type: keyword
+
+example: debian
+
+
+**`user_agent.os.full`**
+: Operating system name, including the version or code name.
+
+type: keyword
+
+example: Mac OS Mojave
+
+
+**`user_agent.os.full.text`**
+: type: match_only_text
+
+
+**`user_agent.os.kernel`**
+: Operating system kernel version as a raw string.
+
+type: keyword
+
+example: 4.4.0-112-generic
+
+
+**`user_agent.os.name`**
+: Operating system name, without the version.
+
+type: keyword
+
+example: Mac OS X
+
+
+**`user_agent.os.name.text`**
+: type: match_only_text
+
+
+**`user_agent.os.platform`**
+: Operating system platform (such as centos, ubuntu, windows).
+
+type: keyword
+
+example: darwin
+
+
+**`user_agent.os.type`**
+: Use the `os.type` field to categorize the operating system into one of the broad commercial families. One of the following values should be used (lowercase): linux, macos, unix, windows. If the OS you’re dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition.
+
+type: keyword
+
+example: macos
+
+
+**`user_agent.os.version`**
+: Operating system version as a raw string.
+
+type: keyword
+
+example: 10.14.1
+
+
+**`user_agent.version`**
+: Version of the user agent.
+
+type: keyword
+
+example: 12.0
+
+
+
+## vlan [_vlan]
+
+The VLAN fields are used to identify 802.1q tag(s) of a packet, as well as ingress and egress VLAN associations of an observer in relation to a specific packet or connection. Network.vlan fields are used to record a single VLAN tag, or the outer tag in the case of q-in-q encapsulations, for a packet or connection as observed, typically provided by a network sensor (e.g. Zeek, Wireshark) passively reporting on traffic. Network.inner VLAN fields are used to report inner q-in-q 802.1q tags (multiple 802.1q encapsulations) as observed, typically provided by a network sensor (e.g. Zeek, Wireshark) passively reporting on traffic. Network.inner VLAN fields should only be used in addition to network.vlan fields to indicate q-in-q tagging. Observer.ingress and observer.egress VLAN values are used to record observer specific information when observer events contain discrete ingress and egress VLAN information, typically provided by firewalls, routers, or load balancers.
+
+**`vlan.id`**
+: VLAN ID as reported by the observer.
+
+type: keyword
+
+example: 10
+
+
+**`vlan.name`**
+: Optional VLAN name as reported by the observer.
+
+type: keyword
+
+example: outside
+
+
+
+## vulnerability [_vulnerability]
+
+The vulnerability fields describe information about a vulnerability that is relevant to an event.
+
+**`vulnerability.category`**
+: The type of system or architecture that the vulnerability affects. These may be platform-specific (for example, Debian or SUSE) or general (for example, Database or Firewall). For example ([Qualys vulnerability categories](https://qualysguard.qualys.com/qwebhelp/fo_portal/knowledgebase/vulnerability_categories.htm)) This field must be an array.
+
+type: keyword
+
+example: ["Firewall"]
+
+
+**`vulnerability.classification`**
+: The classification of the vulnerability scoring system. For example ([https://www.first.org/cvss/](https://www.first.org/cvss/))
+
+type: keyword
+
+example: CVSS
+
+
+**`vulnerability.description`**
+: The description of the vulnerability that provides additional context of the vulnerability. For example ([Common Vulnerabilities and Exposure CVE description](https://cve.mitre.org/about/faqs.html#cve_entry_descriptions_created))
+
+type: keyword
+
+example: In macOS before 2.12.6, there is a vulnerability in the RPC…
+
+
+**`vulnerability.description.text`**
+: type: match_only_text
+
+
+**`vulnerability.enumeration`**
+: The type of identifier used for this vulnerability. For example ([https://cve.mitre.org/about/](https://cve.mitre.org/about/))
+
+type: keyword
+
+example: CVE
+
+
+**`vulnerability.id`**
+: The identification (ID) is the number portion of a vulnerability entry. It includes a unique identification number for the vulnerability. For example ([Common Vulnerabilities and Exposure CVE ID](https://cve.mitre.org/about/faqs.html#what_is_cve_id))
+
+type: keyword
+
+example: CVE-2019-00001
+
+
+**`vulnerability.reference`**
+: A resource that provides additional information, context, and mitigations for the identified vulnerability.
+
+type: keyword
+
+example: [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6111](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6111)
+
+
+**`vulnerability.report_id`**
+: The report or scan identification number.
+
+type: keyword
+
+example: 20191018.0001
+
+
+**`vulnerability.scanner.vendor`**
+: The name of the vulnerability scanner vendor.
+
+type: keyword
+
+example: Tenable
+
+
+**`vulnerability.score.base`**
+: Scores can range from 0.0 to 10.0, with 10.0 being the most severe. Base scores cover an assessment for exploitability metrics (attack vector, complexity, privileges, and user interaction), impact metrics (confidentiality, integrity, and availability), and scope. For example ([https://www.first.org/cvss/specification-document](https://www.first.org/cvss/specification-document))
+
+type: float
+
+example: 5.5
+
+
+**`vulnerability.score.environmental`**
+: Scores can range from 0.0 to 10.0, with 10.0 being the most severe. Environmental scores cover an assessment for any modified Base metrics, confidentiality, integrity, and availability requirements. For example ([https://www.first.org/cvss/specification-document](https://www.first.org/cvss/specification-document))
+
+type: float
+
+example: 5.5
+
+
+**`vulnerability.score.temporal`**
+: Scores can range from 0.0 to 10.0, with 10.0 being the most severe. Temporal scores cover an assessment for code maturity, remediation level, and confidence. For example ([https://www.first.org/cvss/specification-document](https://www.first.org/cvss/specification-document))
+
+type: float
+
+
+**`vulnerability.score.version`**
+: The National Vulnerability Database (NVD) provides qualitative severity rankings of "Low", "Medium", and "High" for CVSS v2.0 base score ranges in addition to the severity ratings for CVSS v3.0 as they are defined in the CVSS v3.0 specification. CVSS is owned and managed by FIRST.Org, Inc. (FIRST), a US-based non-profit organization, whose mission is to help computer security incident response teams across the world. For example ([https://nvd.nist.gov/vuln-metrics/cvss](https://nvd.nist.gov/vuln-metrics/cvss))
+
+type: keyword
+
+example: 2.0
+
+
+**`vulnerability.severity`**
+: The severity of the vulnerability can help with metrics and internal prioritization regarding remediation. For example ([https://nvd.nist.gov/vuln-metrics/cvss](https://nvd.nist.gov/vuln-metrics/cvss))
+
+type: keyword
+
+example: Critical
+
+
+
+## x509 [_x509]
+
+This implements the common core fields for x509 certificates. This information is likely logged with TLS sessions, digital signatures found in executable binaries, S/MIME information in email bodies, or analysis of files on disk. When the certificate relates to a file, use the fields at `file.x509`. When hashes of the DER-encoded certificate are available, the `hash` data set should be populated as well (e.g. `file.hash.sha256`). Events that contain certificate information about network connections should use the x509 fields under the relevant TLS fields: `tls.server.x509` and/or `tls.client.x509`.
+
+
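+A sketch of extracting a few of these fields from a certificate, assuming the third-party `cryptography` package (not part of Beats); `pem_bytes` is a placeholder:
+
+```python
+from cryptography import x509
+
+def to_ecs_x509(pem_bytes: bytes) -> dict:
+    cert = x509.load_pem_x509_certificate(pem_bytes)
+    return {
+        # Uppercase hex without colons, per x509.serial_number below.
+        "x509.serial_number": format(cert.serial_number, "X"),
+        "x509.issuer.distinguished_name": cert.issuer.rfc4514_string(),
+        "x509.subject.distinguished_name": cert.subject.rfc4514_string(),
+        "x509.not_before": cert.not_valid_before,
+        "x509.not_after": cert.not_valid_after,
+        "x509.version_number": cert.version.value + 1,  # Version.v3 encodes as 2
+    }
+```
+
+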
This information is likely logged with TLS sessions, digital signatures found in executable binaries, S/MIME information in email bodies, or analysis of files on disk. When the certificate relates to a file, use the fields at `file.x509`. When hashes of the DER-encoded certificate are available, the `hash` data set should be populated as well (e.g. `file.hash.sha256`). Events that contain certificate information about network connections should use the x509 fields under the relevant TLS fields: `tls.server.x509` and/or `tls.client.x509`. + +**`x509.alternative_names`** +: List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. + +type: keyword + +example: *.elastic.co + + +**`x509.issuer.common_name`** +: List of common names (CN) of issuing certificate authority. + +type: keyword + +example: Example SHA2 High Assurance Server CA + + +**`x509.issuer.country`** +: List of country (C) codes. + +type: keyword + +example: US + + +**`x509.issuer.distinguished_name`** +: Distinguished name (DN) of issuing certificate authority. + +type: keyword + +example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA + + +**`x509.issuer.locality`** +: List of locality names (L). + +type: keyword + +example: Mountain View + + +**`x509.issuer.organization`** +: List of organizations (O) of issuing certificate authority. + +type: keyword + +example: Example Inc + + +**`x509.issuer.organizational_unit`** +: List of organizational units (OU) of issuing certificate authority. + +type: keyword + +example: www.example.com + + +**`x509.issuer.state_or_province`** +: List of state or province names (ST, S, or P). + +type: keyword + +example: California + + +**`x509.not_after`** +: Time at which the certificate is no longer considered valid. + +type: date + +example: 2020-07-16 03:15:39+00:00 + + +**`x509.not_before`** +: Time at which the certificate is first considered valid. + +type: date + +example: 2019-08-16 01:40:25+00:00 + + +**`x509.public_key_algorithm`** +: Algorithm used to generate the public key. + +type: keyword + +example: RSA + + +**`x509.public_key_curve`** +: The curve used by the elliptic curve public key algorithm. This is algorithm specific. + +type: keyword + +example: nistp521 + + +**`x509.public_key_exponent`** +: Exponent used to derive the public key. This is algorithm specific. + +type: long + +example: 65537 + +Field is not indexed. + + +**`x509.public_key_size`** +: The size of the public key space in bits. + +type: long + +example: 2048 + + +**`x509.serial_number`** +: Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. + +type: keyword + +example: 55FBB9C7DEBF09809D12CCAA + + +**`x509.signature_algorithm`** +: Identifier for certificate signature algorithm. We recommend using the names found in the Go Lang Crypto library. See [https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353](https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353). + +type: keyword + +example: SHA256-RSA + + +**`x509.subject.common_name`** +: List of common names (CN) of subject.
+ +type: keyword + +example: shared.global.example.net + + +**`x509.subject.country`** +: List of country (C) codes. + +type: keyword + +example: US + + +**`x509.subject.distinguished_name`** +: Distinguished name (DN) of the certificate subject entity. + +type: keyword + +example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net + + +**`x509.subject.locality`** +: List of locality names (L). + +type: keyword + +example: San Francisco + + +**`x509.subject.organization`** +: List of organizations (O) of subject. + +type: keyword + +example: Example, Inc. + + +**`x509.subject.organizational_unit`** +: List of organizational units (OU) of subject. + +type: keyword + + +**`x509.subject.state_or_province`** +: List of state or province names (ST, S, or P). + +type: keyword + +example: California + + +**`x509.version_number`** +: Version of x509 format. + +type: keyword + +example: 3 + + diff --git a/docs/reference/filebeat/exported-fields-elasticsearch.md b/docs/reference/filebeat/exported-fields-elasticsearch.md new file mode 100644 index 000000000000..e16a887cb1fb --- /dev/null +++ b/docs/reference/filebeat/exported-fields-elasticsearch.md @@ -0,0 +1,589 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-elasticsearch.html +--- + +# Elasticsearch fields [exported-fields-elasticsearch] + +elasticsearch Module + + +## elasticsearch [_elasticsearch] + +**`elasticsearch.component`** +: Elasticsearch component from where the log event originated + +type: keyword + +example: o.e.c.m.MetaDataCreateIndexService + + +**`elasticsearch.cluster.uuid`** +: UUID of the cluster + +type: keyword + +example: GmvrbHlNTiSVYiPf8kxg9g + + +**`elasticsearch.cluster.name`** +: Name of the cluster + +type: keyword + +example: docker-cluster + + +**`elasticsearch.node.id`** +: ID of the node + +type: keyword + +example: DSiWcTyeThWtUXLB9J0BMw + + +**`elasticsearch.node.name`** +: Name of the node + +type: keyword + +example: vWNJsZ3 + + +**`elasticsearch.index.name`** +: Index name + +type: keyword + +example: filebeat-test-input + + +**`elasticsearch.index.id`** +: Index ID + +type: keyword + +example: aOGgDwbURfCV57AScqbCgw + + +**`elasticsearch.shard.id`** +: ID of the shard + +type: keyword + +example: 0 + + +**`elasticsearch.elastic_product_origin`** +: Used by the Elastic Stack to identify which component of the stack sent the request + +type: keyword + +example: kibana + + +**`elasticsearch.http.request.x_opaque_id`** +: Used by Elasticsearch to throttle and deduplicate deprecation warnings + +type: keyword + +example: v7app + + +**`elasticsearch.event.category`** +: Category of the deprecation event + +type: keyword + +example: compatible_api + + +**`elasticsearch.audit.layer`** +: The layer from which this event originated: rest, transport or ip_filter + +type: keyword + +example: rest + + +**`elasticsearch.audit.event_type`** +: The type of event that occurred: anonymous_access_denied, authentication_failed, access_denied, access_granted, connection_granted, connection_denied, tampered_request, run_as_granted, run_as_denied + +type: keyword + +example: access_granted + + +**`elasticsearch.audit.origin.type`** +: Where the request originated: rest (request originated from a REST API request), transport (request was received on the transport channel), local_node (the local node issued the request) + +type: keyword + +example: local_node + + +**`elasticsearch.audit.realm`** +: The authentication realm the authentication was validated
against + +type: keyword + + +**`elasticsearch.audit.user.realm`** +: The user’s authentication realm, if authenticated + +type: keyword + + +**`elasticsearch.audit.user.roles`** +: Roles to which the principal belongs + +type: keyword + +example: [*kibana_admin*, *beats_admin*] + + +**`elasticsearch.audit.user.run_as.name`** +: type: keyword + + +**`elasticsearch.audit.user.run_as.realm`** +: type: keyword + + +**`elasticsearch.audit.component`** +: type: keyword + + +**`elasticsearch.audit.action`** +: The name of the action that was executed + +type: keyword + +example: cluster:monitor/main + + +**`elasticsearch.audit.url.params`** +: REST URI parameters + +example: {username=jacknich2} + + +**`elasticsearch.audit.indices`** +: Indices accessed by action + +type: keyword + +example: [*foo-2019.01.04*, *foo-2019.01.03*, *foo-2019.01.06*] + + +**`elasticsearch.audit.request.id`** +: Unique ID of request + +type: keyword + +example: WzL_kb6VSvOhAq0twPvHOQ + + +**`elasticsearch.audit.request.name`** +: The type of request that was executed + +type: keyword + +example: ClearScrollRequest + + +**`elasticsearch.audit.request_body`** +: type: alias + +alias to: http.request.body.content + + +**`elasticsearch.audit.origin_address`** +: type: alias + +alias to: source.ip + + +**`elasticsearch.audit.uri`** +: type: alias + +alias to: url.original + + +**`elasticsearch.audit.principal`** +: type: alias + +alias to: user.name + + +**`elasticsearch.audit.message`** +: type: text + + +**`elasticsearch.audit.invalidate.apikeys.owned_by_authenticated_user`** +: type: boolean + + +**`elasticsearch.audit.authentication.type`** +: type: keyword + + +**`elasticsearch.audit.opaque_id`** +: type: text + + + +## deprecation [_deprecation] + + +## gc [_gc] + +GC fileset fields. + + +## phase [_phase] + +Fields specific to GC phase. + +**`elasticsearch.gc.phase.name`** +: Name of the GC collection phase. + +type: keyword + + +**`elasticsearch.gc.phase.duration_sec`** +: Collection phase duration according to the Java virtual machine. + +type: float + + +**`elasticsearch.gc.phase.scrub_symbol_table_time_sec`** +: Pause time in seconds cleaning up symbol tables. + +type: float + + +**`elasticsearch.gc.phase.scrub_string_table_time_sec`** +: Pause time in seconds cleaning up string tables. + +type: float + + +**`elasticsearch.gc.phase.weak_refs_processing_time_sec`** +: Time spent processing weak references in seconds. + +type: float + + +**`elasticsearch.gc.phase.parallel_rescan_time_sec`** +: Time spent in seconds marking live objects while the application is stopped. + +type: float + + +**`elasticsearch.gc.phase.class_unload_time_sec`** +: Time spent unloading unused classes in seconds. + +type: float + + + +## cpu_time [_cpu_time] + +Process CPU time spent performing collections. + +**`elasticsearch.gc.phase.cpu_time.user_sec`** +: CPU time spent outside the kernel. + +type: float + + +**`elasticsearch.gc.phase.cpu_time.sys_sec`** +: CPU time spent inside the kernel. + +type: float + + +**`elasticsearch.gc.phase.cpu_time.real_sec`** +: Total elapsed CPU time spent to complete the collection from start to finish. + +type: float + + +**`elasticsearch.gc.jvm_runtime_sec`** +: The time from JVM startup in seconds, as a floating point number. + +type: float + + +**`elasticsearch.gc.threads_total_stop_time_sec`** +: Total time in seconds that garbage collection threads were stopped. + +type: float + + +**`elasticsearch.gc.stopping_threads_time_sec`** +: Time taken to stop threads, in seconds.
+ +type: float + + +**`elasticsearch.gc.tags`** +: GC logging tags. + +type: keyword + + + +## heap [_heap] + +Heap allocation and total size. + +**`elasticsearch.gc.heap.size_kb`** +: Total heap size in kilobytes. + +type: integer + + +**`elasticsearch.gc.heap.used_kb`** +: Used heap in kilobytes. + +type: integer + + + +## old_gen [_old_gen] + +Old generation occupancy and total size. + +**`elasticsearch.gc.old_gen.size_kb`** +: Total size of old generation in kilobytes. + +type: integer + + +**`elasticsearch.gc.old_gen.used_kb`** +: Old generation occupancy in kilobytes. + +type: integer + + + +## young_gen [_young_gen] + +Young generation occupancy and total size. + +**`elasticsearch.gc.young_gen.size_kb`** +: Total size of young generation in kilobytes. + +type: integer + + +**`elasticsearch.gc.young_gen.used_kb`** +: Young generation occupancy in kilobytes. + +type: integer + + + +## server [_server_2] + +Server log file + +**`elasticsearch.server.stacktrace`** +: Field is not indexed. + + + +## gc [_gc_2] + +GC log + + +## young [_young] + +Young GC + +**`elasticsearch.server.gc.young.one`** +: type: long + +example: + + +**`elasticsearch.server.gc.young.two`** +: type: long + +example: + + +**`elasticsearch.server.gc.overhead_seq`** +: Sequence number + +type: long + +example: 3449992 + + +**`elasticsearch.server.gc.collection_duration.ms`** +: Time spent in GC, in milliseconds + +type: float + +example: 1600 + + +**`elasticsearch.server.gc.observation_duration.ms`** +: Total time over which collection was observed, in milliseconds + +type: float + +example: 1800 + + + +## slowlog [_slowlog] + +Slowlog events from Elasticsearch + +**`elasticsearch.slowlog.logger`** +: Logger name + +type: keyword + +example: index.search.slowlog.fetch + + +**`elasticsearch.slowlog.took`** +: Time it took to execute the query + +type: keyword + +example: 300ms + + +**`elasticsearch.slowlog.types`** +: Types + +type: keyword + +example: + + +**`elasticsearch.slowlog.stats`** +: Stats groups + +type: keyword + +example: group1 + + +**`elasticsearch.slowlog.search_type`** +: Search type + +type: keyword + +example: QUERY_THEN_FETCH + + +**`elasticsearch.slowlog.source_query`** +: Slow query + +type: keyword + +example: {"query":{"match_all":{"boost":1.0}}} + + +**`elasticsearch.slowlog.extra_source`** +: Extra source information + +type: keyword + +example: + + +**`elasticsearch.slowlog.total_hits`** +: Total hits + +type: keyword + +example: 42 + + +**`elasticsearch.slowlog.total_shards`** +: Total queried shards + +type: keyword + +example: 22 + + +**`elasticsearch.slowlog.routing`** +: Routing + +type: keyword + +example: s01HZ2QBk9jw4gtgaFtn + + +**`elasticsearch.slowlog.id`** +: Id + +type: keyword + +example: + + +**`elasticsearch.slowlog.type`** +: Type + +type: keyword + +example: doc + + +**`elasticsearch.slowlog.source`** +: Source of document that was indexed + +type: keyword + + +**`elasticsearch.slowlog.user.realm`** +: The authentication realm the user was authenticated against + +type: keyword + +example: default_file + + +**`elasticsearch.slowlog.user.effective.realm`** +: The authentication realm the effective user was authenticated against + +type: keyword + +example: default_file + + +**`elasticsearch.slowlog.auth.type`** +: The authentication type used to authenticate the user. 
One of TOKEN | REALM | API_KEY + +type: keyword + +example: REALM + + +**`elasticsearch.slowlog.apikey.id`** +: The ID of the API key used + +type: keyword + +example: WzL_kb6VSvOhAq0twPvHOQ + + +**`elasticsearch.slowlog.apikey.name`** +: The name of the API key used + +type: keyword + +example: my-api-key + + diff --git a/docs/reference/filebeat/exported-fields-envoyproxy.md b/docs/reference/filebeat/exported-fields-envoyproxy.md new file mode 100644 index 000000000000..f3becb7610a4 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-envoyproxy.md @@ -0,0 +1,52 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-envoyproxy.html +--- + +# Envoyproxy fields [exported-fields-envoyproxy] + +Module for handling logs produced by Envoy + + +## envoyproxy [_envoyproxy] + +Fields from Envoy proxy logs after normalization + +**`envoyproxy.log_type`** +: Envoy log type, normally ACCESS + +type: keyword + + +**`envoyproxy.response_flags`** +: Response flags + +type: keyword + + +**`envoyproxy.upstream_service_time`** +: Upstream service time in nanoseconds + +type: long + +format: duration + + +**`envoyproxy.request_id`** +: ID of the request + +type: keyword + + +**`envoyproxy.authority`** +: Envoy proxy authority field + +type: keyword + + +**`envoyproxy.proxy_type`** +: Envoy proxy type, tcp or http + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-fortinet.md b/docs/reference/filebeat/exported-fields-fortinet.md new file mode 100644 index 000000000000..6632cf7caa76 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-fortinet.md @@ -0,0 +1,2611 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-fortinet.html +--- + +# Fortinet fields [exported-fields-fortinet] + +fortinet Module + + +## fortinet [_fortinet] + +Fields from Fortinet FortiOS + +**`fortinet.file.hash.crc32`** +: CRC32 Hash of file + +type: keyword + + + +## firewall [_firewall] + +Module for parsing Fortinet syslog. + +**`fortinet.firewall.acct_stat`** +: Accounting state (RADIUS) + +type: keyword + + +**`fortinet.firewall.acktime`** +: Alarm Acknowledge Time + +type: keyword + + +**`fortinet.firewall.act`** +: Action + +type: keyword + + +**`fortinet.firewall.action`** +: Status of the session + +type: keyword + + +**`fortinet.firewall.activity`** +: HA activity message + +type: keyword + + +**`fortinet.firewall.addr`** +: IP Address + +type: ip + + +**`fortinet.firewall.addr_type`** +: Address Type + +type: keyword + + +**`fortinet.firewall.addrgrp`** +: Address Group + +type: keyword + + +**`fortinet.firewall.adgroup`** +: AD Group Name + +type: keyword + + +**`fortinet.firewall.admin`** +: Admin User + +type: keyword + + +**`fortinet.firewall.age`** +: Time in seconds - time passed since last seen + +type: integer + + +**`fortinet.firewall.agent`** +: User agent, e.g.
agent="Mozilla/5.0" + +type: keyword + + +**`fortinet.firewall.alarmid`** +: Alarm ID + +type: integer + + +**`fortinet.firewall.alert`** +: Alert + +type: keyword + + +**`fortinet.firewall.analyticscksum`** +: The checksum of the file submitted for analytics + +type: keyword + + +**`fortinet.firewall.analyticssubmit`** +: The flag for analytics submission + +type: keyword + + +**`fortinet.firewall.ap`** +: Access Point + +type: keyword + + +**`fortinet.firewall.app-type`** +: Address Type + +type: keyword + + +**`fortinet.firewall.appact`** +: The security action from app control + +type: keyword + + +**`fortinet.firewall.appid`** +: Application ID + +type: integer + + +**`fortinet.firewall.applist`** +: Application Control profile + +type: keyword + + +**`fortinet.firewall.apprisk`** +: Application Risk Level + +type: keyword + + +**`fortinet.firewall.apscan`** +: The name of the AP, which scanned and detected the rogue AP + +type: keyword + + +**`fortinet.firewall.apsn`** +: Access Point + +type: keyword + + +**`fortinet.firewall.apstatus`** +: Access Point status + +type: keyword + + +**`fortinet.firewall.aptype`** +: Access Point type + +type: keyword + + +**`fortinet.firewall.assigned`** +: Assigned IP Address + +type: ip + + +**`fortinet.firewall.assignip`** +: Assigned IP Address + +type: ip + + +**`fortinet.firewall.attachment`** +: The flag for email attachement + +type: keyword + + +**`fortinet.firewall.attack`** +: Attack Name + +type: keyword + + +**`fortinet.firewall.attackcontext`** +: The trigger patterns and the packetdata with base64 encoding + +type: keyword + + +**`fortinet.firewall.attackcontextid`** +: Attack context id / total + +type: keyword + + +**`fortinet.firewall.attackid`** +: Attack ID + +type: integer + + +**`fortinet.firewall.auditid`** +: Audit ID + +type: long + + +**`fortinet.firewall.auditscore`** +: The Audit Score + +type: keyword + + +**`fortinet.firewall.audittime`** +: The time of the audit + +type: long + + +**`fortinet.firewall.authgrp`** +: Authorization Group + +type: keyword + + +**`fortinet.firewall.authid`** +: Authentication ID + +type: keyword + + +**`fortinet.firewall.authproto`** +: The protocol that initiated the authentication + +type: keyword + + +**`fortinet.firewall.authserver`** +: Authentication server + +type: keyword + + +**`fortinet.firewall.bandwidth`** +: Bandwidth + +type: keyword + + +**`fortinet.firewall.banned_rule`** +: NAC quarantine Banned Rule Name + +type: keyword + + +**`fortinet.firewall.banned_src`** +: NAC quarantine Banned Source IP + +type: keyword + + +**`fortinet.firewall.banword`** +: Banned word + +type: keyword + + +**`fortinet.firewall.botnetdomain`** +: Botnet Domain Name + +type: keyword + + +**`fortinet.firewall.botnetip`** +: Botnet IP Address + +type: ip + + +**`fortinet.firewall.bssid`** +: Service Set ID + +type: keyword + + +**`fortinet.firewall.call_id`** +: Caller ID + +type: keyword + + +**`fortinet.firewall.carrier_ep`** +: The FortiOS Carrier end-point identification + +type: keyword + + +**`fortinet.firewall.cat`** +: DNS category ID + +type: integer + + +**`fortinet.firewall.category`** +: Authentication category + +type: keyword + + +**`fortinet.firewall.cc`** +: CC Email Address + +type: keyword + + +**`fortinet.firewall.cdrcontent`** +: Cdrcontent + +type: keyword + + +**`fortinet.firewall.centralnatid`** +: Central NAT ID + +type: integer + + +**`fortinet.firewall.cert`** +: Certificate + +type: keyword + + +**`fortinet.firewall.cert-type`** +: Certificate type + +type: keyword + + 
+**`fortinet.firewall.certhash`** +: Certificate hash + +type: keyword + + +**`fortinet.firewall.cfgattr`** +: Configuration attribute + +type: keyword + + +**`fortinet.firewall.cfgobj`** +: Configuration object + +type: keyword + + +**`fortinet.firewall.cfgpath`** +: Configuration path + +type: keyword + + +**`fortinet.firewall.cfgtid`** +: Configuration transaction ID + +type: keyword + + +**`fortinet.firewall.cfgtxpower`** +: Configuration TX power + +type: integer + + +**`fortinet.firewall.channel`** +: Wireless Channel + +type: integer + + +**`fortinet.firewall.channeltype`** +: SSH channel type + +type: keyword + + +**`fortinet.firewall.chassisid`** +: Chassis ID + +type: integer + + +**`fortinet.firewall.checksum`** +: The checksum of the scanned file + +type: keyword + + +**`fortinet.firewall.chgheaders`** +: HTTP Headers + +type: keyword + + +**`fortinet.firewall.cldobjid`** +: Connector object ID + +type: keyword + + +**`fortinet.firewall.client_addr`** +: Wifi client address + +type: keyword + + +**`fortinet.firewall.cloudaction`** +: Cloud Action + +type: keyword + + +**`fortinet.firewall.clouduser`** +: Cloud User + +type: keyword + + +**`fortinet.firewall.column`** +: VOIP Column + +type: integer + + +**`fortinet.firewall.command`** +: CLI Command + +type: keyword + + +**`fortinet.firewall.community`** +: SNMP Community + +type: keyword + + +**`fortinet.firewall.configcountry`** +: Configuration country + +type: keyword + + +**`fortinet.firewall.connection_type`** +: FortiClient Connection Type + +type: keyword + + +**`fortinet.firewall.conserve`** +: Flag for conserve mode + +type: keyword + + +**`fortinet.firewall.constraint`** +: WAF http protocol restrictions + +type: keyword + + +**`fortinet.firewall.contentdisarmed`** +: Email scanned content + +type: keyword + + +**`fortinet.firewall.contenttype`** +: Content Type from HTTP header + +type: keyword + + +**`fortinet.firewall.cookies`** +: VPN Cookie + +type: keyword + + +**`fortinet.firewall.count`** +: Counts of action type + +type: integer + + +**`fortinet.firewall.countapp`** +: Number of App Ctrl logs associated with the session + +type: integer + + +**`fortinet.firewall.countav`** +: Number of AV logs associated with the session + +type: integer + + +**`fortinet.firewall.countcifs`** +: Number of CIFS logs associated with the session + +type: integer + + +**`fortinet.firewall.countdlp`** +: Number of DLP logs associated with the session + +type: integer + + +**`fortinet.firewall.countdns`** +: Number of DNS logs associated with the session + +type: integer + + +**`fortinet.firewall.countemail`** +: Number of email logs associated with the session + +type: integer + + +**`fortinet.firewall.countff`** +: Number of ff logs associated with the session + +type: integer + + +**`fortinet.firewall.countips`** +: Number of IPS logs associated with the session + +type: integer + + +**`fortinet.firewall.countssh`** +: Number of SSH logs associated with the session + +type: integer + + +**`fortinet.firewall.countssl`** +: Number of SSL logs associated with the session + +type: integer + + +**`fortinet.firewall.countwaf`** +: Number of WAF logs associated with the session + +type: integer + + +**`fortinet.firewall.countweb`** +: Number of Web filter logs associated with the session + +type: integer + + +**`fortinet.firewall.cpu`** +: CPU Usage + +type: integer + + +**`fortinet.firewall.craction`** +: Client Reputation Action + +type: integer + + +**`fortinet.firewall.criticalcount`** +: Number of critical ratings + +type: 
integer + + +**`fortinet.firewall.crl`** +: Client Reputation Level + +type: keyword + + +**`fortinet.firewall.crlevel`** +: Client Reputation Level + +type: keyword + + +**`fortinet.firewall.crscore`** +: Client Reputation Score + +type: integer + + +**`fortinet.firewall.cveid`** +: CVE ID + +type: keyword + + +**`fortinet.firewall.daemon`** +: Daemon name + +type: keyword + + +**`fortinet.firewall.datarange`** +: Data range for reports + +type: keyword + + +**`fortinet.firewall.date`** +: Date + +type: keyword + + +**`fortinet.firewall.ddnsserver`** +: DDNS server + +type: ip + + +**`fortinet.firewall.desc`** +: Description + +type: keyword + + +**`fortinet.firewall.detectionmethod`** +: Detection method + +type: keyword + + +**`fortinet.firewall.devcategory`** +: Device category + +type: keyword + + +**`fortinet.firewall.devintfname`** +: HA device Interface Name + +type: keyword + + +**`fortinet.firewall.devtype`** +: Device type + +type: keyword + + +**`fortinet.firewall.dhcp_msg`** +: DHCP Message + +type: keyword + + +**`fortinet.firewall.dintf`** +: Destination interface + +type: keyword + + +**`fortinet.firewall.disk`** +: Associated disk + +type: keyword + + +**`fortinet.firewall.disklograte`** +: Disk logging rate + +type: long + + +**`fortinet.firewall.dlpextra`** +: DLP extra information + +type: keyword + + +**`fortinet.firewall.docsource`** +: DLP fingerprint document source + +type: keyword + + +**`fortinet.firewall.domainctrlauthstate`** +: CIFS domain auth state + +type: integer + + +**`fortinet.firewall.domainctrlauthtype`** +: CIFS domain auth type + +type: integer + + +**`fortinet.firewall.domainctrldomain`** +: CIFS domain auth domain + +type: keyword + + +**`fortinet.firewall.domainctrlip`** +: CIFS Domain IP + +type: ip + + +**`fortinet.firewall.domainctrlname`** +: CIFS Domain name + +type: keyword + + +**`fortinet.firewall.domainctrlprotocoltype`** +: CIFS Domain connection protocol + +type: integer + + +**`fortinet.firewall.domainctrlusername`** +: CIFS Domain username + +type: keyword + + +**`fortinet.firewall.domainfilteridx`** +: Domain filter ID + +type: integer + + +**`fortinet.firewall.domainfilterlist`** +: Domain filter name + +type: keyword + + +**`fortinet.firewall.ds`** +: Direction with distribution system + +type: keyword + + +**`fortinet.firewall.dst_int`** +: Destination interface + +type: keyword + + +**`fortinet.firewall.dstintfrole`** +: Destination interface role + +type: keyword + + +**`fortinet.firewall.dstcountry`** +: Destination country + +type: keyword + + +**`fortinet.firewall.dstdevcategory`** +: Destination device category + +type: keyword + + +**`fortinet.firewall.dstdevtype`** +: Destination device type + +type: keyword + + +**`fortinet.firewall.dstfamily`** +: Destination OS family + +type: keyword + + +**`fortinet.firewall.dsthwvendor`** +: Destination HW vendor + +type: keyword + + +**`fortinet.firewall.dsthwversion`** +: Destination HW version + +type: keyword + + +**`fortinet.firewall.dstinetsvc`** +: Destination interface service + +type: keyword + + +**`fortinet.firewall.dstosname`** +: Destination OS name + +type: keyword + + +**`fortinet.firewall.dstosversion`** +: Destination OS version + +type: keyword + + +**`fortinet.firewall.dstserver`** +: Destination server + +type: integer + + +**`fortinet.firewall.dstssid`** +: Destination SSID + +type: keyword + + +**`fortinet.firewall.dstswversion`** +: Destination software version + +type: keyword + + +**`fortinet.firewall.dstunauthusersource`** +: Destination unauthenticated source + 
+type: keyword + + +**`fortinet.firewall.dstuuid`** +: UUID of the destination IP address + +type: keyword + + +**`fortinet.firewall.duid`** +: DHCP UID + +type: keyword + + +**`fortinet.firewall.eapolcnt`** +: EAPOL packet count + +type: integer + + +**`fortinet.firewall.eapoltype`** +: EAPOL packet type + +type: keyword + + +**`fortinet.firewall.encrypt`** +: Whether the packet is encrypted or not + +type: integer + + +**`fortinet.firewall.encryption`** +: Encryption method + +type: keyword + + +**`fortinet.firewall.epoch`** +: Epoch used for locating file + +type: integer + + +**`fortinet.firewall.espauth`** +: ESP Authentication + +type: keyword + + +**`fortinet.firewall.esptransform`** +: ESP Transform + +type: keyword + + +**`fortinet.firewall.eventtype`** +: UTM Event Type + +type: keyword + + +**`fortinet.firewall.exch`** +: Mail Exchanges from DNS response answer section + +type: keyword + + +**`fortinet.firewall.exchange`** +: Mail Exchanges from DNS response answer section + +type: keyword + + +**`fortinet.firewall.expectedsignature`** +: Expected SSL signature + +type: keyword + + +**`fortinet.firewall.expiry`** +: FortiGuard override expiry timestamp + +type: keyword + + +**`fortinet.firewall.fams_pause`** +: Fortinet Analysis and Management Service Pause + +type: integer + + +**`fortinet.firewall.fazlograte`** +: FortiAnalyzer Logging Rate + +type: long + + +**`fortinet.firewall.fctemssn`** +: FortiClient Endpoint SSN + +type: keyword + + +**`fortinet.firewall.fctuid`** +: FortiClient UID + +type: keyword + + +**`fortinet.firewall.field`** +: NTP status field + +type: keyword + + +**`fortinet.firewall.filefilter`** +: The filter used to identify the affected file + +type: keyword + + +**`fortinet.firewall.filehashsrc`** +: File hash source + +type: keyword + + +**`fortinet.firewall.filtercat`** +: DLP filter category + +type: keyword + + +**`fortinet.firewall.filteridx`** +: DLP filter ID + +type: integer + + +**`fortinet.firewall.filtername`** +: DLP rule name + +type: keyword + + +**`fortinet.firewall.filtertype`** +: DLP filter type + +type: keyword + + +**`fortinet.firewall.fortiguardresp`** +: Antispam ESP value + +type: keyword + + +**`fortinet.firewall.forwardedfor`** +: Email address forwarded + +type: keyword + + +**`fortinet.firewall.fqdn`** +: FQDN + +type: keyword + + +**`fortinet.firewall.frametype`** +: Wireless frametype + +type: keyword + + +**`fortinet.firewall.freediskstorage`** +: Free disk storage, as an integer + +type: integer + + +**`fortinet.firewall.from`** +: From email address + +type: keyword + + +**`fortinet.firewall.from_vcluster`** +: Source virtual cluster number + +type: integer + + +**`fortinet.firewall.fsaverdict`** +: FSA verdict + +type: keyword + + +**`fortinet.firewall.fwserver_name`** +: Web proxy server name + +type: keyword + + +**`fortinet.firewall.gateway`** +: Gateway IP address for PPPoE status report + +type: ip + + +**`fortinet.firewall.green`** +: Memory status + +type: keyword + + +**`fortinet.firewall.groupid`** +: User Group ID + +type: integer + + +**`fortinet.firewall.ha-prio`** +: HA Priority + +type: integer + + +**`fortinet.firewall.ha_group`** +: HA Group + +type: keyword + + +**`fortinet.firewall.ha_role`** +: HA Role + +type: keyword + + +**`fortinet.firewall.handshake`** +: SSL Handshake + +type: keyword + + +**`fortinet.firewall.hash`** +: Hash value of downloaded file + +type: keyword + + +**`fortinet.firewall.hbdn_reason`** +: Heartbeat down reason + +type: keyword + + +**`fortinet.firewall.highcount`** +: Highcount fabric
summary + +type: integer + + +**`fortinet.firewall.host`** +: Hostname + +type: keyword + + +**`fortinet.firewall.iaid`** +: DHCPv6 ID + +type: keyword + + +**`fortinet.firewall.icmpcode`** +: Destination Port of the ICMP message + +type: keyword + + +**`fortinet.firewall.icmpid`** +: Source port of the ICMP message + +type: keyword + + +**`fortinet.firewall.icmptype`** +: The type of ICMP message + +type: keyword + + +**`fortinet.firewall.identifier`** +: Network traffic identifier + +type: integer + + +**`fortinet.firewall.in_spi`** +: IPSEC inbound SPI + +type: keyword + + +**`fortinet.firewall.incidentserialno`** +: Incident serial number + +type: integer + + +**`fortinet.firewall.infected`** +: Infected MMS + +type: integer + + +**`fortinet.firewall.infectedfilelevel`** +: DLP infected file level + +type: integer + + +**`fortinet.firewall.informationsource`** +: Information source + +type: keyword + + +**`fortinet.firewall.init`** +: IPSEC init stage + +type: keyword + + +**`fortinet.firewall.initiator`** +: Original login user name for Fortiguard override + +type: keyword + + +**`fortinet.firewall.interface`** +: Related interface + +type: keyword + + +**`fortinet.firewall.intf`** +: Related interface + +type: keyword + + +**`fortinet.firewall.invalidmac`** +: The MAC address with invalid OUI + +type: keyword + + +**`fortinet.firewall.ip`** +: Related IP + +type: ip + + +**`fortinet.firewall.iptype`** +: Related IP type + +type: keyword + + +**`fortinet.firewall.keyword`** +: Keyword used for search + +type: keyword + + +**`fortinet.firewall.kind`** +: VOIP kind + +type: keyword + + +**`fortinet.firewall.lanin`** +: LAN incoming traffic in bytes + +type: long + + +**`fortinet.firewall.lanout`** +: LAN outbound traffic in bytes + +type: long + + +**`fortinet.firewall.lease`** +: DHCP lease + +type: integer + + +**`fortinet.firewall.license_limit`** +: Maximum Number of FortiClients for the License + +type: keyword + + +**`fortinet.firewall.limit`** +: Virtual Domain Resource Limit + +type: integer + + +**`fortinet.firewall.line`** +: VOIP line + +type: keyword + + +**`fortinet.firewall.live`** +: Time in seconds + +type: integer + + +**`fortinet.firewall.local`** +: Local IP for a PPPD Connection + +type: ip + + +**`fortinet.firewall.log`** +: Log message + +type: keyword + + +**`fortinet.firewall.login`** +: SSH login + +type: keyword + + +**`fortinet.firewall.lowcount`** +: Fabric lowcount + +type: integer + + +**`fortinet.firewall.mac`** +: DHCP MAC address + +type: keyword + + +**`fortinet.firewall.malform_data`** +: VOIP malformed data + +type: integer + + +**`fortinet.firewall.malform_desc`** +: VOIP malformed data description + +type: keyword + + +**`fortinet.firewall.manuf`** +: Manufacturer name + +type: keyword + + +**`fortinet.firewall.masterdstmac`** +: Master MAC address for a host with multiple network interfaces + +type: keyword + + +**`fortinet.firewall.mastersrcmac`** +: The master MAC address for a host that has multiple network interfaces + +type: keyword + + +**`fortinet.firewall.mediumcount`** +: Fabric medium count + +type: integer + + +**`fortinet.firewall.mem`** +: Memory usage system statistics + +type: integer + + +**`fortinet.firewall.meshmode`** +: Wireless mesh mode + +type: keyword + + +**`fortinet.firewall.message_type`** +: VOIP message type + +type: keyword + + +**`fortinet.firewall.method`** +: HTTP method + +type: keyword + + +**`fortinet.firewall.mgmtcnt`** +: The number of unauthorized client flooding management frames + +type: integer + + 
+**`fortinet.firewall.mode`** +: IPSEC mode + +type: keyword + + +**`fortinet.firewall.module`** +: PCI-DSS module + +type: keyword + + +**`fortinet.firewall.monitor-name`** +: Health Monitor Name + +type: keyword + + +**`fortinet.firewall.monitor-type`** +: Health Monitor Type + +type: keyword + + +**`fortinet.firewall.mpsk`** +: Wireless MPSK + +type: keyword + + +**`fortinet.firewall.msgproto`** +: Message Protocol Number + +type: keyword + + +**`fortinet.firewall.mtu`** +: Max Transmission Unit Value + +type: integer + + +**`fortinet.firewall.name`** +: Name + +type: keyword + + +**`fortinet.firewall.nat`** +: NAT IP Address + +type: keyword + + +**`fortinet.firewall.netid`** +: Connector NetID + +type: keyword + + +**`fortinet.firewall.new_status`** +: New status on user change + +type: keyword + + +**`fortinet.firewall.new_value`** +: New Virtual Domain Name + +type: keyword + + +**`fortinet.firewall.newchannel`** +: New Channel Number + +type: integer + + +**`fortinet.firewall.newchassisid`** +: New Chassis ID + +type: integer + + +**`fortinet.firewall.newslot`** +: New Slot Number + +type: integer + + +**`fortinet.firewall.nextstat`** +: Time interval in seconds for the next statistics. + +type: integer + + +**`fortinet.firewall.nf_type`** +: Notification Type + +type: keyword + + +**`fortinet.firewall.noise`** +: Wifi Noise + +type: integer + + +**`fortinet.firewall.old_status`** +: Original Status + +type: keyword + + +**`fortinet.firewall.old_value`** +: Original Virtual Domain name + +type: keyword + + +**`fortinet.firewall.oldchannel`** +: Original channel + +type: integer + + +**`fortinet.firewall.oldchassisid`** +: Original Chassis Number + +type: integer + + +**`fortinet.firewall.oldslot`** +: Original Slot Number + +type: integer + + +**`fortinet.firewall.oldsn`** +: Old Serial number + +type: keyword + + +**`fortinet.firewall.oldwprof`** +: Old Web Filter Profile + +type: keyword + + +**`fortinet.firewall.onwire`** +: A flag to indicate if the AP is onwire or not + +type: keyword + + +**`fortinet.firewall.opercountry`** +: Operating Country + +type: keyword + + +**`fortinet.firewall.opertxpower`** +: Operating TX power + +type: integer + + +**`fortinet.firewall.osname`** +: Operating System name + +type: keyword + + +**`fortinet.firewall.osversion`** +: Operating System version + +type: keyword + + +**`fortinet.firewall.out_spi`** +: Out SPI + +type: keyword + + +**`fortinet.firewall.outintf`** +: Out interface + +type: keyword + + +**`fortinet.firewall.passedcount`** +: Fabric passed count + +type: integer + + +**`fortinet.firewall.passwd`** +: Changed user password information + +type: keyword + + +**`fortinet.firewall.path`** +: Path of looped configuration for security fabric + +type: keyword + + +**`fortinet.firewall.peer`** +: WAN optimization peer + +type: keyword + + +**`fortinet.firewall.peer_notif`** +: VPN peer notification + +type: keyword + + +**`fortinet.firewall.phase2_name`** +: VPN phase2 name + +type: keyword + + +**`fortinet.firewall.phone`** +: VOIP Phone + +type: keyword + + +**`fortinet.firewall.pid`** +: Process ID + +type: integer + + +**`fortinet.firewall.policytype`** +: Policy Type + +type: keyword + + +**`fortinet.firewall.poolname`** +: IP Pool name + +type: keyword + + +**`fortinet.firewall.port`** +: Log upload error port + +type: integer + + +**`fortinet.firewall.portbegin`** +: IP Pool port number to begin + +type: integer + + +**`fortinet.firewall.portend`** +: IP Pool port number to end + +type: integer + + 
+**`fortinet.firewall.probeproto`** +: Link Monitor Probe Protocol + +type: keyword + + +**`fortinet.firewall.process`** +: URL Filter process + +type: keyword + + +**`fortinet.firewall.processtime`** +: Process time for reports + +type: integer + + +**`fortinet.firewall.profile`** +: Profile Name + +type: keyword + + +**`fortinet.firewall.profile_vd`** +: Virtual Domain Name + +type: keyword + + +**`fortinet.firewall.profilegroup`** +: Profile Group Name + +type: keyword + + +**`fortinet.firewall.profiletype`** +: Profile Type + +type: keyword + + +**`fortinet.firewall.qtypeval`** +: DNS question type value + +type: integer + + +**`fortinet.firewall.quarskip`** +: Quarantine skip explanation + +type: keyword + + +**`fortinet.firewall.quotaexceeded`** +: If quota has been exceeded + +type: keyword + + +**`fortinet.firewall.quotamax`** +: Maximum quota allowed - in seconds if time-based - in bytes if traffic-based + +type: long + + +**`fortinet.firewall.quotatype`** +: Quota type + +type: keyword + + +**`fortinet.firewall.quotaused`** +: Quota used - in seconds if time-based - in bytes if traffic-based + +type: long + + +**`fortinet.firewall.radioband`** +: Radio band + +type: keyword + + +**`fortinet.firewall.radioid`** +: Radio ID + +type: integer + + +**`fortinet.firewall.radioidclosest`** +: Radio ID on the AP closest to the rogue AP + +type: integer + + +**`fortinet.firewall.radioiddetected`** +: Radio ID on the AP which detected the rogue AP + +type: integer + + +**`fortinet.firewall.rate`** +: Wireless rogue rate value + +type: keyword + + +**`fortinet.firewall.rawdata`** +: Raw data value + +type: keyword + + +**`fortinet.firewall.rawdataid`** +: Raw data ID + +type: keyword + + +**`fortinet.firewall.rcvddelta`** +: Received bytes delta + +type: keyword + + +**`fortinet.firewall.reason`** +: Alert reason + +type: keyword + + +**`fortinet.firewall.received`** +: Server key exchange received + +type: integer + + +**`fortinet.firewall.receivedsignature`** +: Server key exchange received signature + +type: keyword + + +**`fortinet.firewall.red`** +: Memory information in red + +type: keyword + + +**`fortinet.firewall.referralurl`** +: Web filter referralurl + +type: keyword + + +**`fortinet.firewall.remote`** +: Remote PPP IP address + +type: ip + + +**`fortinet.firewall.remotewtptime`** +: Remote Wifi Radius authentication time + +type: keyword + + +**`fortinet.firewall.reporttype`** +: Report type + +type: keyword + + +**`fortinet.firewall.reqtype`** +: Request type + +type: keyword + + +**`fortinet.firewall.request_name`** +: VOIP request name + +type: keyword + + +**`fortinet.firewall.result`** +: VPN phase result + +type: keyword + + +**`fortinet.firewall.role`** +: VPN Phase 2 role + +type: keyword + + +**`fortinet.firewall.rssi`** +: Received signal strength indicator + +type: integer + + +**`fortinet.firewall.rsso_key`** +: RADIUS SSO attribute value + +type: keyword + + +**`fortinet.firewall.ruledata`** +: Rule data + +type: keyword + + +**`fortinet.firewall.ruletype`** +: Rule type + +type: keyword + + +**`fortinet.firewall.scanned`** +: Number of Scanned MMSs + +type: integer + + +**`fortinet.firewall.scantime`** +: Scanned time + +type: long + + +**`fortinet.firewall.scope`** +: FortiGuard Override Scope + +type: keyword + + +**`fortinet.firewall.security`** +: Wireless rogue security + +type: keyword + + +**`fortinet.firewall.sensitivity`** +: Sensitivity for document fingerprint + +type: keyword + + +**`fortinet.firewall.sensor`** +: NAC Sensor Name + +type: keyword + + 
+**`fortinet.firewall.sentdelta`** +: Sent bytes delta + +type: keyword + + +**`fortinet.firewall.seq`** +: Sequence number + +type: keyword + + +**`fortinet.firewall.serial`** +: WAN optimization serial + +type: keyword + + +**`fortinet.firewall.serialno`** +: Serial number + +type: keyword + + +**`fortinet.firewall.server`** +: AD server FQDN or IP + +type: keyword + + +**`fortinet.firewall.session_id`** +: Session ID + +type: keyword + + +**`fortinet.firewall.sessionid`** +: WAD Session ID + +type: integer + + +**`fortinet.firewall.setuprate`** +: Session Setup Rate + +type: long + + +**`fortinet.firewall.severity`** +: Severity + +type: keyword + + +**`fortinet.firewall.shaperdroprcvdbyte`** +: Received bytes dropped by shaper + +type: integer + + +**`fortinet.firewall.shaperdropsentbyte`** +: Sent bytes dropped by shaper + +type: integer + + +**`fortinet.firewall.shaperperipdropbyte`** +: Dropped bytes per IP by shaper + +type: integer + + +**`fortinet.firewall.shaperperipname`** +: Traffic shaper name (per IP) + +type: keyword + + +**`fortinet.firewall.shaperrcvdname`** +: Traffic shaper name for received traffic + +type: keyword + + +**`fortinet.firewall.shapersentname`** +: Traffic shaper name for sent traffic + +type: keyword + + +**`fortinet.firewall.shapingpolicyid`** +: Traffic shaper policy ID + +type: integer + + +**`fortinet.firewall.signal`** +: Wireless rogue AP signal + +type: integer + + +**`fortinet.firewall.size`** +: Email size in bytes + +type: long + + +**`fortinet.firewall.slot`** +: Slot number + +type: integer + + +**`fortinet.firewall.sn`** +: Security fabric serial number + +type: keyword + + +**`fortinet.firewall.snclosest`** +: SN of the AP closest to the rogue AP + +type: keyword + + +**`fortinet.firewall.sndetected`** +: SN of the AP which detected the rogue AP + +type: keyword + + +**`fortinet.firewall.snmeshparent`** +: SN of the mesh parent + +type: keyword + + +**`fortinet.firewall.spi`** +: IPSEC SPI + +type: keyword + + +**`fortinet.firewall.src_int`** +: Source interface + +type: keyword + + +**`fortinet.firewall.srcintfrole`** +: Source interface role + +type: keyword + + +**`fortinet.firewall.srccountry`** +: Source country + +type: keyword + + +**`fortinet.firewall.srcfamily`** +: Source family + +type: keyword + + +**`fortinet.firewall.srchwvendor`** +: Source hardware vendor + +type: keyword + + +**`fortinet.firewall.srchwversion`** +: Source hardware version + +type: keyword + + +**`fortinet.firewall.srcinetsvc`** +: Source interface service + +type: keyword + + +**`fortinet.firewall.srcname`** +: Source name + +type: keyword + + +**`fortinet.firewall.srcserver`** +: Source server + +type: integer + + +**`fortinet.firewall.srcssid`** +: Source SSID + +type: keyword + + +**`fortinet.firewall.srcswversion`** +: Source software version + +type: keyword + + +**`fortinet.firewall.srcuuid`** +: Source UUID + +type: keyword + + +**`fortinet.firewall.sscname`** +: SSC name + +type: keyword + + +**`fortinet.firewall.ssid`** +: Base Service Set ID + +type: keyword + + +**`fortinet.firewall.sslaction`** +: SSL Action + +type: keyword + + +**`fortinet.firewall.ssllocal`** +: WAD SSL local + +type: keyword + + +**`fortinet.firewall.sslremote`** +: WAD SSL remote + +type: keyword + + +**`fortinet.firewall.stacount`** +: Number of stations/clients + +type: integer + + +**`fortinet.firewall.stage`** +: IPSEC stage + +type: keyword + + +**`fortinet.firewall.stamac`** +: 802.1x station MAC + +type: keyword + + +**`fortinet.firewall.state`** +: Admin login state
+ +type: keyword + + +**`fortinet.firewall.status`** +: Status + +type: keyword + + +**`fortinet.firewall.stitch`** +: Automation stitch triggered + +type: keyword + + +**`fortinet.firewall.subject`** +: Email subject + +type: keyword + + +**`fortinet.firewall.submodule`** +: Configuration Sub-Module Name + +type: keyword + + +**`fortinet.firewall.subservice`** +: AV subservice + +type: keyword + + +**`fortinet.firewall.subtype`** +: Log subtype + +type: keyword + + +**`fortinet.firewall.suspicious`** +: Number of Suspicious MMSs + +type: integer + + +**`fortinet.firewall.switchproto`** +: Protocol change information + +type: keyword + + +**`fortinet.firewall.sync_status`** +: The sync status with the master + +type: keyword + + +**`fortinet.firewall.sync_type`** +: The sync type with the master + +type: keyword + + +**`fortinet.firewall.sysuptime`** +: System uptime + +type: keyword + + +**`fortinet.firewall.tamac`** +: The MAC address of the transmitter; if none, the receiver + +type: keyword + + +**`fortinet.firewall.threattype`** +: WIDS threat type + +type: keyword + + +**`fortinet.firewall.time`** +: Time of the event + +type: keyword + + +**`fortinet.firewall.to`** +: Email to field + +type: keyword + + +**`fortinet.firewall.to_vcluster`** +: Destination virtual cluster number + +type: integer + + +**`fortinet.firewall.total`** +: Total memory + +type: integer + + +**`fortinet.firewall.totalsession`** +: Total Number of Sessions + +type: integer + + +**`fortinet.firewall.trace_id`** +: Session clash trace ID + +type: keyword + + +**`fortinet.firewall.trandisp`** +: NAT translation type + +type: keyword + + +**`fortinet.firewall.transid`** +: HTTP transaction ID + +type: integer + + +**`fortinet.firewall.translationid`** +: DNS filter translation ID + +type: keyword + + +**`fortinet.firewall.trigger`** +: Automation stitch trigger + +type: keyword + + +**`fortinet.firewall.trueclntip`** +: File filter true client IP + +type: ip + + +**`fortinet.firewall.tunnelid`** +: IPSEC tunnel ID + +type: integer + + +**`fortinet.firewall.tunnelip`** +: IPSEC tunnel IP + +type: ip + + +**`fortinet.firewall.tunneltype`** +: IPSEC tunnel type + +type: keyword + + +**`fortinet.firewall.type`** +: Module type + +type: keyword + + +**`fortinet.firewall.ui`** +: Admin authentication UI type + +type: keyword + + +**`fortinet.firewall.unauthusersource`** +: Unauthenticated user source + +type: keyword + + +**`fortinet.firewall.unit`** +: Power supply unit + +type: integer + + +**`fortinet.firewall.urlfilteridx`** +: URL filter ID + +type: integer + + +**`fortinet.firewall.urlfilterlist`** +: URL filter list + +type: keyword + + +**`fortinet.firewall.urlsource`** +: URL filter source + +type: keyword + + +**`fortinet.firewall.urltype`** +: URL filter type + +type: keyword + + +**`fortinet.firewall.used`** +: Number of Used IPs + +type: integer + + +**`fortinet.firewall.used_for_type`** +: Connection for the type + +type: integer + + +**`fortinet.firewall.utmaction`** +: Security action performed by UTM + +type: keyword + + +**`fortinet.firewall.utmref`** +: Reference to UTM + +type: keyword + + +**`fortinet.firewall.vap`** +: Virtual AP + +type: keyword + + +**`fortinet.firewall.vapmode`** +: Virtual AP mode + +type: keyword + + +**`fortinet.firewall.vcluster`** +: Virtual cluster ID + +type: integer + + +**`fortinet.firewall.vcluster_member`** +: Virtual cluster member + +type: integer + + +**`fortinet.firewall.vcluster_state`** +: Virtual cluster state + +type: keyword + + +**`fortinet.firewall.vd`** +: 
Virtual Domain Name + +type: keyword + + +**`fortinet.firewall.vdname`** +: Virtual Domain Name + +type: keyword + + +**`fortinet.firewall.vendorurl`** +: Vulnerability scan vendor name + +type: keyword + + +**`fortinet.firewall.version`** +: Version + +type: keyword + + +**`fortinet.firewall.vip`** +: Virtual IP + +type: keyword + + +**`fortinet.firewall.virus`** +: Virus name + +type: keyword + + +**`fortinet.firewall.virusid`** +: Virus ID (unique virus identifier) + +type: integer + + +**`fortinet.firewall.voip_proto`** +: VOIP protocol + +type: keyword + + +**`fortinet.firewall.vpn`** +: VPN description + +type: keyword + + +**`fortinet.firewall.vpntunnel`** +: IPsec Vpn Tunnel Name + +type: keyword + + +**`fortinet.firewall.vpntype`** +: The type of the VPN tunnel + +type: keyword + + +**`fortinet.firewall.vrf`** +: VRF number + +type: integer + + +**`fortinet.firewall.vulncat`** +: Vulnerability Category + +type: keyword + + +**`fortinet.firewall.vulnid`** +: Vulnerability ID + +type: integer + + +**`fortinet.firewall.vulnname`** +: Vulnerability name + +type: keyword + + +**`fortinet.firewall.vwlid`** +: VWL ID + +type: integer + + +**`fortinet.firewall.vwlquality`** +: VWL quality + +type: keyword + + +**`fortinet.firewall.vwlservice`** +: VWL service + +type: keyword + + +**`fortinet.firewall.vwpvlanid`** +: VWP VLAN ID + +type: integer + + +**`fortinet.firewall.wanin`** +: WAN incoming traffic in bytes + +type: long + + +**`fortinet.firewall.wanoptapptype`** +: WAN Optimization Application type + +type: keyword + + +**`fortinet.firewall.wanout`** +: WAN outgoing traffic in bytes + +type: long + + +**`fortinet.firewall.weakwepiv`** +: Weak Wep Initiation Vector + +type: keyword + + +**`fortinet.firewall.xauthgroup`** +: XAuth Group Name + +type: keyword + + +**`fortinet.firewall.xauthuser`** +: XAuth User Name + +type: keyword + + +**`fortinet.firewall.xid`** +: Wireless X ID + +type: integer + + diff --git a/docs/reference/filebeat/exported-fields-gcp.md b/docs/reference/filebeat/exported-fields-gcp.md new file mode 100644 index 000000000000..b5849d9e4c2c --- /dev/null +++ b/docs/reference/filebeat/exported-fields-gcp.md @@ -0,0 +1,377 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-gcp.html +--- + +# Google Cloud Platform (GCP) fields [exported-fields-gcp] + +Module for handling logs from Google Cloud. + + +## gcp [_gcp] + +Fields from Google Cloud logs. + + +## destination.instance [_destination_instance] + +If the destination of the connection was a VM located on the same VPC, this field is populated with VM instance details. In a Shared VPC configuration, project_id corresponds to the project that owns the instance, usually the service project. + +**`gcp.destination.instance.project_id`** +: ID of the project containing the VM. + +type: keyword + + +**`gcp.destination.instance.region`** +: Region of the VM. + +type: keyword + + +**`gcp.destination.instance.zone`** +: Zone of the VM. + +type: keyword + + + +## destination.vpc [_destination_vpc] + +If the destination of the connection was a VM located on the same VPC, this field is populated with VPC network details. In a Shared VPC configuration, project_id corresponds to that of the host project. + +**`gcp.destination.vpc.project_id`** +: ID of the project containing the VM. + +type: keyword + + +**`gcp.destination.vpc.vpc_name`** +: VPC on which the VM is operating. + +type: keyword + + +**`gcp.destination.vpc.subnetwork_name`** +: Subnetwork on which the VM is operating. 
+ +type: keyword + + + +## source.instance [_source_instance] + +If the source of the connection was a VM located on the same VPC, this field is populated with VM instance details. In a Shared VPC configuration, project_id corresponds to the project that owns the instance, usually the service project. + +**`gcp.source.instance.project_id`** +: ID of the project containing the VM. + +type: keyword + + +**`gcp.source.instance.region`** +: Region of the VM. + +type: keyword + + +**`gcp.source.instance.zone`** +: Zone of the VM. + +type: keyword + + + +## source.vpc [_source_vpc] + +If the source of the connection was a VM located on the same VPC, this field is populated with VPC network details. In a Shared VPC configuration, project_id corresponds to that of the host project. + +**`gcp.source.vpc.project_id`** +: ID of the project containing the VM. + +type: keyword + + +**`gcp.source.vpc.vpc_name`** +: VPC on which the VM is operating. + +type: keyword + + +**`gcp.source.vpc.subnetwork_name`** +: Subnetwork on which the VM is operating. + +type: keyword + + + +## audit [_audit_3] + +Fields for Google Cloud audit logs. + +**`gcp.audit.type`** +: Type property. + +type: keyword + + + +## authentication_info [_authentication_info] + +Authentication information. + +**`gcp.audit.authentication_info.principal_email`** +: The email address of the authenticated user making the request. + +type: keyword + + +**`gcp.audit.authentication_info.authority_selector`** +: The authority selector specified by the requestor, if any. It is not guaranteed that the principal was allowed to use this authority. + +type: keyword + + +**`gcp.audit.authorization_info`** +: Authorization information for the operation. + +type: array + + +**`gcp.audit.method_name`** +: The name of the service method or operation. For API calls, this should be the name of the API method. For example, *google.datastore.v1.Datastore.RunQuery*. + +type: keyword + + +**`gcp.audit.num_response_items`** +: The number of items returned from a List or Query API method, if applicable. + +type: long + + + +## request [_request] + +The operation request. + +**`gcp.audit.request.proto_name`** +: Type property of the request. + +type: keyword + + +**`gcp.audit.request.filter`** +: Filter of the request. + +type: keyword + + +**`gcp.audit.request.name`** +: Name of the request. + +type: keyword + + +**`gcp.audit.request.resource_name`** +: Name of the request resource. + +type: keyword + + + +## request_metadata [_request_metadata] + +Metadata about the request. + +**`gcp.audit.request_metadata.caller_ip`** +: The IP address of the caller. + +type: ip + + +**`gcp.audit.request_metadata.caller_supplied_user_agent`** +: The user agent of the caller. This information is not authenticated and should be treated accordingly. + +type: keyword + + + +## response [_response] + +The operation response. + +**`gcp.audit.response.proto_name`** +: Type property of the response. + +type: keyword + + + +## details [_details] + +The details of the response. + +**`gcp.audit.response.details.group`** +: The name of the group. + +type: keyword + + +**`gcp.audit.response.details.kind`** +: The kind of the response details. + +type: keyword + + +**`gcp.audit.response.details.name`** +: The name of the response details. + +type: keyword + + +**`gcp.audit.response.details.uid`** +: The uid of the response details. + +type: keyword + + +**`gcp.audit.response.status`** +: Status of the response. 
+ +type: keyword + + +**`gcp.audit.resource_name`** +: The resource or collection that is the target of the operation. The name is a scheme-less URI, not including the API service name. For example, *shelves/SHELF_ID/books*. + +type: keyword + + + +## resource_location [_resource_location] + +The location of the resource. + +**`gcp.audit.resource_location.current_locations`** +: Current locations of the resource. + +type: keyword + + +**`gcp.audit.service_name`** +: The name of the API service performing the operation. For example, datastore.googleapis.com. + +type: keyword + + + +## status [_status] + +The status of the overall operation. + +**`gcp.audit.status.code`** +: The status code, which should be an enum value of google.rpc.Code. + +type: integer + + +**`gcp.audit.status.message`** +: A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client. + +type: keyword + + + +## firewall [_firewall_2] + +Fields for Google Cloud Firewall logs. + + +## rule_details [_rule_details] + +Description of the firewall rule that matched this connection. + +**`gcp.firewall.rule_details.priority`** +: The priority for the firewall rule. + +type: long + + +**`gcp.firewall.rule_details.action`** +: Action that the rule performs on match. + +type: keyword + + +**`gcp.firewall.rule_details.direction`** +: Direction of traffic that matches this rule. + +type: keyword + + +**`gcp.firewall.rule_details.reference`** +: Reference to the firewall rule. + +type: keyword + + +**`gcp.firewall.rule_details.source_range`** +: List of source ranges that the firewall rule applies to. + +type: keyword + + +**`gcp.firewall.rule_details.destination_range`** +: List of destination ranges that the firewall applies to. + +type: keyword + + +**`gcp.firewall.rule_details.source_tag`** +: List of all the source tags that the firewall rule applies to. + +type: keyword + + +**`gcp.firewall.rule_details.target_tag`** +: List of all the target tags that the firewall rule applies to. + +type: keyword + + +**`gcp.firewall.rule_details.ip_port_info`** +: List of ip protocols and applicable port ranges for rules. + +type: array + + +**`gcp.firewall.rule_details.source_service_account`** +: List of all the source service accounts that the firewall rule applies to. + +type: keyword + + +**`gcp.firewall.rule_details.target_service_account`** +: List of all the target service accounts that the firewall rule applies to. + +type: keyword + + + +## vpcflow [_vpcflow_2] + +Fields for Google Cloud VPC flow logs. + +**`gcp.vpcflow.reporter`** +: The side which reported the flow. Can be either *SRC* or *DEST*. + +type: keyword + + +**`gcp.vpcflow.rtt.ms`** +: Latency as measured (for TCP flows only) during the time interval. This is the time elapsed between sending a SEQ and receiving a corresponding ACK and it contains the network RTT as well as the application related delay. 
+ +type: long + + diff --git a/docs/reference/filebeat/exported-fields-google_workspace.md b/docs/reference/filebeat/exported-fields-google_workspace.md new file mode 100644 index 000000000000..6fa5fefbd3f0 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-google_workspace.md @@ -0,0 +1,802 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-google_workspace.html +--- + +# google_workspace fields [exported-fields-google_workspace] + +Google Workspace Module + + +## google_workspace [_google_workspace] + +Google Workspace specific fields. More information about specific fields can be found at [https://developers.google.com/admin-sdk/reports/v1/reference/activities/list](https://developers.google.com/admin-sdk/reports/v1/reference/activities/list) + +**`google_workspace.actor.type`** +: The type of actor. Values can be: **USER**: Another user in the same domain. **EXTERNAL_USER**: A user outside the domain. **KEY**: A non-human actor. + +type: keyword + + +**`google_workspace.actor.key`** +: Only present when `actor.type` is `KEY`. Can be the `consumer_key` of the requestor for OAuth 2LO API requests or an identifier for robot accounts. + +type: keyword + + +**`google_workspace.event.type`** +: The type of Google Workspace event, mapped from `items[].events[].type` in the original payload. Each fileset can have a different set of values for it, more details can be found at [https://developers.google.com/admin-sdk/reports/v1/reference/activities/list](https://developers.google.com/admin-sdk/reports/v1/reference/activities/list) + +type: keyword + +example: audit#activity + + +**`google_workspace.kind`** +: The type of API resource, mapped from `kind` in the original payload. More details can be found at [https://developers.google.com/admin-sdk/reports/v1/reference/activities/list](https://developers.google.com/admin-sdk/reports/v1/reference/activities/list) + +type: keyword + +example: audit#activity + + +**`google_workspace.organization.domain`** +: The domain that is affected by the report’s event. + +type: keyword + + +**`google_workspace.admin.application.edition`** +: The Google Workspace edition. + +type: keyword + + +**`google_workspace.admin.application.name`** +: The application’s name. + +type: keyword + + +**`google_workspace.admin.application.enabled`** +: The enabled application. + +type: keyword + + +**`google_workspace.admin.application.licences_order_number`** +: Order number used to redeem licenses. + +type: keyword + + +**`google_workspace.admin.application.licences_purchased`** +: Number of licences purchased. + +type: keyword + + +**`google_workspace.admin.application.id`** +: The application ID. + +type: keyword + + +**`google_workspace.admin.application.asp_id`** +: The application specific password ID. + +type: keyword + + +**`google_workspace.admin.application.package_id`** +: The mobile application package ID. + +type: keyword + + +**`google_workspace.admin.group.email`** +: The group’s primary email address. + +type: keyword + + +**`google_workspace.admin.new_value`** +: The new value for the setting. + +type: keyword + + +**`google_workspace.admin.old_value`** +: The old value for the setting. + +type: keyword + + +**`google_workspace.admin.org_unit.name`** +: The organizational unit name. + +type: keyword + + +**`google_workspace.admin.org_unit.full`** +: The org unit full path including the root org unit name. + +type: keyword + + +**`google_workspace.admin.setting.name`** +: The setting name. 
+ +type: keyword + + +**`google_workspace.admin.user_defined_setting.name`** +: The name of the user-defined setting. + +type: keyword + + +**`google_workspace.admin.setting.description`** +: The setting description. + +type: keyword + + +**`google_workspace.admin.group.priorities`** +: Group priorities. + +type: keyword + + +**`google_workspace.admin.domain.alias`** +: The domain alias. + +type: keyword + + +**`google_workspace.admin.domain.name`** +: The primary domain name. + +type: keyword + + +**`google_workspace.admin.domain.secondary_name`** +: The secondary domain name. + +type: keyword + + +**`google_workspace.admin.managed_configuration`** +: The name of the managed configuration. + +type: keyword + + +**`google_workspace.admin.non_featured_services_selection`** +: Non-featured services selection. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-application-settings#FLASHLIGHT_EDU_NON_FEATURED_SERVICES_SELECTED](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-application-settings#FLASHLIGHT_EDU_NON_FEATURED_SERVICES_SELECTED) + +type: keyword + + +**`google_workspace.admin.field`** +: The name of the field. + +type: keyword + + +**`google_workspace.admin.resource.id`** +: The name of the resource identifier. + +type: keyword + + +**`google_workspace.admin.user.email`** +: The user’s primary email address. + +type: keyword + + +**`google_workspace.admin.user.nickname`** +: The user’s nickname. + +type: keyword + + +**`google_workspace.admin.user.birthdate`** +: The user’s birth date. + +type: date + + +**`google_workspace.admin.gateway.name`** +: Gateway name. Present on some chat settings. + +type: keyword + + +**`google_workspace.admin.chrome_os.session_type`** +: Chrome OS session type. + +type: keyword + + +**`google_workspace.admin.device.serial_number`** +: Device serial number. + +type: keyword + + +**`google_workspace.admin.device.id`** +: type: keyword + + +**`google_workspace.admin.device.type`** +: Device type. + +type: keyword + + +**`google_workspace.admin.print_server.name`** +: The name of the print server. + +type: keyword + + +**`google_workspace.admin.printer.name`** +: The name of the printer. + +type: keyword + + +**`google_workspace.admin.device.command_details`** +: Command details. + +type: keyword + + +**`google_workspace.admin.role.id`** +: Unique identifier for this role privilege. + +type: keyword + + +**`google_workspace.admin.role.name`** +: The role name. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-delegated-admin-settings](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-delegated-admin-settings) + +type: keyword + + +**`google_workspace.admin.privilege.name`** +: Privilege name. + +type: keyword + + +**`google_workspace.admin.service.name`** +: The service name. + +type: keyword + + +**`google_workspace.admin.url.name`** +: The website name. + +type: keyword + + +**`google_workspace.admin.product.name`** +: The product name. + +type: keyword + + +**`google_workspace.admin.product.sku`** +: The product SKU. + +type: keyword + + +**`google_workspace.admin.bulk_upload.failed`** +: Number of failed records in bulk upload operation. + +type: long + + +**`google_workspace.admin.bulk_upload.total`** +: Number of total records in bulk upload operation. + +type: long + + +**`google_workspace.admin.group.allowed_list`** +: Names of allow-listed groups. 
+ +type: keyword + + +**`google_workspace.admin.email.quarantine_name`** +: The name of the quarantine. + +type: keyword + + +**`google_workspace.admin.email.log_search_filter.message_id`** +: The log search filter’s email message ID. + +type: keyword + + +**`google_workspace.admin.email.log_search_filter.start_date`** +: The log search filter’s start date. + +type: date + + +**`google_workspace.admin.email.log_search_filter.end_date`** +: The log search filter’s ending date. + +type: date + + +**`google_workspace.admin.email.log_search_filter.recipient.value`** +: The log search filter’s email recipient. + +type: keyword + + +**`google_workspace.admin.email.log_search_filter.sender.value`** +: The log search filter’s email sender. + +type: keyword + + +**`google_workspace.admin.email.log_search_filter.recipient.ip`** +: The log search filter’s email recipient’s IP address. + +type: ip + + +**`google_workspace.admin.email.log_search_filter.sender.ip`** +: The log search filter’s email sender’s IP address. + +type: ip + + +**`google_workspace.admin.chrome_licenses.enabled`** +: Licences enabled. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-org-settings](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-org-settings) + +type: keyword + + +**`google_workspace.admin.chrome_licenses.allowed`** +: Licences allowed. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-org-settings](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-org-settings) + +type: keyword + + +**`google_workspace.admin.oauth2.service.name`** +: OAuth2 service name. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-security-settings](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-security-settings) + +type: keyword + + +**`google_workspace.admin.oauth2.application.id`** +: OAuth2 application ID. + +type: keyword + + +**`google_workspace.admin.oauth2.application.name`** +: OAuth2 application name. + +type: keyword + + +**`google_workspace.admin.oauth2.application.type`** +: OAuth2 application type. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-security-settings](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-security-settings) + +type: keyword + + +**`google_workspace.admin.verification_method`** +: Related verification method. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-security-settings](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-security-settings) and [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-domain-settings](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-domain-settings) + +type: keyword + + +**`google_workspace.admin.alert.name`** +: The alert name. + +type: keyword + + +**`google_workspace.admin.rule.name`** +: The rule name. + +type: keyword + + +**`google_workspace.admin.api.client.name`** +: The API client name. + +type: keyword + + +**`google_workspace.admin.api.scopes`** +: The API scopes. + +type: keyword + + +**`google_workspace.admin.mdm.token`** +: The MDM vendor enrollment token. + +type: keyword + + +**`google_workspace.admin.mdm.vendor`** +: The MDM vendor’s name. 
+ +type: keyword + + +**`google_workspace.admin.info_type`** +: This will be used to state what kind of information was changed. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-domain-settings](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-domain-settings) + +type: keyword + + +**`google_workspace.admin.email_monitor.dest_email`** +: The destination address of the email monitor. + +type: keyword + + +**`google_workspace.admin.email_monitor.level.chat`** +: The chat email monitor level. + +type: keyword + + +**`google_workspace.admin.email_monitor.level.draft`** +: The draft email monitor level. + +type: keyword + + +**`google_workspace.admin.email_monitor.level.incoming`** +: The incoming email monitor level. + +type: keyword + + +**`google_workspace.admin.email_monitor.level.outgoing`** +: The outgoing email monitor level. + +type: keyword + + +**`google_workspace.admin.email_dump.include_deleted`** +: Indicates if deleted emails are included in the export. + +type: boolean + + +**`google_workspace.admin.email_dump.package_content`** +: The contents of the mailbox package. + +type: keyword + + +**`google_workspace.admin.email_dump.query`** +: The search query used for the dump. + +type: keyword + + +**`google_workspace.admin.request.id`** +: The request ID. + +type: keyword + + +**`google_workspace.admin.mobile.action.id`** +: The mobile device action’s ID. + +type: keyword + + +**`google_workspace.admin.mobile.action.type`** +: The mobile device action’s type. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-mobile-settings](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-mobile-settings) + +type: keyword + + +**`google_workspace.admin.mobile.certificate.name`** +: The mobile certificate common name. + +type: keyword + + +**`google_workspace.admin.mobile.company_owned_devices`** +: The number of devices a company owns. + +type: long + + +**`google_workspace.admin.distribution.entity.name`** +: The distribution entity value, which can be a group name or an org-unit name. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-mobile-settings](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-mobile-settings) + +type: keyword + + +**`google_workspace.admin.distribution.entity.type`** +: The distribution entity type, which can be a group or an org-unit. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-mobile-settings](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/admin-mobile-settings) + +type: keyword + + +**`google_workspace.drive.billable`** +: Whether this activity is billable. + +type: boolean + + +**`google_workspace.drive.source_folder_id`** +: type: keyword + + +**`google_workspace.drive.source_folder_title`** +: type: keyword + + +**`google_workspace.drive.destination_folder_id`** +: type: keyword + + +**`google_workspace.drive.destination_folder_title`** +: type: keyword + + +**`google_workspace.drive.file.id`** +: type: keyword + + +**`google_workspace.drive.file.type`** +: Document Drive type. 
For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/drive](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/drive) + +type: keyword + + +**`google_workspace.drive.originating_app_id`** +: The Google Cloud Project ID of the application that performed the action. + +type: keyword + + +**`google_workspace.drive.file.owner.email`** +: type: keyword + + +**`google_workspace.drive.file.owner.is_shared_drive`** +: Boolean flag denoting whether owner is a shared drive. + +type: boolean + + +**`google_workspace.drive.primary_event`** +: Whether this is a primary event. A single user action in Drive may generate several events. + +type: boolean + + +**`google_workspace.drive.shared_drive_id`** +: The unique identifier of the Team Drive. Only populated for events relating to a Team Drive or item contained inside a Team Drive. + +type: keyword + + +**`google_workspace.drive.visibility`** +: Visibility of target file. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/drive](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/drive) + +type: keyword + + +**`google_workspace.drive.new_value`** +: When a setting or property of the file changes, the new value for it will appear here. + +type: keyword + + +**`google_workspace.drive.old_value`** +: When a setting or property of the file changes, the old value for it will appear here. + +type: keyword + + +**`google_workspace.drive.sheets_import_range_recipient_doc`** +: Doc ID of the recipient of a sheets import range. + +type: keyword + + +**`google_workspace.drive.old_visibility`** +: When visibility changes, this holds the old value. + +type: keyword + + +**`google_workspace.drive.visibility_change`** +: When visibility changes, this holds the new overall visibility of the file. + +type: keyword + + +**`google_workspace.drive.target_domain`** +: The domain for which the access scope was changed. This can also be the alias *all* to indicate the access scope was changed for all domains that have visibility for this document. + +type: keyword + + +**`google_workspace.drive.added_role`** +: Added membership role of a user/group in a Team Drive. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/drive](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/drive) + +type: keyword + + +**`google_workspace.drive.membership_change_type`** +: Type of change in Team Drive membership of a user/group. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/drive](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/drive) + +type: keyword + + +**`google_workspace.drive.shared_drive_settings_change_type`** +: Type of change in Team Drive settings. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/drive](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/drive) + +type: keyword + + +**`google_workspace.drive.removed_role`** +: Removed membership role of a user/group in a Team Drive. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/drive](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/drive) + +type: keyword + + +**`google_workspace.drive.target`** +: Target user or group. 
+ +type: keyword + + +**`google_workspace.groups.acl_permission`** +: Group permission setting updated. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/groups](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/groups) + +type: keyword + + +**`google_workspace.groups.email`** +: Group email. + +type: keyword + + +**`google_workspace.groups.member.email`** +: Member email. + +type: keyword + + +**`google_workspace.groups.member.role`** +: Member role. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/groups](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/groups) + +type: keyword + + +**`google_workspace.groups.setting`** +: Group setting updated. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/groups](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/groups) + +type: keyword + + +**`google_workspace.groups.new_value`** +: New value(s) of the group setting. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/groups](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/groups) + +type: keyword + + +**`google_workspace.groups.old_value`** +: Old value(s) of the group setting. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/groups](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/groups) + +type: keyword + + +**`google_workspace.groups.value`** +: Value of the group setting. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/groups](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/groups) + +type: keyword + + +**`google_workspace.groups.message.id`** +: SMTP message ID of an email message. Present for moderation events. + +type: keyword + + +**`google_workspace.groups.message.moderation_action`** +: Message moderation action. Possible values are `approved` and `rejected`. + +type: keyword + + +**`google_workspace.groups.status`** +: A status describing the output of an operation. Possible values are `failed` and `succeeded`. + +type: keyword + + +**`google_workspace.login.affected_email_address`** +: type: keyword + + +**`google_workspace.login.challenge_method`** +: Login challenge method. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/login](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/login). + +type: keyword + + +**`google_workspace.login.failure_type`** +: Login failure type. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/login](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/login). + +type: keyword + + +**`google_workspace.login.type`** +: Login credentials type. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/login](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/login). + +type: keyword + + +**`google_workspace.login.is_second_factor`** +: type: boolean + + +**`google_workspace.login.is_suspicious`** +: type: boolean + + +**`google_workspace.saml.application_name`** +: SAML SP application name. 
+ +type: keyword + + +**`google_workspace.saml.failure_type`** +: Login failure type. For a list of possible values refer to [https://developers.google.com/admin-sdk/reports/v1/appendix/activity/saml](https://developers.google.com/admin-sdk/reports/v1/appendix/activity/saml). + +type: keyword + + +**`google_workspace.saml.initiated_by`** +: Requester of SAML authentication. + +type: keyword + + +**`google_workspace.saml.orgunit_path`** +: User orgunit. + +type: keyword + + +**`google_workspace.saml.status_code`** +: SAML status code. + +type: keyword + + +**`google_workspace.saml.second_level_status_code`** +: SAML second level status code. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-haproxy.md b/docs/reference/filebeat/exported-fields-haproxy.md new file mode 100644 index 000000000000..1c52c481ad4b --- /dev/null +++ b/docs/reference/filebeat/exported-fields-haproxy.md @@ -0,0 +1,284 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-haproxy.html +--- + +# HAProxy fields [exported-fields-haproxy] + +haproxy Module + + +## haproxy [_haproxy] + +**`haproxy.frontend_name`** +: Name of the frontend (or listener) which received and processed the connection. + + +**`haproxy.backend_name`** +: Name of the backend (or listener) which was selected to manage the connection to the server. + + +**`haproxy.server_name`** +: Name of the last server to which the connection was sent. + + +**`haproxy.total_waiting_time_ms`** +: Total time in milliseconds spent waiting in the various queues. + +type: long + + +**`haproxy.connection_wait_time_ms`** +: Total time in milliseconds spent waiting for the connection to establish to the final server. + +type: long + + +**`haproxy.bytes_read`** +: Total number of bytes transmitted to the client when the log is emitted. + +type: long + + +**`haproxy.time_queue`** +: Total time in milliseconds spent waiting in the various queues. + +type: long + + +**`haproxy.time_backend_connect`** +: Total time in milliseconds spent waiting for the connection to establish to the final server, including retries. + +type: long + + +**`haproxy.server_queue`** +: Total number of requests which were processed before this one in the server queue. + +type: long + + +**`haproxy.backend_queue`** +: Total number of requests which were processed before this one in the backend’s global queue. + +type: long + + +**`haproxy.bind_name`** +: Name of the listening address which received the connection. + + +**`haproxy.error_message`** +: Error message logged by HAProxy in case of error. + +type: text + + +**`haproxy.source`** +: The HAProxy source of the log. + +type: keyword + + +**`haproxy.termination_state`** +: Condition the session was in when the session ended. + + +**`haproxy.mode`** +: Mode that the frontend is operating in (TCP or HTTP). + +type: keyword + + + +## connections [_connections] + +Contains various counts of connections active in the process. + +**`haproxy.connections.active`** +: Total number of concurrent connections on the process when the session was logged. + +type: long + + +**`haproxy.connections.frontend`** +: Total number of concurrent connections on the frontend when the session was logged. + +type: long + + +**`haproxy.connections.backend`** +: Total number of concurrent connections handled by the backend when the session was logged. + +type: long + + +**`haproxy.connections.server`** +: Total number of concurrent connections still active on the server when the session was logged. 
+ +type: long + + +**`haproxy.connections.retries`** +: Number of connection retries experienced by this session when trying to connect to the server. + +type: long + + + +## client [_client_2] + +Information about the client making the request. + +**`haproxy.client.ip`** +: type: alias + +alias to: source.address + + +**`haproxy.client.port`** +: type: alias + +alias to: source.port + + +**`haproxy.process_name`** +: type: alias + +alias to: process.name + + +**`haproxy.pid`** +: type: alias + +alias to: process.pid + + + +## destination [_destination_2] + +Destination information. + +**`haproxy.destination.port`** +: type: alias + +alias to: destination.port + + +**`haproxy.destination.ip`** +: type: alias + +alias to: destination.ip + + + +## geoip [_geoip] + +Contains GeoIP information gathered based on the client.ip field. Only present if the GeoIP Elasticsearch plugin is available and used. + +**`haproxy.geoip.continent_name`** +: type: alias + +alias to: source.geo.continent_name + + +**`haproxy.geoip.country_iso_code`** +: type: alias + +alias to: source.geo.country_iso_code + + +**`haproxy.geoip.location`** +: type: alias + +alias to: source.geo.location + + +**`haproxy.geoip.region_name`** +: type: alias + +alias to: source.geo.region_name + + +**`haproxy.geoip.city_name`** +: type: alias + +alias to: source.geo.city_name + + +**`haproxy.geoip.region_iso_code`** +: type: alias + +alias to: source.geo.region_iso_code + + + +## http [_http_2] + +Fields related to HTTP requests and responses. + + +## response [_response_2] + +Fields related to the HTTP response + +**`haproxy.http.response.captured_cookie`** +: Optional "name=value" entry indicating that the server returned this cookie with the response. + + +**`haproxy.http.response.captured_headers`** +: List of headers captured in the response due to the presence of the "capture response header" statement in the frontend. + +type: keyword + + +**`haproxy.http.response.status_code`** +: type: alias + +alias to: http.response.status_code + + + +## request [_request_2] + +Fields related to the HTTP request + +**`haproxy.http.request.captured_cookie`** +: Optional "name=value" entry indicating that the client sent this cookie with the request. + + +**`haproxy.http.request.captured_headers`** +: List of headers captured in the request due to the presence of the "capture request header" statement in the frontend. + +type: keyword + + +**`haproxy.http.request.raw_request_line`** +: Complete HTTP request line, including the method, request and HTTP version string. + +type: keyword + + +**`haproxy.http.request.time_wait_without_data_ms`** +: Total time in milliseconds spent waiting for the server to send a full HTTP response, not counting data. + +type: long + + +**`haproxy.http.request.time_wait_ms`** +: Total time in milliseconds spent waiting for a full HTTP request from the client (not counting body) after the first byte was received. 
+ +type: long + + + +## tcp [_tcp] + +TCP log format. + +**`haproxy.tcp.connection_waiting_time_ms`** +: Total time in milliseconds elapsed between the accept and the last close. + +type: long + + diff --git a/docs/reference/filebeat/exported-fields-host-processor.md b/docs/reference/filebeat/exported-fields-host-processor.md new file mode 100644 index 000000000000..f4eacbcc6060 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-host-processor.md @@ -0,0 +1,31 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-host-processor.html +--- + +# Host fields [exported-fields-host-processor] + +Info collected for the host machine. + +**`host.containerized`** +: If the host is a container. + +type: boolean + + +**`host.os.build`** +: OS build information. + +type: keyword + +example: 18D109 + + +**`host.os.codename`** +: OS codename, if any. + +type: keyword + +example: stretch + + diff --git a/docs/reference/filebeat/exported-fields-ibmmq.md b/docs/reference/filebeat/exported-fields-ibmmq.md new file mode 100644 index 000000000000..c2e371fad7a8 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-ibmmq.md @@ -0,0 +1,67 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-ibmmq.html +--- + +# ibmmq fields [exported-fields-ibmmq] + +ibmmq Module + + +## ibmmq [_ibmmq] + + +## errorlog [_errorlog] + +IBM MQ error logs + +**`ibmmq.errorlog.installation`** +: This is the installation name which can be given at installation time. Each installation of IBM MQ on UNIX, Linux, and Windows, has a unique identifier known as an installation name. The installation name is used to associate things such as queue managers and configuration files with an installation. + +type: keyword + + +**`ibmmq.errorlog.qmgr`** +: Name of the queue manager. Queue managers provide queuing services to applications, and manage the queues that belong to them. + +type: keyword + + +**`ibmmq.errorlog.arithinsert`** +: Variable content that changes based on error.id. + +type: keyword + + +**`ibmmq.errorlog.commentinsert`** +: Variable content that changes based on error.id. + +type: keyword + + +**`ibmmq.errorlog.errordescription`** +: Description of the error. + +type: text + + +**`ibmmq.errorlog.explanation`** +: Explains the error in more detail. + +type: keyword + + +**`ibmmq.errorlog.action`** +: Defines what to do when the error occurs. + +type: keyword + + +**`ibmmq.errorlog.code`** +: Error code. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-icinga.md b/docs/reference/filebeat/exported-fields-icinga.md new file mode 100644 index 000000000000..7d2c0a731e36 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-icinga.md @@ -0,0 +1,81 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-icinga.html +--- + +# Icinga fields [exported-fields-icinga] + +Icinga Module + + +## icinga [_icinga] + + +## debug [_debug_2] + +Contains fields for the Icinga debug logs. + +**`icinga.debug.facility`** +: Specifies what component of Icinga logged the message. + +type: keyword + + +**`icinga.debug.severity`** +: type: alias + +alias to: log.level + + +**`icinga.debug.message`** +: type: alias + +alias to: message + + + +## main [_main] + +Contains fields for the Icinga main logs. + +**`icinga.main.facility`** +: Specifies what component of Icinga logged the message. 
+ +type: keyword + + +**`icinga.main.severity`** +: type: alias + +alias to: log.level + + +**`icinga.main.message`** +: type: alias + +alias to: message + + + +## startup [_startup] + +Contains fields for the Icinga startup logs. + +**`icinga.startup.facility`** +: Specifies what component of Icinga logged the message. + +type: keyword + + +**`icinga.startup.severity`** +: type: alias + +alias to: log.level + + +**`icinga.startup.message`** +: type: alias + +alias to: message + + diff --git a/docs/reference/filebeat/exported-fields-iis.md b/docs/reference/filebeat/exported-fields-iis.md new file mode 100644 index 000000000000..759ecb8d7de8 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-iis.md @@ -0,0 +1,294 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-iis.html +--- + +# IIS fields [exported-fields-iis] + +Module for parsing IIS log files. + + +## iis [_iis] + +Fields from IIS log files. + + +## access [_access_2] + +Contains fields for IIS access logs. + +**`iis.access.sub_status`** +: The HTTP substatus code. + +type: long + + +**`iis.access.win32_status`** +: The Windows status code. + +type: long + + +**`iis.access.site_name`** +: The site name and instance number. + +type: keyword + + +**`iis.access.server_name`** +: The name of the server on which the log file entry was generated. + +type: keyword + + +**`iis.access.cookie`** +: The content of the cookie sent or received, if any. + +type: keyword + + +**`iis.access.body_received.bytes`** +: type: alias + +alias to: http.request.body.bytes + + +**`iis.access.body_sent.bytes`** +: type: alias + +alias to: http.response.body.bytes + + +**`iis.access.server_ip`** +: type: alias + +alias to: destination.address + + +**`iis.access.method`** +: type: alias + +alias to: http.request.method + + +**`iis.access.url`** +: type: alias + +alias to: url.path + + +**`iis.access.query_string`** +: type: alias + +alias to: url.query + + +**`iis.access.port`** +: type: alias + +alias to: destination.port + + +**`iis.access.user_name`** +: type: alias + +alias to: user.name + + +**`iis.access.remote_ip`** +: type: alias + +alias to: source.address + + +**`iis.access.referrer`** +: type: alias + +alias to: http.request.referrer + + +**`iis.access.response_code`** +: type: alias + +alias to: http.response.status_code + + +**`iis.access.http_version`** +: type: alias + +alias to: http.version + + +**`iis.access.hostname`** +: type: alias + +alias to: host.hostname + + +**`iis.access.user_agent.device`** +: type: alias + +alias to: user_agent.device.name + + +**`iis.access.user_agent.name`** +: type: alias + +alias to: user_agent.name + + +**`iis.access.user_agent.os`** +: type: alias + +alias to: user_agent.os.full_name + + +**`iis.access.user_agent.os_name`** +: type: alias + +alias to: user_agent.os.name + + +**`iis.access.user_agent.original`** +: type: alias + +alias to: user_agent.original + + +**`iis.access.geoip.continent_name`** +: type: alias + +alias to: source.geo.continent_name + + +**`iis.access.geoip.country_iso_code`** +: type: alias + +alias to: source.geo.country_iso_code + + +**`iis.access.geoip.location`** +: type: alias + +alias to: source.geo.location + + +**`iis.access.geoip.region_name`** +: type: alias + +alias to: source.geo.region_name + + +**`iis.access.geoip.city_name`** +: type: alias + +alias to: source.geo.city_name + + +**`iis.access.geoip.region_iso_code`** +: type: alias + +alias to: source.geo.region_iso_code + + + +## error [_error_3] + +Contains fields 
for IIS error logs. + +**`iis.error.reason_phrase`** +: The HTTP reason phrase. + +type: keyword + + +**`iis.error.queue_name`** +: The IIS application pool name. + +type: keyword + + +**`iis.error.remote_ip`** +: type: alias + +alias to: source.address + + +**`iis.error.remote_port`** +: type: alias + +alias to: source.port + + +**`iis.error.server_ip`** +: type: alias + +alias to: destination.address + + +**`iis.error.server_port`** +: type: alias + +alias to: destination.port + + +**`iis.error.http_version`** +: type: alias + +alias to: http.version + + +**`iis.error.method`** +: type: alias + +alias to: http.request.method + + +**`iis.error.url`** +: type: alias + +alias to: url.original + + +**`iis.error.response_code`** +: type: alias + +alias to: http.response.status_code + + +**`iis.error.geoip.continent_name`** +: type: alias + +alias to: source.geo.continent_name + + +**`iis.error.geoip.country_iso_code`** +: type: alias + +alias to: source.geo.country_iso_code + + +**`iis.error.geoip.location`** +: type: alias + +alias to: source.geo.location + + +**`iis.error.geoip.region_name`** +: type: alias + +alias to: source.geo.region_name + + +**`iis.error.geoip.city_name`** +: type: alias + +alias to: source.geo.city_name + + +**`iis.error.geoip.region_iso_code`** +: type: alias + +alias to: source.geo.region_iso_code + + diff --git a/docs/reference/filebeat/exported-fields-iptables.md b/docs/reference/filebeat/exported-fields-iptables.md new file mode 100644 index 000000000000..ccce9163ad4f --- /dev/null +++ b/docs/reference/filebeat/exported-fields-iptables.md @@ -0,0 +1,202 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-iptables.html +--- + +# iptables fields [exported-fields-iptables] + +Module for handling the iptables logs. + + +## iptables [_iptables] + +Fields from the iptables logs. + +**`iptables.ether_type`** +: Value of the ethernet type field identifying the network layer protocol. + +type: long + + +**`iptables.flow_label`** +: IPv6 flow label. + +type: integer + + +**`iptables.fragment_flags`** +: IP fragment flags. A combination of CE, DF and MF. + +type: keyword + + +**`iptables.fragment_offset`** +: Offset of the current IP fragment. + +type: long + + + +## icmp [_icmp] + +ICMP fields. + +**`iptables.icmp.code`** +: ICMP code. + +type: long + + +**`iptables.icmp.id`** +: ICMP ID. + +type: long + + +**`iptables.icmp.parameter`** +: ICMP parameter. + +type: long + + +**`iptables.icmp.redirect`** +: ICMP redirect address. + +type: ip + + +**`iptables.icmp.seq`** +: ICMP sequence number. + +type: long + + +**`iptables.icmp.type`** +: ICMP type. + +type: long + + +**`iptables.id`** +: Packet identifier. + +type: long + + +**`iptables.incomplete_bytes`** +: Number of incomplete bytes. + +type: long + + +**`iptables.input_device`** +: Device that received the packet. + +type: keyword + + +**`iptables.precedence_bits`** +: IP precedence bits. + +type: short + + +**`iptables.tos`** +: IP Type of Service field. + +type: long + + +**`iptables.length`** +: Packet length. + +type: long + + +**`iptables.output_device`** +: Device that output the packet. + +type: keyword + + + +## tcp [_tcp_2] + +TCP fields. + +**`iptables.tcp.flags`** +: TCP flags. + +type: keyword + + +**`iptables.tcp.reserved_bits`** +: TCP reserved bits. + +type: short + + +**`iptables.tcp.seq`** +: TCP sequence number. + +type: long + + +**`iptables.tcp.ack`** +: TCP Acknowledgment number. 
+ +type: long + + +**`iptables.tcp.window`** +: Advertised TCP window size. + +type: long + + +**`iptables.ttl`** +: Time To Live field. + +type: integer + + + +## udp [_udp] + +UDP fields. + +**`iptables.udp.length`** +: Length of the UDP header and payload. + +type: long + + + +## ubiquiti [_ubiquiti] + +Fields for Ubiquiti network devices. + +**`iptables.ubiquiti.input_zone`** +: Input zone. + +type: keyword + + +**`iptables.ubiquiti.output_zone`** +: Output zone. + +type: keyword + + +**`iptables.ubiquiti.rule_number`** +: The rule number within the rule set. + +type: keyword + + +**`iptables.ubiquiti.rule_set`** +: The rule set name. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-jolokia-autodiscover.md b/docs/reference/filebeat/exported-fields-jolokia-autodiscover.md new file mode 100644 index 000000000000..b9023a51b82c --- /dev/null +++ b/docs/reference/filebeat/exported-fields-jolokia-autodiscover.md @@ -0,0 +1,51 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-jolokia-autodiscover.html +--- + +# Jolokia Discovery autodiscover provider fields [exported-fields-jolokia-autodiscover] + +Metadata from Jolokia Discovery added by the jolokia provider. + +**`jolokia.agent.version`** +: Version number of the Jolokia agent. + +type: keyword + + +**`jolokia.agent.id`** +: Each agent has a unique ID which can either be provided during startup of the agent in the form of a configuration parameter or be autodetected. If autodetected, the ID has several parts: the IP, the process ID, the hashcode of the agent, and its type. + +type: keyword + + +**`jolokia.server.product`** +: The container product if detected. + +type: keyword + + +**`jolokia.server.version`** +: The container’s version (if detected). + +type: keyword + + +**`jolokia.server.vendor`** +: The vendor of the container the agent is running in. + +type: keyword + + +**`jolokia.url`** +: The URL at which this agent can be contacted. + +type: keyword + + +**`jolokia.secured`** +: Whether the agent was configured for authentication or not. + +type: boolean + + diff --git a/docs/reference/filebeat/exported-fields-juniper.md b/docs/reference/filebeat/exported-fields-juniper.md new file mode 100644 index 000000000000..762004b9d07f --- /dev/null +++ b/docs/reference/filebeat/exported-fields-juniper.md @@ -0,0 +1,590 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-juniper.html +--- + +# Juniper JUNOS fields [exported-fields-juniper] + +juniper fields. + + +## juniper.srx [_juniper_srx] + +Module for parsing Juniper SRX syslog. 
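The fields below are populated by the module's `srx` fileset. As a minimal sketch of how the module is typically enabled, assuming the usual Filebeat module layout (the `var.*` names are an assumption to verify against the module documentation for your Filebeat version):

```yaml
# modules.d/juniper.yml -- minimal sketch, not a definitive reference.
# The var.* option names below are assumptions; confirm them with
# `filebeat modules list` and the juniper module docs for your version.
- module: juniper
  srx:
    enabled: true
    var.input: udp          # receive SRX structured syslog over UDP
    var.syslog_host: 0.0.0.0
    var.syslog_port: 9006
```

Once the fileset is enabled, events received from the device are parsed into the `juniper.srx.*` fields listed below.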
+ +**`juniper.srx.reason`** +: reason + +type: keyword + + +**`juniper.srx.connection_tag`** +: connection tag + +type: keyword + + +**`juniper.srx.service_name`** +: service name + +type: keyword + + +**`juniper.srx.nat_connection_tag`** +: nat connection tag + +type: keyword + + +**`juniper.srx.src_nat_rule_type`** +: src nat rule type + +type: keyword + + +**`juniper.srx.src_nat_rule_name`** +: src nat rule name + +type: keyword + + +**`juniper.srx.dst_nat_rule_type`** +: dst nat rule type + +type: keyword + + +**`juniper.srx.dst_nat_rule_name`** +: dst nat rule name + +type: keyword + + +**`juniper.srx.protocol_id`** +: protocol id + +type: keyword + + +**`juniper.srx.policy_name`** +: policy name + +type: keyword + + +**`juniper.srx.session_id_32`** +: session id 32 + +type: keyword + + +**`juniper.srx.session_id`** +: session id + +type: keyword + + +**`juniper.srx.outbound_packets`** +: packets from client + +type: integer + + +**`juniper.srx.outbound_bytes`** +: bytes from client + +type: integer + + +**`juniper.srx.inbound_packets`** +: packets from server + +type: integer + + +**`juniper.srx.inbound_bytes`** +: bytes from server + +type: integer + + +**`juniper.srx.elapsed_time`** +: elapsed time + +type: date + + +**`juniper.srx.application`** +: application + +type: keyword + + +**`juniper.srx.nested_application`** +: nested application + +type: keyword + + +**`juniper.srx.username`** +: username + +type: keyword + + +**`juniper.srx.roles`** +: roles + +type: keyword + + +**`juniper.srx.encrypted`** +: encrypted + +type: keyword + + +**`juniper.srx.application_category`** +: application category + +type: keyword + + +**`juniper.srx.application_sub_category`** +: application sub category + +type: keyword + + +**`juniper.srx.application_characteristics`** +: application characteristics + +type: keyword + + +**`juniper.srx.secure_web_proxy_session_type`** +: secure web proxy session type + +type: keyword + + +**`juniper.srx.peer_session_id`** +: peer session id + +type: keyword + + +**`juniper.srx.peer_source_address`** +: peer source address + +type: ip + + +**`juniper.srx.peer_source_port`** +: peer source port + +type: integer + + +**`juniper.srx.peer_destination_address`** +: peer destination address + +type: ip + + +**`juniper.srx.peer_destination_port`** +: peer destination port + +type: integer + + +**`juniper.srx.hostname`** +: hostname + +type: keyword + + +**`juniper.srx.src_vrf_grp`** +: src_vrf_grp + +type: keyword + + +**`juniper.srx.dst_vrf_grp`** +: dst_vrf_grp + +type: keyword + + +**`juniper.srx.icmp_type`** +: icmp type + +type: integer + + +**`juniper.srx.process`** +: process that generated the message + +type: keyword + + +**`juniper.srx.apbr_rule_type`** +: apbr rule type + +type: keyword + + +**`juniper.srx.dscp_value`** +: dscp value + +type: integer + + +**`juniper.srx.logical_system_name`** +: logical system name + +type: keyword + + +**`juniper.srx.profile_name`** +: profile name + +type: keyword + + +**`juniper.srx.routing_instance`** +: routing instance + +type: keyword + + +**`juniper.srx.rule_name`** +: rule name + +type: keyword + + +**`juniper.srx.uplink_tx_bytes`** +: uplink tx bytes + +type: integer + + +**`juniper.srx.uplink_rx_bytes`** +: uplink rx bytes + +type: integer + + +**`juniper.srx.obj`** +: url path + +type: keyword + + +**`juniper.srx.url`** +: url domain + +type: keyword + + +**`juniper.srx.profile`** +: filter profile + +type: keyword + + +**`juniper.srx.category`** +: filter category + +type: keyword + + 
+**`juniper.srx.filename`** +: filename + +type: keyword + + +**`juniper.srx.temporary_filename`** +: temporary_filename + +type: keyword + + +**`juniper.srx.name`** +: name + +type: keyword + + +**`juniper.srx.error_message`** +: error_message + +type: keyword + + +**`juniper.srx.error_code`** +: error_code + +type: keyword + + +**`juniper.srx.action`** +: action + +type: keyword + + +**`juniper.srx.protocol`** +: protocol + +type: keyword + + +**`juniper.srx.protocol_name`** +: protocol name + +type: keyword + + +**`juniper.srx.type`** +: type + +type: keyword + + +**`juniper.srx.repeat_count`** +: repeat count + +type: integer + + +**`juniper.srx.alert`** +: repeat alert + +type: keyword + + +**`juniper.srx.message_type`** +: message type + +type: keyword + + +**`juniper.srx.threat_severity`** +: threat severity + +type: keyword + + +**`juniper.srx.application_name`** +: application name + +type: keyword + + +**`juniper.srx.attack_name`** +: attack name + +type: keyword + + +**`juniper.srx.index`** +: index + +type: keyword + + +**`juniper.srx.message`** +: message + +type: keyword + + +**`juniper.srx.epoch_time`** +: epoch time + +type: date + + +**`juniper.srx.packet_log_id`** +: packet log id + +type: integer + + +**`juniper.srx.export_id`** +: export id + +type: integer + + +**`juniper.srx.ddos_application_name`** +: ddos application name + +type: keyword + + +**`juniper.srx.connection_hit_rate`** +: connection hit rate + +type: integer + + +**`juniper.srx.time_scope`** +: time scope + +type: keyword + + +**`juniper.srx.context_hit_rate`** +: context hit rate + +type: integer + + +**`juniper.srx.context_value_hit_rate`** +: context value hit rate + +type: integer + + +**`juniper.srx.time_count`** +: time count + +type: integer + + +**`juniper.srx.time_period`** +: time period + +type: integer + + +**`juniper.srx.context_value`** +: context value + +type: keyword + + +**`juniper.srx.context_name`** +: context name + +type: keyword + + +**`juniper.srx.ruleebase_name`** +: rulebase name + +type: keyword + + +**`juniper.srx.verdict_source`** +: verdict source + +type: keyword + + +**`juniper.srx.verdict_number`** +: verdict number + +type: integer + + +**`juniper.srx.file_category`** +: file category + +type: keyword + + +**`juniper.srx.sample_sha256`** +: sample sha256 + +type: keyword + + +**`juniper.srx.malware_info`** +: malware info + +type: keyword + + +**`juniper.srx.client_ip`** +: client ip + +type: ip + + +**`juniper.srx.tenant_id`** +: tenant id + +type: keyword + + +**`juniper.srx.timestamp`** +: timestamp + +type: date + + +**`juniper.srx.th`** +: th + +type: keyword + + +**`juniper.srx.status`** +: status + +type: keyword + + +**`juniper.srx.state`** +: state + +type: keyword + + +**`juniper.srx.file_hash_lookup`** +: file hash lookup + +type: keyword + + +**`juniper.srx.file_name`** +: file name + +type: keyword + + +**`juniper.srx.action_detail`** +: action detail + +type: keyword + + +**`juniper.srx.sub_category`** +: sub category + +type: keyword + + +**`juniper.srx.feed_name`** +: feed name + +type: keyword + + +**`juniper.srx.occur_count`** +: occur count + +type: integer + + +**`juniper.srx.tag`** +: system log message tag, which uniquely identifies the message. 
+ +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-kafka.md b/docs/reference/filebeat/exported-fields-kafka.md new file mode 100644 index 000000000000..8e8c9b6df493 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-kafka.md @@ -0,0 +1,52 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-kafka.html +--- + +# Kafka fields [exported-fields-kafka] + +Kafka module + + +## kafka [_kafka] + + +## log [_log_5] + +Kafka log lines. + +**`kafka.log.component`** +: Component the log is coming from. + +type: keyword + + +**`kafka.log.class`** +: Java class the log is coming from. + +type: keyword + + +**`kafka.log.thread`** +: Thread name the log is coming from. + +type: keyword + + + +## trace [_trace] + +Trace in the log line. + +**`kafka.log.trace.class`** +: Java class the trace is coming from. + +type: keyword + + +**`kafka.log.trace.message`** +: Message part of the trace. + +type: text + + diff --git a/docs/reference/filebeat/exported-fields-kibana.md b/docs/reference/filebeat/exported-fields-kibana.md new file mode 100644 index 000000000000..05205fb07b21 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-kibana.md @@ -0,0 +1,135 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-kibana.html +--- + +# kibana fields [exported-fields-kibana] + +kibana Module + +**`service.node.roles`** +: type: keyword + + + +## kibana [_kibana] + +Module for parsing Kibana logs. + +**`kibana.session_id`** +: The ID of the user session associated with this event. Each login attempt results in a unique session id. + +type: keyword + +example: 123e4567-e89b-12d3-a456-426614174000 + + +**`kibana.space_id`** +: The id of the space associated with this event. + +type: keyword + +example: default + + +**`kibana.saved_object.type`** +: The type of the saved object associated with this event. + +type: keyword + +example: dashboard + + +**`kibana.saved_object.id`** +: The id of the saved object associated with this event. + +type: keyword + +example: 6295bdd0-0a0e-11e7-825f-6748cda7d858 + + +**`kibana.saved_object.name`** +: The name of the saved object associated with this event. + +type: keyword + +example: my-saved-object + + +**`kibana.add_to_spaces`** +: The set of space ids that a saved object was shared to. + +type: keyword + +example: [*default*, *marketing*] + + +**`kibana.delete_from_spaces`** +: The set of space ids that a saved object was removed from. + +type: keyword + +example: [*default*, *marketing*] + + +**`kibana.authentication_provider`** +: The authentication provider associated with a login event. + +type: keyword + +example: basic1 + + +**`kibana.authentication_type`** +: The authentication provider type associated with a login event. + +type: keyword + +example: basic + + +**`kibana.authentication_realm`** +: The Elasticsearch authentication realm name which fulfilled a login event. + +type: keyword + +example: native + + +**`kibana.lookup_realm`** +: The Elasticsearch lookup realm which fulfilled a login event. + +type: keyword + +example: native + + + +## log [_log_6] + +Kibana log lines. + +**`kibana.log.tags`** +: Kibana logging tags. + +type: keyword + + +**`kibana.log.state`** +: Current state of Kibana. 
+ +type: keyword + + +**`kibana.log.meta`** +: type: object + + +**`kibana.log.meta.req.headers`** +: type: flattened + + +**`kibana.log.meta.res.headers`** +: type: flattened + + diff --git a/docs/reference/filebeat/exported-fields-kubernetes-processor.md b/docs/reference/filebeat/exported-fields-kubernetes-processor.md new file mode 100644 index 000000000000..8d8f2d6e83a1 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-kubernetes-processor.md @@ -0,0 +1,87 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-kubernetes-processor.html +--- + +# Kubernetes fields [exported-fields-kubernetes-processor] + +Kubernetes metadata added by the kubernetes processor + +**`kubernetes.pod.name`** +: Kubernetes pod name + +type: keyword + + +**`kubernetes.pod.uid`** +: Kubernetes Pod UID + +type: keyword + + +**`kubernetes.pod.ip`** +: Kubernetes Pod IP + +type: ip + + +**`kubernetes.namespace`** +: Kubernetes namespace + +type: keyword + + +**`kubernetes.node.name`** +: Kubernetes node name + +type: keyword + + +**`kubernetes.node.hostname`** +: Kubernetes hostname as reported by the node’s kernel + +type: keyword + + +**`kubernetes.labels.*`** +: Kubernetes labels map + +type: object + + +**`kubernetes.annotations.*`** +: Kubernetes annotations map + +type: object + + +**`kubernetes.selectors.*`** +: Kubernetes selectors map + +type: object + + +**`kubernetes.replicaset.name`** +: Kubernetes replicaset name + +type: keyword + + +**`kubernetes.deployment.name`** +: Kubernetes deployment name + +type: keyword + + +**`kubernetes.statefulset.name`** +: Kubernetes statefulset name + +type: keyword + + +**`kubernetes.container.name`** +: Kubernetes container name (different than the name from the runtime) + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-log.md b/docs/reference/filebeat/exported-fields-log.md new file mode 100644 index 000000000000..e2e0908ddacc --- /dev/null +++ b/docs/reference/filebeat/exported-fields-log.md @@ -0,0 +1,207 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-log.html +--- + +# Log file content fields [exported-fields-log] + +Contains log file lines. + +**`log.source.address`** +: Source address from which the log event was read / sent from. + +type: keyword + +required: False + + +**`log.offset`** +: The file offset the reported line starts at. + +type: long + +required: False + + +**`stream`** +: Log stream when reading container logs, can be *stdout* or *stderr* + +type: keyword + +required: False + + +**`input.type`** +: The input type from which the event was generated. This field is set to the value specified for the `type` option in the input section of the Filebeat config file. + +required: True + + +**`syslog.facility`** +: The facility extracted from the priority. + +type: long + +required: False + + +**`syslog.priority`** +: The priority of the syslog event. + +type: long + +required: False + + +**`syslog.severity_label`** +: The human readable severity. + +type: keyword + +required: False + + +**`syslog.facility_label`** +: The human readable facility. + +type: keyword + +required: False + + +**`process.program`** +: The name of the program. + +type: keyword + +required: False + + +**`log.flags`** +: This field contains the flags of the event. 
+ + +**`http.response.content_length`** +: type: alias + +alias to: http.response.body.bytes + + +**`user_agent.os.full_name`** +: type: keyword + + +**`fileset.name`** +: The Filebeat fileset that generated this event. + +type: keyword + + +**`fileset.module`** +: type: alias + +alias to: event.module + + +**`read_timestamp`** +: type: alias + +alias to: event.created + + +**`docker.attrs`** +: docker.attrs contains labels and environment variables written by docker’s JSON File logging driver. These fields are only available when they are configured in the logging driver options. + +type: object + + +**`icmp.code`** +: ICMP code. + +type: keyword + + +**`icmp.type`** +: ICMP type. + +type: keyword + + +**`igmp.type`** +: IGMP type. + +type: keyword + + +**`azure.eventhub`** +: Name of the eventhub. + +type: keyword + + +**`azure.offset`** +: The offset. + +type: long + + +**`azure.enqueued_time`** +: The enqueued time. + +type: date + + +**`azure.partition_id`** +: The partition id. + +type: long + + +**`azure.consumer_group`** +: The consumer group. + +type: keyword + + +**`azure.sequence_number`** +: The sequence number. + +type: long + + +**`kafka.topic`** +: Kafka topic + +type: keyword + + +**`kafka.partition`** +: Kafka partition number + +type: long + + +**`kafka.offset`** +: Kafka offset of this message + +type: long + + +**`kafka.key`** +: Kafka key, corresponding to the Kafka value stored in the message + +type: keyword + + +**`kafka.block_timestamp`** +: Kafka outer (compressed) block timestamp + +type: date + + +**`kafka.headers`** +: An array of Kafka header strings for this message, in the form "<key>: <value>". + +type: array + + diff --git a/docs/reference/filebeat/exported-fields-logstash.md b/docs/reference/filebeat/exported-fields-logstash.md new file mode 100644 index 000000000000..29436edad0a7 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-logstash.md @@ -0,0 +1,140 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-logstash.html +--- + +# logstash fields [exported-fields-logstash] + +logstash Module + + +## logstash [_logstash] + + +## log [_log_7] + +Fields from the Logstash logs. + +**`logstash.log.module`** +: The module or class where the event originates. + +type: keyword + + +**`logstash.log.thread`** +: Information about the running thread where the log originates. + +type: keyword + + +**`logstash.log.thread.text`** +: type: text + + +**`logstash.log.log_event`** +: Key and value debugging information. + +type: object + + +**`logstash.log.log_event.action`** +: type: keyword + + +**`logstash.log.pipeline_id`** +: The ID of the pipeline. + +type: keyword + +example: main + + +**`logstash.log.message`** +: type: alias + +alias to: message + + +**`logstash.log.level`** +: type: alias + +alias to: log.level + + + +## slowlog [_slowlog_2] + +slowlog + +**`logstash.slowlog.module`** +: The module or class where the event originates. + +type: keyword + + +**`logstash.slowlog.thread`** +: Information about the running thread where the log originates. + +type: keyword + + +**`logstash.slowlog.thread.text`** +: type: text + + +**`logstash.slowlog.event`** +: Raw dump of the original event. + +type: keyword + + +**`logstash.slowlog.event.text`** +: type: text + + +**`logstash.slowlog.plugin_name`** +: Name of the plugin. + +type: keyword + + +**`logstash.slowlog.plugin_type`** +: Type of the plugin: Inputs, Filters, Outputs or Codecs. 
+ +type: keyword + + +**`logstash.slowlog.took_in_millis`** +: Execution time for the plugin in milliseconds. + +type: long + + +**`logstash.slowlog.plugin_params`** +: String value of the plugin configuration + +type: keyword + + +**`logstash.slowlog.plugin_params.text`** +: type: text + + +**`logstash.slowlog.plugin_params_object`** +: Key/value mapping of the configuration used by the plugin. + +type: object + + +**`logstash.slowlog.level`** +: type: alias + +alias to: log.level + + +**`logstash.slowlog.took_in_nanos`** +: type: alias + +alias to: event.duration + + diff --git a/docs/reference/filebeat/exported-fields-lumberjack.md b/docs/reference/filebeat/exported-fields-lumberjack.md new file mode 100644 index 000000000000..d8614d9d91a0 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-lumberjack.md @@ -0,0 +1,15 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-lumberjack.html +--- + +# Lumberjack fields [exported-fields-lumberjack] + +Fields from Lumberjack input. + +**`lumberjack`** +: Structured data received in an event sent over the Lumberjack protocol. + +type: flattened + + diff --git a/docs/reference/filebeat/exported-fields-microsoft.md b/docs/reference/filebeat/exported-fields-microsoft.md new file mode 100644 index 000000000000..66ffb0c8ebc9 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-microsoft.md @@ -0,0 +1,373 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-microsoft.html +--- + +# Microsoft fields [exported-fields-microsoft] + +Microsoft Module + + +## microsoft.defender_atp [_microsoft_defender_atp] + +Module for ingesting Microsoft Defender ATP. + +**`microsoft.defender_atp.lastUpdateTime`** +: The date and time (in UTC) the alert was last updated. + +type: date + + +**`microsoft.defender_atp.resolvedTime`** +: The date and time at which the status of the alert was changed to *Resolved*. + +type: date + + +**`microsoft.defender_atp.incidentId`** +: The Incident ID of the Alert. + +type: keyword + + +**`microsoft.defender_atp.investigationId`** +: The Investigation ID related to the Alert. + +type: keyword + + +**`microsoft.defender_atp.investigationState`** +: The current state of the Investigation. + +type: keyword + + +**`microsoft.defender_atp.assignedTo`** +: Owner of the alert. + +type: keyword + + +**`microsoft.defender_atp.status`** +: Specifies the current status of the alert. Possible values are: *Unknown*, *New*, *InProgress* and *Resolved*. + +type: keyword + + +**`microsoft.defender_atp.classification`** +: Specification of the alert. Possible values are: *Unknown*, *FalsePositive*, *TruePositive*. + +type: keyword + + +**`microsoft.defender_atp.determination`** +: Specifies the determination of the alert. Possible values are: *NotAvailable*, *Apt*, *Malware*, *SecurityPersonnel*, *SecurityTesting*, *UnwantedSoftware*, *Other*. + +type: keyword + + +**`microsoft.defender_atp.threatFamilyName`** +: Threat family.
+ +type: keyword + + +**`microsoft.defender_atp.rbacGroupName`** +: User group related to the alert. + +type: keyword + + +**`microsoft.defender_atp.evidence.domainName`** +: Domain name related to the alert. + +type: keyword + + +**`microsoft.defender_atp.evidence.ipAddress`** +: IP address involved in the alert. + +type: ip + + +**`microsoft.defender_atp.evidence.aadUserId`** +: ID of the user involved in the alert. + +type: keyword + + +**`microsoft.defender_atp.evidence.accountName`** +: Username of the user involved in the alert. + +type: keyword + + +**`microsoft.defender_atp.evidence.entityType`** +: The type of evidence. + +type: keyword + + +**`microsoft.defender_atp.evidence.userPrincipalName`** +: Principal name of the user involved in the alert. + +type: keyword + + + +## microsoft.m365_defender [_microsoft_m365_defender] + +Module for ingesting Microsoft 365 Defender. + +**`microsoft.m365_defender.incidentId`** +: Unique identifier to represent the incident. + +type: keyword + + +**`microsoft.m365_defender.redirectIncidentId`** +: Only populated in case an incident is being grouped together with another incident, as part of the incident processing logic. + +type: keyword + + +**`microsoft.m365_defender.incidentName`** +: Name of the Incident. + +type: keyword + + +**`microsoft.m365_defender.determination`** +: Specifies the determination of the incident. The property values are: NotAvailable, Apt, Malware, SecurityPersonnel, SecurityTesting, UnwantedSoftware, Other. + +type: keyword + + +**`microsoft.m365_defender.investigationState`** +: The current state of the Investigation. + +type: keyword + + +**`microsoft.m365_defender.assignedTo`** +: Owner of the alert. + +type: keyword + + +**`microsoft.m365_defender.tags`** +: Array of custom tags associated with an incident, for example to flag a group of incidents with a common characteristic. + +type: keyword + + +**`microsoft.m365_defender.status`** +: Specifies the current status of the alert. Possible values are: *Unknown*, *New*, *InProgress* and *Resolved*. + +type: keyword + + +**`microsoft.m365_defender.classification`** +: Specification of the alert. Possible values are: *Unknown*, *FalsePositive*, *TruePositive*. + +type: keyword + + +**`microsoft.m365_defender.alerts.incidentId`** +: Unique identifier to represent the incident this alert is associated with. + +type: keyword + + +**`microsoft.m365_defender.alerts.resolvedTime`** +: Time when alert was resolved. + +type: date + + +**`microsoft.m365_defender.alerts.status`** +: Categorize alerts (as New, Active, or Resolved). + +type: keyword + + +**`microsoft.m365_defender.alerts.severity`** +: The severity of the related alert. + +type: keyword + + +**`microsoft.m365_defender.alerts.creationTime`** +: Time when alert was first created. + +type: date + + +**`microsoft.m365_defender.alerts.lastUpdatedTime`** +: Time when alert was last updated. + +type: date + + +**`microsoft.m365_defender.alerts.investigationId`** +: The automated investigation ID triggered by this alert. + +type: keyword + + +**`microsoft.m365_defender.alerts.userSid`** +: The SID of the related user. + +type: keyword + + +**`microsoft.m365_defender.alerts.detectionSource`** +: The service that initially detected the threat. + +type: keyword + + +**`microsoft.m365_defender.alerts.classification`** +: The specification for the incident. The property values are: Unknown, FalsePositive, TruePositive, or null.
+ +type: keyword + + +**`microsoft.m365_defender.alerts.investigationState`** +: Information on the investigation’s current status. + +type: keyword + + +**`microsoft.m365_defender.alerts.determination`** +: Specifies the determination of the incident. The property values are: NotAvailable, Apt, Malware, SecurityPersonnel, SecurityTesting, UnwantedSoftware, Other, or null. + +type: keyword + + +**`microsoft.m365_defender.alerts.assignedTo`** +: Owner of the incident, or null if no owner is assigned. + +type: keyword + + +**`microsoft.m365_defender.alerts.actorName`** +: The activity group, if any, associated with this alert. + +type: keyword + + +**`microsoft.m365_defender.alerts.threatFamilyName`** +: Threat family associated with this alert. + +type: keyword + + +**`microsoft.m365_defender.alerts.mitreTechniques`** +: The attack techniques, as aligned with the MITRE ATT&CK™ framework. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.entityType`** +: Entities that have been identified to be part of, or related to, a given alert. The property values are: User, Ip, Url, File, Process, MailBox, MailMessage, MailCluster, Registry. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.accountName`** +: Account name of the related user. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.mailboxDisplayName`** +: The display name of the related mailbox. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.mailboxAddress`** +: The mail address of the related mailbox. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.clusterBy`** +: A list of metadata if the entityType is MailCluster. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.sender`** +: The sender for the related email message. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.recipient`** +: The recipient for the related email message. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.subject`** +: The subject for the related email message. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.deliveryAction`** +: The delivery status for the related email message. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.securityGroupId`** +: The Security Group ID for the user related to the email message. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.securityGroupName`** +: The Security Group Name for the user related to the email message. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.registryHive`** +: Reference to the registry hive the event is related to, if the entity type is registry. Example: HKEY_LOCAL_MACHINE. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.registryKey`** +: Reference to the registry key related to the event. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.registryValueType`** +: Value type of the registry key/value pair related to the event. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.deviceId`** +: The unique ID of the device related to the event. + +type: keyword + + +**`microsoft.m365_defender.alerts.entities.ipAddress`** +: The IP address related to the event. + +type: keyword + + +**`microsoft.m365_defender.alerts.devices`** +: The devices related to the investigation.
+ +type: flattened + + diff --git a/docs/reference/filebeat/exported-fields-misp.md b/docs/reference/filebeat/exported-fields-misp.md new file mode 100644 index 000000000000..bd784f759931 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-misp.md @@ -0,0 +1,657 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-misp.html +--- + +# MISP fields [exported-fields-misp] + +Module for handling threat information from MISP. + + +## misp [_misp] + +Fields from MISP threat information. + + +## attack_pattern [_attack_pattern] + +Fields provide support for specifying information about attack patterns. + +**`misp.attack_pattern.id`** +: Identifier of the threat indicator. + +type: keyword + + +**`misp.attack_pattern.name`** +: Name of the attack pattern. + +type: keyword + + +**`misp.attack_pattern.description`** +: Description of the attack pattern. + +type: text + + +**`misp.attack_pattern.kill_chain_phases`** +: The kill chain phase(s) to which this attack pattern corresponds. + +type: keyword + + + +## campaign [_campaign] + +Fields provide support for specifying information about campaigns. + +**`misp.campaign.id`** +: Identifier of the campaign. + +type: keyword + + +**`misp.campaign.name`** +: Name of the campaign. + +type: keyword + + +**`misp.campaign.description`** +: Description of the campaign. + +type: text + + +**`misp.campaign.aliases`** +: Alternative names used to identify this campaign. + +type: text + + +**`misp.campaign.first_seen`** +: The time that this Campaign was first seen, in RFC3339 format. + +type: date + + +**`misp.campaign.last_seen`** +: The time that this Campaign was last seen, in RFC3339 format. + +type: date + + +**`misp.campaign.objective`** +: This field defines the Campaign’s primary goal, objective, desired outcome, or intended effect. + +type: keyword + + + +## course_of_action [_course_of_action] + +A Course of Action is an action taken either to prevent an attack or to respond to an attack that is in progress. + +**`misp.course_of_action.id`** +: Identifier of the Course of Action. + +type: keyword + + +**`misp.course_of_action.name`** +: The name used to identify the Course of Action. + +type: keyword + + +**`misp.course_of_action.description`** +: Description of the Course of Action. + +type: text + + + +## identity [_identity_2] + +Identity can represent actual individuals, organizations, or groups, as well as classes of individuals, organizations, or groups. + +**`misp.identity.id`** +: Identifier of the Identity. + +type: keyword + + +**`misp.identity.name`** +: The name used to identify the Identity. + +type: keyword + + +**`misp.identity.description`** +: Description of the Identity. + +type: text + + +**`misp.identity.identity_class`** +: The type of entity that this Identity describes, e.g., an individual or organization. Open Vocab - identity-class-ov + +type: keyword + + +**`misp.identity.labels`** +: The list of roles that this Identity performs. + +type: keyword + +example: CEO + + +**`misp.identity.sectors`** +: The list of sectors that this Identity belongs to. Open Vocab - industry-sector-ov + +type: keyword + + +**`misp.identity.contact_information`** +: The contact information (e-mail, phone number, etc.) for this Identity. + +type: text + + + +## intrusion_set [_intrusion_set] + +An Intrusion Set is a grouped set of adversary behavior and resources with common properties that is believed to be orchestrated by a single organization. 
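+For orientation, these fields are typically populated from a STIX 2.0-style intrusion-set object. A minimal sketch, with illustrative identifiers and values:
+
+```json
+{
+  "type": "intrusion-set",
+  "id": "intrusion-set--4e78f46f-a023-4e5f-bc24-71b3ca22ec29",
+  "name": "Bobcat Breakin",
+  "description": "Incidents usually feature a shared TTP of a bobcat being released within the building.",
+  "aliases": ["Zookeeper"],
+  "first_seen": "2016-04-06T20:03:48.000Z",
+  "goals": ["acquisition-theft", "harassment", "damage"],
+  "resource_level": "organization",
+  "primary_motivation": "organizational-gain"
+}
+```
+
+Each top-level key maps onto the corresponding `misp.intrusion_set.*` field below.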
+ +**`misp.intrusion_set.id`** +: Identifier of the Intrusion Set. + +type: keyword + + +**`misp.intrusion_set.name`** +: The name used to identify the Intrusion Set. + +type: keyword + + +**`misp.intrusion_set.description`** +: Description of the Intrusion Set. + +type: text + + +**`misp.intrusion_set.aliases`** +: Alternative names used to identify the Intrusion Set. + +type: text + + +**`misp.intrusion_set.first_seen`** +: The time that this Intrusion Set was first seen, in RFC3339 format. + +type: date + + +**`misp.intrusion_set.last_seen`** +: The time that this Intrusion Set was last seen, in RFC3339 format. + +type: date + + +**`misp.intrusion_set.goals`** +: The high level goals of this Intrusion Set, namely, what are they trying to do. + +type: text + + +**`misp.intrusion_set.resource_level`** +: This defines the organizational level at which this Intrusion Set typically works. Open Vocab - attack-resource-level-ov + +type: text + + +**`misp.intrusion_set.primary_motivation`** +: The primary reason, motivation, or purpose behind this Intrusion Set. Open Vocab - attack-motivation-ov + +type: text + + +**`misp.intrusion_set.secondary_motivations`** +: The secondary reasons, motivations, or purposes behind this Intrusion Set. Open Vocab - attack-motivation-ov + +type: text + + + +## malware [_malware] + +Malware is a type of TTP that is also known as malicious code and malicious software, and refers to a program that is inserted into a system, usually covertly, with the intent of compromising the confidentiality, integrity, or availability of the victim’s data, applications, or operating system (OS) or of otherwise annoying or disrupting the victim. + +**`misp.malware.id`** +: Identifier of the Malware. + +type: keyword + + +**`misp.malware.name`** +: The name used to identify the Malware. + +type: keyword + + +**`misp.malware.description`** +: Description of the Malware. + +type: text + + +**`misp.malware.labels`** +: The type of malware being described. Open Vocab - malware-label-ov. adware,backdoor,bot,ddos,dropper,exploit-kit,keylogger,ransomware, remote-access-trojan,resource-exploitation,rogue-security-software,rootkit, screen-capture,spyware,trojan,virus,worm + +type: keyword + + +**`misp.malware.kill_chain_phases`** +: The list of kill chain phases for which this Malware instance can be used. + +type: keyword + +format: string + + + +## note [_note] + +A Note is a comment or note containing informative text to help explain the context of one or more STIX Objects (SDOs or SROs) or to provide additional analysis that is not contained in the original object. + +**`misp.note.id`** +: Identifier of the Note. + +type: keyword + + +**`misp.note.summary`** +: A brief description used as a summary of the Note. + +type: keyword + + +**`misp.note.description`** +: The content of the Note. + +type: text + + +**`misp.note.authors`** +: The name of the author(s) of this Note. + +type: keyword + + +**`misp.note.object_refs`** +: The STIX Objects (SDOs and SROs) that the note is being applied to. + +type: keyword + + + +## threat_indicator [_threat_indicator] + +Fields provide support for specifying information about threat indicators, and related matching patterns. + +**`misp.threat_indicator.labels`** +: List of type open-vocab that specifies the type of indicator. + +type: keyword + +example: Domain Watchlist + + +**`misp.threat_indicator.id`** +: Identifier of the threat indicator. + +type: keyword + + +**`misp.threat_indicator.version`** +: Version of the threat indicator.
+ +type: keyword + + +**`misp.threat_indicator.type`** +: Type of the threat indicator. + +type: keyword + + +**`misp.threat_indicator.description`** +: Description of the threat indicator. + +type: text + + +**`misp.threat_indicator.feed`** +: Name of the threat feed. + +type: text + + +**`misp.threat_indicator.valid_from`** +: The time from which this Indicator should be considered valuable intelligence, in RFC3339 format. + +type: date + + +**`misp.threat_indicator.valid_until`** +: The time at which this Indicator should no longer be considered valuable intelligence. If the valid_until property is omitted, then there is no constraint on the latest time for which the indicator should be used, in RFC3339 format. + +type: date + + +**`misp.threat_indicator.severity`** +: Threat severity to which this indicator corresponds. + +type: keyword + +example: high + +format: string + + +**`misp.threat_indicator.confidence`** +: Confidence level to which this indicator corresponds. + +type: keyword + +example: high + + +**`misp.threat_indicator.kill_chain_phases`** +: The kill chain phase(s) to which this indicator corresponds. + +type: keyword + +format: string + + +**`misp.threat_indicator.mitre_tactic`** +: MITRE tactics to which this indicator corresponds. + +type: keyword + +example: Initial Access + +format: string + + +**`misp.threat_indicator.mitre_technique`** +: MITRE techniques to which this indicator corresponds. + +type: keyword + +example: Drive-by Compromise + +format: string + + +**`misp.threat_indicator.attack_pattern`** +: The attack_pattern for this indicator is a STIX Pattern as specified in STIX Version 2.0 Part 5 - STIX Patterning. + +type: keyword + +example: [destination:ip = '91.219.29.188/32'] + + +**`misp.threat_indicator.attack_pattern_kql`** +: The attack_pattern for this indicator is a KQL query that matches the attack_pattern specified in the STIX Pattern format. + +type: keyword + +example: destination.ip: "91.219.29.188/32" + + +**`misp.threat_indicator.negate`** +: When set to true, it specifies the absence of the attack_pattern. + +type: boolean + + +**`misp.threat_indicator.intrusion_set`** +: Name of the intrusion set if known. + +type: keyword + + +**`misp.threat_indicator.campaign`** +: Name of the attack campaign if known. + +type: keyword + + +**`misp.threat_indicator.threat_actor`** +: Name of the threat actor if known. + +type: keyword + + + +## observed_data [_observed_data] + +Observed data conveys information that was observed on systems and networks, such as log data or network traffic, using the Cyber Observable specification. + +**`misp.observed_data.id`** +: Identifier of the Observed Data. + +type: keyword + + +**`misp.observed_data.first_observed`** +: The beginning of the time window that the data was observed, in RFC3339 format. + +type: date + + +**`misp.observed_data.last_observed`** +: The end of the time window that the data was observed, in RFC3339 format. + +type: date + + +**`misp.observed_data.number_observed`** +: The number of times the data represented in the objects property was observed. This MUST be an integer between 1 and 999,999,999 inclusive. + +type: integer + + +**`misp.observed_data.objects`** +: A dictionary of Cyber Observable Objects that describes the single fact that was observed. + +type: keyword + + + +## report [_report] + +Reports are collections of threat intelligence focused on one or more topics, such as a description of a threat actor, malware, or attack technique, including context and related details.
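+As with the intrusion set above, these fields follow the STIX 2.0 report object. A minimal sketch, with illustrative identifiers and values:
+
+```json
+{
+  "type": "report",
+  "id": "report--84e4d88f-44ea-4bcd-bbf3-b2c1c320bcb3",
+  "labels": ["campaign"],
+  "name": "The Black Vine Cyberespionage Group",
+  "description": "A simple report with an indicator and campaign",
+  "published": "2016-01-20T17:00:00.000Z",
+  "object_refs": [
+    "indicator--26ffb872-1dd9-446e-b6f5-d58527e5b5d2",
+    "campaign--83422c77-904c-4dc1-aff5-5c38f3a2c55c"
+  ]
+}
+```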
+ +**`misp.report.id`** +: Identifier of the Report. + +type: keyword + + +**`misp.report.labels`** +: This field is an Open Vocabulary that specifies the primary subject of this report. Open Vocab - report-label-ov. threat-report,attack-pattern,campaign,identity,indicator,malware,observed-data,threat-actor,tool,vulnerability + +type: keyword + + +**`misp.report.name`** +: The name used to identify the Report. + +type: keyword + + +**`misp.report.description`** +: A description that provides more details and context about the Report. + +type: text + + +**`misp.report.published`** +: The date that this report object was officially published by the creator of this report, in RFC3339 format. + +type: date + + +**`misp.report.object_refs`** +: Specifies the STIX Objects that are referred to by this Report. + +type: text + + + +## threat_actor [_threat_actor] + +Threat Actors are actual individuals, groups, or organizations believed to be operating with malicious intent. + +**`misp.threat_actor.id`** +: Identifier of the Threat Actor. + +type: keyword + + +**`misp.threat_actor.labels`** +: This field specifies the type of threat actor. Open Vocab - threat-actor-label-ov. activist,competitor,crime-syndicate,criminal,hacker,insider-accidental,insider-disgruntled,nation-state,sensationalist,spy,terrorist + +type: keyword + + +**`misp.threat_actor.name`** +: The name used to identify this Threat Actor or Threat Actor group. + +type: keyword + + +**`misp.threat_actor.description`** +: A description that provides more details and context about the Threat Actor. + +type: text + + +**`misp.threat_actor.aliases`** +: A list of other names that this Threat Actor is believed to use. + +type: text + + +**`misp.threat_actor.roles`** +: This is a list of roles the Threat Actor plays. Open Vocab - threat-actor-role-ov. agent,director,independent,sponsor,infrastructure-operator,infrastructure-architect,malware-author + +type: text + + +**`misp.threat_actor.goals`** +: The high level goals of this Threat Actor, namely, what are they trying to do. + +type: text + + +**`misp.threat_actor.sophistication`** +: The skill, specific knowledge, special training, or expertise a Threat Actor must have to perform the attack. Open Vocab - threat-actor-sophistication-ov. none,minimal,intermediate,advanced,strategic,expert,innovator + +type: text + + +**`misp.threat_actor.resource_level`** +: This defines the organizational level at which this Threat Actor typically works. Open Vocab - attack-resource-level-ov. individual,club,contest,team,organization,government + +type: text + + +**`misp.threat_actor.primary_motivation`** +: The primary reason, motivation, or purpose behind this Threat Actor. Open Vocab - attack-motivation-ov. accidental,coercion,dominance,ideology,notoriety,organizational-gain,personal-gain,personal-satisfaction,revenge,unpredictable + +type: text + + +**`misp.threat_actor.secondary_motivations`** +: The secondary reasons, motivations, or purposes behind this Threat Actor. Open Vocab - attack-motivation-ov. accidental,coercion,dominance,ideology,notoriety,organizational-gain,personal-gain,personal-satisfaction,revenge,unpredictable + +type: text + + +**`misp.threat_actor.personal_motivations`** +: The personal reasons, motivations, or purposes of the Threat Actor regardless of organizational goals. Open Vocab - attack-motivation-ov.
accidental,coercion,dominance,ideology,notoriety,organizational-gain,personal-gain,personal-satisfaction,revenge,unpredictable + +type: text + + + +## tool [_tool] + +Tools are legitimate software that can be used by threat actors to perform attacks. + +**`misp.tool.id`** +: Identifier of the Tool. + +type: keyword + + +**`misp.tool.labels`** +: The kind(s) of tool(s) being described. Open Vocab - tool-label-ov. denial-of-service,exploitation,information-gathering,network-capture,credential-exploitation,remote-access,vulnerability-scanning + +type: keyword + + +**`misp.tool.name`** +: The name used to identify the Tool. + +type: keyword + + +**`misp.tool.description`** +: A description that provides more details and context about the Tool. + +type: text + + +**`misp.tool.tool_version`** +: The version identifier associated with the Tool. + +type: keyword + + +**`misp.tool.kill_chain_phases`** +: The list of kill chain phases for which this Tool instance can be used. + +type: text + + + +## vulnerability [_vulnerability_2] + +A Vulnerability is a mistake in software that can be directly used by a hacker to gain access to a system or network. + +**`misp.vulnerability.id`** +: Identifier of the Vulnerability. + +type: keyword + + +**`misp.vulnerability.name`** +: The name used to identify the Vulnerability. + +type: keyword + + +**`misp.vulnerability.description`** +: A description that provides more details and context about the Vulnerability. + +type: text + + diff --git a/docs/reference/filebeat/exported-fields-mongodb.md b/docs/reference/filebeat/exported-fields-mongodb.md new file mode 100644 index 000000000000..ece3e38488d0 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-mongodb.md @@ -0,0 +1,55 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-mongodb.html +--- + +# mongodb fields [exported-fields-mongodb] + +Module for parsing MongoDB log files. + + +## mongodb [_mongodb] + +Fields from MongoDB logs. + + +## log [_log_8] + +Contains fields from MongoDB logs. 
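+MongoDB 4.4 and later write structured JSON log lines. A minimal sketch of one such line (values are illustrative) shows where the fields below come from:
+
+```json
+{
+  "t": { "$date": "2024-05-01T12:00:00.000+00:00" },
+  "s": "I",
+  "c": "COMMAND",
+  "id": 4615611,
+  "ctx": "initandlisten",
+  "msg": "Build Info"
+}
+```
+
+Here `c`, `ctx`, and `id` correspond to `mongodb.log.component`, `mongodb.log.context`, and `mongodb.log.id`, while the severity `s` feeds the `log.level` alias.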
+ +**`mongodb.log.component`** +: Functional categorization of message + +type: keyword + +example: COMMAND + + +**`mongodb.log.context`** +: Context of message + +type: keyword + +example: initandlisten + + +**`mongodb.log.severity`** +: type: alias + +alias to: log.level + + +**`mongodb.log.message`** +: type: alias + +alias to: message + + +**`mongodb.log.id`** +: Integer representing the unique identifier of the log statement + +type: long + +example: 4615611 + + diff --git a/docs/reference/filebeat/exported-fields-mssql.md b/docs/reference/filebeat/exported-fields-mssql.md new file mode 100644 index 000000000000..6067d30ea6ea --- /dev/null +++ b/docs/reference/filebeat/exported-fields-mssql.md @@ -0,0 +1,25 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-mssql.html +--- + +# mssql fields [exported-fields-mssql] + +MS SQL Filebeat Module + + +## mssql [_mssql] + +Fields from the MSSQL log files + + +## log [_log_9] + +Common log fields + +**`mssql.log.origin`** +: Origin of the message, usually the server but it can also be a recovery process + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-mysql.md b/docs/reference/filebeat/exported-fields-mysql.md new file mode 100644 index 000000000000..8e07439ba140 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-mysql.md @@ -0,0 +1,341 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-mysql.html +--- + +# MySQL fields [exported-fields-mysql] + +Module for parsing the MySQL log files. + + +## mysql [_mysql] + +Fields from the MySQL log files. + +**`mysql.thread_id`** +: The connection or thread ID for the query. + +type: long + + + +## error [_error_4] + +Contains fields from the MySQL error logs. + +**`mysql.error.thread_id`** +: type: alias + +alias to: mysql.thread_id + + +**`mysql.error.level`** +: type: alias + +alias to: log.level + + +**`mysql.error.message`** +: type: alias + +alias to: message + + + +## slowlog [_slowlog_3] + +Contains fields from the MySQL slow logs. + +**`mysql.slowlog.lock_time.sec`** +: The amount of time the query waited for the lock to be available. The value is in seconds, as a floating point number. + +type: float + + +**`mysql.slowlog.rows_sent`** +: The number of rows returned by the query. + +type: long + + +**`mysql.slowlog.rows_examined`** +: The number of rows scanned by the query. + +type: long + + +**`mysql.slowlog.rows_affected`** +: The number of rows modified by the query. + +type: long + + +**`mysql.slowlog.bytes_sent`** +: The number of bytes sent to client. + +type: long + +format: bytes + + +**`mysql.slowlog.bytes_received`** +: The number of bytes received from client. + +type: long + +format: bytes + + +**`mysql.slowlog.query`** +: The slow query. + + +**`mysql.slowlog.id`** +: type: alias + +alias to: mysql.thread_id + + +**`mysql.slowlog.schema`** +: The schema where the slow query was executed. + +type: keyword + + +**`mysql.slowlog.current_user`** +: Current authenticated user, used to determine access privileges. Can differ from the value for user. + +type: keyword + + +**`mysql.slowlog.last_errno`** +: Last SQL error seen. + +type: keyword + + +**`mysql.slowlog.killed`** +: Code of the reason if the query was killed. + +type: keyword + + +**`mysql.slowlog.query_cache_hit`** +: Whether the query cache was hit. + +type: boolean + + +**`mysql.slowlog.tmp_table`** +: Whether a temporary table was used to resolve the query. 
+ +type: boolean + + +**`mysql.slowlog.tmp_table_on_disk`** +: Whether the query needed temporary tables on disk. + +type: boolean + + +**`mysql.slowlog.tmp_tables`** +: Number of temporary tables created for this query. + +type: long + + +**`mysql.slowlog.tmp_disk_tables`** +: Number of temporary tables created on disk for this query. + +type: long + + +**`mysql.slowlog.tmp_table_sizes`** +: Size of temporary tables created for this query. + +type: long + +format: bytes + + +**`mysql.slowlog.filesort`** +: Whether filesort optimization was used. + +type: boolean + + +**`mysql.slowlog.filesort_on_disk`** +: Whether filesort optimization was used and it needed temporary tables on disk. + +type: boolean + + +**`mysql.slowlog.priority_queue`** +: Whether a priority queue was used for filesort. + +type: boolean + + +**`mysql.slowlog.full_scan`** +: Whether a full table scan was needed for the slow query. + +type: boolean + + +**`mysql.slowlog.full_join`** +: Whether a full join was needed for the slow query (no indexes were used for joins). + +type: boolean + + +**`mysql.slowlog.merge_passes`** +: Number of merge passes executed for the query. + +type: long + + +**`mysql.slowlog.sort_merge_passes`** +: Number of merge passes that the sort algorithm has had to do. + +type: long + + +**`mysql.slowlog.sort_range_count`** +: Number of sorts that were done using ranges. + +type: long + + +**`mysql.slowlog.sort_rows`** +: Number of sorted rows. + +type: long + + +**`mysql.slowlog.sort_scan_count`** +: Number of sorts that were done by scanning the table. + +type: long + + +**`mysql.slowlog.log_slow_rate_type`** +: Type of slow log rate limit; it can be `session` if the rate limit is applied per session, or `query` if it applies per query. + +type: keyword + + +**`mysql.slowlog.log_slow_rate_limit`** +: Slow log rate limit; a value of 100 means that one in a hundred queries or sessions is logged. + +type: keyword + + +**`mysql.slowlog.read_first`** +: The number of times the first entry in an index was read. + +type: long + + +**`mysql.slowlog.read_last`** +: The number of times the last key in an index was read. + +type: long + + +**`mysql.slowlog.read_key`** +: The number of requests to read a row based on a key. + +type: long + + +**`mysql.slowlog.read_next`** +: The number of requests to read the next row in key order. + +type: long + + +**`mysql.slowlog.read_prev`** +: The number of requests to read the previous row in key order. + +type: long + + +**`mysql.slowlog.read_rnd`** +: The number of requests to read a row based on a fixed position. + +type: long + + +**`mysql.slowlog.read_rnd_next`** +: The number of requests to read the next row in the data file. + +type: long + + + +## innodb [_innodb] + +Contains fields related to the InnoDB engine + +**`mysql.slowlog.innodb.trx_id`** +: Transaction ID. + +type: keyword + + +**`mysql.slowlog.innodb.io_r_ops`** +: Number of page read operations. + +type: long + + +**`mysql.slowlog.innodb.io_r_bytes`** +: Bytes read during page read operations. + +type: long + +format: bytes + + +**`mysql.slowlog.innodb.io_r_wait.sec`** +: How long it took to read all needed data from storage. + +type: long + + +**`mysql.slowlog.innodb.rec_lock_wait.sec`** +: How long the query waited for locks. + +type: long + + +**`mysql.slowlog.innodb.queue_wait.sec`** +: How long the query waited to enter the InnoDB queue and to be executed once in the queue.
+ +type: long + + +**`mysql.slowlog.innodb.pages_distinct`** +: Approximated count of pages accessed to execute the query. + +type: long + + +**`mysql.slowlog.user`** +: type: alias + +alias to: user.name + + +**`mysql.slowlog.host`** +: type: alias + +alias to: source.domain + + +**`mysql.slowlog.ip`** +: type: alias + +alias to: source.ip + + diff --git a/docs/reference/filebeat/exported-fields-mysqlenterprise.md b/docs/reference/filebeat/exported-fields-mysqlenterprise.md new file mode 100644 index 000000000000..1dd60c46621b --- /dev/null +++ b/docs/reference/filebeat/exported-fields-mysqlenterprise.md @@ -0,0 +1,157 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-mysqlenterprise.html +--- + +# MySQL Enterprise fields [exported-fields-mysqlenterprise] + +MySQL Enterprise Audit module + + +## mysqlenterprise [_mysqlenterprise] + +Fields from MySQL Enterprise Logs + + +## audit [_audit_4] + +Module for parsing MySQL Enterprise Audit Logs + +**`mysqlenterprise.audit.class`** +: A string representing the event class. The class defines the type of event, when taken together with the event item that specifies the event subclass. + +type: keyword + + +**`mysqlenterprise.audit.connection_id`** +: An integer representing the client connection identifier. This is the same as the value returned by the CONNECTION_ID() function within the session. + +type: keyword + + +**`mysqlenterprise.audit.id`** +: An unsigned integer representing an event ID. + +type: keyword + + +**`mysqlenterprise.audit.connection_data.connection_type`** +: The security state of the connection to the server. Permitted values are tcp/ip (TCP/IP connection established without encryption), ssl (TCP/IP connection established with encryption), socket (Unix socket file connection), named_pipe (Windows named pipe connection), and shared_memory (Windows shared memory connection). + +type: keyword + + +**`mysqlenterprise.audit.connection_data.status`** +: An integer representing the command status: 0 for success, nonzero if an error occurred. + +type: long + + +**`mysqlenterprise.audit.connection_data.db`** +: A string representing a database name. For connection_data, it is the default database. For table_access_data, it is the table database. + +type: keyword + + +**`mysqlenterprise.audit.connection_data.connection_attributes`** +: Connection attributes that might be passed by different MySQL Clients. + +type: flattened + + +**`mysqlenterprise.audit.general_data.command`** +: A string representing the type of instruction that generated the audit event, such as a command that the server received from a client. + +type: keyword + + +**`mysqlenterprise.audit.general_data.sql_command`** +: A string that indicates the SQL statement type. + +type: keyword + + +**`mysqlenterprise.audit.general_data.query`** +: A string representing the text of an SQL statement. The value can be empty. Long values may be truncated. The string, like the audit log file itself, is written using UTF-8 (up to 4 bytes per character), so the value may be the result of conversion. + +type: keyword + + +**`mysqlenterprise.audit.general_data.status`** +: An integer representing the command status: 0 for success, nonzero if an error occurred. This is the same as the value of the mysql_errno() C API function. + +type: long + + +**`mysqlenterprise.audit.login.user`** +: A string representing the information indicating how a client connected to the server. 
+ +type: keyword + + +**`mysqlenterprise.audit.login.proxy`** +: A string representing the proxy user. The value is empty if user proxying is not in effect. + +type: keyword + + +**`mysqlenterprise.audit.shutdown_data.server_id`** +: An integer representing the server ID. This is the same as the value of the server_id system variable. + +type: keyword + + +**`mysqlenterprise.audit.startup_data.server_id`** +: An integer representing the server ID. This is the same as the value of the server_id system variable. + +type: keyword + + +**`mysqlenterprise.audit.startup_data.mysql_version`** +: A string representing the MySQL server version. This is the same as the value of the version system variable. + +type: keyword + + +**`mysqlenterprise.audit.table_access_data.db`** +: A string representing a database name. For connection_data, it is the default database. For table_access_data, it is the table database. + +type: keyword + + +**`mysqlenterprise.audit.table_access_data.table`** +: A string representing a table name. + +type: keyword + + +**`mysqlenterprise.audit.table_access_data.query`** +: A string representing the text of an SQL statement. The value can be empty. Long values may be truncated. The string, like the audit log file itself, is written using UTF-8 (up to 4 bytes per character), so the value may be the result of conversion. + +type: keyword + + +**`mysqlenterprise.audit.table_access_data.sql_command`** +: A string that indicates the SQL statement type. + +type: keyword + + +**`mysqlenterprise.audit.account.user`** +: A string representing the user that the server authenticated the client as. This is the user name that the server uses for privilege checking. + +type: keyword + + +**`mysqlenterprise.audit.account.host`** +: A string representing the client host name. + +type: keyword + + +**`mysqlenterprise.audit.login.os`** +: A string representing the external user name used during the authentication process, as set by the plugin used to authenticate the client. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-nats.md b/docs/reference/filebeat/exported-fields-nats.md new file mode 100644 index 000000000000..2f481ec670d2 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-nats.md @@ -0,0 +1,85 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-nats.html +--- + +# NATS fields [exported-fields-nats] + +Module for parsing NATS log files. + + +## nats [_nats] + +Fields from NATS logs. + + +## log [_log_10] + +NATS log files + + +## client [_client_3] + +Fields from NATS logs client. + +**`nats.log.client.id`** +: The ID of the client + +type: integer + + + +## msg [_msg] + +Fields from NATS logs message.
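+These fields mirror the operations of the NATS client wire protocol. A hypothetical exchange (subject, sid, and payload values are illustrative):
+
+```
+SUB orders.created workers 42
+UNSUB 42 5
+PUB orders.created _INBOX.r1 11
+hello world
+```
+
+In this sketch, `orders.created` is the subject, `workers` the queue group, `42` the sid, `5` the optional number of messages to wait for before unsubscribing, `_INBOX.r1` the reply-to inbox, and `11` the payload size in bytes.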
+ + +**`nats.log.msg.bytes`** +: Size of the payload in bytes + +type: long + +format: bytes + + +**`nats.log.msg.type`** +: The protocol message type + +type: keyword + + +**`nats.log.msg.subject`** +: Subject name this message was received on + +type: keyword + + +**`nats.log.msg.sid`** +: The unique alphanumeric subscription ID of the subject + +type: integer + + +**`nats.log.msg.reply_to`** +: The inbox subject on which the publisher is listening for responses + +type: keyword + + +**`nats.log.msg.max_messages`** +: An optional number of messages to wait for before automatically unsubscribing + +type: integer + + +**`nats.log.msg.error.message`** +: Details about the error that occurred + +type: text + + +**`nats.log.msg.queue_group`** +: The queue group which the subscriber will join + +type: text + + diff --git a/docs/reference/filebeat/exported-fields-netflow.md b/docs/reference/filebeat/exported-fields-netflow.md new file mode 100644 index 000000000000..4c598bbe2ce0 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-netflow.md @@ -0,0 +1,5319 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-netflow.html +--- + +# NetFlow fields [exported-fields-netflow] + +Fields from NetFlow and IPFIX flows. + + +## netflow [_netflow] + +Fields from NetFlow and IPFIX. + +**`netflow.type`** +: The type of NetFlow record described by this event. + +type: keyword + + + +## exporter [_exporter] + +Metadata related to the exporter device that generated this record. + +**`netflow.exporter.address`** +: Exporter’s network address in IP:port format. + +type: keyword + + +**`netflow.exporter.source_id`** +: Observation domain ID to which this record belongs. + +type: long + + +**`netflow.exporter.timestamp`** +: Time and date of export. + +type: date + + +**`netflow.exporter.uptime_millis`** +: How long the exporter process has been running, in milliseconds. + +type: long + + +**`netflow.exporter.version`** +: NetFlow version used.
+ +type: integer + + +**`netflow.absolute_error`** +: type: double + + +**`netflow.address_pool_high_threshold`** +: type: long + + +**`netflow.address_pool_low_threshold`** +: type: long + + +**`netflow.address_port_mapping_high_threshold`** +: type: long + + +**`netflow.address_port_mapping_low_threshold`** +: type: long + + +**`netflow.address_port_mapping_per_user_high_threshold`** +: type: long + + +**`netflow.afc_protocol`** +: type: integer + + +**`netflow.afc_protocol_name`** +: type: keyword + + +**`netflow.anonymization_flags`** +: type: integer + + +**`netflow.anonymization_technique`** +: type: integer + + +**`netflow.application_business-relevance`** +: type: long + + +**`netflow.application_category_name`** +: type: keyword + + +**`netflow.application_description`** +: type: keyword + + +**`netflow.application_group_name`** +: type: keyword + + +**`netflow.application_http_uri_statistics`** +: type: short + + +**`netflow.application_http_user-agent`** +: type: short + + +**`netflow.application_id`** +: type: short + + +**`netflow.application_name`** +: type: keyword + + +**`netflow.application_sub_category_name`** +: type: keyword + + +**`netflow.application_traffic-class`** +: type: long + + +**`netflow.art_client_network_time_maximum`** +: type: long + + +**`netflow.art_client_network_time_minimum`** +: type: long + + +**`netflow.art_client_network_time_sum`** +: type: long + + +**`netflow.art_clientpackets`** +: type: long + + +**`netflow.art_count_late_responses`** +: type: long + + +**`netflow.art_count_new_connections`** +: type: long + + +**`netflow.art_count_responses`** +: type: long + + +**`netflow.art_count_responses_histogram_bucket1`** +: type: long + + +**`netflow.art_count_responses_histogram_bucket2`** +: type: long + + +**`netflow.art_count_responses_histogram_bucket3`** +: type: long + + +**`netflow.art_count_responses_histogram_bucket4`** +: type: long + + +**`netflow.art_count_responses_histogram_bucket5`** +: type: long + + +**`netflow.art_count_responses_histogram_bucket6`** +: type: long + + +**`netflow.art_count_responses_histogram_bucket7`** +: type: long + + +**`netflow.art_count_retransmissions`** +: type: long + + +**`netflow.art_count_transactions`** +: type: long + + +**`netflow.art_network_time_maximum`** +: type: long + + +**`netflow.art_network_time_minimum`** +: type: long + + +**`netflow.art_network_time_sum`** +: type: long + + +**`netflow.art_response_time_maximum`** +: type: long + + +**`netflow.art_response_time_minimum`** +: type: long + + +**`netflow.art_response_time_sum`** +: type: long + + +**`netflow.art_server_network_time_maximum`** +: type: long + + +**`netflow.art_server_network_time_minimum`** +: type: long + + +**`netflow.art_server_network_time_sum`** +: type: long + + +**`netflow.art_server_response_time_maximum`** +: type: long + + +**`netflow.art_server_response_time_minimum`** +: type: long + + +**`netflow.art_server_response_time_sum`** +: type: long + + +**`netflow.art_serverpackets`** +: type: long + + +**`netflow.art_total_response_time_maximum`** +: type: long + + +**`netflow.art_total_response_time_minimum`** +: type: long + + +**`netflow.art_total_response_time_sum`** +: type: long + + +**`netflow.art_total_transaction_time_maximum`** +: type: long + + +**`netflow.art_total_transaction_time_minimum`** +: type: long + + +**`netflow.art_total_transaction_time_sum`** +: type: long + + +**`netflow.assembled_fragment_count`** +: type: long + + +**`netflow.audit_counter`** +: type: long + + 
+**`netflow.average_interarrival_time`** +: type: long + + +**`netflow.bgp_destination_as_number`** +: type: long + + +**`netflow.bgp_next_adjacent_as_number`** +: type: long + + +**`netflow.bgp_next_hop_ipv4_address`** +: type: ip + + +**`netflow.bgp_next_hop_ipv6_address`** +: type: ip + + +**`netflow.bgp_prev_adjacent_as_number`** +: type: long + + +**`netflow.bgp_source_as_number`** +: type: long + + +**`netflow.bgp_validity_state`** +: type: short + + +**`netflow.biflow_direction`** +: type: short + + +**`netflow.bind_ipv4_address`** +: type: ip + + +**`netflow.bind_transport_port`** +: type: integer + + +**`netflow.class_id`** +: type: long + + +**`netflow.class_name`** +: type: keyword + + +**`netflow.classification_engine_id`** +: type: short + + +**`netflow.collection_time_milliseconds`** +: type: date + + +**`netflow.collector_certificate`** +: type: short + + +**`netflow.collector_ipv4_address`** +: type: ip + + +**`netflow.collector_ipv6_address`** +: type: ip + + +**`netflow.collector_transport_port`** +: type: integer + + +**`netflow.common_properties_id`** +: type: long + + +**`netflow.confidence_level`** +: type: double + + +**`netflow.conn_ipv4_address`** +: type: ip + + +**`netflow.conn_transport_port`** +: type: integer + + +**`netflow.connection_sum_duration_seconds`** +: type: long + + +**`netflow.connection_transaction_id`** +: type: long + + +**`netflow.conntrack_id`** +: type: long + + +**`netflow.data_byte_count`** +: type: long + + +**`netflow.data_link_frame_section`** +: type: short + + +**`netflow.data_link_frame_size`** +: type: integer + + +**`netflow.data_link_frame_type`** +: type: integer + + +**`netflow.data_records_reliability`** +: type: boolean + + +**`netflow.delta_flow_count`** +: type: long + + +**`netflow.destination_ipv4_address`** +: type: ip + + +**`netflow.destination_ipv4_prefix`** +: type: ip + + +**`netflow.destination_ipv4_prefix_length`** +: type: short + + +**`netflow.destination_ipv6_address`** +: type: ip + + +**`netflow.destination_ipv6_prefix`** +: type: ip + + +**`netflow.destination_ipv6_prefix_length`** +: type: short + + +**`netflow.destination_mac_address`** +: type: keyword + + +**`netflow.destination_transport_port`** +: type: integer + + +**`netflow.digest_hash_value`** +: type: long + + +**`netflow.distinct_count_of_destination_ip_address`** +: type: long + + +**`netflow.distinct_count_of_destination_ipv4_address`** +: type: long + + +**`netflow.distinct_count_of_destination_ipv6_address`** +: type: long + + +**`netflow.distinct_count_of_source_ip_address`** +: type: long + + +**`netflow.distinct_count_of_source_ipv4_address`** +: type: long + + +**`netflow.distinct_count_of_source_ipv6_address`** +: type: long + + +**`netflow.dns_authoritative`** +: type: short + + +**`netflow.dns_cname`** +: type: keyword + + +**`netflow.dns_id`** +: type: integer + + +**`netflow.dns_mx_exchange`** +: type: keyword + + +**`netflow.dns_mx_preference`** +: type: integer + + +**`netflow.dns_nsd_name`** +: type: keyword + + +**`netflow.dns_nx_domain`** +: type: short + + +**`netflow.dns_ptrd_name`** +: type: keyword + + +**`netflow.dns_qname`** +: type: keyword + + +**`netflow.dns_qr_type`** +: type: integer + + +**`netflow.dns_query_response`** +: type: short + + +**`netflow.dns_rr_section`** +: type: short + + +**`netflow.dns_soa_expire`** +: type: long + + +**`netflow.dns_soa_minimum`** +: type: long + + +**`netflow.dns_soa_refresh`** +: type: long + + +**`netflow.dns_soa_retry`** +: type: long + + +**`netflow.dns_soa_serial`** +: type: long 
+ + +**`netflow.dns_soam_name`** +: type: keyword + + +**`netflow.dns_soar_name`** +: type: keyword + + +**`netflow.dns_srv_port`** +: type: integer + + +**`netflow.dns_srv_priority`** +: type: integer + + +**`netflow.dns_srv_target`** +: type: integer + + +**`netflow.dns_srv_weight`** +: type: integer + + +**`netflow.dns_ttl`** +: type: long + + +**`netflow.dns_txt_data`** +: type: keyword + + +**`netflow.dot1q_customer_dei`** +: type: boolean + + +**`netflow.dot1q_customer_destination_mac_address`** +: type: keyword + + +**`netflow.dot1q_customer_priority`** +: type: short + + +**`netflow.dot1q_customer_source_mac_address`** +: type: keyword + + +**`netflow.dot1q_customer_vlan_id`** +: type: integer + + +**`netflow.dot1q_dei`** +: type: boolean + + +**`netflow.dot1q_priority`** +: type: short + + +**`netflow.dot1q_service_instance_id`** +: type: long + + +**`netflow.dot1q_service_instance_priority`** +: type: short + + +**`netflow.dot1q_service_instance_tag`** +: type: short + + +**`netflow.dot1q_vlan_id`** +: type: integer + + +**`netflow.dropped_layer2_octet_delta_count`** +: type: long + + +**`netflow.dropped_layer2_octet_total_count`** +: type: long + + +**`netflow.dropped_octet_delta_count`** +: type: long + + +**`netflow.dropped_octet_total_count`** +: type: long + + +**`netflow.dropped_packet_delta_count`** +: type: long + + +**`netflow.dropped_packet_total_count`** +: type: long + + +**`netflow.dst_traffic_index`** +: type: long + + +**`netflow.egress_broadcast_packet_total_count`** +: type: long + + +**`netflow.egress_interface`** +: type: long + + +**`netflow.egress_interface_type`** +: type: long + + +**`netflow.egress_physical_interface`** +: type: long + + +**`netflow.egress_unicast_packet_total_count`** +: type: long + + +**`netflow.egress_vrfid`** +: type: long + + +**`netflow.encrypted_technology`** +: type: keyword + + +**`netflow.engine_id`** +: type: short + + +**`netflow.engine_type`** +: type: short + + +**`netflow.ethernet_header_length`** +: type: short + + +**`netflow.ethernet_payload_length`** +: type: integer + + +**`netflow.ethernet_total_length`** +: type: integer + + +**`netflow.ethernet_type`** +: type: integer + + +**`netflow.expired_fragment_count`** +: type: long + + +**`netflow.export_interface`** +: type: long + + +**`netflow.export_protocol_version`** +: type: short + + +**`netflow.export_sctp_stream_id`** +: type: integer + + +**`netflow.export_transport_protocol`** +: type: short + + +**`netflow.exported_flow_record_total_count`** +: type: long + + +**`netflow.exported_message_total_count`** +: type: long + + +**`netflow.exported_octet_total_count`** +: type: long + + +**`netflow.exporter_certificate`** +: type: short + + +**`netflow.exporter_ipv4_address`** +: type: ip + + +**`netflow.exporter_ipv6_address`** +: type: ip + + +**`netflow.exporter_transport_port`** +: type: integer + + +**`netflow.exporting_process_id`** +: type: long + + +**`netflow.external_address_realm`** +: type: short + + +**`netflow.firewall_event`** +: type: short + + +**`netflow.first_eight_non_empty_packet_directions`** +: type: short + + +**`netflow.first_non_empty_packet_size`** +: type: integer + + +**`netflow.first_packet_banner`** +: type: keyword + + +**`netflow.flags_and_sampler_id`** +: type: long + + +**`netflow.flow_active_timeout`** +: type: integer + + +**`netflow.flow_attributes`** +: type: integer + + +**`netflow.flow_direction`** +: type: short + + +**`netflow.flow_duration_microseconds`** +: type: long + + +**`netflow.flow_duration_milliseconds`** +: type: 
long + + +**`netflow.flow_end_delta_microseconds`** +: type: long + + +**`netflow.flow_end_microseconds`** +: type: date + + +**`netflow.flow_end_milliseconds`** +: type: date + + +**`netflow.flow_end_nanoseconds`** +: type: date + + +**`netflow.flow_end_reason`** +: type: short + + +**`netflow.flow_end_seconds`** +: type: date + + +**`netflow.flow_end_sys_up_time`** +: type: long + + +**`netflow.flow_id`** +: type: long + + +**`netflow.flow_idle_timeout`** +: type: integer + + +**`netflow.flow_key_indicator`** +: type: long + + +**`netflow.flow_label_ipv6`** +: type: long + + +**`netflow.flow_sampling_time_interval`** +: type: long + + +**`netflow.flow_sampling_time_spacing`** +: type: long + + +**`netflow.flow_selected_flow_delta_count`** +: type: long + + +**`netflow.flow_selected_octet_delta_count`** +: type: long + + +**`netflow.flow_selected_packet_delta_count`** +: type: long + + +**`netflow.flow_selector_algorithm`** +: type: integer + + +**`netflow.flow_start_delta_microseconds`** +: type: long + + +**`netflow.flow_start_microseconds`** +: type: date + + +**`netflow.flow_start_milliseconds`** +: type: date + + +**`netflow.flow_start_nanoseconds`** +: type: date + + +**`netflow.flow_start_seconds`** +: type: date + + +**`netflow.flow_start_sys_up_time`** +: type: long + + +**`netflow.flow_table_flush_event_count`** +: type: long + + +**`netflow.flow_table_peak_count`** +: type: long + + +**`netflow.forwarding_status`** +: type: short + + +**`netflow.fragment_flags`** +: type: short + + +**`netflow.fragment_identification`** +: type: long + + +**`netflow.fragment_offset`** +: type: integer + + +**`netflow.fw_blackout_secs`** +: type: long + + +**`netflow.fw_configured_value`** +: type: long + + +**`netflow.fw_cts_src_sgt`** +: type: long + + +**`netflow.fw_event_level`** +: type: long + + +**`netflow.fw_event_level_id`** +: type: long + + +**`netflow.fw_ext_event`** +: type: integer + + +**`netflow.fw_ext_event_alt`** +: type: long + + +**`netflow.fw_ext_event_desc`** +: type: keyword + + +**`netflow.fw_half_open_count`** +: type: long + + +**`netflow.fw_half_open_high`** +: type: long + + +**`netflow.fw_half_open_rate`** +: type: long + + +**`netflow.fw_max_sessions`** +: type: long + + +**`netflow.fw_rule`** +: type: keyword + + +**`netflow.fw_summary_pkt_count`** +: type: long + + +**`netflow.fw_zone_pair_id`** +: type: long + + +**`netflow.fw_zone_pair_name`** +: type: long + + +**`netflow.global_address_mapping_high_threshold`** +: type: long + + +**`netflow.gre_key`** +: type: long + + +**`netflow.hash_digest_output`** +: type: boolean + + +**`netflow.hash_flow_domain`** +: type: integer + + +**`netflow.hash_initialiser_value`** +: type: long + + +**`netflow.hash_ip_payload_offset`** +: type: long + + +**`netflow.hash_ip_payload_size`** +: type: long + + +**`netflow.hash_output_range_max`** +: type: long + + +**`netflow.hash_output_range_min`** +: type: long + + +**`netflow.hash_selected_range_max`** +: type: long + + +**`netflow.hash_selected_range_min`** +: type: long + + +**`netflow.http_content_type`** +: type: keyword + + +**`netflow.http_message_version`** +: type: keyword + + +**`netflow.http_reason_phrase`** +: type: keyword + + +**`netflow.http_request_host`** +: type: keyword + + +**`netflow.http_request_method`** +: type: keyword + + +**`netflow.http_request_target`** +: type: keyword + + +**`netflow.http_status_code`** +: type: integer + + +**`netflow.http_user_agent`** +: type: keyword + + +**`netflow.icmp_code_ipv4`** +: type: short + + 
+**`netflow.icmp_code_ipv6`** +: type: short + + +**`netflow.icmp_type_code_ipv4`** +: type: integer + + +**`netflow.icmp_type_code_ipv6`** +: type: integer + + +**`netflow.icmp_type_ipv4`** +: type: short + + +**`netflow.icmp_type_ipv6`** +: type: short + + +**`netflow.igmp_type`** +: type: short + + +**`netflow.ignored_data_record_total_count`** +: type: long + + +**`netflow.ignored_layer2_frame_total_count`** +: type: long + + +**`netflow.ignored_layer2_octet_total_count`** +: type: long + + +**`netflow.ignored_octet_total_count`** +: type: long + + +**`netflow.ignored_packet_total_count`** +: type: long + + +**`netflow.information_element_data_type`** +: type: short + + +**`netflow.information_element_description`** +: type: keyword + + +**`netflow.information_element_id`** +: type: integer + + +**`netflow.information_element_index`** +: type: integer + + +**`netflow.information_element_name`** +: type: keyword + + +**`netflow.information_element_range_begin`** +: type: long + + +**`netflow.information_element_range_end`** +: type: long + + +**`netflow.information_element_semantics`** +: type: short + + +**`netflow.information_element_units`** +: type: integer + + +**`netflow.ingress_broadcast_packet_total_count`** +: type: long + + +**`netflow.ingress_interface`** +: type: long + + +**`netflow.ingress_interface_type`** +: type: long + + +**`netflow.ingress_multicast_packet_total_count`** +: type: long + + +**`netflow.ingress_physical_interface`** +: type: long + + +**`netflow.ingress_unicast_packet_total_count`** +: type: long + + +**`netflow.ingress_vrfid`** +: type: long + + +**`netflow.initial_tcp_flags`** +: type: short + + +**`netflow.initiator_octets`** +: type: long + + +**`netflow.initiator_packets`** +: type: long + + +**`netflow.interface_description`** +: type: keyword + + +**`netflow.interface_name`** +: type: keyword + + +**`netflow.intermediate_process_id`** +: type: long + + +**`netflow.internal_address_realm`** +: type: short + + +**`netflow.ip_class_of_service`** +: type: short + + +**`netflow.ip_diff_serv_code_point`** +: type: short + + +**`netflow.ip_header_length`** +: type: short + + +**`netflow.ip_header_packet_section`** +: type: short + + +**`netflow.ip_next_hop_ipv4_address`** +: type: ip + + +**`netflow.ip_next_hop_ipv6_address`** +: type: ip + + +**`netflow.ip_payload_length`** +: type: long + + +**`netflow.ip_payload_packet_section`** +: type: short + + +**`netflow.ip_precedence`** +: type: short + + +**`netflow.ip_sec_spi`** +: type: long + + +**`netflow.ip_total_length`** +: type: long + + +**`netflow.ip_ttl`** +: type: short + + +**`netflow.ip_version`** +: type: short + + +**`netflow.ipv4_ihl`** +: type: short + + +**`netflow.ipv4_options`** +: type: long + + +**`netflow.ipv4_router_sc`** +: type: ip + + +**`netflow.ipv6_extension_headers`** +: type: long + + +**`netflow.is_multicast`** +: type: short + + +**`netflow.ixia_browser_id`** +: type: short + + +**`netflow.ixia_browser_name`** +: type: keyword + + +**`netflow.ixia_device_id`** +: type: short + + +**`netflow.ixia_device_name`** +: type: keyword + + +**`netflow.ixia_dns_answer`** +: type: keyword + + +**`netflow.ixia_dns_classes`** +: type: keyword + + +**`netflow.ixia_dns_query`** +: type: keyword + + +**`netflow.ixia_dns_record_txt`** +: type: keyword + + +**`netflow.ixia_dst_as_name`** +: type: keyword + + +**`netflow.ixia_dst_city_name`** +: type: keyword + + +**`netflow.ixia_dst_country_code`** +: type: keyword + + +**`netflow.ixia_dst_country_name`** +: type: keyword + + 
+**`netflow.ixia_dst_latitude`** +: type: float + + +**`netflow.ixia_dst_longitude`** +: type: float + + +**`netflow.ixia_dst_region_code`** +: type: keyword + + +**`netflow.ixia_dst_region_node`** +: type: keyword + + +**`netflow.ixia_encrypt_cipher`** +: type: keyword + + +**`netflow.ixia_encrypt_key_length`** +: type: integer + + +**`netflow.ixia_encrypt_type`** +: type: keyword + + +**`netflow.ixia_http_host_name`** +: type: keyword + + +**`netflow.ixia_http_uri`** +: type: keyword + + +**`netflow.ixia_http_user_agent`** +: type: keyword + + +**`netflow.ixia_imsi_subscriber`** +: type: keyword + + +**`netflow.ixia_l7_app_id`** +: type: long + + +**`netflow.ixia_l7_app_name`** +: type: keyword + + +**`netflow.ixia_latency`** +: type: long + + +**`netflow.ixia_rev_octet_delta_count`** +: type: long + + +**`netflow.ixia_rev_packet_delta_count`** +: type: long + + +**`netflow.ixia_src_as_name`** +: type: keyword + + +**`netflow.ixia_src_city_name`** +: type: keyword + + +**`netflow.ixia_src_country_code`** +: type: keyword + + +**`netflow.ixia_src_country_name`** +: type: keyword + + +**`netflow.ixia_src_latitude`** +: type: float + + +**`netflow.ixia_src_longitude`** +: type: float + + +**`netflow.ixia_src_region_code`** +: type: keyword + + +**`netflow.ixia_src_region_name`** +: type: keyword + + +**`netflow.ixia_threat_ipv4`** +: type: ip + + +**`netflow.ixia_threat_ipv6`** +: type: ip + + +**`netflow.ixia_threat_type`** +: type: keyword + + +**`netflow.large_packet_count`** +: type: long + + +**`netflow.layer2_frame_delta_count`** +: type: long + + +**`netflow.layer2_frame_total_count`** +: type: long + + +**`netflow.layer2_octet_delta_count`** +: type: long + + +**`netflow.layer2_octet_delta_sum_of_squares`** +: type: long + + +**`netflow.layer2_octet_total_count`** +: type: long + + +**`netflow.layer2_octet_total_sum_of_squares`** +: type: long + + +**`netflow.layer2_segment_id`** +: type: long + + +**`netflow.layer2packet_section_data`** +: type: short + + +**`netflow.layer2packet_section_offset`** +: type: integer + + +**`netflow.layer2packet_section_size`** +: type: integer + + +**`netflow.line_card_id`** +: type: long + + +**`netflow.log_op`** +: type: short + + +**`netflow.lower_ci_limit`** +: type: double + + +**`netflow.mark`** +: type: long + + +**`netflow.max_bib_entries`** +: type: long + + +**`netflow.max_entries_per_user`** +: type: long + + +**`netflow.max_export_seconds`** +: type: date + + +**`netflow.max_flow_end_microseconds`** +: type: date + + +**`netflow.max_flow_end_milliseconds`** +: type: date + + +**`netflow.max_flow_end_nanoseconds`** +: type: date + + +**`netflow.max_flow_end_seconds`** +: type: date + + +**`netflow.max_fragments_pending_reassembly`** +: type: long + + +**`netflow.max_packet_size`** +: type: integer + + +**`netflow.max_session_entries`** +: type: long + + +**`netflow.max_subscribers`** +: type: long + + +**`netflow.maximum_ip_total_length`** +: type: long + + +**`netflow.maximum_layer2_total_length`** +: type: long + + +**`netflow.maximum_ttl`** +: type: short + + +**`netflow.mean_flow_rate`** +: type: long + + +**`netflow.mean_packet_rate`** +: type: long + + +**`netflow.message_md5_checksum`** +: type: short + + +**`netflow.message_scope`** +: type: short + + +**`netflow.metering_process_id`** +: type: long + + +**`netflow.metro_evc_id`** +: type: keyword + + +**`netflow.metro_evc_type`** +: type: short + + +**`netflow.mib_capture_time_semantics`** +: type: short + + +**`netflow.mib_context_engine_id`** +: type: short + + 
+**`netflow.mib_context_name`** +: type: keyword + + +**`netflow.mib_index_indicator`** +: type: long + + +**`netflow.mib_module_name`** +: type: keyword + + +**`netflow.mib_object_description`** +: type: keyword + + +**`netflow.mib_object_identifier`** +: type: short + + +**`netflow.mib_object_name`** +: type: keyword + + +**`netflow.mib_object_syntax`** +: type: keyword + + +**`netflow.mib_object_value_bits`** +: type: short + + +**`netflow.mib_object_value_counter`** +: type: long + + +**`netflow.mib_object_value_gauge`** +: type: long + + +**`netflow.mib_object_value_integer`** +: type: integer + + +**`netflow.mib_object_value_ip_address`** +: type: ip + + +**`netflow.mib_object_value_octet_string`** +: type: short + + +**`netflow.mib_object_value_oid`** +: type: short + + +**`netflow.mib_object_value_time_ticks`** +: type: long + + +**`netflow.mib_object_value_unsigned`** +: type: long + + +**`netflow.mib_sub_identifier`** +: type: long + + +**`netflow.min_export_seconds`** +: type: date + + +**`netflow.min_flow_start_microseconds`** +: type: date + + +**`netflow.min_flow_start_milliseconds`** +: type: date + + +**`netflow.min_flow_start_nanoseconds`** +: type: date + + +**`netflow.min_flow_start_seconds`** +: type: date + + +**`netflow.minimum_ip_total_length`** +: type: long + + +**`netflow.minimum_layer2_total_length`** +: type: long + + +**`netflow.minimum_ttl`** +: type: short + + +**`netflow.mobile_imsi`** +: type: keyword + + +**`netflow.mobile_msisdn`** +: type: keyword + + +**`netflow.monitoring_interval_end_milli_seconds`** +: type: date + + +**`netflow.monitoring_interval_start_milli_seconds`** +: type: date + + +**`netflow.mpls_label_stack_depth`** +: type: long + + +**`netflow.mpls_label_stack_length`** +: type: long + + +**`netflow.mpls_label_stack_section`** +: type: short + + +**`netflow.mpls_label_stack_section10`** +: type: short + + +**`netflow.mpls_label_stack_section2`** +: type: short + + +**`netflow.mpls_label_stack_section3`** +: type: short + + +**`netflow.mpls_label_stack_section4`** +: type: short + + +**`netflow.mpls_label_stack_section5`** +: type: short + + +**`netflow.mpls_label_stack_section6`** +: type: short + + +**`netflow.mpls_label_stack_section7`** +: type: short + + +**`netflow.mpls_label_stack_section8`** +: type: short + + +**`netflow.mpls_label_stack_section9`** +: type: short + + +**`netflow.mpls_payload_length`** +: type: long + + +**`netflow.mpls_payload_packet_section`** +: type: short + + +**`netflow.mpls_top_label_exp`** +: type: short + + +**`netflow.mpls_top_label_ipv4_address`** +: type: ip + + +**`netflow.mpls_top_label_ipv6_address`** +: type: ip + + +**`netflow.mpls_top_label_prefix_length`** +: type: short + + +**`netflow.mpls_top_label_stack_section`** +: type: short + + +**`netflow.mpls_top_label_ttl`** +: type: short + + +**`netflow.mpls_top_label_type`** +: type: short + + +**`netflow.mpls_vpn_route_distinguisher`** +: type: short + + +**`netflow.mptcp_address_id`** +: type: short + + +**`netflow.mptcp_flags`** +: type: short + + +**`netflow.mptcp_initial_data_sequence_number`** +: type: long + + +**`netflow.mptcp_maximum_segment_size`** +: type: integer + + +**`netflow.mptcp_receiver_token`** +: type: long + + +**`netflow.multicast_replication_factor`** +: type: long + + +**`netflow.nat_event`** +: type: short + + +**`netflow.nat_inside_svcid`** +: type: integer + + +**`netflow.nat_instance_id`** +: type: long + + +**`netflow.nat_originating_address_realm`** +: type: short + + +**`netflow.nat_outside_svcid`** +: type: integer 
+ + +**`netflow.nat_pool_id`** +: type: long + + +**`netflow.nat_pool_name`** +: type: keyword + + +**`netflow.nat_quota_exceeded_event`** +: type: long + + +**`netflow.nat_sub_string`** +: type: keyword + + +**`netflow.nat_threshold_event`** +: type: long + + +**`netflow.nat_type`** +: type: short + + +**`netflow.netscale_ica_client_version`** +: type: keyword + + +**`netflow.netscaler_aaa_username`** +: type: keyword + + +**`netflow.netscaler_app_name`** +: type: keyword + + +**`netflow.netscaler_app_name_app_id`** +: type: long + + +**`netflow.netscaler_app_name_incarnation_number`** +: type: long + + +**`netflow.netscaler_app_template_name`** +: type: keyword + + +**`netflow.netscaler_app_unit_name_app_id`** +: type: long + + +**`netflow.netscaler_application_startup_duration`** +: type: long + + +**`netflow.netscaler_application_startup_time`** +: type: long + + +**`netflow.netscaler_cache_redir_client_connection_core_id`** +: type: long + + +**`netflow.netscaler_cache_redir_client_connection_transaction_id`** +: type: long + + +**`netflow.netscaler_client_rtt`** +: type: long + + +**`netflow.netscaler_connection_chain_hop_count`** +: type: long + + +**`netflow.netscaler_connection_chain_id`** +: type: short + + +**`netflow.netscaler_connection_id`** +: type: long + + +**`netflow.netscaler_current_license_consumed`** +: type: long + + +**`netflow.netscaler_db_clt_host_name`** +: type: keyword + + +**`netflow.netscaler_db_database_name`** +: type: keyword + + +**`netflow.netscaler_db_login_flags`** +: type: long + + +**`netflow.netscaler_db_protocol_name`** +: type: short + + +**`netflow.netscaler_db_req_string`** +: type: keyword + + +**`netflow.netscaler_db_req_type`** +: type: short + + +**`netflow.netscaler_db_resp_length`** +: type: long + + +**`netflow.netscaler_db_resp_status`** +: type: long + + +**`netflow.netscaler_db_resp_status_string`** +: type: keyword + + +**`netflow.netscaler_db_user_name`** +: type: keyword + + +**`netflow.netscaler_flow_flags`** +: type: long + + +**`netflow.netscaler_http_client_interaction_end_time`** +: type: keyword + + +**`netflow.netscaler_http_client_interaction_start_time`** +: type: keyword + + +**`netflow.netscaler_http_client_render_end_time`** +: type: keyword + + +**`netflow.netscaler_http_client_render_start_time`** +: type: keyword + + +**`netflow.netscaler_http_content_type`** +: type: keyword + + +**`netflow.netscaler_http_domain_name`** +: type: keyword + + +**`netflow.netscaler_http_req_authorization`** +: type: keyword + + +**`netflow.netscaler_http_req_cookie`** +: type: keyword + + +**`netflow.netscaler_http_req_forw_fb`** +: type: long + + +**`netflow.netscaler_http_req_forw_lb`** +: type: long + + +**`netflow.netscaler_http_req_host`** +: type: keyword + + +**`netflow.netscaler_http_req_method`** +: type: keyword + + +**`netflow.netscaler_http_req_rcv_fb`** +: type: long + + +**`netflow.netscaler_http_req_rcv_lb`** +: type: long + + +**`netflow.netscaler_http_req_referer`** +: type: keyword + + +**`netflow.netscaler_http_req_url`** +: type: keyword + + +**`netflow.netscaler_http_req_user_agent`** +: type: keyword + + +**`netflow.netscaler_http_req_via`** +: type: keyword + + +**`netflow.netscaler_http_req_xforwarded_for`** +: type: keyword + + +**`netflow.netscaler_http_res_forw_fb`** +: type: long + + +**`netflow.netscaler_http_res_forw_lb`** +: type: long + + +**`netflow.netscaler_http_res_location`** +: type: keyword + + +**`netflow.netscaler_http_res_rcv_fb`** +: type: long + + +**`netflow.netscaler_http_res_rcv_lb`** +: 
type: long + + +**`netflow.netscaler_http_res_set_cookie`** +: type: keyword + + +**`netflow.netscaler_http_res_set_cookie2`** +: type: keyword + + +**`netflow.netscaler_http_rsp_len`** +: type: long + + +**`netflow.netscaler_http_rsp_status`** +: type: integer + + +**`netflow.netscaler_ica_app_module_path`** +: type: keyword + + +**`netflow.netscaler_ica_app_process_id`** +: type: long + + +**`netflow.netscaler_ica_application_name`** +: type: keyword + + +**`netflow.netscaler_ica_application_termination_time`** +: type: long + + +**`netflow.netscaler_ica_application_termination_type`** +: type: integer + + +**`netflow.netscaler_ica_channel_id1`** +: type: long + + +**`netflow.netscaler_ica_channel_id1_bytes`** +: type: long + + +**`netflow.netscaler_ica_channel_id2`** +: type: long + + +**`netflow.netscaler_ica_channel_id2_bytes`** +: type: long + + +**`netflow.netscaler_ica_channel_id3`** +: type: long + + +**`netflow.netscaler_ica_channel_id3_bytes`** +: type: long + + +**`netflow.netscaler_ica_channel_id4`** +: type: long + + +**`netflow.netscaler_ica_channel_id4_bytes`** +: type: long + + +**`netflow.netscaler_ica_channel_id5`** +: type: long + + +**`netflow.netscaler_ica_channel_id5_bytes`** +: type: long + + +**`netflow.netscaler_ica_client_host_name`** +: type: keyword + + +**`netflow.netscaler_ica_client_ip`** +: type: ip + + +**`netflow.netscaler_ica_client_launcher`** +: type: integer + + +**`netflow.netscaler_ica_client_side_rto_count`** +: type: integer + + +**`netflow.netscaler_ica_client_side_window_size`** +: type: integer + + +**`netflow.netscaler_ica_client_type`** +: type: integer + + +**`netflow.netscaler_ica_clientside_delay`** +: type: long + + +**`netflow.netscaler_ica_clientside_jitter`** +: type: long + + +**`netflow.netscaler_ica_clientside_packets_retransmit`** +: type: integer + + +**`netflow.netscaler_ica_clientside_rtt`** +: type: long + + +**`netflow.netscaler_ica_clientside_rx_bytes`** +: type: long + + +**`netflow.netscaler_ica_clientside_srtt`** +: type: long + + +**`netflow.netscaler_ica_clientside_tx_bytes`** +: type: long + + +**`netflow.netscaler_ica_connection_priority`** +: type: integer + + +**`netflow.netscaler_ica_device_serial_no`** +: type: long + + +**`netflow.netscaler_ica_domain_name`** +: type: keyword + + +**`netflow.netscaler_ica_flags`** +: type: long + + +**`netflow.netscaler_ica_host_delay`** +: type: long + + +**`netflow.netscaler_ica_l7_client_latency`** +: type: long + + +**`netflow.netscaler_ica_l7_server_latency`** +: type: long + + +**`netflow.netscaler_ica_launch_mechanism`** +: type: integer + + +**`netflow.netscaler_ica_network_update_end_time`** +: type: long + + +**`netflow.netscaler_ica_network_update_start_time`** +: type: long + + +**`netflow.netscaler_ica_rtt`** +: type: long + + +**`netflow.netscaler_ica_server_name`** +: type: keyword + + +**`netflow.netscaler_ica_server_side_rto_count`** +: type: integer + + +**`netflow.netscaler_ica_server_side_window_size`** +: type: integer + + +**`netflow.netscaler_ica_serverside_delay`** +: type: long + + +**`netflow.netscaler_ica_serverside_jitter`** +: type: long + + +**`netflow.netscaler_ica_serverside_packets_retransmit`** +: type: integer + + +**`netflow.netscaler_ica_serverside_rtt`** +: type: long + + +**`netflow.netscaler_ica_serverside_srtt`** +: type: long + + +**`netflow.netscaler_ica_session_end_time`** +: type: long + + +**`netflow.netscaler_ica_session_guid`** +: type: short + + +**`netflow.netscaler_ica_session_reconnects`** +: type: short + + 
+**`netflow.netscaler_ica_session_setup_time`** +: type: long + + +**`netflow.netscaler_ica_session_update_begin_sec`** +: type: long + + +**`netflow.netscaler_ica_session_update_end_sec`** +: type: long + + +**`netflow.netscaler_ica_username`** +: type: keyword + + +**`netflow.netscaler_license_type`** +: type: short + + +**`netflow.netscaler_main_page_core_id`** +: type: long + + +**`netflow.netscaler_main_page_id`** +: type: long + + +**`netflow.netscaler_max_license_count`** +: type: long + + +**`netflow.netscaler_msi_client_cookie`** +: type: short + + +**`netflow.netscaler_round_trip_time`** +: type: long + + +**`netflow.netscaler_server_ttfb`** +: type: long + + +**`netflow.netscaler_server_ttlb`** +: type: long + + +**`netflow.netscaler_syslog_message`** +: type: keyword + + +**`netflow.netscaler_syslog_priority`** +: type: short + + +**`netflow.netscaler_syslog_timestamp`** +: type: long + + +**`netflow.netscaler_transaction_id`** +: type: long + + +**`netflow.netscaler_unknown270`** +: type: long + + +**`netflow.netscaler_unknown271`** +: type: long + + +**`netflow.netscaler_unknown272`** +: type: long + + +**`netflow.netscaler_unknown273`** +: type: long + + +**`netflow.netscaler_unknown274`** +: type: long + + +**`netflow.netscaler_unknown275`** +: type: long + + +**`netflow.netscaler_unknown276`** +: type: long + + +**`netflow.netscaler_unknown277`** +: type: long + + +**`netflow.netscaler_unknown278`** +: type: long + + +**`netflow.netscaler_unknown279`** +: type: long + + +**`netflow.netscaler_unknown280`** +: type: long + + +**`netflow.netscaler_unknown281`** +: type: long + + +**`netflow.netscaler_unknown282`** +: type: long + + +**`netflow.netscaler_unknown283`** +: type: long + + +**`netflow.netscaler_unknown284`** +: type: long + + +**`netflow.netscaler_unknown285`** +: type: long + + +**`netflow.netscaler_unknown286`** +: type: long + + +**`netflow.netscaler_unknown287`** +: type: long + + +**`netflow.netscaler_unknown288`** +: type: long + + +**`netflow.netscaler_unknown289`** +: type: long + + +**`netflow.netscaler_unknown290`** +: type: long + + +**`netflow.netscaler_unknown291`** +: type: long + + +**`netflow.netscaler_unknown292`** +: type: long + + +**`netflow.netscaler_unknown293`** +: type: long + + +**`netflow.netscaler_unknown294`** +: type: long + + +**`netflow.netscaler_unknown295`** +: type: long + + +**`netflow.netscaler_unknown296`** +: type: long + + +**`netflow.netscaler_unknown297`** +: type: long + + +**`netflow.netscaler_unknown298`** +: type: long + + +**`netflow.netscaler_unknown299`** +: type: long + + +**`netflow.netscaler_unknown300`** +: type: long + + +**`netflow.netscaler_unknown301`** +: type: long + + +**`netflow.netscaler_unknown302`** +: type: long + + +**`netflow.netscaler_unknown303`** +: type: long + + +**`netflow.netscaler_unknown304`** +: type: long + + +**`netflow.netscaler_unknown305`** +: type: long + + +**`netflow.netscaler_unknown306`** +: type: long + + +**`netflow.netscaler_unknown307`** +: type: long + + +**`netflow.netscaler_unknown308`** +: type: long + + +**`netflow.netscaler_unknown309`** +: type: long + + +**`netflow.netscaler_unknown310`** +: type: long + + +**`netflow.netscaler_unknown311`** +: type: long + + +**`netflow.netscaler_unknown312`** +: type: long + + +**`netflow.netscaler_unknown313`** +: type: long + + +**`netflow.netscaler_unknown314`** +: type: long + + +**`netflow.netscaler_unknown315`** +: type: long + + +**`netflow.netscaler_unknown316`** +: type: keyword + + +**`netflow.netscaler_unknown317`** +: 
type: long + + +**`netflow.netscaler_unknown318`** +: type: long + + +**`netflow.netscaler_unknown319`** +: type: keyword + + +**`netflow.netscaler_unknown320`** +: type: integer + + +**`netflow.netscaler_unknown321`** +: type: long + + +**`netflow.netscaler_unknown322`** +: type: long + + +**`netflow.netscaler_unknown323`** +: type: integer + + +**`netflow.netscaler_unknown324`** +: type: integer + + +**`netflow.netscaler_unknown325`** +: type: integer + + +**`netflow.netscaler_unknown326`** +: type: integer + + +**`netflow.netscaler_unknown327`** +: type: long + + +**`netflow.netscaler_unknown328`** +: type: integer + + +**`netflow.netscaler_unknown329`** +: type: integer + + +**`netflow.netscaler_unknown330`** +: type: integer + + +**`netflow.netscaler_unknown331`** +: type: integer + + +**`netflow.netscaler_unknown332`** +: type: long + + +**`netflow.netscaler_unknown333`** +: type: keyword + + +**`netflow.netscaler_unknown334`** +: type: keyword + + +**`netflow.netscaler_unknown335`** +: type: long + + +**`netflow.netscaler_unknown336`** +: type: long + + +**`netflow.netscaler_unknown337`** +: type: long + + +**`netflow.netscaler_unknown338`** +: type: long + + +**`netflow.netscaler_unknown339`** +: type: long + + +**`netflow.netscaler_unknown340`** +: type: long + + +**`netflow.netscaler_unknown341`** +: type: long + + +**`netflow.netscaler_unknown342`** +: type: long + + +**`netflow.netscaler_unknown343`** +: type: long + + +**`netflow.netscaler_unknown344`** +: type: long + + +**`netflow.netscaler_unknown345`** +: type: long + + +**`netflow.netscaler_unknown346`** +: type: long + + +**`netflow.netscaler_unknown347`** +: type: long + + +**`netflow.netscaler_unknown348`** +: type: integer + + +**`netflow.netscaler_unknown349`** +: type: keyword + + +**`netflow.netscaler_unknown350`** +: type: keyword + + +**`netflow.netscaler_unknown351`** +: type: keyword + + +**`netflow.netscaler_unknown352`** +: type: integer + + +**`netflow.netscaler_unknown353`** +: type: long + + +**`netflow.netscaler_unknown354`** +: type: long + + +**`netflow.netscaler_unknown355`** +: type: long + + +**`netflow.netscaler_unknown356`** +: type: long + + +**`netflow.netscaler_unknown357`** +: type: long + + +**`netflow.netscaler_unknown363`** +: type: short + + +**`netflow.netscaler_unknown383`** +: type: short + + +**`netflow.netscaler_unknown391`** +: type: long + + +**`netflow.netscaler_unknown398`** +: type: long + + +**`netflow.netscaler_unknown404`** +: type: long + + +**`netflow.netscaler_unknown405`** +: type: long + + +**`netflow.netscaler_unknown427`** +: type: long + + +**`netflow.netscaler_unknown429`** +: type: short + + +**`netflow.netscaler_unknown432`** +: type: short + + +**`netflow.netscaler_unknown433`** +: type: short + + +**`netflow.netscaler_unknown453`** +: type: long + + +**`netflow.netscaler_unknown465`** +: type: long + + +**`netflow.new_connection_delta_count`** +: type: long + + +**`netflow.next_header_ipv6`** +: type: short + + +**`netflow.non_empty_packet_count`** +: type: long + + +**`netflow.not_sent_flow_total_count`** +: type: long + + +**`netflow.not_sent_layer2_octet_total_count`** +: type: long + + +**`netflow.not_sent_octet_total_count`** +: type: long + + +**`netflow.not_sent_packet_total_count`** +: type: long + + +**`netflow.observation_domain_id`** +: type: long + + +**`netflow.observation_domain_name`** +: type: keyword + + +**`netflow.observation_point_id`** +: type: long + + +**`netflow.observation_point_type`** +: type: short + + 
+**`netflow.observation_time_microseconds`** +: type: date + + +**`netflow.observation_time_milliseconds`** +: type: date + + +**`netflow.observation_time_nanoseconds`** +: type: date + + +**`netflow.observation_time_seconds`** +: type: date + + +**`netflow.observed_flow_total_count`** +: type: long + + +**`netflow.octet_delta_count`** +: type: long + + +**`netflow.octet_delta_sum_of_squares`** +: type: long + + +**`netflow.octet_total_count`** +: type: long + + +**`netflow.octet_total_sum_of_squares`** +: type: long + + +**`netflow.opaque_octets`** +: type: short + + +**`netflow.original_exporter_ipv4_address`** +: type: ip + + +**`netflow.original_exporter_ipv6_address`** +: type: ip + + +**`netflow.original_flows_completed`** +: type: long + + +**`netflow.original_flows_initiated`** +: type: long + + +**`netflow.original_flows_present`** +: type: long + + +**`netflow.original_observation_domain_id`** +: type: long + + +**`netflow.os_finger_print`** +: type: keyword + + +**`netflow.os_name`** +: type: keyword + + +**`netflow.os_version`** +: type: keyword + + +**`netflow.p2p_technology`** +: type: keyword + + +**`netflow.packet_delta_count`** +: type: long + + +**`netflow.packet_total_count`** +: type: long + + +**`netflow.padding_octets`** +: type: short + + +**`netflow.payload`** +: type: keyword + + +**`netflow.payload_entropy`** +: type: short + + +**`netflow.payload_length_ipv6`** +: type: integer + + +**`netflow.policy_qos_classification_hierarchy`** +: type: long + + +**`netflow.policy_qos_queue_index`** +: type: long + + +**`netflow.policy_qos_queuedrops`** +: type: long + + +**`netflow.policy_qos_queueindex`** +: type: long + + +**`netflow.port_id`** +: type: long + + +**`netflow.port_range_end`** +: type: integer + + +**`netflow.port_range_num_ports`** +: type: integer + + +**`netflow.port_range_start`** +: type: integer + + +**`netflow.port_range_step_size`** +: type: integer + + +**`netflow.post_destination_mac_address`** +: type: keyword + + +**`netflow.post_dot1q_customer_vlan_id`** +: type: integer + + +**`netflow.post_dot1q_vlan_id`** +: type: integer + + +**`netflow.post_ip_class_of_service`** +: type: short + + +**`netflow.post_ip_diff_serv_code_point`** +: type: short + + +**`netflow.post_ip_precedence`** +: type: short + + +**`netflow.post_layer2_octet_delta_count`** +: type: long + + +**`netflow.post_layer2_octet_total_count`** +: type: long + + +**`netflow.post_mcast_layer2_octet_delta_count`** +: type: long + + +**`netflow.post_mcast_layer2_octet_total_count`** +: type: long + + +**`netflow.post_mcast_octet_delta_count`** +: type: long + + +**`netflow.post_mcast_octet_total_count`** +: type: long + + +**`netflow.post_mcast_packet_delta_count`** +: type: long + + +**`netflow.post_mcast_packet_total_count`** +: type: long + + +**`netflow.post_mpls_top_label_exp`** +: type: short + + +**`netflow.post_napt_destination_transport_port`** +: type: integer + + +**`netflow.post_napt_source_transport_port`** +: type: integer + + +**`netflow.post_nat_destination_ipv4_address`** +: type: ip + + +**`netflow.post_nat_destination_ipv6_address`** +: type: ip + + +**`netflow.post_nat_source_ipv4_address`** +: type: ip + + +**`netflow.post_nat_source_ipv6_address`** +: type: ip + + +**`netflow.post_octet_delta_count`** +: type: long + + +**`netflow.post_octet_total_count`** +: type: long + + +**`netflow.post_packet_delta_count`** +: type: long + + +**`netflow.post_packet_total_count`** +: type: long + + +**`netflow.post_source_mac_address`** +: type: keyword + + 
+**`netflow.post_vlan_id`** +: type: integer + + +**`netflow.private_enterprise_number`** +: type: long + + +**`netflow.procera_apn`** +: type: keyword + + +**`netflow.procera_base_service`** +: type: keyword + + +**`netflow.procera_content_categories`** +: type: keyword + + +**`netflow.procera_device_id`** +: type: long + + +**`netflow.procera_external_rtt`** +: type: integer + + +**`netflow.procera_flow_behavior`** +: type: keyword + + +**`netflow.procera_ggsn`** +: type: keyword + + +**`netflow.procera_http_content_type`** +: type: keyword + + +**`netflow.procera_http_file_length`** +: type: long + + +**`netflow.procera_http_language`** +: type: keyword + + +**`netflow.procera_http_location`** +: type: keyword + + +**`netflow.procera_http_referer`** +: type: keyword + + +**`netflow.procera_http_request_method`** +: type: keyword + + +**`netflow.procera_http_request_version`** +: type: keyword + + +**`netflow.procera_http_response_status`** +: type: integer + + +**`netflow.procera_http_url`** +: type: keyword + + +**`netflow.procera_http_user_agent`** +: type: keyword + + +**`netflow.procera_imsi`** +: type: long + + +**`netflow.procera_incoming_octets`** +: type: long + + +**`netflow.procera_incoming_packets`** +: type: long + + +**`netflow.procera_incoming_shaping_drops`** +: type: long + + +**`netflow.procera_incoming_shaping_latency`** +: type: integer + + +**`netflow.procera_internal_rtt`** +: type: integer + + +**`netflow.procera_local_ipv4_host`** +: type: ip + + +**`netflow.procera_local_ipv6_host`** +: type: ip + + +**`netflow.procera_msisdn`** +: type: long + + +**`netflow.procera_outgoing_octets`** +: type: long + + +**`netflow.procera_outgoing_packets`** +: type: long + + +**`netflow.procera_outgoing_shaping_drops`** +: type: long + + +**`netflow.procera_outgoing_shaping_latency`** +: type: integer + + +**`netflow.procera_property`** +: type: keyword + + +**`netflow.procera_qoe_incoming_external`** +: type: float + + +**`netflow.procera_qoe_incoming_internal`** +: type: float + + +**`netflow.procera_qoe_outgoing_external`** +: type: float + + +**`netflow.procera_qoe_outgoing_internal`** +: type: float + + +**`netflow.procera_rat`** +: type: keyword + + +**`netflow.procera_remote_ipv4_host`** +: type: ip + + +**`netflow.procera_remote_ipv6_host`** +: type: ip + + +**`netflow.procera_rnc`** +: type: integer + + +**`netflow.procera_server_hostname`** +: type: keyword + + +**`netflow.procera_service`** +: type: keyword + + +**`netflow.procera_sgsn`** +: type: keyword + + +**`netflow.procera_subscriber_identifier`** +: type: keyword + + +**`netflow.procera_template_name`** +: type: keyword + + +**`netflow.procera_user_location_information`** +: type: keyword + + +**`netflow.protocol_identifier`** +: type: short + + +**`netflow.pseudo_wire_control_word`** +: type: long + + +**`netflow.pseudo_wire_destination_ipv4_address`** +: type: ip + + +**`netflow.pseudo_wire_id`** +: type: long + + +**`netflow.pseudo_wire_type`** +: type: integer + + +**`netflow.reason`** +: type: long + + +**`netflow.reason_text`** +: type: keyword + + +**`netflow.relative_error`** +: type: double + + +**`netflow.responder_octets`** +: type: long + + +**`netflow.responder_packets`** +: type: long + + +**`netflow.reverse_absolute_error`** +: type: double + + +**`netflow.reverse_anonymization_flags`** +: type: integer + + +**`netflow.reverse_anonymization_technique`** +: type: integer + + +**`netflow.reverse_application_category_name`** +: type: keyword + + +**`netflow.reverse_application_description`** +: 
type: keyword + + +**`netflow.reverse_application_group_name`** +: type: keyword + + +**`netflow.reverse_application_id`** +: type: keyword + + +**`netflow.reverse_application_name`** +: type: keyword + + +**`netflow.reverse_application_sub_category_name`** +: type: keyword + + +**`netflow.reverse_average_interarrival_time`** +: type: long + + +**`netflow.reverse_bgp_destination_as_number`** +: type: long + + +**`netflow.reverse_bgp_next_adjacent_as_number`** +: type: long + + +**`netflow.reverse_bgp_next_hop_ipv4_address`** +: type: ip + + +**`netflow.reverse_bgp_next_hop_ipv6_address`** +: type: ip + + +**`netflow.reverse_bgp_prev_adjacent_as_number`** +: type: long + + +**`netflow.reverse_bgp_source_as_number`** +: type: long + + +**`netflow.reverse_bgp_validity_state`** +: type: short + + +**`netflow.reverse_class_id`** +: type: short + + +**`netflow.reverse_class_name`** +: type: keyword + + +**`netflow.reverse_classification_engine_id`** +: type: short + + +**`netflow.reverse_collection_time_milliseconds`** +: type: long + + +**`netflow.reverse_collector_certificate`** +: type: keyword + + +**`netflow.reverse_confidence_level`** +: type: double + + +**`netflow.reverse_connection_sum_duration_seconds`** +: type: long + + +**`netflow.reverse_connection_transaction_id`** +: type: long + + +**`netflow.reverse_data_byte_count`** +: type: long + + +**`netflow.reverse_data_link_frame_section`** +: type: keyword + + +**`netflow.reverse_data_link_frame_size`** +: type: integer + + +**`netflow.reverse_data_link_frame_type`** +: type: integer + + +**`netflow.reverse_data_records_reliability`** +: type: short + + +**`netflow.reverse_delta_flow_count`** +: type: long + + +**`netflow.reverse_destination_ipv4_address`** +: type: ip + + +**`netflow.reverse_destination_ipv4_prefix`** +: type: ip + + +**`netflow.reverse_destination_ipv4_prefix_length`** +: type: short + + +**`netflow.reverse_destination_ipv6_address`** +: type: ip + + +**`netflow.reverse_destination_ipv6_prefix`** +: type: ip + + +**`netflow.reverse_destination_ipv6_prefix_length`** +: type: short + + +**`netflow.reverse_destination_mac_address`** +: type: keyword + + +**`netflow.reverse_destination_transport_port`** +: type: integer + + +**`netflow.reverse_digest_hash_value`** +: type: long + + +**`netflow.reverse_distinct_count_of_destination_ip_address`** +: type: long + + +**`netflow.reverse_distinct_count_of_destination_ipv4_address`** +: type: long + + +**`netflow.reverse_distinct_count_of_destination_ipv6_address`** +: type: long + + +**`netflow.reverse_distinct_count_of_source_ip_address`** +: type: long + + +**`netflow.reverse_distinct_count_of_source_ipv4_address`** +: type: long + + +**`netflow.reverse_distinct_count_of_source_ipv6_address`** +: type: long + + +**`netflow.reverse_dot1q_customer_dei`** +: type: short + + +**`netflow.reverse_dot1q_customer_destination_mac_address`** +: type: keyword + + +**`netflow.reverse_dot1q_customer_priority`** +: type: short + + +**`netflow.reverse_dot1q_customer_source_mac_address`** +: type: keyword + + +**`netflow.reverse_dot1q_customer_vlan_id`** +: type: integer + + +**`netflow.reverse_dot1q_dei`** +: type: short + + +**`netflow.reverse_dot1q_priority`** +: type: short + + +**`netflow.reverse_dot1q_service_instance_id`** +: type: long + + +**`netflow.reverse_dot1q_service_instance_priority`** +: type: short + + +**`netflow.reverse_dot1q_service_instance_tag`** +: type: keyword + + +**`netflow.reverse_dot1q_vlan_id`** +: type: integer + + 
+**`netflow.reverse_dropped_layer2_octet_delta_count`** +: type: long + + +**`netflow.reverse_dropped_layer2_octet_total_count`** +: type: long + + +**`netflow.reverse_dropped_octet_delta_count`** +: type: long + + +**`netflow.reverse_dropped_octet_total_count`** +: type: long + + +**`netflow.reverse_dropped_packet_delta_count`** +: type: long + + +**`netflow.reverse_dropped_packet_total_count`** +: type: long + + +**`netflow.reverse_dst_traffic_index`** +: type: long + + +**`netflow.reverse_egress_broadcast_packet_total_count`** +: type: long + + +**`netflow.reverse_egress_interface`** +: type: long + + +**`netflow.reverse_egress_interface_type`** +: type: long + + +**`netflow.reverse_egress_physical_interface`** +: type: long + + +**`netflow.reverse_egress_unicast_packet_total_count`** +: type: long + + +**`netflow.reverse_egress_vrfid`** +: type: long + + +**`netflow.reverse_encrypted_technology`** +: type: keyword + + +**`netflow.reverse_engine_id`** +: type: short + + +**`netflow.reverse_engine_type`** +: type: short + + +**`netflow.reverse_ethernet_header_length`** +: type: short + + +**`netflow.reverse_ethernet_payload_length`** +: type: integer + + +**`netflow.reverse_ethernet_total_length`** +: type: integer + + +**`netflow.reverse_ethernet_type`** +: type: integer + + +**`netflow.reverse_export_sctp_stream_id`** +: type: integer + + +**`netflow.reverse_exporter_certificate`** +: type: keyword + + +**`netflow.reverse_exporting_process_id`** +: type: long + + +**`netflow.reverse_firewall_event`** +: type: short + + +**`netflow.reverse_first_non_empty_packet_size`** +: type: integer + + +**`netflow.reverse_first_packet_banner`** +: type: keyword + + +**`netflow.reverse_flags_and_sampler_id`** +: type: long + + +**`netflow.reverse_flow_active_timeout`** +: type: integer + + +**`netflow.reverse_flow_attributes`** +: type: integer + + +**`netflow.reverse_flow_delta_milliseconds`** +: type: long + + +**`netflow.reverse_flow_direction`** +: type: short + + +**`netflow.reverse_flow_duration_microseconds`** +: type: long + + +**`netflow.reverse_flow_duration_milliseconds`** +: type: long + + +**`netflow.reverse_flow_end_delta_microseconds`** +: type: long + + +**`netflow.reverse_flow_end_microseconds`** +: type: long + + +**`netflow.reverse_flow_end_milliseconds`** +: type: long + + +**`netflow.reverse_flow_end_nanoseconds`** +: type: long + + +**`netflow.reverse_flow_end_reason`** +: type: short + + +**`netflow.reverse_flow_end_seconds`** +: type: long + + +**`netflow.reverse_flow_end_sys_up_time`** +: type: long + + +**`netflow.reverse_flow_idle_timeout`** +: type: integer + + +**`netflow.reverse_flow_label_ipv6`** +: type: long + + +**`netflow.reverse_flow_sampling_time_interval`** +: type: long + + +**`netflow.reverse_flow_sampling_time_spacing`** +: type: long + + +**`netflow.reverse_flow_selected_flow_delta_count`** +: type: long + + +**`netflow.reverse_flow_selected_octet_delta_count`** +: type: long + + +**`netflow.reverse_flow_selected_packet_delta_count`** +: type: long + + +**`netflow.reverse_flow_selector_algorithm`** +: type: integer + + +**`netflow.reverse_flow_start_delta_microseconds`** +: type: long + + +**`netflow.reverse_flow_start_microseconds`** +: type: long + + +**`netflow.reverse_flow_start_milliseconds`** +: type: long + + +**`netflow.reverse_flow_start_nanoseconds`** +: type: long + + +**`netflow.reverse_flow_start_seconds`** +: type: long + + +**`netflow.reverse_flow_start_sys_up_time`** +: type: long + + +**`netflow.reverse_forwarding_status`** +: type: long + + 
+**`netflow.reverse_fragment_flags`** +: type: short + + +**`netflow.reverse_fragment_identification`** +: type: long + + +**`netflow.reverse_fragment_offset`** +: type: integer + + +**`netflow.reverse_gre_key`** +: type: long + + +**`netflow.reverse_hash_digest_output`** +: type: short + + +**`netflow.reverse_hash_flow_domain`** +: type: integer + + +**`netflow.reverse_hash_initialiser_value`** +: type: long + + +**`netflow.reverse_hash_ip_payload_offset`** +: type: long + + +**`netflow.reverse_hash_ip_payload_size`** +: type: long + + +**`netflow.reverse_hash_output_range_max`** +: type: long + + +**`netflow.reverse_hash_output_range_min`** +: type: long + + +**`netflow.reverse_hash_selected_range_max`** +: type: long + + +**`netflow.reverse_hash_selected_range_min`** +: type: long + + +**`netflow.reverse_icmp_code_ipv4`** +: type: short + + +**`netflow.reverse_icmp_code_ipv6`** +: type: short + + +**`netflow.reverse_icmp_type_code_ipv4`** +: type: integer + + +**`netflow.reverse_icmp_type_code_ipv6`** +: type: integer + + +**`netflow.reverse_icmp_type_ipv4`** +: type: short + + +**`netflow.reverse_icmp_type_ipv6`** +: type: short + + +**`netflow.reverse_igmp_type`** +: type: short + + +**`netflow.reverse_ignored_data_record_total_count`** +: type: long + + +**`netflow.reverse_ignored_layer2_frame_total_count`** +: type: long + + +**`netflow.reverse_ignored_layer2_octet_total_count`** +: type: long + + +**`netflow.reverse_information_element_data_type`** +: type: short + + +**`netflow.reverse_information_element_description`** +: type: keyword + + +**`netflow.reverse_information_element_id`** +: type: integer + + +**`netflow.reverse_information_element_index`** +: type: integer + + +**`netflow.reverse_information_element_name`** +: type: keyword + + +**`netflow.reverse_information_element_range_begin`** +: type: long + + +**`netflow.reverse_information_element_range_end`** +: type: long + + +**`netflow.reverse_information_element_semantics`** +: type: short + + +**`netflow.reverse_information_element_units`** +: type: integer + + +**`netflow.reverse_ingress_broadcast_packet_total_count`** +: type: long + + +**`netflow.reverse_ingress_interface`** +: type: long + + +**`netflow.reverse_ingress_interface_type`** +: type: long + + +**`netflow.reverse_ingress_multicast_packet_total_count`** +: type: long + + +**`netflow.reverse_ingress_physical_interface`** +: type: long + + +**`netflow.reverse_ingress_unicast_packet_total_count`** +: type: long + + +**`netflow.reverse_ingress_vrfid`** +: type: long + + +**`netflow.reverse_initial_tcp_flags`** +: type: short + + +**`netflow.reverse_initiator_octets`** +: type: long + + +**`netflow.reverse_initiator_packets`** +: type: long + + +**`netflow.reverse_interface_description`** +: type: keyword + + +**`netflow.reverse_interface_name`** +: type: keyword + + +**`netflow.reverse_intermediate_process_id`** +: type: long + + +**`netflow.reverse_ip_class_of_service`** +: type: short + + +**`netflow.reverse_ip_diff_serv_code_point`** +: type: short + + +**`netflow.reverse_ip_header_length`** +: type: short + + +**`netflow.reverse_ip_header_packet_section`** +: type: keyword + + +**`netflow.reverse_ip_next_hop_ipv4_address`** +: type: ip + + +**`netflow.reverse_ip_next_hop_ipv6_address`** +: type: ip + + +**`netflow.reverse_ip_payload_length`** +: type: long + + +**`netflow.reverse_ip_payload_packet_section`** +: type: keyword + + +**`netflow.reverse_ip_precedence`** +: type: short + + +**`netflow.reverse_ip_sec_spi`** +: type: long + + 
+**`netflow.reverse_ip_total_length`** +: type: long + + +**`netflow.reverse_ip_ttl`** +: type: short + + +**`netflow.reverse_ip_version`** +: type: short + + +**`netflow.reverse_ipv4_ihl`** +: type: short + + +**`netflow.reverse_ipv4_options`** +: type: long + + +**`netflow.reverse_ipv4_router_sc`** +: type: ip + + +**`netflow.reverse_ipv6_extension_headers`** +: type: long + + +**`netflow.reverse_is_multicast`** +: type: short + + +**`netflow.reverse_large_packet_count`** +: type: long + + +**`netflow.reverse_layer2_frame_delta_count`** +: type: long + + +**`netflow.reverse_layer2_frame_total_count`** +: type: long + + +**`netflow.reverse_layer2_octet_delta_count`** +: type: long + + +**`netflow.reverse_layer2_octet_delta_sum_of_squares`** +: type: long + + +**`netflow.reverse_layer2_octet_total_count`** +: type: long + + +**`netflow.reverse_layer2_octet_total_sum_of_squares`** +: type: long + + +**`netflow.reverse_layer2_segment_id`** +: type: long + + +**`netflow.reverse_layer2packet_section_data`** +: type: keyword + + +**`netflow.reverse_layer2packet_section_offset`** +: type: integer + + +**`netflow.reverse_layer2packet_section_size`** +: type: integer + + +**`netflow.reverse_line_card_id`** +: type: long + + +**`netflow.reverse_lower_ci_limit`** +: type: double + + +**`netflow.reverse_max_export_seconds`** +: type: long + + +**`netflow.reverse_max_flow_end_microseconds`** +: type: long + + +**`netflow.reverse_max_flow_end_milliseconds`** +: type: long + + +**`netflow.reverse_max_flow_end_nanoseconds`** +: type: long + + +**`netflow.reverse_max_flow_end_seconds`** +: type: long + + +**`netflow.reverse_max_packet_size`** +: type: integer + + +**`netflow.reverse_maximum_ip_total_length`** +: type: long + + +**`netflow.reverse_maximum_layer2_total_length`** +: type: long + + +**`netflow.reverse_maximum_ttl`** +: type: short + + +**`netflow.reverse_message_md5_checksum`** +: type: keyword + + +**`netflow.reverse_message_scope`** +: type: short + + +**`netflow.reverse_metering_process_id`** +: type: long + + +**`netflow.reverse_metro_evc_id`** +: type: keyword + + +**`netflow.reverse_metro_evc_type`** +: type: short + + +**`netflow.reverse_min_export_seconds`** +: type: long + + +**`netflow.reverse_min_flow_start_microseconds`** +: type: long + + +**`netflow.reverse_min_flow_start_milliseconds`** +: type: long + + +**`netflow.reverse_min_flow_start_nanoseconds`** +: type: long + + +**`netflow.reverse_min_flow_start_seconds`** +: type: long + + +**`netflow.reverse_minimum_ip_total_length`** +: type: long + + +**`netflow.reverse_minimum_layer2_total_length`** +: type: long + + +**`netflow.reverse_minimum_ttl`** +: type: short + + +**`netflow.reverse_monitoring_interval_end_milli_seconds`** +: type: long + + +**`netflow.reverse_monitoring_interval_start_milli_seconds`** +: type: long + + +**`netflow.reverse_mpls_label_stack_depth`** +: type: long + + +**`netflow.reverse_mpls_label_stack_length`** +: type: long + + +**`netflow.reverse_mpls_label_stack_section`** +: type: keyword + + +**`netflow.reverse_mpls_label_stack_section10`** +: type: keyword + + +**`netflow.reverse_mpls_label_stack_section2`** +: type: keyword + + +**`netflow.reverse_mpls_label_stack_section3`** +: type: keyword + + +**`netflow.reverse_mpls_label_stack_section4`** +: type: keyword + + +**`netflow.reverse_mpls_label_stack_section5`** +: type: keyword + + +**`netflow.reverse_mpls_label_stack_section6`** +: type: keyword + + +**`netflow.reverse_mpls_label_stack_section7`** +: type: keyword + + 
+**`netflow.reverse_mpls_label_stack_section8`** +: type: keyword + + +**`netflow.reverse_mpls_label_stack_section9`** +: type: keyword + + +**`netflow.reverse_mpls_payload_length`** +: type: long + + +**`netflow.reverse_mpls_payload_packet_section`** +: type: keyword + + +**`netflow.reverse_mpls_top_label_exp`** +: type: short + + +**`netflow.reverse_mpls_top_label_ipv4_address`** +: type: ip + + +**`netflow.reverse_mpls_top_label_ipv6_address`** +: type: ip + + +**`netflow.reverse_mpls_top_label_prefix_length`** +: type: short + + +**`netflow.reverse_mpls_top_label_stack_section`** +: type: keyword + + +**`netflow.reverse_mpls_top_label_ttl`** +: type: short + + +**`netflow.reverse_mpls_top_label_type`** +: type: short + + +**`netflow.reverse_mpls_vpn_route_distinguisher`** +: type: keyword + + +**`netflow.reverse_multicast_replication_factor`** +: type: long + + +**`netflow.reverse_nat_event`** +: type: short + + +**`netflow.reverse_nat_originating_address_realm`** +: type: short + + +**`netflow.reverse_nat_pool_id`** +: type: long + + +**`netflow.reverse_nat_pool_name`** +: type: keyword + + +**`netflow.reverse_nat_type`** +: type: short + + +**`netflow.reverse_new_connection_delta_count`** +: type: long + + +**`netflow.reverse_next_header_ipv6`** +: type: short + + +**`netflow.reverse_non_empty_packet_count`** +: type: long + + +**`netflow.reverse_not_sent_layer2_octet_total_count`** +: type: long + + +**`netflow.reverse_observation_domain_name`** +: type: keyword + + +**`netflow.reverse_observation_point_id`** +: type: long + + +**`netflow.reverse_observation_point_type`** +: type: short + + +**`netflow.reverse_observation_time_microseconds`** +: type: long + + +**`netflow.reverse_observation_time_milliseconds`** +: type: long + + +**`netflow.reverse_observation_time_nanoseconds`** +: type: long + + +**`netflow.reverse_observation_time_seconds`** +: type: long + + +**`netflow.reverse_octet_delta_count`** +: type: long + + +**`netflow.reverse_octet_delta_sum_of_squares`** +: type: long + + +**`netflow.reverse_octet_total_count`** +: type: long + + +**`netflow.reverse_octet_total_sum_of_squares`** +: type: long + + +**`netflow.reverse_opaque_octets`** +: type: keyword + + +**`netflow.reverse_original_exporter_ipv4_address`** +: type: ip + + +**`netflow.reverse_original_exporter_ipv6_address`** +: type: ip + + +**`netflow.reverse_original_flows_completed`** +: type: long + + +**`netflow.reverse_original_flows_initiated`** +: type: long + + +**`netflow.reverse_original_flows_present`** +: type: long + + +**`netflow.reverse_original_observation_domain_id`** +: type: long + + +**`netflow.reverse_os_finger_print`** +: type: keyword + + +**`netflow.reverse_os_name`** +: type: keyword + + +**`netflow.reverse_os_version`** +: type: keyword + + +**`netflow.reverse_p2p_technology`** +: type: keyword + + +**`netflow.reverse_packet_delta_count`** +: type: long + + +**`netflow.reverse_packet_total_count`** +: type: long + + +**`netflow.reverse_payload`** +: type: keyword + + +**`netflow.reverse_payload_entropy`** +: type: short + + +**`netflow.reverse_payload_length_ipv6`** +: type: integer + + +**`netflow.reverse_port_id`** +: type: long + + +**`netflow.reverse_port_range_end`** +: type: integer + + +**`netflow.reverse_port_range_num_ports`** +: type: integer + + +**`netflow.reverse_port_range_start`** +: type: integer + + +**`netflow.reverse_port_range_step_size`** +: type: integer + + +**`netflow.reverse_post_destination_mac_address`** +: type: keyword + + 
+**`netflow.reverse_post_dot1q_customer_vlan_id`** +: type: integer + + +**`netflow.reverse_post_dot1q_vlan_id`** +: type: integer + + +**`netflow.reverse_post_ip_class_of_service`** +: type: short + + +**`netflow.reverse_post_ip_diff_serv_code_point`** +: type: short + + +**`netflow.reverse_post_ip_precedence`** +: type: short + + +**`netflow.reverse_post_layer2_octet_delta_count`** +: type: long + + +**`netflow.reverse_post_layer2_octet_total_count`** +: type: long + + +**`netflow.reverse_post_mcast_layer2_octet_delta_count`** +: type: long + + +**`netflow.reverse_post_mcast_layer2_octet_total_count`** +: type: long + + +**`netflow.reverse_post_mcast_octet_delta_count`** +: type: long + + +**`netflow.reverse_post_mcast_octet_total_count`** +: type: long + + +**`netflow.reverse_post_mcast_packet_delta_count`** +: type: long + + +**`netflow.reverse_post_mcast_packet_total_count`** +: type: long + + +**`netflow.reverse_post_mpls_top_label_exp`** +: type: short + + +**`netflow.reverse_post_napt_destination_transport_port`** +: type: integer + + +**`netflow.reverse_post_napt_source_transport_port`** +: type: integer + + +**`netflow.reverse_post_nat_destination_ipv4_address`** +: type: ip + + +**`netflow.reverse_post_nat_destination_ipv6_address`** +: type: ip + + +**`netflow.reverse_post_nat_source_ipv4_address`** +: type: ip + + +**`netflow.reverse_post_nat_source_ipv6_address`** +: type: ip + + +**`netflow.reverse_post_octet_delta_count`** +: type: long + + +**`netflow.reverse_post_octet_total_count`** +: type: long + + +**`netflow.reverse_post_packet_delta_count`** +: type: long + + +**`netflow.reverse_post_packet_total_count`** +: type: long + + +**`netflow.reverse_post_source_mac_address`** +: type: keyword + + +**`netflow.reverse_post_vlan_id`** +: type: integer + + +**`netflow.reverse_private_enterprise_number`** +: type: long + + +**`netflow.reverse_protocol_identifier`** +: type: short + + +**`netflow.reverse_pseudo_wire_control_word`** +: type: long + + +**`netflow.reverse_pseudo_wire_destination_ipv4_address`** +: type: ip + + +**`netflow.reverse_pseudo_wire_id`** +: type: long + + +**`netflow.reverse_pseudo_wire_type`** +: type: integer + + +**`netflow.reverse_relative_error`** +: type: double + + +**`netflow.reverse_responder_octets`** +: type: long + + +**`netflow.reverse_responder_packets`** +: type: long + + +**`netflow.reverse_rfc3550_jitter_microseconds`** +: type: long + + +**`netflow.reverse_rfc3550_jitter_milliseconds`** +: type: long + + +**`netflow.reverse_rfc3550_jitter_nanoseconds`** +: type: long + + +**`netflow.reverse_rtp_payload_type`** +: type: short + + +**`netflow.reverse_rtp_sequence_number`** +: type: integer + + +**`netflow.reverse_sampler_id`** +: type: short + + +**`netflow.reverse_sampler_mode`** +: type: short + + +**`netflow.reverse_sampler_name`** +: type: keyword + + +**`netflow.reverse_sampler_random_interval`** +: type: long + + +**`netflow.reverse_sampling_algorithm`** +: type: short + + +**`netflow.reverse_sampling_flow_interval`** +: type: long + + +**`netflow.reverse_sampling_flow_spacing`** +: type: long + + +**`netflow.reverse_sampling_interval`** +: type: long + + +**`netflow.reverse_sampling_packet_interval`** +: type: long + + +**`netflow.reverse_sampling_packet_space`** +: type: long + + +**`netflow.reverse_sampling_population`** +: type: long + + +**`netflow.reverse_sampling_probability`** +: type: double + + +**`netflow.reverse_sampling_size`** +: type: long + + +**`netflow.reverse_sampling_time_interval`** +: type: long + + 
+**`netflow.reverse_sampling_time_space`** +: type: long + + +**`netflow.reverse_second_packet_banner`** +: type: keyword + + +**`netflow.reverse_section_exported_octets`** +: type: integer + + +**`netflow.reverse_section_offset`** +: type: integer + + +**`netflow.reverse_selection_sequence_id`** +: type: long + + +**`netflow.reverse_selector_algorithm`** +: type: integer + + +**`netflow.reverse_selector_id`** +: type: long + + +**`netflow.reverse_selector_id_total_flows_observed`** +: type: long + + +**`netflow.reverse_selector_id_total_flows_selected`** +: type: long + + +**`netflow.reverse_selector_id_total_pkts_observed`** +: type: long + + +**`netflow.reverse_selector_id_total_pkts_selected`** +: type: long + + +**`netflow.reverse_selector_name`** +: type: keyword + + +**`netflow.reverse_session_scope`** +: type: short + + +**`netflow.reverse_small_packet_count`** +: type: long + + +**`netflow.reverse_source_ipv4_address`** +: type: ip + + +**`netflow.reverse_source_ipv4_prefix`** +: type: ip + + +**`netflow.reverse_source_ipv4_prefix_length`** +: type: short + + +**`netflow.reverse_source_ipv6_address`** +: type: ip + + +**`netflow.reverse_source_ipv6_prefix`** +: type: ip + + +**`netflow.reverse_source_ipv6_prefix_length`** +: type: short + + +**`netflow.reverse_source_mac_address`** +: type: keyword + + +**`netflow.reverse_source_transport_port`** +: type: integer + + +**`netflow.reverse_src_traffic_index`** +: type: long + + +**`netflow.reverse_sta_ipv4_address`** +: type: ip + + +**`netflow.reverse_sta_mac_address`** +: type: keyword + + +**`netflow.reverse_standard_deviation_interarrival_time`** +: type: long + + +**`netflow.reverse_standard_deviation_payload_length`** +: type: integer + + +**`netflow.reverse_system_init_time_milliseconds`** +: type: long + + +**`netflow.reverse_tcp_ack_total_count`** +: type: long + + +**`netflow.reverse_tcp_acknowledgement_number`** +: type: long + + +**`netflow.reverse_tcp_control_bits`** +: type: integer + + +**`netflow.reverse_tcp_destination_port`** +: type: integer + + +**`netflow.reverse_tcp_fin_total_count`** +: type: long + + +**`netflow.reverse_tcp_header_length`** +: type: short + + +**`netflow.reverse_tcp_options`** +: type: long + + +**`netflow.reverse_tcp_psh_total_count`** +: type: long + + +**`netflow.reverse_tcp_rst_total_count`** +: type: long + + +**`netflow.reverse_tcp_sequence_number`** +: type: long + + +**`netflow.reverse_tcp_source_port`** +: type: integer + + +**`netflow.reverse_tcp_syn_total_count`** +: type: long + + +**`netflow.reverse_tcp_urg_total_count`** +: type: long + + +**`netflow.reverse_tcp_urgent_pointer`** +: type: integer + + +**`netflow.reverse_tcp_window_scale`** +: type: integer + + +**`netflow.reverse_tcp_window_size`** +: type: integer + + +**`netflow.reverse_total_length_ipv4`** +: type: integer + + +**`netflow.reverse_transport_octet_delta_count`** +: type: long + + +**`netflow.reverse_transport_packet_delta_count`** +: type: long + + +**`netflow.reverse_tunnel_technology`** +: type: keyword + + +**`netflow.reverse_udp_destination_port`** +: type: integer + + +**`netflow.reverse_udp_message_length`** +: type: integer + + +**`netflow.reverse_udp_source_port`** +: type: integer + + +**`netflow.reverse_union_tcp_flags`** +: type: short + + +**`netflow.reverse_upper_ci_limit`** +: type: double + + +**`netflow.reverse_user_name`** +: type: keyword + + +**`netflow.reverse_value_distribution_method`** +: type: short + + +**`netflow.reverse_virtual_station_interface_id`** +: type: keyword + + 
+**`netflow.reverse_virtual_station_interface_name`** +: type: keyword + + +**`netflow.reverse_virtual_station_name`** +: type: keyword + + +**`netflow.reverse_virtual_station_uuid`** +: type: keyword + + +**`netflow.reverse_vlan_id`** +: type: integer + + +**`netflow.reverse_vr_fname`** +: type: keyword + + +**`netflow.reverse_wlan_channel_id`** +: type: short + + +**`netflow.reverse_wlan_ssid`** +: type: keyword + + +**`netflow.reverse_wtp_mac_address`** +: type: keyword + + +**`netflow.rfc3550_jitter_microseconds`** +: type: long + + +**`netflow.rfc3550_jitter_milliseconds`** +: type: long + + +**`netflow.rfc3550_jitter_nanoseconds`** +: type: long + + +**`netflow.rtp_payload_type`** +: type: short + + +**`netflow.rtp_sequence_number`** +: type: integer + + +**`netflow.sampler_id`** +: type: short + + +**`netflow.sampler_mode`** +: type: short + + +**`netflow.sampler_name`** +: type: keyword + + +**`netflow.sampler_random_interval`** +: type: long + + +**`netflow.sampling_algorithm`** +: type: short + + +**`netflow.sampling_flow_interval`** +: type: long + + +**`netflow.sampling_flow_spacing`** +: type: long + + +**`netflow.sampling_interval`** +: type: long + + +**`netflow.sampling_packet_interval`** +: type: long + + +**`netflow.sampling_packet_space`** +: type: long + + +**`netflow.sampling_population`** +: type: long + + +**`netflow.sampling_probability`** +: type: double + + +**`netflow.sampling_size`** +: type: long + + +**`netflow.sampling_time_interval`** +: type: long + + +**`netflow.sampling_time_space`** +: type: long + + +**`netflow.second_packet_banner`** +: type: keyword + + +**`netflow.section_exported_octets`** +: type: integer + + +**`netflow.section_offset`** +: type: integer + + +**`netflow.selection_sequence_id`** +: type: long + + +**`netflow.selector_algorithm`** +: type: integer + + +**`netflow.selector_id`** +: type: long + + +**`netflow.selector_id_total_flows_observed`** +: type: long + + +**`netflow.selector_id_total_flows_selected`** +: type: long + + +**`netflow.selector_id_total_pkts_observed`** +: type: long + + +**`netflow.selector_id_total_pkts_selected`** +: type: long + + +**`netflow.selector_name`** +: type: keyword + + +**`netflow.service_name`** +: type: keyword + + +**`netflow.session_scope`** +: type: short + + +**`netflow.silk_app_label`** +: type: integer + + +**`netflow.small_packet_count`** +: type: long + + +**`netflow.source_ipv4_address`** +: type: ip + + +**`netflow.source_ipv4_prefix`** +: type: ip + + +**`netflow.source_ipv4_prefix_length`** +: type: short + + +**`netflow.source_ipv6_address`** +: type: ip + + +**`netflow.source_ipv6_prefix`** +: type: ip + + +**`netflow.source_ipv6_prefix_length`** +: type: short + + +**`netflow.source_mac_address`** +: type: keyword + + +**`netflow.source_transport_port`** +: type: integer + + +**`netflow.source_transport_ports_limit`** +: type: integer + + +**`netflow.src_traffic_index`** +: type: long + + +**`netflow.ssl_cert_serial_number`** +: type: keyword + + +**`netflow.ssl_cert_signature`** +: type: keyword + + +**`netflow.ssl_cert_validity_not_after`** +: type: keyword + + +**`netflow.ssl_cert_validity_not_before`** +: type: keyword + + +**`netflow.ssl_cert_version`** +: type: short + + +**`netflow.ssl_certificate_hash`** +: type: keyword + + +**`netflow.ssl_cipher`** +: type: keyword + + +**`netflow.ssl_client_version`** +: type: short + + +**`netflow.ssl_compression_method`** +: type: short + + +**`netflow.ssl_object_type`** +: type: keyword + + +**`netflow.ssl_object_value`** +: type: 
keyword + + +**`netflow.ssl_public_key_algorithm`** +: type: keyword + + +**`netflow.ssl_public_key_length`** +: type: keyword + + +**`netflow.ssl_server_cipher`** +: type: long + + +**`netflow.ssl_server_name`** +: type: keyword + + +**`netflow.sta_ipv4_address`** +: type: ip + + +**`netflow.sta_mac_address`** +: type: keyword + + +**`netflow.standard_deviation_interarrival_time`** +: type: long + + +**`netflow.standard_deviation_payload_length`** +: type: short + + +**`netflow.system_init_time_milliseconds`** +: type: date + + +**`netflow.tcp_ack_total_count`** +: type: long + + +**`netflow.tcp_acknowledgement_number`** +: type: long + + +**`netflow.tcp_control_bits`** +: type: integer + + +**`netflow.tcp_destination_port`** +: type: integer + + +**`netflow.tcp_fin_total_count`** +: type: long + + +**`netflow.tcp_header_length`** +: type: short + + +**`netflow.tcp_options`** +: type: long + + +**`netflow.tcp_psh_total_count`** +: type: long + + +**`netflow.tcp_rst_total_count`** +: type: long + + +**`netflow.tcp_sequence_number`** +: type: long + + +**`netflow.tcp_source_port`** +: type: integer + + +**`netflow.tcp_syn_total_count`** +: type: long + + +**`netflow.tcp_urg_total_count`** +: type: long + + +**`netflow.tcp_urgent_pointer`** +: type: integer + + +**`netflow.tcp_window_scale`** +: type: integer + + +**`netflow.tcp_window_size`** +: type: integer + + +**`netflow.template_id`** +: type: integer + + +**`netflow.tftp_filename`** +: type: keyword + + +**`netflow.tftp_mode`** +: type: keyword + + +**`netflow.timestamp`** +: type: long + + +**`netflow.timestamp_absolute_monitoring-interval`** +: type: long + + +**`netflow.total_length_ipv4`** +: type: integer + + +**`netflow.traffic_type`** +: type: short + + +**`netflow.transport_octet_delta_count`** +: type: long + + +**`netflow.transport_packet_delta_count`** +: type: long + + +**`netflow.tunnel_technology`** +: type: keyword + + +**`netflow.udp_destination_port`** +: type: integer + + +**`netflow.udp_message_length`** +: type: integer + + +**`netflow.udp_source_port`** +: type: integer + + +**`netflow.union_tcp_flags`** +: type: short + + +**`netflow.upper_ci_limit`** +: type: double + + +**`netflow.user_name`** +: type: keyword + + +**`netflow.username`** +: type: keyword + + +**`netflow.value_distribution_method`** +: type: short + + +**`netflow.viptela_vpn_id`** +: type: long + + +**`netflow.virtual_station_interface_id`** +: type: short + + +**`netflow.virtual_station_interface_name`** +: type: keyword + + +**`netflow.virtual_station_name`** +: type: keyword + + +**`netflow.virtual_station_uuid`** +: type: short + + +**`netflow.vlan_id`** +: type: integer + + +**`netflow.vmware_egress_interface_attr`** +: type: integer + + +**`netflow.vmware_ingress_interface_attr`** +: type: integer + + +**`netflow.vmware_tenant_dest_ipv4`** +: type: ip + + +**`netflow.vmware_tenant_dest_ipv6`** +: type: ip + + +**`netflow.vmware_tenant_dest_port`** +: type: integer + + +**`netflow.vmware_tenant_protocol`** +: type: short + + +**`netflow.vmware_tenant_source_ipv4`** +: type: ip + + +**`netflow.vmware_tenant_source_ipv6`** +: type: ip + + +**`netflow.vmware_tenant_source_port`** +: type: integer + + +**`netflow.vmware_vxlan_export_role`** +: type: short + + +**`netflow.vpn_identifier`** +: type: short + + +**`netflow.vr_fname`** +: type: keyword + + +**`netflow.waasoptimization_segment`** +: type: short + + +**`netflow.wlan_channel_id`** +: type: short + + +**`netflow.wlan_ssid`** +: type: keyword + + +**`netflow.wtp_mac_address`** +: type: 
keyword + + +**`netflow.xlate_destination_address_ip_v4`** +: type: ip + + +**`netflow.xlate_destination_port`** +: type: integer + + +**`netflow.xlate_source_address_ip_v4`** +: type: ip + + +**`netflow.xlate_source_port`** +: type: integer + + diff --git a/docs/reference/filebeat/exported-fields-nginx.md b/docs/reference/filebeat/exported-fields-nginx.md new file mode 100644 index 000000000000..02f15d819bcb --- /dev/null +++ b/docs/reference/filebeat/exported-fields-nginx.md @@ -0,0 +1,391 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-nginx.html +--- + +# Nginx fields [exported-fields-nginx] + +Module for parsing the Nginx log files. + + +## nginx [_nginx] + +Fields from the Nginx log files. + + +## access [_access_3] + +Contains fields for the Nginx access logs. + +**`nginx.access.remote_ip_list`** +: An array of remote IP addresses. It is a list because it is common to include, besides the client IP address, IP addresses from headers like `X-Forwarded-For`. Real source IP is restored to `source.ip`. + +type: array + + +**`nginx.access.body_sent.bytes`** +: type: alias + +alias to: http.response.body.bytes + + +**`nginx.access.user_name`** +: type: alias + +alias to: user.name + + +**`nginx.access.method`** +: type: alias + +alias to: http.request.method + + +**`nginx.access.url`** +: type: alias + +alias to: url.original + + +**`nginx.access.http_version`** +: type: alias + +alias to: http.version + + +**`nginx.access.response_code`** +: type: alias + +alias to: http.response.status_code + + +**`nginx.access.referrer`** +: type: alias + +alias to: http.request.referrer + + +**`nginx.access.agent`** +: type: alias + +alias to: user_agent.original + + +**`nginx.access.user_agent.device`** +: type: alias + +alias to: user_agent.device.name + + +**`nginx.access.user_agent.name`** +: type: alias + +alias to: user_agent.name + + +**`nginx.access.user_agent.os`** +: type: alias + +alias to: user_agent.os.full_name + + +**`nginx.access.user_agent.os_name`** +: type: alias + +alias to: user_agent.os.name + + +**`nginx.access.user_agent.original`** +: type: alias + +alias to: user_agent.original + + +**`nginx.access.geoip.continent_name`** +: type: alias + +alias to: source.geo.continent_name + + +**`nginx.access.geoip.country_iso_code`** +: type: alias + +alias to: source.geo.country_iso_code + + +**`nginx.access.geoip.location`** +: type: alias + +alias to: source.geo.location + + +**`nginx.access.geoip.region_name`** +: type: alias + +alias to: source.geo.region_name + + +**`nginx.access.geoip.city_name`** +: type: alias + +alias to: source.geo.city_name + + +**`nginx.access.geoip.region_iso_code`** +: type: alias + +alias to: source.geo.region_iso_code + + + +## error [_error_5] + +Contains fields for the Nginx error logs. + +**`nginx.error.connection_id`** +: Connection identifier. + +type: long + + +**`nginx.error.level`** +: type: alias + +alias to: log.level + + +**`nginx.error.pid`** +: type: alias + +alias to: process.pid + + +**`nginx.error.tid`** +: type: alias + +alias to: process.thread.id + + +**`nginx.error.message`** +: type: alias + +alias to: message + + + +## ingress_controller [_ingress_controller] + +Contains fields for the Ingress Nginx controller access logs. + +**`nginx.ingress_controller.remote_ip_list`** +: An array of remote IP addresses. It is a list because it is common to include, besides the client IP address, IP addresses from headers like `X-Forwarded-For`. Real source IP is restored to `source.ip`. 
+ +type: array + + +**`nginx.ingress_controller.upstream_address_list`** +: An array of the upstream addresses. It is a list because it is common that several upstream servers were contacted during request processing. + +type: keyword + + +**`nginx.ingress_controller.upstream.response.length_list`** +: An array of upstream response lengths. It is a list because it is common that several upstream servers were contacted during request processing. + +type: keyword + + +**`nginx.ingress_controller.upstream.response.time_list`** +: An array of upstream response durations. It is a list because it is common that several upstream servers were contacted during request processing. + +type: keyword + + +**`nginx.ingress_controller.upstream.response.status_code_list`** +: An array of upstream response status codes. It is a list because it is common that several upstream servers were contacted during request processing. + +type: keyword + + +**`nginx.ingress_controller.http.request.length`** +: The request length (including request line, header, and request body). + +type: long + +format: bytes + + +**`nginx.ingress_controller.http.request.time`** +: Time elapsed since the first bytes were read from the client. + +type: double + +format: duration + + +**`nginx.ingress_controller.upstream.name`** +: The name of the upstream. + +type: keyword + + +**`nginx.ingress_controller.upstream.alternative_name`** +: The name of the alternative upstream. + +type: keyword + + +**`nginx.ingress_controller.upstream.response.length`** +: The length of the response obtained from the upstream server. If several servers were contacted during request processing, the sum of the multiple response lengths is stored. + +type: long + +format: bytes + + +**`nginx.ingress_controller.upstream.response.time`** +: The time spent on receiving the response from the upstream, as seconds with millisecond resolution. If several servers were contacted during request processing, the sum of the multiple response times is stored. + +type: double + +format: duration + + +**`nginx.ingress_controller.upstream.response.status_code`** +: The status code of the response obtained from the upstream server. If several servers were contacted during request processing, only the status code of the response from the last one is stored in this field. + +type: long + + +**`nginx.ingress_controller.upstream.ip`** +: The IP address of the upstream server. If several servers were contacted during request processing, only the last one is stored in this field. + +type: ip + + +**`nginx.ingress_controller.upstream.port`** +: The port of the upstream server. If several servers were contacted during request processing, only the last one is stored in this field. 
+ +type: long + + +**`nginx.ingress_controller.http.request.id`** +: The randomly generated ID of the request + +type: keyword + + +**`nginx.ingress_controller.body_sent.bytes`** +: type: alias + +alias to: http.response.body.bytes + + +**`nginx.ingress_controller.user_name`** +: type: alias + +alias to: user.name + + +**`nginx.ingress_controller.method`** +: type: alias + +alias to: http.request.method + + +**`nginx.ingress_controller.url`** +: type: alias + +alias to: url.original + + +**`nginx.ingress_controller.http_version`** +: type: alias + +alias to: http.version + + +**`nginx.ingress_controller.response_code`** +: type: alias + +alias to: http.response.status_code + + +**`nginx.ingress_controller.referrer`** +: type: alias + +alias to: http.request.referrer + + +**`nginx.ingress_controller.agent`** +: type: alias + +alias to: user_agent.original + + +**`nginx.ingress_controller.user_agent.device`** +: type: alias + +alias to: user_agent.device.name + + +**`nginx.ingress_controller.user_agent.name`** +: type: alias + +alias to: user_agent.name + + +**`nginx.ingress_controller.user_agent.os`** +: type: alias + +alias to: user_agent.os.full_name + + +**`nginx.ingress_controller.user_agent.os_name`** +: type: alias + +alias to: user_agent.os.name + + +**`nginx.ingress_controller.user_agent.original`** +: type: alias + +alias to: user_agent.original + + +**`nginx.ingress_controller.geoip.continent_name`** +: type: alias + +alias to: source.geo.continent_name + + +**`nginx.ingress_controller.geoip.country_iso_code`** +: type: alias + +alias to: source.geo.country_iso_code + + +**`nginx.ingress_controller.geoip.location`** +: type: alias + +alias to: source.geo.location + + +**`nginx.ingress_controller.geoip.region_name`** +: type: alias + +alias to: source.geo.region_name + + +**`nginx.ingress_controller.geoip.city_name`** +: type: alias + +alias to: source.geo.city_name + + +**`nginx.ingress_controller.geoip.region_iso_code`** +: type: alias + +alias to: source.geo.region_iso_code + + diff --git a/docs/reference/filebeat/exported-fields-o365.md b/docs/reference/filebeat/exported-fields-o365.md new file mode 100644 index 000000000000..68ece7e933a0 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-o365.md @@ -0,0 +1,474 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-o365.html +--- + +# Office 365 fields [exported-fields-o365] + +Module for handling logs from Office 365. + + +## o365.audit [_o365_audit] + +Fields from Office 365 Management API audit logs. 
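+These fields are produced by the module's `audit` fileset, which collects events from the Office 365 Management Activity API. As a minimal, illustrative sketch only (the variable names and placeholder values are assumptions to check against the module reference for your version), enabling the fileset in `modules.d/o365.yml` could look like this:
+
+```yaml
+# Hypothetical modules.d/o365.yml sketch; adapt the IDs and variable
+# names to your environment and module version.
+- module: o365
+  audit:
+    enabled: true
+    # Azure AD application (client) ID used to authenticate (assumption)
+    var.application_id: "<client-id>"
+    # Tenants whose audit logs should be collected (assumption)
+    var.tenants:
+      - id: "<tenant-id>"
+        name: "mytenant.onmicrosoft.com"
+```
+
+Events collected this way carry the `o365.audit.*` fields listed below.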
+ +**`o365.audit.AADGroupId`** +: type: keyword + + +**`o365.audit.Activity`** +: type: keyword + + +**`o365.audit.Actor`** +: type: array + + +**`o365.audit.ActorContextId`** +: type: keyword + + +**`o365.audit.ActorIpAddress`** +: type: keyword + + +**`o365.audit.ActorUserId`** +: type: keyword + + +**`o365.audit.ActorYammerUserId`** +: type: keyword + + +**`o365.audit.AlertEntityId`** +: type: keyword + + +**`o365.audit.AlertId`** +: type: keyword + + +**`o365.audit.AlertLinks`** +: type: array + + +**`o365.audit.AlertType`** +: type: keyword + + +**`o365.audit.AppId`** +: type: keyword + + +**`o365.audit.ApplicationDisplayName`** +: type: keyword + + +**`o365.audit.ApplicationId`** +: type: keyword + + +**`o365.audit.AzureActiveDirectoryEventType`** +: type: keyword + + +**`o365.audit.ExchangeMetaData.*`** +: type: object + + +**`o365.audit.Category`** +: type: keyword + + +**`o365.audit.ClientAppId`** +: type: keyword + + +**`o365.audit.ClientInfoString`** +: type: keyword + + +**`o365.audit.ClientIP`** +: type: keyword + + +**`o365.audit.ClientIPAddress`** +: type: keyword + + +**`o365.audit.Comments`** +: type: text + + +**`o365.audit.CommunicationType`** +: type: keyword + + +**`o365.audit.CorrelationId`** +: type: keyword + + +**`o365.audit.CreationTime`** +: type: keyword + + +**`o365.audit.CustomUniqueId`** +: type: keyword + + +**`o365.audit.Data`** +: type: keyword + + +**`o365.audit.DataType`** +: type: keyword + + +**`o365.audit.DoNotDistributeEvent`** +: type: boolean + + +**`o365.audit.EntityType`** +: type: keyword + + +**`o365.audit.ErrorNumber`** +: type: keyword + + +**`o365.audit.EventData`** +: type: keyword + + +**`o365.audit.EventSource`** +: type: keyword + + +**`o365.audit.ExceptionInfo.*`** +: type: object + + +**`o365.audit.Experience`** +: type: keyword + + +**`o365.audit.ExtendedProperties.*`** +: type: object + + +**`o365.audit.ExternalAccess`** +: type: keyword + + +**`o365.audit.FromApp`** +: type: boolean + + +**`o365.audit.GroupName`** +: type: keyword + + +**`o365.audit.Id`** +: type: keyword + + +**`o365.audit.ImplicitShare`** +: type: keyword + + +**`o365.audit.IncidentId`** +: type: keyword + + +**`o365.audit.InternalLogonType`** +: type: keyword + + +**`o365.audit.InterSystemsId`** +: type: keyword + + +**`o365.audit.IntraSystemId`** +: type: keyword + + +**`o365.audit.IsDocLib`** +: type: boolean + + +**`o365.audit.Item.*`** +: type: object + + +**`o365.audit.Item.*.*`** +: type: object + + +**`o365.audit.ItemCount`** +: type: long + + +**`o365.audit.ItemName`** +: type: keyword + + +**`o365.audit.ItemType`** +: type: keyword + + +**`o365.audit.ListBaseTemplateType`** +: type: keyword + + +**`o365.audit.ListBaseType`** +: type: keyword + + +**`o365.audit.ListColor`** +: type: keyword + + +**`o365.audit.ListIcon`** +: type: keyword + + +**`o365.audit.ListId`** +: type: keyword + + +**`o365.audit.ListTitle`** +: type: keyword + + +**`o365.audit.ListItemUniqueId`** +: type: keyword + + +**`o365.audit.LogonError`** +: type: keyword + + +**`o365.audit.LogonType`** +: type: keyword + + +**`o365.audit.LogonUserSid`** +: type: keyword + + +**`o365.audit.MailboxGuid`** +: type: keyword + + +**`o365.audit.MailboxOwnerMasterAccountSid`** +: type: keyword + + +**`o365.audit.MailboxOwnerSid`** +: type: keyword + + +**`o365.audit.MailboxOwnerUPN`** +: type: keyword + + +**`o365.audit.Members`** +: type: array + + +**`o365.audit.Members.*`** +: type: object + + +**`o365.audit.ModifiedProperties.*.*`** +: type: object + + +**`o365.audit.Name`** +: type: keyword + 
+ +**`o365.audit.ObjectId`** +: type: keyword + + +**`o365.audit.ObjectDisplayName`** +: type: keyword + + +**`o365.audit.ObjectType`** +: type: keyword + + +**`o365.audit.Operation`** +: type: keyword + + +**`o365.audit.OperationId`** +: type: keyword + + +**`o365.audit.OperationProperties`** +: type: object + + +**`o365.audit.OrganizationId`** +: type: keyword + + +**`o365.audit.OrganizationName`** +: type: keyword + + +**`o365.audit.OriginatingServer`** +: type: keyword + + +**`o365.audit.Parameters.*`** +: type: object + + +**`o365.audit.PolicyDetails`** +: type: array + + +**`o365.audit.PolicyId`** +: type: keyword + + +**`o365.audit.RecordType`** +: type: keyword + + +**`o365.audit.RequestId`** +: type: keyword + + +**`o365.audit.ResultStatus`** +: type: keyword + + +**`o365.audit.SensitiveInfoDetectionIsIncluded`** +: type: keyword + + +**`o365.audit.SharePointMetaData.*`** +: type: object + + +**`o365.audit.SessionId`** +: type: keyword + + +**`o365.audit.Severity`** +: type: keyword + + +**`o365.audit.Site`** +: type: keyword + + +**`o365.audit.SiteUrl`** +: type: keyword + + +**`o365.audit.Source`** +: type: keyword + + +**`o365.audit.SourceFileExtension`** +: type: keyword + + +**`o365.audit.SourceFileName`** +: type: keyword + + +**`o365.audit.SourceRelativeUrl`** +: type: keyword + + +**`o365.audit.Status`** +: type: keyword + + +**`o365.audit.SupportTicketId`** +: type: keyword + + +**`o365.audit.Target`** +: type: array + + +**`o365.audit.TargetContextId`** +: type: keyword + + +**`o365.audit.TargetUserOrGroupName`** +: type: keyword + + +**`o365.audit.TargetUserOrGroupType`** +: type: keyword + + +**`o365.audit.TeamName`** +: type: keyword + + +**`o365.audit.TeamGuid`** +: type: keyword + + +**`o365.audit.TemplateTypeId`** +: type: keyword + + +**`o365.audit.Timestamp`** +: type: keyword + + +**`o365.audit.UniqueSharingId`** +: type: keyword + + +**`o365.audit.UserAgent`** +: type: keyword + + +**`o365.audit.UserId`** +: type: keyword + + +**`o365.audit.UserKey`** +: type: keyword + + +**`o365.audit.UserType`** +: type: keyword + + +**`o365.audit.Version`** +: type: keyword + + +**`o365.audit.WebId`** +: type: keyword + + +**`o365.audit.Workload`** +: type: keyword + + +**`o365.audit.WorkspaceId`** +: type: keyword + + +**`o365.audit.WorkspaceName`** +: type: keyword + + +**`o365.audit.YammerNetworkId`** +: type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-okta.md b/docs/reference/filebeat/exported-fields-okta.md new file mode 100644 index 000000000000..7090ece760ff --- /dev/null +++ b/docs/reference/filebeat/exported-fields-okta.md @@ -0,0 +1,415 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-okta.html +--- + +# Okta fields [exported-fields-okta] + +Module for handling system logs from Okta. + + +## okta [_okta] + +Fields from Okta. + +**`okta.uuid`** +: The unique identifier of the Okta LogEvent. + +type: keyword + + +**`okta.event_type`** +: The type of the LogEvent. + +type: keyword + + +**`okta.version`** +: The version of the LogEvent. + +type: keyword + + +**`okta.severity`** +: The severity of the LogEvent. Must be one of DEBUG, INFO, WARN, or ERROR. + +type: keyword + + +**`okta.display_message`** +: The display message of the LogEvent. + +type: keyword + + + +## actor [_actor] + +Fields that let you store information of the actor for the LogEvent. + +**`okta.actor.id`** +: Identifier of the actor. + +type: keyword + + +**`okta.actor.type`** +: Type of the actor. 
+ +type: keyword + + +**`okta.actor.alternate_id`** +: Alternate identifier of the actor. + +type: keyword + + +**`okta.actor.display_name`** +: Display name of the actor. + +type: keyword + + + +## client [_client_4] + +Fields that let you store information about the client of the actor. + +**`okta.client.ip`** +: The IP address of the client. + +type: ip + + + +## user_agent [_user_agent_2] + +Fields about the user agent information of the client. + +**`okta.client.user_agent.raw_user_agent`** +: The raw information of the user agent. + +type: keyword + + +**`okta.client.user_agent.os`** +: The OS information. + +type: keyword + + +**`okta.client.user_agent.browser`** +: The browser information of the client. + +type: keyword + + +**`okta.client.zone`** +: The zone information of the client. + +type: keyword + + +**`okta.client.device`** +: The information of the client device. + +type: keyword + + +**`okta.client.id`** +: The identifier of the client. + +type: keyword + + + +## outcome [_outcome] + +Fields that let you store information about the outcome. + +**`okta.outcome.reason`** +: The reason for the outcome. + +type: keyword + + +**`okta.outcome.result`** +: The result of the outcome. Must be one of: SUCCESS, FAILURE, SKIPPED, ALLOW, DENY, CHALLENGE, UNKNOWN. + +type: keyword + + +**`okta.target`** +: The list of targets. + +type: flattened + + + +## transaction [_transaction] + +Fields that let you store information about the related transaction. + +**`okta.transaction.id`** +: Identifier of the transaction. + +type: keyword + + +**`okta.transaction.type`** +: The type of transaction. Must be one of "WEB", "JOB". + +type: keyword + + + +## debug_context [_debug_context] + +Fields that let you store information about the debug context. + + +## debug_data [_debug_data] + +The debug data. + +**`okta.debug_context.debug_data.device_fingerprint`** +: The fingerprint of the device. + +type: keyword + + +**`okta.debug_context.debug_data.factor`** +: The factor used for authentication. + +type: keyword + + +**`okta.debug_context.debug_data.request_id`** +: The identifier of the request. + +type: keyword + + +**`okta.debug_context.debug_data.request_uri`** +: The request URI. + +type: keyword + + +**`okta.debug_context.debug_data.threat_suspected`** +: Threat suspected. + +type: keyword + + +**`okta.debug_context.debug_data.risk_behaviors`** +: The set of behaviors that contribute to a risk assessment. + +type: keyword + + +**`okta.debug_context.debug_data.risk_level`** +: The risk level assigned to the sign-in attempt. + +type: keyword + + +**`okta.debug_context.debug_data.risk_reasons`** +: The reasons for the risk. + +type: keyword + + +**`okta.debug_context.debug_data.url`** +: The URL. + +type: keyword + + +**`okta.debug_context.debug_data.flattened`** +: The complete debug_data object. + +type: flattened + + + +## suspicious_activity [_suspicious_activity] + +The suspicious activity fields from the debug data. + +**`okta.debug_context.debug_data.suspicious_activity.browser`** +: The browser used. + +type: keyword + + +**`okta.debug_context.debug_data.suspicious_activity.event_city`** +: The city where the suspicious activity took place. + +type: keyword + + +**`okta.debug_context.debug_data.suspicious_activity.event_country`** +: The country where the suspicious activity took place. + +type: keyword + + +**`okta.debug_context.debug_data.suspicious_activity.event_id`** +: The event ID. 
+ +type: keyword + + +**`okta.debug_context.debug_data.suspicious_activity.event_ip`** +: The IP of the suspicious event. + +type: ip + + +**`okta.debug_context.debug_data.suspicious_activity.event_latitude`** +: The latitude where the suspicious activity took place. + +type: float + + +**`okta.debug_context.debug_data.suspicious_activity.event_longitude`** +: The longitude where the suspicious activity took place. + +type: float + + +**`okta.debug_context.debug_data.suspicious_activity.event_state`** +: The state where the suspicious activity took place. + +type: keyword + + +**`okta.debug_context.debug_data.suspicious_activity.event_transaction_id`** +: The event transaction ID. + +type: keyword + + +**`okta.debug_context.debug_data.suspicious_activity.event_type`** +: The event type. + +type: keyword + + +**`okta.debug_context.debug_data.suspicious_activity.os`** +: The OS of the system from where the suspicious activity occurred. + +type: keyword + + +**`okta.debug_context.debug_data.suspicious_activity.timestamp`** +: The timestamp of when the activity occurred. + +type: date + + + +## authentication_context [_authentication_context] + +Fields that let you store information about the authentication context. + +**`okta.authentication_context.authentication_provider`** +: The information about the authentication provider. Must be one of OKTA_AUTHENTICATION_PROVIDER, ACTIVE_DIRECTORY, LDAP, FEDERATION, SOCIAL, FACTOR_PROVIDER. + +type: keyword + + +**`okta.authentication_context.authentication_step`** +: The authentication step. + +type: integer + + +**`okta.authentication_context.credential_provider`** +: The information about the credential provider. Must be one of OKTA_CREDENTIAL_PROVIDER, RSA, SYMANTEC, GOOGLE, DUO, YUBIKEY. + +type: keyword + + +**`okta.authentication_context.credential_type`** +: The information about the credential type. Must be one of OTP, SMS, PASSWORD, ASSERTION, IWA, EMAIL, OAUTH2, JWT, CERTIFICATE, PRE_SHARED_SYMMETRIC_KEY, OKTA_CLIENT_SESSION, DEVICE_UDID. + +type: keyword + + +**`okta.authentication_context.issuer`** +: The information about the issuer. + +type: array + + +**`okta.authentication_context.external_session_id`** +: The session identifier of the external session, if any. + +type: keyword + + +**`okta.authentication_context.interface`** +: The interface used, for example Outlook, Office365, or wsTrust. + +type: keyword + + + +## security_context [_security_context] + +Fields that let you store information about the security context. + + +## as [_as_2] + +The autonomous system. + +**`okta.security_context.as.number`** +: The AS number. + +type: integer + + + +## organization [_organization_2] + +The organization that owns the AS number. + +**`okta.security_context.as.organization.name`** +: The organization name. + +type: keyword + + +**`okta.security_context.isp`** +: The Internet Service Provider. + +type: keyword + + +**`okta.security_context.domain`** +: The domain name. + +type: keyword + + +**`okta.security_context.is_proxy`** +: Whether it is a proxy or not. + +type: boolean + + + +## request [_request_3] + +Fields that let you store information about the request, in the form of a list of ip_chain objects. + +**`okta.request.ip_chain`** +: List of ip_chain objects. 
+ +type: flattened + + diff --git a/docs/reference/filebeat/exported-fields-oracle.md b/docs/reference/filebeat/exported-fields-oracle.md new file mode 100644 index 000000000000..75be1b689d89 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-oracle.md @@ -0,0 +1,175 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-oracle.html +--- + +# Oracle fields [exported-fields-oracle] + +Oracle Module + + +## oracle [_oracle] + +Fields from Oracle logs. + + +## database_audit [_database_audit] + +Module for parsing Oracle Database audit logs + +**`oracle.database_audit.priv_used`** +: System privilege used to execute the action. + +type: integer + + +**`oracle.database_audit.logoff_pread`** +: Physical reads for the session. + +type: integer + + +**`oracle.database_audit.logoff_lread`** +: Logical reads for the session. + +type: integer + + +**`oracle.database_audit.logoff_lwrite`** +: Logical writes for the session. + +type: integer + + +**`oracle.database_audit.logoff_dead`** +: Deadlocks detected during the session. + +type: integer + + +**`oracle.database_audit.sessioncpu`** +: Amount of CPU time used by each Oracle session. + +type: integer + + +**`oracle.database_audit.returncode`** +: Oracle error code generated by the action. + +type: integer + + +**`oracle.database_audit.statement`** +: nth statement in the user session. + +type: integer + + +**`oracle.database_audit.userid`** +: Name of the user whose actions were audited. + +type: keyword + + +**`oracle.database_audit.entryid`** +: Numeric ID for each audit trail entry in the session. The entry ID is an index of a session’s audit entries that starts at 1 and increases to the number of entries that are written. + +type: integer + + +**`oracle.database_audit.comment_text`** +: Text comment on the audit trail entry, providing more information about the statement audited. + +type: text + + +**`oracle.database_audit.os_userid`** +: Operating system login username of the user whose actions were audited. + +type: keyword + + +**`oracle.database_audit.terminal`** +: Identifier of the user’s terminal. + +type: text + + +**`oracle.database_audit.status`** +: Database Audit Status. + +type: keyword + + +**`oracle.database_audit.session_id`** +: Indicates the audit session ID number. + +type: keyword + + +**`oracle.database_audit.client.terminal`** +: If available, the client terminal type, for example "pty". + +type: keyword + + +**`oracle.database_audit.client.address`** +: The IP Address or Domain used by the client. + +type: keyword + + +**`oracle.database_audit.client.user`** +: The user running the client or connection to the database. + +type: keyword + + +**`oracle.database_audit.database.user`** +: The database user used to authenticate. + +type: keyword + + +**`oracle.database_audit.privilege`** +: The privilege group related to the database user. + +type: keyword + + +**`oracle.database_audit.entry.id`** +: Indicates the current audit entry number, assigned to each audit trail record. The audit entry.id sequence number is shared between fine-grained audit records and regular audit records. + +type: keyword + + +**`oracle.database_audit.database.host`** +: Client host machine name. + +type: keyword + + +**`oracle.database_audit.action`** +: The action performed during the audit event. This could for example be the raw query. + +type: keyword + + +**`oracle.database_audit.action_number`** +: Action is a numeric value representing the action the user performed. 
The corresponding name of the action type is in the AUDIT_ACTIONS table. For example, action 100 refers to LOGON. + +type: keyword + + +**`oracle.database_audit.database.id`** +: Database identifier calculated when the database is created. It corresponds to the DBID column of the V$DATABASE data dictionary view. + +type: keyword + + +**`oracle.database_audit.length`** +: Refers to the total number of bytes used in this audit record. This number includes the trailing newline bytes (\n), if any, at the end of the audit record. + +type: long + + diff --git a/docs/reference/filebeat/exported-fields-osquery.md b/docs/reference/filebeat/exported-fields-osquery.md new file mode 100644 index 000000000000..0c70abd6d254 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-osquery.md @@ -0,0 +1,47 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-osquery.html +--- + +# Osquery fields [exported-fields-osquery] + +Fields exported by the `osquery` module + + +## osquery [_osquery] + + +## result [_result] + +Common fields exported by the result metricset. + +**`osquery.result.name`** +: The name of the query that generated this event. + +type: keyword + + +**`osquery.result.action`** +: For incremental data, marks whether the entry was added or removed. It can be one of "added", "removed", or "snapshot". + +type: keyword + + +**`osquery.result.host_identifier`** +: The identifier for the host on which the osquery agent is running. Normally the hostname. + +type: keyword + + +**`osquery.result.unix_time`** +: Unix timestamp of the event, in seconds since the epoch. Used for computing the `@timestamp` column. + +type: long + + +**`osquery.result.calendar_time`** +: String representation of the collection time, as formatted by osquery. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-panw.md b/docs/reference/filebeat/exported-fields-panw.md new file mode 100644 index 000000000000..b94cf8086a92 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-panw.md @@ -0,0 +1,401 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-panw.html +--- + +# panw fields [exported-fields-panw] + +Module for Palo Alto Networks (PAN-OS) + + +## panw [_panw] + +Fields from the panw module. + + +## panos [_panos] + +Fields for the Palo Alto Networks PAN-OS logs. + +**`panw.panos.ruleset`** +: Name of the rule that matched this session. + +type: keyword + + + +## source [_source_3] + +Fields to extend the top-level source object. + +**`panw.panos.source.zone`** +: Source zone for this session. + +type: keyword + + +**`panw.panos.source.interface`** +: Source interface for this session. + +type: keyword + + + +## nat [_nat] + +Post-NAT source address, if source NAT is performed. + +**`panw.panos.source.nat.ip`** +: Post-NAT source IP. + +type: ip + + +**`panw.panos.source.nat.port`** +: Post-NAT source port. + +type: long + + + +## destination [_destination_3] + +Fields to extend the top-level destination object. + +**`panw.panos.destination.zone`** +: Destination zone for this session. + +type: keyword + + +**`panw.panos.destination.interface`** +: Destination interface for this session. + +type: keyword + + + +## nat [_nat_2] + +Post-NAT destination address, if destination NAT is performed. + +**`panw.panos.destination.nat.ip`** +: Post-NAT destination IP. + +type: ip + + +**`panw.panos.destination.nat.port`** +: Post-NAT destination port. 
+ +type: long + + +**`panw.panos.endreason`** +: The reason a session terminated. + +type: keyword + + + +## network [_network_2] + +Fields to extend the top-level network object. + +**`panw.panos.network.pcap_id`** +: Packet capture ID for a threat. + +type: keyword + + +**`panw.panos.network.nat.community_id`** +: Community ID flow-hash for the NAT 5-tuple. + +type: keyword + + + +## file [_file_3] + +Fields to extend the top-level file object. + +**`panw.panos.file.hash`** +: Binary hash for a threat file sent to be analyzed by the WildFire service. + +type: keyword + + + +## url [_url_4] + +Fields to extend the top-level url object. + +**`panw.panos.url.category`** +: For threat URLs, it’s the URL category. For WildFire, it’s the verdict on the file and is either *malicious*, *grayware*, or *benign*. + +type: keyword + + +**`panw.panos.flow_id`** +: Internal numeric identifier for each session. + +type: keyword + + +**`panw.panos.sequence_number`** +: Log entry identifier that is incremented sequentially. Unique for each log type. + +type: long + + +**`panw.panos.threat.resource`** +: URL or file name for a threat. + +type: keyword + + +**`panw.panos.threat.id`** +: Palo Alto Networks identifier for the threat. + +type: keyword + + +**`panw.panos.threat.name`** +: Palo Alto Networks name for the threat. + +type: keyword + + +**`panw.panos.action`** +: Action taken for the session. + +type: keyword + + +**`panw.panos.type`** +: Specifies the type of the log. + + +**`panw.panos.sub_type`** +: Specifies the subtype of the log. + + +**`panw.panos.virtual_sys`** +: Virtual system instance. + +type: keyword + + +**`panw.panos.client_os_ver`** +: The client device’s OS version. + +type: keyword + + +**`panw.panos.client_os`** +: The client device’s OS. + +type: keyword + + +**`panw.panos.client_ver`** +: The client’s GlobalProtect app version. + +type: keyword + + +**`panw.panos.stage`** +: A string showing the stage of the connection. + +type: keyword + +example: before-login + + +**`panw.panos.actionflags`** +: A bit field indicating if the log was forwarded to Panorama. + +type: keyword + + +**`panw.panos.error`** +: A string showing the error that occurred in any event. + +type: keyword + + +**`panw.panos.error_code`** +: An integer associated with any errors that occurred. + +type: integer + + +**`panw.panos.repeatcnt`** +: The number of sessions with the same source IP address, destination IP address, application, and subtype that GlobalProtect has detected within the last five seconds. + +type: integer + + +**`panw.panos.serial_number`** +: The serial number of the user’s machine or device. + +type: keyword + + +**`panw.panos.auth_method`** +: A string showing the authentication type. + +type: keyword + +example: LDAP + + +**`panw.panos.datasource`** +: Source from which mapping information is collected. + +type: keyword + + +**`panw.panos.datasourcetype`** +: Mechanism used to identify the IP/User mappings within a data source. + +type: keyword + + +**`panw.panos.datasourcename`** +: User-ID source that sends the IP (Port)-User Mapping. + +type: keyword + + +**`panw.panos.factorno`** +: Indicates the use of primary authentication (1) or additional factors (2, 3). + +type: integer + + +**`panw.panos.factortype`** +: Vendor used to authenticate a user when multi-factor authentication is present. + +type: keyword + + +**`panw.panos.factorcompletiontime`** +: Time the authentication was completed. 
+ +type: date + + +**`panw.panos.ugflags`** +: Displays whether the user group was found during user group mapping. Supported values are: User Group Found—Indicates whether the user could be mapped to a group. Duplicate User—Indicates whether duplicate users were found in a user group. Displays N/A if no user group is found. + +type: keyword + + + +## device_group_hierarchy [_device_group_hierarchy] + +A sequence of identification numbers that indicate the device group’s location within a device group hierarchy. The firewall (or virtual system) generating the log includes the identification number of each ancestor in its device group hierarchy. The shared device group (level 0) is not included in this structure. If the log values are 12, 34, 45, 0, it means that the log was generated by a firewall (or virtual system) that belongs to device group 45, and its ancestors are 34 and 12. + +**`panw.panos.device_group_hierarchy.level_1`** +: A sequence of identification numbers that indicate the device group’s location within a device group hierarchy. The firewall (or virtual system) generating the log includes the identification number of each ancestor in its device group hierarchy. The shared device group (level 0) is not included in this structure. If the log values are 12, 34, 45, 0, it means that the log was generated by a firewall (or virtual system) that belongs to device group 45, and its ancestors are 34 and 12. + +type: keyword + + +**`panw.panos.device_group_hierarchy.level_2`** +: A sequence of identification numbers that indicate the device group’s location within a device group hierarchy. The firewall (or virtual system) generating the log includes the identification number of each ancestor in its device group hierarchy. The shared device group (level 0) is not included in this structure. If the log values are 12, 34, 45, 0, it means that the log was generated by a firewall (or virtual system) that belongs to device group 45, and its ancestors are 34 and 12. + +type: keyword + + +**`panw.panos.device_group_hierarchy.level_3`** +: A sequence of identification numbers that indicate the device group’s location within a device group hierarchy. The firewall (or virtual system) generating the log includes the identification number of each ancestor in its device group hierarchy. The shared device group (level 0) is not included in this structure. If the log values are 12, 34, 45, 0, it means that the log was generated by a firewall (or virtual system) that belongs to device group 45, and its ancestors are 34 and 12. + +type: keyword + + +**`panw.panos.device_group_hierarchy.level_4`** +: A sequence of identification numbers that indicate the device group’s location within a device group hierarchy. The firewall (or virtual system) generating the log includes the identification number of each ancestor in its device group hierarchy. The shared device group (level 0) is not included in this structure. If the log values are 12, 34, 45, 0, it means that the log was generated by a firewall (or virtual system) that belongs to device group 45, and its ancestors are 34 and 12. + +type: keyword + + +**`panw.panos.timeout`** +: Timeout after which the IP/User Mappings are cleared. + +type: integer + + +**`panw.panos.vsys_id`** +: A unique identifier for a virtual system on a Palo Alto Networks firewall. + +type: keyword + + +**`panw.panos.vsys_name`** +: The name of the virtual system associated with the session; only valid on firewalls enabled for multiple virtual systems. 
+ +type: keyword + + +**`panw.panos.description`** +: Additional information for any event that has occurred. + +type: keyword + + +**`panw.panos.tunnel_type`** +: The type of tunnel (either SSLVPN or IPSec). + +type: keyword + + +**`panw.panos.connect_method`** +: A string showing how the GlobalProtect app connects to the gateway. + +type: keyword + + +**`panw.panos.matchname`** +: Name of the HIP object or profile. + +type: keyword + + +**`panw.panos.matchtype`** +: Whether the hip field represents a HIP object or a HIP profile. + +type: keyword + + +**`panw.panos.priority`** +: The priority order of the gateway that is based on highest (1), high (2), medium (3), low (4), or lowest (5) to which the GlobalProtect app can connect. + +type: keyword + + +**`panw.panos.response_time`** +: The SSL response time of the selected gateway that is measured in milliseconds on the endpoint during tunnel setup. + +type: keyword + + +**`panw.panos.attempted_gateways`** +: The fields that are collected for each gateway connection attempt with the gateway name, SSL response time, and priority. + +type: keyword + + +**`panw.panos.gateway`** +: The name of the gateway that is specified on the portal configuration. + +type: keyword + + +**`panw.panos.selection_type`** +: The connection method that is selected to connect to the gateway. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-pensando.md b/docs/reference/filebeat/exported-fields-pensando.md new file mode 100644 index 000000000000..2e24579f8f07 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-pensando.md @@ -0,0 +1,91 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-pensando.html +--- + +# Pensando fields [exported-fields-pensando] + +Pensando Module + + +## pensando [_pensando] + +Fields from Pensando logs. + + +## dfw [_dfw] + +Fields for Pensando DFW. + +**`pensando.dfw.action`** +: Action on the flow. + +type: keyword + + +**`pensando.dfw.app_id`** +: Application ID. + +type: integer + + +**`pensando.dfw.destination_address`** +: Address of destination. + +type: keyword + + +**`pensando.dfw.destination_port`** +: Port of destination. + +type: integer + + +**`pensando.dfw.direction`** +: Direction of the flow. + +type: keyword + + +**`pensando.dfw.protocol`** +: Protocol of the flow. + +type: keyword + + +**`pensando.dfw.rule_id`** +: Rule ID that was matched. + +type: keyword + + +**`pensando.dfw.session_id`** +: Session ID of the flow. + +type: integer + + +**`pensando.dfw.session_state`** +: Session state of the flow. + +type: keyword + + +**`pensando.dfw.source_address`** +: Source address of the flow. + +type: keyword + + +**`pensando.dfw.source_port`** +: Source port of the flow. + +type: integer + + +**`pensando.dfw.timestamp`** +: Timestamp of the log. + +type: date + + diff --git a/docs/reference/filebeat/exported-fields-postgresql.md b/docs/reference/filebeat/exported-fields-postgresql.md new file mode 100644 index 000000000000..a5fd2cfba56d --- /dev/null +++ b/docs/reference/filebeat/exported-fields-postgresql.md @@ -0,0 +1,191 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-postgresql.html +--- + +# PostgreSQL fields [exported-fields-postgresql] + +Module for parsing the PostgreSQL log files. + + +## postgresql [_postgresql] + +Fields from PostgreSQL logs. + + +## log [_log_11] + +Fields from the PostgreSQL log files. + +**`postgresql.log.timestamp`** +: [7.3.0] + +The timestamp from the log line. 
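+The `postgresql.log.*` fields below are extracted by the module's `log` fileset. As a minimal sketch of enabling it in `modules.d/postgresql.yml` (the path glob is an assumption; point `var.paths` at wherever your server writes its logs):
+
+```yaml
+# Hypothetical modules.d/postgresql.yml sketch; the log location is an
+# assumption and varies by distribution and PostgreSQL version.
+- module: postgresql
+  log:
+    enabled: true
+    var.paths: ["/var/log/postgresql/*.log*"]
+```
+
+Several of these fields (for example `postgresql.log.session_line_number`) correspond to escapes in the server's `log_line_prefix` setting, so which ones are populated depends on how PostgreSQL's logging is configured.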
+ + +**`postgresql.log.core_id`** +: [8.0.0] + +Core id. (deprecated, there is no core_id in PostgreSQL logs, this is actually session_line_number). + +type: alias + +alias to: postgresql.log.session_line_number + + +**`postgresql.log.client_addr`** +: Host where the connection originated from. + +example: 127.0.0.1 + + +**`postgresql.log.client_port`** +: Port where the connection originated from. + +example: 59700 + + +**`postgresql.log.session_id`** +: PostgreSQL session. + +example: 5ff1dd98.22 + + +**`postgresql.log.session_line_number`** +: Line number inside a session. (%l in `log_line_prefix`). + +type: long + + +**`postgresql.log.database`** +: Name of database. + +example: postgres + + +**`postgresql.log.query`** +: Query statement. In the case of CSV parse, look at command_tag to get more context. + +example: SELECT * FROM users; + + +**`postgresql.log.query_step`** +: Statement step when using extended query protocol (one of statement, parse, bind or execute). + +example: parse + + +**`postgresql.log.query_name`** +: Name given to a query when using extended query protocol. If it is "", or not present, this field is ignored. + +example: pdo_stmt_00000001 + + +**`postgresql.log.command_tag`** +: Type of session’s current command. The complete list can be found at: src/include/tcop/cmdtaglist.h + +example: SELECT + + +**`postgresql.log.session_start_time`** +: Time when this session started. + +type: date + + +**`postgresql.log.virtual_transaction_id`** +: Backend local transaction id. + + +**`postgresql.log.transaction_id`** +: The id of current transaction. + +type: long + + +**`postgresql.log.sql_state_code`** +: State code returned by Postgres (if any). See also [https://www.postgresql.org/docs/current/errcodes-appendix.html](https://www.postgresql.org/docs/current/errcodes-appendix.html) + +type: keyword + + +**`postgresql.log.detail`** +: More information about the message, parameters in case of a parametrized query. e.g. *Role \"user\" does not exist.*, *parameters: $1 = 42*, etc. + + +**`postgresql.log.hint`** +: A possible solution to solve an error. + + +**`postgresql.log.internal_query`** +: Internal query that led to the error (if any). + + +**`postgresql.log.internal_query_pos`** +: Character count of the internal query (if any). + +type: long + + +**`postgresql.log.context`** +: Error context. + + +**`postgresql.log.query_pos`** +: Character count of the error position (if any). + +type: long + + +**`postgresql.log.location`** +: Location of the error in the PostgreSQL source code (if log_error_verbosity is set to verbose). + + +**`postgresql.log.application_name`** +: Name of the application of this event. It is defined by the client. + + +**`postgresql.log.backend_type`** +: Type of backend of this event. Possible types are autovacuum launcher, autovacuum worker, logical replication launcher, logical replication worker, parallel worker, background writer, client backend, checkpointer, startup, walreceiver, walsender and walwriter. In addition, background workers registered by extensions may have additional types. + +example: client backend + + +**`postgresql.log.error.code`** +: [8.0.0] + +Error code returned by Postgres (if any). Deprecated: errors can have letters. Use sql_state_code instead. 
+ +type: alias + +alias to: postgresql.log.sql_state_code + + +**`postgresql.log.timezone`** +: type: alias + +alias to: event.timezone + + +**`postgresql.log.user`** +: type: alias + +alias to: user.name + + +**`postgresql.log.level`** +: Valid values are DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC. + +type: alias + +example: LOG + +alias to: log.level + + +**`postgresql.log.message`** +: type: alias + +alias to: message + + diff --git a/docs/reference/filebeat/exported-fields-process.md b/docs/reference/filebeat/exported-fields-process.md new file mode 100644 index 000000000000..1376d72752fb --- /dev/null +++ b/docs/reference/filebeat/exported-fields-process.md @@ -0,0 +1,38 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-process.html +--- + +# Process fields [exported-fields-process] + +Process metadata fields + +**`process.exe`** +: type: alias + +alias to: process.executable + + + +## owner [_owner] + +Process owner information. + +**`process.owner.id`** +: Unique identifier of the user. + +type: keyword + + +**`process.owner.name`** +: Short name or login of the user. + +type: keyword + +example: albert + + +**`process.owner.name.text`** +: type: text + + diff --git a/docs/reference/filebeat/exported-fields-rabbitmq.md b/docs/reference/filebeat/exported-fields-rabbitmq.md new file mode 100644 index 000000000000..56e221758b16 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-rabbitmq.md @@ -0,0 +1,25 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-rabbitmq.html +--- + +# RabbitMQ fields [exported-fields-rabbitmq] + +RabbitMQ Module + + +## rabbitmq [_rabbitmq] + + +## log [_log_12] + +RabbitMQ log files + +**`rabbitmq.log.pid`** +: The Erlang process id + +type: keyword + +example: <0.222.0> + + diff --git a/docs/reference/filebeat/exported-fields-redis.md b/docs/reference/filebeat/exported-fields-redis.md new file mode 100644 index 000000000000..d49af843cd32 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-redis.md @@ -0,0 +1,76 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-redis.html +--- + +# Redis fields [exported-fields-redis] + +Redis Module + + +## redis [_redis] + + +## log [_log_13] + +Redis log files + +**`redis.log.role`** +: The role of the Redis instance. Can be one of `master`, `slave`, `child` (for RDF/AOF writing child), or `sentinel`. + +type: keyword + + +**`redis.log.pid`** +: type: alias + +alias to: process.pid + + +**`redis.log.level`** +: type: alias + +alias to: log.level + + +**`redis.log.message`** +: type: alias + +alias to: message + + + +## slowlog [_slowlog_4] + +Slow logs are retrieved from Redis via a network connection. + +**`redis.slowlog.cmd`** +: The command executed. + +type: keyword + + +**`redis.slowlog.duration.us`** +: How long it took to execute the command in microseconds. + +type: long + + +**`redis.slowlog.id`** +: The ID of the query. + +type: long + + +**`redis.slowlog.key`** +: The key on which the command was executed. + +type: keyword + + +**`redis.slowlog.args`** +: The arguments with which the command was called. 
+ +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-s3.md b/docs/reference/filebeat/exported-fields-s3.md new file mode 100644 index 000000000000..32a4982df101 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-s3.md @@ -0,0 +1,33 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-s3.html +--- + +# s3 fields [exported-fields-s3] + +S3 fields from s3 input. + +**`bucket.name`** +: Name of the S3 bucket that this log retrieved from. + +type: keyword + + +**`bucket.arn`** +: ARN of the S3 bucket that this log retrieved from. + +type: keyword + + +**`object.key`** +: Name of the S3 object that this log retrieved from. + +type: keyword + + +**`metadata`** +: AWS S3 object metadata values. + +type: flattened + + diff --git a/docs/reference/filebeat/exported-fields-salesforce.md b/docs/reference/filebeat/exported-fields-salesforce.md new file mode 100644 index 000000000000..d2962a345b95 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-salesforce.md @@ -0,0 +1,646 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-salesforce.html +--- + +# Salesforce fields [exported-fields-salesforce] + +Salesforce Module + + +## salesforce [_salesforce] + +Fileset for ingesting Salesforce Apex logs. + +**`salesforce.instance_url`** +: The Instance URL of the Salesforce instance. + +type: keyword + + + +## apex [_apex] + +Fileset for ingesting Salesforce Apex logs. + +**`salesforce.apex.document_id`** +: Unique ID of the Apex document. + +type: keyword + + +**`salesforce.apex.action`** +: Action performed by the callout. + +type: keyword + + +**`salesforce.apex.callout_time`** +: Time spent waiting on web service callouts, in milliseconds. + +type: float + + +**`salesforce.apex.class_name`** +: The Apex class name. If the class is part of a managed package, this string includes the package namespace. + +type: keyword + + +**`salesforce.apex.client_name`** +: The name of the client that’s using Salesforce services. This field is an optional parameter that can be passed in API calls. If blank, the caller didn’t specify a client in the CallOptions header. + +type: keyword + + +**`salesforce.apex.cpu_time`** +: The CPU time in milliseconds used to complete the request. + +type: float + + +**`salesforce.apex.db_blocks`** +: Indicates how much activity is occurring in the database. A high value for this field suggests that adding indexes or filters on your queries would benefit performance. + +type: long + + +**`salesforce.apex.db_cpu_time`** +: The CPU time in milliseconds to complete the request. Indicates the amount of activity taking place in the database layer during the request. + +type: float + + +**`salesforce.apex.db_total_time`** +: Time (in milliseconds) spent waiting for database processing in aggregate for all operations in the request. Compare this field to cpu_time to determine whether performance issues are occurring in the database layer or in your own code. + +type: float + + +**`salesforce.apex.entity`** +: Name of the external object being accessed. + +type: keyword + + +**`salesforce.apex.entity_name`** +: The name of the object affected by the trigger. + +type: keyword + + +**`salesforce.apex.entry_point`** +: The entry point for this Apex execution. + +type: keyword + + +**`salesforce.apex.event_type`** +: The type of event. 
+ +type: keyword + + +**`salesforce.apex.execute_ms`** +: How long it took (in milliseconds) for Salesforce to prepare and execute the query. Available in API version 42.0 and later. + +type: float + + +**`salesforce.apex.fetch_ms`** +: How long it took (in milliseconds) to retrieve the query results from the external system. Available in API version 42.0 and later. + +type: float + + +**`salesforce.apex.filter`** +: Field expressions to filter which rows to return. Corresponds to WHERE in SOQL queries. + +type: keyword + + +**`salesforce.apex.is_long_running_request`** +: Indicates whether the request is counted against your org’s concurrent long-running Apex request limit (true) or not (false). + +type: keyword + + +**`salesforce.apex.limit`** +: Maximum number of rows to return for a query. Corresponds to LIMIT in SOQL queries. + +type: long + + +**`salesforce.apex.limit_usage_pct`** +: The percentage of Apex SOAP calls that were made against the organization’s limit. + +type: float + + +**`salesforce.apex.login_key`** +: The string that ties together all events in a given user’s login session. It starts with a login event and ends with either a logout event or the user session expiring. + +type: keyword + + +**`salesforce.apex.media_type`** +: The media type of the response. + +type: keyword + + +**`salesforce.apex.message`** +: Error or warning message associated with the failed call. + +type: text + + +**`salesforce.apex.method_name`** +: The name of the calling Apex method. + +type: keyword + + +**`salesforce.apex.fields_count`** +: The number of fields or columns, where applicable. + +type: long + + +**`salesforce.apex.soql_queries_count`** +: The number of SOQL queries that were executed during the event. + +type: long + + +**`salesforce.apex.offset`** +: Number of rows to skip when paging through a result set. Corresponds to OFFSET in SOQL queries. + +type: long + + +**`salesforce.apex.orderby`** +: Field or column to use for sorting query results, and whether to sort the results in ascending (default) or descending order. Corresponds to ORDER BY in SOQL queries. + +type: keyword + + +**`salesforce.apex.organization_id`** +: The 15-character ID of the organization. + +type: keyword + + +**`salesforce.apex.query`** +: The SOQL query, if one was performed. + +type: keyword + + +**`salesforce.apex.quiddity`** +: The type of outer execution associated with this event. + +type: keyword + + +**`salesforce.apex.request_id`** +: The unique ID of a single transaction. A transaction can contain one or more events. Each event in a given transaction has the same request_id. + +type: keyword + + +**`salesforce.apex.request_status`** +: The status of the request for a page view or user interface action. + +type: keyword + + +**`salesforce.apex.rows_total`** +: Total number of records in the result set. The value is always -1 if the custom adapter’s DataSource.Provider class doesn’t declare the QUERY_TOTAL_SIZE capability. + +type: long + + +**`salesforce.apex.rows_fetched`** +: Number of rows fetched by the callout. Available in API version 42.0 and later. + +type: long + + +**`salesforce.apex.rows_processed`** +: The number of rows that were processed in the request. + +type: long + + +**`salesforce.apex.run_time`** +: The amount of time that the request took in milliseconds. + +type: float + + +**`salesforce.apex.select`** +: Comma-separated list of fields being queried. Corresponds to SELECT in SOQL queries. + +type: keyword + + +**`salesforce.apex.subqueries`** +: Reserved for future use. 
+ +type: keyword + + +**`salesforce.apex.throughput`** +: Number of records retrieved in one second. + +type: float + + +**`salesforce.apex.trigger_id`** +: The 15-character ID of the trigger that was fired. + +type: keyword + + +**`salesforce.apex.trigger_name`** +: For triggers coming from managed packages, trigger_name includes a namespace prefix separated with a . character. If no namespace prefix is present, the trigger is from an unmanaged trigger. + +type: keyword + + +**`salesforce.apex.trigger_type`** +: The type of this trigger. + +type: keyword + + +**`salesforce.apex.type`** +: The type of Apex callout. + +type: keyword + + +**`salesforce.apex.uri`** +: The URI of the page that’s receiving the request. + +type: keyword + + +**`salesforce.apex.uri_derived_id`** +: The 18-character case-safe ID of the URI of the page that’s receiving the request. + +type: keyword + + +**`salesforce.apex.user_agent`** +: The numeric code for the type of client used to make the request (for example, the browser, application, or API). + +type: keyword + + +**`salesforce.apex.user_id_derived`** +: The 18-character case-safe ID of the user who’s using Salesforce services through the UI or the API. + +type: keyword + + + +## salesforce.login [_salesforce_login] + +Fileset for ingesting Salesforce Login (REST) logs. + +**`salesforce.login.document_id`** +: Unique Id. + +type: keyword + + +**`salesforce.login.application`** +: The application used to access the organization. + +type: keyword + + +**`salesforce.login.api.type`** +: The type of Salesforce API request. + +type: keyword + + +**`salesforce.login.api.version`** +: The version of the Salesforce API that’s being used. + +type: keyword + + +**`salesforce.login.auth.service_id`** +: The authentication method used by a third-party identification provider for an OpenID Connect single sign-on protocol. + +type: keyword + + +**`salesforce.login.auth.method_reference`** +: The authentication method used by a third-party identification provider for an OpenID Connect single sign-on protocol. This field is available in API version 51.0 and later. + +type: keyword + + +**`salesforce.login.session.level`** +: Session-level security controls user access to features that support it, such as connected apps and reporting. This field is available in API version 42.0 and later. + +type: text + + +**`salesforce.login.session.key`** +: The user’s unique session ID. Use this value to identify all user events within a session. When a user logs out and logs in again, a new session is started. For LoginEvent, this field is often null because the event is captured before a session is created. For example, vMASKIU6AxEr+Op5. This field is available in API version 46.0 and later. + +type: keyword + + +**`salesforce.login.key`** +: The string that ties together all events in a given user’s login session. It starts with a login event and ends with either a logout event or the user session expiring. + +type: keyword + + +**`salesforce.login.history_id`** +: Tracks a user session so you can correlate user activity with a particular login instance. This field is also available on the LoginHistory, AuthSession, and other objects, making it easier to trace events back to a user’s original authentication. + +type: keyword + + +**`salesforce.login.type`** +: The type of login used to access the session. + +type: keyword + + +**`salesforce.login.geo_id`** +: The Salesforce ID of the LoginGeo object associated with the login user’s IP address. 
+ +type: keyword + + +**`salesforce.login.additional_info`** +: JSON serialization of additional information that’s captured from the HTTP headers during a login request. + +type: text + + +**`salesforce.login.client_version`** +: The version number of the login client. If no version number is available, “Unknown” is returned. + +type: keyword + + +**`salesforce.login.client_ip`** +: The IP address of the client that’s using Salesforce services. A Salesforce internal IP (such as a login from Salesforce Workbench or AppExchange) is shown as “Salesforce.com IP”. + +type: keyword + + +**`salesforce.login.cpu_time`** +: The CPU time in milliseconds used to complete the request. This field indicates the amount of activity taking place in the app server layer. + +type: long + + +**`salesforce.login.db_time_total`** +: The time in nanoseconds for a database round trip. Includes time spent in the JDBC driver, network to the database, and the database’s CPU time. Compare this field to cpu_time to determine whether performance issues are occurring in the database layer or in your own code. + +type: double + + +**`salesforce.login.event_type`** +: The type of event. The value is always Login. + +type: keyword + + +**`salesforce.login.organization_id`** +: The 15-character ID of the organization. + +type: keyword + + +**`salesforce.login.request_id`** +: The unique ID of a single transaction. A transaction can contain one or more events. Each event in a given transaction has the same REQUEST_ID. + +type: keyword + + +**`salesforce.login.request_status`** +: The status of the request for a page view or user interface action. + +type: keyword + + +**`salesforce.login.run_time`** +: The amount of time that the request took in milliseconds. + +type: long + + +**`salesforce.login.user_id`** +: The 15-character ID of the user who’s using Salesforce services through the UI or the API. + +type: keyword + + +**`salesforce.login.uri_id_derived`** +: The 18-character case-insensitive ID of the URI of the page that’s receiving the request. + +type: keyword + + +**`salesforce.login.evaluation_time`** +: The amount of time it took to evaluate the transaction security policy, in milliseconds. + +type: float + + +**`salesforce.login.login_type`** +: The type of login used to access the session. + +type: keyword + + + +## salesforce.logout [_salesforce_logout] + +Fileset for parsing Salesforce Logout (REST) logs. + +**`salesforce.logout.document_id`** +: Unique ID. + +type: keyword + + +**`salesforce.logout.session.key`** +: The user’s unique session ID. You can use this value to identify all user events within a session. When a user logs out and logs in again, a new session is started. + +type: keyword + + +**`salesforce.logout.session.level`** +: The security level of the session that was used when logging out (e.g. Standard Session or High-Assurance Session). + +type: text + + +**`salesforce.logout.session.type`** +: The session type that was used when logging out (e.g. API, OAuth2, or UI). + +type: keyword + + +**`salesforce.logout.login_key`** +: The string that ties together all events in a given user’s login session. It starts with a login event and ends with either a logout event or the user session expiring. + +type: keyword + + +**`salesforce.logout.api.type`** +: The type of Salesforce API request. + +type: keyword + + +**`salesforce.logout.api.version`** +: The version of the Salesforce API that’s being used. 
+ +type: keyword + + +**`salesforce.logout.app_type`** +: The application type that was in use upon logging out. + +type: keyword + + +**`salesforce.logout.browser_type`** +: The identifier string returned by the browser used at login. + +type: keyword + + +**`salesforce.logout.client_version`** +: The version of the client that was in use upon logging out. + +type: keyword + + +**`salesforce.logout.event_type`** +: The type of event. The value is always Logout. + +type: keyword + + +**`salesforce.logout.organization_by_id`** +: The 15-character ID of the organization. + +type: keyword + + +**`salesforce.logout.platform_type`** +: The code for the client platform. If a timeout caused the logout, this field is null. + +type: keyword + + +**`salesforce.logout.resolution_type`** +: The screen resolution of the client. If a timeout caused the logout, this field is null. + +type: keyword + + +**`salesforce.logout.user_id`** +: The 15-character ID of the user who’s using Salesforce services through the UI or the API. + +type: keyword + + +**`salesforce.logout.user_id_derived`** +: The 18-character case-safe ID of the user who’s using Salesforce services through the UI or the API. + +type: keyword + + +**`salesforce.logout.user_initiated_logout`** +: The value is 1 if the user intentionally logged out of the organization by clicking the Logout button. If the user’s session timed out due to inactivity or another implicit logout action, the value is 0. + +type: keyword + + +**`salesforce.logout.created_by_id`** +: Unavailable + +type: keyword + + +**`salesforce.logout.event_identifier`** +: This field is populated only when the activity that this event monitors requires extra authentication, such as multi-factor authentication. In this case, Salesforce generates more events and sets the RelatedEventIdentifier field of the new events to the value of the EventIdentifier field of the original event. Use this field with the EventIdentifier field to correlate all the related events. If no extra authentication is required, this field is blank. + +type: keyword + + +**`salesforce.logout.organization_id`** +: The 15-character ID of the organization. + +type: keyword + + + +## salesforce.setup_audit_trail [_salesforce_setup_audit_trail] + +Fileset for ingesting Salesforce SetupAuditTrail logs. + +**`salesforce.setup_audit_trail.document_id`** +: Unique ID. + +type: keyword + + +**`salesforce.setup_audit_trail.created_by_context`** +: The context under which the Setup change was made. For example, if Einstein uses cloud-to-cloud services to make a change in Setup, the value of this field is Einstein. + +type: keyword + + +**`salesforce.setup_audit_trail.created_by_id`** +: Unknown + +type: keyword + + +**`salesforce.setup_audit_trail.created_by_issuer`** +: Reserved for future use. + +type: keyword + + +**`salesforce.setup_audit_trail.delegate_user`** +: The Login-As user who executed the action in Setup. If a Login-As user didn’t perform the action, this field is blank. This field is available in API version 35.0 and later. + +type: keyword + + +**`salesforce.setup_audit_trail.display`** +: The full description of changes made in Setup. For example, if the Action field has a value of PermSetCreate, the Display field has a value like “Created permission set MAD: with user license Salesforce”. 
+ +type: keyword + + +**`salesforce.setup_audit_trail.responsible_namespace_prefix`** +: Unknown + +type: keyword + + +**`salesforce.setup_audit_trail.section`** +: The section in the Setup menu where the action occurred. For example, Manage Users or Company Profile. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-santa.md b/docs/reference/filebeat/exported-fields-santa.md new file mode 100644 index 000000000000..ef58370052bc --- /dev/null +++ b/docs/reference/filebeat/exported-fields-santa.md @@ -0,0 +1,95 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-santa.html +--- + +# Google Santa fields [exported-fields-santa] + +Santa Module + + +## santa [_santa] + +**`santa.action`** +: Action + +type: keyword + +example: EXEC + + +**`santa.decision`** +: Decision that santad took. + +type: keyword + +example: ALLOW + + +**`santa.reason`** +: Reason for the decision. + +type: keyword + +example: CERT + + +**`santa.mode`** +: Operating mode of Santa. + +type: keyword + +example: M + + + +## disk [_disk] + +Fields for DISKAPPEAR actions. + +**`santa.disk.volume`** +: The volume name. + + +**`santa.disk.bus`** +: The disk bus protocol. + + +**`santa.disk.serial`** +: The disk serial number. + + +**`santa.disk.bsdname`** +: The disk BSD name. + +example: disk1s3 + + +**`santa.disk.model`** +: The disk model. + +example: APPLE SSD SM0512L + + +**`santa.disk.fs`** +: The disk volume kind (filesystem type). + +example: apfs + + +**`santa.disk.mount`** +: The disk volume path. + + +**`santa.certificate.common_name`** +: Common name from code signing certificate. + +type: keyword + + +**`santa.certificate.sha256`** +: SHA256 hash of code signing certificate. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-snyk.md b/docs/reference/filebeat/exported-fields-snyk.md new file mode 100644 index 000000000000..d19da3d68af2 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-snyk.md @@ -0,0 +1,222 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-snyk.html +--- + +# Snyk fields [exported-fields-snyk] + +Snyk module + + +## snyk [_snyk] + +Module for parsing Snyk project vulnerabilities. + +**`snyk.projects`** +: Array with all related project objects. + +type: flattened + + +**`snyk.related.projects`** +: Array of all the related project IDs. + +type: keyword + + + +## audit [_audit_5] + +Module for parsing Snyk audit logs. + +**`snyk.audit.org_id`** +: ID of the organization related to the event. + +type: keyword + + +**`snyk.audit.project_id`** +: ID of the project related to the event. + +type: keyword + + +**`snyk.audit.content`** +: Overview of the content that was changed, both old and new values. + +type: flattened + + + +## vulnerabilities [_vulnerabilities] + +Module for parsing Snyk project vulnerabilities. + +**`snyk.vulnerabilities.cvss3`** +: CVSSv3 scores. + +type: keyword + + +**`snyk.vulnerabilities.disclosure_time`** +: The time this vulnerability was originally disclosed to the package maintainers. + +type: date + + +**`snyk.vulnerabilities.exploit_maturity`** +: The Snyk exploit maturity level. + +type: keyword + + +**`snyk.vulnerabilities.id`** +: The vulnerability reference ID. + +type: keyword + + +**`snyk.vulnerabilities.is_ignored`** +: If the vulnerability report has been ignored. + +type: boolean + + +**`snyk.vulnerabilities.is_patchable`** +: If the vulnerability is fixable by using a Snyk-supplied patch. 
+ +type: boolean + + +**`snyk.vulnerabilities.is_patched`** +: If the vulnerability has been patched. + +type: boolean + + +**`snyk.vulnerabilities.is_pinnable`** +: If the vulnerability is fixable by pinning a transitive dependency. + +type: boolean + + +**`snyk.vulnerabilities.is_upgradable`** +: If the vulnerability is fixable by upgrading a dependency. + +type: boolean + + +**`snyk.vulnerabilities.language`** +: The package’s programming language. + +type: keyword + + +**`snyk.vulnerabilities.package`** +: The package identifier according to its package manager. + +type: keyword + + +**`snyk.vulnerabilities.package_manager`** +: The package manager. + +type: keyword + + +**`snyk.vulnerabilities.patches`** +: Patches required to resolve the issue created by Snyk. + +type: flattened + + +**`snyk.vulnerabilities.priority_score`** +: The Snyk priority score. + +type: long + + +**`snyk.vulnerabilities.publication_time`** +: The vulnerability publication time. + +type: date + + +**`snyk.vulnerabilities.jira_issue_url`** +: Link to the related Jira issue. + +type: keyword + + +**`snyk.vulnerabilities.original_severity`** +: The original severity of the vulnerability. + +type: long + + +**`snyk.vulnerabilities.reachability`** +: If the vulnerable function from the library is used in the code scanned. Can be one of No Info, Potentially reachable, or Reachable. + +type: keyword + + +**`snyk.vulnerabilities.title`** +: The issue title. + +type: keyword + + +**`snyk.vulnerabilities.type`** +: The issue type. Can be either "license" or "vulnerability". + +type: keyword + + +**`snyk.vulnerabilities.unique_severities_list`** +: A list of related unique severities. + +type: keyword + + +**`snyk.vulnerabilities.version`** +: The package version this issue is applicable to. + +type: keyword + + +**`snyk.vulnerabilities.introduced_date`** +: The date the vulnerability was initially found. + +type: date + + +**`snyk.vulnerabilities.is_fixed`** +: If the related vulnerability has been resolved. + +type: boolean + + +**`snyk.vulnerabilities.credit`** +: Reference to the person that originally found the vulnerability. + +type: keyword + + +**`snyk.vulnerabilities.semver`** +: One or more semver ranges this issue is applicable to. The format varies according to package manager. + +type: flattened + + +**`snyk.vulnerabilities.identifiers.alternative`** +: Additional vulnerability identifiers. + +type: keyword + + +**`snyk.vulnerabilities.identifiers.cwe`** +: CWE vulnerability identifiers. + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-sophos.md b/docs/reference/filebeat/exported-fields-sophos.md new file mode 100644 index 000000000000..d8b805f4b101 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-sophos.md @@ -0,0 +1,1244 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-sophos.html +--- + +# sophos fields [exported-fields-sophos] + +sophos Module + + +## sophos.xg [_sophos_xg] + +Module for parsing Sophos XG syslog. 
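+For context, a minimal sketch of enabling the fileset that populates these fields, assuming the stock module layout (the listener address and port below are illustrative assumptions, not requirements):
+
+```yaml
+- module: sophos
+  xg:
+    enabled: true
+    # Assumed syslog listener settings; point your XG firewall at this host/port.
+    var.input: udp
+    var.syslog_host: 0.0.0.0
+    var.syslog_port: 9004
+```
+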
+**`sophos.xg.action`** +: Event Action + +type: keyword + + +**`sophos.xg.activityname`** +: Web policy activity that matched and caused the policy result. + +type: keyword + + +**`sophos.xg.ap`** +: Access Point Serial ID or LocalWifi0 or LocalWifi1. + +type: keyword + + +**`sophos.xg.app_category`** +: Name of the category under which the application falls + +type: keyword + + +**`sophos.xg.app_filter_policy_id`** +: Application filter policy ID applied on the traffic + +type: keyword + + +**`sophos.xg.app_is_cloud`** +: Whether the application is a cloud application + +type: keyword + + +**`sophos.xg.app_name`** +: Application name + +type: keyword + + +**`sophos.xg.app_resolved_by`** +: Application is resolved by signature or synchronized application + +type: keyword + + +**`sophos.xg.app_risk`** +: Risk level assigned to the application + +type: keyword + + +**`sophos.xg.app_technology`** +: Technology of the application + +type: keyword + + +**`sophos.xg.appfilter_policy_id`** +: Application Filter policy applied on the traffic + +type: integer + + +**`sophos.xg.application`** +: Application name + +type: keyword + + +**`sophos.xg.application_category`** +: Name of the category under which the application falls + +type: keyword + + +**`sophos.xg.application_filter_policy`** +: Application Filter policy applied on the traffic + +type: integer + + +**`sophos.xg.application_name`** +: Application name + +type: keyword + + +**`sophos.xg.application_risk`** +: Risk level assigned to the application + +type: keyword + + +**`sophos.xg.application_technology`** +: Technology of the application + +type: keyword + + +**`sophos.xg.appresolvedby`** +: Application is resolved by signature or synchronized application + +type: keyword + + +**`sophos.xg.auth_client`** +: Auth Client + +type: keyword + + +**`sophos.xg.auth_mechanism`** +: Auth mechanism + +type: keyword + + +**`sophos.xg.av_policy_name`** +: Malware scanning policy name which is applied on the traffic + +type: keyword + + +**`sophos.xg.backup_mode`** +: Backup mode + +type: keyword + + +**`sophos.xg.branch_name`** +: Branch Name + +type: keyword + + +**`sophos.xg.category`** +: IPS signature category. + +type: keyword + + +**`sophos.xg.category_type`** +: Type of category under which website falls + +type: keyword + + +**`sophos.xg.classification`** +: Signature classification + +type: keyword + + +**`sophos.xg.client_host_name`** +: Client host name + +type: keyword + + +**`sophos.xg.client_physical_address`** +: Client physical address + +type: keyword + + +**`sophos.xg.clients_conn_ssid`** +: Number of clients connected to the SSID. 
+ +type: long + + +**`sophos.xg.collisions`** +: collisions + +type: long + + +**`sophos.xg.con_event`** +: Event Start/Stop + +type: keyword + + +**`sophos.xg.con_id`** +: Unique identifier of connection + +type: integer + + +**`sophos.xg.configuration`** +: Configuration + +type: float + + +**`sophos.xg.conn_id`** +: Unique identifier of connection + +type: integer + + +**`sophos.xg.connectionname`** +: Connectionname + +type: keyword + + +**`sophos.xg.connectiontype`** +: Connectiontype + +type: keyword + + +**`sophos.xg.connevent`** +: Event on which this log is generated + +type: keyword + + +**`sophos.xg.connid`** +: Connection ID + +type: keyword + + +**`sophos.xg.content_type`** +: Type of the content + +type: keyword + + +**`sophos.xg.contenttype`** +: Type of the content + +type: keyword + + +**`sophos.xg.context_match`** +: Context Match + +type: keyword + + +**`sophos.xg.context_prefix`** +: Content Prefix + +type: keyword + + +**`sophos.xg.context_suffix`** +: Context Suffix + +type: keyword + + +**`sophos.xg.cookie`** +: cookie + +type: keyword + + +**`sophos.xg.date`** +: Date (yyyy-mm-dd) when the event occurred + +type: date + + +**`sophos.xg.destinationip`** +: Original destination IP address of traffic + +type: ip + + +**`sophos.xg.device`** +: device + +type: keyword + + +**`sophos.xg.device_id`** +: Serial number of the device + +type: keyword + + +**`sophos.xg.device_model`** +: Model number of the device + +type: keyword + + +**`sophos.xg.device_name`** +: Name of the device + +type: keyword + + +**`sophos.xg.dictionary_name`** +: Dictionary Name + +type: keyword + + +**`sophos.xg.dir_disp`** +: Packet direction. Possible values: “org”, “reply”, “” + +type: keyword + + +**`sophos.xg.direction`** +: Direction + +type: keyword + + +**`sophos.xg.domainname`** +: Domain from which the virus was downloaded + +type: keyword + + +**`sophos.xg.download_file_name`** +: Download file name + +type: keyword + + +**`sophos.xg.download_file_type`** +: Download file type + +type: keyword + + +**`sophos.xg.dst_country_code`** +: Code of the country to which the destination IP belongs + +type: keyword + + +**`sophos.xg.dst_domainname`** +: Receiver domain name + +type: keyword + + +**`sophos.xg.dst_ip`** +: Original destination IP address of traffic + +type: ip + + +**`sophos.xg.dst_port`** +: Original destination port of TCP and UDP traffic + +type: integer + + +**`sophos.xg.dst_zone_type`** +: Type of destination zone + +type: keyword + + +**`sophos.xg.dstdomain`** +: Destination Domain + +type: keyword + + +**`sophos.xg.duration`** +: Duration of traffic (seconds) + +type: long + + +**`sophos.xg.email_subject`** +: Email Subject + +type: keyword + + +**`sophos.xg.ep_uuid`** +: Endpoint UUID + +type: keyword + + +**`sophos.xg.ether_type`** +: Ethernet frame type + +type: keyword + + +**`sophos.xg.eventid`** +: ATP Event ID + +type: keyword + + +**`sophos.xg.eventtime`** +: Event time + +type: date + + +**`sophos.xg.eventtype`** +: ATP event type + +type: keyword + + +**`sophos.xg.exceptions`** +: List of the checks excluded by web exceptions. 
+ +type: keyword + + +**`sophos.xg.execution_path`** +: ATP execution path + +type: keyword + + +**`sophos.xg.extra`** +: extra + +type: keyword + + +**`sophos.xg.file_name`** +: Filename + +type: keyword + + +**`sophos.xg.file_path`** +: File path + +type: keyword + + +**`sophos.xg.file_size`** +: File Size + +type: integer + + +**`sophos.xg.filename`** +: File name associated with the event + +type: keyword + + +**`sophos.xg.filepath`** +: Path of the file containing the virus + +type: keyword + + +**`sophos.xg.filesize`** +: Size of the file that contained the virus + +type: integer + + +**`sophos.xg.free`** +: free + +type: integer + + +**`sophos.xg.from_email_address`** +: Sender email address + +type: keyword + + +**`sophos.xg.ftp_direction`** +: Direction of FTP transfer: Upload or Download + +type: keyword + + +**`sophos.xg.ftp_url`** +: FTP URL from which the virus was downloaded + +type: keyword + + +**`sophos.xg.ftpcommand`** +: FTP command used when the virus was found + +type: keyword + + +**`sophos.xg.fw_rule_id`** +: Firewall Rule ID which is applied on the traffic + +type: integer + + +**`sophos.xg.fw_rule_type`** +: Firewall rule type which is applied on the traffic + +type: keyword + + +**`sophos.xg.hb_health`** +: Heartbeat status + +type: keyword + + +**`sophos.xg.hb_status`** +: Heartbeat status + +type: keyword + + +**`sophos.xg.host`** +: Host + +type: keyword + + +**`sophos.xg.http_category`** +: HTTP Category + +type: keyword + + +**`sophos.xg.http_category_type`** +: HTTP Category Type + +type: keyword + + +**`sophos.xg.httpresponsecode`** +: Code of the HTTP response + +type: long + + +**`sophos.xg.iap`** +: Internet Access policy ID applied on the traffic + +type: keyword + + +**`sophos.xg.icmp_code`** +: ICMP code of ICMP traffic + +type: keyword + + +**`sophos.xg.icmp_type`** +: ICMP type of ICMP traffic + +type: keyword + + +**`sophos.xg.idle_cpu`** +: Idle CPU usage (%) + +type: float + + +**`sophos.xg.idp_policy_id`** +: IPS policy ID which is applied on the traffic + +type: integer + + +**`sophos.xg.idp_policy_name`** +: Name of the IPS policy applied on the traffic + +type: keyword + + +**`sophos.xg.in_interface`** +: Interface for incoming traffic, e.g., Port A + +type: keyword + + +**`sophos.xg.interface`** +: interface + +type: keyword + + +**`sophos.xg.ipaddress`** +: Ipaddress + +type: keyword + + +**`sophos.xg.ips_policy_id`** +: IPS policy ID applied on the traffic + +type: integer + + +**`sophos.xg.lease_time`** +: Lease Time + +type: keyword + + +**`sophos.xg.localgateway`** +: Localgateway + +type: keyword + + +**`sophos.xg.localnetwork`** +: Localnetwork + +type: keyword + + +**`sophos.xg.log_component`** +: Component responsible for logging e.g. Firewall rule + +type: keyword + + +**`sophos.xg.log_id`** +: Unique 12-character code (0101011) + +type: keyword + + +**`sophos.xg.log_subtype`** +: Sub type of event + +type: keyword + + +**`sophos.xg.log_type`** +: Type of event, e.g. 
firewall event + +type: keyword + + +**`sophos.xg.log_version`** +: Log Version + +type: keyword + + +**`sophos.xg.login_user`** +: ATP login user + +type: keyword + + +**`sophos.xg.mailid`** +: mailid + +type: keyword + + +**`sophos.xg.mailsize`** +: mailsize + +type: integer + + +**`sophos.xg.message`** +: Message + +type: keyword + + +**`sophos.xg.mode`** +: Mode + +type: keyword + + +**`sophos.xg.nat_rule_id`** +: NAT Rule ID + +type: keyword + + +**`sophos.xg.newversion`** +: Newversion + +type: keyword + + +**`sophos.xg.oldversion`** +: Oldversion + +type: keyword + + +**`sophos.xg.out_interface`** +: Interface for outgoing traffic, e.g., Port B + +type: keyword + + +**`sophos.xg.override_authorizer`** +: Override authorizer + +type: keyword + + +**`sophos.xg.override_name`** +: Override name + +type: keyword + + +**`sophos.xg.override_token`** +: Override token + +type: keyword + + +**`sophos.xg.phpsessid`** +: PHP session ID + +type: keyword + + +**`sophos.xg.platform`** +: Platform of the traffic. + +type: keyword + + +**`sophos.xg.policy_type`** +: Policy type applied to the traffic + +type: keyword + + +**`sophos.xg.priority`** +: Severity level of traffic + +type: keyword + + +**`sophos.xg.protocol`** +: Protocol number of traffic + +type: keyword + + +**`sophos.xg.qualifier`** +: Qualifier + +type: keyword + + +**`sophos.xg.quarantine`** +: Path and filename of the file quarantined + +type: keyword + + +**`sophos.xg.quarantine_reason`** +: Quarantine reason + +type: keyword + + +**`sophos.xg.querystring`** +: querystring + +type: keyword + + +**`sophos.xg.raw_data`** +: Raw data + +type: keyword + + +**`sophos.xg.received_pkts`** +: Total number of packets received + +type: long + + +**`sophos.xg.receiveddrops`** +: received drops + +type: long + + +**`sophos.xg.receivederrors`** +: received errors + +type: keyword + + +**`sophos.xg.receivedkbits`** +: received kbits + +type: long + + +**`sophos.xg.recv_bytes`** +: Total number of bytes received + +type: long + + +**`sophos.xg.red_id`** +: RED ID + +type: keyword + + +**`sophos.xg.referer`** +: Referer + +type: keyword + + +**`sophos.xg.remote_ip`** +: Remote IP + +type: ip + + +**`sophos.xg.remotenetwork`** +: remotenetwork + +type: keyword + + +**`sophos.xg.reported_host`** +: Reported Host + +type: keyword + + +**`sophos.xg.reported_ip`** +: Reported IP + +type: keyword + + +**`sophos.xg.reports`** +: Reports + +type: float + + +**`sophos.xg.rule_priority`** +: Priority of IPS policy + +type: keyword + + +**`sophos.xg.sent_bytes`** +: Total number of bytes sent + +type: long + + +**`sophos.xg.sent_pkts`** +: Total number of packets sent + +type: long + + +**`sophos.xg.server`** +: Server + +type: keyword + + +**`sophos.xg.sessionid`** +: Sessionid + +type: keyword + + +**`sophos.xg.sha1sum`** +: SHA1 checksum of the item being analyzed + +type: keyword + + +**`sophos.xg.signature`** +: Signature + +type: float + + +**`sophos.xg.signature_id`** +: Signature ID + +type: keyword + + +**`sophos.xg.signature_msg`** +: Signature message + +type: keyword + + +**`sophos.xg.site_category`** +: Site Category + +type: keyword + + +**`sophos.xg.source`** +: Source + +type: keyword + + +**`sophos.xg.sourceip`** +: Original source IP address of traffic + +type: ip + + +**`sophos.xg.spamaction`** +: Spam Action + +type: keyword + + +**`sophos.xg.sqli`** +: Related SQLi caught by the WAF + +type: keyword + + +**`sophos.xg.src_country_code`** +: Code of the country to which the source IP belongs + +type: keyword + + 
+**`sophos.xg.src_domainname`** +: Sender domain name + +type: keyword + + +**`sophos.xg.src_ip`** +: Original source IP address of traffic + +type: ip + + +**`sophos.xg.src_mac`** +: Original source MAC address of traffic + +type: keyword + + +**`sophos.xg.src_port`** +: Original source port of TCP and UDP traffic + +type: integer + + +**`sophos.xg.src_zone_type`** +: Type of source zone + +type: keyword + + +**`sophos.xg.ssid`** +: Configured SSID name. + +type: keyword + + +**`sophos.xg.start_time`** +: Start time + +type: date + + +**`sophos.xg.starttime`** +: Starttime + +type: date + + +**`sophos.xg.status`** +: Ultimate status of traffic – Allowed or Denied + +type: keyword + + +**`sophos.xg.status_code`** +: Status code + +type: keyword + + +**`sophos.xg.subject`** +: Email subject + +type: keyword + + +**`sophos.xg.syslog_server_name`** +: Syslog server name. + +type: keyword + + +**`sophos.xg.system_cpu`** +: System CPU usage (%) + +type: float + + +**`sophos.xg.target`** +: Platform of the traffic. + +type: keyword + + +**`sophos.xg.temp`** +: Temp + +type: float + + +**`sophos.xg.threatname`** +: ATP threatname + +type: keyword + + +**`sophos.xg.timestamp`** +: timestamp + +type: date + + +**`sophos.xg.timezone`** +: Time (hh:mm:ss) when the event occurred + +type: keyword + + +**`sophos.xg.to_email_address`** +: Recipient email address + +type: keyword + + +**`sophos.xg.total_memory`** +: Total Memory + +type: integer + + +**`sophos.xg.trans_dst_ip`** +: Translated destination IP address for outgoing traffic + +type: ip + + +**`sophos.xg.trans_dst_port`** +: Translated destination port for outgoing traffic + +type: integer + + +**`sophos.xg.trans_src_ip`** +: Translated source IP address for outgoing traffic + +type: ip + + +**`sophos.xg.trans_src_port`** +: Translated source port for outgoing traffic + +type: integer + + +**`sophos.xg.transaction_id`** +: Transaction ID + +type: keyword + + +**`sophos.xg.transactionid`** +: Transaction ID of the AV scan. + +type: keyword + + +**`sophos.xg.transmitteddrops`** +: transmitted drops + +type: long + + +**`sophos.xg.transmittederrors`** +: transmitted errors + +type: keyword + + +**`sophos.xg.transmittedkbits`** +: transmitted kbits + +type: long + + +**`sophos.xg.unit`** +: unit + +type: keyword + + +**`sophos.xg.updatedip`** +: updatedip + +type: ip + + +**`sophos.xg.upload_file_name`** +: Upload file name + +type: keyword + + +**`sophos.xg.upload_file_type`** +: Upload file type + +type: keyword + + +**`sophos.xg.url`** +: URL from which the virus was downloaded + +type: keyword + + +**`sophos.xg.used`** +: used + +type: integer + + +**`sophos.xg.used_quota`** +: Used Quota + +type: keyword + + +**`sophos.xg.user`** +: User + +type: keyword + + +**`sophos.xg.user_cpu`** +: User CPU usage (%) + +type: float + + +**`sophos.xg.user_gp`** +: Group name to which the user belongs. + +type: keyword + + +**`sophos.xg.user_group`** +: Group name to which the user belongs + +type: keyword + + +**`sophos.xg.user_name`** +: user_name + +type: keyword + + +**`sophos.xg.users`** +: Number of users from System Health / Live User events. 
+ +type: long + + +**`sophos.xg.vconn_id`** +: Connection ID of the master connection + +type: integer + + +**`sophos.xg.virus`** +: virus name + +type: keyword + + +**`sophos.xg.web_policy_id`** +: Web policy ID + +type: keyword + + +**`sophos.xg.website`** +: Website + +type: keyword + + +**`sophos.xg.xss`** +: related XSS caught by the WAF + +type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-suricata.md b/docs/reference/filebeat/exported-fields-suricata.md new file mode 100644 index 000000000000..685bb47faf72 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-suricata.md @@ -0,0 +1,861 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-suricata.html +--- + +# Suricata fields [exported-fields-suricata] + +Module for handling the EVE JSON logs produced by Suricata. + + +## suricata [_suricata] + +Fields from the Suricata EVE log file. + + +## eve [_eve] + +Fields exported by the EVE JSON logs + +**`suricata.eve.event_type`** +: type: keyword + + +**`suricata.eve.app_proto_orig`** +: type: keyword + + +**`suricata.eve.tcp.tcp_flags`** +: type: keyword + + +**`suricata.eve.tcp.psh`** +: type: boolean + + +**`suricata.eve.tcp.tcp_flags_tc`** +: type: keyword + + +**`suricata.eve.tcp.ack`** +: type: boolean + + +**`suricata.eve.tcp.syn`** +: type: boolean + + +**`suricata.eve.tcp.state`** +: type: keyword + + +**`suricata.eve.tcp.tcp_flags_ts`** +: type: keyword + + +**`suricata.eve.tcp.rst`** +: type: boolean + + +**`suricata.eve.tcp.fin`** +: type: boolean + + +**`suricata.eve.fileinfo.sha1`** +: type: keyword + + +**`suricata.eve.fileinfo.tx_id`** +: type: long + + +**`suricata.eve.fileinfo.state`** +: type: keyword + + +**`suricata.eve.fileinfo.stored`** +: type: boolean + + +**`suricata.eve.fileinfo.gaps`** +: type: boolean + + +**`suricata.eve.fileinfo.sha256`** +: type: keyword + + +**`suricata.eve.fileinfo.md5`** +: type: keyword + + +**`suricata.eve.icmp_type`** +: type: long + + +**`suricata.eve.pcap_cnt`** +: type: long + + +**`suricata.eve.dns.type`** +: type: keyword + + +**`suricata.eve.dns.rrtype`** +: type: keyword + + +**`suricata.eve.dns.rrname`** +: type: keyword + + +**`suricata.eve.dns.rdata`** +: type: keyword + + +**`suricata.eve.dns.tx_id`** +: type: long + + +**`suricata.eve.dns.ttl`** +: type: long + + +**`suricata.eve.dns.rcode`** +: type: keyword + + +**`suricata.eve.dns.id`** +: type: long + + +**`suricata.eve.flow_id`** +: type: keyword + + +**`suricata.eve.email.status`** +: type: keyword + + +**`suricata.eve.icmp_code`** +: type: long + + +**`suricata.eve.http.redirect`** +: type: keyword + + +**`suricata.eve.http.protocol`** +: type: keyword + + +**`suricata.eve.http.http_content_type`** +: type: keyword + + +**`suricata.eve.in_iface`** +: type: keyword + + +**`suricata.eve.alert.metadata`** +: Metadata about the alert. 
+ +type: flattened + + +**`suricata.eve.alert.category`** +: type: keyword + + +**`suricata.eve.alert.rev`** +: type: long + + +**`suricata.eve.alert.gid`** +: type: long + + +**`suricata.eve.alert.signature`** +: type: keyword + + +**`suricata.eve.alert.signature_id`** +: type: long + + +**`suricata.eve.alert.protocols`** +: type: keyword + + +**`suricata.eve.alert.attack_target`** +: type: keyword + + +**`suricata.eve.alert.capec_id`** +: type: keyword + + +**`suricata.eve.alert.cwe_id`** +: type: keyword + + +**`suricata.eve.alert.malware`** +: type: keyword + + +**`suricata.eve.alert.cve`** +: type: keyword + + +**`suricata.eve.alert.cvss_v2_base`** +: type: keyword + + +**`suricata.eve.alert.cvss_v2_temporal`** +: type: keyword + + +**`suricata.eve.alert.cvss_v3_base`** +: type: keyword + + +**`suricata.eve.alert.cvss_v3_temporal`** +: type: keyword + + +**`suricata.eve.alert.priority`** +: type: keyword + + +**`suricata.eve.alert.hostile`** +: type: keyword + + +**`suricata.eve.alert.infected`** +: type: keyword + + +**`suricata.eve.alert.created_at`** +: type: date + + +**`suricata.eve.alert.updated_at`** +: type: date + + +**`suricata.eve.alert.classtype`** +: type: keyword + + +**`suricata.eve.alert.rule_source`** +: type: keyword + + +**`suricata.eve.alert.sid`** +: type: keyword + + +**`suricata.eve.alert.affected_product`** +: type: keyword + + +**`suricata.eve.alert.deployment`** +: type: keyword + + +**`suricata.eve.alert.former_category`** +: type: keyword + + +**`suricata.eve.alert.mitre_tool_id`** +: type: keyword + + +**`suricata.eve.alert.performance_impact`** +: type: keyword + + +**`suricata.eve.alert.signature_severity`** +: type: keyword + + +**`suricata.eve.alert.tag`** +: type: keyword + + +**`suricata.eve.ssh.client.proto_version`** +: type: keyword + + +**`suricata.eve.ssh.client.software_version`** +: type: keyword + + +**`suricata.eve.ssh.server.proto_version`** +: type: keyword + + +**`suricata.eve.ssh.server.software_version`** +: type: keyword + + +**`suricata.eve.stats.capture.kernel_packets`** +: type: long + + +**`suricata.eve.stats.capture.kernel_drops`** +: type: long + + +**`suricata.eve.stats.capture.kernel_ifdrops`** +: type: long + + +**`suricata.eve.stats.uptime`** +: type: long + + +**`suricata.eve.stats.detect.alert`** +: type: long + + +**`suricata.eve.stats.http.memcap`** +: type: long + + +**`suricata.eve.stats.http.memuse`** +: type: long + + +**`suricata.eve.stats.file_store.open_files`** +: type: long + + +**`suricata.eve.stats.defrag.max_frag_hits`** +: type: long + + +**`suricata.eve.stats.defrag.ipv4.timeouts`** +: type: long + + +**`suricata.eve.stats.defrag.ipv4.fragments`** +: type: long + + +**`suricata.eve.stats.defrag.ipv4.reassembled`** +: type: long + + +**`suricata.eve.stats.defrag.ipv6.timeouts`** +: type: long + + +**`suricata.eve.stats.defrag.ipv6.fragments`** +: type: long + + +**`suricata.eve.stats.defrag.ipv6.reassembled`** +: type: long + + +**`suricata.eve.stats.flow.tcp_reuse`** +: type: long + + +**`suricata.eve.stats.flow.udp`** +: type: long + + +**`suricata.eve.stats.flow.memcap`** +: type: long + + +**`suricata.eve.stats.flow.emerg_mode_entered`** +: type: long + + +**`suricata.eve.stats.flow.emerg_mode_over`** +: type: long + + +**`suricata.eve.stats.flow.tcp`** +: type: long + + +**`suricata.eve.stats.flow.icmpv6`** +: type: long + + +**`suricata.eve.stats.flow.icmpv4`** +: type: long + + +**`suricata.eve.stats.flow.spare`** +: type: long + + +**`suricata.eve.stats.flow.memuse`** +: type: long + + 
+**`suricata.eve.stats.tcp.pseudo_failed`** +: type: long + + +**`suricata.eve.stats.tcp.ssn_memcap_drop`** +: type: long + + +**`suricata.eve.stats.tcp.insert_data_overlap_fail`** +: type: long + + +**`suricata.eve.stats.tcp.sessions`** +: type: long + + +**`suricata.eve.stats.tcp.pseudo`** +: type: long + + +**`suricata.eve.stats.tcp.synack`** +: type: long + + +**`suricata.eve.stats.tcp.insert_data_normal_fail`** +: type: long + + +**`suricata.eve.stats.tcp.syn`** +: type: long + + +**`suricata.eve.stats.tcp.memuse`** +: type: long + + +**`suricata.eve.stats.tcp.invalid_checksum`** +: type: long + + +**`suricata.eve.stats.tcp.segment_memcap_drop`** +: type: long + + +**`suricata.eve.stats.tcp.overlap`** +: type: long + + +**`suricata.eve.stats.tcp.insert_list_fail`** +: type: long + + +**`suricata.eve.stats.tcp.rst`** +: type: long + + +**`suricata.eve.stats.tcp.stream_depth_reached`** +: type: long + + +**`suricata.eve.stats.tcp.reassembly_memuse`** +: type: long + + +**`suricata.eve.stats.tcp.reassembly_gap`** +: type: long + + +**`suricata.eve.stats.tcp.overlap_diff_data`** +: type: long + + +**`suricata.eve.stats.tcp.no_flow`** +: type: long + + +**`suricata.eve.stats.decoder.avg_pkt_size`** +: type: long + + +**`suricata.eve.stats.decoder.bytes`** +: type: long + + +**`suricata.eve.stats.decoder.tcp`** +: type: long + + +**`suricata.eve.stats.decoder.raw`** +: type: long + + +**`suricata.eve.stats.decoder.ppp`** +: type: long + + +**`suricata.eve.stats.decoder.vlan_qinq`** +: type: long + + +**`suricata.eve.stats.decoder.null`** +: type: long + + +**`suricata.eve.stats.decoder.ltnull.unsupported_type`** +: type: long + + +**`suricata.eve.stats.decoder.ltnull.pkt_too_small`** +: type: long + + +**`suricata.eve.stats.decoder.invalid`** +: type: long + + +**`suricata.eve.stats.decoder.gre`** +: type: long + + +**`suricata.eve.stats.decoder.ipv4`** +: type: long + + +**`suricata.eve.stats.decoder.ipv6`** +: type: long + + +**`suricata.eve.stats.decoder.pkts`** +: type: long + + +**`suricata.eve.stats.decoder.ipv6_in_ipv6`** +: type: long + + +**`suricata.eve.stats.decoder.ipraw.invalid_ip_version`** +: type: long + + +**`suricata.eve.stats.decoder.pppoe`** +: type: long + + +**`suricata.eve.stats.decoder.udp`** +: type: long + + +**`suricata.eve.stats.decoder.dce.pkt_too_small`** +: type: long + + +**`suricata.eve.stats.decoder.vlan`** +: type: long + + +**`suricata.eve.stats.decoder.sctp`** +: type: long + + +**`suricata.eve.stats.decoder.max_pkt_size`** +: type: long + + +**`suricata.eve.stats.decoder.teredo`** +: type: long + + +**`suricata.eve.stats.decoder.mpls`** +: type: long + + +**`suricata.eve.stats.decoder.sll`** +: type: long + + +**`suricata.eve.stats.decoder.icmpv6`** +: type: long + + +**`suricata.eve.stats.decoder.icmpv4`** +: type: long + + +**`suricata.eve.stats.decoder.erspan`** +: type: long + + +**`suricata.eve.stats.decoder.ethernet`** +: type: long + + +**`suricata.eve.stats.decoder.ipv4_in_ipv6`** +: type: long + + +**`suricata.eve.stats.decoder.ieee8021ah`** +: type: long + + +**`suricata.eve.stats.dns.memcap_global`** +: type: long + + +**`suricata.eve.stats.dns.memcap_state`** +: type: long + + +**`suricata.eve.stats.dns.memuse`** +: type: long + + +**`suricata.eve.stats.flow_mgr.rows_busy`** +: type: long + + +**`suricata.eve.stats.flow_mgr.flows_timeout`** +: type: long + + +**`suricata.eve.stats.flow_mgr.flows_notimeout`** +: type: long + + +**`suricata.eve.stats.flow_mgr.rows_skipped`** +: type: long + + +**`suricata.eve.stats.flow_mgr.closed_pruned`** +: 
type: long + + +**`suricata.eve.stats.flow_mgr.new_pruned`** +: type: long + + +**`suricata.eve.stats.flow_mgr.flows_removed`** +: type: long + + +**`suricata.eve.stats.flow_mgr.bypassed_pruned`** +: type: long + + +**`suricata.eve.stats.flow_mgr.est_pruned`** +: type: long + + +**`suricata.eve.stats.flow_mgr.flows_timeout_inuse`** +: type: long + + +**`suricata.eve.stats.flow_mgr.flows_checked`** +: type: long + + +**`suricata.eve.stats.flow_mgr.rows_maxlen`** +: type: long + + +**`suricata.eve.stats.flow_mgr.rows_checked`** +: type: long + + +**`suricata.eve.stats.flow_mgr.rows_empty`** +: type: long + + +**`suricata.eve.stats.app_layer.flow.tls`** +: type: long + + +**`suricata.eve.stats.app_layer.flow.ftp`** +: type: long + + +**`suricata.eve.stats.app_layer.flow.http`** +: type: long + + +**`suricata.eve.stats.app_layer.flow.failed_udp`** +: type: long + + +**`suricata.eve.stats.app_layer.flow.dns_udp`** +: type: long + + +**`suricata.eve.stats.app_layer.flow.dns_tcp`** +: type: long + + +**`suricata.eve.stats.app_layer.flow.smtp`** +: type: long + + +**`suricata.eve.stats.app_layer.flow.failed_tcp`** +: type: long + + +**`suricata.eve.stats.app_layer.flow.msn`** +: type: long + + +**`suricata.eve.stats.app_layer.flow.ssh`** +: type: long + + +**`suricata.eve.stats.app_layer.flow.imap`** +: type: long + + +**`suricata.eve.stats.app_layer.flow.dcerpc_udp`** +: type: long + + +**`suricata.eve.stats.app_layer.flow.dcerpc_tcp`** +: type: long + + +**`suricata.eve.stats.app_layer.flow.smb`** +: type: long + + +**`suricata.eve.stats.app_layer.tx.tls`** +: type: long + + +**`suricata.eve.stats.app_layer.tx.ftp`** +: type: long + + +**`suricata.eve.stats.app_layer.tx.http`** +: type: long + + +**`suricata.eve.stats.app_layer.tx.dns_udp`** +: type: long + + +**`suricata.eve.stats.app_layer.tx.dns_tcp`** +: type: long + + +**`suricata.eve.stats.app_layer.tx.smtp`** +: type: long + + +**`suricata.eve.stats.app_layer.tx.ssh`** +: type: long + + +**`suricata.eve.stats.app_layer.tx.dcerpc_udp`** +: type: long + + +**`suricata.eve.stats.app_layer.tx.dcerpc_tcp`** +: type: long + + +**`suricata.eve.stats.app_layer.tx.smb`** +: type: long + + +**`suricata.eve.tls.notbefore`** +: type: date + + +**`suricata.eve.tls.issuerdn`** +: type: keyword + + +**`suricata.eve.tls.sni`** +: type: keyword + + +**`suricata.eve.tls.version`** +: type: keyword + + +**`suricata.eve.tls.session_resumed`** +: type: boolean + + +**`suricata.eve.tls.fingerprint`** +: type: keyword + + +**`suricata.eve.tls.serial`** +: type: keyword + + +**`suricata.eve.tls.notafter`** +: type: date + + +**`suricata.eve.tls.subject`** +: type: keyword + + +**`suricata.eve.tls.ja3s.string`** +: type: keyword + + +**`suricata.eve.tls.ja3s.hash`** +: type: keyword + + +**`suricata.eve.tls.ja3.string`** +: type: keyword + + +**`suricata.eve.tls.ja3.hash`** +: type: keyword + + +**`suricata.eve.app_proto_ts`** +: type: keyword + + +**`suricata.eve.flow.age`** +: type: long + + +**`suricata.eve.flow.state`** +: type: keyword + + +**`suricata.eve.flow.reason`** +: type: keyword + + +**`suricata.eve.flow.alerted`** +: type: boolean + + +**`suricata.eve.tx_id`** +: type: long + + +**`suricata.eve.app_proto_tc`** +: type: keyword + + +**`suricata.eve.smtp.rcpt_to`** +: type: keyword + + +**`suricata.eve.smtp.mail_from`** +: type: keyword + + +**`suricata.eve.smtp.helo`** +: type: keyword + + +**`suricata.eve.app_proto_expected`** +: type: keyword + + diff --git a/docs/reference/filebeat/exported-fields-system.md 
b/docs/reference/filebeat/exported-fields-system.md new file mode 100644 index 000000000000..b0021fd1a8bb --- /dev/null +++ b/docs/reference/filebeat/exported-fields-system.md @@ -0,0 +1,235 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-system.html +--- + +# System fields [exported-fields-system] + +Module for parsing system log files. + + +## system [_system] + +Fields from the system log files. + + +## auth [_auth_2] + +Fields from the Linux authorization logs. + +**`system.auth.timestamp`** +: type: alias + +alias to: @timestamp + + +**`system.auth.hostname`** +: type: alias + +alias to: host.hostname + + +**`system.auth.program`** +: type: alias + +alias to: process.name + + +**`system.auth.pid`** +: type: alias + +alias to: process.pid + + +**`system.auth.message`** +: type: alias + +alias to: message + + +**`system.auth.user`** +: type: alias + +alias to: user.name + + +**`system.auth.ssh.method`** +: The SSH authentication method. Can be one of "password" or "publickey". + + +**`system.auth.ssh.signature`** +: The signature of the client public key. + + +**`system.auth.ssh.dropped_ip`** +: The client IP from SSH connections that are open and immediately dropped. + +type: ip + + +**`system.auth.ssh.event`** +: The SSH event as found in the logs (Accepted, Invalid, Failed, etc.) + +example: Accepted + + +**`system.auth.ssh.ip`** +: type: alias + +alias to: source.ip + + +**`system.auth.ssh.port`** +: type: alias + +alias to: source.port + + +**`system.auth.ssh.geoip.continent_name`** +: type: alias + +alias to: source.geo.continent_name + + +**`system.auth.ssh.geoip.country_iso_code`** +: type: alias + +alias to: source.geo.country_iso_code + + +**`system.auth.ssh.geoip.location`** +: type: alias + +alias to: source.geo.location + + +**`system.auth.ssh.geoip.region_name`** +: type: alias + +alias to: source.geo.region_name + + +**`system.auth.ssh.geoip.city_name`** +: type: alias + +alias to: source.geo.city_name + + +**`system.auth.ssh.geoip.region_iso_code`** +: type: alias + +alias to: source.geo.region_iso_code + + + +## sudo [_sudo] + +Fields specific to events created by the `sudo` command. + +**`system.auth.sudo.error`** +: The error message in case the sudo command failed. + +example: user NOT in sudoers + + +**`system.auth.sudo.tty`** +: The TTY where the sudo command is executed. + + +**`system.auth.sudo.pwd`** +: The current directory where the sudo command is executed. + + +**`system.auth.sudo.user`** +: The target user to which the sudo command is switching. + +example: root + + +**`system.auth.sudo.command`** +: The command executed via sudo. + + + +## useradd [_useradd] + +Fields specific to events created by the `useradd` command. + +**`system.auth.useradd.home`** +: The home folder for the new user. + + +**`system.auth.useradd.shell`** +: The default shell for the new user. + + +**`system.auth.useradd.name`** +: type: alias + +alias to: user.name + + +**`system.auth.useradd.uid`** +: type: alias + +alias to: user.id + + +**`system.auth.useradd.gid`** +: type: alias + +alias to: group.id + + + +## groupadd [_groupadd] + +Fields specific to events created by the `groupadd` command. + +**`system.auth.groupadd.name`** +: type: alias + +alias to: group.name + + +**`system.auth.groupadd.gid`** +: type: alias + +alias to: group.id + + + +## syslog [_syslog_3] + +Contains fields from the syslog system logs. 
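+For context, a minimal sketch of enabling the fileset that populates these fields (the path is an assumption; common locations are /var/log/syslog on Debian-family systems and /var/log/messages on RedHat-family systems):
+
+```yaml
+- module: system
+  syslog:
+    enabled: true
+    # Assumed log location; override to match your distribution.
+    var.paths: ["/var/log/syslog"]
+```
+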
+ +**`system.syslog.timestamp`** +: type: alias + +alias to: @timestamp + + +**`system.syslog.hostname`** +: type: alias + +alias to: host.hostname + + +**`system.syslog.program`** +: type: alias + +alias to: process.name + + +**`system.syslog.pid`** +: type: alias + +alias to: process.pid + + +**`system.syslog.message`** +: type: alias + +alias to: message + + diff --git a/docs/reference/filebeat/exported-fields-threatintel.md b/docs/reference/filebeat/exported-fields-threatintel.md new file mode 100644 index 000000000000..20a9378de79b --- /dev/null +++ b/docs/reference/filebeat/exported-fields-threatintel.md @@ -0,0 +1,815 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-threatintel.html +--- + +# threatintel fields [exported-fields-threatintel] + +Threat intelligence Filebeat Module. + +**`threat.indicator.file.hash.tlsh`** +: The file’s TLSH hash, if available. + +type: keyword + + +**`threat.indicator.file.hash.sha384`** +: The file’s sha384 hash, if available. + +type: keyword + + +**`threat.feed.name`** +: type: keyword + + +**`threat.feed.dashboard_id`** +: type: keyword + + + +## abusech.malware [_abusech_malware] + +Fields for AbuseCH Malware Threat Intel + +**`abusech.malware.file_type`** +: File type guessed by URLhaus. + +type: keyword + + +**`abusech.malware.signature`** +: Malware family. + +type: keyword + + +**`abusech.malware.urlhaus_download`** +: Location (URL) where you can download a copy of this file. + +type: keyword + + +**`abusech.malware.virustotal.result`** +: AV detection ratio. + +type: keyword + + +**`abusech.malware.virustotal.percent`** +: AV detection in percent. + +type: float + + +**`abusech.malware.virustotal.link`** +: Link to the Virustotal report. + +type: keyword + + + +## abusech.url [_abusech_url] + +Fields for AbuseCH Malware Threat Intel + +**`abusech.url.id`** +: The ID of the URL. + +type: keyword + + +**`abusech.url.urlhaus_reference`** +: Link to URLhaus entry. + +type: keyword + + +**`abusech.url.url_status`** +: The current status of the URL. Possible values are: online, offline, and unknown. + +type: keyword + + +**`abusech.url.threat`** +: The threat corresponding to this malware URL. + +type: keyword + + +**`abusech.url.blacklists.surbl`** +: SURBL blacklist status. Possible values are: listed and not_listed. + +type: keyword + + +**`abusech.url.blacklists.spamhaus_dbl`** +: Spamhaus DBL blacklist status. + +type: keyword + + +**`abusech.url.reporter`** +: The Twitter handle of the reporter that has reported this malware URL (or anonymous). + +type: keyword + + +**`abusech.url.larted`** +: Indicates whether the malware URL has been reported to the hosting provider (true or false). + +type: boolean + + +**`abusech.url.tags`** +: A list of tags associated with the queried malware URL. + +type: keyword + + + +## anomali.limo [_anomali_limo] + +Fields for Anomali Threat Intel + +**`anomali.limo.id`** +: The ID of the indicator. + +type: keyword + + +**`anomali.limo.name`** +: The name of the indicator. + +type: keyword + + +**`anomali.limo.pattern`** +: The pattern ID of the indicator. + +type: keyword + + +**`anomali.limo.valid_from`** +: When the indicator was first found or is considered valid. 
+ +type: date + + +**`anomali.limo.modified`** +: When the indicator was last modified. + +type: date + + +**`anomali.limo.labels`** +: The labels related to the indicator. + +type: keyword + + +**`anomali.limo.indicator`** +: The value of the indicator, for example if the type is domain, this would be the value. + +type: keyword + + +**`anomali.limo.description`** +: A description of the indicator. + +type: keyword + + +**`anomali.limo.title`** +: Title describing the indicator. + +type: keyword + + +**`anomali.limo.content`** +: Extra text or descriptive content related to the indicator. + +type: keyword + + +**`anomali.limo.type`** +: The indicator type, for example "domain, email, FileHash-SHA256". + +type: keyword + + +**`anomali.limo.object_marking_refs`** +: The STIX reference object. + +type: keyword + + + +## anomali.threatstream [_anomali_threatstream] + +Fields for Anomali ThreatStream + +**`anomali.threatstream.classification`** +: Indicates whether an indicator is private or from a public feed and available publicly. Possible values: private, public. + +type: keyword + +example: private + + +**`anomali.threatstream.confidence`** +: The measure of the accuracy (from 0 to 100) assigned by ThreatStream’s predictive analytics technology to indicators. + +type: short + + +**`anomali.threatstream.detail2`** +: Detail text for indicator. + +type: text + +example: Imported by user 42. + + +**`anomali.threatstream.id`** +: The ID of the indicator. + +type: keyword + + +**`anomali.threatstream.import_session_id`** +: ID of the import session that created the indicator on ThreatStream. + +type: keyword + + +**`anomali.threatstream.itype`** +: Indicator type. Possible values: "apt_domain", "apt_email", "apt_ip", "apt_url", "bot_ip", "c2_domain", "c2_ip", "c2_url", "i2p_ip", "mal_domain", "mal_email", "mal_ip", "mal_md5", "mal_url", "parked_ip", "phish_email", "phish_ip", "phish_url", "scan_ip", "spam_domain", "ssh_ip", "suspicious_domain", "tor_ip" and "torrent_tracker_url". + +type: keyword + + +**`anomali.threatstream.maltype`** +: Information regarding a malware family, a CVE ID, or another attack or threat, associated with the indicator. + +type: wildcard + + +**`anomali.threatstream.md5`** +: Hash for the indicator. + +type: keyword + + +**`anomali.threatstream.resource_uri`** +: Relative URI for the indicator details. + +type: keyword + + +**`anomali.threatstream.severity`** +: Criticality associated with the threat feed that supplied the indicator. Possible values: low, medium, high, very-high. + +type: keyword + + +**`anomali.threatstream.source`** +: Source for the indicator. + +type: keyword + +example: Analyst + + +**`anomali.threatstream.source_feed_id`** +: ID for the integrator source. + +type: keyword + + +**`anomali.threatstream.state`** +: State for this indicator. + +type: keyword + +example: active + + +**`anomali.threatstream.trusted_circle_ids`** +: ID of the trusted circle that imported the indicator. + +type: keyword + + +**`anomali.threatstream.update_id`** +: Update ID. + +type: keyword + + +**`anomali.threatstream.url`** +: URL for the indicator. + +type: keyword + + +**`anomali.threatstream.value_type`** +: Data type of the indicator. Possible values: ip, domain, url, email, md5. + +type: keyword + + + +## abusech.malwarebazaar [_abusech_malwarebazaar] + +Fields for Malware Bazaar Threat Intel + +**`abusech.malwarebazaar.file_type`** +: File type guessed by Malware Bazaar. + +type: keyword + + +**`abusech.malwarebazaar.signature`** +: Malware family. 
+ +type: keyword + + +**`abusech.malwarebazaar.tags`** +: A list of tags associated with the queried malware sample. + +type: keyword + + +**`abusech.malwarebazaar.intelligence.downloads`** +: Number of downloads from MalwareBazaar. + +type: long + + +**`abusech.malwarebazaar.intelligence.uploads`** +: Number of uploads from MalwareBazaar. + +type: long + + +**`abusech.malwarebazaar.intelligence.mail.Generic`** +: Malware seen in generic spam traffic. + +type: keyword + + +**`abusech.malwarebazaar.intelligence.mail.IT`** +: Malware seen in IT spam traffic. + +type: keyword + + +**`abusech.malwarebazaar.anonymous`** +: Identifies if the sample was submitted anonymously. + +type: long + + +**`abusech.malwarebazaar.code_sign`** +: Code signing information for the sample. + +type: nested + + + +## misp [_misp_2] + +Fields for MISP Threat Intel + +**`misp.id`** +: Attribute ID. + +type: keyword + + +**`misp.orgc_id`** +: Organization Community ID of the event. + +type: keyword + + +**`misp.org_id`** +: Organization ID of the event. + +type: keyword + + +**`misp.threat_level_id`** +: Threat level from 5 to 1, where 1 is the most critical. + +type: long + + +**`misp.info`** +: Additional text or information related to the event. + +type: keyword + + +**`misp.published`** +: When the event was published. + +type: boolean + + +**`misp.uuid`** +: The UUID of the event object. + +type: keyword + + +**`misp.date`** +: The date when the event object was created. + +type: date + + +**`misp.attribute_count`** +: How many attributes are included in a single event object. + +type: long + + +**`misp.timestamp`** +: The timestamp when the event object was created. + +type: date + + +**`misp.distribution`** +: Distribution type related to MISP. + +type: keyword + + +**`misp.proposal_email_lock`** +: Settings configured on MISP for email lock on this event object. + +type: boolean + + +**`misp.locked`** +: If the current MISP event object is locked or not. + +type: boolean + + +**`misp.publish_timestamp`** +: At what time the event object was published. + +type: date + + +**`misp.sharing_group_id`** +: The ID of the grouped events or sources of the event. + +type: keyword + + +**`misp.disable_correlation`** +: If correlation is disabled on the MISP event object. + +type: boolean + + +**`misp.extends_uuid`** +: The UUID of the event object it might extend. + +type: keyword + + +**`misp.org.id`** +: The organization ID related to the event object. + +type: keyword + + +**`misp.org.name`** +: The organization name related to the event object. + +type: keyword + + +**`misp.org.uuid`** +: The UUID of the organization related to the event object. + +type: keyword + + +**`misp.org.local`** +: If the event object is local or from a remote source. + +type: boolean + + +**`misp.orgc.id`** +: The ID of the Organization Community that the event object was reported from. + +type: keyword + + +**`misp.orgc.name`** +: The name of the Organization Community that the event object was reported from. + +type: keyword + + +**`misp.orgc.uuid`** +: The UUID of the Organization Community that the event object was reported from. + +type: keyword + + +**`misp.orgc.local`** +: If the Organization Community was local or synced from a remote source. + +type: boolean + + +**`misp.attribute.id`** +: The ID of the attribute related to the event object. + +type: keyword + + +**`misp.attribute.type`** +: The type of the attribute related to the event object. For example email, ipv4, or sha1. 
+ +type: keyword + + +**`misp.attribute.category`** +: The category of the attribute related to the event object. For example "Network Activity". + +type: keyword + + +**`misp.attribute.to_ids`** +: If the attribute should be automatically synced with an IDS. + +type: boolean + + +**`misp.attribute.uuid`** +: The UUID of the attribute related to the event. + +type: keyword + + +**`misp.attribute.event_id`** +: The local event ID of the attribute related to the event. + +type: keyword + + +**`misp.attribute.distribution`** +: How the attribute has been distributed, represented by integer numbers. + +type: long + + +**`misp.attribute.timestamp`** +: The timestamp at which the attribute was attached to the event object. + +type: date + + +**`misp.attribute.comment`** +: Comments made to the attribute itself. + +type: keyword + + +**`misp.attribute.sharing_group_id`** +: The group ID of the sharing group related to the specific attribute. + +type: keyword + + +**`misp.attribute.deleted`** +: If the attribute has been removed from the event object. + +type: boolean + + +**`misp.attribute.disable_correlation`** +: If correlation has been enabled on the attribute related to the event object. + +type: boolean + + +**`misp.attribute.object_id`** +: The ID of the Object to which the attribute is attached. + +type: keyword + + +**`misp.attribute.object_relation`** +: The type of relation the attribute has with the event object itself. + +type: keyword + + +**`misp.attribute.value`** +: The value of the attribute, depending on the type, for example "url", "sha1", or "email-src". + +type: keyword + + +**`misp.context.attribute.id`** +: The ID of the secondary attribute related to the event object. + +type: keyword + + +**`misp.context.attribute.type`** +: The type of the secondary attribute related to the event object. For example email, ipv4, or sha1. + +type: keyword + + +**`misp.context.attribute.category`** +: The category of the secondary attribute related to the event object. For example "Network Activity". + +type: keyword + + +**`misp.context.attribute.to_ids`** +: If the secondary attribute should be automatically synced with an IDS. + +type: boolean + + +**`misp.context.attribute.uuid`** +: The UUID of the secondary attribute related to the event. + +type: keyword + + +**`misp.context.attribute.event_id`** +: The local event ID of the secondary attribute related to the event. + +type: keyword + + +**`misp.context.attribute.distribution`** +: How the secondary attribute has been distributed, represented by integer numbers. + +type: long + + +**`misp.context.attribute.timestamp`** +: The timestamp at which the secondary attribute was attached to the event object. + +type: date + + +**`misp.context.attribute.comment`** +: Comments made to the secondary attribute itself. + +type: keyword + + +**`misp.context.attribute.sharing_group_id`** +: The group ID of the sharing group related to the specific secondary attribute. + +type: keyword + + +**`misp.context.attribute.deleted`** +: If the secondary attribute has been removed from the event object. + +type: boolean + + +**`misp.context.attribute.disable_correlation`** +: If correlation has been enabled on the secondary attribute related to the event object. + +type: boolean + + +**`misp.context.attribute.object_id`** +: The ID of the Object to which the secondary attribute is attached. + +type: keyword + + +**`misp.context.attribute.object_relation`** +: The type of relation the secondary attribute has with the event object itself. 
+ +type: keyword + + +**`misp.context.attribute.value`** +: The value of the secondary attribute, which depends on the type (for example url, sha1, or email-src). + +type: keyword + + + +## otx [_otx] + +Fields for OTX Threat Intel + +**`otx.id`** +: The ID of the indicator. + +type: keyword + + +**`otx.indicator`** +: The value of the indicator; for example, if the type is domain, this field holds the domain name. + +type: keyword + + +**`otx.description`** +: A description of the indicator. + +type: keyword + + +**`otx.title`** +: Title describing the indicator. + +type: keyword + + +**`otx.content`** +: Extra text or descriptive content related to the indicator. + +type: keyword + + +**`otx.type`** +: The indicator type, for example "domain", "email", or "FileHash-SHA256". + +type: keyword + + + +## threatq [_threatq] + +Fields for ThreatQ Threat Library + +**`threatq.updated_at`** +: Last modification time. + +type: date + + +**`threatq.created_at`** +: Object creation time. + +type: date + + +**`threatq.expires_at`** +: Expiration time. + +type: date + + +**`threatq.expires_calculated_at`** +: Expiration calculation time. + +type: date + + +**`threatq.published_at`** +: Object publication time. + +type: date + + +**`threatq.status`** +: Object status within the Threat Library. + +type: keyword + + +**`threatq.indicator_value`** +: Original indicator value. + +type: keyword + + +**`threatq.adversaries`** +: Adversaries that are linked to the object. + +type: keyword + + +**`threatq.attributes`** +: These provide additional context about an object. + +type: flattened + + diff --git a/docs/reference/filebeat/exported-fields-traefik.md b/docs/reference/filebeat/exported-fields-traefik.md new file mode 100644 index 000000000000..7a9a5805f8bd --- /dev/null +++ b/docs/reference/filebeat/exported-fields-traefik.md @@ -0,0 +1,157 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-traefik.html +--- + +# Traefik fields [exported-fields-traefik] + +Module for parsing the Traefik log files. + + +## traefik [_traefik] + +Fields from the Traefik log files. + + +## access [_access_4] + +Contains fields for the Traefik access logs.
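+
+Most of the access fields below are either Traefik-specific keywords or aliases that resolve to ECS fields. As a minimal, illustrative sketch (the event shape and all values here are hypothetical, not captured Filebeat output), a parsed access event might look like this, with the alias names resolving to the ECS paths listed below:
+
+```python
+# Hypothetical parsed Traefik access event; values are invented.
+event = {
+    "traefik": {
+        "access": {
+            "user_identifier": "-",                 # type: keyword
+            "request_count": 42,                    # type: long
+            "frontend_name": "web",                 # type: keyword
+            "backend_url": "http://10.0.0.5:8080",  # type: keyword
+        }
+    },
+    # Alias fields (e.g. traefik.access.method) are not stored twice;
+    # the data lives at the ECS path that the alias points to:
+    "http": {
+        "request": {"method": "GET"},
+        "response": {"status_code": 200, "body": {"bytes": 512}},
+    },
+    "source": {"address": "203.0.113.7"},
+}
+
+# Reading through the ECS path that traefik.access.method aliases to:
+print(event["http"]["request"]["method"])
+```
+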
+ +**`traefik.access.user_identifier`** +: The RFC 1413 identity of the client. + +type: keyword + + +**`traefik.access.request_count`** +: The number of requests. + +type: long + + +**`traefik.access.frontend_name`** +: The name of the frontend used. + +type: keyword + + +**`traefik.access.backend_url`** +: The URL of the backend to which the request is forwarded. + +type: keyword + + +**`traefik.access.body_sent.bytes`** +: type: alias + +alias to: http.response.body.bytes + + +**`traefik.access.remote_ip`** +: type: alias + +alias to: source.address + + +**`traefik.access.user_name`** +: type: alias + +alias to: user.name + + +**`traefik.access.method`** +: type: alias + +alias to: http.request.method + + +**`traefik.access.url`** +: type: alias + +alias to: url.original + + +**`traefik.access.http_version`** +: type: alias + +alias to: http.version + + +**`traefik.access.response_code`** +: type: alias + +alias to: http.response.status_code + + +**`traefik.access.referrer`** +: type: alias + +alias to: http.request.referrer + + +**`traefik.access.agent`** +: type: alias + +alias to: user_agent.original + + +**`traefik.access.user_agent.name`** +: type: alias + +alias to: user_agent.name + + +**`traefik.access.user_agent.os`** +: type: alias + +alias to: user_agent.os.full_name + + +**`traefik.access.user_agent.os_name`** +: type: alias + +alias to: user_agent.os.name + + +**`traefik.access.user_agent.original`** +: type: alias + +alias to: user_agent.original + + +**`traefik.access.geoip.continent_name`** +: type: alias + +alias to: source.geo.continent_name + + +**`traefik.access.geoip.country_iso_code`** +: type: alias + +alias to: source.geo.country_iso_code + + +**`traefik.access.geoip.location`** +: type: alias + +alias to: source.geo.location + + +**`traefik.access.geoip.region_name`** +: type: alias + +alias to: source.geo.region_name + + +**`traefik.access.geoip.city_name`** +: type: alias + +alias to: source.geo.city_name + + +**`traefik.access.geoip.region_iso_code`** +: type: alias + +alias to: source.geo.region_iso_code + + diff --git a/docs/reference/filebeat/exported-fields-winlog.md b/docs/reference/filebeat/exported-fields-winlog.md new file mode 100644 index 000000000000..53147bd3bfb3 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-winlog.md @@ -0,0 +1,134 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-winlog.html +--- + +# Windows ETW fields [exported-fields-winlog] + +Fields from the ETW input (Event Tracing for Windows). + + +## winlog [_winlog] + +All fields specific to Windows Event Tracing are defined here. + +**`winlog.activity_id`** +: A globally unique identifier that identifies the current activity. The events that are published with this identifier are part of the same activity. + +type: keyword + +required: False + + +**`winlog.channel`** +: Used to enable special event processing. Channel values below 16 are reserved for use by Microsoft to enable special treatment by the ETW runtime. Channel values 16 and above will be ignored by the ETW runtime (treated the same as channel 0) and can be given user-defined semantics. + +type: keyword + +required: False + + +**`winlog.event_data`** +: The event-specific data. The content of this object is specific to each provider and event. + +type: object + +required: False + + +**`winlog.flags`** +: Flags that provide information about the event such as the type of session it was logged to and if the event contains extended data.
+ +type: keyword + +required: False + + +**`winlog.keywords`** +: The keywords are used to indicate an event’s membership in a set of event categories. + +type: keyword + +required: False + + +**`winlog.level`** +: Level of severity. Level values 0 through 5 are defined by Microsoft. Level values 6 through 15 are reserved. Level values 16 through 255 can be defined by the event provider. + +type: keyword + +required: False + + +**`winlog.opcode`** +: The opcode defined in the event. Task and opcode are typically used to identify the location in the application from where the event was logged. + +type: keyword + +required: False + + +**`winlog.process_id`** +: Identifies the process that generated the event. + +type: keyword + +required: False + + +**`winlog.provider_guid`** +: A globally unique identifier that identifies the provider that logged the event. + +type: keyword + +required: False + + +**`winlog.provider_name`** +: The source of the event log record (the application or service that logged the record). + +type: keyword + +required: False + + +**`winlog.session`** +: Configured session to forward ETW events from providers to consumers. + +type: keyword + +required: False + + +**`winlog.severity`** +: Human-readable level of severity. + +type: keyword + +required: False + + +**`winlog.task`** +: The task defined in the event. Task and opcode are typically used to identify the location in the application from where the event was logged. + +type: keyword + +required: False + + +**`winlog.thread_id`** +: Identifies the thread that generated the event. + +type: keyword + +required: False + + +**`winlog.version`** +: Specifies the version of a manifest-based event. + +type: long + +required: False + + diff --git a/docs/reference/filebeat/exported-fields-zeek.md b/docs/reference/filebeat/exported-fields-zeek.md new file mode 100644 index 000000000000..635656ef6452 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-zeek.md @@ -0,0 +1,3308 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-zeek.html +--- + +# Zeek fields [exported-fields-zeek] + +Module for handling logs produced by Zeek/Bro + + +## zeek [_zeek] + +Fields from Zeek/Bro logs after normalization + +**`zeek.session_id`** +: A unique identifier of the session + +type: keyword + + + +## capture_loss [_capture_loss] + +Fields exported by the Zeek capture_loss log + +**`zeek.capture_loss.ts_delta`** +: The time delay between this measurement and the last. + +type: integer + + +**`zeek.capture_loss.peer`** +: In the event that there are multiple Bro instances logging to the same host, this distinguishes each peer with its individual name. + +type: keyword + + +**`zeek.capture_loss.gaps`** +: Number of missed ACKs from the previous measurement interval. + +type: integer + + +**`zeek.capture_loss.acks`** +: Total number of ACKs seen in the previous measurement interval. + +type: integer + + +**`zeek.capture_loss.percent_lost`** +: Percentage of ACKs seen where the data being ACKed wasn’t seen. + +type: double + + + +## connection [_connection] + +Fields exported by the Zeek Connection log + +**`zeek.connection.local_orig`** +: Indicates whether the session originated locally. + +type: boolean + + +**`zeek.connection.local_resp`** +: Indicates whether the session was responded to locally. + +type: boolean + + +**`zeek.connection.missed_bytes`** +: Missed bytes for the session. + +type: long + + +**`zeek.connection.state`** +: Code indicating the state of the session.
+ +type: keyword + + +**`zeek.connection.state_message`** +: The state of the session. + +type: keyword + + +**`zeek.connection.icmp.type`** +: ICMP message type. + +type: integer + + +**`zeek.connection.icmp.code`** +: ICMP message code. + +type: integer + + +**`zeek.connection.history`** +: Flags indicating the history of the session. + +type: keyword + + +**`zeek.connection.vlan`** +: VLAN identifier. + +type: integer + + +**`zeek.connection.inner_vlan`** +: VLAN identifier. + +type: integer + + + +## dce_rpc [_dce_rpc] + +Fields exported by the Zeek DCE_RPC log + +**`zeek.dce_rpc.rtt`** +: Round trip time from the request to the response. If either the request or response wasn’t seen, this will be null. + +type: integer + + +**`zeek.dce_rpc.named_pipe`** +: Remote pipe name. + +type: keyword + + +**`zeek.dce_rpc.endpoint`** +: Endpoint name looked up from the uuid. + +type: keyword + + +**`zeek.dce_rpc.operation`** +: Operation seen in the call. + +type: keyword + + + +## dhcp [_dhcp] + +Fields exported by the Zeek DHCP log + +**`zeek.dhcp.domain`** +: Domain given by the server in option 15. + +type: keyword + + +**`zeek.dhcp.duration`** +: Duration of the DHCP session representing the time from the first message to the last, in seconds. + +type: double + + +**`zeek.dhcp.hostname`** +: Name given by client in Hostname option 12. + +type: keyword + + +**`zeek.dhcp.client_fqdn`** +: FQDN given by client in Client FQDN option 81. + +type: keyword + + +**`zeek.dhcp.lease_time`** +: IP address lease interval in seconds. + +type: integer + + + +## address [_address] + +Addresses seen in this DHCP exchange. + +**`zeek.dhcp.address.assigned`** +: IP address assigned by the server. + +type: ip + + +**`zeek.dhcp.address.client`** +: IP address of the client. If a transaction is only a client sending INFORM messages then there is no lease information exchanged so this is helpful to know who sent the messages. Getting an address in this field does require that the client sources at least one DHCP message using a non-broadcast address. + +type: ip + + +**`zeek.dhcp.address.mac`** +: Client’s hardware address. + +type: keyword + + +**`zeek.dhcp.address.requested`** +: IP address requested by the client. + +type: ip + + +**`zeek.dhcp.address.server`** +: IP address of the DHCP server. + +type: ip + + +**`zeek.dhcp.msg.types`** +: List of DHCP message types seen in this exchange. + +type: keyword + + +**`zeek.dhcp.msg.origin`** +: (present if policy/protocols/dhcp/msg-orig.bro is loaded) The address that originated each message from the msg.types field. + +type: ip + + +**`zeek.dhcp.msg.client`** +: Message typically accompanied with a DHCP_DECLINE so the client can tell the server why it rejected an address. + +type: keyword + + +**`zeek.dhcp.msg.server`** +: Message typically accompanied with a DHCP_NAK to let the client know why it rejected the request. + +type: keyword + + +**`zeek.dhcp.software.client`** +: (present if policy/protocols/dhcp/software.bro is loaded) Software reported by the client in the vendor_class option. + +type: keyword + + +**`zeek.dhcp.software.server`** +: (present if policy/protocols/dhcp/software.bro is loaded) Software reported by the client in the vendor_class option. + +type: keyword + + +**`zeek.dhcp.id.circuit`** +: (present if policy/protocols/dhcp/sub-opts.bro is loaded) Added by DHCP relay agents which terminate switched or permanent circuits. It encodes an agent-local identifier of the circuit from which a DHCP client-to-server packet was received. 
Typically it should represent a router or switch interface number. + +type: keyword + + +**`zeek.dhcp.id.remote_agent`** +: (present if policy/protocols/dhcp/sub-opts.bro is loaded) A globally unique identifier added by relay agents to identify the remote host end of the circuit. + +type: keyword + + +**`zeek.dhcp.id.subscriber`** +: (present if policy/protocols/dhcp/sub-opts.bro is loaded) The subscriber ID is a value independent of the physical network configuration so that a customer’s DHCP configuration can be given to them correctly no matter where they are physically connected. + +type: keyword + + + +## dnp3 [_dnp3] + +Fields exported by the Zeek DNP3 log + +**`zeek.dnp3.function.request`** +: The name of the function message in the request. + +type: keyword + + +**`zeek.dnp3.function.reply`** +: The name of the function message in the reply. + +type: keyword + + +**`zeek.dnp3.id`** +: The response’s internal indication number. + +type: integer + + + +## dns [_dns_2] + +Fields exported by the Zeek DNS log + +**`zeek.dns.trans_id`** +: DNS transaction identifier. + +type: keyword + + +**`zeek.dns.rtt`** +: Round trip time for the query and response. + +type: double + + +**`zeek.dns.query`** +: The domain name that is the subject of the DNS query. + +type: keyword + + +**`zeek.dns.qclass`** +: The QCLASS value specifying the class of the query. + +type: long + + +**`zeek.dns.qclass_name`** +: A descriptive name for the class of the query. + +type: keyword + + +**`zeek.dns.qtype`** +: A QTYPE value specifying the type of the query. + +type: long + + +**`zeek.dns.qtype_name`** +: A descriptive name for the type of the query. + +type: keyword + + +**`zeek.dns.rcode`** +: The response code value in DNS response messages. + +type: long + + +**`zeek.dns.rcode_name`** +: A descriptive name for the response code value. + +type: keyword + + +**`zeek.dns.AA`** +: The Authoritative Answer bit for response messages specifies that the responding name server is an authority for the domain name in the question section. + +type: boolean + + +**`zeek.dns.TC`** +: The Truncation bit specifies that the message was truncated. + +type: boolean + + +**`zeek.dns.RD`** +: The Recursion Desired bit in a request message indicates that the client wants recursive service for this query. + +type: boolean + + +**`zeek.dns.RA`** +: The Recursion Available bit in a response message indicates that the name server supports recursive queries. + +type: boolean + + +**`zeek.dns.answers`** +: The set of resource descriptions in the query answer. + +type: keyword + + +**`zeek.dns.TTLs`** +: The caching intervals of the associated RRs described by the answers field. + +type: double + + +**`zeek.dns.rejected`** +: Indicates whether the DNS query was rejected by the server. + +type: boolean + + +**`zeek.dns.total_answers`** +: The total number of resource records in the reply. + +type: integer + + +**`zeek.dns.total_replies`** +: The total number of resource records in the reply message. + +type: integer + + +**`zeek.dns.saw_query`** +: Whether the full DNS query has been seen. + +type: boolean + + +**`zeek.dns.saw_reply`** +: Whether the full DNS reply has been seen. + +type: boolean + + + +## dpd [_dpd] + +Fields exported by the Zeek DPD log + +**`zeek.dpd.analyzer`** +: The analyzer that generated the violation. + +type: keyword + + +**`zeek.dpd.failure_reason`** +: The textual reason for the analysis failure. 
+ +type: keyword + + +**`zeek.dpd.packet_segment`** +: (present if policy/frameworks/dpd/packet-segment-logging.bro is loaded) A chunk of the payload that most likely resulted in the protocol violation. + +type: keyword + + + +## files [_files] + +Fields exported by the Zeek Files log. + +**`zeek.files.fuid`** +: A file unique identifier. + +type: keyword + + +**`zeek.files.tx_host`** +: The host that transferred the file. + +type: ip + + +**`zeek.files.rx_host`** +: The host that received the file. + +type: ip + + +**`zeek.files.session_ids`** +: The sessions that have this file. + +type: keyword + + +**`zeek.files.source`** +: An identification of the source of the file data. E.g. it may be a network protocol over which it was transferred, or a local file path which was read, or some other input source. + +type: keyword + + +**`zeek.files.depth`** +: A value to represent the depth of this file in relation to its source. In SMTP, it is the depth of the MIME attachment on the message. In HTTP, it is the depth of the request within the TCP connection. + +type: long + + +**`zeek.files.analyzers`** +: A set of analysis types done during the file analysis. + +type: keyword + + +**`zeek.files.mime_type`** +: Mime type of the file. + +type: keyword + + +**`zeek.files.filename`** +: Name of the file if available. + +type: keyword + + +**`zeek.files.local_orig`** +: If the source of this file is a network connection, this field indicates if the data originated from the local network or not. + +type: boolean + + +**`zeek.files.is_orig`** +: If the source of this file is a network connection, this field indicates if the file is being sent by the originator of the connection or the responder. + +type: boolean + + +**`zeek.files.duration`** +: The duration the file was analyzed for. Not the duration of the session. + +type: double + + +**`zeek.files.seen_bytes`** +: Number of bytes provided to the file analysis engine for the file. + +type: long + + +**`zeek.files.total_bytes`** +: Total number of bytes that are supposed to comprise the full file. + +type: long + + +**`zeek.files.missing_bytes`** +: The number of bytes in the file stream that were completely missed during the process of analysis. + +type: long + + +**`zeek.files.overflow_bytes`** +: The number of bytes in the file stream that were not delivered to stream file analyzers. This could be overlapping bytes or bytes that couldn’t be reassembled. + +type: long + + +**`zeek.files.timedout`** +: Whether the file analysis timed out at least once for the file. + +type: boolean + + +**`zeek.files.parent_fuid`** +: Identifier associated with a container file from which this one was extracted as part of the file analysis. + +type: keyword + + +**`zeek.files.md5`** +: An MD5 digest of the file contents. + +type: keyword + + +**`zeek.files.sha1`** +: A SHA1 digest of the file contents. + +type: keyword + + +**`zeek.files.sha256`** +: A SHA256 digest of the file contents. + +type: keyword + + +**`zeek.files.extracted`** +: Local filename of extracted file. + +type: keyword + + +**`zeek.files.extracted_cutoff`** +: Indicate whether the file being extracted was cut off hence not extracted completely. + +type: boolean + + +**`zeek.files.extracted_size`** +: The number of bytes extracted to disk. + +type: long + + +**`zeek.files.entropy`** +: The information density of the contents of the file. + +type: double + + + +## ftp [_ftp] + +Fields exported by the Zeek FTP log + +**`zeek.ftp.user`** +: User name for the current FTP session. 
+ +type: keyword + + +**`zeek.ftp.password`** +: Password for the current FTP session if captured. + +type: keyword + + +**`zeek.ftp.command`** +: Command given by the client. + +type: keyword + + +**`zeek.ftp.arg`** +: Argument for the command if one is given. + +type: keyword + + +**`zeek.ftp.file.size`** +: Size of the file if the command indicates a file transfer. + +type: long + + +**`zeek.ftp.file.mime_type`** +: Sniffed mime type of file. + +type: keyword + + +**`zeek.ftp.file.fuid`** +: (present if base/protocols/ftp/files.bro is loaded) File unique ID. + +type: keyword + + +**`zeek.ftp.reply.code`** +: Reply code from the server in response to the command. + +type: integer + + +**`zeek.ftp.reply.msg`** +: Reply message from the server in response to the command. + +type: keyword + + + +## data_channel [_data_channel] + +Expected FTP data channel. + +**`zeek.ftp.data_channel.passive`** +: Whether PASV mode is toggled for control channel. + +type: boolean + + +**`zeek.ftp.data_channel.originating_host`** +: The host that will be initiating the data connection. + +type: ip + + +**`zeek.ftp.data_channel.response_host`** +: The host that will be accepting the data connection. + +type: ip + + +**`zeek.ftp.data_channel.response_port`** +: The port at which the acceptor is listening for the data connection. + +type: integer + + +**`zeek.ftp.cwd`** +: Current working directory that this session is in. The default value *.* indicates that, until something more concrete is discovered, the existing but unknown directory is assumed to be in use. + +type: keyword + + + +## cmdarg [_cmdarg] + +Command that is currently waiting for a response. + +**`zeek.ftp.cmdarg.cmd`** +: Command. + +type: keyword + + +**`zeek.ftp.cmdarg.arg`** +: Argument for the command if one was given. + +type: keyword + + +**`zeek.ftp.cmdarg.seq`** +: Counter to track how many commands have been executed. + +type: integer + + +**`zeek.ftp.pending_commands`** +: Queue of commands that have been sent but not yet responded to. + +type: integer + + +**`zeek.ftp.passive`** +: Indicates if the session is in active or passive mode. + +type: boolean + + +**`zeek.ftp.capture_password`** +: Determines if the password will be captured for this request. + +type: boolean + + +**`zeek.ftp.last_auth_requested`** +: (present if base/protocols/ftp/gridftp.bro is loaded) Last authentication/security mechanism that was used. + +type: keyword + + + +## http [_http_3] + +Fields exported by the Zeek HTTP log + +**`zeek.http.trans_depth`** +: Represents the pipelined depth into the connection of this request/response transaction. + +type: integer + + +**`zeek.http.status_msg`** +: Status message returned by the server. + +type: keyword + + +**`zeek.http.info_code`** +: Last seen 1xx informational reply code returned by the server. + +type: integer + + +**`zeek.http.info_msg`** +: Last seen 1xx informational reply message returned by the server. + +type: keyword + + +**`zeek.http.tags`** +: A set of indicators of various attributes discovered and related to a particular request/response pair. + +type: keyword + + +**`zeek.http.password`** +: Password if basic-auth is performed for the request. + +type: keyword + + +**`zeek.http.captured_password`** +: Determines if the password will be captured for this request. + +type: boolean + + +**`zeek.http.proxied`** +: All of the headers that may indicate if the HTTP request was proxied.
+ +type: keyword + + +**`zeek.http.range_request`** +: Indicates if this request can assume 206 partial content in response. + +type: boolean + + +**`zeek.http.client_header_names`** +: The vector of HTTP header names sent by the client. No header values are included here, just the header names. + +type: keyword + + +**`zeek.http.server_header_names`** +: The vector of HTTP header names sent by the server. No header values are included here, just the header names. + +type: keyword + + +**`zeek.http.orig_fuids`** +: An ordered vector of file unique IDs from the originator. + +type: keyword + + +**`zeek.http.orig_mime_types`** +: An ordered vector of mime types from the originator. + +type: keyword + + +**`zeek.http.orig_filenames`** +: An ordered vector of filenames from the originator. + +type: keyword + + +**`zeek.http.resp_fuids`** +: An ordered vector of file unique IDs from the responder. + +type: keyword + + +**`zeek.http.resp_mime_types`** +: An ordered vector of mime types from the responder. + +type: keyword + + +**`zeek.http.resp_filenames`** +: An ordered vector of filenames from the responder. + +type: keyword + + +**`zeek.http.orig_mime_depth`** +: Current number of MIME entities in the HTTP request message body. + +type: integer + + +**`zeek.http.resp_mime_depth`** +: Current number of MIME entities in the HTTP response message body. + +type: integer + + + +## intel [_intel] + +Fields exported by the Zeek Intel log. + +**`zeek.intel.seen.indicator`** +: The intelligence indicator. + +type: keyword + + +**`zeek.intel.seen.indicator_type`** +: The type of data the indicator represents. + +type: keyword + + +**`zeek.intel.seen.host`** +: If the indicator type was Intel::ADDR, then this field will be present. + +type: keyword + + +**`zeek.intel.seen.conn`** +: If the data was discovered within a connection, the connection record should go here to give context to the data. + +type: keyword + + +**`zeek.intel.seen.where`** +: Where the data was discovered. + +type: keyword + + +**`zeek.intel.seen.node`** +: The name of the node where the match was discovered. + +type: keyword + + +**`zeek.intel.seen.uid`** +: If the data was discovered within a connection, the connection uid should go here to give context to the data. If the conn field is provided, this will be automatically filled out. + +type: keyword + + +**`zeek.intel.seen.f`** +: If the data was discovered within a file, the file record should go here to provide context to the data. + +type: object + + +**`zeek.intel.seen.fuid`** +: If the data was discovered within a file, the file uid should go here to provide context to the data. If the file record f is provided, this will be automatically filled out. + +type: keyword + + +**`zeek.intel.matched`** +: Event to represent a match in the intelligence data from data that was seen. + +type: keyword + + +**`zeek.intel.sources`** +: Sources which supplied data for this match. + +type: keyword + + +**`zeek.intel.fuid`** +: If a file was associated with this intelligence hit, this is the uid for the file. + +type: keyword + + +**`zeek.intel.file_mime_type`** +: A mime type if the intelligence hit is related to a file. If the $f field is provided this will be automatically filled out. + +type: keyword + + +**`zeek.intel.file_desc`** +: Frequently files can be described to give a bit more context. If the $f field is provided this field will be automatically filled out. 
+ +type: keyword + + + +## irc [_irc] + +Fields exported by the Zeek IRC log + +**`zeek.irc.nick`** +: Nickname given for the connection. + +type: keyword + + +**`zeek.irc.user`** +: Username given for the connection. + +type: keyword + + +**`zeek.irc.command`** +: Command given by the client. + +type: keyword + + +**`zeek.irc.value`** +: Value for the command given by the client. + +type: keyword + + +**`zeek.irc.addl`** +: Any additional data for the command. + +type: keyword + + +**`zeek.irc.dcc.file.name`** +: Present if base/protocols/irc/dcc-send.bro is loaded. DCC filename requested. + +type: keyword + + +**`zeek.irc.dcc.file.size`** +: Present if base/protocols/irc/dcc-send.bro is loaded. Size of the DCC transfer as indicated by the sender. + +type: long + + +**`zeek.irc.dcc.mime_type`** +: present if base/protocols/irc/dcc-send.bro is loaded. Sniffed mime type of the file. + +type: keyword + + +**`zeek.irc.fuid`** +: present if base/protocols/irc/files.bro is loaded. File unique ID. + +type: keyword + + + +## kerberos [_kerberos_3] + +Fields exported by the Zeek Kerberos log + +**`zeek.kerberos.request_type`** +: Request type - Authentication Service (AS) or Ticket Granting Service (TGS). + +type: keyword + + +**`zeek.kerberos.client`** +: Client name. + +type: keyword + + +**`zeek.kerberos.service`** +: Service name. + +type: keyword + + +**`zeek.kerberos.success`** +: Request result. + +type: boolean + + +**`zeek.kerberos.error.code`** +: Error code. + +type: integer + + +**`zeek.kerberos.error.msg`** +: Error message. + +type: keyword + + +**`zeek.kerberos.valid.from`** +: Ticket valid from. + +type: date + + +**`zeek.kerberos.valid.until`** +: Ticket valid until. + +type: date + + +**`zeek.kerberos.valid.days`** +: Number of days the ticket is valid for. + +type: integer + + +**`zeek.kerberos.cipher`** +: Ticket encryption type. + +type: keyword + + +**`zeek.kerberos.forwardable`** +: Forwardable ticket requested. + +type: boolean + + +**`zeek.kerberos.renewable`** +: Renewable ticket requested. + +type: boolean + + +**`zeek.kerberos.ticket.auth`** +: Hash of ticket used to authorize request/transaction. + +type: keyword + + +**`zeek.kerberos.ticket.new`** +: Hash of ticket returned by the KDC. + +type: keyword + + +**`zeek.kerberos.cert.client.value`** +: Client certificate. + +type: keyword + + +**`zeek.kerberos.cert.client.fuid`** +: File unique ID of client cert. + +type: keyword + + +**`zeek.kerberos.cert.client.subject`** +: Subject of client certificate. + +type: keyword + + +**`zeek.kerberos.cert.server.value`** +: Server certificate. + +type: keyword + + +**`zeek.kerberos.cert.server.fuid`** +: File unique ID of server certificate. + +type: keyword + + +**`zeek.kerberos.cert.server.subject`** +: Subject of server certificate. + +type: keyword + + + +## modbus [_modbus] + +Fields exported by the Zeek modbus log. + +**`zeek.modbus.function`** +: The name of the function message that was sent. + +type: keyword + + +**`zeek.modbus.exception`** +: The exception if the response was a failure. + +type: keyword + + +**`zeek.modbus.track_address`** +: Present if policy/protocols/modbus/track-memmap.bro is loaded. Modbus track address. + +type: integer + + + +## mysql [_mysql_2] + +Fields exported by the Zeek MySQL log. + +**`zeek.mysql.cmd`** +: The command that was issued. + +type: keyword + + +**`zeek.mysql.arg`** +: The argument issued to the command. + +type: keyword + + +**`zeek.mysql.success`** +: Whether the command succeeded. 
+ +type: boolean + + +**`zeek.mysql.rows`** +: The number of affected rows, if any. + +type: integer + + +**`zeek.mysql.response`** +: Server message, if any. + +type: keyword + + + +## notice [_notice] + +Fields exported by the Zeek Notice log. + +**`zeek.notice.connection_id`** +: Identifier of the related connection session. + +type: keyword + + +**`zeek.notice.icmp_id`** +: Identifier of the related ICMP session. + +type: keyword + + +**`zeek.notice.file.id`** +: An identifier associated with a single file that is related to this notice. + +type: keyword + + +**`zeek.notice.file.parent_id`** +: Identifier associated with a container file from which this one was extracted. + +type: keyword + + +**`zeek.notice.file.source`** +: An identification of the source of the file data. E.g. it may be a network protocol over which it was transferred, or a local file path which was read, or some other input source. + +type: keyword + + +**`zeek.notice.file.mime_type`** +: A mime type if the notice is related to a file. + +type: keyword + + +**`zeek.notice.file.is_orig`** +: If the source of this file is a network connection, this field indicates if the file is being sent by the originator of the connection or the responder. + +type: boolean + + +**`zeek.notice.file.seen_bytes`** +: Number of bytes provided to the file analysis engine for the file. + +type: long + + +**`zeek.notice.ffile.total_bytes`** +: Total number of bytes that are supposed to comprise the full file. + +type: long + + +**`zeek.notice.file.missing_bytes`** +: The number of bytes in the file stream that were completely missed during the process of analysis. + +type: long + + +**`zeek.notice.file.overflow_bytes`** +: The number of bytes in the file stream that were not delivered to stream file analyzers. This could be overlapping bytes or bytes that couldn’t be reassembled. + +type: long + + +**`zeek.notice.fuid`** +: A file unique ID if this notice is related to a file. + +type: keyword + + +**`zeek.notice.note`** +: The type of the notice. + +type: keyword + + +**`zeek.notice.msg`** +: The human-readable message for the notice. + +type: keyword + + +**`zeek.notice.sub`** +: The human-readable sub-message. + +type: keyword + + +**`zeek.notice.n`** +: Associated count, or a status code. + +type: long + + +**`zeek.notice.peer_name`** +: Name of remote peer that raised this notice. + +type: keyword + + +**`zeek.notice.peer_descr`** +: Textual description for the peer that raised this notice. + +type: text + + +**`zeek.notice.actions`** +: The actions which have been applied to this notice. + +type: keyword + + +**`zeek.notice.email_body_sections`** +: By adding chunks of text into this element, other scripts can expand on notices that are being emailed. + +type: text + + +**`zeek.notice.email_delay_tokens`** +: Adding a string token to this set will cause the built-in emailing functionality to delay sending the email until either the token has been removed or the email has been delayed for the specified time duration. + +type: keyword + + +**`zeek.notice.identifier`** +: This field is provided when a notice is generated for the purpose of deduplicating notices. + +type: keyword + + +**`zeek.notice.suppress_for`** +: This field indicates the length of time that this unique notice should be suppressed. + +type: double + + +**`zeek.notice.dropped`** +: Indicates if the source IP address was dropped and denied network access. + +type: boolean + + + +## ntlm [_ntlm] + +Fields exported by the Zeek NTLM log.
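+
+Before the field list, the sketch below shows what the `zeek.ntlm.*` object of a single parsed event might look like. This is a hypothetical illustration only; the values are invented rather than sample Zeek output.
+
+```python
+# Hypothetical zeek.ntlm.* object for one NTLM authentication event.
+ntlm = {
+    "domain": "CORP",             # domain name given by the client
+    "hostname": "WORKSTATION01",  # hostname given by the client
+    "username": "jdoe",           # username given by the client
+    "success": True,              # whether authentication succeeded
+    "server": {
+        "name": {
+            "dns": "dc01.corp.example.com",  # from the server CHALLENGE
+            "netbios": "DC01",
+        }
+    },
+}
+
+assert ntlm["success"] is True
+```
+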
+ +**`zeek.ntlm.domain`** +: Domain name given by the client. + +type: keyword + + +**`zeek.ntlm.hostname`** +: Hostname given by the client. + +type: keyword + + +**`zeek.ntlm.success`** +: Indicates whether the authentication was successful. + +type: boolean + + +**`zeek.ntlm.username`** +: Username given by the client. + +type: keyword + + +**`zeek.ntlm.server.name.dns`** +: DNS name given by the server in a CHALLENGE. + +type: keyword + + +**`zeek.ntlm.server.name.netbios`** +: NetBIOS name given by the server in a CHALLENGE. + +type: keyword + + +**`zeek.ntlm.server.name.tree`** +: Tree name given by the server in a CHALLENGE. + +type: keyword + + + +## ntp [_ntp] + +Fields exported by the Zeek NTP log. + +**`zeek.ntp.version`** +: The NTP version number (1, 2, 3, 4). + +type: integer + + +**`zeek.ntp.mode`** +: The NTP mode being used. + +type: integer + + +**`zeek.ntp.stratum`** +: The stratum (primary server, secondary server, etc.). + +type: integer + + +**`zeek.ntp.poll`** +: The maximum interval between successive messages in seconds. + +type: double + + +**`zeek.ntp.precision`** +: The precision of the system clock in seconds. + +type: double + + +**`zeek.ntp.root_delay`** +: Total round-trip delay to the reference clock in seconds. + +type: double + + +**`zeek.ntp.root_disp`** +: Total dispersion to the reference clock in seconds. + +type: double + + +**`zeek.ntp.ref_id`** +: For stratum 0, a 4-character string used for debugging. For stratum 1, ID assigned to the reference clock by IANA. Above stratum 1, when using IPv4, the IP address of the reference clock. Note that the NTP protocol did not originally specify a large enough field to represent IPv6 addresses, so they use the first four bytes of the MD5 hash of the reference clock’s IPv6 address (i.e. an IPv4 address here is not necessarily IPv4). + +type: keyword + + +**`zeek.ntp.ref_time`** +: Time when the system clock was last set or corrected. + +type: date + + +**`zeek.ntp.org_time`** +: Time at the client when the request departed for the NTP server. + +type: date + + +**`zeek.ntp.rec_time`** +: Time at the server when the request arrived from the NTP client. + +type: date + + +**`zeek.ntp.xmt_time`** +: Time at the server when the response departed for the NTP client. + +type: date + + +**`zeek.ntp.num_exts`** +: Number of extension fields (which are not currently parsed). + +type: integer + + + +## ocsp [_ocsp] + +Fields exported by the Zeek OCSP (Online Certificate Status Protocol) log. Only created if the policy script is loaded. + +**`zeek.ocsp.file_id`** +: File id of the OCSP reply. + +type: keyword + + +**`zeek.ocsp.hash.algorithm`** +: Hash algorithm used to generate issuerNameHash and issuerKeyHash. + +type: keyword + + +**`zeek.ocsp.hash.issuer.name`** +: Hash of the issuer’s distinguished name. + +type: keyword + + +**`zeek.ocsp.hash.issuer.key`** +: Hash of the issuer’s public key. + +type: keyword + + +**`zeek.ocsp.serial_number`** +: Serial number of the affected certificate. + +type: keyword + + +**`zeek.ocsp.status`** +: Status of the affected certificate. + +type: keyword + + +**`zeek.ocsp.revoke.time`** +: Time at which the certificate was revoked. + +type: date + + +**`zeek.ocsp.revoke.reason`** +: Reason for which the certificate was revoked. + +type: keyword + + +**`zeek.ocsp.update.this`** +: The time at which the status being shown is known to have been correct.
+ +type: date + + +**`zeek.ocsp.update.next`** +: The latest time at which new information about the status of the certificate will be available. + +type: date + + + +## pe [_pe_2] + +Fields exported by the Zeek pe log. + +**`zeek.pe.client`** +: The client’s version string. + +type: keyword + + +**`zeek.pe.id`** +: File id of this portable executable file. + +type: keyword + + +**`zeek.pe.machine`** +: The target machine that the file was compiled for. + +type: keyword + + +**`zeek.pe.compile_time`** +: The time at which the file was created. + +type: date + + +**`zeek.pe.os`** +: The required operating system. + +type: keyword + + +**`zeek.pe.subsystem`** +: The subsystem that is required to run this file. + +type: keyword + + +**`zeek.pe.is_exe`** +: Is the file an executable, or just an object file? + +type: boolean + + +**`zeek.pe.is_64bit`** +: Is the file a 64-bit executable? + +type: boolean + + +**`zeek.pe.uses_aslr`** +: Does the file support Address Space Layout Randomization? + +type: boolean + + +**`zeek.pe.uses_dep`** +: Does the file support Data Execution Prevention? + +type: boolean + + +**`zeek.pe.uses_code_integrity`** +: Does the file enforce code integrity checks? + +type: boolean + + +**`zeek.pe.uses_seh`** +: Does the file use structured exception handling? + +type: boolean + + +**`zeek.pe.has_import_table`** +: Does the file have an import table? + +type: boolean + + +**`zeek.pe.has_export_table`** +: Does the file have an export table? + +type: boolean + + +**`zeek.pe.has_cert_table`** +: Does the file have an attribute certificate table? + +type: boolean + + +**`zeek.pe.has_debug_data`** +: Does the file have a debug table? + +type: boolean + + +**`zeek.pe.section_names`** +: The names of the sections, in order. + +type: keyword + + + +## radius [_radius] + +Fields exported by the Zeek Radius log. + +**`zeek.radius.username`** +: The username, if present. + +type: keyword + + +**`zeek.radius.mac`** +: MAC address, if present. + +type: keyword + + +**`zeek.radius.framed_addr`** +: The address given to the network access server, if present. This is only a hint from the RADIUS server and the network access server is not required to honor the address. + +type: ip + + +**`zeek.radius.remote_ip`** +: Remote IP address, if present. This is collected from the Tunnel-Client-Endpoint attribute. + +type: ip + + +**`zeek.radius.connect_info`** +: Connect info, if present. + +type: keyword + + +**`zeek.radius.reply_msg`** +: Reply message from the server challenge. This is frequently shown to the user authenticating. + +type: keyword + + +**`zeek.radius.result`** +: Successful or failed authentication. + +type: keyword + + +**`zeek.radius.ttl`** +: The duration between the first request and either the "Access-Accept" message or an error. If the field is empty, it means that either the request or response was not seen. + +type: integer + + +**`zeek.radius.logged`** +: Whether this has already been logged and can be ignored. + +type: boolean + + + +## rdp [_rdp] + +Fields exported by the Zeek RDP log. + +**`zeek.rdp.cookie`** +: Cookie value used by the client machine. This is typically a username. + +type: keyword + + +**`zeek.rdp.result`** +: Status result for the connection. It’s a mix between RDP negotiation failure messages and GCC server create response messages. + +type: keyword + + +**`zeek.rdp.security_protocol`** +: Security protocol chosen by the server. + +type: keyword + + +**`zeek.rdp.keyboard_layout`** +: Keyboard layout (language) of the client machine.
+ +type: keyword + + +**`zeek.rdp.client.build`** +: RDP client version used by the client machine. + +type: keyword + + +**`zeek.rdp.client.client_name`** +: Name of the client machine. + +type: keyword + + +**`zeek.rdp.client.product_id`** +: Product ID of the client machine. + +type: keyword + + +**`zeek.rdp.desktop.width`** +: Desktop width of the client machine. + +type: integer + + +**`zeek.rdp.desktop.height`** +: Desktop height of the client machine. + +type: integer + + +**`zeek.rdp.desktop.color_depth`** +: The color depth requested by the client in the high_color_depth field. + +type: keyword + + +**`zeek.rdp.cert.type`** +: If the connection is being encrypted with native RDP encryption, this is the type of cert being used. + +type: keyword + + +**`zeek.rdp.cert.count`** +: The number of certs seen. X.509 can transfer an entire certificate chain. + +type: integer + + +**`zeek.rdp.cert.permanent`** +: Indicates if the provided certificate or certificate chain is permanent or temporary. + +type: boolean + + +**`zeek.rdp.encryption.level`** +: Encryption level of the connection. + +type: keyword + + +**`zeek.rdp.encryption.method`** +: Encryption method of the connection. + +type: keyword + + +**`zeek.rdp.done`** +: Track status of logging RDP connections. + +type: boolean + + +**`zeek.rdp.ssl`** +: (present if policy/protocols/rdp/indicate_ssl.bro is loaded) Flag the connection if it was seen over SSL. + +type: boolean + + + +## rfb [_rfb] + +Fields exported by the Zeek RFB log. + +**`zeek.rfb.version.client.major`** +: Major version of the client. + +type: keyword + + +**`zeek.rfb.version.client.minor`** +: Minor version of the client. + +type: keyword + + +**`zeek.rfb.version.server.major`** +: Major version of the server. + +type: keyword + + +**`zeek.rfb.version.server.minor`** +: Minor version of the server. + +type: keyword + + +**`zeek.rfb.auth.success`** +: Whether or not authentication was successful. + +type: boolean + + +**`zeek.rfb.auth.method`** +: Identifier of authentication method used. + +type: keyword + + +**`zeek.rfb.share_flag`** +: Whether the client has an exclusive or a shared session. + +type: boolean + + +**`zeek.rfb.desktop_name`** +: Name of the screen that is being shared. + +type: keyword + + +**`zeek.rfb.width`** +: Width of the screen that is being shared. + +type: integer + + +**`zeek.rfb.height`** +: Height of the screen that is being shared. + +type: integer + + + +## signature [_signature] + +Fields exported by the Zeek Signature log. + +**`zeek.signature.note`** +: Notice associated with signature event. + +type: keyword + + +**`zeek.signature.sig_id`** +: The name of the signature that matched. + +type: keyword + + +**`zeek.signature.event_msg`** +: A more descriptive message of the signature-matching event. + +type: keyword + + +**`zeek.signature.sub_msg`** +: Extracted payload data or extra message. + +type: keyword + + +**`zeek.signature.sig_count`** +: Number of sigs, usually from summary count. + +type: integer + + +**`zeek.signature.host_count`** +: Number of hosts, from a summary count. + +type: integer + + + +## sip [_sip] + +Fields exported by the Zeek SIP log. + +**`zeek.sip.transaction_depth`** +: Represents the pipelined depth into the connection of this request/response transaction. + +type: integer + + +**`zeek.sip.sequence.method`** +: Verb used in the SIP request (INVITE, REGISTER etc.). + +type: keyword + + +**`zeek.sip.sequence.number`** +: Contents of the CSeq: header from the client. 
+ +type: keyword + + +**`zeek.sip.uri`** +: URI used in the request. + +type: keyword + + +**`zeek.sip.date`** +: Contents of the Date: header from the client. + +type: keyword + + +**`zeek.sip.request.from`** +: Contents of the request From: header. Note: the tag= value that’s usually appended to the sender is stripped off and not logged. + +type: keyword + + +**`zeek.sip.request.to`** +: Contents of the To: header. + +type: keyword + + +**`zeek.sip.request.path`** +: The client message transmission path, as extracted from the headers. + +type: keyword + + +**`zeek.sip.request.body_length`** +: Contents of the Content-Length: header from the client. + +type: long + + +**`zeek.sip.response.from`** +: Contents of the response From: header. Note: the tag= value that’s usually appended to the sender is stripped off and not logged. + +type: keyword + + +**`zeek.sip.response.to`** +: Contents of the response To: header. + +type: keyword + + +**`zeek.sip.response.path`** +: The server message transmission path, as extracted from the headers. + +type: keyword + + +**`zeek.sip.response.body_length`** +: Contents of the Content-Length: header from the server. + +type: long + + +**`zeek.sip.reply_to`** +: Contents of the Reply-To: header. + +type: keyword + + +**`zeek.sip.call_id`** +: Contents of the Call-ID: header from the client. + +type: keyword + + +**`zeek.sip.subject`** +: Contents of the Subject: header from the client. + +type: keyword + + +**`zeek.sip.user_agent`** +: Contents of the User-Agent: header from the client. + +type: keyword + + +**`zeek.sip.status.code`** +: Status code returned by the server. + +type: integer + + +**`zeek.sip.status.msg`** +: Status message returned by the server. + +type: keyword + + +**`zeek.sip.warning`** +: Contents of the Warning: header. + +type: keyword + + +**`zeek.sip.content_type`** +: Contents of the Content-Type: header from the server. + +type: keyword + + + +## smb_cmd [_smb_cmd] + +Fields exported by the Zeek smb_cmd log. + +**`zeek.smb_cmd.command`** +: The command sent by the client. + +type: keyword + + +**`zeek.smb_cmd.sub_command`** +: The subcommand sent by the client, if present. + +type: keyword + + +**`zeek.smb_cmd.argument`** +: Command argument sent by the client, if any. + +type: keyword + + +**`zeek.smb_cmd.status`** +: Server reply to the client’s command. + +type: keyword + + +**`zeek.smb_cmd.rtt`** +: Round trip time from the request to the response. + +type: double + + +**`zeek.smb_cmd.version`** +: Version of SMB for the command. + +type: keyword + + +**`zeek.smb_cmd.username`** +: Authenticated username, if available. + +type: keyword + + +**`zeek.smb_cmd.tree`** +: If this is related to a tree, this is the tree that was used for the current command. + +type: keyword + + +**`zeek.smb_cmd.tree_service`** +: The type of tree (disk share, printer share, named pipe, etc.). + +type: keyword + + + +## file [_file_4] + +If the command referenced a file, store it here. + +**`zeek.smb_cmd.file.name`** +: Filename if one was seen. + +type: keyword + + +**`zeek.smb_cmd.file.action`** +: Action this log record represents. + +type: keyword + + +**`zeek.smb_cmd.file.uid`** +: UID of the referenced file. + +type: keyword + + +**`zeek.smb_cmd.file.host.tx`** +: Address of the transmitting host. + +type: ip + + +**`zeek.smb_cmd.file.host.rx`** +: Address of the receiving host. + +type: ip + + +**`zeek.smb_cmd.smb1_offered_dialects`** +: Present if base/protocols/smb/smb1-main.bro is loaded. Dialects offered by the client.
+ +type: keyword + + +**`zeek.smb_cmd.smb2_offered_dialects`** +: Present if base/protocols/smb/smb2-main.bro is loaded. Dialects offered by the client. + +type: integer + + + +## smb_files [_smb_files] + +Fields exported by the Zeek SMB Files log. + +**`zeek.smb_files.action`** +: Action this log record represents. + +type: keyword + + +**`zeek.smb_files.fid`** +: ID referencing this file. + +type: integer + + +**`zeek.smb_files.name`** +: Filename if one was seen. + +type: keyword + + +**`zeek.smb_files.path`** +: Path pulled from the tree this file was transferred to or from. + +type: keyword + + +**`zeek.smb_files.previous_name`** +: If the rename action was seen, this will be the file’s previous name. + +type: keyword + + +**`zeek.smb_files.size`** +: Byte size of the file. + +type: long + + + +## times [_times] + +Timestamps of the file. + +**`zeek.smb_files.times.accessed`** +: The file’s access time. + +type: date + + +**`zeek.smb_files.times.changed`** +: The file’s change time. + +type: date + + +**`zeek.smb_files.times.created`** +: The file’s create time. + +type: date + + +**`zeek.smb_files.times.modified`** +: The file’s modify time. + +type: date + + +**`zeek.smb_files.uuid`** +: UUID referencing this file if DCE/RPC. + +type: keyword + + + +## smb_mapping [_smb_mapping] + +Fields exported by the Zeek SMB_Mapping log. + +**`zeek.smb_mapping.path`** +: Name of the tree path. + +type: keyword + + +**`zeek.smb_mapping.service`** +: The type of resource of the tree (disk share, printer share, named pipe, etc.). + +type: keyword + + +**`zeek.smb_mapping.native_file_system`** +: File system of the tree. + +type: keyword + + +**`zeek.smb_mapping.share_type`** +: If this is SMB2, a share type will be included. For SMB1, the type of share will be deduced and included as well. + +type: keyword + + + +## smtp [_smtp] + +Fields exported by the Zeek SMTP log. + +**`zeek.smtp.transaction_depth`** +: A count to represent the depth of this message transaction in a single connection where multiple messages were transferred. + +type: integer + + +**`zeek.smtp.helo`** +: Contents of the Helo header. + +type: keyword + + +**`zeek.smtp.mail_from`** +: Email addresses found in the MAIL FROM header. + +type: keyword + + +**`zeek.smtp.rcpt_to`** +: Email addresses found in the RCPT TO header. + +type: keyword + + +**`zeek.smtp.date`** +: Contents of the Date header. + +type: date + + +**`zeek.smtp.from`** +: Contents of the From header. + +type: keyword + + +**`zeek.smtp.to`** +: Contents of the To header. + +type: keyword + + +**`zeek.smtp.cc`** +: Contents of the CC header. + +type: keyword + + +**`zeek.smtp.reply_to`** +: Contents of the ReplyTo header. + +type: keyword + + +**`zeek.smtp.msg_id`** +: Contents of the MsgID header. + +type: keyword + + +**`zeek.smtp.in_reply_to`** +: Contents of the In-Reply-To header. + +type: keyword + + +**`zeek.smtp.subject`** +: Contents of the Subject header. + +type: keyword + + +**`zeek.smtp.x_originating_ip`** +: Contents of the X-Originating-IP header. + +type: keyword + + +**`zeek.smtp.first_received`** +: Contents of the first Received header. + +type: keyword + + +**`zeek.smtp.second_received`** +: Contents of the second Received header. + +type: keyword + + +**`zeek.smtp.last_reply`** +: The last message that the server sent to the client. + +type: keyword + + +**`zeek.smtp.path`** +: The message transmission path, as extracted from the headers. + +type: ip + + +**`zeek.smtp.user_agent`** +: Value of the User-Agent header from the client. 
+ +type: keyword + + +**`zeek.smtp.tls`** +: Indicates that the connection has switched to using TLS. + +type: boolean + + +**`zeek.smtp.process_received_from`** +: Indicates if the "Received: from" headers should still be processed. + +type: boolean + + +**`zeek.smtp.has_client_activity`** +: Indicates if client activity has been seen, but not yet logged. + +type: boolean + + +**`zeek.smtp.fuids`** +: (present if base/protocols/smtp/files.bro is loaded) An ordered vector of file unique IDs seen attached to the message. + +type: keyword + + +**`zeek.smtp.is_webmail`** +: Indicates if the message was sent through a webmail interface. + +type: boolean + + + +## snmp [_snmp] + +Fields exported by the Zeek SNMP log. + +**`zeek.snmp.duration`** +: The amount of time between the first packet belonging to the SNMP session and the latest one seen. + +type: double + + +**`zeek.snmp.version`** +: The version of SNMP being used. + +type: keyword + + +**`zeek.snmp.community`** +: The community string of the first SNMP packet associated with the session. This is used as part of SNMP’s (v1 and v2c) administrative/security framework. See RFC 1157 or RFC 1901. + +type: keyword + + +**`zeek.snmp.get.requests`** +: The number of variable bindings in GetRequest/GetNextRequest PDUs seen for the session. + +type: integer + + +**`zeek.snmp.get.bulk_requests`** +: The number of variable bindings in GetBulkRequest PDUs seen for the session. + +type: integer + + +**`zeek.snmp.get.responses`** +: The number of variable bindings in GetResponse/Response PDUs seen for the session. + +type: integer + + +**`zeek.snmp.set.requests`** +: The number of variable bindings in SetRequest PDUs seen for the session. + +type: integer + + +**`zeek.snmp.display_string`** +: A system description of the SNMP responder endpoint. + +type: keyword + + +**`zeek.snmp.up_since`** +: The time since which the SNMP responder endpoint claims to have been up. + +type: date + + + +## socks [_socks] + +Fields exported by the Zeek SOCKS log. + +**`zeek.socks.version`** +: Protocol version of SOCKS. + +type: integer + + +**`zeek.socks.user`** +: Username used to request a login to the proxy. + +type: keyword + + +**`zeek.socks.password`** +: Password used to request a login to the proxy. + +type: keyword + + +**`zeek.socks.status`** +: Server status for the attempt at using the proxy. + +type: keyword + + +**`zeek.socks.request.host`** +: Client requested SOCKS address. Could be an address, a name or both. + +type: keyword + + +**`zeek.socks.request.port`** +: Client requested port. + +type: integer + + +**`zeek.socks.bound.host`** +: Server bound address. Could be an address, a name or both. + +type: keyword + + +**`zeek.socks.bound.port`** +: Server bound port. + +type: integer + + +**`zeek.socks.capture_password`** +: Determines if the password will be captured for this request. + +type: boolean + + + +## ssh [_ssh] + +Fields exported by the Zeek SSH log. + +**`zeek.ssh.client`** +: The client’s version string. + +type: keyword + + +**`zeek.ssh.direction`** +: Direction of the connection. If the client was a local host logging into an external host, this would be OUTBOUND. INBOUND would be set for the opposite situation. + +type: keyword + + +**`zeek.ssh.host_key`** +: The server’s key thumbprint. + +type: keyword + + +**`zeek.ssh.server`** +: The server’s version string. + +type: keyword + + +**`zeek.ssh.version`** +: SSH major version (1 or 2). + +type: integer + + + +## algorithm [_algorithm] + +Cipher algorithms used in this session.
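+
+For orientation, a `zeek.ssh.algorithm` object might look like the sketch below. The identifiers are standard SSH algorithm names, but the session itself is invented for illustration.
+
+```python
+# Hypothetical zeek.ssh.algorithm object for one SSH session.
+algorithm = {
+    "cipher": "chacha20-poly1305@openssh.com",  # encryption algorithm
+    "compression": "none",                      # compression algorithm
+    "host_key": "ssh-ed25519",                  # server host key algorithm
+    "key_exchange": "curve25519-sha256",        # key exchange algorithm
+    "mac": "hmac-sha2-256",                     # signing (MAC) algorithm
+}
+
+print(", ".join(f"{key}={value}" for key, value in algorithm.items()))
+```
+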
+ +**`zeek.ssh.algorithm.cipher`** +: The encryption algorithm in use. + +type: keyword + + +**`zeek.ssh.algorithm.compression`** +: The compression algorithm in use. + +type: keyword + + +**`zeek.ssh.algorithm.host_key`** +: The server host key’s algorithm. + +type: keyword + + +**`zeek.ssh.algorithm.key_exchange`** +: The key exchange algorithm in use. + +type: keyword + + +**`zeek.ssh.algorithm.mac`** +: The signing (MAC) algorithm in use. + +type: keyword + + +**`zeek.ssh.auth.attempts`** +: The number of authentication attempts we observed. There’s always at least one, since some servers might support no authentication at all. It’s important to note that not all of these are failures, since some servers require two-factor auth (e.g. password AND pubkey). + +type: integer + + +**`zeek.ssh.auth.success`** +: Authentication result. + +type: boolean + + + +## ssl [_ssl_8] + +Fields exported by the Zeek SSL log. + +**`zeek.ssl.version`** +: SSL/TLS version that was logged. + +type: keyword + + +**`zeek.ssl.cipher`** +: SSL/TLS cipher suite that was logged. + +type: keyword + + +**`zeek.ssl.curve`** +: Elliptic curve that was logged when using ECDH/ECDHE. + +type: keyword + + +**`zeek.ssl.resumed`** +: Flag to indicate if the session was resumed reusing the key material exchanged in an earlier connection. + +type: boolean + + +**`zeek.ssl.next_protocol`** +: Next protocol the server chose using the application layer next protocol extension. + +type: keyword + + +**`zeek.ssl.established`** +: Flag to indicate if this SSL session has been established successfully. + +type: boolean + + +**`zeek.ssl.validation.status`** +: Result of certificate validation for this connection. + +type: keyword + + +**`zeek.ssl.validation.code`** +: Result of certificate validation for this connection, given as an OpenSSL validation code. + +type: keyword + + +**`zeek.ssl.last_alert`** +: Last alert that was seen during the connection. + +type: keyword + + +**`zeek.ssl.server.name`** +: Value of the Server Name Indicator SSL/TLS extension. It indicates the server name that the client was requesting. + +type: keyword + + +**`zeek.ssl.server.cert_chain`** +: Chain of certificates offered by the server to validate its complete signing chain. + +type: keyword + + +**`zeek.ssl.server.cert_chain_fuids`** +: An ordered vector of certificate file identifiers for the certificates offered by the server. + +type: keyword + + + +## issuer [_issuer] + +Subject of the signer of the X.509 certificate offered by the server. + +**`zeek.ssl.server.issuer.common_name`** +: Common name of the signer of the X.509 certificate offered by the server. + +type: keyword + + +**`zeek.ssl.server.issuer.country`** +: Country code of the signer of the X.509 certificate offered by the server. + +type: keyword + + +**`zeek.ssl.server.issuer.locality`** +: Locality of the signer of the X.509 certificate offered by the server. + +type: keyword + + +**`zeek.ssl.server.issuer.organization`** +: Organization of the signer of the X.509 certificate offered by the server. + +type: keyword + + +**`zeek.ssl.server.issuer.organizational_unit`** +: Organizational unit of the signer of the X.509 certificate offered by the server. + +type: keyword + + +**`zeek.ssl.server.issuer.state`** +: State or province name of the signer of the X.509 certificate offered by the server. + +type: keyword + + + +## subject [_subject] + +Subject of the X.509 certificate offered by the server.
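+
+The subject sub-fields below correspond to components of the certificate's distinguished name. As a simplified, illustrative sketch only (real DN parsing must handle RFC 4514 escaping; this one assumes no escaped commas or equals signs), a DN string maps onto these sub-fields roughly like so:
+
+```python
+# Simplified mapping from a DN string to the subject sub-fields below.
+DN_KEYS = {
+    "CN": "common_name", "C": "country", "L": "locality",
+    "O": "organization", "OU": "organizational_unit", "ST": "state",
+}
+
+def subject_fields(dn: str) -> dict:
+    """Naive DN split; assumes no escaped ',' or '=' characters."""
+    out = {}
+    for part in dn.split(","):
+        key, _, value = part.strip().partition("=")
+        if key in DN_KEYS:
+            out[DN_KEYS[key]] = value
+    return out
+
+print(subject_fields("CN=www.example.com,O=Example Inc.,ST=California,C=US"))
+# {'common_name': 'www.example.com', 'organization': 'Example Inc.', ...}
+```
+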
+ +**`zeek.ssl.server.subject.common_name`** +: Common name of the X.509 certificate offered by the server. + +type: keyword + + +**`zeek.ssl.server.subject.country`** +: Country code of the X.509 certificate offered by the server. + +type: keyword + + +**`zeek.ssl.server.subject.locality`** +: Locality of the X.509 certificate offered by the server. + +type: keyword + + +**`zeek.ssl.server.subject.organization`** +: Organization of the X.509 certificate offered by the server. + +type: keyword + + +**`zeek.ssl.server.subject.organizational_unit`** +: Organizational unit of the X.509 certificate offered by the server. + +type: keyword + + +**`zeek.ssl.server.subject.state`** +: State or province name of the X.509 certificate offered by the server. + +type: keyword + + +**`zeek.ssl.client.cert_chain`** +: Chain of certificates offered by the client to validate its complete signing chain. + +type: keyword + + +**`zeek.ssl.client.cert_chain_fuids`** +: An ordered vector of certificate file identifiers for the certificates offered by the client. + +type: keyword + + + +## issuer [_issuer_2] + +Subject of the signer of the X.509 certificate offered by the client. + +**`zeek.ssl.client.issuer.common_name`** +: Common name of the signer of the X.509 certificate offered by the client. + +type: keyword + + +**`zeek.ssl.client.issuer.country`** +: Country code of the signer of the X.509 certificate offered by the client. + +type: keyword + + +**`zeek.ssl.client.issuer.locality`** +: Locality of the signer of the X.509 certificate offered by the client. + +type: keyword + + +**`zeek.ssl.client.issuer.organization`** +: Organization of the signer of the X.509 certificate offered by the client. + +type: keyword + + +**`zeek.ssl.client.issuer.organizational_unit`** +: Organizational unit of the signer of the X.509 certificate offered by the client. + +type: keyword + + +**`zeek.ssl.client.issuer.state`** +: State or province name of the signer of the X.509 certificate offered by the client. + +type: keyword + + + +## subject [_subject_2] + +Subject of the X.509 certificate offered by the client. + +**`zeek.ssl.client.subject.common_name`** +: Common name of the X.509 certificate offered by the client. + +type: keyword + + +**`zeek.ssl.client.subject.country`** +: Country code of the X.509 certificate offered by the client. + +type: keyword + + +**`zeek.ssl.client.subject.locality`** +: Locality of the X.509 certificate offered by the client. + +type: keyword + + +**`zeek.ssl.client.subject.organization`** +: Organization of the X.509 certificate offered by the client. + +type: keyword + + +**`zeek.ssl.client.subject.organizational_unit`** +: Organizational unit of the X.509 certificate offered by the client. + +type: keyword + + +**`zeek.ssl.client.subject.state`** +: State or province name of the X.509 certificate offered by the client. + +type: keyword + + + +## stats [_stats_2] + +Fields exported by the Zeek stats log. + +**`zeek.stats.peer`** +: Peer that generated this log. Mostly for clusters. + +type: keyword + + +**`zeek.stats.memory`** +: Amount of memory currently in use in MB. + +type: integer + + +**`zeek.stats.packets.processed`** +: Number of packets processed since the last stats interval. + +type: long + + +**`zeek.stats.packets.dropped`** +: Number of packets dropped since the last stats interval if reading live traffic. + +type: long + + +**`zeek.stats.packets.received`** +: Number of packets seen on the link since the last stats interval if reading live traffic. 
+ +type: long + + +**`zeek.stats.bytes.received`** +: Number of bytes received since the last stats interval if reading live traffic. + +type: long + + +**`zeek.stats.connections.tcp.active`** +: TCP connections currently in memory. + +type: integer + + +**`zeek.stats.connections.tcp.count`** +: TCP connections seen since last stats interval. + +type: integer + + +**`zeek.stats.connections.udp.active`** +: UDP connections currently in memory. + +type: integer + + +**`zeek.stats.connections.udp.count`** +: UDP connections seen since last stats interval. + +type: integer + + +**`zeek.stats.connections.icmp.active`** +: ICMP connections currently in memory. + +type: integer + + +**`zeek.stats.connections.icmp.count`** +: ICMP connections seen since last stats interval. + +type: integer + + +**`zeek.stats.events.processed`** +: Number of events processed since the last stats interval. + +type: integer + + +**`zeek.stats.events.queued`** +: Number of events that have been queued since the last stats interval. + +type: integer + + +**`zeek.stats.timers.count`** +: Number of timers scheduled since last stats interval. + +type: integer + + +**`zeek.stats.timers.active`** +: Current number of scheduled timers. + +type: integer + + +**`zeek.stats.files.count`** +: Number of files seen since last stats interval. + +type: integer + + +**`zeek.stats.files.active`** +: Current number of files actively being seen. + +type: integer + + +**`zeek.stats.dns_requests.count`** +: Number of DNS requests seen since last stats interval. + +type: integer + + +**`zeek.stats.dns_requests.active`** +: Current number of DNS requests awaiting a reply. + +type: integer + + +**`zeek.stats.reassembly_size.tcp`** +: Current size of TCP data in reassembly. + +type: integer + + +**`zeek.stats.reassembly_size.file`** +: Current size of File data in reassembly. + +type: integer + + +**`zeek.stats.reassembly_size.frag`** +: Current size of packet fragment data in reassembly. + +type: integer + + +**`zeek.stats.reassembly_size.unknown`** +: Current size of unknown data in reassembly (this is only the PIA buffer right now). + +type: integer + + +**`zeek.stats.timestamp_lag`** +: Lag between the wall clock and packet timestamps if reading live traffic. + +type: integer + + + +## syslog [_syslog_4] + +Fields exported by the Zeek syslog log. + +**`zeek.syslog.facility`** +: Syslog facility for the message. + +type: keyword + + +**`zeek.syslog.severity`** +: Syslog severity for the message. + +type: keyword + + +**`zeek.syslog.message`** +: The plain text message. + +type: keyword + + + +## tunnel [_tunnel] + +Fields exported by the Zeek tunnel log. + +**`zeek.tunnel.type`** +: The type of tunnel. + +type: keyword + + +**`zeek.tunnel.action`** +: The type of activity that occurred. + +type: keyword + + + +## weird [_weird] + +Fields exported by the Zeek Weird log. + +**`zeek.weird.name`** +: The name of the weird that occurred. + +type: keyword + + +**`zeek.weird.additional_info`** +: Additional information accompanying the weird, if any. + +type: keyword + + +**`zeek.weird.notice`** +: Indicates if this weird was also turned into a notice. + +type: boolean + + +**`zeek.weird.peer`** +: The peer that originated this weird. This is helpful in cluster deployments for identifying which node is having trouble. + +type: keyword + + +**`zeek.weird.identifier`** +: This field is to be provided when a weird is generated for the purpose of deduplicating weirds. 
The identifier string should be unique for a single instance of the weird. This field is used to define when a weird is conceptually a duplicate of a previous weird. + +type: keyword + + + +## x509 [_x509_2] + +Fields exported by the Zeek x509 log. + +**`zeek.x509.id`** +: File ID of this certificate. + +type: keyword + + + +## certificate [_certificate_2] + +Basic information about the certificate. + +**`zeek.x509.certificate.version`** +: Version number. + +type: integer + + +**`zeek.x509.certificate.serial`** +: Serial number. + +type: keyword + + + +## subject [_subject_3] + +Subject. + +**`zeek.x509.certificate.subject.country`** +: Country provided in the certificate subject. + +type: keyword + + +**`zeek.x509.certificate.subject.common_name`** +: Common name provided in the certificate subject. + +type: keyword + + +**`zeek.x509.certificate.subject.locality`** +: Locality provided in the certificate subject. + +type: keyword + + +**`zeek.x509.certificate.subject.organization`** +: Organization provided in the certificate subject. + +type: keyword + + +**`zeek.x509.certificate.subject.organizational_unit`** +: Organizational unit provided in the certificate subject. + +type: keyword + + +**`zeek.x509.certificate.subject.state`** +: State or province provided in the certificate subject. + +type: keyword + + + +## issuer [_issuer_3] + +Issuer. + +**`zeek.x509.certificate.issuer.country`** +: Country provided in the certificate issuer field. + +type: keyword + + +**`zeek.x509.certificate.issuer.common_name`** +: Common name provided in the certificate issuer field. + +type: keyword + + +**`zeek.x509.certificate.issuer.locality`** +: Locality provided in the certificate issuer field. + +type: keyword + + +**`zeek.x509.certificate.issuer.organization`** +: Organization provided in the certificate issuer field. + +type: keyword + + +**`zeek.x509.certificate.issuer.organizational_unit`** +: Organizational unit provided in the certificate issuer field. + +type: keyword + + +**`zeek.x509.certificate.issuer.state`** +: State or province provided in the certificate issuer field. + +type: keyword + + +**`zeek.x509.certificate.common_name`** +: Last (most specific) common name. + +type: keyword + + + +## valid [_valid] + +Certificate validity timestamps. + +**`zeek.x509.certificate.valid.from`** +: Timestamp before which the certificate is not valid. + +type: date + + +**`zeek.x509.certificate.valid.until`** +: Timestamp after which the certificate is not valid. + +type: date + + +**`zeek.x509.certificate.key.algorithm`** +: Name of the key algorithm. + +type: keyword + + +**`zeek.x509.certificate.key.type`** +: Key type, if the key is parseable by openssl (either rsa, dsa or ec). + +type: keyword + + +**`zeek.x509.certificate.key.length`** +: Key length in bits. + +type: integer + + +**`zeek.x509.certificate.signature_algorithm`** +: Name of the signature algorithm. + +type: keyword + + +**`zeek.x509.certificate.exponent`** +: Exponent, if an RSA certificate. + +type: keyword + + +**`zeek.x509.certificate.curve`** +: Curve, if an EC certificate. + +type: keyword + + + +## san [_san] + +Subject alternative name extension of the certificate. + +**`zeek.x509.san.dns`** +: List of DNS entries in SAN. + +type: keyword + + +**`zeek.x509.san.uri`** +: List of URI entries in SAN. + +type: keyword + + +**`zeek.x509.san.email`** +: List of email entries in SAN. + +type: keyword + + +**`zeek.x509.san.ip`** +: List of IP entries in SAN. 
+ +type: ip + + +**`zeek.x509.san.other_fields`** +: True if the certificate contained other name fields that were not recognized or parsed. + +type: boolean + + + +## basic_constraints [_basic_constraints] + +Basic constraints extension of the certificate. + +**`zeek.x509.basic_constraints.certificate_authority`** +: CA flag set or not. + +type: boolean + + +**`zeek.x509.basic_constraints.path_length`** +: Maximum path length. + +type: integer + + +**`zeek.x509.log_cert`** +: Present if policy/protocols/ssl/log-hostcerts-only.bro is loaded. Logging of the certificate is suppressed if set to F. + +type: boolean + + diff --git a/docs/reference/filebeat/exported-fields-zookeeper.md b/docs/reference/filebeat/exported-fields-zookeeper.md new file mode 100644 index 000000000000..472be5994b44 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-zookeeper.md @@ -0,0 +1,58 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-zookeeper.html +--- + +# ZooKeeper fields [exported-fields-zookeeper] + +ZooKeeper Module + + +## zookeeper [_zookeeper] + + +## audit [_audit_6] + +ZooKeeper Audit logs. + +**`zookeeper.audit.session`** +: Client session id + +type: keyword + + +**`zookeeper.audit.znode`** +: Path of the znode + +type: keyword + + +**`zookeeper.audit.znode_type`** +: Type of znode in case of creation operation + +type: keyword + + +**`zookeeper.audit.acl`** +: String representation of the znode ACL, like cdrwa (create, delete, read, write, admin). This is logged only for the setAcl operation + +type: keyword + + +**`zookeeper.audit.result`** +: Result of the operation. Possible values are (success/failure/invoked). Result "invoked" is used for the serverStop operation because the stop is logged before ensuring that the server actually stopped. + +type: keyword + + +**`zookeeper.audit.user`** +: Comma-separated list of users who are associated with a client session + +type: keyword + + + +## log [_log_14] + +ZooKeeper logs. + diff --git a/docs/reference/filebeat/exported-fields-zoom.md b/docs/reference/filebeat/exported-fields-zoom.md new file mode 100644 index 000000000000..efc2777b95e4 --- /dev/null +++ b/docs/reference/filebeat/exported-fields-zoom.md @@ -0,0 +1,932 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-zoom.html +--- + +# Zoom fields [exported-fields-zoom] + +Module for handling incoming Zoom webhook requests + + +## zoom [_zoom] + +Module for parsing Zoom API Webhooks. 
+ +**`zoom.master_account_id`** +: Master Account related to a specific Sub Account + +type: keyword + + +**`zoom.sub_account_id`** +: Related Sub Account + +type: keyword + + +**`zoom.operator_id`** +: UserID that triggered the event + +type: keyword + + +**`zoom.operator`** +: Username/Email related to the user that triggered the event + +type: keyword + + +**`zoom.account_id`** +: Related accountID to the event + +type: keyword + + +**`zoom.timestamp`** +: Timestamp related to the event + +type: date + + +**`zoom.creation_type`** +: Creation type + +type: keyword + + +**`zoom.account.owner_id`** +: UserID of the user whose sub account was created/disassociated + +type: keyword + + +**`zoom.account.email`** +: Email related to the user the action was performed on + +type: keyword + + +**`zoom.account.owner_email`** +: Email of the user whose sub account was created/disassociated + +type: keyword + + +**`zoom.account.account_name`** +: When an account name is updated, this is the new value set + +type: keyword + + +**`zoom.account.account_alias`** +: When an account alias is updated, this is the new value set + +type: keyword + + +**`zoom.account.account_support_name`** +: When an account support_name is updated, this is the new value set + +type: keyword + + +**`zoom.account.account_support_email`** +: When an account support_email is updated, this is the new value set + +type: keyword + + +**`zoom.chat_channel.name`** +: The name of the channel that has been added/modified/deleted + +type: keyword + + +**`zoom.chat_channel.id`** +: The ID of the channel that has been added/modified/deleted + +type: keyword + + +**`zoom.chat_channel.type`** +: Type of channel related to the event. Can be 1(Invite-Only), 2(Private) or 3(Public) + +type: keyword + + +**`zoom.chat_message.id`** +: Unique ID of the related chat message + +type: keyword + + +**`zoom.chat_message.type`** +: Type of message, can be either "to_contact" or "to_channel" + +type: keyword + + +**`zoom.chat_message.session_id`** +: SessionID for the channel related to the message + +type: keyword + + +**`zoom.chat_message.contact_email`** +: Email address related to the user sending the message + +type: keyword + + +**`zoom.chat_message.contact_id`** +: UserID belonging to the user receiving a message + +type: keyword + + +**`zoom.chat_message.channel_id`** +: ChannelID related to the message + +type: keyword + + +**`zoom.chat_message.channel_name`** +: Channel name related to the message + +type: keyword + + +**`zoom.chat_message.message`** +: A string containing the full message that was sent + +type: keyword + + +**`zoom.meeting.id`** +: Unique ID of the related meeting + +type: keyword + + +**`zoom.meeting.uuid`** +: The UUID of the related meeting + +type: keyword + + +**`zoom.meeting.host_id`** +: The UserID of the configured meeting host + +type: keyword + + +**`zoom.meeting.topic`** +: Topic of the related meeting + +type: keyword + + +**`zoom.meeting.type`** +: Type of meeting created + +type: keyword + + +**`zoom.meeting.start_time`** +: Date and time the meeting started + +type: date + + +**`zoom.meeting.timezone`** +: Which timezone is used for the meeting timestamps + +type: keyword + + +**`zoom.meeting.duration`** +: The duration of a meeting in minutes + +type: long + + +**`zoom.meeting.issues`** +: When a user reports an issue with the meeting, for example: "Unstable audio quality" + +type: keyword + + +**`zoom.meeting.password`** +: Password related to the meeting + +type: keyword + + +**`zoom.phone.id`** +: Unique 
ID for the phone or conversation + +type: keyword + + +**`zoom.phone.user_id`** +: UserID for the phone owner related to a Call Log being completed + +type: keyword + + +**`zoom.phone.download_url`** +: Download URL for the voicemail + +type: keyword + + +**`zoom.phone.ringing_start_time`** +: The timestamp when a ringtone was established to the callee + +type: date + + +**`zoom.phone.connected_start_time`** +: The date and time when the call was connected + +type: date + + +**`zoom.phone.answer_start_time`** +: The date and time when the call was answered + +type: date + + +**`zoom.phone.call_end_time`** +: The date and time when the call ended + +type: date + + +**`zoom.phone.call_id`** +: Unique ID of the related call + +type: keyword + + +**`zoom.phone.duration`** +: Duration of a voicemail in minutes + +type: long + + +**`zoom.phone.caller.id`** +: UserID of the caller related to the voicemail/call + +type: keyword + + +**`zoom.phone.caller.user_id`** +: UserID of the person who initiated the call + +type: keyword + + +**`zoom.phone.caller.number_type`** +: The type of number, can be 1(Internal) or 2(External) + +type: keyword + + +**`zoom.phone.caller.name`** +: The name of the related caller + +type: keyword + + +**`zoom.phone.caller.phone_number`** +: Phone number of the caller related to the call + +type: keyword + + +**`zoom.phone.caller.extension_type`** +: Extension type of the caller number, can be user, callQueue, autoReceptionist or shareLineGroup + +type: keyword + + +**`zoom.phone.caller.extension_number`** +: Extension number of the caller + +type: keyword + + +**`zoom.phone.caller.timezone`** +: Timezone of the caller + +type: keyword + + +**`zoom.phone.caller.device_type`** +: Device type used by the caller + +type: keyword + + +**`zoom.phone.callee.id`** +: UserID of the callee related to the voicemail/call + +type: keyword + + +**`zoom.phone.callee.user_id`** +: UserID of the related callee of a voicemail/call + +type: keyword + + +**`zoom.phone.callee.name`** +: The name of the related callee + +type: keyword + + +**`zoom.phone.callee.number_type`** +: The type of number, can be 1(Internal) or 2(External) + +type: keyword + + +**`zoom.phone.callee.phone_number`** +: Phone number of the callee related to the call + +type: keyword + + +**`zoom.phone.callee.extension_type`** +: Extension type of the callee number, can be user, callQueue, autoReceptionist or shareLineGroup + +type: keyword + + +**`zoom.phone.callee.extension_number`** +: Extension number of the callee related to the call + +type: keyword + + +**`zoom.phone.callee.timezone`** +: Timezone of the callee related to the call + +type: keyword + + +**`zoom.phone.callee.device_type`** +: Device type used by the callee related to the call + +type: keyword + + +**`zoom.phone.date_time`** +: Date and time of the related phone event + +type: date + + +**`zoom.recording.id`** +: Unique ID of the related recording + +type: keyword + + +**`zoom.recording.uuid`** +: UUID of the related recording + +type: keyword + + +**`zoom.recording.host_id`** +: UserID of the host of the meeting that was recorded + +type: keyword + + +**`zoom.recording.topic`** +: Topic of the meeting related to the recording + +type: keyword + + +**`zoom.recording.type`** +: Type of recording. It can be one of multiple values; please check the Zoom documentation + +type: keyword + + +**`zoom.recording.start_time`** +: The date and time when the recording started + +type: date + + +**`zoom.recording.timezone`** +: The timezone used for 
the recording date + +type: keyword + + +**`zoom.recording.duration`** +: Duration of the recording in minutes + +type: long + + +**`zoom.recording.share_url`** +: The URL to access the recording + +type: keyword + + +**`zoom.recording.total_size`** +: Total size of the recording in bytes + +type: long + + +**`zoom.recording.recording_count`** +: Number of recording files related to the recording + +type: long + + +**`zoom.recording.recording_file.recording_start`** +: The date and time the recording started + +type: date + + +**`zoom.recording.recording_file.recording_end`** +: The date and time the recording finished + +type: date + + +**`zoom.recording.host_email`** +: Email address of the host related to the meeting that was recorded + +type: keyword + + +**`zoom.user.id`** +: UserID related to the user event + +type: keyword + + +**`zoom.user.first_name`** +: User first name related to the user event + +type: keyword + + +**`zoom.user.last_name`** +: User last name related to the user event + +type: keyword + + +**`zoom.user.email`** +: User email related to the user event + +type: keyword + + +**`zoom.user.type`** +: User type related to the user event + +type: keyword + + +**`zoom.user.phone_number`** +: User phone number related to the user event + +type: keyword + + +**`zoom.user.phone_country`** +: User country code related to the user event + +type: keyword + + +**`zoom.user.company`** +: User company related to the user event + +type: keyword + + +**`zoom.user.pmi`** +: User personal meeting ID related to the user event + +type: keyword + + +**`zoom.user.use_pmi`** +: If a user has PMI enabled + +type: boolean + + +**`zoom.user.pic_url`** +: Full URL to the profile picture used by the user + +type: keyword + + +**`zoom.user.vanity_name`** +: Name of the personal meeting room related to the user event + +type: keyword + + +**`zoom.user.timezone`** +: Timezone configured for the user + +type: keyword + + +**`zoom.user.language`** +: Language configured for the user + +type: keyword + + +**`zoom.user.host_key`** +: Host key set for the user + +type: keyword + + +**`zoom.user.role`** +: The configured role for the user + +type: keyword + + +**`zoom.user.dept`** +: The configured department for the user + +type: keyword + + +**`zoom.user.presence_status`** +: Current presence status of the user + +type: keyword + + +**`zoom.user.personal_notes`** +: Personal notes for the user + +type: keyword + + +**`zoom.user.client_type`** +: Type of client used by the user. Can be browser, mac, win, iphone or android + +type: keyword + + +**`zoom.user.version`** +: Version of the client used by the user + +type: keyword + + +**`zoom.webinar.id`** +: Unique ID for the related webinar + +type: keyword + + +**`zoom.webinar.join_url`** +: The URL configured to join the webinar + +type: keyword + + +**`zoom.webinar.uuid`** +: UUID for the related webinar + +type: keyword + + +**`zoom.webinar.host_id`** +: UserID for the configured host of the webinar + +type: keyword + + +**`zoom.webinar.topic`** +: Meeting topic of the related webinar + +type: keyword + + +**`zoom.webinar.type`** +: Type of webinar created. 
Can be either 5(Webinar), 6(Recurring webinar without fixed time) or 9(Recurring webinar with fixed time) + +type: keyword + + +**`zoom.webinar.start_time`** +: The date and time when the webinar started + +type: date + + +**`zoom.webinar.timezone`** +: Timezone used for the dates related to the webinar + +type: keyword + + +**`zoom.webinar.duration`** +: Duration of the webinar in minutes + +type: long + + +**`zoom.webinar.agenda`** +: The configured agenda of the webinar + +type: keyword + + +**`zoom.webinar.password`** +: Password configured to access the webinar + +type: keyword + + +**`zoom.webinar.issues`** +: Any issues reported about a webinar are recorded in this field + +type: keyword + + +**`zoom.zoomroom.id`** +: Unique ID of the Zoom room + +type: keyword + + +**`zoom.zoomroom.room_name`** +: The configured name of the Zoom room + +type: keyword + + +**`zoom.zoomroom.calendar_name`** +: Calendar name of the Zoom room + +type: keyword + + +**`zoom.zoomroom.calendar_id`** +: Unique ID of the calendar used by the Zoom room + +type: keyword + + +**`zoom.zoomroom.event_id`** +: Unique ID of the calendar event associated with the Zoom Room + +type: keyword + + +**`zoom.zoomroom.change_key`** +: Key used by Microsoft products integration that represents a specific version of a calendar + +type: keyword + + +**`zoom.zoomroom.resource_email`** +: Email address associated with the calendar in use by the Zoom room + +type: keyword + + +**`zoom.zoomroom.email`** +: Email address associated with the Zoom room itself + +type: keyword + + +**`zoom.zoomroom.issue`** +: Any reported alerts or issues related to the Zoom room or its equipment + +type: keyword + + +**`zoom.zoomroom.alert_type`** +: An integer value representing the type of alert. The list of alert types can be found in the Zoom documentation + +type: keyword + + +**`zoom.zoomroom.component`** +: An integer value representing the type of equipment or component. The list of component types can be found in the Zoom documentation + +type: keyword + + +**`zoom.zoomroom.alert_kind`** +: An integer value showing if the Zoom room alert has been either 1(Triggered) or 2(Cleared) + +type: keyword + + +**`zoom.registrant.id`** +: Unique ID of the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.status`** +: Status of the specific user registration + +type: keyword + + +**`zoom.registrant.email`** +: Email of the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.first_name`** +: First name of the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.last_name`** +: Last name of the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.address`** +: Address of the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.city`** +: City of the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.country`** +: Country of the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.zip`** +: Zip code of the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.state`** +: State of the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.phone`** +: Phone number of the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.industry`** +: Related industry of the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.org`** +: Organization 
related to the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.job_title`** +: Job title of the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.purchasing_time_frame`** +: Chosen purchase timeframe of the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.role_in_purchase_process`** +: Chosen role in a purchase process related to the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.no_of_employees`** +: Number of employees chosen by the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.comments`** +: Comments left by the user registering to a meeting or webinar + +type: keyword + + +**`zoom.registrant.join_url`** +: The URL that the registrant can use to join the webinar + +type: keyword + + +**`zoom.participant.id`** +: Unique ID of the participant related to a meeting + +type: keyword + + +**`zoom.participant.user_id`** +: UserID of the participant related to a meeting + +type: keyword + + +**`zoom.participant.user_name`** +: Username of the participant related to a meeting + +type: keyword + + +**`zoom.participant.join_time`** +: The date and time a participant joined a meeting + +type: date + + +**`zoom.participant.leave_time`** +: The date and time a participant left a meeting + +type: date + + +**`zoom.participant.sharing_details.link_source`** +: Method of sharing with the Dropbox integration + +type: keyword + + +**`zoom.participant.sharing_details.content`** +: Type of content that was shared + +type: keyword + + +**`zoom.participant.sharing_details.file_link`** +: The file link that was shared + +type: keyword + + +**`zoom.participant.sharing_details.date_time`** +: Timestamp when the sharing started + +type: keyword + + +**`zoom.participant.sharing_details.source`** +: The file source that was shared + +type: keyword + + +**`zoom.old_values`** +: Includes the old values when updating an object like user, meeting, account or webinar + +type: flattened + + +**`zoom.settings`** +: The current active settings related to an object like user, meeting, account or webinar + +type: flattened + + diff --git a/docs/reference/filebeat/exported-fields.md b/docs/reference/filebeat/exported-fields.md new file mode 100644 index 000000000000..7bfb7491a137 --- /dev/null +++ b/docs/reference/filebeat/exported-fields.md @@ -0,0 +1,79 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields.html +--- + +# Exported fields [exported-fields] + +This document describes the fields that are exported by Filebeat. 
They are grouped in the following categories: + +* [*ActiveMQ fields*](/reference/filebeat/exported-fields-activemq.md) +* [*Apache fields*](/reference/filebeat/exported-fields-apache.md) +* [*Auditd fields*](/reference/filebeat/exported-fields-auditd.md) +* [*AWS fields*](/reference/filebeat/exported-fields-aws.md) +* [*AWS CloudWatch fields*](/reference/filebeat/exported-fields-aws-cloudwatch.md) +* [*AWS Fargate fields*](/reference/filebeat/exported-fields-awsfargate.md) +* [*Azure fields*](/reference/filebeat/exported-fields-azure.md) +* [*Beat fields*](/reference/filebeat/exported-fields-beat-common.md) +* [*Decode CEF processor fields fields*](/reference/filebeat/exported-fields-cef.md) +* [*CEF fields*](/reference/filebeat/exported-fields-cef-module.md) +* [*Checkpoint fields*](/reference/filebeat/exported-fields-checkpoint.md) +* [*Cisco fields*](/reference/filebeat/exported-fields-cisco.md) +* [*Cloud provider metadata fields*](/reference/filebeat/exported-fields-cloud.md) +* [*Coredns fields*](/reference/filebeat/exported-fields-coredns.md) +* [*Crowdstrike fields*](/reference/filebeat/exported-fields-crowdstrike.md) +* [*CyberArk PAS fields*](/reference/filebeat/exported-fields-cyberarkpas.md) +* [*Docker fields*](/reference/filebeat/exported-fields-docker-processor.md) +* [*ECS fields*](/reference/filebeat/exported-fields-ecs.md) +* [*Elasticsearch fields*](/reference/filebeat/exported-fields-elasticsearch.md) +* [*Envoyproxy fields*](/reference/filebeat/exported-fields-envoyproxy.md) +* [*Fortinet fields*](/reference/filebeat/exported-fields-fortinet.md) +* [*Google Cloud Platform (GCP) fields*](/reference/filebeat/exported-fields-gcp.md) +* [*google_workspace fields*](/reference/filebeat/exported-fields-google_workspace.md) +* [*HAProxy fields*](/reference/filebeat/exported-fields-haproxy.md) +* [*Host fields*](/reference/filebeat/exported-fields-host-processor.md) +* [*ibmmq fields*](/reference/filebeat/exported-fields-ibmmq.md) +* [*Icinga fields*](/reference/filebeat/exported-fields-icinga.md) +* [*IIS fields*](/reference/filebeat/exported-fields-iis.md) +* [*iptables fields*](/reference/filebeat/exported-fields-iptables.md) +* [*Jolokia Discovery autodiscover provider fields*](/reference/filebeat/exported-fields-jolokia-autodiscover.md) +* [*Juniper JUNOS fields*](/reference/filebeat/exported-fields-juniper.md) +* [*Kafka fields*](/reference/filebeat/exported-fields-kafka.md) +* [*kibana fields*](/reference/filebeat/exported-fields-kibana.md) +* [*Kubernetes fields*](/reference/filebeat/exported-fields-kubernetes-processor.md) +* [*Log file content fields*](/reference/filebeat/exported-fields-log.md) +* [*logstash fields*](/reference/filebeat/exported-fields-logstash.md) +* [*Lumberjack fields*](/reference/filebeat/exported-fields-lumberjack.md) +* [*Microsoft fields*](/reference/filebeat/exported-fields-microsoft.md) +* [*MISP fields*](/reference/filebeat/exported-fields-misp.md) +* [*mongodb fields*](/reference/filebeat/exported-fields-mongodb.md) +* [*mssql fields*](/reference/filebeat/exported-fields-mssql.md) +* [*MySQL fields*](/reference/filebeat/exported-fields-mysql.md) +* [*MySQL Enterprise fields*](/reference/filebeat/exported-fields-mysqlenterprise.md) +* [*NATS fields*](/reference/filebeat/exported-fields-nats.md) +* [*NetFlow fields*](/reference/filebeat/exported-fields-netflow.md) +* [*Nginx fields*](/reference/filebeat/exported-fields-nginx.md) +* [*Office 365 fields*](/reference/filebeat/exported-fields-o365.md) +* [*Okta 
fields*](/reference/filebeat/exported-fields-okta.md) +* [*Oracle fields*](/reference/filebeat/exported-fields-oracle.md) +* [*Osquery fields*](/reference/filebeat/exported-fields-osquery.md) +* [*panw fields*](/reference/filebeat/exported-fields-panw.md) +* [*Pensando fields*](/reference/filebeat/exported-fields-pensando.md) +* [*PostgreSQL fields*](/reference/filebeat/exported-fields-postgresql.md) +* [*Process fields*](/reference/filebeat/exported-fields-process.md) +* [*RabbitMQ fields*](/reference/filebeat/exported-fields-rabbitmq.md) +* [*Redis fields*](/reference/filebeat/exported-fields-redis.md) +* [*s3 fields*](/reference/filebeat/exported-fields-s3.md) +* [*Salesforce fields*](/reference/filebeat/exported-fields-salesforce.md) +* [*Google Santa fields*](/reference/filebeat/exported-fields-santa.md) +* [*Snyk fields*](/reference/filebeat/exported-fields-snyk.md) +* [*sophos fields*](/reference/filebeat/exported-fields-sophos.md) +* [*Suricata fields*](/reference/filebeat/exported-fields-suricata.md) +* [*System fields*](/reference/filebeat/exported-fields-system.md) +* [*threatintel fields*](/reference/filebeat/exported-fields-threatintel.md) +* [*Traefik fields*](/reference/filebeat/exported-fields-traefik.md) +* [*Windows ETW fields*](/reference/filebeat/exported-fields-winlog.md) +* [*Zeek fields*](/reference/filebeat/exported-fields-zeek.md) +* [*ZooKeeper fields*](/reference/filebeat/exported-fields-zookeeper.md) +* [*Zoom fields*](/reference/filebeat/exported-fields-zoom.md) + diff --git a/docs/reference/filebeat/extract-array.md b/docs/reference/filebeat/extract-array.md new file mode 100644 index 000000000000..a2f1d1287309 --- /dev/null +++ b/docs/reference/filebeat/extract-array.md @@ -0,0 +1,46 @@ +--- +navigation_title: "extract_array" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/extract-array.html +--- + +# Extract array [extract-array] + + +::::{warning} +This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. +:::: + + +The `extract_array` processor populates fields with values read from an array field. The following example will populate `source.ip` with the first element of the `my_array` field, `destination.ip` with the second element, and `network.transport` with the third. + +```yaml +processors: + - extract_array: + field: my_array + mappings: + source.ip: 0 + destination.ip: 1 + network.transport: 2 +``` + +The following settings are supported: + +`field` +: The array field whose elements are to be extracted. + +`mappings` +: Maps each field name to an array index. Use 0 for the first element in the array. Multiple fields can be mapped to the same array element. + +`ignore_missing` +: (Optional) Whether to ignore events where the array field is missing. The default is `false`, which will fail processing of an event if the specified field does not exist. Set it to `true` to ignore this condition. + +`overwrite_keys` +: Whether the target fields specified in the mapping are overwritten if they already exist. The default is `false`, which will fail processing if a target field already exists. + +`fail_on_error` +: (Optional) If set to `true` and an error happens, changes to the event are reverted, and the original event is returned. If set to `false`, processing continues despite errors. Default is `true`. 
+ +`omit_empty` +: (Optional) Whether empty values are extracted from the array. If set to `true`, instead of the target field being set to an empty value, it is left unset. The empty string (`""`), an empty array (`[]`) or an empty object (`{}`) are considered empty values. Default is `false`. + diff --git a/docs/reference/filebeat/faq-deleted-files-are-not-freed.md b/docs/reference/filebeat/faq-deleted-files-are-not-freed.md new file mode 100644 index 000000000000..1dfc8c65083c --- /dev/null +++ b/docs/reference/filebeat/faq-deleted-files-are-not-freed.md @@ -0,0 +1,11 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/faq-deleted-files-are-not-freed.html +--- + +# Filebeat keeps open file handlers of deleted files for a long time [faq-deleted-files-are-not-freed] + +By default, Filebeat opens files and keeps them open until it reaches the end of them. In situations when the configured output is blocked (e.g. {{es}} or {{ls}} is unavailable) for a long time, this can cause Filebeat to keep open file handlers for files that were deleted from the file system in the meantime. As long as Filebeat keeps the deleted files open, the operating system doesn’t free up the space on disk, which can lead to increased disk utilization or even out-of-disk situations. + +To mitigate this issue, you can set the [`close_timeout`](/reference/filebeat/filebeat-input-log.md#filebeat-input-log-close-timeout) setting to `5m`. This will ensure every file handler is closed once every 5 minutes, regardless of whether it reached EOF or not. Note that this option can lead to data loss if the file is deleted before Filebeat reaches the end of the file. + diff --git a/docs/reference/filebeat/faq.md b/docs/reference/filebeat/faq.md new file mode 100644 index 000000000000..f93c2a3a8ee8 --- /dev/null +++ b/docs/reference/filebeat/faq.md @@ -0,0 +1,33 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/faq.html +--- + +# Common problems [faq] + +This section describes common problems you might encounter with Filebeat. Also check out the [Filebeat discussion forum](https://discuss.elastic.co/c/beats/filebeat). + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/reference/filebeat/feature-roles.md b/docs/reference/filebeat/feature-roles.md new file mode 100644 index 000000000000..3370ecc55f72 --- /dev/null +++ b/docs/reference/filebeat/feature-roles.md @@ -0,0 +1,25 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/feature-roles.html +--- + +# Grant users access to secured resources [feature-roles] + +You can use role-based access control to grant users access to secured resources. The roles that you set up depend on your organization’s security requirements and the minimum privileges required to use specific features. 
+ +Typically you need to create the following separate roles: + +* [setup role](/reference/filebeat/privileges-to-setup-beats.md) for setting up index templates and other dependencies +* [monitoring role](/reference/filebeat/privileges-to-publish-monitoring.md) for sending monitoring information +* [writer role](/reference/filebeat/privileges-to-publish-events.md) for publishing events collected by Filebeat +* [reader role](/reference/filebeat/kibana-user-privileges.md) for {{kib}} users who need to view and create visualizations that access Filebeat data + +{{es-security-features}} provides [built-in roles](elasticsearch://reference/elasticsearch/roles.md) that grant a subset of the privileges needed by Filebeat users. When possible, use the built-in roles to minimize the effect of future changes on your security strategy. + +Instead of using usernames and passwords, roles and privileges can be assigned to API keys to grant access to Elasticsearch resources. See [*Grant access using API keys*](/reference/filebeat/beats-api-keys.md) for more information. + + + + + + diff --git a/docs/reference/filebeat/fields-not-indexed.md b/docs/reference/filebeat/fields-not-indexed.md new file mode 100644 index 000000000000..e4faf305c462 --- /dev/null +++ b/docs/reference/filebeat/fields-not-indexed.md @@ -0,0 +1,13 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/fields-not-indexed.html +--- + +# Fields are not indexed or usable in Kibana visualizations [fields-not-indexed] + +If you have recently performed an operation that loads or parses custom, structured logs, you might need to refresh the index to make the fields available in {{kib}}. To refresh the index, use the [refresh API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-refresh). For example: + +```sh +curl -XPOST 'http://localhost:9200/filebeat-2016.08.09/_refresh' +``` + diff --git a/docs/reference/filebeat/file-log-rotation.md b/docs/reference/filebeat/file-log-rotation.md new file mode 100644 index 000000000000..8cd24ffe26a9 --- /dev/null +++ b/docs/reference/filebeat/file-log-rotation.md @@ -0,0 +1,68 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/file-log-rotation.html +--- + +# Log rotation results in lost or duplicate events [file-log-rotation] + +Filebeat supports reading from rotating log files. However, some log rotation strategies can result in lost or duplicate events when using Filebeat to forward messages. To resolve this issue: + +* **Avoid log rotation strategies that copy and truncate log files** + + Log rotation strategies that copy and truncate the input log file can result in Filebeat sending duplicate events. This happens because Filebeat identifies files by inode and device name. During log rotation, lines that Filebeat has already processed are moved to a new file. When Filebeat encounters the new file, it reads from the beginning because the previous state information (the offset and read timestamp) is associated with the inode and device name of the old file. + + Furthermore, strategies that copy and truncate the input log file can result in lost events if lines are written to the log file after it’s copied, but before it’s truncated. + +* **Make sure Filebeat is configured to read from all rotated logs** + + When an input log file is moved or renamed during log rotation, Filebeat is able to recognize that the file has already been read. 
After the file is rotated, a new log file is created, and the application continues logging. Filebeat picks up the new file during the next scan. Because the file has a new inode and device name, Filebeat starts reading it from the beginning. + + To avoid missing events from a rotated file, configure the input to read from the log file and all the rotated files. For examples, see [Example configurations](#log-rotate-example). + + +If you’re using Windows, also see [More about log rotation on Windows](#log-rotation-windows). + + +## Example configurations [log-rotate-example] + +This section shows a typical configuration for logrotate, a popular tool for doing log rotation on Linux, followed by a Filebeat configuration that reads all the rotated logs. + + +### logrotate.conf [log-rotate-example-logrotate] + +In this example, Filebeat reads a web server log. The logs are rotated every day, and the new file is created with the specified permissions. + +```yaml +/var/log/my-server/my-server.log { + daily + missingok + rotate 7 + notifempty + create 0640 www-data www-data +} +``` + + +### filebeat.yml [log-rotate-example-filebeat] + +In this example, Filebeat is configured to read all log files to make sure it does not miss any events. + +```yaml +filebeat.inputs: +- type: filestream + id: my-server-filestream-id + paths: + - /var/log/my-server/my-server.log* +``` + + +## More about log rotation on Windows [log-rotation-windows] + +On Windows, log rotation schemes that delete old files and rename newer files to old filenames might get blocked if the old files are being processed by Filebeat. This happens because Windows does not delete files and file metadata until the last process has closed the file. Unlike most *nix filesystems, a Windows filename cannot be reused until all processes accessing the file have closed the deleted file. + +To avoid this problem, use dates in rotated filenames. The file will never be renamed to an older filename, and the log writer and log rotator will always be able to open the file. This approach also greatly reduces the chance of log writing, rotation, and collection interfering with each other. + +Because log rotation is typically handled by the logging application, we are not providing an example configuration for Windows. + +Also read [Open file handlers cause issues with Windows file rotation](/reference/filebeat/windows-file-rotation.md). + diff --git a/docs/reference/filebeat/file-output.md b/docs/reference/filebeat/file-output.md new file mode 100644 index 000000000000..e499b2d70d27 --- /dev/null +++ b/docs/reference/filebeat/file-output.md @@ -0,0 +1,89 @@ +--- +navigation_title: "File" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/file-output.html +--- + +# Configure the File output [file-output] + + +The File output dumps the transactions into a file where each transaction is in JSON format. Currently, this output is used for testing, but it can be used as input for Logstash. + +To use this output, edit the Filebeat configuration file to disable the {{es}} output by commenting it out, and enable the file output by adding `output.file`. 
+ +Example configuration: + +```yaml +output.file: + path: "/tmp/filebeat" + filename: filebeat + #rotate_every_kb: 10000 + #number_of_files: 7 + #permissions: 0600 + #rotate_on_startup: true +``` + +## Configuration options [_configuration_options_29] + +You can specify the following `output.file` options in the `filebeat.yml` config file: + +### `enabled` [_enabled_34] + +The `enabled` config is a boolean setting to enable or disable the output. If set to `false`, the output is disabled. + +The default value is `true`. + + +### `path` [path] + +The path to the directory where the generated files will be saved. This option is mandatory. + +The path may include the timestamp when the file output is initialized using the `+FORMAT` syntax where `FORMAT` is a valid [time format](https://github.com/elastic/beats/blob/main/libbeat/common/dtfmt/doc.go), and enclosed with expansion braces: `%{+FORMAT}`. For example: + +``` +path: 'fileoutput-%{+yyyy.MM.dd}' +``` + + +### `filename` [_filename] + +The name of the generated files. The default is set to the Beat name. For example, the files generated by default for Filebeat would be `"filebeat-{{datetime}}.ndjson"`, `"filebeat-{{datetime}}-1.ndjson"`, `"filebeat-{{datetime}}-2.ndjson"`, and so on. + + +### `rotate_every_kb` [_rotate_every_kb] + +The maximum size in kilobytes of each file. When this size is reached, the files are rotated. The default value is 10240 KB. + + +### `number_of_files` [_number_of_files] + +The maximum number of files to save under [`path`](#path). When this number of files is reached, the oldest file is deleted, and the rest of the files are shifted from last to first. The number of files must be between 2 and 1024. The default is 7. + + +### `permissions` [_permissions] + +Permissions to use for file creation. The default is 0600. + + +### `rotate_on_startup` [_rotate_on_startup] + +If the output file already exists on startup, immediately rotate it and start writing to a new file instead of appending to the existing one. Defaults to true. + + +### `codec` [_codec_3] + +Output codec configuration. If the `codec` section is missing, events will be JSON encoded. + +See [Change the output codec](/reference/filebeat/configuration-output-codec.md) for more information. + + +### `queue` [_queue_5] + +Configuration options for the internal queue. + +See [Internal queue](/reference/filebeat/configuring-internal-queue.md) for more information. + +Note: `queue` options can be set under `filebeat.yml` or the `output` section but not both. + + + diff --git a/docs/reference/filebeat/filebeat-configuration-reloading.md b/docs/reference/filebeat/filebeat-configuration-reloading.md new file mode 100644 index 000000000000..5dc1bcbc8421 --- /dev/null +++ b/docs/reference/filebeat/filebeat-configuration-reloading.md @@ -0,0 +1,88 @@ +--- +navigation_title: "Config file loading" +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-reloading.html +--- + +# Load external configuration files [filebeat-configuration-reloading] + + +Filebeat can load external configuration files for inputs and modules, allowing you to separate your configuration into multiple smaller configuration files. See the [Input config](#load-input-config) and the [Module config](#load-module-config) sections for details. + +::::{note} +On systems with POSIX file permissions, all Beats configuration files are subject to ownership and file permission checks. 
For more information, see [Config File Ownership and Permissions](/reference/libbeat/config-file-permissions.md). +:::: + + + +## Input config [load-input-config] + +For input configurations, you specify the `path` option in the `filebeat.config.inputs` section of the `filebeat.yml` file. For example: + +```sh +filebeat.config.inputs: + enabled: true + path: inputs.d/*.yml +``` + +Each file found by the `path` Glob must contain a list of one or more input definitions. + +::::{tip} +The first line of each external configuration file must be an input definition that starts with `- type`. Make sure you omit the line `filebeat.config.inputs` from this file. All [`input type configuration options`](/reference/filebeat/configuration-filebeat-options.md#filebeat-input-types) must be specified within each external configuration file. Specifying these configuration options at the global `filebeat.config.inputs` level is not supported. +:::: + + +Example external configuration file: + +```yaml +- type: filestream + id: first + paths: + - /var/log/mysql.log + prospector.scanner.check_interval: 10s + +- type: filestream + id: second + paths: + - /var/log/apache.log + prospector.scanner.check_interval: 5s +``` + +::::{warning} +It is critical that two running inputs DO NOT have overlapping file paths defined. If more than one input harvests the same file at the same time, it can lead to unexpected behavior. +:::: + + + +## Module config [load-module-config] + +For module configurations, you specify the `path` option in the `filebeat.config.modules` section of the `filebeat.yml` file. By default, Filebeat loads the module configurations enabled in the [`modules.d`](/reference/filebeat/configuration-filebeat-modules.md#configure-modules-d-configs) directory. For example: + +```sh +filebeat.config.modules: + enabled: true + path: ${path.config}/modules.d/*.yml +``` + +The `path` setting must point to the `modules.d` directory if you want to use the [`modules`](/reference/filebeat/command-line-options.md#modules-command) command to enable and disable module configurations. + +Each file found by the Glob must contain a list of one or more module definitions. + +::::{tip} +The first line of each external configuration file must be a module definition that starts with `- module`. Make sure you omit the line `filebeat.config.modules` from this file. +:::: + + +For example: + +```yaml +- module: apache + access: + enabled: true + var.paths: [/var/log/apache2/access.log*] + error: + enabled: true + var.paths: [/var/log/apache2/error.log*] +``` + + diff --git a/docs/reference/filebeat/filebeat-cpu.md b/docs/reference/filebeat/filebeat-cpu.md new file mode 100644 index 000000000000..a5c4648ae5d2 --- /dev/null +++ b/docs/reference/filebeat/filebeat-cpu.md @@ -0,0 +1,9 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-cpu.html +--- + +# Filebeat is using too much CPU [filebeat-cpu] + +Filebeat might be configured to scan for files too frequently. Check the setting for `scan_frequency` in the `filebeat.yml` config file. Setting `scan_frequency` to less than 1s may cause Filebeat to scan the disk in a tight loop. 
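+ +For example, if you are using the `log` input, the scan interval can be set explicitly per input. The following is a minimal sketch, not a recommended value for every deployment: the input type and paths are illustrative, and `scan_frequency` defaults to 10s. + +```yaml +filebeat.inputs: +- type: log + paths: + - /var/log/*.log + # Check for new files every 30 seconds instead of in a tight sub-second loop. + scan_frequency: 30s +```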
+ diff --git a/docs/reference/filebeat/filebeat-deduplication.md b/docs/reference/filebeat/filebeat-deduplication.md new file mode 100644 index 000000000000..adca4f11569b --- /dev/null +++ b/docs/reference/filebeat/filebeat-deduplication.md @@ -0,0 +1,112 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-deduplication.html +--- + +# Deduplicate data [filebeat-deduplication] + +The {{beats}} framework guarantees at-least-once delivery to ensure that no data is lost when events are sent to outputs that support acknowledgement, such as {{es}}, {{ls}}, Kafka, and Redis. This is great if everything goes as planned. But if Filebeat shuts down during processing, or the connection is lost before events are acknowledged, you can end up with duplicate data. + + +## What causes duplicates in {{es}}? [_what_causes_duplicates_in_es] + +When an output is blocked, the retry mechanism in Filebeat attempts to resend events until they are acknowledged by the output. If the output receives the events, but is unable to acknowledge them, the data might be sent to the output multiple times. Because document IDs are typically set by {{es}} *after* it receives the data from {{beats}}, the duplicate events are indexed as new documents. + + +## How can I avoid duplicates? [_how_can_i_avoid_duplicates] + +Rather than allowing {{es}} to set the document ID, set the ID in {{beats}}. The ID is stored in the {{beats}} `@metadata._id` field and used to set the document ID during indexing. That way, if {{beats}} sends the same event to {{es}} more than once, {{es}} overwrites the existing document rather than creating a new one. + +The `@metadata._id` field is passed along with the event so that you can use it to set the document ID after the event has been published by Filebeat but before it’s received by {{es}}. For example, see [{{ls}} pipeline example](#ls-doc-id). + +There are several ways to set the document ID in {{beats}}: + +* **`add_id` processor** + + Use the [`add_id`](/reference/filebeat/add-id.md) processor when your data has no natural key field, and you can’t derive a unique key from existing fields. + + This example generates a unique ID for each event and adds it to the `@metadata._id` field: + + ```yaml + processors: + - add_id: ~ + ``` + +* **`fingerprint` processor** + + Use the [`fingerprint`](/reference/filebeat/fingerprint.md) processor to derive a unique key from one or more existing fields. + + This example uses the values of `field1` and `field2` to derive a unique key that it adds to the `@metadata._id` field: + + ```yaml + processors: + - fingerprint: + fields: ["field1", "field2"] + target_field: "@metadata._id" + ``` + +* **`decode_json_fields` processor** + + Use the `document_id` setting in the [`decode_json_fields`](/reference/filebeat/decode-json-fields.md) processor when you’re decoding a JSON string that contains a natural key field. + + For this example, assume that the `message` field contains the JSON string `{"myid": "100", "text": "Some text"}`. This example takes the value of `myid` from the JSON string and stores it in the `@metadata._id` field: + + ```yaml + processors: + - decode_json_fields: + document_id: "myid" + fields: ["message"] + max_depth: 1 + target: "" + ``` + + The resulting document ID is `100`. + +* **JSON input settings** + + Use the `json.document_id` input setting if you’re ingesting JSON-formatted data, and the data has a natural key field. 
+ + This example takes the value of `key1` from the JSON document and stores it in the `@metadata._id` field: + + ```yaml + filebeat.inputs: + - type: log + paths: + - /path/to/json.log + json.document_id: "key1" + ``` + + + +## {{ls}} pipeline example [ls-doc-id] + +For this example, assume that you’ve used one of the approaches described earlier to store the document ID in the {{beats}} `@metadata._id` field. To preserve the ID when you send {{beats}} data through {{ls}} en route to {{es}}, set the `document_id` field in the {{ls}} pipeline: + +```json +input { + beats { + port => 5044 + } +} + +output { + if [@metadata][_id] { + elasticsearch { + hosts => ["http://localhost:9200"] + document_id => "%{[@metadata][_id]}" <1> + index => "%{[@metadata][beat]}-%{[@metadata][version]}" + } + } else { + elasticsearch { + hosts => ["http://localhost:9200"] + index => "%{[@metadata][beat]}-%{[@metadata][version]}" + } + } +} +``` + +1. Sets the `document_id` field in the [{{es}} output](logstash://reference/plugins-outputs-elasticsearch.md) to the value stored in `@metadata._id`. + + +When {{es}} indexes the document, it sets the document ID to the specified value, preserving the ID passed from {{beats}}. + diff --git a/docs/reference/filebeat/filebeat-geoip.md b/docs/reference/filebeat/filebeat-geoip.md new file mode 100644 index 000000000000..f373192d0c1b --- /dev/null +++ b/docs/reference/filebeat/filebeat-geoip.md @@ -0,0 +1,206 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-geoip.html +--- + +# Enrich events with geoIP information [filebeat-geoip] + +You can use Filebeat along with the [GeoIP Processor](elasticsearch://reference/ingestion-tools/enrich-processor/geoip-processor.md) in {{es}} to export geographic location information based on IP addresses. Then you can use this information to visualize the location of IP addresses on a map in {{kib}}. + +The `geoip` processor adds information about the geographical location of IP addresses, based on data from the Maxmind GeoLite2 City Database. Because the processor uses a geoIP database that’s installed on {{es}}, you don’t need to install a geoIP database on the machines running Filebeat. + +::::{note} +If your use case involves using {{ls}}, you can use the [GeoIP filter](logstash://reference/plugins-filters-geoip.md) available in {{ls}} instead of using the `geoip` processor. However, using the `geoip` processor is the simplest approach when you don’t require the additional processing power of {{ls}}. +:::: + + + +## Configure the `geoip` processor [filebeat-configuring-geoip] + +To configure Filebeat and the `geoip` processor: + +1. Define an ingest pipeline that uses one or more `geoip` processors to add location information to the event. 
For example, you can use the Console in {{kib}} to create the following pipeline: + + ```console + PUT _ingest/pipeline/geoip-info + { + "description": "Add geoip info", + "processors": [ + { + "geoip": { + "field": "client.ip", + "target_field": "client.geo", + "ignore_missing": true + } + }, + { + "geoip": { + "database_file": "GeoLite2-ASN.mmdb", + "field": "client.ip", + "target_field": "client.as", + "properties": [ + "asn", + "organization_name" + ], + "ignore_missing": true + } + }, + { + "geoip": { + "field": "source.ip", + "target_field": "source.geo", + "ignore_missing": true + } + }, + { + "geoip": { + "database_file": "GeoLite2-ASN.mmdb", + "field": "source.ip", + "target_field": "source.as", + "properties": [ + "asn", + "organization_name" + ], + "ignore_missing": true + } + }, + { + "geoip": { + "field": "destination.ip", + "target_field": "destination.geo", + "ignore_missing": true + } + }, + { + "geoip": { + "database_file": "GeoLite2-ASN.mmdb", + "field": "destination.ip", + "target_field": "destination.as", + "properties": [ + "asn", + "organization_name" + ], + "ignore_missing": true + } + }, + { + "geoip": { + "field": "server.ip", + "target_field": "server.geo", + "ignore_missing": true + } + }, + { + "geoip": { + "database_file": "GeoLite2-ASN.mmdb", + "field": "server.ip", + "target_field": "server.as", + "properties": [ + "asn", + "organization_name" + ], + "ignore_missing": true + } + }, + { + "geoip": { + "field": "host.ip", + "target_field": "host.geo", + "ignore_missing": true + } + }, + { + "rename": { + "field": "server.as.asn", + "target_field": "server.as.number", + "ignore_missing": true + } + }, + { + "rename": { + "field": "server.as.organization_name", + "target_field": "server.as.organization.name", + "ignore_missing": true + } + }, + { + "rename": { + "field": "client.as.asn", + "target_field": "client.as.number", + "ignore_missing": true + } + }, + { + "rename": { + "field": "client.as.organization_name", + "target_field": "client.as.organization.name", + "ignore_missing": true + } + }, + { + "rename": { + "field": "source.as.asn", + "target_field": "source.as.number", + "ignore_missing": true + } + }, + { + "rename": { + "field": "source.as.organization_name", + "target_field": "source.as.organization.name", + "ignore_missing": true + } + }, + { + "rename": { + "field": "destination.as.asn", + "target_field": "destination.as.number", + "ignore_missing": true + } + }, + { + "rename": { + "field": "destination.as.organization_name", + "target_field": "destination.as.organization.name", + "ignore_missing": true + } + } + ] + } + ``` + + In this example, the pipeline ID is `geoip-info`. `field` specifies the field that contains the IP address to use for the geographical lookup, and `target_field` is the field that will hold the geographical information. `"ignore_missing": true` configures the pipeline to continue processing when it encounters an event that doesn’t have the specified field. + + See [GeoIP Processor](elasticsearch://reference/ingestion-tools/enrich-processor/geoip-processor.md) for more options. + + To learn more about adding host information to an event, see [add_host_metadata](/reference/filebeat/add-host-metadata.md). + +2. In the Filebeat config file, configure the {{es}} output to use the pipeline. Specify the pipeline ID in the `pipeline` option under `output.elasticsearch`. For example: + + ```yaml + output.elasticsearch: + hosts: ["localhost:9200"] + pipeline: geoip-info + ``` + +3. Run Filebeat. 
Remember to use `sudo` if the config file is owned by root.

    ```sh
    ./filebeat -e
    ```

    If the lookups succeed, the events are enriched with `geo_point` fields, such as `client.geo.location` and `host.geo.location`, that you can use to populate visualizations in {{kib}}.


If you add a field that’s not already defined as a `geo_point` in the index template, add a mapping so the field gets indexed correctly.


## Visualize locations [filebeat-visualizing-location]

To visualize the location of IP addresses, you can create a new [coordinate map](docs-content://explore-analyze/visualize/maps.md) in {{kib}} and select the location field, for example `client.geo.location` or `host.geo.location`, as the Geohash.

:::{image} images/coordinate-map.png
:alt: Coordinate map in {kib}
:class: screenshot
:::

diff --git a/docs/reference/filebeat/filebeat-input-aws-cloudwatch.md b/docs/reference/filebeat/filebeat-input-aws-cloudwatch.md
new file mode 100644
index 000000000000..18e8de25e57f
--- /dev/null
+++ b/docs/reference/filebeat/filebeat-input-aws-cloudwatch.md
@@ -0,0 +1,225 @@
+---
+navigation_title: "AWS CloudWatch"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-cloudwatch.html
+---
+
+# AWS CloudWatch input [filebeat-input-aws-cloudwatch]
+

The `aws-cloudwatch` input can be used to retrieve all logs from all log streams in a specific log group. The `filterLogEvents` AWS API is used to list log events from the specified log group. Amazon CloudWatch Logs can be used to store log files from Amazon Elastic Compute Cloud (EC2), AWS CloudTrail, Route53, and other sources.

A log group is a group of log streams that share the same retention, monitoring, and access control settings. You can define log groups and specify which streams to put into each group. There is no limit on the number of log streams that can belong to one log group.

A log stream is a sequence of log events that share the same source. Each separate source of logs in CloudWatch Logs makes up a separate log stream.

```yaml
filebeat.inputs:
- type: aws-cloudwatch
  log_group_arn: arn:aws:logs:us-east-1:428152502467:log-group:test:*
  scan_frequency: 1m
  credential_profile_name: elastic-beats
  start_position: beginning
```

The `aws-cloudwatch` input supports the following configuration options plus the [Common options](#filebeat-input-aws-cloudwatch-common-options) described later.


### `log_group_arn` [_log_group_arn]

ARN of the log group to collect logs from. The ARN may refer to a log group in a linked source account.

Note: `log_group_arn` cannot be combined with the `log_group_name`, `log_group_name_prefix`, or `region_name` properties. If set, values extracted from `log_group_arn` take precedence over them.

Note: If the log group is in a linked source account and Filebeat is configured to use a monitoring account, you must use the `log_group_arn`. You can read more about AWS account linking and cross-account observability in the [official documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html).


### `log_group_name` [_log_group_name]

Name of the log group to collect logs from.

Note: `region_name` is required when `log_group_name` is given.


### `log_group_name_prefix` [_log_group_name_prefix]

The prefix for a group of log group names. See the `include_linked_accounts_for_prefix_mode` option for the linked source accounts behavior.
+

Note: `region_name` is required when `log_group_name_prefix` is given. `log_group_name` and `log_group_name_prefix` cannot be given at the same time. The number of workers that will process the log groups under this prefix is set through the `number_of_workers` config.


### `include_linked_accounts_for_prefix_mode` [_include_linked_accounts_for_prefix_mode]

Configure whether to include linked source accounts that contain the prefix value defined through `log_group_name_prefix`. This option accepts a Boolean value and is disabled by default.

Note: Use `log_group_arn` if you want to obtain logs from a known log group (including linked source accounts). You can read more about AWS account linking and cross-account observability in the [official documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html).


### `region_name` [_region_name]

Region that the specified log group or log group prefix belongs to.


### `number_of_workers` [_number_of_workers]

Number of workers that will process the log groups with the given `log_group_name_prefix`. The default value is 1.


### `log_streams` [_log_streams]

A list of log stream names that Filebeat collects log events from.


### `log_stream_prefix` [_log_stream_prefix]

A string to filter the results to include only log events from log streams that have names starting with this prefix.


### `start_position` [_start_position]

`start_position` allows the user to specify whether this input should read log files from the `beginning` or from the `end`.

* `beginning`: reads from the beginning of the log group (default).
* `end`: reads only new messages from the current time minus `scan_frequency` going forward.

For example, with `scan_frequency` equal to `30s` and the current timestamp `2020-06-24 12:00:00`:

* with `start_position = beginning`:

    * first iteration: startTime=0, endTime=2020-06-24 12:00:00
    * second iteration: startTime=2020-06-24 12:00:00, endTime=2020-06-24 12:00:30

* with `start_position = end`:

    * first iteration: startTime=2020-06-24 11:59:30, endTime=2020-06-24 12:00:00
    * second iteration: startTime=2020-06-24 12:00:00, endTime=2020-06-24 12:00:30



### `scan_frequency` [_scan_frequency]

This config parameter sets how often Filebeat checks for new log events from the specified log group. The default `scan_frequency` is 1 minute, which means Filebeat sleeps for 1 minute before querying for new logs again.


### `api_timeout` [_api_timeout]

The maximum duration that an AWS API call can take. If it exceeds the timeout, the AWS API call is interrupted. The default AWS API timeout for a message is 120 seconds. The minimum is 0 seconds.


### `api_sleep` [_api_sleep]

This is used to sleep between AWS `FilterLogEvents` API calls inside the same collection period. The `FilterLogEvents` API has a quota of 5 transactions per second (TPS) per account per Region. By default, `api_sleep` is 200 ms. This value should only be adjusted when there are multiple Filebeat instances or multiple Filebeat inputs collecting logs from the same Region and AWS account.


### `latency` [_latency]

Some AWS services send logs to CloudWatch with a processing latency greater than the `aws-cloudwatch` input's `scan_frequency`. In this case, specify a `latency` parameter so that the collection start and end times are shifted back by the given amount.
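
As an illustration, here is a minimal sketch that combines several of the options described above; the log group prefix, Region, and credential profile are hypothetical placeholders:

```yaml
filebeat.inputs:
- type: aws-cloudwatch
  # Hypothetical prefix; matches all log groups whose names start with it
  log_group_name_prefix: /aws/lambda/
  region_name: us-east-1
  number_of_workers: 4
  # Poll every minute, but shift the collection window back by 5 minutes
  # to allow for slow-arriving log events
  scan_frequency: 1m
  latency: 5m
  start_position: end
  credential_profile_name: elastic-beats
```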
+

### `aws credentials` [_aws_credentials]

To make AWS API calls, the `aws-cloudwatch` input requires AWS credentials. See [AWS credentials options](/reference/filebeat/filebeat-input-aws-s3.md#aws-credentials-config) for more details.


## AWS Permissions [_aws_permissions]

Specific AWS permissions are required for the IAM user to access the `aws-cloudwatch` input:

```
cloudwatchlogs:DescribeLogGroups
logs:FilterLogEvents
```


## Metrics [_metrics]

This input exposes metrics under the [HTTP monitoring endpoint](/reference/filebeat/http-endpoint.md). These metrics are exposed under the `/inputs` path and can be used to observe the activity of the input.

| Metric | Description |
| --- | --- |
| `log_events_received_total` | Number of CloudWatch log events received. |
| `log_groups_total` | Number of CloudWatch log groups that logs were collected from. |
| `cloudwatch_events_created_total` | Number of events created from processing logs from CloudWatch. |
| `api_calls_total` | Total number of API calls made. |

## Common options [filebeat-input-aws-cloudwatch-common-options]

The following configuration options are supported by all inputs.


#### `enabled` [_enabled]

Use the `enabled` option to enable and disable inputs. By default, `enabled` is set to `true`.


#### `tags` [_tags]

A list of tags that Filebeat includes in the `tags` field of each published event. Tags make it easy to select specific events in Kibana or apply conditional filtering in Logstash. These tags will be appended to the list of tags specified in the general configuration.

Example:

```yaml
filebeat.inputs:
- type: aws-cloudwatch
  . . .
  tags: ["json"]
```


#### `fields` [filebeat-input-aws-cloudwatch-fields]

Optional fields that you can specify to add additional information to the output. For example, you might add fields that you can use for filtering log data. Fields can be scalar values, arrays, dictionaries, or any nested combination of these. By default, the fields that you specify here will be grouped under a `fields` sub-dictionary in the output document. To store the custom fields as top-level fields, set the `fields_under_root` option to true. If a duplicate field is declared in the general configuration, then its value will be overwritten by the value declared here.

```yaml
filebeat.inputs:
- type: aws-cloudwatch
  . . .
  fields:
    app_id: query_engine_12
```


#### `fields_under_root` [fields-under-root-aws-cloudwatch]

If this option is set to true, the custom [fields](#filebeat-input-aws-cloudwatch-fields) are stored as top-level fields in the output document instead of being grouped under a `fields` sub-dictionary. If the custom field names conflict with other field names added by Filebeat, then the custom fields overwrite the other fields.


#### `processors` [_processors]

A list of processors to apply to the input data.

See [Processors](/reference/filebeat/filtering-enhancing-data.md) for information about specifying processors in your config.


#### `pipeline` [_pipeline]

The ingest pipeline ID to set for the events generated by this input.

::::{note}
The pipeline ID can also be configured in the Elasticsearch output, but this option usually results in simpler configuration files. If the pipeline is configured both in the input and output, the option from the input is used.
::::


::::{important}
The `pipeline` is always lowercased.
If `pipeline: Foo-Bar`, then the pipeline name in {{es}} needs to be defined as `foo-bar`.
::::


#### `keep_null` [_keep_null]

If this option is set to true, fields with `null` values will be published in the output document. By default, `keep_null` is set to `false`.


#### `index` [_index]

If present, this formatted string overrides the index for events from this input (for elasticsearch outputs), or sets the `raw_index` field of the event’s metadata (for other outputs). This string can only refer to the agent name and version and the event timestamp; for access to dynamic fields, use `output.elasticsearch.index` or a processor.

Example value: `"%{[agent.name]}-myindex-%{+yyyy.MM.dd}"` might expand to `"filebeat-myindex-2019.11.01"`.


#### `publisher_pipeline.disable_host` [_publisher_pipeline_disable_host]

By default, all events contain `host.name`. This option can be set to `true` to disable the addition of this field to all events. The default value is `false`.


diff --git a/docs/reference/filebeat/filebeat-input-aws-s3.md b/docs/reference/filebeat/filebeat-input-aws-s3.md
new file mode 100644
index 000000000000..d8bc2f957dab
--- /dev/null
+++ b/docs/reference/filebeat/filebeat-input-aws-s3.md
@@ -0,0 +1,1150 @@
+---
+navigation_title: "AWS S3"
+mapped_pages:
+  - https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html
+---
+
+# AWS S3 input [filebeat-input-aws-s3]
+

Use the `aws-s3` input to retrieve logs from S3 objects that are pointed to by S3 notification events read from an SQS queue, or by directly polling the list of S3 objects in an S3 bucket. The use of SQS notification is preferred: polling lists of S3 objects is expensive in terms of performance and costs, and should preferably be used only when no SQS notification can be attached to the S3 buckets. This input can, for example, be used to receive S3 access logs to monitor detailed records for the requests that are made to a bucket. This input also supports S3 notification from SNS to SQS.

The SQS notification method is enabled by setting the `queue_url` configuration value. The S3 bucket list polling method is enabled by setting the `bucket_arn` configuration value. The two values cannot be set at the same time, but at least one of them must be set.

When using the SQS notification method, this input depends on S3 notifications delivered to an SQS queue for `s3:ObjectCreated:*` events. You must create an SQS queue and configure S3 to publish events to the queue.

The S3 input manages SQS message visibility to prevent messages from being reprocessed while the S3 object is still being processed. If the processing takes longer than half of the visibility timeout, the timeout is reset to ensure the message doesn’t return to the queue before processing is complete.

If an error occurs during the processing of the S3 object, the processing will be stopped, and the SQS message will be returned to the queue for reprocessing.


## Configuration Examples [_configuration_examples]


### SQS with JSON files [_sqs_with_json_files]

This example reads `s3:ObjectCreated` notifications from SQS, and assumes that all the S3 objects have a `Content-Type` of `application/json`. It splits the `Records` array in the JSON into separate events.
+

```yaml
filebeat.inputs:
- type: aws-s3
  queue_url: https://sqs.ap-southeast-1.amazonaws.com/1234/test-s3-queue
  expand_event_list_from_field: Records
```


### S3 bucket listing [_s3_bucket_listing]

When directly polling the list of S3 objects in an S3 bucket, the number of workers that will process the listed S3 objects must be set through the `number_of_workers` config. The S3 bucket listing is polled according to the time interval defined by the `bucket_list_interval` config. The default value is 120 seconds.

```yaml
filebeat.inputs:
- type: aws-s3
  bucket_arn: arn:aws:s3:::test-s3-bucket
  number_of_workers: 5
  bucket_list_interval: 300s
  credential_profile_name: elastic-beats
  expand_event_list_from_field: Records
```


### S3-compatible services [_s3_compatible_services]

The `aws-s3` input can also poll third-party S3-compatible services such as MinIO. Using non-AWS S3-compatible buckets requires the use of `access_key_id` and `secret_access_key` for authentication. To specify the S3 bucket name, use the `non_aws_bucket_name` config, and set `endpoint` to replace the default API endpoint. In the case of `non_aws_bucket_name`, `endpoint` should be a full URI in the form of `http(s)://`, which will be used as the API endpoint of the service. No `endpoint` is needed if using the native AWS S3 service hosted at `amazonaws.com`. See [Configuration parameters](#aws-credentials-config) for alternate AWS domains that require a different endpoint.

```yaml
filebeat.inputs:
- type: aws-s3
  non_aws_bucket_name: test-s3-bucket
  number_of_workers: 5
  bucket_list_interval: 300s
  access_key_id: xxxxxxx
  secret_access_key: xxxxxxx
  endpoint: https://s3.example.com:9000
  expand_event_list_from_field: Records
```


## Document ID Generation [_document_id_generation]

This `aws-s3` input feature prevents the duplication of events in Elasticsearch by generating a custom document `_id` for each event, rather than relying on Elasticsearch to automatically generate one. Each document in an Elasticsearch index must have a unique `_id`, and Filebeat uses this property to avoid ingesting duplicate events.

The custom `_id` is based on several pieces of information from the S3 object: the Last-Modified timestamp, the bucket ARN, the object key, and the byte offset of the data in the event.

Duplicate prevention is particularly useful in scenarios where Filebeat needs to retry an operation. Filebeat guarantees at-least-once delivery, meaning it will retry any failed or incomplete operations. These retries may be triggered by issues with the host, Filebeat, network connectivity, or services such as Elasticsearch, SQS, or S3.


### Limitations of `_id`-Based Deduplication [_limitations_of_id_based_deduplication]

There are some limitations to consider when using `_id`-based deduplication in Elasticsearch:

* Deduplication works only within a single index. The same `_id` can exist in different indices, which is important if you’re using data streams or index aliases. When the backing index rolls over, a duplicate may be ingested.
* Indexing operations in Elasticsearch may take longer when an `_id` is specified. Elasticsearch needs to check if the ID already exists before writing, which can increase the time required for indexing.
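
The first limitation can be illustrated with a {{kib}} Console sketch (the index names and the `abc123` ID here are made up). Writing the same `_id` to one index twice overwrites the document, while the same `_id` in a new backing index after a rollover yields a second copy:

```console
# First write creates the document
PUT my-index-000001/_doc/abc123
{ "message": "event" }

# A retried write with the same _id overwrites it: no duplicate
PUT my-index-000001/_doc/abc123
{ "message": "event" }

# The same _id in a rolled-over backing index creates a second copy
PUT my-index-000002/_doc/abc123
{ "message": "event" }
```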
+

### Disabling Duplicate Prevention [_disabling_duplicate_prevention]

If you want to disable the `_id`-based deduplication, you can remove the document `_id` using the [`drop_fields`](/reference/filebeat/drop-fields.md) processor in Filebeat.

```yaml
filebeat.inputs:
  - type: aws-s3
    queue_url: https://queue.amazonaws.com/80398EXAMPLE/MyQueue
    processors:
      - drop_fields:
          fields:
            - '@metadata._id'
          ignore_missing: true
```

Alternatively, you can remove the `_id` field using an Elasticsearch ingest pipeline.

```json
{
  "processors": [
    {
      "remove": {
        "if": "ctx.input?.type == \"aws-s3\"",
        "field": "_id",
        "ignore_missing": true
      }
    }
  ]
}
```


## Handling Compressed Objects [_handling_compressed_objects]

S3 objects that use the gzip format ([RFC 1952](https://rfc-editor.org/rfc/rfc1952.html)) with the DEFLATE compression algorithm are automatically decompressed during processing. This is achieved by checking for the gzip file magic header.


## Configuration [_configuration]

The `aws-s3` input supports the following configuration options plus the [Common options](#filebeat-input-aws-s3-common-options) described later.

::::{note}
For time durations, valid time units are "ns", "us" (or "µs"), "ms", "s", "m", and "h". For example, "2h".
::::


### `api_timeout` [_api_timeout_2]

The maximum duration of the AWS API call. If it exceeds the timeout, the AWS API call is interrupted. The default AWS API timeout is `120s`.

The API timeout must be longer than the `sqs.wait_time` value.


### `buffer_size` [input-aws-s3-buffer_size]

The size of the buffer in bytes that each harvester uses when fetching a file. This only applies to non-JSON logs. The default is `16 KiB`.


### `content_type` [input-aws-s3-content_type]

A standard MIME type describing the format of the object data. This can be set to override the MIME type given to the object when it was uploaded. For example: `application/json`.


### `encoding` [input-aws-s3-encoding]

The file encoding to use for reading data that contains international characters. This only applies to non-JSON logs. See [`encoding`](/reference/filebeat/filebeat-input-log.md#_encoding_3).


### `decoding` [input-aws-s3-decoding]

The file decoding option is used to specify a codec that will be used to decode the file contents. This can apply to any file stream data. Example configs are shown in the codec sections below.

The currently supported codecs are:

1. [csv](#attrib-decoding-csv): This codec decodes RFC 4180 CSV data streams.
2. [parquet](#attrib-decoding-parquet): This codec decodes Apache Parquet data streams.


#### `csv` [attrib-decoding-csv]

The CSV codec is used to decode RFC 4180 CSV data streams. Enabling the codec without other options will use the default codec options.

```yaml
  decoding.codec.csv.enabled: true
```

The `csv` codec supports five sub-attributes to control aspects of CSV decoding. The `comma` attribute specifies the field separator character used by the CSV format. If it is not specified, the comma character *`,`* is used. The `comment` attribute specifies the character that should be interpreted as a comment mark. If it is specified, lines starting with the character will be ignored. Both `comma` and `comment` must be single characters. The `lazy_quotes` attribute controls how quoting in fields is handled. If `lazy_quotes` is true, a quote may appear in an unquoted field and a non-doubled quote may appear in a quoted field.
The `trim_leading_space` attribute specifies that leading white space should be ignored, even if the `comma` character is white space. For complete details of the preceding configuration attribute behaviors, see the CSV decoder [documentation](https://pkg.go.dev/encoding/csv#Reader). The `fields_names` attribute can be used to specify the column names for the data. If it is absent, the field names are obtained from the first non-comment line of data. The number of fields must match the number of field names.

An example config is shown below:

```yaml
  decoding.codec.csv.enabled: true
  decoding.codec.csv.comma: "\t"
  decoding.codec.csv.comment: "#"
```


#### `parquet` [attrib-decoding-parquet]

The `parquet` codec is used to decode the [Apache Parquet](https://en.wikipedia.org/wiki/Apache_Parquet) data storage format. Enabling the codec without other options will use the default codec options.

```yaml
  decoding.codec.parquet.enabled: true
```

The Parquet codec supports two attributes, `batch_size` and `process_parallel`, to improve decoding performance:

* `batch_size`: This attribute specifies the number of records to read from the Parquet stream at a time. By default, `batch_size` is set to 1. Increasing the batch size can boost processing speed by reading more records in each operation.
* `process_parallel`: When set to `true`, this attribute allows Filebeat to read multiple columns from the Parquet stream in parallel, using as many readers as there are columns. Enabling parallel processing can significantly increase throughput, but it will also result in higher memory usage. By default, `process_parallel` is set to `false`.

By adjusting both `batch_size` and `process_parallel`, you can fine-tune the trade-off between processing speed and memory consumption.

An example config is shown below:

```yaml
  decoding.codec.parquet.enabled: true
  decoding.codec.parquet.process_parallel: true
  decoding.codec.parquet.batch_size: 1000
```


### `expand_event_list_from_field` [_expand_event_list_from_field]

If the fileset using this input expects to receive multiple messages bundled under a specific field or an array of objects, then the `expand_event_list_from_field` config option can be assigned the name of the field or `.[]`. This setting splits the messages under the group value into separate events. For example, CloudTrail logs are in JSON format and events are found under the JSON object "Records".

::::{note}
When using `expand_event_list_from_field`, the `content_type` config parameter has to be set to `application/json`.
::::


```json
{
  "Records": [
    {
      "eventVersion": "1.07",
      "eventTime": "2019-11-14T00:51:00Z",
      "awsRegion": "us-east-1",
      "eventID": "EXAMPLE8-9621-4d00-b913-beca2EXAMPLE"
    },
    {
      "eventVersion": "1.07",
      "eventTime": "2019-11-14T00:52:00Z",
      "awsRegion": "us-east-1",
      "eventID": "EXAMPLEc-28be-486c-8928-49ce6EXAMPLE"
    }
  ]
}
```

Or when `expand_event_list_from_field` is set to `.[]`, an array of objects will be split into separate events.

```json
[
  {
    "id": "1234",
    "message": "success"
  },
  {
    "id": "5678",
    "message": "failure"
  }
]
```

Note: When the `expand_event_list_from_field` parameter is given in the config, the `aws-s3` input assumes the logs are in JSON format and decodes them as JSON. The content type is not checked. If a file has an "application/json" content type, `expand_event_list_from_field` becomes required to read the JSON file.
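
As a minimal sketch of the note above (the queue URL is a placeholder), the two settings are typically paired like this for CloudTrail-style logs:

```yaml
filebeat.inputs:
- type: aws-s3
  queue_url: https://sqs.us-east-1.amazonaws.com/1234/cloudtrail-queue
  # Required when expand_event_list_from_field is set
  content_type: application/json
  expand_event_list_from_field: Records
```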
+

### `file_selectors` [_file_selectors]

If the SQS queue has events that correspond to files that Filebeat shouldn’t process, `file_selectors` can be used to limit the files that are downloaded. This is a list of selectors, each made up of `regex` and `expand_event_list_from_field` options. The `regex` should match the S3 object key in the SQS message, and the optional `expand_event_list_from_field` is the same as the global setting. If `file_selectors` is given, then any global `expand_event_list_from_field` value is ignored in favor of the ones specified in the `file_selectors`. The regex syntax is the same as in the Go language. Files that don’t match one of the regexes won’t be processed. [`content_type`](#input-aws-s3-content_type), [`parsers`](#input-aws-s3-parsers), [`include_s3_metadata`](#input-aws-s3-include_s3_metadata), [`max_bytes`](#input-aws-s3-max_bytes), [`buffer_size`](#input-aws-s3-buffer_size), and [`encoding`](#input-aws-s3-encoding) may also be set for each file selector.

```yaml
file_selectors:
  - regex: '/CloudTrail/'
    expand_event_list_from_field: 'Records'
  - regex: '/CloudTrail-Digest/'
  - regex: '/CloudTrail-Insight/'
    expand_event_list_from_field: 'Records'
```


### `fips_enabled` [_fips_enabled]

Moved to [AWS credentials options](#aws-credentials-config).


### `include_s3_metadata` [input-aws-s3-include_s3_metadata]

This input can include S3 object metadata in the generated events for use in follow-on processing. You must specify the list of keys to include. By default, none are included. If the key exists in the S3 response, then it will be included in the event as `aws.s3.metadata.` where the key name has been normalized to all lowercase.

```
include_s3_metadata:
  - last-modified
  - x-amz-version-id
```


### `max_bytes` [input-aws-s3-max_bytes]

The maximum number of bytes that a single log message can have. All bytes after `max_bytes` are discarded and not sent. This setting is especially useful for multiline log messages, which can get large. This only applies to non-JSON logs. The default is `10 MiB`.


### `parsers` [input-aws-s3-parsers]

::::{warning}
This functionality is in beta and is subject to change. The design and code are less mature than official GA features and are being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
::::


This option expects a list of parsers that non-JSON logs go through.

Available parsers:

* `multiline`

In this example, Filebeat is reading multiline messages that consist of XML that start with the `` tag.

```yaml
filebeat.inputs:
- type: aws-s3
  ...
  parsers:
    - multiline:
        pattern: "^