diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 0000000..e69de29 diff --git a/00-AWS-Free-Tier/index.html b/00-AWS-Free-Tier/index.html new file mode 100644 index 0000000..88a35b4 --- /dev/null +++ b/00-AWS-Free-Tier/index.html @@ -0,0 +1,1370 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Day 0 - Creating Your Own Server - with AWS Free Tier - Linux Upskill Challenge + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + + + + + + + + +

Day 0 - Creating Your Own Server - with AWS Free Tier

+

Refer to Day 0 - Get Your Own Server to review your options:

+ +

AWS free-tier, is it always free?

+

The AWS Free Tier is designed to allow new users to explore and test various AWS services without incurring any costs for 12 months following the AWS sign-up date, subject to certain usage limits. When your 12-month free usage term expires, or if your usage exceeds the tier limits, you simply pay standard, pay-as-you-go service rates. You can extend that free usage with an Educate Pack, if you are eligible.

+

Signing up with AWS Educate pack:

+
    +
  • Go to the AWS Educate website at https://aws.amazon.com/education/awseducate/
  • +
  • Click on the “Join AWS Educate” button located at the top right corner of the page.
  • +
  • Choose the option that best describes you, whether you are a student or an educator.
  • +
  • Create an AWS Educate account by filling out the required information, including your name, email address, and the name of your school or institution.
  • +
  • Once you have created your account, you can access the AWS Educate Starter Account, which includes $100 in AWS Promotional Credits, free access to over 25 AWS services, and self-paced labs and tutorials to help you get started with AWS.
  • +
+

Please note that the AWS Educate program is intended for students and educators who are interested in learning about cloud computing and AWS services. In order to be eligible for the program, you will need to provide proof of your status as a student or educator.

+

Signing up with AWS

+

Sign-up is fairly simple - just provide your email address and a password of your choosing, along with a phone number for 2FA (a second method of authentication). You will also need to provide your VISA or other credit card information.

+
    +
  • For Support Plan, choose “Basic Plan/Free”
  • +
+

Logout, then login again, and then select:

+
    +
  • Services - from the top menu
  • +
  • EC2 - from the list of services
  • +
+

In “AWS speak” the server we’ll create will be an “EC2 compute instance” - so now choose “Launch Instance”. You will be presented with several image options - choose one with “Ubuntu Server LTS” in the name. At the next screen you’ll have options for the instance type - typically only “t2.micro” is eligible for the Free Tier, but this is fine, so select “Review and Launch”. At the review screen there will be an option “Security Groups” - this is in fact a firewall configuration which AWS provides by default. While a good thing in general, for our purposes we want our server completely exposed, so we’ll edit this to effectively disable it, like this:

+
    +
  • Select “Configure Security Group”
  • +
  • Select “Add Rule”
  • +
  • Type: “All traffic”, Source: “Anywhere”
  • +
+

This opens all ports and protocols to access from anywhere. While this might be unwise for a production server, it is what we want for this course.

+

Now select “Launch”. When prompted for a key pair, create one.

+

Your server instance should now launch, and you can login to it by:

+
    +
  • Services, EC2, Running instances, Connect
  • +
+

Remote access via SSH

+

You should see an “IPv4” entry for your server - this is its unique Internet IP address, and it is how you’ll connect to it via SSH (the Secure Shell protocol) - something we’ll be covering in the first lesson.

+

This video, How to Set Up AWS EC2 and Connect to Linux Instance with PuTTY, gives a good overview of the process.

+

You will be logging in as the user ubuntu. It has been added to the ‘adm’ and ‘sudo’ groups, which on an Ubuntu system gives it access to read various logs - and to “become root” as required via the sudo command.
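For example - and this is only a sketch, assuming your key pair was downloaded as my-aws-key.pem and the console shows a public IP of 203.0.113.10 (both are placeholders, use your own) - connecting from a Linux or macOS terminal looks like this:

chmod 400 ~/Downloads/my-aws-key.pem                      # SSH refuses to use keys that are world-readable
ssh -i ~/Downloads/my-aws-key.pem ubuntu@203.0.113.10     # log in as "ubuntu" using that key

On Windows you can do the same from PowerShell with the built-in OpenSSH client, or use PuTTY as shown in the video above.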

+

You are now a sysadmin

+

Confirm that you can do administrative tasks by typing:

+

sudo apt update

+

(Normally you’d expect this to prompt you to confirm your password, but because you’re using public key authentication the system hasn’t prompted you to set up a password - and AWS have configured sudo to not request one for “ubuntu”).
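If you’re curious where that behaviour comes from, you can peek at the sudo drop-in that cloud-init creates - the exact filename is an assumption here and may differ on your image:

sudo cat /etc/sudoers.d/90-cloud-init-users
# typically contains a single line like:  ubuntu ALL=(ALL) NOPASSWD:ALL

Don’t edit it for now - just know it exists.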

+

Then:

+

sudo apt upgrade

+

Don’t worry too much about the output and messages from these commands, but it should be clear whether they succeeded or not. (Reply to any prompts by taking the default option). These commands are how you force the installation of updates on an Ubuntu Linux system, and only an administrator can do them.

+

To logout, type logout or exit.

+

Your server is now all set up and ready for the course!

+

Note that:

+
    +
  • This server is now running, and completely exposed to the whole of the Internet
  • +
  • You alone are responsible for managing it
  • +
  • You have just installed the latest updates, so it should be secure for now
  • +
+

Now you are ready to start the challenge. Day 1, here we go!

+ + + + + + + + + + + + + + + + + + + + +

Comments

+ + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/00-Azure-Free-Tier/index.html b/00-Azure-Free-Tier/index.html new file mode 100644 index 0000000..756bc66 --- /dev/null +++ b/00-Azure-Free-Tier/index.html @@ -0,0 +1,1344 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Day 0 - Creating Your Own Server - with Azure Free Credits - Linux Upskill Challenge + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + + + + + + + + +

Day 0 - Creating Your Own Server - with Azure Free Credits

+

Refer to Day 0 - Get Your Own Server to review your options:

+ +

Signing up with Azure

+

Sign-up is fairly simple - just provide your email address and a password of your choosing, along with a phone number for 2FA (a second method of authentication). Azure can be a bit funny about ‘corporate’ email addresses, e.g. using a work address or your own domain - if so, create a new @outlook.com or @gmail.com account using the link on the sign-up page. You will also need to provide your VISA or other credit card information.

+
    +
  • Click ‘start building in azure’
  • +
  • Click ‘Deploy a virtual machine’
  • +
  • Click ‘Create a linux virtual machine’
  • +
  • Search and select Ubuntu Server LTS
  • +
  • Use the Standard_D2s_v3 size - this should be comfortably covered by your trial credits for the duration of the course
  • +
  • Ensure ‘SSH Public Key’ for authentication and ‘generate new key pair’ for SSH Public Key source are selected
  • +
  • Leave ‘allow selected ports’ as ‘ssh (22)’ for now
  • +
  • Click ‘Review + Create’
  • +
  • Azure will generate and download the private key file to SSH onto the box -
  • +
  • (Windows) double-click this to open on Windows and it will be added to your cert store on the machine
  • +
  • (Mac OS X and Linux) run the command ‘sudo ssh-add -K /link-to-downloaded-file’
  • +
  • Note: if the above command doesn’t work for you then try running without sudo. If you get any error related to permissions then try running ‘chmod 400 filename’ first.
  • +
  • Connect to the machine using ssh azureuser@PUBLICIP
  • +
+

Now to fully expose the machine and all ports to the internet:

+
    +
  • Navigate to https://portal.azure.com/#home
  • +
  • Select ‘Virtual Machines’
  • +
  • Select your created virtual machine and select ‘Networking’ from the settings pane
  • +
  • Click ‘Inbound Port Rules’ and ‘Add inbound port rule’
  • +
  • Set ‘source port ranges’ and ‘destination port ranges’ to ‘*’ and set ‘Source’ and ‘Destination’ to ‘any’. Ensure protocol is set to ‘any’ and action is set to ‘allow’. Set the priority to ‘100’ and create an appropriate name
  • +
  • Click ‘Outbound port rules’ and ‘add outbound port rule’
  • +
  • Set ‘source port ranges’ and ‘destination port ranges’ to ‘*’ and set ‘Source’ and ‘Destination’ to ‘any’. Ensure protocol is set to ‘any’ and action is set to ‘allow’. Set the priority to ‘101’ and create an appropriate name
  • +
+

This opens all ports and protocols to access from anywhere. While this might be unwise for a production server, it is what we want for this course.
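If you prefer working from a terminal, the same rules can be created with the Azure CLI - this is only a sketch, and the resource group and NSG names (my-rg, my-vm-nsg) are assumptions, so substitute the ones shown in your portal:

az network nsg rule create --resource-group my-rg --nsg-name my-vm-nsg \
  --name allow-all-in --priority 100 --direction Inbound --access Allow \
  --protocol '*' --source-address-prefixes '*' --source-port-ranges '*' \
  --destination-address-prefixes '*' --destination-port-ranges '*'

Repeat with --direction Outbound, --priority 101 and a different --name for the outbound rule.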

+

Remote access via SSH

+

Ensure your machine is ‘running’ (if not, click ‘start’) and connect using the ‘connect -> ssh’ dropdown, following the instructions.

+

You will be logging in as the user azureuser. It has been added to the ‘adm’ and ‘sudo’ groups, which on an Ubuntu system gives it access to read various logs - and to “become root” as required via the sudo command.

+

You are now a sysadmin

+

Confirm that you can do administrative tasks by typing:

+

sudo apt update

+

(Normally you’d expect this to prompt you to confirm your password, but because you’re using public key authentication the system hasn’t prompted you to set up a password - and Azure have configured sudo to not request one for “azureuser”).

+

Then:

+

sudo apt upgrade

+

Don’t worry too much about the output and messages from these commands, but it should be clear whether they succeeded or not. (Reply to any prompts by taking the default option). These commands are how you force the installation of updates on an Ubuntu Linux system, and only an administrator can do them.

+

To logout, type logout or exit.

+

Your server is now all set up and ready for the course!

+

Note that:

+
    +
  • This server is now running, and completely exposed to the whole of the Internet
  • +
  • You alone are responsible for managing it
  • +
  • You have just installed the latest updates, so it should be secure for now
  • +
+

Now you are ready to start the challenge. Day 1, here we go!

+ + + + + + + + + + + + + + + + + + + + +

Comments

+ + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/00-Google-Cloud/index.html b/00-Google-Cloud/index.html new file mode 100644 index 0000000..4ff5373 --- /dev/null +++ b/00-Google-Cloud/index.html @@ -0,0 +1,1342 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Day 0 - Creating Your Own Server - with Google Cloud Platform Free Tier - Linux Upskill Challenge + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + + + + + + + + +

Day 0 - Creating Your Own Server - with Google Cloud Platform Free Tier

+

Refer to Day 0 - Get Your Own Server to review your options:

+ +

Signing up with GCP

+

Sign-up is fairly simple - just provide your email address and a password of your choosing, along with a phone number for 2FA (a second method of authentication). You will also need to provide your VISA or other credit card information.

+
    +
  • Choose “Compute Engine” and click “VM Instances”.
  • +
  • Create a new instance.
  • +
  • Select whichever regions you want.
  • +
  • For Machine Configuration, set Series to “E2” and Machine type to “e2-micro”.
  • +
  • Change boot disk to “Ubuntu LTS”
  • +
+

Now that we have created our server, we need to open all ports and protocols to access from anywhere. While this might be unwise for a production server, it is what we want for this course.

+

Navigate to your GCP home page and go to Networking > VPC Network > Firewall > Create Firewall

+

  • Set “Direction of Traffic” to “Ingress”
  • Set “Target” to “All instances in the network”
  • Set “Source Filter” to “IP Ranges”
  • Set “Source IP Ranges” to “0.0.0.0/0”
  • Set “Protocols and Ports” to “Allow All”
  • Create, then repeat the steps with a new Firewall rule, setting “Direction of Traffic” to “Egress”
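If you have the gcloud CLI set up, a sketch of the command-line equivalent looks like this (the rule names are just examples, and the rules land in the default VPC network unless you add --network):

gcloud compute firewall-rules create allow-all-ingress --direction=INGRESS \
    --action=ALLOW --rules=all --source-ranges=0.0.0.0/0
gcloud compute firewall-rules create allow-all-egress --direction=EGRESS \
    --action=ALLOW --rules=all --destination-ranges=0.0.0.0/0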

+

Logging in for the first time

+

Select your instance and click “SSH” - it will open a console in a new window. To set a password for root, type “sudo -i passwd” in the command line and choose your own password. You can then become root by typing “su” and entering that password. Note that the password won’t show as you type or paste it.

+

Setting up SSH

+

You can also refer to https://cloud.google.com/compute/docs/instances/connecting-advanced#thirdpartytools if you intend to access your server via third-party tools (e.g. Putty).
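If you install the gcloud CLI locally, you can also open an SSH session straight from your own terminal - a sketch, where the instance name and zone are just examples:

gcloud compute ssh my-instance --zone=us-central1-a

gcloud will generate and manage an SSH key for you the first time you run it.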

+

You are now a sysadmin

+

Confirm that you can do administrative tasks by typing:

+

sudo apt update

+

Then:

+

sudo apt upgrade

+

Don’t worry too much about the output and messages from these commands, but it should be clear whether they succeeded or not. (Reply to any prompts by taking the default option). These commands are how you force the installation of updates on an Ubuntu Linux system, and only an administrator can do them.

+

To logout, type logout or exit.

+

Your server is now all set up and ready for the course!

+

Note that:

+
    +
  • This server is now running, and completely exposed to the whole of the Internet
  • +
  • You alone are responsible for managing it
  • +
  • You have just installed the latest updates, so it should be secure for now
  • +
+

Now you are ready to start the challenge. Day 1, here we go!

+ + + + + + + + + + + + + + + + + + + + +

Comments

+ + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/00-Local-Server/index.html b/00-Local-Server/index.html new file mode 100644 index 0000000..58f1a66 --- /dev/null +++ b/00-Local-Server/index.html @@ -0,0 +1,1470 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Day 0 - Creating Your Own Local Server - Linux Upskill Challenge + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + + + + + + + + +

Day 0 - Creating Your Own Local Server

+

Refer to Day 0 - Get Your Own Server to review your options:

+ +

It’s difficult to create a server in the cloud without a credit card

+

We normally recommend using Amazon’s AWS “Free Tier” or Digital Ocean - but both require that you have a credit card. The same is true of Microsoft Azure, Google’s GCP and the vast majority of providers listed at Low End Box (https://lowendbox.com/).

+

Some will accept PayPal, or Bitcoin - but typically those who don’t have a credit card don’t have these either.

+

WARNING: If you go searching too deeply for options in this area, you’re very likely to come across a range of scammy, fake, or fraudulent sites. While we’ve tried to eliminate these from the links below, please do be careful! It should go without saying that none of these are “affiliate” links, and we get no kick-backs from any of them :-)

+

Cards that work as, or like, credit cards

+ +

But what if I don’t want to use a cloud provider? You can just work with a local virtual machine

+

You can run the challenge on a home server and all the commands will work as they would on a cloud server. However, not being exposed to the wild certainly loses the feel of what real sysadmins have to face.

+

If you set up your own VM on a private server, go for the minimum requirements like 1GHz CPU core, 1GB RAM, and a couple of gigs of disk space. You can always adapt this to your heart’s desire (or to how much hardware you have available).

+

Our recommendation is: use a cloud server if you can, to get the full experience, but don’t get limited by it. This is your server.

+

Download the Linux ISO

+

Go to the Official Ubuntu page and download the latest LTS (Long Term Support) available version.

+

NOTE: download the server version, NOT the desktop version.

+

Create a Virtual Machine with VirtualBox

+ +

Install VirtualBox, when ready:

+
    +
  • Click on Machine > New
  • +
  • Give a name to your VM and select the Type Linux. Click Next.
  • +
  • Adjust hardware: 1024MB memory and 1 CPU (this is the minimum, but you can reserve more if your host machine can provide it). Click Next
  • +
  • Virtual hard disk: 2.5GB is the minimum, 5GB is a good number. Click Next.
  • +
  • Finish but we’re not done yet.
  • +
  • The new VM should show up in a list of VMs, select it.
  • +
  • Click on Machine > Settings
  • +
  • Click on Storage. Right-click on Controller: IDE, click on Optical Drive.
  • +
  • Select the Linux ISO you downloaded from the list if available; if not, click Add and find it in your directories. Click Choose.
  • +
  • Click on Network and change the network adapter to Bridged Adapter.
  • +
  • Click OK
  • +
  • Click Start or Machine > Start > Normal Start.
  • +
+

Installing Linux

+

After a few seconds the welcome screen will load. At the end of each page there are DONE and BACK buttons. Use the arrow keys and the Enter key to select options. When you’re OK with your selection, use the arrow keys to go down to DONE and press Enter to go to the next page.

+
    +
  • Welcome Screen: Select your language
  • +
  • Keyboard Configuration: Select Keyboard type
  • +
  • Choose type of install: Select Ubuntu Server (minimized). It comes with most of the packages you need without being bloated. It will install faster too.
  • +
  • Network Connections: If you have set up the VM to use a bridged adapter as instructed, you don’t really have to worry a lot. The installer will automatically detect the DHCP settings from your local network router and you just have to select DONE.
  • +
  • Configure Proxy: If your system requires any http proxy to connect to the internet enter the proxy address, otherwise just select DONE.
  • +
  • Configure Ubuntu archive mirror: Leave it as default. DONE.
  • +
  • Guided Storage Configurations: We are going to utilize the entire storage space reserved for this VM and that’s why we select Use the Entire Disk option.
  • +
  • Storage Configuration: Leave the standard storage configuration and select DONE. When prompted to confirm, don’t worry. This will only use the VM disk, not your computer’s disk.
  • +
  • Profile Setup: Enter your name, your server’s name, your username and password. This user will be your administrator user in the system (or sudo), so don’t forget this password.
  • +
  • Update to Ubuntu Pro: No. Skip for now.
  • +
  • SSH setup: Select Install OpenSSH server, because that’s how you will connect to your server later.
  • +
  • Featured Server Snaps: None of these packages are important now, just select DONE.
  • +
  • Installing System: Now you have to wait a few minutes for your system to install. You can “cheat” and speed up the install by skipping the download of updates: select Cancel update and Reboot when it appears at the bottom of the page a few moments later. You can complete the updates after the first boot. After the installation is complete the system will reboot automatically.
  • +
+

Logging in for the first time

+

After the first reboot, it will show a black screen asking for a login. That’s when you use the username and password you created during the install.

+

Note: the password will not show up as you type, not even as ***; just trust that it is taking it in.

+

If you need to find out the IP address for the VM, just type in the console:

+

ip address

+

That will give you the inet entry, i.e., the IP address. You will need that to connect with SSH.
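On recent Ubuntu releases the ip command also accepts a brief flag, which is a handy way to see just the addresses, one interface per line:

ip -brief -4 address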

+

Remote access via SSH

+

If you are using Windows 10 or 11, follow the instructions to connect using the native SSH client. In older versions of Windows, you may need to install a 3rd party SSH client, like PuTTY, and generate an SSH key pair.
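If you do need to generate a key pair, the OpenSSH tools (built into Windows 10/11, macOS and Linux) can do it - a minimal sketch:

ssh-keygen -t ed25519
# accept the default location; your public key ends up in ~/.ssh/id_ed25519.pub

For a local VM you can simply log in with the password you set during the install, so a key is optional here.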

+

If you are on Linux or MacOS, open a terminal and run the command:

+

ssh username@ip_address

+

Or, using the SSH private key, ssh -i private_key username@ip_address

+

Enter your password (or a passphrase, if your SSH key is protected with one)

+

Voila! You have just accessed your server remotely.

+

If in doubt, consult the complementary video that covers a lot of possible setups (local server with VirtualBox, AWS, Digital Ocean, Azure, Linode, Google Cloud, Vultr and Oracle Cloud).

+

You are now a sysadmin

+

Confirm that you can do administrative tasks by typing:

+

sudo apt update

+

Then:

+

sudo apt upgrade -y

+

Don’t worry too much about the output and messages from these commands, but it should be clear whether they succeeded or not. (Reply to any prompts by taking the default option). These commands are how you force the installation of updates on an Ubuntu Linux system, and only an administrator can do them.

+

REBOOT

+

When a kernel update is identified in this first check for updates, this is one of the few occasions you will need to reboot your server, so go for it after the update is done:

+

sudo reboot now

+

Your server is now all set up and ready for the course!

+

Note that:

+
    +
  • This server is now running but is not exposed to the Internet, i.e. other people will not be able to attempt to connect. We recommend you keep it that way. It is one thing to expose a server in the cloud; exposing your home network is another story. For your own security, don’t do it.
  • +
+

To logout, type logout or exit.

+

When you are done

+

Just type:

+

sudo shutdown now

+

Or click on Force Shutdown

+

Some Other Options

+ +

Now you are ready to start the challenge. Day 1, here we go!

+ + + + + + + + + + + + + + + + + + + + +

Comments

+ + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/00-VPS-big/index.html b/00-VPS-big/index.html new file mode 100644 index 0000000..51ccb02 --- /dev/null +++ b/00-VPS-big/index.html @@ -0,0 +1,1495 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Day 0 - Creating Your Own Server in the Cloud - Linux Upskill Challenge + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + + + + + + + + +

Day 0 - Creating Your Own Server in the Cloud

+

Refer to Day 0 - Get Your Own Server to review your options:

+ +

Signing up with a VPS

+

Sign-up is fairly simple - just provide your email address and a password of your choosing, along with a phone number for 2FA or another second method of authentication. You will also need to provide your credit card information.

+

Comparison

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Provider | Instance Type | vCPU | Memory | Storage | Price* | Trial Credits
AWS | t2.micro | 1 | 1 GB | 8 GB SSD | $18.27 | Free Tier for 1 year
Azure | B1 | 1 | 1 GB | 30 GB SSD | $12.26 | $200 / 30 days + Free Tier for 1 year
GCP | e2-micro | 1 | 1 GB | 10 GB SSD | $7.11 | $300 / 90 days
Oracle | VM.Standard.E2.1.Micro | 1 | 1 GB | 45 GB SSD | $19.92 | $300 / 30 days + Always Free services
+
    +
  • Estimate prices
  • +
+

On a side note, avoid IBM Cloud as much as you can. They do not offer good deals and, according to some reports from previous students, their Linux VMs are modified enough that some commands do not work as expected.

+

Educational Packs

+ +

Create a Virtual Machine

+

The process is basically the same for all these VPS, but here are some step-by-steps:

+ +

VM with Oracle Cloud

+
    +
  • Choose “Compute, Instances” from the left-hand sidebar menu.
  • +
  • Click on Create Instance
  • +
  • Choose a hostname because the default ones are pretty ugly.
  • +
  • Placement: it will automatically choose the one closest to you.
  • +
  • Change Image: Select the image “Ubuntu” and opt for the latest LTS version
  • +
  • Change Shape: Click on “Specialty and previous generation”. Click VM.Standard.E2.1.Micro - the option with 1GB Mem / 1 CPU / Always Free-eligible
  • +
  • Add SSH Keys: select “Generate a key pair for me” and download the private key to connect with SSH. You can also add a new public key that you created locally
  • +
  • Create
  • +
+

Logging in for the first time

+

Select your instance and click “SSH” - it will open a console in a new window. To set a password for root, type “sudo -i passwd” in the command line and choose your own password. You can then become root by typing “su” and entering that password. Note that the password won’t show as you type or paste it.
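As a sketch, that sequence at the console is just:

sudo -i passwd     # set a password for root
su -               # become root, entering the password you just set

You generally won’t need to work as root directly, though - sudo from the default user covers everything in the course.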

+

Remote access via SSH

+

You should see a “Public IPv4 address” (or similar) entry for your server in the account’s control panel - this is its unique Internet IP address, and it is how you’ll connect to it via SSH (the Secure Shell protocol) - something we’ll be covering in the first lesson.

+

If you are using Windows 10 or 11, follow the instructions to connect using the native SSH client. In older versions of Windows, you may need to install a 3rd party SSH client, like PuTTY, and generate an ssh key pair.

+

If you are on Linux or MacOS, open a terminal and run the command:

+

ssh username@ip_address

+

Or, using the SSH private key, ssh -i private_key username@ip_address

+

Enter your password (or a passphrase, if your SSH key is protected with one)

+

Voila! You have just accessed your server remotely.

+

If in doubt, consult the complementary video that covers a lot of possible setups (local server with VirtualBox, AWS, Digital Ocean, Azure, Linode, Google Cloud, Vultr, and Oracle Cloud).

+

What about the root user?

+

Taking a different approach from the smaller VPS providers, the big ones don’t let you use root to connect. Don’t worry, root still exists in the system, but since the provider already created an admin user from the beginning, you don’t have to deal with it.

+

You are now a sysadmin

+

Confirm that you can do administrative tasks by typing:

+

sudo apt update

+

(Normally you’d expect this to prompt you to confirm your password, but because you’re using public key authentication the system hasn’t prompted you to set up a password - and the provider has configured sudo to not request one for the default user).

+

Then:

+

sudo apt upgrade -y

+

Don’t worry too much about the output and messages from these commands, but it should be clear whether they succeeded or not. (Reply to any prompts by taking the default option). These commands are how you force the installation of updates on an Ubuntu Linux system, and only an administrator can do them.

+

REBOOT

+

When a kernel update is identified in this first check for updates, this is one of the few occasions you will need to reboot your server, so go for it:

+

sudo reboot now

+

Your server is now all set up and ready for the course!

+

Note that:

+
    +
  • This server is now running and completely exposed to the whole Internet
  • +
  • You alone are responsible for managing it
  • +
  • You have just installed the latest updates, so it should be secure for now
  • +
+

To logout, type logout or exit.

+

When you are done

+

You should be safe running the VM during the month for the challenge, but you can Stop the instance at any point. It will continue to count toward the bill, though.

+

When you no longer need the VM, Terminate/Destroy instance.

+

Now you are ready to start the challenge. Day 1, here we go!

+ + + + + + + + + + + + + + + + + + + + +

Comments

+ + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/00-VPS-small/index.html b/00-VPS-small/index.html new file mode 100644 index 0000000..c438156 --- /dev/null +++ b/00-VPS-small/index.html @@ -0,0 +1,1523 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Day 0 - Creating Your Own Server in the Cloud (but cheaper) - Linux Upskill Challenge + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + + + + + + + + +

Day 0 - Creating Your Own Server in the Cloud (but cheaper)

+

Refer to Day 0 - Get Your Own Server to review your options:

+ +

Signing up with a VPS

+

Sign-up is immediate - just provide your email address and a password of your choosing and you’re in! To be able to create a VM, however, you may need to provide your credit card information (or other information for billing) in the account section.

+

Comparison

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Provider | Instance Type | vCPU | Memory | Storage | Price | Trial Credits
Digital Ocean | Basic Plan | 1 | 1 GB | 25 GB SSD | $6.00 | $200 / 60 days
Linode | Nanode 1GB | 1 | 1 GB | 25 GB SSD | $5.00 | $100 / 60 days
Vultr | Cloud Compute - Regular | 1 | 1 GB | 25 GB SSD | $5.00 | $250 / 30 days
+

For more details:

+ +

Create a Virtual Machine

+

The process is basically the same for all these VPS providers, but here are some step-by-steps:

+

VM with Digital Ocean (or Droplet)

+
    +
  • Choose “Manage, Droplets” from the left-hand sidebar. (a “droplet” is Digital Ocean’s cute name for a server!)
  • +
  • Click on Create > Droplet
  • +
  • Choose Region: choose the one closest to you. Be aware that the pricing can change depending on the region.
  • +
  • DataCenter: use the default (it will pick one for you)
  • +
  • Choose an image: Select the image “Ubuntu” and opt for the latest LTS version
  • +
  • Choose Size: Basic Plan (shared CPU) + Regular. Click the option with 1GB Mem / 1 CPU / 25GB SSD Disk
  • +
  • Choose Authentication Method: choose “Password” and type a strong password for the root account.
  • +
  • Note that since the server is on the Internet it will be under immediate attack from bots attempting to “brute force” the root password. Make it strong!
  • +
  • Or, if you want to be safer, choose “SSH Key” and add a new public key that you created locally
  • +
  • Choose a hostname because the default ones are pretty ugly.
  • +
  • Create Droplet
  • +
+

VM with Linode (or Node)

+
    +
  • Click on Create Linode (a “linode” is Linode’s cute name for a server)
  • +
  • Choose a Distribution: Select the image “Ubuntu” and opt for the latest LTS version
  • +
  • Choose Region: choose the one closest to you. Be aware that the pricing can change depending on the region.
  • +
  • Linode Plan: Shared CPU + Nanode 1GB. This option has 1GB Mem / 1 CPU / 25GB SSD Disk
  • +
  • Linode Label: Choose a hostname because the default ones are pretty ugly.
  • +
  • Choose Authentication Method: select “Root Password” and type a strong password for the root account.
  • +
  • Note that since the server is on the Internet it will be under immediate attack from bots attempting to “brute force” the root password. Make it strong!
  • +
  • And, if you want to be safer, click “Add An SSH Key” and add a new public key that you created locally
  • +
  • Create Linode
  • +
+

VM with Vultr

+
    +
  • Choose “Products, Instances” from the left-hand sidebar. (no cute names)
  • +
  • Click on Deploy Server
  • +
  • Choose Server: Cloud Compute (Shared vCPU) + Intel Regular Performance
  • +
  • Server Location: choose the one closest to you. Be aware that the pricing can change depending on the region.
  • +
  • Server image: Select the image “Ubuntu” and opt for the latest LTS version
  • +
  • Server Size: Click the option with 1GB Mem / 1 CPU / 25GB SSD Disk
  • +
  • SSH Keys: click “Add New” and add a new public key that you created locally
  • +
  • Note that since there’s no option to authenticate with just a root password, you will need to create an SSH key.
  • +
  • Server Hostname & Label: Choose a hostname for your server.
  • +
  • Disable “Auto Backups”. They will not be required for the challenge and are only adding to the bill.
  • +
  • Deploy Now
  • +
+

Logging in for the first time with console

+

We are going to access our server using SSH but, if for some reason you get stuck in that part, there is a way to access it using a console:

+ +

Remote access via SSH

+

You should see a “Public IPv4 address” (or similar) entry for your server in the account’s control panel - this is its unique Internet IP address, and it is how you’ll connect to it via SSH (the Secure Shell protocol) - something we’ll be covering in the first lesson.

+
    +
  • Digital Ocean: Click on Networking tab > Public Network > Public IPv4 Address
  • +
  • Linode: Click on Network tab > IP Addresses > IPv4 - Public
  • +
  • Vultr: Click on Settings tab > Public Network > Address
  • +
+

If you are using Windows 10 or 11, follow the instructions to connect using the native SSH client. In older versions of Windows, you may need to install a 3rd party SSH client, like PuTTY, and generate an SSH key pair.

+

If you are on Linux or MacOS, open a terminal and run the command:

+

ssh username@ip_address

+

Or, using the SSH private key, ssh -i private_key username@ip_address

+

Enter your password (or a passphrase, if your SSH key is protected with one)

+

Voila! You have just accessed your server remotely.

+

If in doubt, consult the complementary video that covers a lot of possible setups (local server with VirtualBox, AWS, Digital Ocean, Azure, Linode, Google Cloud, Vultr and Oracle Cloud).

+

Creating a working admin account

+

We want to follow the Best Practice of not logging in as “root” remotely, so we’ll create an ordinary user account, but one with the power to “become root” as necessary, like this:

+

adduser snori74

+

usermod -a -G adm snori74

+

usermod -a -G sudo snori74

+

(Of course, replace ‘snori74’ with your name!)

+

This will be the account that you use to login and work with your server. It has been added to the ‘adm’ and ‘sudo’ groups, which on an Ubuntu system gives it access to read various logs and to “become root” as required via the sudo command.

+

To log in using your new user, copy the SSH public key from root’s account to the new user’s, as sketched below.
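A minimal sketch of doing that while you are still logged in as root (replace snori74 with the username you created):

mkdir -p /home/snori74/.ssh
cp /root/.ssh/authorized_keys /home/snori74/.ssh/
chown -R snori74:snori74 /home/snori74/.ssh
chmod 700 /home/snori74/.ssh
chmod 600 /home/snori74/.ssh/authorized_keys

After that, ssh snori74@ip_address should work with the same key you used for root. (If you chose password authentication instead, you can simply log in with the new user’s password.)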

+

You are now a sysadmin

+

Confirm that you can do administrative tasks by typing:

+

sudo apt update

+

Then:

+

sudo apt upgrade -y

+

Don’t worry too much about the output and messages from these commands, but it should be clear whether they succeeded or not. (Reply to any prompts by taking the default option). These commands are how you force the installation of updates on an Ubuntu Linux system, and only an administrator can do them.

+

REBOOT

+

When a kernel update is identified in this first check for updates, this is one of the few occasions you will need to reboot your server, so go for it after the update is done:

+

sudo reboot now

+

Your server is now all set up and ready for the course!

+

Note that:

+
    +
  • This server is now running, and completely exposed to the whole of the Internet
  • +
  • You alone are responsible for managing it
  • +
  • You have just installed the latest updates, so it should be secure for now
  • +
+

To logout, type logout or exit.

+

When you are done

+

You should be safe running the VM during the month of the challenge, but you can Stop the instance at any point. It will continue to count toward the bill, though.

+

When you no longer need the VM, Terminate/Destroy instance.

+

Now you are ready to start the challenge. Day 1, here we go!

+ + + + + + + + + + + + + + + + + + + + +

Comments

+ + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/00/index.html b/00/index.html new file mode 100644 index 0000000..ff90378 --- /dev/null +++ b/00/index.html @@ -0,0 +1,1418 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Get Your Own Server - Linux Upskill Challenge + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + + + + + + + + +

Day 0 - Get Your Own Server

+ +

INTRO

+

First, you need a server. You can’t learn about administering a remote Linux server without having one of your own - so today we’re going to get one - completely free!

+

Through the magic of Linux and virtualization, it’s now possible to get a small Internet server setup almost instantly - and at a very low cost. Technically, what you’ll be doing is creating and renting a VPS (“Virtual Private Server”). In a data center somewhere, a single physical server running Linux will be split into a dozen or more Virtual servers, using the KVM (Kernel-based Virtual Machine) feature that’s been part of Linux since early 2007.

+

In addition to a hosting provider, we also need to choose which “flavor” of Linux to install on our server. If you’re new to Linux then the range of “distributions” available can be confusing - but the latest LTS (“Long Term Support”) version of Ubuntu Server is a popular choice, and what you’ll need for this course.

+

Do you have a free server I can use?

+

Well, not quite yet.

+

SadServers has a beta scenario - “Resumable Server”: Linux Upskill Challenge

+

This is a Debian 11 server without a challenge; it’s for you to do as you please. Please be mindful that it still has some limitations (there’s still no outgoing Internet access) and there can be some issues.

+

So, what are the options?

+ +

Check your options, see what fits you best. Take a minute to watch the video, it will answer most of your questions. When you have your server ready, you can start the challenge.

+

Day 1, here we go!

+ + + + + + + + + + + + + + + + + + + + +

Comments

+ + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/01/index.html b/01/index.html new file mode 100644 index 0000000..f1b3672 --- /dev/null +++ b/01/index.html @@ -0,0 +1,1693 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Day 1 - Get to know your server - Linux Upskill Challenge + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + + + + + + + + +

Day 1 - Get to know your server

+ +

INTRO

+

You should now have a remote server set up, running the latest Ubuntu Server LTS (Long Term Support) version. You alone will be administering it. To become a fully-rounded Linux server admin you should become comfortable working with different versions of Linux, but for now Ubuntu is a good choice.

+

Once you have reached a level of comfort at the command-line then you’ll find your skills transfer not only to all the standard Linux variants, but also to Android, Apple’s macOS, OpenBSD, Solaris and IBM AIX. Throughout the course you’ll be working on Linux - but in fact most of what is covered is applicable to any system derived from the UNIX Operating System - and the major differences between them are with their graphical user interfaces such as Gnome, Unity, KDE etc - none of which you’ll be using!

+

YOUR TASKS TODAY

+
    +
  • Connect and login to your server, preferably using an SSH client
  • +
  • Run a few simple commands to check the status of your server - like this demo
  • +
+

USING A SSH CLIENT

+

Remote access used to be done with the simple telnet protocol, but now the much more secure SSH (Secure SHell) protocol is always used. If your server is a local VM or WSL, you could skip this section by simply using the server console/terminal if you want. We will explore SSH in more detail on the server side on Day 3, but knowing how to use an SSH client is a basic sysadmin skill, so you might as well do it now.

+

In MacOS and Linux

+

On a macOS machine you’ll normally access the command line via Terminal.app - it’s in the Utilities sub-folder of Applications.

+

On Linux distributions with a menu you’ll typically find the terminal under “Applications menu -> Accessories -> Terminal”, “Applications menu -> System -> Terminal” or “Menu -> System -> Terminal Program (Konsole)” - or you can simply search for your terminal application. In many cases Ctrl+Alt+T will also bring up a terminal window.

+

Once you open up a “terminal” session, you can use your command-line ssh client like this:

+

ssh user@<ip address>

+

For example:

+

ssh support@192.123.321.99

+

If the remote server was configured with an SSH public key (like AWS, Azure and GCP), then you’ll need to point to the location of the private key as proof of identity with the -i switch, typically like this:

+

ssh -i ~/.ssh/id_rsa support@192.123.321.99

+

A very slick connection process can be set up with the .ssh/config feature - see the “SSH client configuration” link in the EXTENSION section below.
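As a quick sketch (the alias and the details are just examples matching the command above), an entry in ~/.ssh/config looks like this:

Host myserver
    HostName 192.123.321.99
    User support
    IdentityFile ~/.ssh/id_rsa

After which ssh myserver is all you need to type.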

+

In Windows

+

On recent Windows 10 versions, the same command-line client is now available, but must be enabled (via “Settings”, “Apps”, “Apps & features”, “Manage optional features”, “Add a feature”, “OpenSSH client”).

+

There are various SSH clients available for Windows (PuTTY, Solar-PuTTY, MobaXterm, Termius, etc) but if you use Windows versions older than 10, the installation of PuTTY is suggested.

+

Alternatively, you can install the Windows Subsystem for Linux which gives you a full local command-line Linux environment, including an SSH client - ssh.

+

Regardless of which client you use, the first time you connect to your server, you may receive a warning that you’re connecting to a new server - and be asked if you wish to cache the host key. Yes, you do. Just type/click Yes.

+

But don’t worry too much about securing the SSH session or hardening the server right now; we will be doing this in Day 3.

+

For now, just login to your server and remember that Linux is case-sensitive regarding user names, as well as passwords.

+

You’ll be spending a lot of time in your SSH client, so it pays to spend some time customizing it. At the very least try “black on white” and “green on black” - and experiment with different monospaced fonts, (“Ubuntu Mono” is free to download, and very nice).

+

It’s also very handy to be able to cut and paste text between your remote session and your local desktop, so spend some time getting confident with how to do this in your SSH client and terminal.

+

Perhaps you might now try logging in from home and work - even from your smartphone! - using an ssh client app such as Termux, Termius for Android or Termius for iPhone. As a server admin you’ll need to be comfortable logging in from all over. You can also potentially use JavaScript ssh clients like consolefish and ShellHub, but these options involve putting more trust in third-parties than most sysadmins would be comfortable with when accessing production systems.

+

To log out, simply type exit or close the terminal.

+

LOGIN TO YOUR SERVER

+

Once logged in, notice that the “command prompt” that you receive ends in $ - this is the convention for an ordinary user, whereas the “root” user with full administrative power has a # prompt (but we will dive into this difference in Day 3 as well).

+

Here’s a short vid on using ssh in a work environment.

+

GENERAL INFORMATION ABOUT THE SERVER

+

Use lsb_release -a to see which Linux distro and version you’re using. lsb_release may not be available in your server, as it’s not widely adopted, but you will always have the same information available in the system file os-release. You can check its content by typing cat /etc/os-release

+

uname -a will also print the system information and it can show some interesting things like kernel version, hardware platform, etc.

+

uptime will show you how long the system has been running. It kinda makes the weird numbers you get from cat /proc/uptime a lot more readable.

+

whoami will print the user name you logged on with, who will show who is logged on and w will also show what they are doing.

+

HARDWARE INFORMATION

+

lshw can give some detailed information on the hardware configuration, and there’s a bunch of switches we can use to filter the information we want to see, but it’s not the only tool we use to check hardware with. Some of the other commands used are:

+ +

MEASURE MEMORY AND CPU USAGE

+

Don’t worry! Linux won’t eat your RAM. But if you want to check the amount of memory used in the system, use free -h . vmstat will also give some memory statistics.

+

top is like a Task Manager for Linux, it will display the processes and the consumption of resources. htop is an interactive, prettier version.

+

MEASURE DISK USAGE

+

Use df -h to see disk space usage, but go with du -h if you want to estimate the size of your folders.
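For example:

df -h                   # free and used space on each mounted filesystem
sudo du -sh /var/log    # total size of one directory tree (-s summarise, -h human-readable)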

+

MEASURE NETWORK USAGE

+

You will have a general idea of your network interfaces and their IP addresses by using ifconfig or its modern substitute ip address, but it won’t show you bandwidth usage.

+

For that we have netstat -i in a more static view and ifstat in a continuous view. To interrupt ifstat just use CTRL+C.

+

But if you want more info on that traffic, sudo iftop -i eth0 gives a nice display. Change eth0 to the interface you wish to capture traffic information from. To exit the monitor view, type q to quit.

+

POSTING YOUR PROGRESS

+

Regularly posting your progress can be a helpful motivator. Feel free to post to the subreddit/community or to the discord chat a small introduction of yourself, and your Linux background for your “classmates” - and notes on how each day has gone.

+

Of course, also drop in a note if you get stuck or spot errors in these notes.

+

EXTENSION

+

If this was all too easy, then spend some time reading up on:

+ +

RESOURCES

+ +

Some rights reserved. Check the license terms +here

+ + + + + + + + + + + + + + + + + + + + +

Comments

+ + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/02/index.html b/02/index.html new file mode 100644 index 0000000..3198a63 --- /dev/null +++ b/02/index.html @@ -0,0 +1,1630 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Day 2 - Basic navigation - Linux Upskill Challenge + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + + +
+
+
+ + + +
+ +
+ + + +
+
+ + + + + + + + + + + + + + + + + + + + +

Day 2 - Basic navigation

+ +

INTRO

+

Most computer users outside of the Linux and Unix world don’t spend much time at the command-line now, but as a Linux sysadmin this is your default working environment - so you need to be skilled in it.

+

When you use a graphic desktop such as Windows or Apple’s macOS (or even the latest Linux flavors), then increasingly you are presented with simple “places” where your stuff is stored - “Pictures” “Music” etc but if you’re even moderately technical then you’ll realize that underneath all this is a hierarchical “directory structure” of “folders” (e.g. C:\Users\Steve\Desktop on Windows or /Users/Steve/Desktop on macOS - and on a Desktop Linux system /home/steve/Desktop)

+

From now on, the course will point you to a range of good online resources for a topic, and then set you a simple set of tasks to achieve. It’s perfectly fine to google for other online resources, refer to any books you have etc - and in fact a fundamental element of the design of this course is to force you to do a bit of your own research. Even the most experienced sysadmins will do an online search to find advice for how to use commands - so the sooner you too get into that habit the better!

+

YOUR TASKS TODAY

+
    +
  • Find the documentation for the commands we used so far - demo
  • +
  • Navigate between directories, then create, list, move and delete files - demo
  • +
+

RTFM

+

This is a good time to mention that one of the many advantages of Linux is that it’s designed to let you get to know the system and learn how to use it. The documentation available in the form of text manuals, guides and forums is where you will spend most of your time during this journey.

+

Whereas proprietary systems have some free documentation, you much more frequently see paid customer support being used to fix issues or to find out how a particular task can be executed. Although you can also do this with Linux (Canonical, RedHat and SuSE are examples of companies that offer support in the same fashion), this is most likely not the case. And you are here to learn, so…

+

Which leads us to the famous acronym RTFM. Reading the manual is the first thing you should do when you’re learning a command. We will go through the many ways to obtain that information but if at the end of that search you need more insight, you can always ask a well written question in forums and other communities.

+

Starting with the man command. Each application installed comes with its own page in this manual, so that you can look at the page for pwd to see the full detail on the syntax like this:

+

man pwd

+

You might also try:

+
 man cp
+ man mv
+ man grep
+ man ls
+ man man
+
+ +

As you’ll see, these are excellent for the detailed syntax of a command, but many are extremely terse, and for others the amount of detail can be somewhat daunting!

+

And that’s why tldr is such a powerful tool! You can easily install it with sudo apt install tldr or follow this demo.

+
$ tldr pwd
+pwd
+Print name of current/working directory.More information: https://www.gnu.org/software/coreutils/pwd.
+
+ - Print the current directory:
+   pwd
+
+ - Print the current directory, and resolve all symlinks (i.e. show the "physical" path):
+   pwd -P
+
+

If you know a keyword or some description of what the command is supposed to do, you can try apropos or man -k like this:

+
$ apropos "working directory"
+git-stash (1)        - Stash the changes in a dirty working directory away
+pwd (1)              - print name of current/working directory
+pwdx (1)             - report current working directory of a process
+
+$ man -k "working directory"
+git-stash (1)        - Stash the changes in a dirty working directory away
+pwd (1)              - print name of current/working directory
+pwdx (1)             - report current working directory of a process
+
+

But you’ll soon find out that not every command has a manual that you can read with man. Those commands are contained within the shell itself and we call them builtin commands.

+

There is some overlap (i.e. builtin commands that also have a man page), but if man does not work, we use help to display information about them.

+
$ man export
+No manual entry for export
+
+$ help export
+export: export [-fn] [name[=value] ...] or export -p
+    Set export attribute for shell variables.
+
+    Marks each NAME for automatic export to the environment of subsequently
+    executed commands.  If VALUE is supplied, assign VALUE before exporting.
+
+    Options:
+      -f        refer to shell functions
+      -n        remove the export property from each NAME
+      -p        display a list of all exported variables and functions
+
+    An argument of `--' disables further option processing.
+
+    Exit Status:
+    Returns success unless an invalid option is given or NAME is invalid.
+
+

The best way to know if a command is a builtin command, is to check its type:

+
$ type export
+export is a shell builtin
+
+

And lastly, info reads the documentation stored in info format.

+ +
    +
  • Start by reading the manual: man hier
  • +
  • / is the “root” of a branching tree of folders (also known as directories)
  • +
  • At all times you are “in” one part of the system - the command pwd (“print working directory”) will show you where you are
  • +
  • Generally your prompt is also configured to give you at least some of this information, so if I’m “in” the /etc directory then the prompt might be steve@202.203.203.22:/etc$ or simply /etc: $
  • +
  • cd moves to different areas - so cd /var/log will take you into the /var/log folder - do this and then check with pwd - and look to see if your prompt changes to reflect your location.
  • +
  • You can move “up” the structure by typing cd .. ( “cee dee dot dot “) try this out by first cd‘ing to /var/log/ then cd .. and then cd .. again - watching your prompt carefully, or typing pwd each time, to clarify your present working directory.
  • +
  • A “relative” location is based on your present working directory - e.g. if you first cd /var then pwd will confirm that you are “in” /var, and you can move to /var/log in two ways - either by providing the full path with cd /var/log or simply the “relative” path with the command cd log
  • +
  • A simple cd will always return you to your own defined “home directory”, also referred to as ~ (the “tilde” character) [NB: this differs from DOS/Windows]
  • +
  • What files are in a folder? The ls (list) command will give you a list of the files, and sub folders. Like many Linux commands, there are options (known as “switches”) to alter the meaning of the command or the output format. Try a simple ls, then ls -l -t and then try ls -l -t -r -a
  • +
  • By convention, files with a starting character of “.” are considered hidden and the ls, and many other commands, will ignore them. The -a switch includes them. You should see a number of hidden files in your home directory.
  • +
  • A note on switches: Generally most Linux commands will accept one or more “parameters”, and one or more “switches”. So, when we say ls -l /var/log the “-l” is a switch to say “long format” and the “/var/log” is the “parameter”. Many commands accept a large number of switches, and these can generally be combined (so from now on, use ls -ltra, rather than ls -l -t -r -a)
  • +
  • In your home directory type ls -ltra and look at the far left hand column - those entries with a “d” as the first character on the line are directories (folders) rather than files. They may also be shown in a different color or font - if not, then adding the “--color=auto” switch should do this (i.e. ls -ltra --color=auto)
  • +
+

BASIC DIRECTORY MANIPULATION

+
    +
  • You can make a new folder/directory with the mkdir command, so move to your home directory, type pwd to check that you are indeed in the correct place, and then create a directory, for example to create one called “test”, simply type mkdir test. Now use the ls command to see the result.
  • +
  • You can create even more directories, nesting inside directories, and then navigate between them with the cd command.
  • +
  • When you want to move that directory inside another directory, you use mv and specify the path to move.
  • +
  • To delete (or remove) a directory, use rmdir if the directory is empty, or rm -r if there are still any files or other directories inside of it.
  • +
+

BASIC FILE MANIPULATION

+
    +
  • You can make new empty files with the touch command, so you can explore a little more of the ls command.
  • +
  • When you want to move that file to another directory, you use mv and specify the path to move.
  • +
  • To delete (or remove) a file, use rm. (There’s a short example session putting these commands together just after this list.)
  • +
+
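Putting the last two sections together, a short example session might look like this (the file and directory names are just examples):

cd ~                                  # start from your home directory
mkdir test                            # create a directory
touch test/hello.txt                  # create an empty file inside it
ls -ltra test                         # list its contents, including hidden files
mv test/hello.txt test/goodbye.txt    # rename (move) the file
rm test/goodbye.txt                   # delete the file
rmdir test                            # delete the (now empty) directory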

WRAP

+

Being able to move confidently around the directory structure at the command line is important, so don’t think you can skip it! However, these skills are something that you’ll be constantly using over the twenty days of the course, so don’t despair if this doesn’t immediately “click”.

+

EXTENSION

+

If this is already something that you’re very familiar with, then:

+
    +
  • Learn about pushd and popd to navigate around multiple directories easily. Running pushd /var/log moves you to /var/log, but keeps track of where you were. You can pushd more than one directory at a time. Try it out: pushd /var/log, pushd /dev, pushd /etc, pushd, popd, popd. Note how pushd with no arguments switches between the last two pushed directories, but more complex navigation is also possible. Finally, cd - also moves you to the last visited directory.
  • +
+

RESOURCES

+ +

Some rights reserved. Check the license terms +here

Day 3 - Power trip!

+ +

INTRO

+

You may have been logging in as an ordinary user on your server, yet you’re probably aware that root is the power user on a Linux system. This administrative, or “superuser”, account is all-powerful - and a typo in a command could potentially cripple your server. As a sysadmin you’re typically working on systems that are both important and remote, so avoiding such mistakes is A Very Good Idea.

+

In ancient times, sysadmins used to log in as root on production systems, but it’s now common Best Practice to discourage or disallow direct login as root, and instead to give specified trusted users the permission to run root-only commands via the sudo command.

+

YOUR TASKS TODAY

+
    +
  • Change the password of your sudo user
  • +
  • Change the hostname
  • +
  • Change the timezone
  • +
+

Check out the demo

+

LOCAL CHANGES VS GLOBAL CHANGES

+

Global: programs/environments that any user can use, used across the system. A global change affects all users.

+

Local or By user: programs/environments that a particular user runs, not available to other users. A local change affects only one user.

+

WHO ARE YOU AND WHAT CAN YOU DO?

+

There are 3 types of users in a Linux system:

+
    +
  • root - the powerful superuser that can execute any command at any level in the system. They can do all global changes as well as local changes for any user.
  • +
  • sudoers - regular users that are allowed to use sudo, i.e., they can execute commands at one or more levels in the system, and can make some or all global changes. It’s common to have at least one sudoer that has the same powers as root, but the amount of privileges other sudoers have can vary.
  • +
  • regular users - users that can use the system but can only do local changes, i.e., can only deal with their own files/directories and environment variables.
  • +
+

We will get into more detail about users and their permissions on Day 13 and Day 14.

+

STOP USING ROOT

+

If you created a VM with one of the big VPS providers, root is already “disabled” and your default user (ubuntu, azureuser, etc) already has sudo powers.

+

However, if you really, really want to use root, there are ways to do it in AWS, Azure and GCP. But do it at your own risk!

+

However, if you created a VM locally or with other VPS providers, it is very likely that you have your root user readily available.

+

Stop using root. If you followed the guides, you should have created a regular user and added it to a sudoers group, like this:

+
adduser snori74
+usermod -a -G sudo snori74
+
+ +

Adding a regular user to a group with sudo privileges is the easiest way to do it, as the sudo group is pretty standard in Ubuntu. But this can also be accomplished by modifying the /etc/sudoers file using the command visudo.
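If you do go the visudo route, the relevant default line on Ubuntu looks something like the sketch below - always edit it via sudo visudo rather than directly, because a syntax error in /etc/sudoers can lock you out of sudo:

# Allow members of group "sudo" to execute any command as any user
%sudo   ALL=(ALL:ALL) ALL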

+

Login with this new user from now on. Use whoami to print the user name you logged on with.

+

CHANGE PASSWORD

+

If you’re using a password to login (rather than a public key), then now is a good time to ensure that this is very strong and unique - i.e. at least 10 alphanumeric characters - because your server is fully exposed to bots that will be continuously attempting to break in. This is especially important if you’re still using root.

+

Use the passwd command to change your password.

+

To do this, think of a new, secure password, then simply type passwd, press “Enter” and give your current password when prompted, then the new one you’ve chosen, confirm it - and then WRITE IT DOWN somewhere. In a production system of course, public keys and/or two factor authentication would be more appropriate.

+

A NOTE ON “HARDENING”

+

Your server is protected by the fact that its security updates are up to date, and that you’ve set Long Strong Unique passwords - or are using public keys. While exposed to the world, and very likely under continuous attack, it should be perfectly secure.

+

Next week we’ll look at how we can view those attacks, but for now it’s simply important to state that while it’s OK to read up on “SSH hardening”, things such as changing the default port and fail2ban are unnecessary and unhelpful when we’re trying to learn - and you are perfectly safe without them.

+

THE POWER OF SUDO

+
    +
  • Use the links in the “Resources” section below to understand how sudo works
  • +
  • Try cat /etc/shadow, can you view the contents of the file?
  • +
  • This file is where the hashed passwords are kept. It is a prime target for intruders - who aim to grab it and use offline password crackers to discover the passwords. So it’s safe to assume it shouldn’t be visible to non-authorized users in the system.
  • +
  • Now try with sudo, i.e. sudo cat /etc/shadow
  • +
  • Test running the reboot command, and then via sudo (i.e. sudo reboot)
  • +
+

Once you’ve reconnected back:

+
    +
  • Use the uptime command to confirm that your server did actually fully restart
  • +
  • See the login history by filtering the username (e.g. snori74) using the command last. If this is the first time using a non-root user, you will only have one record (i.e. last snori74).
  • +
  • Now compare to the times you logged as root: last root
  • +
  • Better yet, check for failed login attempts for root with sudo lastb
  • +
  • Test fully “becoming root” by the command sudo -i. This can be handy if you have a series of commands to do “as root”. Note the change to your prompt.
  • +
  • Type exit or logout to get back to your own normal “admin” login.
  • +
  • Check the last few times sudo was used by typing: sudo journalctl -e /usr/bin/sudo
  • +
+

Normally invoking the sudo command will ask you to re-confirm your identity with your password. However, this can be changed in the sudoers configuration file so it does NOT prompt for a password. We talk about it in more detail in Day 13.
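As a preview (no need to do this now), such a passwordless entry looks roughly like the following sketch - “snori74” is just the example user from earlier:

# Added via "sudo visudo": let this user run any sudo command without a password prompt
snori74 ALL=(ALL) NOPASSWD:ALL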

+

ADMINISTRATIVE TASKS

+

We will go into detail of the many things you can do to your server, but here are some examples of simple administrative tasks that require sudo.

+

If you wish to, you can now rename your server. Traditionally you would do this by editing two files, /etc/hostname and /etc/hosts and then rebooting - but the more modern, and recommended, way is to use the hostnamectl command, like this:

+
sudo hostnamectl set-hostname mylittlecloudbox
+
+ +

No reboot is required but if you want to see the new name in the prompt, just open a new session with bash (or logoff and login again, same effect).

+

For a cloud server, you might find that the hostname changes after a reboot. To prevent this, edit /etc/cloud/cloud.cfg and change the “preserve_hostname” line to read:

+
preserve_hostname: true
+
+ +

You might also consider changing the timezone your server uses. By default this is likely to be UTC (i.e. GMT) - which is pretty appropriate for a worldwide fleet of servers. You could also set it to the zone the server is in, or where you and your headquarters are. For a company this is a decision not to be taken lightly, but for now you can simply change as you please!

+

First check the current setting with:

+
timedatectl
+
+ +

Then get a list of available timezones:

+
timedatectl list-timezones
+
+ +

And finally select one, like this:

+
sudo timedatectl set-timezone Australia/Sydney
+
+ +

Confirm:

+
timedatectl
+
+ +

The major practical effects of this are (1) the timing of scheduled tasks, and (2) the timestamping of the log files kept under /var/log. If you make a change, there will naturally be a “jump” in the dates and times recorded.

+

WITH GREAT POWERS COMES GREAT RESPONSIBILITY

+

As a Linux sysadmin you may be working on client or custom systems where you have little control, and many of these will default to doing everything as root. You need to be able to safely work on such systems - where your only protection is to double check before pressing Enter.

+

On the other hand, for any systems where you have full control, setting up a “normal” account for yourself (and any co-admins) with permission to run sudo is recommended. While this is standard with Ubuntu, it’s also easy to configure with other popular server distros such as Debian, CentOS and RHEL.

+

Even with that, it’s important to take the necessary precautions before making global changes, to prevent accidentally locking yourself out or other issues. Practices like using a test environment, checking for syntax errors and typos, and keeping an eye on the log files, will eventually become second nature.

+

EXTENSION

+ +

What’s the difference between “sudo -i” and “sudo -s”?

+

Both sudo -i and sudo -s are commands that allow a user to obtain root privileges on a Unix-based system. However, they have some differences in how they function.

+
    +
  • sudo -i is the “login” option (think of it as an interactive login as root): it launches a new login shell for the root user. This means that it creates a new environment for the root user, with the root user’s home directory and shell configuration files. This makes it similar to logging in directly as the root user, and any commands executed from this shell will have the privileges of the root user.
  • +
  • sudo -s stands for “sudo shell” and it launches a new shell for the root user, but it does not create a new login shell. This means that it does not change the environment or shell configuration files of the current user. Any commands executed from this shell will have the privileges of the root user, but the environment will still be that of the current user.
  • +
+

In summary, both give you root’s privileges - the difference is the environment. sudo -i creates a new login shell with the full environment of the root user, while sudo -s only launches a new shell with the root user’s privileges but keeps the environment of the current user.
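A quick way to see the difference for yourself (output will vary slightly from system to system):

cd /tmp
sudo -i        # login shell: you land in /root with root's own environment
pwd            # typically prints /root
exit
sudo -s        # non-login shell: root's privileges, but you stay where you were
pwd            # prints /tmp
exit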

+

RESOURCES

+ +

Some rights reserved. Check the license terms +here


Day 4 - Installing software, exploring the file structure

+ +

INTRO

+

As a sysadmin, one of your key tasks is to install new software as required. You’ll also need to be very familiar with the layout of the standard directories in a Linux system.

+

You’ll be getting practice in both of these areas in today’s session.

+

YOUR TASKS TODAY

+
    +
  • Install a new application from the online repositories
  • +
  • Become familiar with some of the standard directories
  • +
  • Look at the format and content of some configuration files.
  • +
+

If you’ve used a smartphone “app store” or “market”, then you’ll immediately understand the normal installation of Linux software from the standard repositories. As long as we know what the name or description of a package (=app) is, then we can search for it:

+
 apt search "midnight commander"
+
+ +

This will show a range of matching “packages”, and we can then install them with apt install command. So to install package mc (Midnight Commander) on Ubuntu:

+
 sudo apt install mc
+
+ +

(Unless you’re already logged in as the root user you need to use sudo before the installation commands - because an ordinary user is not permitted to install software that could impact a whole server).

+

Now that you have mc installed, start it by simply typing mc and pressing Enter.

+

This isn’t a “classic” Unix application, but once you get over the retro interface you should find navigation fairly easy, so go looking for these directories:

+

/root +/home +/sbin +/etc +/var/log

+

…and use the links in the Resources section below to begin to understand how these are used. You can also read the official manual on this hierarchy by typing man hier.

+

Most key configuration files are kept under /etc and subdirectories of that. These files, and the logs under /var/log are almost invariably simple text files. In the coming days you’ll be spending a lot of time with these - but for now simply use F3 to look into their contents.

+

Some interesting files to look at are: /etc/passwd, /etc/ssh/sshd_config and /var/log/auth.log

+

Use F3 again to exit from viewing a file.

+

F10 will exit mc, although you may need to use your mouse to select it.

+

(On an Apple Mac in Terminal, you may need to use ESC+3 to get F3 and ESC+0 for F10)

+

Now use apt search to search for and install some more packages: Try searching for “hangman”. You will probably find that an old text-based version is included in a package called bsdgames. Install and play a couple of rounds…
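On Ubuntu, something like the following should work (package contents can change between releases, so treat this as a sketch):

apt search hangman
sudo apt install bsdgames
hangman            # the games install under /usr/games, which is normally on your PATH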

+

Posting your progress

+
    +
  • Post your progress, comments and questions to the forum.
  • +
+

EXTENSION

+
    +
  • Use mc to view /etc/apt/sources.list.d/ubuntu.sources where the actual locations of the repositories are specified. Often these will be “mirror” sites that are closer to your server than the main Ubuntu servers.
  • +
  • Read Repositories - CommandLine for more of the gory details.
  • +
+

RESOURCES

+ +

Some rights reserved. Check the license terms +here


Day 5 - More or less…

+ +

INTRO

+

Today we’ll end with a bang - with a quick introduction to five different topics. Mastery isn’t required today - you’ll be getting plenty of practice with all these in the sessions to come!

+

Don’t be misled by how simplistic some of these commands may seem - they all have hidden depths and many sysadmins will be using several of these every day.

+

YOUR TASKS TODAY

+
    +
  • Use tab completion
  • +
  • Search in the command history
  • +
  • Read a dot file using more and less
  • +
  • Change / customize your prompt
  • +
+

Use the links in the Resources section to complete these tasks:

+
    +
  • +

    Get familiar with using more and less for viewing files, including being able to get to the top or bottom of a file in less, and searching for some text (a short cheat-sheet follows this list)

    +
  • +
  • +

    Test how “tab completion” works - this is a handy feature that helps you enter commands correctly. It helps find both the command and also file name parameters, so typing les then hitting “Tab” will complete the command less, but also typing less /etc/serv and pressing “Tab” will complete to less /etc/services. Try typing less /etc/s then pressing “Tab”, and again, to see how the feature handles ambiguity.

    +
  • +
  • +

    Now that you’ve typed in quite a few commands, try pressing the “Up arrow” to scroll back through them. What you should notice is that not only can you see your most recent commands - but even those from the last time you logged in. Now try the history command - this lists out the whole of your cached command history - often 100 or more entries. There are a number of clever things that can be done with this. The simplest is to repeat a command - pick one line to repeat (say number 20) and repeat it by typing !20 and pressing “Enter”. Later, when you’re typing long, complex commands, this can be very handy. You can also press Ctrl + r, then start typing any part of the command that you are looking for. You’ll see an autocomplete of a past command at your prompt. If you keep typing, you’ll see more specific options appear. You can either run it by pressing return, or edit it first using the arrows or other movement keys. You can also keep pressing Ctrl + r to see other instances of the same command you used with different options.

    +
  • +
  • +

    Look for “hidden” files in your home directory. In Linux the convention is simply that any file starting with a “.” character is hidden. So, type cd to return to your “home directory” then ls -l to show what files are there. Now type ls -la or ls -ltra (the “a” is for “all”) to show all the files - including those starting with a dot. By far the most common use of “dot files” is to keep personal settings in a home directory. So use your new skills with less to look at the contents of .bashrc , .bash_history and others.

    +
  • +
  • +

    Finally, use the nano editor to create a file in your home directory and type up a summary of how the last five days have worked for you.

    +
  • +
+
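Here’s the small cheat-sheet promised above, covering the less keys and a couple of history tricks:

less /etc/services     # then, inside less:
                       #   g = top of file, G = bottom, /ssh = search forward,
                       #   n = next match, N = previous match, q = quit
history | tail -20     # the 20 most recent commands, with their numbers
!20                    # re-run command number 20 from that list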

EXTENSION

+

We’re using bash as our terminal shell for now (it is standard in many distros) but it is not the only one out there. If you want to test out zsh, fish or oh-my-zsh, you will see that there are a few differences and the features are usually the main differentiator. Try that, poke around.

+

After that, you can go up a notch and try to have several shell sessions open at the same time in the same terminal window with a terminal multiplexer. Try screen - that’s a little simpler and maybe too terse in the beginning - or tmux, which has many features and colors. There is so much material out there on “how to customize your tmux” - have fun.

+

RESOURCES

+ +

Some rights reserved. Check the license terms +here


Day 6 - Editing with “vim”

+ +

INTRO

+

Simple text files are at the heart of Linux, so editing these is a key sysadmin skill. There are a range of simple text editors aimed at beginners. Some more common examples you’ll see are nano and pico. These look as if they were written for DOS back in the 1980’s - but are pretty easy to “just figure out”.

+

The Real Sysadmin™, however, uses vi - this is the editor that’s always installed by default - and today you’ll get started using it.

+

Bill Joy wrote Vi back in the mid 1970’s - and even the “modern” Vim that we’ll concentrate on is over 20 years old, but despite their age, these remain the standard editors on command-line server boxes. Additionally, they have a loyal following among programmers, and even some writers. Vim is actually a contraction of Vi IMproved and is a direct descendant of Vi.

+

Very often when you type vi, what the system actually starts is vim. To see if this is true of your system, run:

+
vi --version
+
+

You should see output similar to the following if the vi command is actually symlinked to vim:

+
user@testbox:~$ vi --version
+VIM - Vi IMproved 8.2 (2019 Dec 12, compiled Oct 01 2021 01:51:08)
+Included patches: 1-2434
+Extra patches: 8.2.3402, 8.2.3403, 8.2.3409, 8.2.3428
+Modified by team+vim@tracker.debian.org
+Compiled by team+vim@tracker.debian.org
+...
+
+

YOUR TASKS TODAY

+
    +
  • Run vimtutor
  • +
  • Edit a file with vim
  • +
+

WHAT IF I DON’T HAVE VIM INSTALLED?

+

The rest of this lesson assumes that you have vim installed on your system, +which it often is by default. But in some cases it isn’t and if you try to run +the vim commands below you may get an error like the following:

+
user@testbox:~$ vim
+-bash: vim: command not found
+
+

OPTION 1 - ALIAS VIM

+

One option is to simply substitute vi for any of the vim commands in the instructions below. Vim is backward compatible with Vi, and all of the below exercises should work the same for Vi as well as for Vim. To make things easier on ourselves we can just alias the vim command so that vi runs instead:

+
echo "alias vim='vi'" >> ~/.bashrc
+source ~/.bashrc
+
+

OPTION 2 - INSTALL VIM

+

The other option, and the option that many sysadmins would probably take is to +install Vim if it isn’t installed already.

+

To install Vim on Ubuntu using the system package manager, run:

+
sudo apt install vim
+
+
+

Note: Since Ubuntu Server LTS is the recommended +Linux distribution to use for the Linux Upskill Challenge, installing Vim for +all of the other various Linux “distros” is outside of the scope of this +lesson. The command above “should” work for most Debian-family Linux OS’s +however, so if you’re running Mint, Debian, Pop!_OS, or one of the many other +flavors of Ubuntu, give it a try. For Linux distros outside of the +Debian-family a few simple web-searches will probably help you find how to +install Vim using other Linux’s package managers.

+
+

THE TWO THINGS YOU NEED TO KNOW

+
    +
  • There are two “modes” - with very different behaviours
  • +
  • Little or nothing onscreen lets you know which mode you’re currently in!
  • +
+

The two modes are “normal mode” and “insert mode”, and as a beginner, simply remember:

+

"Press Esc twice or more to return to normal mode"

+

The “normal mode” is used to input commands, and “insert mode” for writing text - similar to a regular text editor’s default behaviour.

+

INSTRUCTIONS

+

So, first grab a text file to edit. A copy of /etc/services will do nicely:

+
cd
+pwd
+cp -v /etc/services testfile
+vim testfile
+
+

At this point we have the file on screen, and we are in “normal mode”. Unlike nano, however, there’s no onscreen menu and it’s not at all obvious how anything works!

+

Start by pressing Esc once or twice to ensure that we are in normal mode (remember this trick from above), then type :q! and press Enter. This quits without saving any changes - a vital first skill when you don’t yet know what you’re doing! +Now let’s go in again and play around, seeing how powerful and dangerous vim is - then again, quit without saving:

+
vim testfile
+
+

Use the keys h j k and l to move around (this is the traditional vi method) then try using the arrow keys - if these work, then feel free to use them - but remember those hjkl keys because one day you may be on a system with just the traditional vi and the arrow keys won’t work.

+

Now play around moving through the file. Then exit with Esc Esc :q! as discussed earlier.

+

Now that you’ve mastered that, let’s get more advanced.

+
vim testfile
+
+

This time, move down a few lines into the file and press 3 then 3 again, then d and d again - and suddenly 33 lines of the file are deleted!

+

Why? Well, you are in normal mode and 33dd is a command that says “delete 33 lines”. Now, you’re still in normal mode, so press u - and you’ve magically undone the last change you made. Neat huh?

+

Now you know the three basic tricks for a newbie to vim:

+
    +
  • Esc Esc always gets you back to “normal mode”
  • +
  • From normal mode :q! will always quit without saving anything you’ve done, and
  • +
  • From normal mode u will undo the last action
  • +
+

So, here’s some useful, productive things to do:

+
    +
  • Finding things: From normal mode, type G to get to the bottom of the file, then gg to get to the top. Let’s search for references to “sun”, type /sun to find the first instance, hit enter, then press n repeatedly to step through all the next occurrences. Now go to the top of the file (gg remember) and try searching for “Apple” or “Microsoft”.
  • +
  • Cutting and pasting: Go back up to the top of the file (with gg) and look at the first few lines of comments (the ones with “#” as the first character). Play around with cutting some of these out, and pasting them back. To do this simply position the cursor on a line, then (for example), type 11dd to delete 11 lines, then immediately paste them back in by pressing P - and then move down the file a bit and paste the same 11 lines in there again with P
  • +
  • Inserting text: Move anywhere in the file and press i to get into “insert mode” (it may show at the bottom of the screen) and start typing - and Esc Esc to get back into normal mode when you’re done.
  • +
  • Writing your changes to disk: From normal mode type :w to “write” but stay in vim, or :wq to “write and quit”.
  • +
+

This is as much as you ever need to learn about vim - but there’s an enormous amount more you could learn if you had the time. Your next step should be to run vimtutor and go through the “official” Vim tutorial. It typically takes around 30 minutes the first time through. To solidify your Vim skills make a habit of running through the vimtutor every day for 1-2 weeks and you should have a solid foundation with the basics.

+
+

Note: If you aliased vim to vi for the exercises above, now might be a good time to install vim, since this is what provides the vimtutor command. Once you have Vim installed, you can run :help vimtutor from inside of Vim to view the help as well as a few other tips/tricks.

+
+

However, if you’re serious about becoming a sysadmin, it’s important that you commit to using vim (or vi) for all of your editing from now on.

+

One last thing: you may see references to the Vi vs. Emacs debate. This is a long-running rivalry for programmers, not system administrators - vi/vim is what you need to learn.

+

WHY CAN’T I JUST STICK WITH NANO?

+
    +
  • +

    In many situations as a professional, you’ll be working on other people’s systems, and they’re often very paranoid about stability. You may not have the authority to just “sudo apt install ” - even if technically you could.

    +
  • +
  • +

    However, vi is always installed on any Unix or Linux box from tiny IoT devices to supercomputer clusters. It is actually required by the Single Unix Specification and POSIX.

    +
  • +
  • +

    And frankly it’s a shibboleth for Linux pros. As a newbie in an interview it’s fine to say you’re “only a beginner with vi/vim” - but very risky to say you hate it and can never remember how to exit.

    +
  • +
+

So, it makes sense if you’re aiming to do Linux professionally, but if you’re just working on your own systems then by all means choose nano or pico etc.

+

EXTENSION

+

If you’re already familiar with vi / vim then use today’s hour to research and test some customisation via your ~/.vimrc file. The link below is specifically for sysadmins:

+ +

RESOURCES

+ +

Some rights reserved. Check the license terms +here


Day 7 - The server and its services

+ +

INTRO

+

Today you’ll install a common server application - the Apache2 web server - also known as httpd - the “Hypertext Transfer Protocol daemon”!

+

If you’re a website professional then you might do things slightly differently, but our focus with this is not on Apache itself, or the website content, but to get a better understanding of:

+
    +
  • application installation
  • +
  • configuration files
  • +
  • services
  • +
  • logs
  • +
+

YOUR TASKS TODAY

+
    +
  • Install and run apache, transforming your server into a web server
  • +
+

INSTRUCTIONS

+
    +
  • Refresh your list of available packages (apps) by: sudo apt update - this takes a moment or two, but ensures that you’ll be getting the latest versions.
  • +
  • Install Apache from the repository with a simple: sudo apt install apache2
  • +
  • Confirm that it’s running by browsing to http://[external IP of your server] - where you should see a confirmation page.
  • +
  • Apache is installed as a “service” - a program that starts automatically when the server starts and keeps running whether anyone is logged in or not. Try stopping it with the command: sudo systemctl stop apache2 - check that the webpage goes dead - then re-start it with sudo systemctl start apache2 - and check its status with: systemctl status apache2.
  • +
  • As with the vast majority of Linux software, configuration is controlled by files under the /etc directory - check the configuration files under /etc/apache2 especially /etc/apache2/apache2.conf - you can use less to simply view them, or the vim editor to view and edit as you wish.
  • +
  • In /etc/apache2/apache2.conf there’s the line with the text: “IncludeOptional conf-enabled/*.conf”. This tells Apache that the *.conf files in the subdirectory conf-enabled should be merged in with those from /etc/apache2/apache2.conf at load. This approach of lots of small specific config files is common.
  • +
  • If you’re familiar with configuring web servers, then go crazy, setup some virtual hosts, or add in some mods etc.
  • +
  • The location of the default webpage is defined by the DocumentRoot parameter in the file /etc/apache2/sites-enabled/000-default.conf.
  • +
  • Use less or vim to view the code of the default page - normally at /var/www/html/index.html. This uses fairly complex modern web design - so you might like to browse to http://165.227.92.20/sample where you’ll see a much simpler page. Use View Source in your browser to see the code of this, copy it, and then, in your ssh session sudo vim /var/www/html/index.html to first delete the existing content, then paste in this simple example - and then edit to your own taste. View the result with your workstation browser by again going to http://[external IP of your server]
  • +
  • As with most Linux services, Apache keeps its logs under the /var/log directory - look at the logs in /var/log/apache2 - in the access.log file you should be able to see your session from when you browsed to the test page. Notice that there’s an overwhelming amount of detail - this is typical, but in a later lesson you’ll learn how to filter out just what you want. Notice the error.log file too - hopefully this one will be empty!
  • +
+

Note for AWS/Azure/GCP/OCI users

+

Don’t forget to add port 80 to your instance security group to allow inbound traffic to your server.

+ +

POSTING YOUR PROGRESS

+

Practice your text-editing skills, and allow your “classmates” to judge your progress by editing /var/www/html/index.html with vim and posting the URL to access it to the forum. (It doesn’t have to be pretty!)

+

SECURITY

+
    +
  • As the sysadmin of this server, responsible for its security, you need to be very aware that you’ve now increased the “attack surface” of your server. In addition to ssh on port 22, you are now also exposing the apache2 code on port 80. Over time the logs may reveal access from a wide range of visiting search engines, and attackers - and that’s perfectly normal.
  • +
  • If you run the commands: sudo apt update, then sudo apt upgrade, and accept the suggested upgrades, then you’ll have all the latest security updates, and be secure enough for a test environment - but you should re-run this regularly.
  • +
+

EXTENSION

+

Read up on:

+ +

RESOURCES

+ +

TROUBLESHOOT AND MAKE A SAD SERVER HAPPY!

+

Practice what you’ve learned with some challenges at SadServers.com:

+ +

Some rights reserved. Check the license terms +here


Day 8 - The infamous “grep” and other text processors

+ +

INTRO

+

Your server is now running two services: the sshd (Secure Shell Daemon) service that you use to login; and the Apache2 web server. Both of these services are generating logs as you and others access your server - and these are text files which we can analyse using some simple tools.

+

Plain text files are a key part of “the Unix way” and there are many small “tools” to allow you to easily edit, sort, search and otherwise manipulate them. Today we’ll use grep, cat, more, less, cut, awk and tail to slice and dice your logs.

+

The grep command is famous for being extremely powerful and handy, but also because its “nerdy” name is typical of Unix/Linux conventions.

+

YOUR TASKS TODAY

+
    +
  • Dump out the complete contents of a file with cat like this: cat /var/log/apache2/access.log
  • +
  • Use less to open the same file, like this: less /var/log/apache2/access.log - and move up and down through the file with your arrow keys, then use “q” to quit.
  • +
  • Again using less, look at a file, but practice confidently moving around using gg, G and /, n and N (to go to the top of the file, bottom of the file, to search for something and to hop to the next “hit” or back to the previous one)
  • +
  • View recent logins and sudo usage by viewing /var/log/auth.log with less
  • +
  • Look at just the tail end of the file with tail /var/log/apache2/access.log (yes, there’s also a head command!)
  • +
  • Follow a log in real-time with: tail -f /var/log/apache2/access.log (while accessing your server’s web page in a browser)
  • +
  • You can take the output of one command and “pipe” it in as the input to another by using the | (pipe) symbol
  • +
  • So, dump out a file with cat, but pipe that output to grep with a search term - like this: cat /var/log/auth.log | grep "authenticating"
  • +
  • Simplify this to: grep "authenticating" /var/log/auth.log
  • +
  • Piping allows you to narrow your search, e.g. grep "authenticating" /var/log/auth.log | grep "root"
  • +
  • Use the cut command to select out the most interesting portions of each line by specifying “-d” (delimiter) and “-f” (field) - like: grep "authenticating" /var/log/auth.log| grep "root"| cut -f 10- -d" " (field 10 onwards, where the delimiter between fields is the ” ” character). This approach can be very useful in extracting useful information from log data.
  • +
  • Use the -v option to invert the selection and find attempts to login with other users: grep "authenticating" /var/log/auth.log| grep -v "root"| cut -f 10- -d" "
  • +
+

The output of any command can be “redirected” to a file with the “>” operator. The command: ls -ltr > listing.txt wouldn’t list the directory contents to your screen, but instead redirect into the file “listing.txt” (creating that file if it didn’t exist, or overwriting the contents if it did).

+

WHERE’S MY /VAR/LOG/AUTH.LOG?

+

If you didn’t find the file /var/log/auth.log you’re probably using a minimal version of Ubuntu (it could be your own local VM or an image from one of the VPS providers). That minimal image is, well… minimal. It only has the systemd journal available and doesn’t come with the old syslog system by default.

+

But don’t worry! To get that back, sudo apt install rsyslog and the file will be created. Just give it a few minutes to populate before working on the lesson.

+

It may also be missing a few of the other programs we use in the challenge, but you can always install them.

+

POSTING YOUR PROGRESS

+

Re-run the command to list all the IP’s that have unsuccessfully tried to login to your server as root - but this time, use the “>” operator to redirect it to the file: ~/attackers.txt. You might like to share and compare with others doing the course how heavily you’re “under attack”!
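One possible pipeline, simply building on the commands from earlier (your exact field numbers may differ depending on the log format):

grep "authenticating" /var/log/auth.log | grep "root" | cut -f 10- -d" " > ~/attackers.txt
less ~/attackers.txt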

+

EXTENSION

+
    +
  • See if you can extend your filtering of auth.log to select just the IP addresses, then pipe this to sort, and then further to uniq to get a list of all those IP addresses that have been “auditing” your server security for you.
  • +
  • Investigate the awk and sed commands. When you’re having difficulty figuring out how to do something with grep and cut, then you may need to step up to using these. Googling for “linux sed tricks” or “awk one liners” will get you many examples.
  • +
  • Aim to learn at least one simple useful trick with both awk and sed (two starter examples follow this list)
  • +
+
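Two starter one-liners, as promised above - both are read-only, so they’re safe to experiment with:

awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head   # count web hits per client IP, busiest first
sed '/^#/d' /etc/apache2/apache2.conf | less                                      # show a config file with the comment lines stripped out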

RESOURCES

+ +

TROUBLESHOOT AND MAKE A SAD SERVER HAPPY!

+

Practice what you’ve learned with some challenges at SadServers.com:

+ +

Some rights reserved. Check the license terms +here


Day 9 - Diving into networking

+ +

INTRO

+

The two services your server is now running are sshd for remote login, and apache2 for web access. These are both “open to the world” via the TCP/IP “ports” - 22 and 80.

+

As a sysadmin, you need to understand what ports you have open on your servers, because each open port is also a potential focus of attacks. You need to be able to put in place appropriate monitoring and controls.

+

YOUR TASKS TODAY

+
    +
  • Secure your web server by using a firewall
  • +
+

INSTRUCTIONS

+

First we’ll look at a couple of ways of determining what ports are open on your server:

+
    +
  • ss - this, “socket status”, is a standard utility - replacing the older netstat
  • +
  • nmap - this “port scanner” won’t normally be installed by default
  • +
+

There are a wide range of options that can be used with ss, but first try: ss -ltpn

+

The output lines show which ports are open on which interfaces:

+
sudo ss -ltp
+State   Recv-Q  Send-Q   Local Address:Port     Peer Address:Port  Process
+LISTEN  0       4096     127.0.0.53%lo:53        0.0.0.0:*      users:(("systemd-resolve",pid=364,fd=13))
+LISTEN  0       128            0.0.0.0:22           0.0.0.0:*      users:(("sshd",pid=625,fd=3))
+LISTEN  0       128               [::]:22              [::]:*      users:(("sshd",pid=625,fd=4))
+LISTEN  0       511                  *:80                *:*      users:(("apache2",pid=106630,fd=4),("apache2",pid=106629,fd=4),("apache2",pid=106627,fd=4))
+
+ +

The network notation can be a little confusing, but the lines above show ports 80 and 22 open “to the world” on all local IP addresses - and port 53 (DNS) open only on a special local address.

+

Now install nmap with apt install. This works rather differently, actively probing 1,000 or more ports to check whether they’re open. It’s most famously used to scan remote machines - please don’t - but it’s also very handy to check your own configuration, by scanning your server:

+
$ nmap localhost
+
+Starting Nmap 5.21 ( http://nmap.org ) at 2013-03-17 02:18 UTC
+Nmap scan report for localhost (127.0.0.1)
+Host is up (0.00042s latency).
+Not shown: 998 closed ports
+PORT   STATE SERVICE
+22/tcp open  ssh
+80/tcp open  http
+
+Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds
+
+ +

Port 22 is providing the ssh service, which is how you’re connected, so that will be open. If you have Apache running then port 80/http will also be open. Every open port is an increase in the “attack surface”, so it’s Best Practice to shut down services that you don’t need.

+

Note, however, that “localhost” (127.0.0.1) is the loopback network device. Services “bound” only to this will only be available on this local machine. To see what’s actually exposed to others, first use the ip a command to find the IP address of your actual network card, and then nmap that.
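For example (the address below is just a placeholder from the documentation range - substitute your server’s own address as shown by ip a):

ip a                 # find the address on your real interface, e.g. eth0 or ens5
nmap 203.0.113.10    # replace with your own server's IP address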

+

Host firewall

+

The Linux kernel has built-in firewall functionality called “netfilter”. We configure and query this via various utilities, the most low-level of which are the iptables command, and the newer nftables. These are powerful, but also complex - so we’ll use a more friendly alternative - ufw - the “uncomplicated firewall”.

+

First let’s list what rules are in place by typing sudo iptables -L

+

You will see something like this:

+
Chain INPUT (policy ACCEPT)
+target  prot opt source             destination
+
+Chain FORWARD (policy ACCEPT)
+target  prot opt source             destination
+
+Chain OUTPUT (policy ACCEPT)
+target  prot opt source             destination
+
+ +

So, essentially no firewalling - any traffic is accepted to anywhere.

+

Using ufw is very simple. It is available by default in all Ubuntu installations after 8.04 LTS, but if you need to install it:

+
sudo apt install ufw
+
+ +

Then, to allow SSH, but disallow HTTP we would type:

+
sudo ufw allow ssh
+sudo ufw deny http
+
+ +

BEWARE! Don’t forget to explicitly ALLOW ssh, or you’ll lose all contact with your server! If not allowed, the firewall assumes the port is DENIED by default.

+

And then enable this with:

+
sudo ufw enable
+
+ +

Typing sudo iptables -L now will list the detailed rules generated by this - one of these should now be:

+
“DROP       tcp  --  anywhere             anywhere             tcp dpt:http”
+
+ +

The effect of this is that although your server is still running Apache, it’s no longer accessible from the “outside” - all incoming traffic to the destination port of http/80 being DROPed. Test for yourself! You will probably want to reverse this with:

+
sudo ufw allow http
+sudo ufw enable
+
+ +

In practice, ensuring that you’re not running unnecessary services is often enough protection, and a host-based firewall is unnecessary, but this very much depends on the type of server you are configuring. Regardless, hopefully this session has given you some insight into the concepts.

+

BTW: For this test/learning server you should allow http/80 access again now, because those access.log files will give you a real feel for what it’s like to run a server in a hostile world.

+

Using non-standard ports

+

Occasionally it may be reasonable to re-configure a service so that it’s provided on a non-standard port - this is particularly common advice for ssh/22 - and would be done by altering the configuration in /etc/ssh/sshd_config.
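If you ever do decide to try this (it isn’t needed for this course), the change is roughly as sketched below - but open the new port in your firewall/security group first, and keep your existing session connected until you’ve confirmed you can log in on the new port:

sudo vim /etc/ssh/sshd_config         # change the line "#Port 22" to e.g. "Port 2222"
sudo systemctl restart ssh            # on Ubuntu the service is called "ssh"
                                      # (very recent releases use socket activation and may need an extra step)
ssh -p 2222 yourusername@yourserver   # then test from your workstation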

+

Some call this “security by obscurity” - equivalent to moving the keyhole on your front door to an unusual place rather than improving the lock itself, or camouflaging your tank rather than improving its armour - but it does effectively eliminate attacks by opportunistic hackers, which is the main threat for most servers.

+

But, if you’re going to do it, remember all the rules and security tools you already have in place. If you are using AWS, for example, and change the SSH port to 2222, you will need to open that port in the EC2 security group for your instance.

+

EXTENSION

+

Even after denying access, it might be useful to know who’s been trying to gain entry. Check out these discussions of logging and more complex setups:

+ +

RESOURCES

+ +

TROUBLESHOOT AND MAKE A SAD SERVER HAPPY!

+

Practice what you’ve learned with some challenges at SadServers.com:

+ +

Some rights reserved. Check the license terms +here


Day 10 - Scheduling tasks

+ +

Introduction

+

Linux has a rich set of features for running scheduled tasks. One of the key +attributes of a good sysadmin is getting the computer to do your work for you +(sometimes misrepresented as laziness!) - and a well configured set of +scheduled tasks is key to keeping your server running well.

+

The time-based job scheduler cron(8) is the one most commonly used by Linux sysadmins. It’s been around more or less in its current form since Unix System V and uses a standardized syntax that’s in widespread use.

+

Using at to schedule oneshot tasks

+

If you’re on Ubuntu, you will likely need to install the at package first.

+
sudo apt update
+sudo apt install at
+
+

We’ll use the at command to schedule a one-time task to be run at some point in the future.

+

Next, let’s print the filename of the terminal connected to standard input (in +Linux everything is a file, including your terminal!). We’re going to echo +something to our terminal at some point in the future to get an idea +of how scheduling future tasks with at works.

+
vagrant@ubuntu2204:~$ tty
+/dev/pts/0
+
+

Now we’ll schedule a command to echo a greeting to our terminal 1 minute in the +future.

+
vagrant@ubuntu2204:~$ echo 'echo "Greetings $USER!" > /dev/pts/0' | at now + 1 minutes
+warning: commands will be executed using /bin/sh
+job 2 at Sun May 26 06:30:00 2024
+
+

After several seconds, a greeting should be printed to our terminal.

+
...
+vagrant@ubuntu2204:~$ Greetings vagrant!
+
+

It’s not as common for this to be used to schedule one time tasks, but if you +ever needed to, now you have an idea of how this might work. In the next section +we’ll learn about scheduling time-based tasks using cron and crontab.

+

For a more in-depth exploration of scheduling things with at review the +relevant articles in the further reading section below.

+

Using crontab to schedule jobs

+

In Linux we use the crontab command to interact with tasks scheduled with +the cron daemon. Each user, including the root user, can schedule jobs that run +as their user.

+

Display your user’s crontab with crontab -l.

+
vagrant@ubuntu2204:~$ crontab -l
+no crontab for vagrant
+
+

Unless you’ve already created a crontab for your user, you probably won’t have +one yet. Let’s create a simple cronjob to understand how it works.

+

Using the crontab -e command, let’s create our first cronjob. On Ubuntu, if this is your first time editing a crontab you will be greeted with a menu to choose your preferred editor.

+
vagrant@ubuntu2204:~$ crontab -e
+no crontab for vagrant - using an empty one
+
+Select an editor.  To change later, run 'select-editor'.
+  1. /bin/nano        <---- easiest
+  2. /usr/bin/vim.basic
+  3. /usr/bin/vim.tiny
+  4. /bin/ed
+
+Choose 1-4 [1]: 2
+
+

Choose whatever your preferred editor is then press Enter.

+

At the bottom of the file add the following cronjob and then save and quit the +file.

+
* * * * * echo "Hello world!" > /dev/pts/0
+
+
+

NOTE: Make sure that the /dev/pts/0 file path matches whatever was printed +by your tty command above.

+
+

Next, let’s take a look at the crontab we just installed by running crontab -l +again. You should see the cronjob you created printed to your terminal.

+
vagrant@ubuntu2204:~$ crontab -l
+* * * * * echo "Hello world!" > /dev/pts/0
+
+

This cronjob will print the string Hello world! to your terminal every minute until we remove or update the cronjob. Wait a few minutes and see what it does.

+
vagrant@ubuntu2204:~$ Hello world!
+Hello world!
+Hello world!
+...
+
+

When you’re ready uninstall the crontab you created with crontab -r.

+

Understanding crontab syntax

+

The basic crontab syntax is as follows:

+
* * * * * command to be executed
+- - - - -
+| | | | |
+| | | | ----- Day of week (0 - 7) (Sunday=0 or 7)
+| | | ------- Month (1 - 12)
+| | --------- Day of month (1 - 31)
+| ----------- Hour (0 - 23)
+------------- Minute (0 - 59)
+
+
    +
  • Minute values can be from 0 to 59.
  • +
  • Hour values can be from 0 to 23.
  • +
  • Day of month values can be from 1 to 31.
  • +
  • Month values can be from 1 to 12.
  • +
  • Day of week values can be from 0 to 6, with 0 denoting Sunday.
  • +
+

There are different operators that can be used as a short-hand to specify multiple +values in each field:

+ + + + + + + + + + + + + + + + + + + + + + + + + +
SymbolDescription
*Wildcard, specifies every possible time interval
,List multiple values separated by a comma.
-Specify a range between two numbers, separated by a hyphen
/Specify a periodicity/frequency using a slash
+
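A few illustrative schedules using these operators (the command here is just a placeholder):

*/5 * * * *   /path/to/some-command    # every 5 minutes
0 2 * * 1-5   /path/to/some-command    # at 02:00, Monday to Friday
30 6 1,15 * * /path/to/some-command    # at 06:30 on the 1st and 15th of each month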

There’s also a helpful site to check cron schedule expressions at crontab.guru.

+

Use the crontab.guru site to play around with the different expressions to get +an idea of how it works or click the random button to generate an expression +at random.

+

Your Tasks Today

+
    +
  1. Schedule daily backups of user’s home directories
  2. +
  3. Schedule a task that looks for any backups that are more than 7 days old and + deletes them
  4. +
+

Automating common system administration tasks

+

One common use-case that cronjobs are used for is scheduling backups of various things. As the root user, we’re going to create a cronjob that creates a compressed archive of all of the users’ home directories using the tar utility. Tar is short for “tape archive” and harkens back to earlier days of Unix and Linux when data was commonly archived on tape storage similar to cassette tapes.

+

As a general rule, it’s good to test your command or script before installing it +as a cronjob. First we’ll create a backup of /home by manually running a +version of our tar command.

+
vagrant@ubuntu2204:~$ sudo tar -czvf /var/backups/home.tar.gz /home/
+tar: Removing leading `/' from member names
+/home/
+/home/ubuntu/
+/home/ubuntu/.profile
+/home/ubuntu/.bash_logout
+/home/ubuntu/.bashrc
+/home/ubuntu/.ssh/
+/home/ubuntu/.ssh/authorized_keys
+...
+
+
+

NOTE: We’re passing the -v verbose flag to tar so that we can see better what it’s doing. The -c, -z, and -f flags stand for “create”, “gzip compress”, and “file”, in that order. See man tar for further details.

+
+

Let’s also use the date command to allow us to insert the date of the backup into the filename. Since we’ll be taking daily backups, after this cronjob has run for a few days we will have a few days’ worth of backups, each with its own archive tagged with the date.

+
vagrant@ubuntu2204:~$ date
+Sun May 26 04:12:13 UTC 2024
+
+

The default string printed by the date command isn’t that useful. Let’s output +the date in ISO 8601 format, sometimes referred to as the “ISO date”.

+
vagrant@ubuntu2204:~$ date -I
+2024-05-26
+
+

This is a more useful string that we can combine with our tar command to +create an archive with today’s date in it.

+
vagrant@ubuntu2204:~$ sudo tar -czvf /var/backups/home.$(date -I).tar.gz /home/
+tar: Removing leading `/' from member names
+/home/
+/home/ubuntu/
+...
+
+

Let’s look at the backups we’ve created to understand how this date command is +being inserted into our filename.

+
vagrant@ubuntu2204:~$ ls -l /var/backups
+total 16
+-rw-r--r-- 1 root root 8205 May 26 04:16 home.2024-05-26.tar.gz
+-rw-r--r-- 1 root root 3873 May 26 04:07 home.tar.gz
+
+
+

NOTE: These .tar.gz files are often called tarballs +by sysadmins.

+
+

Create and edit a crontab for root with sudo crontab -e and add the following +cronjob.

+
0 5 * * * tar -zcf /var/backups/home.$(date -I).tar.gz /home/
+
+

This cronjob will run every day at 05:00. After a few days there will be several backups of users’ home directories in /var/backups.

+

If we were to let this cronjob run indefinitely, after a while we would end up +with a lot of backups in /var/backups. Over time this will cause the disk +space being used to grow and could fill our disk. It’s probably best +that we don’t let that happen. To mitigate this risk, we’ll setup another +cronjob that runs everyday and cleans up old backups that we don’t need to store.

+

The find command is like a swiss army knife for finding files based on all +kinds of criteria and listing them or doing other things to them, such as +deleting them. We’re going to craft a find command that finds all of the +backups we created and deletes any that are older than 7 days.

+

First let’s get an idea of how the find command works by finding all of our +backups and listing them.

+
vagrant@ubuntu2204:~$ sudo find /var/backups -name "home.*.tar.gz"
+/var/backups/home.2024-05-26.tar.gz
+...
+
+

What this command is doing is looking for all of the files in /var/backups +that start with home. and end with .tar.gz. The * is a wildcard character +that matches any string.

+

In our case we need to create a scheduled task that will find all of the files +older than 7 days in /var/backups and delete them. Run sudo crontab -e and +install the following cronjob.

+
30 5 * * * find /var/backups -name "home.*.tar.gz" -mtime +7 -delete
+
+
+

NOTE: The -mtime flag is short for “modified time” and in our case find is +looking for files that were modified more than 7 days ago, that’s what the ++7 indicates. The find command will be covered in greater detail on +Day 11 - Finding things….

+
+

By now, our crontab should look something like this:

+
vagrant@ubuntu2204:~$ sudo crontab -l
+# Daily user dirs backup
+0 5 * * * tar -zcf /var/backups/home.$(date -I).tar.gz /home/
+# Retain 7 days of homedir backups
+30 5 * * * find /var/backups -name "home.*.tar.gz" -mtime +7 -delete
+
+

Setting up cronjobs using the find ... -delete syntax is fairly idiomatic of +scheduled tasks a system administrator might use to manage files and remove +old files that are no longer needed to prevent disks from getting full. It’s not +uncommon to see more sophisticated cron scripts that use a combination of tools +like tar, find, and rsync to manage backups incrementally or on a schedule +and implement a more sophisticated retention policy based on real-world use-cases.

+

System crontab

+

There’s also a system-wide crontab defined in /etc/crontab. Let’s take a look +at this file.

+
vagrant@ubuntu2204:~$ cat /etc/crontab
+# /etc/crontab: system-wide crontab
+# Unlike any other crontab you don't have to run the `crontab'
+# command to install the new version when you edit this file
+# and files in /etc/cron.d. These files also have username fields,
+# that none of the other crontabs do.
+
+SHELL=/bin/sh
+# You can also override PATH, but by default, newer versions inherit it from the environment
+#PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
+
+# Example of job definition:
+# .---------------- minute (0 - 59)
+# |  .------------- hour (0 - 23)
+# |  |  .---------- day of month (1 - 31)
+# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
+# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
+# |  |  |  |  |
+# *  *  *  *  * user-name command to be executed
+17  *  *  *  *  root      cd / && run-parts --report /etc/cron.hourly
+25  6  *  *  *  root      test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
+47  6  *  *  7  root      test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
+52  6  1  *  *  root      test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
+
+

By now the basic syntax should be familiar to you, but you’ll notice an extra +field user-name. This specifies the user that runs the task and is unique to +the system crontab at /etc/crontab.

+

It's not common for system administrators to use /etc/crontab anymore; instead, each user is encouraged to install their own crontab, even for the root user. User crontabs are all located under /var/spool/cron. The exact subdirectory tends to vary depending on the distribution.

+
vagrant@ubuntu2204:~$ sudo ls -l /var/spool/cron/crontabs
+total 8
+-rw------- 1 root    crontab  392 May 26 04:45 root
+-rw------- 1 vagrant crontab 1108 May 26 05:45 vagrant
+
+

Each user has their own crontab with their user as the filename.

+

Note that the system crontab shown above also manages cronjobs that run daily, +weekly, and monthly as scripts in the /etc/cron.* directories. Let’s look +at an example.

+
vagrant@ubuntu2204:~$ ls -l /etc/cron.daily
+total 20
+-rwxr-xr-x 1 root root  376 Nov 11  2019 apport
+-rwxr-xr-x 1 root root 1478 Apr  8  2022 apt-compat
+-rwxr-xr-x 1 root root  123 Dec  5  2021 dpkg
+-rwxr-xr-x 1 root root  377 Jan 24  2022 logrotate
+-rwxr-xr-x 1 root root 1330 Mar 17  2022 man-db
+
+

Each of these files is a script or a shortcut to a script to do some regular +task and they’re run in alphabetic order by run-parts. So in this case +apport will run first. Use less or cat to view some of the scripts on your +system - many will look very complex and are best left well alone, but others +may be just a few lines of simple commands.

+
vagrant@ubuntu2204:~$ cat /etc/cron.daily/dpkg 
+#!/bin/sh
+
+# Skip if systemd is running.
+if [ -d /run/systemd/system ]; then
+  exit 0
+fi
+
+/usr/libexec/dpkg/dpkg-db-backup
+
+

As an alternative to scheduling jobs with crontab, you may also create a script and put it into one of the /etc/cron.{daily,weekly,monthly} directories, and it will get run at the desired interval.
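For example, the backup above could be installed as a daily script instead of a crontab entry. Note that run-parts ignores filenames containing dots, so don't add a .sh extension (the filename here is just an example):

sudo tee /etc/cron.daily/homedir-backup >/dev/null <<'EOF'
#!/bin/sh
# Daily tarball of /home, run by /etc/crontab via run-parts
tar -zcf /var/backups/home.$(date -I).tar.gz /home/
EOF
sudo chmod +x /etc/cron.daily/homedir-backup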

+

A note about systemd timers

+

All major Linux distributions now include “systemd”. As well as starting and +stopping services, this can also be used to run tasks at specific times via +“timers”. See which ones are already configured on your server with:

+
systemctl list-timers
+
+

Use the links in the further reading section to read up about how these timers work.
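As a quick taster, you can also inspect how an individual timer is defined - apt-daily.timer exists on most Ubuntu systems, but treat the name as an example:

systemctl list-timers --all      # include inactive timers as well
systemctl cat apt-daily.timer    # show the unit file, including its OnCalendar= schedule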

+

Further reading

+ +

License

+

Some rights reserved. Check the license terms +here

diff --git a/11/index.html b/11/index.html
new file mode 100644
index 0000000..7cf1b95

Day 11 - Finding things…

+ +

INTRO

+

Today we’ll look at how you find files, and text inside these files, quickly and efficiently.

+

It can be very frustrating to know that a file or setting exists, but not be able to track it down! Master today’s commands and you’ll be much more confident as you administer your systems.

+

Today you’ll look at some useful tools:

+
    +
  • locate
  • +
  • find
  • +
  • grep
  • +
  • which
  • +
+

YOUR TASKS TODAY

+
    +
  • Find all files that have the word “Permission” in it
  • +
+

INSTRUCTIONS

+

locate

+

If you’re looking for a file called access.log then the quickest approach is to use “locate” like this:

+
$ locate access.log
+/var/log/apache2/access.log
+/var/log/apache2/access.log.1
+/var/log/apache2/access.log.2.gz
+
+ +

(If locate is not installed, install it with: sudo apt install mlocate)

+

As you can see, by default it treats a search for “something” as a search for “*something*“. It’s very fast because it searches an index, but if this index is out of date or missing it may not give you the answer you’re looking for. This is because the index is created by the updatedb command - typically run only nightly by cron. It may therefore be out of date for recently added files, so it can be worthwhile updating the index by manually running: sudo updatedb.

+

find

+

The find command searches down through a directory structure looking for files which match some criteria - which could be name, but also size, or when last updated etc. Try these examples:

+
find /var -name access.log
+find /home -mtime -3
+
+ +

The first searches for files with the name “access.log”, the second for any file under /home with a last-modified date in the last 3 days.

+

These will take longer than locate did because they search through the filesystem directly rather than from an index. Also, because find runs with the permissions of the logged-in user, you'll get "permission denied" messages for many directories if you search the whole system. Starting the command with sudo will of course run it as root - or you could filter out the errors with grep like this: find /var -name access.log 2>&1 | grep -vi "Permission denied".

+

These examples are just the tip of a very large iceberg, check the articles in the RESOURCES section and work through as many examples as you can - time spent getting really comfortable with find is not wasted.
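A few more variations to experiment with while you read - all of these are safe to run as-is:

find /var/log -name "*.gz" -size +1M      # compressed logs larger than 1 MiB
find /etc -type d -name "cron*"           # directories whose names start with "cron"
sudo find /home -type f -mtime -1 -ls     # files modified in the last day, long listing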

+

grep -R

+

Rather than asking “grep” to search for text within a specific file, you can give it a whole directory structure, and ask it to recursively search down through it, including following all symbolic links (which -r does not). +This trick is particularly handy when you “just know” that an item appears “somewhere” - but are not sure where.

+

As an example, you know that “PermitRootLogin” is an ssh parameter in a config file somewhere under /etc, but can’t recall exactly where it is kept:

+

grep -R -i "PermitRootLogin" /etc/*

+

Because this only works on plain text files, it's most useful for the /etc and /var/log folders. (Notice the -i, which makes the search "case insensitive", finding the setting even if it's been entered as "Permitrootlogin".)

+

You may now have logs like /var/log/access.log.2.gz - these are older logs that have been compressed to save disk space - so you can't read them with less, or search them with grep. However, zless and zgrep do work, on compressed as well as ordinary files.

+

which

+

It’s sometimes useful to know where a command is being run from. If you type nano, and it starts, where is the nano binary coming from? The general rule is that the system will search through the locations setup in your “path”. To see this type:

+

echo $PATH

+

To see where nano comes from, type:

+

which nano

+

Try this for grep, vi, service and reboot. You'll notice that they're typically in subfolders named bin, but that there are several different ones.

+

EXTENSION

+

The -exec feature of the find command is extremely powerful.

+

But “finding things” can go so much further than that! You can not only track down the content of a file, but also its usage with commands like lsof and fuser.
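For a taste of both, here are some illustrative one-liners - lsof and fuser may need installing first (sudo apt install lsof psmisc), and the paths are just examples:

find /var/log -name "*.log" -exec ls -lh {} \;    # run a command against every match
sudo lsof /var/log/syslog                         # which processes have this file open?
sudo fuser -v /var/log/syslog                     # similar question, different output format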

+

Test some examples of this from the RESOURCES links.

+

RESOURCES

+ +

TROUBLESHOOT AND MAKE A SAD SERVER HAPPY!

+

Practice what you’ve learned with some challenges at SadServers.com:

+ +

Some rights reserved. Check the license terms +here

diff --git a/12/index.html b/12/index.html
new file mode 100644
index 0000000..3a0d7d5

Day 12 - Transferring files

+ +

INTRO

+

You’ve now had a working Internet server of your own for some time, and seen how you can create and edit small files there. You’ve created a web server where you’ve been able to edit a simple web page.

+

Today we’ll be looking at how you can move files between your other systems and this server - tasks like:

+
    +
  • Taking a copy of some files from your server onto your desktop machine
  • +
  • Copying up some text to your server to put on your webpage
  • +
  • Uploading some photos and logos for your webpage
  • +
+

YOUR TASKS TODAY

+
    +
  • Upload a file to the server
  • +
  • Download a file from the server
  • +
  • Synchronize a backup
  • +
+

PROTOCOLS

+

There are a wide range of ways a Linux server can share files, including:

+
    +
  • SMB: Microsoft’s file sharing, useful on a local network of Windows machines
  • +
  • AFP: Apple’s file sharing, useful on a local network of Apple machines
  • +
  • WebDAV: Sharing over web (http) protocols
  • +
  • FTP: Traditional Internet sharing protocol
  • +
  • scp: Simple support for copying files
  • +
  • rsync: Fast, very efficient file copying
  • +
  • SFTP: file access and copying over the SSH protocol (Despite the name, the SFTP protocol at a technical level is completely unrelated to traditional FTP)
  • +
+

Each of these have their place, but for copying files back and forth from your local desktop to your server, SFTP has a number of key advantages:

+
    +
  • No extra setup is required on your server
  • +
  • Top quality security
  • +
  • Allows browsing through the directory structure
  • +
  • You can create and delete folders
  • +
+

If you’re successfully logging in via ssh from your home, work or a cybercafe then you’ll also be able to use SFTP from this same location because the same underlying protocol is being used.

+

By contrast, setting up your server for any of the other protocols will require extra work. Not only that, enabling extra protocols also increases the “attack surface” - and there’s always a chance that you’ll mis-configure something in a way that allows an attacker in. It’s also very likely that restrictive firewall policies at a workplace will interfere with or block these protocols. Finally, while old-style FTP is still very commonly used, it sends login credentials “in clear”, so that your flatmates, cafe buddies or employer may be able to grab them off the network by “packet sniffing”. Not a big issue with your “classroom” server - but it’s an unacceptable risk if you’re remotely administering production servers.

+

SFTP client software

+

What's required to use SFTP is some client software. A command-line client (unsurprisingly called sftp) comes standard on every Apple OSX or Linux system. If you're using a Linux desktop, you also have a built-in GUI client via your file manager. This will allow you to easily attach to remote servers via SFTP. (For the Nautilus file manager, for example, press ctrl + L to bring up the "location window" and type: sftp://username@myserver-address).
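If you'd like to try the command-line client first, a short session looks something like this - the address is an example, so substitute your own username and server IP:

sftp ubuntu@203.0.113.10       # connect (same credentials/keys as ssh)
sftp> put logo.png             # upload a file from your local machine
sftp> get /var/log/syslog .    # download a remote file to the current local folder
sftp> mkdir images             # create a remote folder
sftp> bye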

+

Although Windows and Apple macOS have no built-in GUI client there are a wide range of third-party options available, both free and commercial. If you don’t already have such a client installed, then choose one such as:

+
    +
  • WinSCP or FileZilla - for Windows users
  • +
  • CyberDuck or FileZilla - for macOS users
  • +
+

Download locations are under the RESOURCES section.

+

Configuring and using your choice of these should be straightforward. The only real potential for confusion is that these clients generally support a wide range of protocols such as scp and FTP that we’re not going to use. When you’re asked for SERVER, give your server’s IP address, PORT will be 22, and PROTOCOL will be SFTP or SSH.

+

INSTRUCTIONS

+
    +
  • Configure your chosen SFTP client to login to your server as your username
  • +
  • Copy some files from your server down to your local desktop (try files from your “home” folder, and from /var/log)
  • +
  • Create an “images” folder under your “home” folder on the server, and upload some images to it from your desktop machine
  • +
  • Go up to the root directory. You should see /etc, /bin and other folders. Try to create an "images" folder here too - this should fail because you are logged in as an ordinary user, so you won't have permission to create new files or folders. In your own "home" directory you of course have full permission.
  • +
+

Once the files are uploaded you can login via ssh and use sudo to give yourself the necessary power to move files about.
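For scripted or repeated transfers, scp and rsync (both listed above) also run over the same SSH connection - hypothetical host and paths, and rsync must be installed at both ends:

scp report.txt ubuntu@203.0.113.10:/home/ubuntu/                # one-off copy up to the server
rsync -av --delete ~/website/ ubuntu@203.0.113.10:~/website/    # keep a whole folder in sync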

+

RESOURCES

+ +

Some rights reserved. Check the license terms +here

diff --git a/13/index.html b/13/index.html
new file mode 100644
index 0000000..5cfc4f8

Day 13 - Users and Groups

+ +

INTRO

+

Today you’re going to set-up another user on your system. You’re going to imagine that this is a help-desk person that you trust to do just a few simple tasks:

+
    +
  • check that the system is running
  • +
  • check disk space with: df -h
  • +
+

…but you also want them to be able to reboot the system, because you believe that “turning it off and on again” resolves most problems :-)

+

You'll be covering several new areas, so have fun!

+

YOUR TASKS TODAY

+
    +
  • Create a new user
  • +
  • Create a new group
  • +
  • Create a new user and add to an existing group
  • +
  • Make a new user a sudoer
  • +
+

Follow this demo

+

ADDING A NEW USER

+

Choose a name for your new user - we’ll use “helen” in the examples, so to add this new user:

+

sudo adduser helen

+

(Names are case-sensitive in Linux, so “Helen” would be a completely different user)

+

The “adduser” command works very slightly differently in each distro - if it didn’t ask you for a password for your new user, then set it manually now by:

+

sudo passwd helen

+

You will now have a new entry in the simple text database of users: /etc/passwd (check it out with: less), and a group of the same name in the file: /etc/group. A hash of the password for the user is in: /etc/shadow (you can read this too if you use “sudo” - check the permissions to see how they’re set. For obvious reasons it’s not readable to just everyone).

+

If you’re used to other operating systems it may be hard to believe, but these simple text files are the whole Linux user database and you could even create your users and groups by directly editing these files - although this isn’t normally recommended.

+

Additionally, adduser will have created a home directory, /home/helen for example, with the correct permissions.
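You can confirm what adduser actually did by querying those files - helen being the example name used above:

grep helen /etc/passwd /etc/group    # the new user, and the matching group
sudo grep helen /etc/shadow          # the password hash - root only
ls -ld /home/helen                   # the new home directory, owned by helen
getent passwd helen                  # the same record via the standard lookup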

+

ATTENTION! useradd is not the same as adduser. They both create a new user, but they interact very differently. Check the link in the EXTENSION section to see those differences.

+

ADDING A NEW GROUP

+

Let's say we want all of the developers in our organization to have their own group, so they can have access to the same things.

+

sudo groupadd developers

+

On most modern Linux systems there is a group created for each user, so user "ubuntu" is a member of the group "ubuntu". But if you want, you can create a new user directly into an existing group, using the --ingroup flag. So a new user fred would be created like this:

+

sudo adduser --ingroup developers fred

+

ADDING A USER TO GROUPS

+

Users can also be part of more than one group, and groups can be added as required.

+

To see what groups you’re a member of, simply type: groups

+

On an Ubuntu system the first user created (in your case ubuntu) should be a member of the groups: ubuntu, adm and sudo - and if you list the /var/log folder you'll see that your membership of the adm group is why you can use less to read and view the contents of /var/log/auth.log

+

The “root” user can add a user to an existing group with the command:

+

usermod -a -G group user

+

so your ubuntu user can do the same simply by prefixing the command with sudo.

+

Because the new user helen is not the first user created in the system, they don’t have the power to run sudo - which your user has by being a member of the group sudo.

+

So, to check which groups helen is a member of, you can “become helen” by switching users like this:

+

sudo su helen

+

Then:

+

groups

+

If you try to do something only a sudo user can do, e.g. read the contents of /var/log/auth.log, even prefixing the command with sudo won't work. Helen is not a sudoer and has no permission to perform this action.

+

Now type “exit” to return to your normal user, and you can add helen to this group with:

+

sudo usermod -a -G sudo helen

+

Instead of switching users again, simply run groups helen to check. Try that with fred too and check how everything works.

+

See if any of your new users can sudo reboot.

+

CLEVER SUDO TRICKS

+

Your new user is just an ordinary user and so can’t use sudo to run commands with elevated privileges - until we set them up. We could simply add them to a group that’s pre-defined to be able to use sudo to do anything as root (like we did with helen) - but we don’t want to give fred quite that same amount of power.

+

Use ls -l to look at the permissions for the file: /etc/sudoers. This is where the magic is defined, and you'll see that it's tightly controlled, but you should be able to view it with: sudo less /etc/sudoers. You want to add a new entry in there for your new user, and for this you need to run a special utility: visudo

+

To run this, you can temporarily “become root” by running:

+

sudo -i

+

Notice that your prompt has changed to a #

+

Now simply run visudo to begin editing /etc/sudoers - typically this will use nano.

+

All lines in /etc/sudoers beginning with “#” are optional comments. You’ll want to add some lines like this:

+
# Allow user "fred" to run "sudo reboot"
+# ...and don't prompt for a password
+#
+fred ALL = NOPASSWD:/sbin/reboot
+
+ +

You can add these lines wherever seems reasonable. The visudo command will automatically check your syntax, and won't allow you to save if there are mistakes - because a corrupt sudoers file could lock you out of your server!
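A handy way to check the result without switching users is to ask sudo itself what fred is now allowed to do:

sudo -l -U fred    # list fred's sudo privileges - should show the NOPASSWD reboot entry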

+

Type exit to remove your magic hat and become your normal user again - and notice that your prompt reverts to: $

+

TESTING

+

Test by logging in as your test user and typing: sudo reboot +Note that you can “become” helen by:

+

sudo su helen

+

If your ssh config allows login only with public keys, you’ll need to setup /home/helen/.ssh/authorized_keys - including getting the owner and permissions correct. A little challenge of your understanding of this area!

+

EXTENSION

+

If you find this all pretty familiar, then you might like to check and update your knowledge on a couple of related areas:

+ +

RESOURCES

+ +

Some rights reserved. Check the license terms +here

diff --git a/14/index.html b/14/index.html
new file mode 100644
index 0000000..0fb089e

Day 14 - Who has permission?

+ +

INTRO

+

Files on a Linux system always have associated “permissions” - controlling who has access and what sort of access. You’ll have bumped into this in various ways already - as an example, yesterday while logged in as your “ordinary” user, you could not upload files directly into /var/www or create a new folder at /.

+

The Linux permission system is quite simple, but it does have some quirky and subtle aspects, so today is simply an introduction to some of the basic concepts.

+

This time you really do need to work your way through the material in the RESOURCES section!

+

YOUR TASKS TODAY

+
    +
  • Change the ownership of a file to root
  • +
  • Change file permissions
  • +
+

OWNERSHIP

+

First let’s look at “ownership”. All files are tagged with both the name of the user and the group that owns them, so if we type ls -l and see a file listing like this:

+
-rw-------  1 steve  staff      4478979  6 Feb  2011 private.txt
+-rw-rw-r--  1 steve  staff      4478979  6 Feb  2011 press.txt
+-rwxr-xr-x  1 steve  staff      4478979  6 Feb  2011 upload.bin
+
+ +

Then these files are owned by user “steve”, and the group “staff”. Anyone that is not “steve” or is not part of the group “staff” is considered “other”. Others may still have permissions to handle these files, but they do not have any ownership.

+

If you want to change the ownership of a file, use the chown utility. This will change the user owner of file to a new user:

+

sudo chown user file

+

You can also change user and group at the same time:

+

sudo chown user:group file

+

If you only need to change the group owner, you can use chgrp command instead:

+

sudo chgrp group file

+

Since you created new users in the previous lesson, switch logins and create a few files to their home directories for testing. See how they show with ls -l
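A small worked example, assuming the helen user and developers group from the previous lesson exist:

touch shared-note.txt
ls -l shared-note.txt                          # owned by your user and your group
sudo chown helen shared-note.txt               # change just the user owner
sudo chown helen:developers shared-note.txt    # change user and group in one go
sudo chgrp developers shared-note.txt          # change only the group
ls -l shared-note.txt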

+

PERMISSIONS (SYMBOLIC NOTATION)

+

Look at the -rw-r--r-- at the start of a directory listing line (ignore the first "-" for now), and see it as three groups of "rwx": the permissions granted to the "user" who owns the file, to the "group", and to "other people" - we like to call that UGO.

+

For the example list above:

+
    +
  • private.txt - Steve has rw (ie Read and Write) permission, but neither the group “staff” nor “other people” have any permission at all
  • +
  • press.txt - Steve can Read and Write to this file too, but so can any member of the group “staff” and anyone, i.e. “other people”, can read it
  • +
  • upload.bin - Steve has rwx, he can read, write and execute - i.e. run this program - but the group and others can only read and execute it
  • +
+

You can change the permissions on any file with the chmod utility. Create a simple text file in your home directory with vim (e.g. tuesday.txt) and check that you can list its contents by typing: cat tuesday.txt or less tuesday.txt.

+

Now look at its permissions by doing: ls -ltr tuesday.txt

+
-rw-rw-r-- 1 ubuntu ubuntu   12 Nov 19 14:48 tuesday.txt
+
+ +

So, the file is owned by the user “ubuntu”, and group “ubuntu”, who are the only ones that can write to the file - but any other user can only read it.

+

CHANGING PERMISSIONS

+

Now let's remove the write permission from both the owning user and the "ubuntu" group on this file:

+

chmod u-w tuesday.txt

+

chmod g-w tuesday.txt

+

…and remove the permission for “others” to read the file:

+

chmod o-r tuesday.txt

+

Do a listing to check the result:

+
-r--r----- 1 ubuntu ubuntu   12 Nov 19 14:48 tuesday.txt
+
+ +

…and confirm by trying to edit the file with nano or vim. You'll find that you appear to be able to edit it - but can't save any changes. (In this case, as the owner, you have "permission to override permissions", so you can still write with :w!). You can of course easily give yourself back the permission to write to the file by:

+

chmod u+w tuesday.txt
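Incidentally, chmod also accepts several comma-separated changes in one command, so the three removal steps earlier could have been a single line:

chmod u-w,g-w,o-r tuesday.txt    # several symbolic changes at once
ls -l tuesday.txt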

+

POSTING YOUR PROGRESS

+

Just for fun, create a file: secret.txt in your home folder, take away all permissions from it for the user, group and others - and see what happens when you try to edit it with vim.

+

EXTENSION

+

If all of this is old news to you, you may want to look into Linux ACLs:

+ +

Also, SELinux and AppArmour:

+ +

RESOURCES

+ +

Some rights reserved. Check the license terms +here

diff --git a/15/index.html b/15/index.html
new file mode 100644
index 0000000..d5ee34a

Day 15 - Deeper into repositories…

+ +

INTRO

+

Early on you installed some software packages to your server using apt install. That was fairly painless, and we explained how the Linux model of software installation is very similar to how “app stores” work on Android, iPhone, and increasingly in MacOS and Windows.

+

Today however, you’ll be looking “under the covers” to see how this works; better understand the advantages (and disadvantages!) - and to see how you can safely extend the system beyond the main official sources.

+

YOUR TASKS TODAY

+
    +
  • Add a new repo
  • +
  • Remove a repo
  • +
  • Find out where to get a program from (apt-search)
  • +
  • Install a program without apt
  • +
+

REPOSITORIES AND VERSIONS

+

Any particular Linux installation has a number of important characteristics:

+
    +
  • Version - e.g. Ubuntu 20.04, CentOS 5, RHEL 6
  • +
  • “Bit size” - 32-bit or 64-bit
  • +
  • Chip - Intel, AMD, PowerPC, ARM
  • +
+

The version number is particularly important because it controls the versions of application that you can install. When Ubuntu 18.04 was released (in April 2018 - hence the version number!), it came out with Apache 2.4.29. So, if your server runs 18.04, then even if you installed Apache with apt five years later that is still the version you would receive. This provides stability, but at an obvious cost for web designers who hanker after some feature which later versions provide. (Security patches are made to the repositories, but by “backporting” security fixes from later versions into the old stable version that was first shipped).

+

WHERE IS ALL THIS SETUP?

+

We’ll be discussing the “package manager” used by the Debian and Ubuntu distributions, and dozens of derivatives. This uses the apt command, but for most purposes the competing yum and dnf commands used by Fedora, RHEL, CentOS and Scientific Linux work in a very similar way - as do the equivalent utilities in other versions.

+

The configuration is done with files under the /etc/apt directory, and to see where the packages you install are coming from, use less to view /etc/apt/sources.list where you’ll see lines that are clearly specifying URLs to a “repository” for your specific version:

+
 deb http://archive.ubuntu.com/ubuntu precise-security main restricted universe
+
+ +

There’s no need to be concerned with the exact syntax of this for now, but what’s fairly common is to want to add extra repositories - and this is what we’ll deal with next.

+

EXTRA REPOSITORIES

+

While there’s an amazing amount of software available in the “standard” repositories (more than 3,000 for CentOS and ten times that number for Ubuntu), there are often packages not available - typically for one of two reasons:

+
    +
  • Stability - CentOS is based on RHEL (Red Hat Enterprise Linux), which is firmly focussed on stability in large commercial server installations, so games and many minor packages are not included
  • +
  • Ideology - Ubuntu and Debian have a strong “software freedom” ethic (this refers to freedom, not price), which means that certain packages you may need are unavailable by default
  • +
+

So, next you'll be adding an extra repository to your system, and installing software from it.

+

ENABLING EXTRA REPOSITORIES

+

First do a quick check to see how many packages you could already install. You can get the full list and details by running:

+

apt-cache dump

+

…but you’ll want to press Ctrl-c a few times to stop that, as it’s far too long-winded.

+

Instead, filter out just the packages names using grep, and count them using: wc -l (wc is “word count”, and the “-l” makes it count lines rather than words) - like this:

+

apt-cache dump | grep "Package:" | wc -l

+

These are all the packages you could now install. Sometimes there are extra packages available if you enable extra repositories. Most Linux distros have a similar concept, but in Ubuntu, often the “Universe” and “Multiverse” repositories are disabled by default. These are hosted at Ubuntu, but with less support, and Multiverse: “contains software which has been classified as non-free …may not include security updates”. Examples of useful tools in Multiverse might include the compression utilities rar and lha, and the network performance tool netperf.

+

To enable the “Multiverse” repository, follow the guide at:

+ +
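On recent Ubuntu releases this usually boils down to a single command - but do check the guide above for your particular version:

sudo add-apt-repository multiverse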

After adding this, update your local cache of available applications:

+

sudo apt update

+

Once done, you should be able to install netperf like this:

+

sudo apt install netperf

+

…and the output will show that it’s coming from Multiverse.

+

EXTENSION - Ubuntu PPAs

+

Ubuntu also allows users to register an account and setup software in a Personal Package Archive (PPA) - typically these are setup by enthusiastic developers, and allow you to install the latest “cutting edge” software.

+

As an example, install and run the neofetch utility. When run, this prints out a summary of your configuration and hardware. +This is in the standard repositories, and neofetch --version will show the version. If for some reason you wanted to have a later version you could install a developer’s Neofetch PPA to your software sources by:

+

sudo add-apt-repository ppa:ubuntusway-dev/dev

+

As always, after adding a repository, update your local cache of available applications:

+

sudo apt update

+

Then install the package with:

+

sudo apt install neofetch

+

Check with neofetch --version to see what version you have now.

+

Check with apt-cache show neofetch to see the details of the package.

+

When you next run “sudo apt upgrade” you’ll likely be prompted to install a new version of neofetch - because the developers are sometimes literally making changes every day. (And if it’s not obvious, when the developers have a bad day your software will stop working until they make a fix - that’s the real “cutting edge”!)
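One of today's tasks is to remove a repo again. For a PPA the usual route is the --remove flag (using the example PPA from above) - note that this disables the source but doesn't downgrade anything already installed:

sudo add-apt-repository --remove ppa:ubuntusway-dev/dev
sudo apt update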

+

SUMMARY

+

Installing only from the default repositories is clearly the safest, but there are often good reasons for going beyond them. As a sysadmin you need to judge the risks, but in the example we came up with a realistic scenario where connecting to an unstable working developer’s version made sense.

+

As a general rule, however, you:

+
    +
  • Will seldom have good reasons for hooking into more than one or two extra repositories
  • +
  • Need to read up about a repository first, to understand any potential disadvantages.
  • +
+

RESOURCES

+ +

Some rights reserved. Check the license terms +here

diff --git a/16/index.html b/16/index.html
new file mode 100644
index 0000000..c06b938

Day 16 - Archiving and compressing

+ +

INTRO

+

As a system administrator, you need to be able to confidently work with compressed “archives” of files. In particular two of your key responsibilities; installing new software, and managing backups, often require this.

+

YOUR TASKS TODAY

+
    +
  • Create a tarball
  • +
  • Create a compressed tarball and compare sizes
  • +
  • Extract files from a tarball
  • +
+

CREATING ARCHIVES

+

On other operating systems, applications like WinZip, and pkzip before it, have long been used to gather a series of files and folders into one compressed file - with a .zip extension. Linux takes a slightly different approach, with the “gathering” of files and folders done in one step, and the compression in another.

+

So, you could create a “snapshot” of the current files in your /etc/init.d folder like this:

+

tar -cvf myinits.tar /etc/init.d/

+

This creates myinits.tar in your current directory.

+

Note 1: The -f switch specifies that “the output should go to the filename which follows” - so in this case the order of the switches is important. VERY IMPORTANT: tar considers anything after -f as the name of the archive that needs to be created. So, we should always use -f as the last flag while creating an archive.

+

Note 2: The -v switch (verbose) is included to give some feedback - traditionally many utilities provide no feedback unless they fail.

+

(The cryptic “tar” name? - originally short for “tape archive”)

+

You could then compress this file with GnuZip like this:

+

gzip myinits.tar

+

…which will create myinits.tar.gz. A compressed tar archive like this is known as a “tarball”. You will also sometimes see tarballs with a .tgz extension - at the Linux commandline this doesn’t have any meaning to the system, but is simply helpful to humans.

+

In practice you can do the two steps in one with the “-z” switch, like this:

+

tar -cvzf myinits.tgz /etc/init.d/

+

This uses the -c switch to say that we’re creating an archive; -v to make the command “verbose”; -z to compress the result - and -f to specify the output file.
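To see what the compression gained you, recreate the plain archive alongside the compressed one and compare them with du (gzip will have replaced the original myinits.tar with myinits.tar.gz):

tar -cvf myinits.tar /etc/init.d/
du -h myinits.tar myinits.tgz      # compare uncompressed vs compressed sizes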

+

TASKS FOR TODAY

+
    +
  • Check the links under “Resources” to better understand this - and to find out how to extract files from an archive!
  • +
  • Use tar to create an archive copy of some files and check the resulting size
  • +
  • Run the same command, but this time use -z to compress - and check the file size
  • +
  • Copy your archives to /tmp (with: cp) and extract each there to test that it works
  • +
+

POSTING YOUR PROGRESS

+

Nothing to post today - but make sure you understand this stuff, because we’ll be using it for real in the next day’s session!

+

EXTENSION

+
    +
  • What is a .bz2 file - and how would you extract the files from it?
  • +
  • Research how absolute and relative paths are handled in tar - and why you need to be careful extracting from archives when logged in as root
  • +
  • You might notice that some tutorials write “tar cvf” rather than “tar -cvf” with the switch character - do you know why?
  • +
+

RESOURCES

+ +

Some rights reserved. Check the license terms +here

diff --git a/17/index.html b/17/index.html
new file mode 100644
index 0000000..3e3ae8d

Day 17 - Build from the source

+ +

INTRO

+

A few days ago we saw how to authorise extra repositories for apt-cache to search when we need unusual applications, or perhaps more recent versions than those in the standard repositories.

+

Today we’re going one step further - literally going to “go to the source”. This is not something to be done lightly - the whole reason for package managers is to make your life easy - but occasionally it is justified, and it is something you need to be aware of and comfortable with.

+

The applications we’ve been installing up to this point have come from repositories. The files there are “binaries” - pre-compiled, and often customised by your distro. What might not be clear is that your distro gets these applications from a diverse range of un-coordinated development projects (the “upstream”), and these developers are continuously working on new versions. We’ll go to one of these, download the source, compile and install it.

+

(Another big part of what package managers like apt do is to identify and install any required "dependencies". In the Linux world many open source apps take advantage of existing infrastructure in this way, but it can be a very tricky thing to resolve manually. However, the app we're installing today from source is relatively unusual in being completely standalone).

+

YOUR TASKS TODAY

+
    +
  • Download a source code tarball
  • +
  • Extract and build the source
  • +
+

FIRST WE NEED THE ESSENTIALS

+

Projects normally provide their applications as “source files”, written in the C, C++ or other computer languages. We’re going to pull down such a source file, but it won’t be any use to us until we compile it into an “executable” - a program that our server can execute. So, we’ll need to first install a standard bundle of common compilers and similar tools. On Ubuntu, the package of such tools is called “build-essential”. Install it like this:

+

sudo apt install build-essential

+

GETTING THE SOURCE

+

First, test that you already have nmap installed, and type nmap -V to see what version you have. This is the version installed from your standard repositories. Next, type: which nmap - to see where the executable is stored.

+

Now let’s go to the “Project Page” for the developers http://nmap.org/ and grab the very latest cutting-edge version. Look for the download page, then the section “Source Code Distribution” and the link for the “Latest development nmap release tarball” and note the URL for it - something like:

+
 https://nmap.org/dist/nmap-7.70.tar.bz2
+
+ +

This is version 7.70, the latest development release when these notes were written, but it may be different now. So now we’ll pull this down to your server. The first question is where to put it - we’ll put it in your home directory, so change to your home directory with:

+

cd

+

then simply using wget (“web get”), to download the file like this:

+

wget -v https://nmap.org/dist/nmap-7.70.tar.bz2

+

The -v (for verbose), gives some feedback so that you can see what’s happening. Once it’s finished, check by listing your directory contents:

+

ls -ltr

+

As we’ve learnt, the end of the filename is typically a clue to the file’s format - in this case “.bz2” signals that it’s a tarball compressed with the bz2 algorithm. While we could uncompress this then un-combine the files in two steps, it can be done with one command - like this:

+

tar -j -x -v -f nmap-7.70.tar.bz2

+

....where the -j means “uncompress a bz2 file first”, -x is extract, -v is verbose - and -f says “the filename comes next”. Normally we’d actually do this more concisely as:

+

tar -jxvf nmap-7.70.tar.bz2

+

So, lets see the results,

+

ls -ltr

+

Remembering that directories have a leading “d” in the listing, you’ll see that a directory has been created :

+
 -rw-r--r--  1 steve  steve  21633731    2011-10-01 06:46 nmap-7.70.tar.bz2
+ drwxr-xr-x 20 steve  steve  4096        2011-10-01 06:06 nmap-7.70
+
+ +

Now explore the contents of this with mc or simply cd nmap-7.70 - you should be able to use ls and less to find and read the actual source code. Even if you know no programming, the comments can be entertaining reading.

+

By convention, source files will typically include in their root directory a series of text files in uppercase such as: README and INSTALLATION. Look for these, and read them using more or less. It’s important to realise that the programmers of the “upstream” project are not writing for Ubuntu, CentOS - or even Linux. They have written a correct working program in C or C++ etc and made it available, but it’s up to us to figure out how to compile it for our operating system, chip type etc. (This hopefully gives a little insight into the value that distributions such as CentOS, Ubuntu and utilities such as apt, yum etc add, and how tough it would be to create your own Linux From Scratch)

+

So, in this case we see an INSTALL file that says something terse like:

+
 Ideally, you should be able to just type:
+
+ ./configure
+ make
+ make install
+
+ For far more in-depth compilation, installation, and removal notes
+ read the Nmap Install Guide at http://nmap.org/install/ .
+
+ +

In fact, this is fairly standard for many packages. Here’s what each of the steps does:

+
    +
  • ./configure - is a script which checks your server (ie to see whether it’s ARM or Intel based, 32 or 64-bit, which compiler you have etc). It can also be given parameters to tailor the compilation of the software, such as to not include any extra support for running in a GUI environment - something that would make sense on a “headless” (remote text-only server), or to optimize for minimum memory use at the expense of speed - as might make sense if your server has very little RAM. If asked any questions, just take the defaults - and don’t panic if you get some WARNING messages, chances are that all will be well.
  • +
  • make - compiles the software, typically calling the GNU compiler gcc. This may generate lots of scary looking text, and take a minute or two - or as much as an hour or two for very large packages like LibreOffice.
  • +
  • make install - this step takes the compiled files, and installs that plus documentation to your system and in some cases will setup services and scheduled tasks etc. Until now you’ve just been working in your home directory, but this step installs to the system for all users, so requires root privileges. Because of this, you’ll need to actually run: sudo make install. If asked any questions, just take the defaults.
  • +
+

Now, potentially this last step will have overwritten the nmap you already had, but more likely this new one has been installed into a different place.

+

In general /bin is for key parts of the operating system, /usr/bin for less critical utilities and /usr/local/bin for software you've chosen to manually install yourself. When you type a command, the shell searches through the directories listed in your PATH environment variable, in order, and runs the first match. So whichever copy of nmap lives in the directory that appears earlier in your PATH is the one that will run - but if you give the "full path" to the version you want - such as /usr/local/bin/nmap - that version will run regardless.
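You can see exactly which copies your shell knows about, and in which order it would find them, with the bash built-ins below (hash -r is worth knowing because bash caches command locations):

echo $PATH      # the directories searched, left to right
type -a nmap    # every nmap on your PATH, in search order
hash -r         # forget cached command locations after installing a new version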

+

The “locate” command allows very fast searching for files, but because these files have only just been added, we’ll need to manually update the index of files:

+

sudo updatedb

+

Then to search the index:

+

locate bin/nmap

+

This should find both your old and new copies of nmap

+

Now try running each, for example:

+

/usr/bin/nmap -V

+

/usr/local/bin/nmap -V

+

The nmap utility relies on no other package or library, so it is very easy to install from source. Most other packages have many “dependencies”, so installing them from source by hand can be pretty challenging even when well explained (look at: http://oss.oetiker.ch/smokeping/doc/smokeping_install.en.html for a good example).

+

NOTE: Because you’ve done all this outside of the apt system, this binary won’t get updates when you run apt update. Not a big issue with a utility like nmap probably, but for anything that runs as an exposed service it’s important that you understand that you now have to track security alerts for the application (and all of its dependencies), and install the later fixed versions when they’re available. This is a significant pain/risk for a production server.
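If you later want to tidy up, many source trees - nmap's included, at the time of writing - provide an uninstall target, though this is project-specific rather than guaranteed:

cd ~/nmap-7.70
sudo make uninstall    # only works if the project's Makefile defines this target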

+

POSTING YOUR PROGRESS

+

Pat yourself on the back if you succeeded today - and let us know in the forum.

+

EXTENSION

+

Research some distributions where “from source” is normal:

+ +

None of these is typically used in production servers, but investigating any of them will certainly increase your knowledge of how Linux works “under the covers” - asking you to make many choices that the production-ready distros such as RHEL and Ubuntu do on your behalf by choosing what they see as sensible defaults.

+

RESOURCES

+ +

Some rights reserved. Check the license terms +here

diff --git a/18/index.html b/18/index.html
new file mode 100644
index 0000000..842f162

Day 18 - Logs, monitoring and troubleshooting

+ +

INTRO

+

When you’re administering a remote server, logs are your best friend, but disk space problems can be your worst enemy - so while Linux applications are generally very good at generating logs, they need to be controlled.

+

The logrotate application keeps your logs in check. Using this, you can define how many days of logs you wish to keep; split them into manageable files; compress them to save space, or even keep them on a totally separate server.

+

Good sysadmins love automation - having the computer automatically do the boring repetitive stuff Just Makes Sense.

+

YOUR TASKS TODAY

+
    +
  • Check the logs for apache2 that are Severity 3
  • +
  • Edit logrotate configuration for apache2 to rotate daily
  • +
+

ARE YOUR LOGS ROTATING?

+

Look into your logs directories - /var/log, and subdirectories like /var/log/apache2. Can you see that your logs are already being rotated? You should see a /var/log/syslog file, but also a series of older compressed versions with names like /var/log/syslog.1.gz

+

WHEN DO THEY ROTATE?

+

You will recall that cron is generally setup to run scripts in /etc/cron.daily - so look in there and you should see a script called logrotate - or possibly 00logrotate to force it to be the first task to run.

+

CONFIGURING LOGROTATE

+

The overall configuration is set in /etc/logrotate.conf - have a look at that, but then also look at the files under the directory /etc/logrotate.d, as the contents of these are merged in to create the full configuration. +You will probably see one called apache2, with contents like this:

+
 /var/log/apache2/*.log {
+ weekly
+ missingok
+ rotate 52
+ compress
+ delaycompress
+ notifempty
+ create 640 root adm
+ }
+
+ +

Much of this is fairly clear: any apache2 .log file will be rotated each week, with 52 compressed copies being kept.

+

Typically when you install an application a suitable logrotate “recipe” is installed for you, so you’ll not normally be creating these from scratch. However, the default settings won’t always match your requirements, so it’s perfectly reasonable for you as the sysadmin to edit these - for example, the default apache2 recipe above creates 52 weekly logs, but you might find it more useful to have logs rotated daily, a copy automatically emailed to an auditor, and just 30 days worth kept on the server.
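As a sketch of that scenario - the mail address is obviously made up, and the mail directive only does anything if the server can actually send email - the edited recipe might look something like this:

 /var/log/apache2/*.log {
        daily
        missingok
        rotate 30
        compress
        delaycompress
        notifempty
        mail auditor@example.com
        create 640 root adm
 }

Logrotate's debug flag is useful for previewing the effect of an edit without actually rotating anything:

sudo logrotate -d /etc/logrotate.conf    # debug/dry-run: shows what would be rotated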

+

RESOURCES

+ +

TROUBLESHOOT AND MAKE A SAD SERVER HAPPY!

+

Practice what you’ve learned with some challenges at SadServers.com:

+ +

Some rights reserved. Check the license terms +here

diff --git a/19/index.html b/19/index.html
new file mode 100644
index 0000000..f102d85

Day 19 - Inodes, symlinks and other shortcuts

+ +

INTRO

+

Today’s topic gives a peek “under the covers” at the technical detail of how files are stored.

+

Linux supports a large number of different “filesystems” - although on a server you’ll typically be dealing with just ext3 or ext4 and perhaps btrfs - but today we’ll not be dealing with any of these; instead with the layer of Linux that sits above all of these - the Linux Virtual Filesystem.

+

The VFS is a key part of Linux, and an overview of it and some of the surrounding concepts is very useful in confidently administering a system.

+

YOUR TASKS TODAY

+
    +
  • Create a hard link
  • +
  • Create a soft link
  • +
  • Create aliases
  • +
+

THE NEXT LAYER DOWN

+

Linux has an extra layer between the filename and the file’s actual data on the disk - this is the inode. This has a numerical value which you can see most easily in two ways:

+

The -i switch on the ls command:

+
 ls -li /etc/hosts
+ 35356766 -rw------- 1 root root 260 Nov 25 04:59 /etc/hosts
+
+ +

The stat command:

+
 stat /etc/hosts
+ File: `/etc/hosts'
+ Size: 260           Blocks: 8           IO Block: 4096   regular file
+ Device: 2ch/44d     Inode: 35356766     Links: 1
+ Access: (0600/-rw-------)  Uid: (  0/   root)   Gid: ( 0/  root)
+ Access: 2012-11-28 13:09:10.000000000 +0400
+ Modify: 2012-11-25 04:59:55.000000000 +0400
+ Change: 2012-11-25 04:59:55.000000000 +0400
+
+ +

Every file name “points” to an inode, which in turn points to the actual data on the disk. This means that several filenames could point to the same inode - and hence have exactly the same contents. In fact this is a standard technique - called a “hard link”. The other important thing to note is that when we view the permissions, ownership and dates of filenames, these attributes are actually kept at the inode level, not the filename. Much of the time this distinction is just theoretical, but it can be very important.

+ +

Work through the steps below to get familiar with hard and soft linking:

+

First move to your home directory with:

+

cd

+

Then use the ln (“link”) command to create a “hard link”, like this:

+

ln /etc/passwd link1

+

and now a “symbolic link” (or “symlink”), like this:

+

ln -s /etc/passwd link2

+

Now use ls -li to view the resulting files, and less or cat to view them.

+

Note that the permissions on a symlink generally show as allowing everything - but what matters is the permission of the file it points to.

+

Both hard and symlinks are widely used in Linux, but symlinks are especially common - for example:

+

ls -ltr /etc/rc2.d/*

+

This directory holds all the scripts that start when your machine changes to “runlevel 2” (its normal running state) - but you’ll see that in fact most of them are symlinks to the real scripts in /etc/init.d

+

It’s also very common to have something like :

+
 prog
+ prog-v3
+ prog-v4
+
+ +

where the program “prog”, is a symlink - originally to v3, but now points to v4 (and could be pointed back if required)

+

Read up in the resources provided, and test on your server to gain a better understanding. In particular, see how permissions and file sizes work with symbolic links versus hard links or simple files
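A quick experiment along those lines (run it in your home directory; the filenames are arbitrary):

echo "hello" > original.txt
ln original.txt hard.txt       # hard link: same inode, link count rises to 2
ln -s original.txt soft.txt    # symlink: its own inode, stores the target name
ls -li original.txt hard.txt soft.txt
rm original.txt
cat hard.txt                   # still works - the data is still referenced
cat soft.txt                   # fails - the name it pointed to has gone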

+

The Differences

+

Hard links:

+
    +
  • Only link to a file, not a directory
  • +
  • Can’t reference a file on a different disk/volume
  • +
  • Links will reference a file even if it is moved
  • +
  • Links reference inode/physical locations on the disk
  • +
+

Symbolic (soft) links:

+
    +
  • Can link to directories
  • +
  • Can reference a file/folder on a different hard disk/volume
  • +
  • Links remain if the original file is deleted
  • +
  • Links will NOT reference the file anymore if it is moved
  • +
  • Links reference abstract filenames/directories and NOT physical locations.
  • +
  • They have their own inode
  • +
+

EXTENSION

+ +

RESOURCES

+ +

Some rights reserved. Check the license terms +here

diff --git a/20/index.html b/20/index.html
new file mode 100644
index 0000000..3514f0a

Day 20 - Scripting

+ +

INTRO

+

Today is the final session for the course. Pat yourself on the back if you worked your way through all lessons!

+

You’ve seen that a continual emphasis for a sysadmin is to automate as much as possible, and also how in Linux the system is very “transparent” - once you know where to look!

+

Today, on this final session for the course, we’ll cover how to write small programs or “shell scripts” to help manage your system.

+

When typing at the Linux command-line you're directly communicating with "the command interpreter", also known as "the shell". Normally this shell is bash, so when you string commands together to make a script the result can be called either a "shell script" or a "bash script".

+

Why make a script rather than just typing commands in manually?

+
    +
  • It saves typing. Remember when we searched through the logs with a long string of grep, cut and sort commands? If you need to do something like that more than a few times then turning it into a script saves typing - and typos!
  • +
  • Parameters. One script can be used to do several things depending on what parameters you provide
  • +
  • Automation. Pop your script in /etc/cron.daily and it will run each day, or install a symlink to it in the appropriate /etc/rc.d folder and you can have it run each time the system is shut down or booted up.
  • +
+

YOUR TASKS TODAY

+
    +
  • Write a short script that lists the top 3 IP addresses that tried to log in to your server
  • +
+

START WITH A SHEBANG!

+

Scripts are just simple text files, but if you set the “execute” permissions on them then the system will look for a special line starting with the two characters “#” and “!” - referred to as the “shebang” (or “crunchbang”) at the top of the file.

+

This line typically looks like this:

+
 #!/bin/bash
+
+ +

Normally anything starting with a “#” character would be treated as a comment, but in the first line and followed by a “!”, it’s interpreted as: “please feed the rest of this to the /bin/bash program, which will interpret it as a script”. All of our scripts will be written in the bash language - the same as you’ve been typing at the command line throughout this course - but scripts can also be written in many other “scripting languages”, so a script in the Perl language might start with #!/usr/bin/perl and one in Python #!/usr/bin/env python3

+

YOUR FIRST SCRIPT

+

You’ll write a small script to list out who’s been most recently unsuccessfully trying to login to your server, using the entries in /var/log/auth.log.

+

Use vim to create a file, attacker, in your home directory with this content:

+
 #!/bin/bash
+ #
+ #   attacker - prints out the last failed login attempt
+ #
+ echo "The last failed login attempt came from IP address:"
+ grep -i "disconnected from" /var/log/auth.log|tail -1| cut -d: -f4| cut -f7 -d" "
+
+ +

Putting comments at the top of the script like this isn’t strictly necessary (the computer ignores them), but it’s a good professional habit to get into.

+

To make it executable type:

+

chmod +x attacker

+

Now, to run this script you just need to refer to it by name - but the current directory is (deliberately) not in your $PATH, so you need to do this in one of two ways:

+
 /home/support/attacker
+ ./attacker
+
+ +

Once you’re happy with a script and want to have it easily available, you’ll probably want to move it somewhere on your $PATH - and /usr/local/bin is normally the appropriate place, so try this:

+

sudo mv attacker /usr/local/bin/attacker

+

…and now it will Just Work whenever you type attacker

+
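If you want to confirm where the shell is now finding it, a quick sanity check (not required) looks like this:

 which attacker   # should print /usr/local/bin/attacker
 type attacker    # shows how bash will resolve the name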

EXTENDING THE SCRIPT

+

You can expand this script so that it requires a parameter and prints out some syntax help when you don’t give one. There are a few new tricks in this, so it’s worth studying:

+
 #!/bin/bash
+ #
+ #   topattack - list the most persistent attackers
+ #
+ if [ -z "$1" ]; then
+ echo -e "\nUsage: `basename $0` <num> - Lists the top <num> attackers by IP"
+ exit 0
+ fi
+ echo " "
+ echo "Persistent recent attackers"
+ echo " "
+ echo "Attempts      IP "
+ echo "-----------------------"
+ grep "Disconnected from authenticating user root" /var/log/auth.log|cut -d: -f 4 | cut -d" " -f7|sort |uniq -c |sort -nr |head -$1
+
+ +

Again, use vim to create "topattack", chmod to make it executable and mv to move it into /usr/local/bin once you have it working correctly.

+
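Once topattack is in /usr/local/bin, the task from the start of this lesson becomes a one-liner. The output sketched below (after the header lines the script prints) is only indicative - the counts, and the addresses taken here from the documentation IP ranges, will differ on your server:

 topattack 3
 Attempts      IP 
 -----------------------
      42 203.0.113.17
      31 198.51.100.9
      12 192.0.2.201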

(BTW, you can use whois to find details on any of these IPs - just be aware that the system that is “attacking” you may be an innocent party that’s been hacked into).

+
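For example (the address here is just a placeholder from the documentation range, and the exact field names in whois output vary between registries):

 whois 203.0.113.17 | grep -iE 'country|orgname|netname'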

A collection of simple scripts like this is easy to build up, and will make your sysadmin tasks simpler, quicker and less error-prone.

+

If automating and scripting many of your daily tasks sounds like something you really enjoy, you might also want to script the setup of your machines and services. You can do this with bash scripting as shown in this lesson, but there are benefits in choosing an orchestration framework like Ansible, cloud-init or Terraform. Those frameworks are outside the scope of this course, but are worth reading about.

+
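For comparison, the plain-bash version of “scripting your setup” is just a longer script of the same kind. A rough sketch, with package and timezone choices that are only examples:

 #!/bin/bash
 # setup.sh - rough sketch of automating a new server's initial setup (run as root)
 set -e                          # stop at the first error
 apt update && apt -y upgrade
 apt -y install fail2ban unattended-upgrades
 timedatectl set-timezone Etc/UTC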

And yes, this is the last lesson - so please, feel free to write a review on how the course went for you and what you plan to do with your new knowledge and skills!

+

RESOURCES

+ +

Some rights reserved. Check the license terms here

+ + + + + + + + + + + + + + + + + + + + +


+ + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/21/index.html b/21/index.html new file mode 100644 index 0000000..16c3056 --- /dev/null +++ b/21/index.html @@ -0,0 +1,1497 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + Day 21 - What's next? - Linux Upskill Challenge + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + + + + + + + + +

Day 21 - What next?

+ +

What is this madness – surely the course was for just 20 days?

+

Yes, but hopefully you’ll go on learning, so here are a few suggestions for directions that you might take.

+

Play with your server

+

You’re familiar with the server you used during the course, so keep working with it. Maybe uninstall Apache2 and install NGINX, a competing webserver. Keep running stats on ssh “attackers”. Whatever takes your interest. A free AWS instance will last a year, and a $5/mo server should be something you can easily justify.

+
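For instance, the Apache-to-NGINX swap mentioned above boils down to something like this sketch (expect to spend time adjusting the site configuration afterwards):

 sudo apt remove --purge apache2 && sudo apt autoremove
 sudo apt install nginx
 systemctl status nginx    # confirm the new webserver is running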

Add services that you’ll use

+

You should now be capable of following tutorials on installing and running your own instance of Minecraft, WordPress, WireGuard VPN, or MediaWiki. Expect to have some problems – it’s all good experience!

+

Take a look at Server World for some inspiration.

+

Extend your learning

+

Stop browsing articles on GNOME, KDE or i3 – and start checking out articles like “20 Linux commands every sysadmin should know”. Try these out, and delve into the options. Like learning a foreign vocabulary, you will only be able to use these “words” if you know them!

+

Check out Linux Journey if you haven’t already, especially if you are still pretty new to Linux and would like to see a different learning approach. Linux 101 Hacks is also a good resource.

+

Practice what you’ve learned with the challenges at SadServers.com. There you’ll find a collection of scenarios where you have to do, fix or hack something on a Linux server. It’s a great way to exercise your troubleshooting skills without messing with your own server.

+

To get crazy fast at the command line, try Command Line Challenge, practicelinux.com, learnshell.org and commandlinefu.com.

+

If your next level goal is to get into DevOps, take a look at the DevOps Roadmap.

+

Certifications

+

If you’re looking to do Linux professionally, and you don’t have an impressive CV or resume already, then you should be aiming to get a Linux certification. There are really just three certs/tracks that count:

+
    +
  • CompTIA Linux+ - a one-and-done exam, distro-independent, but it doesn’t hold much value in the market. Do this if you don’t want to get too deep into Linux, or if you have other CompTIA tracks going on and an employer is paying for them.
  • +
  • LPI LPIC-1: Linux Administrator – Very extensive description of the coverage of their various certs/courses. You can go very deep with these exams; they cover everything you can think of in pure Linux. Not so popular with employers, but the knowledge certainly holds its value.
  • +
  • Red Hat – You could spend a lot of time and money here, but it might well pay off! Geared to the Red Hat Enterprise Linux distribution and its particularities, it is a practical exam (the others are multiple-choice), it’s well known in enterprise circles, and it really stands out on any resume.
  • +
+

Even if you don’t want/need certs, the outline of the topics in these references can give you a good idea of areas to focus on in your self-learning.

+

Affordable professional training

+ +

Show your appreciation!

+

Steve Brorens (@snori74) was a collector of postcards and greatly enjoyed all the “Snail Mail” he received from the students.

+

But since his passing there’s nowhere to send postcards anymore. You can show your appreciation for the course by letting everyone else know how awesome it was! Recommend the course to other people, invite your friends to do the challenge together, have fun! Show the world you finished the challenge by posting about it on social media.

+

Contribute

+

Livia Lima currently maintains the material. But she’s only one person, and she appreciates any help to keep this challenge running consistently every month, and available to everyone.

+

If you’d like to contribute, here are a few things you can do:

+
    +
  • Answer other students’ questions in our channels. Help a friend through the challenge.
  • +
  • Correct typos, dead links, etc. by submitting a correction request to the source material.
  • +
  • Suggest improvements by submitting a feature request to the source material.
  • +
  • Help moderate Lemmy, Reddit or Discord. Are you a whiz on one (or more) of those platforms? Help admin them.
  • +
  • Support the infrastructure by donating or sponsoring. The challenge is free, but the web servers and domains cost money, so we’d appreciate it if you can spare a buck.
  • +
+

Thanks for everything, and happy Linuxing!

+ + + + + + + + + + + + + + + + + + + + +


+ + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/404.html b/404.html new file mode 100644 index 0000000..6cd2f93 --- /dev/null +++ b/404.html @@ -0,0 +1,1132 @@ + + + + + + + + + + + + + + + + + + + + + + + Linux Upskill Challenge + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ +

404 - Not found

+ +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/assets/images/favicon.png b/assets/images/favicon.png new file mode 100644 index 0000000..1cf13b9 Binary files /dev/null and b/assets/images/favicon.png differ diff --git a/assets/javascripts/bundle.525ec568.min.js b/assets/javascripts/bundle.525ec568.min.js new file mode 100644 index 0000000..4b08eae --- /dev/null +++ b/assets/javascripts/bundle.525ec568.min.js @@ -0,0 +1,16 @@ +"use strict";(()=>{var Wi=Object.create;var gr=Object.defineProperty;var Di=Object.getOwnPropertyDescriptor;var Vi=Object.getOwnPropertyNames,Vt=Object.getOwnPropertySymbols,Ni=Object.getPrototypeOf,yr=Object.prototype.hasOwnProperty,ao=Object.prototype.propertyIsEnumerable;var io=(e,t,r)=>t in e?gr(e,t,{enumerable:!0,configurable:!0,writable:!0,value:r}):e[t]=r,$=(e,t)=>{for(var r in t||(t={}))yr.call(t,r)&&io(e,r,t[r]);if(Vt)for(var r of Vt(t))ao.call(t,r)&&io(e,r,t[r]);return e};var so=(e,t)=>{var r={};for(var o in e)yr.call(e,o)&&t.indexOf(o)<0&&(r[o]=e[o]);if(e!=null&&Vt)for(var o of Vt(e))t.indexOf(o)<0&&ao.call(e,o)&&(r[o]=e[o]);return r};var xr=(e,t)=>()=>(t||e((t={exports:{}}).exports,t),t.exports);var zi=(e,t,r,o)=>{if(t&&typeof t=="object"||typeof t=="function")for(let n of Vi(t))!yr.call(e,n)&&n!==r&&gr(e,n,{get:()=>t[n],enumerable:!(o=Di(t,n))||o.enumerable});return e};var Mt=(e,t,r)=>(r=e!=null?Wi(Ni(e)):{},zi(t||!e||!e.__esModule?gr(r,"default",{value:e,enumerable:!0}):r,e));var co=(e,t,r)=>new Promise((o,n)=>{var i=p=>{try{s(r.next(p))}catch(c){n(c)}},a=p=>{try{s(r.throw(p))}catch(c){n(c)}},s=p=>p.done?o(p.value):Promise.resolve(p.value).then(i,a);s((r=r.apply(e,t)).next())});var lo=xr((Er,po)=>{(function(e,t){typeof Er=="object"&&typeof po!="undefined"?t():typeof define=="function"&&define.amd?define(t):t()})(Er,function(){"use strict";function e(r){var o=!0,n=!1,i=null,a={text:!0,search:!0,url:!0,tel:!0,email:!0,password:!0,number:!0,date:!0,month:!0,week:!0,time:!0,datetime:!0,"datetime-local":!0};function s(k){return!!(k&&k!==document&&k.nodeName!=="HTML"&&k.nodeName!=="BODY"&&"classList"in k&&"contains"in k.classList)}function p(k){var ft=k.type,qe=k.tagName;return!!(qe==="INPUT"&&a[ft]&&!k.readOnly||qe==="TEXTAREA"&&!k.readOnly||k.isContentEditable)}function c(k){k.classList.contains("focus-visible")||(k.classList.add("focus-visible"),k.setAttribute("data-focus-visible-added",""))}function l(k){k.hasAttribute("data-focus-visible-added")&&(k.classList.remove("focus-visible"),k.removeAttribute("data-focus-visible-added"))}function f(k){k.metaKey||k.altKey||k.ctrlKey||(s(r.activeElement)&&c(r.activeElement),o=!0)}function u(k){o=!1}function d(k){s(k.target)&&(o||p(k.target))&&c(k.target)}function y(k){s(k.target)&&(k.target.classList.contains("focus-visible")||k.target.hasAttribute("data-focus-visible-added"))&&(n=!0,window.clearTimeout(i),i=window.setTimeout(function(){n=!1},100),l(k.target))}function L(k){document.visibilityState==="hidden"&&(n&&(o=!0),X())}function X(){document.addEventListener("mousemove",J),document.addEventListener("mousedown",J),document.addEventListener("mouseup",J),document.addEventListener("pointermove",J),document.addEventListener("pointerdown",J),document.addEventListener("pointerup",J),document.addEventListener("touchmove",J),document.addEventListener("touchstart",J),document.addEventListener("touchend",J)}function 
te(){document.removeEventListener("mousemove",J),document.removeEventListener("mousedown",J),document.removeEventListener("mouseup",J),document.removeEventListener("pointermove",J),document.removeEventListener("pointerdown",J),document.removeEventListener("pointerup",J),document.removeEventListener("touchmove",J),document.removeEventListener("touchstart",J),document.removeEventListener("touchend",J)}function J(k){k.target.nodeName&&k.target.nodeName.toLowerCase()==="html"||(o=!1,te())}document.addEventListener("keydown",f,!0),document.addEventListener("mousedown",u,!0),document.addEventListener("pointerdown",u,!0),document.addEventListener("touchstart",u,!0),document.addEventListener("visibilitychange",L,!0),X(),r.addEventListener("focus",d,!0),r.addEventListener("blur",y,!0),r.nodeType===Node.DOCUMENT_FRAGMENT_NODE&&r.host?r.host.setAttribute("data-js-focus-visible",""):r.nodeType===Node.DOCUMENT_NODE&&(document.documentElement.classList.add("js-focus-visible"),document.documentElement.setAttribute("data-js-focus-visible",""))}if(typeof window!="undefined"&&typeof document!="undefined"){window.applyFocusVisiblePolyfill=e;var t;try{t=new CustomEvent("focus-visible-polyfill-ready")}catch(r){t=document.createEvent("CustomEvent"),t.initCustomEvent("focus-visible-polyfill-ready",!1,!1,{})}window.dispatchEvent(t)}typeof document!="undefined"&&e(document)})});var qr=xr((hy,On)=>{"use strict";/*! + * escape-html + * Copyright(c) 2012-2013 TJ Holowaychuk + * Copyright(c) 2015 Andreas Lubbe + * Copyright(c) 2015 Tiancheng "Timothy" Gu + * MIT Licensed + */var $a=/["'&<>]/;On.exports=Pa;function Pa(e){var t=""+e,r=$a.exec(t);if(!r)return t;var o,n="",i=0,a=0;for(i=r.index;i{/*! + * clipboard.js v2.0.11 + * https://clipboardjs.com/ + * + * Licensed MIT © Zeno Rocha + */(function(t,r){typeof It=="object"&&typeof Yr=="object"?Yr.exports=r():typeof define=="function"&&define.amd?define([],r):typeof It=="object"?It.ClipboardJS=r():t.ClipboardJS=r()})(It,function(){return function(){var e={686:function(o,n,i){"use strict";i.d(n,{default:function(){return Ui}});var a=i(279),s=i.n(a),p=i(370),c=i.n(p),l=i(817),f=i.n(l);function u(V){try{return document.execCommand(V)}catch(A){return!1}}var d=function(A){var M=f()(A);return u("cut"),M},y=d;function L(V){var A=document.documentElement.getAttribute("dir")==="rtl",M=document.createElement("textarea");M.style.fontSize="12pt",M.style.border="0",M.style.padding="0",M.style.margin="0",M.style.position="absolute",M.style[A?"right":"left"]="-9999px";var F=window.pageYOffset||document.documentElement.scrollTop;return M.style.top="".concat(F,"px"),M.setAttribute("readonly",""),M.value=V,M}var X=function(A,M){var F=L(A);M.container.appendChild(F);var D=f()(F);return u("copy"),F.remove(),D},te=function(A){var M=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{container:document.body},F="";return typeof A=="string"?F=X(A,M):A instanceof HTMLInputElement&&!["text","search","url","tel","password"].includes(A==null?void 0:A.type)?F=X(A.value,M):(F=f()(A),u("copy")),F},J=te;function k(V){"@babel/helpers - typeof";return typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?k=function(M){return typeof M}:k=function(M){return M&&typeof Symbol=="function"&&M.constructor===Symbol&&M!==Symbol.prototype?"symbol":typeof M},k(V)}var ft=function(){var A=arguments.length>0&&arguments[0]!==void 0?arguments[0]:{},M=A.action,F=M===void 0?"copy":M,D=A.container,Y=A.target,$e=A.text;if(F!=="copy"&&F!=="cut")throw new Error('Invalid "action" value, use either "copy" or 
"cut"');if(Y!==void 0)if(Y&&k(Y)==="object"&&Y.nodeType===1){if(F==="copy"&&Y.hasAttribute("disabled"))throw new Error('Invalid "target" attribute. Please use "readonly" instead of "disabled" attribute');if(F==="cut"&&(Y.hasAttribute("readonly")||Y.hasAttribute("disabled")))throw new Error(`Invalid "target" attribute. You can't cut text from elements with "readonly" or "disabled" attributes`)}else throw new Error('Invalid "target" value, use a valid Element');if($e)return J($e,{container:D});if(Y)return F==="cut"?y(Y):J(Y,{container:D})},qe=ft;function Fe(V){"@babel/helpers - typeof";return typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?Fe=function(M){return typeof M}:Fe=function(M){return M&&typeof Symbol=="function"&&M.constructor===Symbol&&M!==Symbol.prototype?"symbol":typeof M},Fe(V)}function ki(V,A){if(!(V instanceof A))throw new TypeError("Cannot call a class as a function")}function no(V,A){for(var M=0;M0&&arguments[0]!==void 0?arguments[0]:{};this.action=typeof D.action=="function"?D.action:this.defaultAction,this.target=typeof D.target=="function"?D.target:this.defaultTarget,this.text=typeof D.text=="function"?D.text:this.defaultText,this.container=Fe(D.container)==="object"?D.container:document.body}},{key:"listenClick",value:function(D){var Y=this;this.listener=c()(D,"click",function($e){return Y.onClick($e)})}},{key:"onClick",value:function(D){var Y=D.delegateTarget||D.currentTarget,$e=this.action(Y)||"copy",Dt=qe({action:$e,container:this.container,target:this.target(Y),text:this.text(Y)});this.emit(Dt?"success":"error",{action:$e,text:Dt,trigger:Y,clearSelection:function(){Y&&Y.focus(),window.getSelection().removeAllRanges()}})}},{key:"defaultAction",value:function(D){return vr("action",D)}},{key:"defaultTarget",value:function(D){var Y=vr("target",D);if(Y)return document.querySelector(Y)}},{key:"defaultText",value:function(D){return vr("text",D)}},{key:"destroy",value:function(){this.listener.destroy()}}],[{key:"copy",value:function(D){var Y=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{container:document.body};return J(D,Y)}},{key:"cut",value:function(D){return y(D)}},{key:"isSupported",value:function(){var D=arguments.length>0&&arguments[0]!==void 0?arguments[0]:["copy","cut"],Y=typeof D=="string"?[D]:D,$e=!!document.queryCommandSupported;return Y.forEach(function(Dt){$e=$e&&!!document.queryCommandSupported(Dt)}),$e}}]),M}(s()),Ui=Fi},828:function(o){var n=9;if(typeof Element!="undefined"&&!Element.prototype.matches){var i=Element.prototype;i.matches=i.matchesSelector||i.mozMatchesSelector||i.msMatchesSelector||i.oMatchesSelector||i.webkitMatchesSelector}function a(s,p){for(;s&&s.nodeType!==n;){if(typeof s.matches=="function"&&s.matches(p))return s;s=s.parentNode}}o.exports=a},438:function(o,n,i){var a=i(828);function s(l,f,u,d,y){var L=c.apply(this,arguments);return l.addEventListener(u,L,y),{destroy:function(){l.removeEventListener(u,L,y)}}}function p(l,f,u,d,y){return typeof l.addEventListener=="function"?s.apply(null,arguments):typeof u=="function"?s.bind(null,document).apply(null,arguments):(typeof l=="string"&&(l=document.querySelectorAll(l)),Array.prototype.map.call(l,function(L){return s(L,f,u,d,y)}))}function c(l,f,u,d){return function(y){y.delegateTarget=a(y.target,f),y.delegateTarget&&d.call(l,y)}}o.exports=p},879:function(o,n){n.node=function(i){return i!==void 0&&i instanceof HTMLElement&&i.nodeType===1},n.nodeList=function(i){var a=Object.prototype.toString.call(i);return i!==void 0&&(a==="[object NodeList]"||a==="[object 
HTMLCollection]")&&"length"in i&&(i.length===0||n.node(i[0]))},n.string=function(i){return typeof i=="string"||i instanceof String},n.fn=function(i){var a=Object.prototype.toString.call(i);return a==="[object Function]"}},370:function(o,n,i){var a=i(879),s=i(438);function p(u,d,y){if(!u&&!d&&!y)throw new Error("Missing required arguments");if(!a.string(d))throw new TypeError("Second argument must be a String");if(!a.fn(y))throw new TypeError("Third argument must be a Function");if(a.node(u))return c(u,d,y);if(a.nodeList(u))return l(u,d,y);if(a.string(u))return f(u,d,y);throw new TypeError("First argument must be a String, HTMLElement, HTMLCollection, or NodeList")}function c(u,d,y){return u.addEventListener(d,y),{destroy:function(){u.removeEventListener(d,y)}}}function l(u,d,y){return Array.prototype.forEach.call(u,function(L){L.addEventListener(d,y)}),{destroy:function(){Array.prototype.forEach.call(u,function(L){L.removeEventListener(d,y)})}}}function f(u,d,y){return s(document.body,u,d,y)}o.exports=p},817:function(o){function n(i){var a;if(i.nodeName==="SELECT")i.focus(),a=i.value;else if(i.nodeName==="INPUT"||i.nodeName==="TEXTAREA"){var s=i.hasAttribute("readonly");s||i.setAttribute("readonly",""),i.select(),i.setSelectionRange(0,i.value.length),s||i.removeAttribute("readonly"),a=i.value}else{i.hasAttribute("contenteditable")&&i.focus();var p=window.getSelection(),c=document.createRange();c.selectNodeContents(i),p.removeAllRanges(),p.addRange(c),a=p.toString()}return a}o.exports=n},279:function(o){function n(){}n.prototype={on:function(i,a,s){var p=this.e||(this.e={});return(p[i]||(p[i]=[])).push({fn:a,ctx:s}),this},once:function(i,a,s){var p=this;function c(){p.off(i,c),a.apply(s,arguments)}return c._=a,this.on(i,c,s)},emit:function(i){var a=[].slice.call(arguments,1),s=((this.e||(this.e={}))[i]||[]).slice(),p=0,c=s.length;for(p;p0&&i[i.length-1])&&(c[0]===6||c[0]===2)){r=0;continue}if(c[0]===3&&(!i||c[1]>i[0]&&c[1]=e.length&&(e=void 0),{value:e&&e[o++],done:!e}}};throw new TypeError(t?"Object is not iterable.":"Symbol.iterator is not defined.")}function N(e,t){var r=typeof Symbol=="function"&&e[Symbol.iterator];if(!r)return e;var o=r.call(e),n,i=[],a;try{for(;(t===void 0||t-- >0)&&!(n=o.next()).done;)i.push(n.value)}catch(s){a={error:s}}finally{try{n&&!n.done&&(r=o.return)&&r.call(o)}finally{if(a)throw a.error}}return i}function q(e,t,r){if(r||arguments.length===2)for(var o=0,n=t.length,i;o1||p(d,L)})},y&&(n[d]=y(n[d])))}function p(d,y){try{c(o[d](y))}catch(L){u(i[0][3],L)}}function c(d){d.value instanceof nt?Promise.resolve(d.value.v).then(l,f):u(i[0][2],d)}function l(d){p("next",d)}function f(d){p("throw",d)}function u(d,y){d(y),i.shift(),i.length&&p(i[0][0],i[0][1])}}function uo(e){if(!Symbol.asyncIterator)throw new TypeError("Symbol.asyncIterator is not defined.");var t=e[Symbol.asyncIterator],r;return t?t.call(e):(e=typeof he=="function"?he(e):e[Symbol.iterator](),r={},o("next"),o("throw"),o("return"),r[Symbol.asyncIterator]=function(){return this},r);function o(i){r[i]=e[i]&&function(a){return new Promise(function(s,p){a=e[i](a),n(s,p,a.done,a.value)})}}function n(i,a,s,p){Promise.resolve(p).then(function(c){i({value:c,done:s})},a)}}function H(e){return typeof e=="function"}function ut(e){var t=function(o){Error.call(o),o.stack=new Error().stack},r=e(t);return r.prototype=Object.create(Error.prototype),r.prototype.constructor=r,r}var zt=ut(function(e){return function(r){e(this),this.message=r?r.length+` errors occurred during unsubscription: +`+r.map(function(o,n){return 
n+1+") "+o.toString()}).join(` + `):"",this.name="UnsubscriptionError",this.errors=r}});function Qe(e,t){if(e){var r=e.indexOf(t);0<=r&&e.splice(r,1)}}var Ue=function(){function e(t){this.initialTeardown=t,this.closed=!1,this._parentage=null,this._finalizers=null}return e.prototype.unsubscribe=function(){var t,r,o,n,i;if(!this.closed){this.closed=!0;var a=this._parentage;if(a)if(this._parentage=null,Array.isArray(a))try{for(var s=he(a),p=s.next();!p.done;p=s.next()){var c=p.value;c.remove(this)}}catch(L){t={error:L}}finally{try{p&&!p.done&&(r=s.return)&&r.call(s)}finally{if(t)throw t.error}}else a.remove(this);var l=this.initialTeardown;if(H(l))try{l()}catch(L){i=L instanceof zt?L.errors:[L]}var f=this._finalizers;if(f){this._finalizers=null;try{for(var u=he(f),d=u.next();!d.done;d=u.next()){var y=d.value;try{ho(y)}catch(L){i=i!=null?i:[],L instanceof zt?i=q(q([],N(i)),N(L.errors)):i.push(L)}}}catch(L){o={error:L}}finally{try{d&&!d.done&&(n=u.return)&&n.call(u)}finally{if(o)throw o.error}}}if(i)throw new zt(i)}},e.prototype.add=function(t){var r;if(t&&t!==this)if(this.closed)ho(t);else{if(t instanceof e){if(t.closed||t._hasParent(this))return;t._addParent(this)}(this._finalizers=(r=this._finalizers)!==null&&r!==void 0?r:[]).push(t)}},e.prototype._hasParent=function(t){var r=this._parentage;return r===t||Array.isArray(r)&&r.includes(t)},e.prototype._addParent=function(t){var r=this._parentage;this._parentage=Array.isArray(r)?(r.push(t),r):r?[r,t]:t},e.prototype._removeParent=function(t){var r=this._parentage;r===t?this._parentage=null:Array.isArray(r)&&Qe(r,t)},e.prototype.remove=function(t){var r=this._finalizers;r&&Qe(r,t),t instanceof e&&t._removeParent(this)},e.EMPTY=function(){var t=new e;return t.closed=!0,t}(),e}();var Tr=Ue.EMPTY;function qt(e){return e instanceof Ue||e&&"closed"in e&&H(e.remove)&&H(e.add)&&H(e.unsubscribe)}function ho(e){H(e)?e():e.unsubscribe()}var Pe={onUnhandledError:null,onStoppedNotification:null,Promise:void 0,useDeprecatedSynchronousErrorHandling:!1,useDeprecatedNextContext:!1};var dt={setTimeout:function(e,t){for(var r=[],o=2;o0},enumerable:!1,configurable:!0}),t.prototype._trySubscribe=function(r){return this._throwIfClosed(),e.prototype._trySubscribe.call(this,r)},t.prototype._subscribe=function(r){return this._throwIfClosed(),this._checkFinalizedStatuses(r),this._innerSubscribe(r)},t.prototype._innerSubscribe=function(r){var o=this,n=this,i=n.hasError,a=n.isStopped,s=n.observers;return i||a?Tr:(this.currentObservers=null,s.push(r),new Ue(function(){o.currentObservers=null,Qe(s,r)}))},t.prototype._checkFinalizedStatuses=function(r){var o=this,n=o.hasError,i=o.thrownError,a=o.isStopped;n?r.error(i):a&&r.complete()},t.prototype.asObservable=function(){var r=new j;return r.source=this,r},t.create=function(r,o){return new To(r,o)},t}(j);var To=function(e){oe(t,e);function t(r,o){var n=e.call(this)||this;return n.destination=r,n.source=o,n}return t.prototype.next=function(r){var o,n;(n=(o=this.destination)===null||o===void 0?void 0:o.next)===null||n===void 0||n.call(o,r)},t.prototype.error=function(r){var o,n;(n=(o=this.destination)===null||o===void 0?void 0:o.error)===null||n===void 0||n.call(o,r)},t.prototype.complete=function(){var r,o;(o=(r=this.destination)===null||r===void 0?void 0:r.complete)===null||o===void 0||o.call(r)},t.prototype._subscribe=function(r){var o,n;return(n=(o=this.source)===null||o===void 0?void 0:o.subscribe(r))!==null&&n!==void 0?n:Tr},t}(g);var _r=function(e){oe(t,e);function t(r){var o=e.call(this)||this;return o._value=r,o}return 
Object.defineProperty(t.prototype,"value",{get:function(){return this.getValue()},enumerable:!1,configurable:!0}),t.prototype._subscribe=function(r){var o=e.prototype._subscribe.call(this,r);return!o.closed&&r.next(this._value),o},t.prototype.getValue=function(){var r=this,o=r.hasError,n=r.thrownError,i=r._value;if(o)throw n;return this._throwIfClosed(),i},t.prototype.next=function(r){e.prototype.next.call(this,this._value=r)},t}(g);var At={now:function(){return(At.delegate||Date).now()},delegate:void 0};var Ct=function(e){oe(t,e);function t(r,o,n){r===void 0&&(r=1/0),o===void 0&&(o=1/0),n===void 0&&(n=At);var i=e.call(this)||this;return i._bufferSize=r,i._windowTime=o,i._timestampProvider=n,i._buffer=[],i._infiniteTimeWindow=!0,i._infiniteTimeWindow=o===1/0,i._bufferSize=Math.max(1,r),i._windowTime=Math.max(1,o),i}return t.prototype.next=function(r){var o=this,n=o.isStopped,i=o._buffer,a=o._infiniteTimeWindow,s=o._timestampProvider,p=o._windowTime;n||(i.push(r),!a&&i.push(s.now()+p)),this._trimBuffer(),e.prototype.next.call(this,r)},t.prototype._subscribe=function(r){this._throwIfClosed(),this._trimBuffer();for(var o=this._innerSubscribe(r),n=this,i=n._infiniteTimeWindow,a=n._buffer,s=a.slice(),p=0;p0?e.prototype.schedule.call(this,r,o):(this.delay=o,this.state=r,this.scheduler.flush(this),this)},t.prototype.execute=function(r,o){return o>0||this.closed?e.prototype.execute.call(this,r,o):this._execute(r,o)},t.prototype.requestAsyncId=function(r,o,n){return n===void 0&&(n=0),n!=null&&n>0||n==null&&this.delay>0?e.prototype.requestAsyncId.call(this,r,o,n):(r.flush(this),0)},t}(gt);var Lo=function(e){oe(t,e);function t(){return e!==null&&e.apply(this,arguments)||this}return t}(yt);var kr=new Lo(Oo);var Mo=function(e){oe(t,e);function t(r,o){var n=e.call(this,r,o)||this;return n.scheduler=r,n.work=o,n}return t.prototype.requestAsyncId=function(r,o,n){return n===void 0&&(n=0),n!==null&&n>0?e.prototype.requestAsyncId.call(this,r,o,n):(r.actions.push(this),r._scheduled||(r._scheduled=vt.requestAnimationFrame(function(){return r.flush(void 0)})))},t.prototype.recycleAsyncId=function(r,o,n){var i;if(n===void 0&&(n=0),n!=null?n>0:this.delay>0)return e.prototype.recycleAsyncId.call(this,r,o,n);var a=r.actions;o!=null&&((i=a[a.length-1])===null||i===void 0?void 0:i.id)!==o&&(vt.cancelAnimationFrame(o),r._scheduled=void 0)},t}(gt);var _o=function(e){oe(t,e);function t(){return e!==null&&e.apply(this,arguments)||this}return t.prototype.flush=function(r){this._active=!0;var o=this._scheduled;this._scheduled=void 0;var n=this.actions,i;r=r||n.shift();do if(i=r.execute(r.state,r.delay))break;while((r=n[0])&&r.id===o&&n.shift());if(this._active=!1,i){for(;(r=n[0])&&r.id===o&&n.shift();)r.unsubscribe();throw i}},t}(yt);var me=new _o(Mo);var S=new j(function(e){return e.complete()});function Yt(e){return e&&H(e.schedule)}function Hr(e){return e[e.length-1]}function Xe(e){return H(Hr(e))?e.pop():void 0}function ke(e){return Yt(Hr(e))?e.pop():void 0}function Bt(e,t){return typeof Hr(e)=="number"?e.pop():t}var xt=function(e){return e&&typeof e.length=="number"&&typeof e!="function"};function Gt(e){return H(e==null?void 0:e.then)}function Jt(e){return H(e[bt])}function Xt(e){return Symbol.asyncIterator&&H(e==null?void 0:e[Symbol.asyncIterator])}function Zt(e){return new TypeError("You provided "+(e!==null&&typeof e=="object"?"an invalid object":"'"+e+"'")+" where a stream was expected. 
You can provide an Observable, Promise, ReadableStream, Array, AsyncIterable, or Iterable.")}function Zi(){return typeof Symbol!="function"||!Symbol.iterator?"@@iterator":Symbol.iterator}var er=Zi();function tr(e){return H(e==null?void 0:e[er])}function rr(e){return fo(this,arguments,function(){var r,o,n,i;return Nt(this,function(a){switch(a.label){case 0:r=e.getReader(),a.label=1;case 1:a.trys.push([1,,9,10]),a.label=2;case 2:return[4,nt(r.read())];case 3:return o=a.sent(),n=o.value,i=o.done,i?[4,nt(void 0)]:[3,5];case 4:return[2,a.sent()];case 5:return[4,nt(n)];case 6:return[4,a.sent()];case 7:return a.sent(),[3,2];case 8:return[3,10];case 9:return r.releaseLock(),[7];case 10:return[2]}})})}function or(e){return H(e==null?void 0:e.getReader)}function U(e){if(e instanceof j)return e;if(e!=null){if(Jt(e))return ea(e);if(xt(e))return ta(e);if(Gt(e))return ra(e);if(Xt(e))return Ao(e);if(tr(e))return oa(e);if(or(e))return na(e)}throw Zt(e)}function ea(e){return new j(function(t){var r=e[bt]();if(H(r.subscribe))return r.subscribe(t);throw new TypeError("Provided object does not correctly implement Symbol.observable")})}function ta(e){return new j(function(t){for(var r=0;r=2;return function(o){return o.pipe(e?b(function(n,i){return e(n,i,o)}):le,Te(1),r?De(t):Qo(function(){return new ir}))}}function jr(e){return e<=0?function(){return S}:E(function(t,r){var o=[];t.subscribe(T(r,function(n){o.push(n),e=2,!0))}function pe(e){e===void 0&&(e={});var t=e.connector,r=t===void 0?function(){return new g}:t,o=e.resetOnError,n=o===void 0?!0:o,i=e.resetOnComplete,a=i===void 0?!0:i,s=e.resetOnRefCountZero,p=s===void 0?!0:s;return function(c){var l,f,u,d=0,y=!1,L=!1,X=function(){f==null||f.unsubscribe(),f=void 0},te=function(){X(),l=u=void 0,y=L=!1},J=function(){var k=l;te(),k==null||k.unsubscribe()};return E(function(k,ft){d++,!L&&!y&&X();var qe=u=u!=null?u:r();ft.add(function(){d--,d===0&&!L&&!y&&(f=Ur(J,p))}),qe.subscribe(ft),!l&&d>0&&(l=new at({next:function(Fe){return qe.next(Fe)},error:function(Fe){L=!0,X(),f=Ur(te,n,Fe),qe.error(Fe)},complete:function(){y=!0,X(),f=Ur(te,a),qe.complete()}}),U(k).subscribe(l))})(c)}}function Ur(e,t){for(var r=[],o=2;oe.next(document)),e}function P(e,t=document){return Array.from(t.querySelectorAll(e))}function R(e,t=document){let r=fe(e,t);if(typeof r=="undefined")throw new ReferenceError(`Missing element: expected "${e}" to be present`);return r}function fe(e,t=document){return t.querySelector(e)||void 0}function Ie(){var e,t,r,o;return(o=(r=(t=(e=document.activeElement)==null?void 0:e.shadowRoot)==null?void 0:t.activeElement)!=null?r:document.activeElement)!=null?o:void 0}var wa=O(h(document.body,"focusin"),h(document.body,"focusout")).pipe(_e(1),Q(void 0),m(()=>Ie()||document.body),G(1));function et(e){return wa.pipe(m(t=>e.contains(t)),K())}function $t(e,t){return C(()=>O(h(e,"mouseenter").pipe(m(()=>!0)),h(e,"mouseleave").pipe(m(()=>!1))).pipe(t?Ht(r=>Le(+!r*t)):le,Q(e.matches(":hover"))))}function Jo(e,t){if(typeof t=="string"||typeof t=="number")e.innerHTML+=t.toString();else if(t instanceof Node)e.appendChild(t);else if(Array.isArray(t))for(let r of t)Jo(e,r)}function x(e,t,...r){let o=document.createElement(e);if(t)for(let n of Object.keys(t))typeof t[n]!="undefined"&&(typeof t[n]!="boolean"?o.setAttribute(n,t[n]):o.setAttribute(n,""));for(let n of r)Jo(o,n);return o}function sr(e){if(e>999){let t=+((e-950)%1e3>99);return`${((e+1e-6)/1e3).toFixed(t)}k`}else return e.toString()}function Tt(e){let t=x("script",{src:e});return 
C(()=>(document.head.appendChild(t),O(h(t,"load"),h(t,"error").pipe(v(()=>$r(()=>new ReferenceError(`Invalid script: ${e}`))))).pipe(m(()=>{}),_(()=>document.head.removeChild(t)),Te(1))))}var Xo=new g,Ta=C(()=>typeof ResizeObserver=="undefined"?Tt("https://unpkg.com/resize-observer-polyfill"):I(void 0)).pipe(m(()=>new ResizeObserver(e=>e.forEach(t=>Xo.next(t)))),v(e=>O(Ye,I(e)).pipe(_(()=>e.disconnect()))),G(1));function ce(e){return{width:e.offsetWidth,height:e.offsetHeight}}function ge(e){let t=e;for(;t.clientWidth===0&&t.parentElement;)t=t.parentElement;return Ta.pipe(w(r=>r.observe(t)),v(r=>Xo.pipe(b(o=>o.target===t),_(()=>r.unobserve(t)))),m(()=>ce(e)),Q(ce(e)))}function St(e){return{width:e.scrollWidth,height:e.scrollHeight}}function cr(e){let t=e.parentElement;for(;t&&(e.scrollWidth<=t.scrollWidth&&e.scrollHeight<=t.scrollHeight);)t=(e=t).parentElement;return t?e:void 0}function Zo(e){let t=[],r=e.parentElement;for(;r;)(e.clientWidth>r.clientWidth||e.clientHeight>r.clientHeight)&&t.push(r),r=(e=r).parentElement;return t.length===0&&t.push(document.documentElement),t}function Ve(e){return{x:e.offsetLeft,y:e.offsetTop}}function en(e){let t=e.getBoundingClientRect();return{x:t.x+window.scrollX,y:t.y+window.scrollY}}function tn(e){return O(h(window,"load"),h(window,"resize")).pipe(Me(0,me),m(()=>Ve(e)),Q(Ve(e)))}function pr(e){return{x:e.scrollLeft,y:e.scrollTop}}function Ne(e){return O(h(e,"scroll"),h(window,"scroll"),h(window,"resize")).pipe(Me(0,me),m(()=>pr(e)),Q(pr(e)))}var rn=new g,Sa=C(()=>I(new IntersectionObserver(e=>{for(let t of e)rn.next(t)},{threshold:0}))).pipe(v(e=>O(Ye,I(e)).pipe(_(()=>e.disconnect()))),G(1));function tt(e){return Sa.pipe(w(t=>t.observe(e)),v(t=>rn.pipe(b(({target:r})=>r===e),_(()=>t.unobserve(e)),m(({isIntersecting:r})=>r))))}function on(e,t=16){return Ne(e).pipe(m(({y:r})=>{let o=ce(e),n=St(e);return r>=n.height-o.height-t}),K())}var lr={drawer:R("[data-md-toggle=drawer]"),search:R("[data-md-toggle=search]")};function nn(e){return lr[e].checked}function Je(e,t){lr[e].checked!==t&&lr[e].click()}function ze(e){let t=lr[e];return h(t,"change").pipe(m(()=>t.checked),Q(t.checked))}function Oa(e,t){switch(e.constructor){case HTMLInputElement:return e.type==="radio"?/^Arrow/.test(t):!0;case HTMLSelectElement:case HTMLTextAreaElement:return!0;default:return e.isContentEditable}}function La(){return O(h(window,"compositionstart").pipe(m(()=>!0)),h(window,"compositionend").pipe(m(()=>!1))).pipe(Q(!1))}function an(){let e=h(window,"keydown").pipe(b(t=>!(t.metaKey||t.ctrlKey)),m(t=>({mode:nn("search")?"search":"global",type:t.key,claim(){t.preventDefault(),t.stopPropagation()}})),b(({mode:t,type:r})=>{if(t==="global"){let o=Ie();if(typeof o!="undefined")return!Oa(o,r)}return!0}),pe());return La().pipe(v(t=>t?S:e))}function ye(){return new URL(location.href)}function lt(e,t=!1){if(B("navigation.instant")&&!t){let r=x("a",{href:e.href});document.body.appendChild(r),r.click(),r.remove()}else location.href=e.href}function sn(){return new g}function cn(){return location.hash.slice(1)}function pn(e){let t=x("a",{href:e});t.addEventListener("click",r=>r.stopPropagation()),t.click()}function Ma(e){return O(h(window,"hashchange"),e).pipe(m(cn),Q(cn()),b(t=>t.length>0),G(1))}function ln(e){return Ma(e).pipe(m(t=>fe(`[id="${t}"]`)),b(t=>typeof t!="undefined"))}function Pt(e){let t=matchMedia(e);return ar(r=>t.addListener(()=>r(t.matches))).pipe(Q(t.matches))}function mn(){let e=matchMedia("print");return 
O(h(window,"beforeprint").pipe(m(()=>!0)),h(window,"afterprint").pipe(m(()=>!1))).pipe(Q(e.matches))}function Nr(e,t){return e.pipe(v(r=>r?t():S))}function zr(e,t){return new j(r=>{let o=new XMLHttpRequest;return o.open("GET",`${e}`),o.responseType="blob",o.addEventListener("load",()=>{o.status>=200&&o.status<300?(r.next(o.response),r.complete()):r.error(new Error(o.statusText))}),o.addEventListener("error",()=>{r.error(new Error("Network error"))}),o.addEventListener("abort",()=>{r.complete()}),typeof(t==null?void 0:t.progress$)!="undefined"&&(o.addEventListener("progress",n=>{var i;if(n.lengthComputable)t.progress$.next(n.loaded/n.total*100);else{let a=(i=o.getResponseHeader("Content-Length"))!=null?i:0;t.progress$.next(n.loaded/+a*100)}}),t.progress$.next(5)),o.send(),()=>o.abort()})}function je(e,t){return zr(e,t).pipe(v(r=>r.text()),m(r=>JSON.parse(r)),G(1))}function fn(e,t){let r=new DOMParser;return zr(e,t).pipe(v(o=>o.text()),m(o=>r.parseFromString(o,"text/html")),G(1))}function un(e,t){let r=new DOMParser;return zr(e,t).pipe(v(o=>o.text()),m(o=>r.parseFromString(o,"text/xml")),G(1))}function dn(){return{x:Math.max(0,scrollX),y:Math.max(0,scrollY)}}function hn(){return O(h(window,"scroll",{passive:!0}),h(window,"resize",{passive:!0})).pipe(m(dn),Q(dn()))}function bn(){return{width:innerWidth,height:innerHeight}}function vn(){return h(window,"resize",{passive:!0}).pipe(m(bn),Q(bn()))}function gn(){return z([hn(),vn()]).pipe(m(([e,t])=>({offset:e,size:t})),G(1))}function mr(e,{viewport$:t,header$:r}){let o=t.pipe(ee("size")),n=z([o,r]).pipe(m(()=>Ve(e)));return z([r,t,n]).pipe(m(([{height:i},{offset:a,size:s},{x:p,y:c}])=>({offset:{x:a.x-p,y:a.y-c+i},size:s})))}function _a(e){return h(e,"message",t=>t.data)}function Aa(e){let t=new g;return t.subscribe(r=>e.postMessage(r)),t}function yn(e,t=new Worker(e)){let r=_a(t),o=Aa(t),n=new g;n.subscribe(o);let i=o.pipe(Z(),ie(!0));return n.pipe(Z(),Re(r.pipe(W(i))),pe())}var Ca=R("#__config"),Ot=JSON.parse(Ca.textContent);Ot.base=`${new URL(Ot.base,ye())}`;function xe(){return Ot}function B(e){return Ot.features.includes(e)}function Ee(e,t){return typeof t!="undefined"?Ot.translations[e].replace("#",t.toString()):Ot.translations[e]}function Se(e,t=document){return R(`[data-md-component=${e}]`,t)}function ae(e,t=document){return P(`[data-md-component=${e}]`,t)}function ka(e){let t=R(".md-typeset > :first-child",e);return h(t,"click",{once:!0}).pipe(m(()=>R(".md-typeset",e)),m(r=>({hash:__md_hash(r.innerHTML)})))}function xn(e){if(!B("announce.dismiss")||!e.childElementCount)return S;if(!e.hidden){let t=R(".md-typeset",e);__md_hash(t.innerHTML)===__md_get("__announce")&&(e.hidden=!0)}return C(()=>{let t=new g;return t.subscribe(({hash:r})=>{e.hidden=!0,__md_set("__announce",r)}),ka(e).pipe(w(r=>t.next(r)),_(()=>t.complete()),m(r=>$({ref:e},r)))})}function Ha(e,{target$:t}){return t.pipe(m(r=>({hidden:r!==e})))}function En(e,t){let r=new g;return r.subscribe(({hidden:o})=>{e.hidden=o}),Ha(e,t).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))}function Rt(e,t){return t==="inline"?x("div",{class:"md-tooltip md-tooltip--inline",id:e,role:"tooltip"},x("div",{class:"md-tooltip__inner md-typeset"})):x("div",{class:"md-tooltip",id:e,role:"tooltip"},x("div",{class:"md-tooltip__inner md-typeset"}))}function wn(...e){return x("div",{class:"md-tooltip2",role:"tooltip"},x("div",{class:"md-tooltip2__inner md-typeset"},e))}function Tn(e,t){if(t=t?`${t}_annotation_${e}`:void 0,t){let r=t?`#${t}`:void 0;return 
x("aside",{class:"md-annotation",tabIndex:0},Rt(t),x("a",{href:r,class:"md-annotation__index",tabIndex:-1},x("span",{"data-md-annotation-id":e})))}else return x("aside",{class:"md-annotation",tabIndex:0},Rt(t),x("span",{class:"md-annotation__index",tabIndex:-1},x("span",{"data-md-annotation-id":e})))}function Sn(e){return x("button",{class:"md-clipboard md-icon",title:Ee("clipboard.copy"),"data-clipboard-target":`#${e} > code`})}var Ln=Mt(qr());function Qr(e,t){let r=t&2,o=t&1,n=Object.keys(e.terms).filter(p=>!e.terms[p]).reduce((p,c)=>[...p,x("del",null,(0,Ln.default)(c))," "],[]).slice(0,-1),i=xe(),a=new URL(e.location,i.base);B("search.highlight")&&a.searchParams.set("h",Object.entries(e.terms).filter(([,p])=>p).reduce((p,[c])=>`${p} ${c}`.trim(),""));let{tags:s}=xe();return x("a",{href:`${a}`,class:"md-search-result__link",tabIndex:-1},x("article",{class:"md-search-result__article md-typeset","data-md-score":e.score.toFixed(2)},r>0&&x("div",{class:"md-search-result__icon md-icon"}),r>0&&x("h1",null,e.title),r<=0&&x("h2",null,e.title),o>0&&e.text.length>0&&e.text,e.tags&&x("nav",{class:"md-tags"},e.tags.map(p=>{let c=s?p in s?`md-tag-icon md-tag--${s[p]}`:"md-tag-icon":"";return x("span",{class:`md-tag ${c}`},p)})),o>0&&n.length>0&&x("p",{class:"md-search-result__terms"},Ee("search.result.term.missing"),": ",...n)))}function Mn(e){let t=e[0].score,r=[...e],o=xe(),n=r.findIndex(l=>!`${new URL(l.location,o.base)}`.includes("#")),[i]=r.splice(n,1),a=r.findIndex(l=>l.scoreQr(l,1)),...p.length?[x("details",{class:"md-search-result__more"},x("summary",{tabIndex:-1},x("div",null,p.length>0&&p.length===1?Ee("search.result.more.one"):Ee("search.result.more.other",p.length))),...p.map(l=>Qr(l,1)))]:[]];return x("li",{class:"md-search-result__item"},c)}function _n(e){return x("ul",{class:"md-source__facts"},Object.entries(e).map(([t,r])=>x("li",{class:`md-source__fact md-source__fact--${t}`},typeof r=="number"?sr(r):r)))}function Kr(e){let t=`tabbed-control tabbed-control--${e}`;return x("div",{class:t,hidden:!0},x("button",{class:"tabbed-button",tabIndex:-1,"aria-hidden":"true"}))}function An(e){return x("div",{class:"md-typeset__scrollwrap"},x("div",{class:"md-typeset__table"},e))}function Ra(e){var o;let t=xe(),r=new URL(`../${e.version}/`,t.base);return x("li",{class:"md-version__item"},x("a",{href:`${r}`,class:"md-version__link"},e.title,((o=t.version)==null?void 0:o.alias)&&e.aliases.length>0&&x("span",{class:"md-version__alias"},e.aliases[0])))}function Cn(e,t){var o;let r=xe();return e=e.filter(n=>{var i;return!((i=n.properties)!=null&&i.hidden)}),x("div",{class:"md-version"},x("button",{class:"md-version__current","aria-label":Ee("select.version")},t.title,((o=r.version)==null?void 0:o.alias)&&t.aliases.length>0&&x("span",{class:"md-version__alias"},t.aliases[0])),x("ul",{class:"md-version__list"},e.map(Ra)))}var Ia=0;function ja(e){let t=z([et(e),$t(e)]).pipe(m(([o,n])=>o||n),K()),r=C(()=>Zo(e)).pipe(ne(Ne),pt(1),He(t),m(()=>en(e)));return t.pipe(Ae(o=>o),v(()=>z([t,r])),m(([o,n])=>({active:o,offset:n})),pe())}function Fa(e,t){let{content$:r,viewport$:o}=t,n=`__tooltip2_${Ia++}`;return C(()=>{let i=new g,a=new _r(!1);i.pipe(Z(),ie(!1)).subscribe(a);let s=a.pipe(Ht(c=>Le(+!c*250,kr)),K(),v(c=>c?r:S),w(c=>c.id=n),pe());z([i.pipe(m(({active:c})=>c)),s.pipe(v(c=>$t(c,250)),Q(!1))]).pipe(m(c=>c.some(l=>l))).subscribe(a);let p=a.pipe(b(c=>c),re(s,o),m(([c,l,{size:f}])=>{let 
u=e.getBoundingClientRect(),d=u.width/2;if(l.role==="tooltip")return{x:d,y:8+u.height};if(u.y>=f.height/2){let{height:y}=ce(l);return{x:d,y:-16-y}}else return{x:d,y:16+u.height}}));return z([s,i,p]).subscribe(([c,{offset:l},f])=>{c.style.setProperty("--md-tooltip-host-x",`${l.x}px`),c.style.setProperty("--md-tooltip-host-y",`${l.y}px`),c.style.setProperty("--md-tooltip-x",`${f.x}px`),c.style.setProperty("--md-tooltip-y",`${f.y}px`),c.classList.toggle("md-tooltip2--top",f.y<0),c.classList.toggle("md-tooltip2--bottom",f.y>=0)}),a.pipe(b(c=>c),re(s,(c,l)=>l),b(c=>c.role==="tooltip")).subscribe(c=>{let l=ce(R(":scope > *",c));c.style.setProperty("--md-tooltip-width",`${l.width}px`),c.style.setProperty("--md-tooltip-tail","0px")}),a.pipe(K(),ve(me),re(s)).subscribe(([c,l])=>{l.classList.toggle("md-tooltip2--active",c)}),z([a.pipe(b(c=>c)),s]).subscribe(([c,l])=>{l.role==="dialog"?(e.setAttribute("aria-controls",n),e.setAttribute("aria-haspopup","dialog")):e.setAttribute("aria-describedby",n)}),a.pipe(b(c=>!c)).subscribe(()=>{e.removeAttribute("aria-controls"),e.removeAttribute("aria-describedby"),e.removeAttribute("aria-haspopup")}),ja(e).pipe(w(c=>i.next(c)),_(()=>i.complete()),m(c=>$({ref:e},c)))})}function mt(e,{viewport$:t},r=document.body){return Fa(e,{content$:new j(o=>{let n=e.title,i=wn(n);return o.next(i),e.removeAttribute("title"),r.append(i),()=>{i.remove(),e.setAttribute("title",n)}}),viewport$:t})}function Ua(e,t){let r=C(()=>z([tn(e),Ne(t)])).pipe(m(([{x:o,y:n},i])=>{let{width:a,height:s}=ce(e);return{x:o-i.x+a/2,y:n-i.y+s/2}}));return et(e).pipe(v(o=>r.pipe(m(n=>({active:o,offset:n})),Te(+!o||1/0))))}function kn(e,t,{target$:r}){let[o,n]=Array.from(e.children);return C(()=>{let i=new g,a=i.pipe(Z(),ie(!0));return i.subscribe({next({offset:s}){e.style.setProperty("--md-tooltip-x",`${s.x}px`),e.style.setProperty("--md-tooltip-y",`${s.y}px`)},complete(){e.style.removeProperty("--md-tooltip-x"),e.style.removeProperty("--md-tooltip-y")}}),tt(e).pipe(W(a)).subscribe(s=>{e.toggleAttribute("data-md-visible",s)}),O(i.pipe(b(({active:s})=>s)),i.pipe(_e(250),b(({active:s})=>!s))).subscribe({next({active:s}){s?e.prepend(o):o.remove()},complete(){e.prepend(o)}}),i.pipe(Me(16,me)).subscribe(({active:s})=>{o.classList.toggle("md-tooltip--active",s)}),i.pipe(pt(125,me),b(()=>!!e.offsetParent),m(()=>e.offsetParent.getBoundingClientRect()),m(({x:s})=>s)).subscribe({next(s){s?e.style.setProperty("--md-tooltip-0",`${-s}px`):e.style.removeProperty("--md-tooltip-0")},complete(){e.style.removeProperty("--md-tooltip-0")}}),h(n,"click").pipe(W(a),b(s=>!(s.metaKey||s.ctrlKey))).subscribe(s=>{s.stopPropagation(),s.preventDefault()}),h(n,"mousedown").pipe(W(a),re(i)).subscribe(([s,{active:p}])=>{var c;if(s.button!==0||s.metaKey||s.ctrlKey)s.preventDefault();else if(p){s.preventDefault();let l=e.parentElement.closest(".md-annotation");l instanceof HTMLElement?l.focus():(c=Ie())==null||c.blur()}}),r.pipe(W(a),b(s=>s===o),Ge(125)).subscribe(()=>e.focus()),Ua(e,t).pipe(w(s=>i.next(s)),_(()=>i.complete()),m(s=>$({ref:e},s)))})}function Wa(e){return e.tagName==="CODE"?P(".c, .c1, .cm",e):[e]}function Da(e){let t=[];for(let r of Wa(e)){let o=[],n=document.createNodeIterator(r,NodeFilter.SHOW_TEXT);for(let i=n.nextNode();i;i=n.nextNode())o.push(i);for(let i of o){let a;for(;a=/(\(\d+\))(!)?/.exec(i.textContent);){let[,s,p]=a;if(typeof p=="undefined"){let c=i.splitText(a.index);i=c.splitText(s.length),t.push(c)}else{i.textContent=s,t.push(i);break}}}}return t}function 
Hn(e,t){t.append(...Array.from(e.childNodes))}function fr(e,t,{target$:r,print$:o}){let n=t.closest("[id]"),i=n==null?void 0:n.id,a=new Map;for(let s of Da(t)){let[,p]=s.textContent.match(/\((\d+)\)/);fe(`:scope > li:nth-child(${p})`,e)&&(a.set(p,Tn(p,i)),s.replaceWith(a.get(p)))}return a.size===0?S:C(()=>{let s=new g,p=s.pipe(Z(),ie(!0)),c=[];for(let[l,f]of a)c.push([R(".md-typeset",f),R(`:scope > li:nth-child(${l})`,e)]);return o.pipe(W(p)).subscribe(l=>{e.hidden=!l,e.classList.toggle("md-annotation-list",l);for(let[f,u]of c)l?Hn(f,u):Hn(u,f)}),O(...[...a].map(([,l])=>kn(l,t,{target$:r}))).pipe(_(()=>s.complete()),pe())})}function $n(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return $n(t)}}function Pn(e,t){return C(()=>{let r=$n(e);return typeof r!="undefined"?fr(r,e,t):S})}var Rn=Mt(Br());var Va=0;function In(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return In(t)}}function Na(e){return ge(e).pipe(m(({width:t})=>({scrollable:St(e).width>t})),ee("scrollable"))}function jn(e,t){let{matches:r}=matchMedia("(hover)"),o=C(()=>{let n=new g,i=n.pipe(jr(1));n.subscribe(({scrollable:c})=>{c&&r?e.setAttribute("tabindex","0"):e.removeAttribute("tabindex")});let a=[];if(Rn.default.isSupported()&&(e.closest(".copy")||B("content.code.copy")&&!e.closest(".no-copy"))){let c=e.closest("pre");c.id=`__code_${Va++}`;let l=Sn(c.id);c.insertBefore(l,e),B("content.tooltips")&&a.push(mt(l,{viewport$}))}let s=e.closest(".highlight");if(s instanceof HTMLElement){let c=In(s);if(typeof c!="undefined"&&(s.classList.contains("annotate")||B("content.code.annotate"))){let l=fr(c,e,t);a.push(ge(s).pipe(W(i),m(({width:f,height:u})=>f&&u),K(),v(f=>f?l:S)))}}return P(":scope > span[id]",e).length&&e.classList.add("md-code__content"),Na(e).pipe(w(c=>n.next(c)),_(()=>n.complete()),m(c=>$({ref:e},c)),Re(...a))});return B("content.lazy")?tt(e).pipe(b(n=>n),Te(1),v(()=>o)):o}function za(e,{target$:t,print$:r}){let o=!0;return O(t.pipe(m(n=>n.closest("details:not([open])")),b(n=>e===n),m(()=>({action:"open",reveal:!0}))),r.pipe(b(n=>n||!o),w(()=>o=e.open),m(n=>({action:n?"open":"close"}))))}function Fn(e,t){return C(()=>{let r=new g;return r.subscribe(({action:o,reveal:n})=>{e.toggleAttribute("open",o==="open"),n&&e.scrollIntoView()}),za(e,t).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))})}var Un=".node circle,.node ellipse,.node path,.node polygon,.node rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}marker{fill:var(--md-mermaid-edge-color)!important}.edgeLabel .label rect{fill:#0000}.label{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.label foreignObject{line-height:normal;overflow:visible}.label div .edgeLabel{color:var(--md-mermaid-label-fg-color)}.edgeLabel,.edgeLabel p,.label div .edgeLabel{background-color:var(--md-mermaid-label-bg-color)}.edgeLabel,.edgeLabel p{fill:var(--md-mermaid-label-bg-color);color:var(--md-mermaid-edge-color)}.edgePath .path,.flowchart-link{stroke:var(--md-mermaid-edge-color);stroke-width:.05rem}.edgePath .arrowheadPath{fill:var(--md-mermaid-edge-color);stroke:none}.cluster rect{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}.cluster span{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}g #flowchart-circleEnd,g #flowchart-circleStart,g #flowchart-crossEnd,g #flowchart-crossStart,g 
#flowchart-pointEnd,g #flowchart-pointStart{stroke:none}g.classGroup line,g.classGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.classGroup text{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.classLabel .box{fill:var(--md-mermaid-label-bg-color);background-color:var(--md-mermaid-label-bg-color);opacity:1}.classLabel .label{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node .divider{stroke:var(--md-mermaid-node-fg-color)}.relation{stroke:var(--md-mermaid-edge-color)}.cardinality{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.cardinality text{fill:inherit!important}defs #classDiagram-compositionEnd,defs #classDiagram-compositionStart,defs #classDiagram-dependencyEnd,defs #classDiagram-dependencyStart,defs #classDiagram-extensionEnd,defs #classDiagram-extensionStart{fill:var(--md-mermaid-edge-color)!important;stroke:var(--md-mermaid-edge-color)!important}defs #classDiagram-aggregationEnd,defs #classDiagram-aggregationStart{fill:var(--md-mermaid-label-bg-color)!important;stroke:var(--md-mermaid-edge-color)!important}g.stateGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.stateGroup .state-title{fill:var(--md-mermaid-label-fg-color)!important;font-family:var(--md-mermaid-font-family)}g.stateGroup .composit{fill:var(--md-mermaid-label-bg-color)}.nodeLabel,.nodeLabel p{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}a .nodeLabel{text-decoration:underline}.node circle.state-end,.node circle.state-start,.start-state{fill:var(--md-mermaid-edge-color);stroke:none}.end-state-inner,.end-state-outer{fill:var(--md-mermaid-edge-color)}.end-state-inner,.node circle.state-end{stroke:var(--md-mermaid-label-bg-color)}.transition{stroke:var(--md-mermaid-edge-color)}[id^=state-fork] rect,[id^=state-join] rect{fill:var(--md-mermaid-edge-color)!important;stroke:none!important}.statediagram-cluster.statediagram-cluster .inner{fill:var(--md-default-bg-color)}.statediagram-cluster rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.statediagram-state rect.divider{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}defs #statediagram-barbEnd{stroke:var(--md-mermaid-edge-color)}.attributeBoxEven,.attributeBoxOdd{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityBox{fill:var(--md-mermaid-label-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityLabel{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.relationshipLabelBox{fill:var(--md-mermaid-label-bg-color);fill-opacity:1;background-color:var(--md-mermaid-label-bg-color);opacity:1}.relationshipLabel{fill:var(--md-mermaid-label-fg-color)}.relationshipLine{stroke:var(--md-mermaid-edge-color)}defs #ONE_OR_MORE_END *,defs #ONE_OR_MORE_START *,defs #ONLY_ONE_END *,defs #ONLY_ONE_START *,defs #ZERO_OR_MORE_END *,defs #ZERO_OR_MORE_START *,defs #ZERO_OR_ONE_END *,defs #ZERO_OR_ONE_START *{stroke:var(--md-mermaid-edge-color)!important}defs #ZERO_OR_MORE_END circle,defs #ZERO_OR_MORE_START circle{fill:var(--md-mermaid-label-bg-color)}.actor{fill:var(--md-mermaid-sequence-actor-bg-color);stroke:var(--md-mermaid-sequence-actor-border-color)}text.actor>tspan{fill:var(--md-mermaid-sequence-actor-fg-color);font-family:var(--md-mermaid-font-family)}line{stroke:var(--md-mermaid-sequence-actor-line-color)}.actor-man circle,.actor-man 
line{fill:var(--md-mermaid-sequence-actorman-bg-color);stroke:var(--md-mermaid-sequence-actorman-line-color)}.messageLine0,.messageLine1{stroke:var(--md-mermaid-sequence-message-line-color)}.note{fill:var(--md-mermaid-sequence-note-bg-color);stroke:var(--md-mermaid-sequence-note-border-color)}.loopText,.loopText>tspan,.messageText,.noteText>tspan{stroke:none;font-family:var(--md-mermaid-font-family)!important}.messageText{fill:var(--md-mermaid-sequence-message-fg-color)}.loopText,.loopText>tspan{fill:var(--md-mermaid-sequence-loop-fg-color)}.noteText>tspan{fill:var(--md-mermaid-sequence-note-fg-color)}#arrowhead path{fill:var(--md-mermaid-sequence-message-line-color);stroke:none}.loopLine{fill:var(--md-mermaid-sequence-loop-bg-color);stroke:var(--md-mermaid-sequence-loop-border-color)}.labelBox{fill:var(--md-mermaid-sequence-label-bg-color);stroke:none}.labelText,.labelText>span{fill:var(--md-mermaid-sequence-label-fg-color);font-family:var(--md-mermaid-font-family)}.sequenceNumber{fill:var(--md-mermaid-sequence-number-fg-color)}rect.rect{fill:var(--md-mermaid-sequence-box-bg-color);stroke:none}rect.rect+text.text{fill:var(--md-mermaid-sequence-box-fg-color)}defs #sequencenumber{fill:var(--md-mermaid-sequence-number-bg-color)!important}";var Gr,Qa=0;function Ka(){return typeof mermaid=="undefined"||mermaid instanceof Element?Tt("https://unpkg.com/mermaid@11/dist/mermaid.min.js"):I(void 0)}function Wn(e){return e.classList.remove("mermaid"),Gr||(Gr=Ka().pipe(w(()=>mermaid.initialize({startOnLoad:!1,themeCSS:Un,sequence:{actorFontSize:"16px",messageFontSize:"16px",noteFontSize:"16px"}})),m(()=>{}),G(1))),Gr.subscribe(()=>co(this,null,function*(){e.classList.add("mermaid");let t=`__mermaid_${Qa++}`,r=x("div",{class:"mermaid"}),o=e.textContent,{svg:n,fn:i}=yield mermaid.render(t,o),a=r.attachShadow({mode:"closed"});a.innerHTML=n,e.replaceWith(r),i==null||i(a)})),Gr.pipe(m(()=>({ref:e})))}var Dn=x("table");function Vn(e){return e.replaceWith(Dn),Dn.replaceWith(An(e)),I({ref:e})}function Ya(e){let t=e.find(r=>r.checked)||e[0];return O(...e.map(r=>h(r,"change").pipe(m(()=>R(`label[for="${r.id}"]`))))).pipe(Q(R(`label[for="${t.id}"]`)),m(r=>({active:r})))}function Nn(e,{viewport$:t,target$:r}){let o=R(".tabbed-labels",e),n=P(":scope > input",e),i=Kr("prev");e.append(i);let a=Kr("next");return e.append(a),C(()=>{let s=new g,p=s.pipe(Z(),ie(!0));z([s,ge(e),tt(e)]).pipe(W(p),Me(1,me)).subscribe({next([{active:c},l]){let f=Ve(c),{width:u}=ce(c);e.style.setProperty("--md-indicator-x",`${f.x}px`),e.style.setProperty("--md-indicator-width",`${u}px`);let d=pr(o);(f.xd.x+l.width)&&o.scrollTo({left:Math.max(0,f.x-16),behavior:"smooth"})},complete(){e.style.removeProperty("--md-indicator-x"),e.style.removeProperty("--md-indicator-width")}}),z([Ne(o),ge(o)]).pipe(W(p)).subscribe(([c,l])=>{let f=St(o);i.hidden=c.x<16,a.hidden=c.x>f.width-l.width-16}),O(h(i,"click").pipe(m(()=>-1)),h(a,"click").pipe(m(()=>1))).pipe(W(p)).subscribe(c=>{let{width:l}=ce(o);o.scrollBy({left:l*c,behavior:"smooth"})}),r.pipe(W(p),b(c=>n.includes(c))).subscribe(c=>c.click()),o.classList.add("tabbed-labels--linked");for(let c of n){let l=R(`label[for="${c.id}"]`);l.replaceChildren(x("a",{href:`#${l.htmlFor}`,tabIndex:-1},...Array.from(l.childNodes))),h(l.firstElementChild,"click").pipe(W(p),b(f=>!(f.metaKey||f.ctrlKey)),w(f=>{f.preventDefault(),f.stopPropagation()})).subscribe(()=>{history.replaceState({},"",`#${l.htmlFor}`),l.click()})}return B("content.tabs.link")&&s.pipe(Ce(1),re(t)).subscribe(([{active:c},{offset:l}])=>{let 
f=c.innerText.trim();if(c.hasAttribute("data-md-switching"))c.removeAttribute("data-md-switching");else{let u=e.offsetTop-l.y;for(let y of P("[data-tabs]"))for(let L of P(":scope > input",y)){let X=R(`label[for="${L.id}"]`);if(X!==c&&X.innerText.trim()===f){X.setAttribute("data-md-switching",""),L.click();break}}window.scrollTo({top:e.offsetTop-u});let d=__md_get("__tabs")||[];__md_set("__tabs",[...new Set([f,...d])])}}),s.pipe(W(p)).subscribe(()=>{for(let c of P("audio, video",e))c.pause()}),Ya(n).pipe(w(c=>s.next(c)),_(()=>s.complete()),m(c=>$({ref:e},c)))}).pipe(Ke(se))}function zn(e,{viewport$:t,target$:r,print$:o}){return O(...P(".annotate:not(.highlight)",e).map(n=>Pn(n,{target$:r,print$:o})),...P("pre:not(.mermaid) > code",e).map(n=>jn(n,{target$:r,print$:o})),...P("pre.mermaid",e).map(n=>Wn(n)),...P("table:not([class])",e).map(n=>Vn(n)),...P("details",e).map(n=>Fn(n,{target$:r,print$:o})),...P("[data-tabs]",e).map(n=>Nn(n,{viewport$:t,target$:r})),...P("[title]",e).filter(()=>B("content.tooltips")).map(n=>mt(n,{viewport$:t})))}function Ba(e,{alert$:t}){return t.pipe(v(r=>O(I(!0),I(!1).pipe(Ge(2e3))).pipe(m(o=>({message:r,active:o})))))}function qn(e,t){let r=R(".md-typeset",e);return C(()=>{let o=new g;return o.subscribe(({message:n,active:i})=>{e.classList.toggle("md-dialog--active",i),r.textContent=n}),Ba(e,t).pipe(w(n=>o.next(n)),_(()=>o.complete()),m(n=>$({ref:e},n)))})}var Ga=0;function Ja(e,t){document.body.append(e);let{width:r}=ce(e);e.style.setProperty("--md-tooltip-width",`${r}px`),e.remove();let o=cr(t),n=typeof o!="undefined"?Ne(o):I({x:0,y:0}),i=O(et(t),$t(t)).pipe(K());return z([i,n]).pipe(m(([a,s])=>{let{x:p,y:c}=Ve(t),l=ce(t),f=t.closest("table");return f&&t.parentElement&&(p+=f.offsetLeft+t.parentElement.offsetLeft,c+=f.offsetTop+t.parentElement.offsetTop),{active:a,offset:{x:p-s.x+l.width/2-r/2,y:c-s.y+l.height+8}}}))}function Qn(e){let t=e.title;if(!t.length)return S;let r=`__tooltip_${Ga++}`,o=Rt(r,"inline"),n=R(".md-typeset",o);return n.innerHTML=t,C(()=>{let i=new g;return i.subscribe({next({offset:a}){o.style.setProperty("--md-tooltip-x",`${a.x}px`),o.style.setProperty("--md-tooltip-y",`${a.y}px`)},complete(){o.style.removeProperty("--md-tooltip-x"),o.style.removeProperty("--md-tooltip-y")}}),O(i.pipe(b(({active:a})=>a)),i.pipe(_e(250),b(({active:a})=>!a))).subscribe({next({active:a}){a?(e.insertAdjacentElement("afterend",o),e.setAttribute("aria-describedby",r),e.removeAttribute("title")):(o.remove(),e.removeAttribute("aria-describedby"),e.setAttribute("title",t))},complete(){o.remove(),e.removeAttribute("aria-describedby"),e.setAttribute("title",t)}}),i.pipe(Me(16,me)).subscribe(({active:a})=>{o.classList.toggle("md-tooltip--active",a)}),i.pipe(pt(125,me),b(()=>!!e.offsetParent),m(()=>e.offsetParent.getBoundingClientRect()),m(({x:a})=>a)).subscribe({next(a){a?o.style.setProperty("--md-tooltip-0",`${-a}px`):o.style.removeProperty("--md-tooltip-0")},complete(){o.style.removeProperty("--md-tooltip-0")}}),Ja(o,e).pipe(w(a=>i.next(a)),_(()=>i.complete()),m(a=>$({ref:e},a)))}).pipe(Ke(se))}function Xa({viewport$:e}){if(!B("header.autohide"))return I(!1);let t=e.pipe(m(({offset:{y:n}})=>n),Be(2,1),m(([n,i])=>[nMath.abs(i-n.y)>100),m(([,[n]])=>n),K()),o=ze("search");return z([e,o]).pipe(m(([{offset:n},i])=>n.y>400&&!i),K(),v(n=>n?r:I(!1)),Q(!1))}function Kn(e,t){return C(()=>z([ge(e),Xa(t)])).pipe(m(([{height:r},o])=>({height:r,hidden:o})),K((r,o)=>r.height===o.height&&r.hidden===o.hidden),G(1))}function Yn(e,{header$:t,main$:r}){return C(()=>{let o=new 
g,n=o.pipe(Z(),ie(!0));o.pipe(ee("active"),He(t)).subscribe(([{active:a},{hidden:s}])=>{e.classList.toggle("md-header--shadow",a&&!s),e.hidden=s});let i=ue(P("[title]",e)).pipe(b(()=>B("content.tooltips")),ne(a=>Qn(a)));return r.subscribe(o),t.pipe(W(n),m(a=>$({ref:e},a)),Re(i.pipe(W(n))))})}function Za(e,{viewport$:t,header$:r}){return mr(e,{viewport$:t,header$:r}).pipe(m(({offset:{y:o}})=>{let{height:n}=ce(e);return{active:o>=n}}),ee("active"))}function Bn(e,t){return C(()=>{let r=new g;r.subscribe({next({active:n}){e.classList.toggle("md-header__title--active",n)},complete(){e.classList.remove("md-header__title--active")}});let o=fe(".md-content h1");return typeof o=="undefined"?S:Za(o,t).pipe(w(n=>r.next(n)),_(()=>r.complete()),m(n=>$({ref:e},n)))})}function Gn(e,{viewport$:t,header$:r}){let o=r.pipe(m(({height:i})=>i),K()),n=o.pipe(v(()=>ge(e).pipe(m(({height:i})=>({top:e.offsetTop,bottom:e.offsetTop+i})),ee("bottom"))));return z([o,n,t]).pipe(m(([i,{top:a,bottom:s},{offset:{y:p},size:{height:c}}])=>(c=Math.max(0,c-Math.max(0,a-p,i)-Math.max(0,c+p-s)),{offset:a-i,height:c,active:a-i<=p})),K((i,a)=>i.offset===a.offset&&i.height===a.height&&i.active===a.active))}function es(e){let t=__md_get("__palette")||{index:e.findIndex(o=>matchMedia(o.getAttribute("data-md-color-media")).matches)},r=Math.max(0,Math.min(t.index,e.length-1));return I(...e).pipe(ne(o=>h(o,"change").pipe(m(()=>o))),Q(e[r]),m(o=>({index:e.indexOf(o),color:{media:o.getAttribute("data-md-color-media"),scheme:o.getAttribute("data-md-color-scheme"),primary:o.getAttribute("data-md-color-primary"),accent:o.getAttribute("data-md-color-accent")}})),G(1))}function Jn(e){let t=P("input",e),r=x("meta",{name:"theme-color"});document.head.appendChild(r);let o=x("meta",{name:"color-scheme"});document.head.appendChild(o);let n=Pt("(prefers-color-scheme: light)");return C(()=>{let i=new g;return i.subscribe(a=>{if(document.body.setAttribute("data-md-color-switching",""),a.color.media==="(prefers-color-scheme)"){let s=matchMedia("(prefers-color-scheme: light)"),p=document.querySelector(s.matches?"[data-md-color-media='(prefers-color-scheme: light)']":"[data-md-color-media='(prefers-color-scheme: dark)']");a.color.scheme=p.getAttribute("data-md-color-scheme"),a.color.primary=p.getAttribute("data-md-color-primary"),a.color.accent=p.getAttribute("data-md-color-accent")}for(let[s,p]of Object.entries(a.color))document.body.setAttribute(`data-md-color-${s}`,p);for(let s=0;sa.key==="Enter"),re(i,(a,s)=>s)).subscribe(({index:a})=>{a=(a+1)%t.length,t[a].click(),t[a].focus()}),i.pipe(m(()=>{let a=Se("header"),s=window.getComputedStyle(a);return o.content=s.colorScheme,s.backgroundColor.match(/\d+/g).map(p=>(+p).toString(16).padStart(2,"0")).join("")})).subscribe(a=>r.content=`#${a}`),i.pipe(ve(se)).subscribe(()=>{document.body.removeAttribute("data-md-color-switching")}),es(t).pipe(W(n.pipe(Ce(1))),ct(),w(a=>i.next(a)),_(()=>i.complete()),m(a=>$({ref:e},a)))})}function Xn(e,{progress$:t}){return C(()=>{let r=new g;return r.subscribe(({value:o})=>{e.style.setProperty("--md-progress-value",`${o}`)}),t.pipe(w(o=>r.next({value:o})),_(()=>r.complete()),m(o=>({ref:e,value:o})))})}var Jr=Mt(Br());function ts(e){e.setAttribute("data-md-copying","");let t=e.closest("[data-copy]"),r=t?t.getAttribute("data-copy"):e.innerText;return e.removeAttribute("data-md-copying"),r.trimEnd()}function Zn({alert$:e}){Jr.default.isSupported()&&new j(t=>{new Jr.default("[data-clipboard-target], 
[data-clipboard-text]",{text:r=>r.getAttribute("data-clipboard-text")||ts(R(r.getAttribute("data-clipboard-target")))}).on("success",r=>t.next(r))}).pipe(w(t=>{t.trigger.focus()}),m(()=>Ee("clipboard.copied"))).subscribe(e)}function ei(e,t){return e.protocol=t.protocol,e.hostname=t.hostname,e}function rs(e,t){let r=new Map;for(let o of P("url",e)){let n=R("loc",o),i=[ei(new URL(n.textContent),t)];r.set(`${i[0]}`,i);for(let a of P("[rel=alternate]",o)){let s=a.getAttribute("href");s!=null&&i.push(ei(new URL(s),t))}}return r}function ur(e){return un(new URL("sitemap.xml",e)).pipe(m(t=>rs(t,new URL(e))),de(()=>I(new Map)))}function os(e,t){if(!(e.target instanceof Element))return S;let r=e.target.closest("a");if(r===null)return S;if(r.target||e.metaKey||e.ctrlKey)return S;let o=new URL(r.href);return o.search=o.hash="",t.has(`${o}`)?(e.preventDefault(),I(new URL(r.href))):S}function ti(e){let t=new Map;for(let r of P(":scope > *",e.head))t.set(r.outerHTML,r);return t}function ri(e){for(let t of P("[href], [src]",e))for(let r of["href","src"]){let o=t.getAttribute(r);if(o&&!/^(?:[a-z]+:)?\/\//i.test(o)){t[r]=t[r];break}}return I(e)}function ns(e){for(let o of["[data-md-component=announce]","[data-md-component=container]","[data-md-component=header-topic]","[data-md-component=outdated]","[data-md-component=logo]","[data-md-component=skip]",...B("navigation.tabs.sticky")?["[data-md-component=tabs]"]:[]]){let n=fe(o),i=fe(o,e);typeof n!="undefined"&&typeof i!="undefined"&&n.replaceWith(i)}let t=ti(document);for(let[o,n]of ti(e))t.has(o)?t.delete(o):document.head.appendChild(n);for(let o of t.values()){let n=o.getAttribute("name");n!=="theme-color"&&n!=="color-scheme"&&o.remove()}let r=Se("container");return We(P("script",r)).pipe(v(o=>{let n=e.createElement("script");if(o.src){for(let i of o.getAttributeNames())n.setAttribute(i,o.getAttribute(i));return o.replaceWith(n),new j(i=>{n.onload=()=>i.complete()})}else return n.textContent=o.textContent,o.replaceWith(n),S}),Z(),ie(document))}function oi({location$:e,viewport$:t,progress$:r}){let o=xe();if(location.protocol==="file:")return S;let n=ur(o.base);I(document).subscribe(ri);let i=h(document.body,"click").pipe(He(n),v(([p,c])=>os(p,c)),pe()),a=h(window,"popstate").pipe(m(ye),pe());i.pipe(re(t)).subscribe(([p,{offset:c}])=>{history.replaceState(c,""),history.pushState(null,"",p)}),O(i,a).subscribe(e);let s=e.pipe(ee("pathname"),v(p=>fn(p,{progress$:r}).pipe(de(()=>(lt(p,!0),S)))),v(ri),v(ns),pe());return O(s.pipe(re(e,(p,c)=>c)),s.pipe(v(()=>e),ee("pathname"),v(()=>e),ee("hash")),e.pipe(K((p,c)=>p.pathname===c.pathname&&p.hash===c.hash),v(()=>i),w(()=>history.back()))).subscribe(p=>{var c,l;history.state!==null||!p.hash?window.scrollTo(0,(l=(c=history.state)==null?void 0:c.y)!=null?l:0):(history.scrollRestoration="auto",pn(p.hash),history.scrollRestoration="manual")}),e.subscribe(()=>{history.scrollRestoration="manual"}),h(window,"beforeunload").subscribe(()=>{history.scrollRestoration="auto"}),t.pipe(ee("offset"),_e(100)).subscribe(({offset:p})=>{history.replaceState(p,"")}),s}var ni=Mt(qr());function ii(e){let t=e.separator.split("|").map(n=>n.replace(/(\(\?[!=<][^)]+\))/g,"").length===0?"\uFFFD":n).join("|"),r=new RegExp(t,"img"),o=(n,i,a)=>`${i}${a}`;return n=>{n=n.replace(/[\s*+\-:~^]+/g," ").trim();let i=new RegExp(`(^|${e.separator}|)(${n.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return a=>(0,ni.default)(a).replace(i,o).replace(/<\/mark>(\s+)]*>/img,"$1")}}function jt(e){return e.type===1}function dr(e){return 
e.type===3}function ai(e,t){let r=yn(e);return O(I(location.protocol!=="file:"),ze("search")).pipe(Ae(o=>o),v(()=>t)).subscribe(({config:o,docs:n})=>r.next({type:0,data:{config:o,docs:n,options:{suggest:B("search.suggest")}}})),r}function si(e){var l;let{selectedVersionSitemap:t,selectedVersionBaseURL:r,currentLocation:o,currentBaseURL:n}=e,i=(l=Xr(n))==null?void 0:l.pathname;if(i===void 0)return;let a=ss(o.pathname,i);if(a===void 0)return;let s=ps(t.keys());if(!t.has(s))return;let p=Xr(a,s);if(!p||!t.has(p.href))return;let c=Xr(a,r);if(c)return c.hash=o.hash,c.search=o.search,c}function Xr(e,t){try{return new URL(e,t)}catch(r){return}}function ss(e,t){if(e.startsWith(t))return e.slice(t.length)}function cs(e,t){let r=Math.min(e.length,t.length),o;for(o=0;oS)),o=r.pipe(m(n=>{let[,i]=t.base.match(/([^/]+)\/?$/);return n.find(({version:a,aliases:s})=>a===i||s.includes(i))||n[0]}));r.pipe(m(n=>new Map(n.map(i=>[`${new URL(`../${i.version}/`,t.base)}`,i]))),v(n=>h(document.body,"click").pipe(b(i=>!i.metaKey&&!i.ctrlKey),re(o),v(([i,a])=>{if(i.target instanceof Element){let s=i.target.closest("a");if(s&&!s.target&&n.has(s.href)){let p=s.href;return!i.target.closest(".md-version")&&n.get(p)===a?S:(i.preventDefault(),I(new URL(p)))}}return S}),v(i=>ur(i).pipe(m(a=>{var s;return(s=si({selectedVersionSitemap:a,selectedVersionBaseURL:i,currentLocation:ye(),currentBaseURL:t.base}))!=null?s:i})))))).subscribe(n=>lt(n,!0)),z([r,o]).subscribe(([n,i])=>{R(".md-header__topic").appendChild(Cn(n,i))}),e.pipe(v(()=>o)).subscribe(n=>{var a;let i=__md_get("__outdated",sessionStorage);if(i===null){i=!0;let s=((a=t.version)==null?void 0:a.default)||"latest";Array.isArray(s)||(s=[s]);e:for(let p of s)for(let c of n.aliases.concat(n.version))if(new RegExp(p,"i").test(c)){i=!1;break e}__md_set("__outdated",i,sessionStorage)}if(i)for(let s of ae("outdated"))s.hidden=!1})}function ls(e,{worker$:t}){let{searchParams:r}=ye();r.has("q")&&(Je("search",!0),e.value=r.get("q"),e.focus(),ze("search").pipe(Ae(i=>!i)).subscribe(()=>{let i=ye();i.searchParams.delete("q"),history.replaceState({},"",`${i}`)}));let o=et(e),n=O(t.pipe(Ae(jt)),h(e,"keyup"),o).pipe(m(()=>e.value),K());return z([n,o]).pipe(m(([i,a])=>({value:i,focus:a})),G(1))}function pi(e,{worker$:t}){let r=new g,o=r.pipe(Z(),ie(!0));z([t.pipe(Ae(jt)),r],(i,a)=>a).pipe(ee("value")).subscribe(({value:i})=>t.next({type:2,data:i})),r.pipe(ee("focus")).subscribe(({focus:i})=>{i&&Je("search",i)}),h(e.form,"reset").pipe(W(o)).subscribe(()=>e.focus());let n=R("header [for=__search]");return h(n,"click").subscribe(()=>e.focus()),ls(e,{worker$:t}).pipe(w(i=>r.next(i)),_(()=>r.complete()),m(i=>$({ref:e},i)),G(1))}function li(e,{worker$:t,query$:r}){let o=new g,n=on(e.parentElement).pipe(b(Boolean)),i=e.parentElement,a=R(":scope > :first-child",e),s=R(":scope > :last-child",e);ze("search").subscribe(l=>s.setAttribute("role",l?"list":"presentation")),o.pipe(re(r),Wr(t.pipe(Ae(jt)))).subscribe(([{items:l},{value:f}])=>{switch(l.length){case 0:a.textContent=f.length?Ee("search.result.none"):Ee("search.result.placeholder");break;case 1:a.textContent=Ee("search.result.one");break;default:let u=sr(l.length);a.textContent=Ee("search.result.other",u)}});let p=o.pipe(w(()=>s.innerHTML=""),v(({items:l})=>O(I(...l.slice(0,10)),I(...l.slice(10)).pipe(Be(4),Vr(n),v(([f])=>f)))),m(Mn),pe());return p.subscribe(l=>s.appendChild(l)),p.pipe(ne(l=>{let f=fe("details",l);return typeof 
f=="undefined"?S:h(f,"toggle").pipe(W(o),m(()=>f))})).subscribe(l=>{l.open===!1&&l.offsetTop<=i.scrollTop&&i.scrollTo({top:l.offsetTop})}),t.pipe(b(dr),m(({data:l})=>l)).pipe(w(l=>o.next(l)),_(()=>o.complete()),m(l=>$({ref:e},l)))}function ms(e,{query$:t}){return t.pipe(m(({value:r})=>{let o=ye();return o.hash="",r=r.replace(/\s+/g,"+").replace(/&/g,"%26").replace(/=/g,"%3D"),o.search=`q=${r}`,{url:o}}))}function mi(e,t){let r=new g,o=r.pipe(Z(),ie(!0));return r.subscribe(({url:n})=>{e.setAttribute("data-clipboard-text",e.href),e.href=`${n}`}),h(e,"click").pipe(W(o)).subscribe(n=>n.preventDefault()),ms(e,t).pipe(w(n=>r.next(n)),_(()=>r.complete()),m(n=>$({ref:e},n)))}function fi(e,{worker$:t,keyboard$:r}){let o=new g,n=Se("search-query"),i=O(h(n,"keydown"),h(n,"focus")).pipe(ve(se),m(()=>n.value),K());return o.pipe(He(i),m(([{suggest:s},p])=>{let c=p.split(/([\s-]+)/);if(s!=null&&s.length&&c[c.length-1]){let l=s[s.length-1];l.startsWith(c[c.length-1])&&(c[c.length-1]=l)}else c.length=0;return c})).subscribe(s=>e.innerHTML=s.join("").replace(/\s/g," ")),r.pipe(b(({mode:s})=>s==="search")).subscribe(s=>{switch(s.type){case"ArrowRight":e.innerText.length&&n.selectionStart===n.value.length&&(n.value=e.innerText);break}}),t.pipe(b(dr),m(({data:s})=>s)).pipe(w(s=>o.next(s)),_(()=>o.complete()),m(()=>({ref:e})))}function ui(e,{index$:t,keyboard$:r}){let o=xe();try{let n=ai(o.search,t),i=Se("search-query",e),a=Se("search-result",e);h(e,"click").pipe(b(({target:p})=>p instanceof Element&&!!p.closest("a"))).subscribe(()=>Je("search",!1)),r.pipe(b(({mode:p})=>p==="search")).subscribe(p=>{let c=Ie();switch(p.type){case"Enter":if(c===i){let l=new Map;for(let f of P(":first-child [href]",a)){let u=f.firstElementChild;l.set(f,parseFloat(u.getAttribute("data-md-score")))}if(l.size){let[[f]]=[...l].sort(([,u],[,d])=>d-u);f.click()}p.claim()}break;case"Escape":case"Tab":Je("search",!1),i.blur();break;case"ArrowUp":case"ArrowDown":if(typeof c=="undefined")i.focus();else{let l=[i,...P(":not(details) > [href], summary, details[open] [href]",a)],f=Math.max(0,(Math.max(0,l.indexOf(c))+l.length+(p.type==="ArrowUp"?-1:1))%l.length);l[f].focus()}p.claim();break;default:i!==Ie()&&i.focus()}}),r.pipe(b(({mode:p})=>p==="global")).subscribe(p=>{switch(p.type){case"f":case"s":case"/":i.focus(),i.select(),p.claim();break}});let s=pi(i,{worker$:n});return O(s,li(a,{worker$:n,query$:s})).pipe(Re(...ae("search-share",e).map(p=>mi(p,{query$:s})),...ae("search-suggest",e).map(p=>fi(p,{worker$:n,keyboard$:r}))))}catch(n){return e.hidden=!0,Ye}}function di(e,{index$:t,location$:r}){return z([t,r.pipe(Q(ye()),b(o=>!!o.searchParams.get("h")))]).pipe(m(([o,n])=>ii(o.config)(n.searchParams.get("h"))),m(o=>{var a;let n=new Map,i=document.createNodeIterator(e,NodeFilter.SHOW_TEXT);for(let s=i.nextNode();s;s=i.nextNode())if((a=s.parentElement)!=null&&a.offsetHeight){let p=s.textContent,c=o(p);c.length>p.length&&n.set(s,c)}for(let[s,p]of n){let{childNodes:c}=x("span",null,p);s.replaceWith(...Array.from(c))}return{ref:e,nodes:n}}))}function fs(e,{viewport$:t,main$:r}){let o=e.closest(".md-grid"),n=o.offsetTop-o.parentElement.offsetTop;return z([r,t]).pipe(m(([{offset:i,height:a},{offset:{y:s}}])=>(a=a+Math.min(n,Math.max(0,s-i))-n,{height:a,locked:s>=i+n})),K((i,a)=>i.height===a.height&&i.locked===a.locked))}function Zr(e,o){var n=o,{header$:t}=n,r=so(n,["header$"]);let i=R(".md-sidebar__scrollwrap",e),{y:a}=Ve(i);return C(()=>{let s=new g,p=s.pipe(Z(),ie(!0)),c=s.pipe(Me(0,me));return 
c.pipe(re(t)).subscribe({next([{height:l},{height:f}]){i.style.height=`${l-2*a}px`,e.style.top=`${f}px`},complete(){i.style.height="",e.style.top=""}}),c.pipe(Ae()).subscribe(()=>{for(let l of P(".md-nav__link--active[href]",e)){if(!l.clientHeight)continue;let f=l.closest(".md-sidebar__scrollwrap");if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=ce(f);f.scrollTo({top:u-d/2})}}}),ue(P("label[tabindex]",e)).pipe(ne(l=>h(l,"click").pipe(ve(se),m(()=>l),W(p)))).subscribe(l=>{let f=R(`[id="${l.htmlFor}"]`);R(`[aria-labelledby="${l.id}"]`).setAttribute("aria-expanded",`${f.checked}`)}),fs(e,r).pipe(w(l=>s.next(l)),_(()=>s.complete()),m(l=>$({ref:e},l)))})}function hi(e,t){if(typeof t!="undefined"){let r=`https://api.github.com/repos/${e}/${t}`;return st(je(`${r}/releases/latest`).pipe(de(()=>S),m(o=>({version:o.tag_name})),De({})),je(r).pipe(de(()=>S),m(o=>({stars:o.stargazers_count,forks:o.forks_count})),De({}))).pipe(m(([o,n])=>$($({},o),n)))}else{let r=`https://api.github.com/users/${e}`;return je(r).pipe(m(o=>({repositories:o.public_repos})),De({}))}}function bi(e,t){let r=`https://${e}/api/v4/projects/${encodeURIComponent(t)}`;return st(je(`${r}/releases/permalink/latest`).pipe(de(()=>S),m(({tag_name:o})=>({version:o})),De({})),je(r).pipe(de(()=>S),m(({star_count:o,forks_count:n})=>({stars:o,forks:n})),De({}))).pipe(m(([o,n])=>$($({},o),n)))}function vi(e){let t=e.match(/^.+github\.com\/([^/]+)\/?([^/]+)?/i);if(t){let[,r,o]=t;return hi(r,o)}if(t=e.match(/^.+?([^/]*gitlab[^/]+)\/(.+?)\/?$/i),t){let[,r,o]=t;return bi(r,o)}return S}var us;function ds(e){return us||(us=C(()=>{let t=__md_get("__source",sessionStorage);if(t)return I(t);if(ae("consent").length){let o=__md_get("__consent");if(!(o&&o.github))return S}return vi(e.href).pipe(w(o=>__md_set("__source",o,sessionStorage)))}).pipe(de(()=>S),b(t=>Object.keys(t).length>0),m(t=>({facts:t})),G(1)))}function gi(e){let t=R(":scope > :last-child",e);return C(()=>{let r=new g;return r.subscribe(({facts:o})=>{t.appendChild(_n(o)),t.classList.add("md-source__repository--active")}),ds(e).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))})}function hs(e,{viewport$:t,header$:r}){return ge(document.body).pipe(v(()=>mr(e,{header$:r,viewport$:t})),m(({offset:{y:o}})=>({hidden:o>=10})),ee("hidden"))}function yi(e,t){return C(()=>{let r=new g;return r.subscribe({next({hidden:o}){e.hidden=o},complete(){e.hidden=!1}}),(B("navigation.tabs.sticky")?I({hidden:!1}):hs(e,t)).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))})}function bs(e,{viewport$:t,header$:r}){let o=new Map,n=P(".md-nav__link",e);for(let s of n){let p=decodeURIComponent(s.hash.substring(1)),c=fe(`[id="${p}"]`);typeof c!="undefined"&&o.set(s,c)}let i=r.pipe(ee("height"),m(({height:s})=>{let p=Se("main"),c=R(":scope > :first-child",p);return s+.8*(c.offsetTop-p.offsetTop)}),pe());return ge(document.body).pipe(ee("height"),v(s=>C(()=>{let p=[];return I([...o].reduce((c,[l,f])=>{for(;p.length&&o.get(p[p.length-1]).tagName>=f.tagName;)p.pop();let u=f.offsetTop;for(;!u&&f.parentElement;)f=f.parentElement,u=f.offsetTop;let d=f.offsetParent;for(;d;d=d.offsetParent)u+=d.offsetTop;return c.set([...p=[...p,l]].reverse(),u)},new Map))}).pipe(m(p=>new Map([...p].sort(([,c],[,l])=>c-l))),He(i),v(([p,c])=>t.pipe(Fr(([l,f],{offset:{y:u},size:d})=>{let y=u+d.height>=Math.floor(s.height);for(;f.length;){let[,L]=f[0];if(L-c=u&&!y)f=[l.pop(),...f];else 
break}return[l,f]},[[],[...p]]),K((l,f)=>l[0]===f[0]&&l[1]===f[1])))))).pipe(m(([s,p])=>({prev:s.map(([c])=>c),next:p.map(([c])=>c)})),Q({prev:[],next:[]}),Be(2,1),m(([s,p])=>s.prev.length{let i=new g,a=i.pipe(Z(),ie(!0));if(i.subscribe(({prev:s,next:p})=>{for(let[c]of p)c.classList.remove("md-nav__link--passed"),c.classList.remove("md-nav__link--active");for(let[c,[l]]of s.entries())l.classList.add("md-nav__link--passed"),l.classList.toggle("md-nav__link--active",c===s.length-1)}),B("toc.follow")){let s=O(t.pipe(_e(1),m(()=>{})),t.pipe(_e(250),m(()=>"smooth")));i.pipe(b(({prev:p})=>p.length>0),He(o.pipe(ve(se))),re(s)).subscribe(([[{prev:p}],c])=>{let[l]=p[p.length-1];if(l.offsetHeight){let f=cr(l);if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=ce(f);f.scrollTo({top:u-d/2,behavior:c})}}})}return B("navigation.tracking")&&t.pipe(W(a),ee("offset"),_e(250),Ce(1),W(n.pipe(Ce(1))),ct({delay:250}),re(i)).subscribe(([,{prev:s}])=>{let p=ye(),c=s[s.length-1];if(c&&c.length){let[l]=c,{hash:f}=new URL(l.href);p.hash!==f&&(p.hash=f,history.replaceState({},"",`${p}`))}else p.hash="",history.replaceState({},"",`${p}`)}),bs(e,{viewport$:t,header$:r}).pipe(w(s=>i.next(s)),_(()=>i.complete()),m(s=>$({ref:e},s)))})}function vs(e,{viewport$:t,main$:r,target$:o}){let n=t.pipe(m(({offset:{y:a}})=>a),Be(2,1),m(([a,s])=>a>s&&s>0),K()),i=r.pipe(m(({active:a})=>a));return z([i,n]).pipe(m(([a,s])=>!(a&&s)),K(),W(o.pipe(Ce(1))),ie(!0),ct({delay:250}),m(a=>({hidden:a})))}function Ei(e,{viewport$:t,header$:r,main$:o,target$:n}){let i=new g,a=i.pipe(Z(),ie(!0));return i.subscribe({next({hidden:s}){e.hidden=s,s?(e.setAttribute("tabindex","-1"),e.blur()):e.removeAttribute("tabindex")},complete(){e.style.top="",e.hidden=!0,e.removeAttribute("tabindex")}}),r.pipe(W(a),ee("height")).subscribe(({height:s})=>{e.style.top=`${s+16}px`}),h(e,"click").subscribe(s=>{s.preventDefault(),window.scrollTo({top:0})}),vs(e,{viewport$:t,main$:o,target$:n}).pipe(w(s=>i.next(s)),_(()=>i.complete()),m(s=>$({ref:e},s)))}function wi({document$:e,viewport$:t}){e.pipe(v(()=>P(".md-ellipsis")),ne(r=>tt(r).pipe(W(e.pipe(Ce(1))),b(o=>o),m(()=>r),Te(1))),b(r=>r.offsetWidth{let o=r.innerText,n=r.closest("a")||r;return n.title=o,B("content.tooltips")?mt(n,{viewport$:t}).pipe(W(e.pipe(Ce(1))),_(()=>n.removeAttribute("title"))):S})).subscribe(),B("content.tooltips")&&e.pipe(v(()=>P(".md-status")),ne(r=>mt(r,{viewport$:t}))).subscribe()}function Ti({document$:e,tablet$:t}){e.pipe(v(()=>P(".md-toggle--indeterminate")),w(r=>{r.indeterminate=!0,r.checked=!1}),ne(r=>h(r,"change").pipe(Dr(()=>r.classList.contains("md-toggle--indeterminate")),m(()=>r))),re(t)).subscribe(([r,o])=>{r.classList.remove("md-toggle--indeterminate"),o&&(r.checked=!1)})}function gs(){return/(iPad|iPhone|iPod)/.test(navigator.userAgent)}function Si({document$:e}){e.pipe(v(()=>P("[data-md-scrollfix]")),w(t=>t.removeAttribute("data-md-scrollfix")),b(gs),ne(t=>h(t,"touchstart").pipe(m(()=>t)))).subscribe(t=>{let r=t.scrollTop;r===0?t.scrollTop=1:r+t.offsetHeight===t.scrollHeight&&(t.scrollTop=r-1)})}function Oi({viewport$:e,tablet$:t}){z([ze("search"),t]).pipe(m(([r,o])=>r&&!o),v(r=>I(r).pipe(Ge(r?400:100))),re(e)).subscribe(([r,{offset:{y:o}}])=>{if(r)document.body.setAttribute("data-md-scrolllock",""),document.body.style.top=`-${o}px`;else{let n=-1*parseInt(document.body.style.top,10);document.body.removeAttribute("data-md-scrolllock"),document.body.style.top="",n&&window.scrollTo(0,n)}})}Object.entries||(Object.entries=function(e){let t=[];for(let r of 
Object.keys(e))t.push([r,e[r]]);return t});Object.values||(Object.values=function(e){let t=[];for(let r of Object.keys(e))t.push(e[r]);return t});typeof Element!="undefined"&&(Element.prototype.scrollTo||(Element.prototype.scrollTo=function(e,t){typeof e=="object"?(this.scrollLeft=e.left,this.scrollTop=e.top):(this.scrollLeft=e,this.scrollTop=t)}),Element.prototype.replaceWith||(Element.prototype.replaceWith=function(...e){let t=this.parentNode;if(t){e.length===0&&t.removeChild(this);for(let r=e.length-1;r>=0;r--){let o=e[r];typeof o=="string"?o=document.createTextNode(o):o.parentNode&&o.parentNode.removeChild(o),r?t.insertBefore(this.previousSibling,o):t.replaceChild(o,this)}}}));function ys(){return location.protocol==="file:"?Tt(`${new URL("search/search_index.js",eo.base)}`).pipe(m(()=>__index),G(1)):je(new URL("search/search_index.json",eo.base))}document.documentElement.classList.remove("no-js");document.documentElement.classList.add("js");var ot=Go(),Ut=sn(),Lt=ln(Ut),to=an(),Oe=gn(),hr=Pt("(min-width: 960px)"),Mi=Pt("(min-width: 1220px)"),_i=mn(),eo=xe(),Ai=document.forms.namedItem("search")?ys():Ye,ro=new g;Zn({alert$:ro});var oo=new g;B("navigation.instant")&&oi({location$:Ut,viewport$:Oe,progress$:oo}).subscribe(ot);var Li;((Li=eo.version)==null?void 0:Li.provider)==="mike"&&ci({document$:ot});O(Ut,Lt).pipe(Ge(125)).subscribe(()=>{Je("drawer",!1),Je("search",!1)});to.pipe(b(({mode:e})=>e==="global")).subscribe(e=>{switch(e.type){case"p":case",":let t=fe("link[rel=prev]");typeof t!="undefined"&<(t);break;case"n":case".":let r=fe("link[rel=next]");typeof r!="undefined"&<(r);break;case"Enter":let o=Ie();o instanceof HTMLLabelElement&&o.click()}});wi({viewport$:Oe,document$:ot});Ti({document$:ot,tablet$:hr});Si({document$:ot});Oi({viewport$:Oe,tablet$:hr});var rt=Kn(Se("header"),{viewport$:Oe}),Ft=ot.pipe(m(()=>Se("main")),v(e=>Gn(e,{viewport$:Oe,header$:rt})),G(1)),xs=O(...ae("consent").map(e=>En(e,{target$:Lt})),...ae("dialog").map(e=>qn(e,{alert$:ro})),...ae("header").map(e=>Yn(e,{viewport$:Oe,header$:rt,main$:Ft})),...ae("palette").map(e=>Jn(e)),...ae("progress").map(e=>Xn(e,{progress$:oo})),...ae("search").map(e=>ui(e,{index$:Ai,keyboard$:to})),...ae("source").map(e=>gi(e))),Es=C(()=>O(...ae("announce").map(e=>xn(e)),...ae("content").map(e=>zn(e,{viewport$:Oe,target$:Lt,print$:_i})),...ae("content").map(e=>B("search.highlight")?di(e,{index$:Ai,location$:Ut}):S),...ae("header-title").map(e=>Bn(e,{viewport$:Oe,header$:rt})),...ae("sidebar").map(e=>e.getAttribute("data-md-type")==="navigation"?Nr(Mi,()=>Zr(e,{viewport$:Oe,header$:rt,main$:Ft})):Nr(hr,()=>Zr(e,{viewport$:Oe,header$:rt,main$:Ft}))),...ae("tabs").map(e=>yi(e,{viewport$:Oe,header$:rt})),...ae("toc").map(e=>xi(e,{viewport$:Oe,header$:rt,main$:Ft,target$:Lt})),...ae("top").map(e=>Ei(e,{viewport$:Oe,header$:rt,main$:Ft,target$:Lt})))),Ci=ot.pipe(v(()=>Es),Re(xs),G(1));Ci.subscribe();window.document$=ot;window.location$=Ut;window.target$=Lt;window.keyboard$=to;window.viewport$=Oe;window.tablet$=hr;window.screen$=Mi;window.print$=_i;window.alert$=ro;window.progress$=oo;window.component$=Ci;})(); +//# sourceMappingURL=bundle.525ec568.min.js.map + diff --git a/assets/javascripts/bundle.525ec568.min.js.map b/assets/javascripts/bundle.525ec568.min.js.map new file mode 100644 index 0000000..ef5d8d3 --- /dev/null +++ b/assets/javascripts/bundle.525ec568.min.js.map @@ -0,0 +1,7 @@ +{ + "version": 3, + "sources": ["node_modules/focus-visible/dist/focus-visible.js", "node_modules/escape-html/index.js", 
"node_modules/clipboard/dist/clipboard.js", "src/templates/assets/javascripts/bundle.ts", "node_modules/tslib/tslib.es6.mjs", "node_modules/rxjs/src/internal/util/isFunction.ts", "node_modules/rxjs/src/internal/util/createErrorClass.ts", "node_modules/rxjs/src/internal/util/UnsubscriptionError.ts", "node_modules/rxjs/src/internal/util/arrRemove.ts", "node_modules/rxjs/src/internal/Subscription.ts", "node_modules/rxjs/src/internal/config.ts", "node_modules/rxjs/src/internal/scheduler/timeoutProvider.ts", "node_modules/rxjs/src/internal/util/reportUnhandledError.ts", "node_modules/rxjs/src/internal/util/noop.ts", "node_modules/rxjs/src/internal/NotificationFactories.ts", "node_modules/rxjs/src/internal/util/errorContext.ts", "node_modules/rxjs/src/internal/Subscriber.ts", "node_modules/rxjs/src/internal/symbol/observable.ts", "node_modules/rxjs/src/internal/util/identity.ts", "node_modules/rxjs/src/internal/util/pipe.ts", "node_modules/rxjs/src/internal/Observable.ts", "node_modules/rxjs/src/internal/util/lift.ts", "node_modules/rxjs/src/internal/operators/OperatorSubscriber.ts", "node_modules/rxjs/src/internal/scheduler/animationFrameProvider.ts", "node_modules/rxjs/src/internal/util/ObjectUnsubscribedError.ts", "node_modules/rxjs/src/internal/Subject.ts", "node_modules/rxjs/src/internal/BehaviorSubject.ts", "node_modules/rxjs/src/internal/scheduler/dateTimestampProvider.ts", "node_modules/rxjs/src/internal/ReplaySubject.ts", "node_modules/rxjs/src/internal/scheduler/Action.ts", "node_modules/rxjs/src/internal/scheduler/intervalProvider.ts", "node_modules/rxjs/src/internal/scheduler/AsyncAction.ts", "node_modules/rxjs/src/internal/Scheduler.ts", "node_modules/rxjs/src/internal/scheduler/AsyncScheduler.ts", "node_modules/rxjs/src/internal/scheduler/async.ts", "node_modules/rxjs/src/internal/scheduler/QueueAction.ts", "node_modules/rxjs/src/internal/scheduler/QueueScheduler.ts", "node_modules/rxjs/src/internal/scheduler/queue.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameAction.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameScheduler.ts", "node_modules/rxjs/src/internal/scheduler/animationFrame.ts", "node_modules/rxjs/src/internal/observable/empty.ts", "node_modules/rxjs/src/internal/util/isScheduler.ts", "node_modules/rxjs/src/internal/util/args.ts", "node_modules/rxjs/src/internal/util/isArrayLike.ts", "node_modules/rxjs/src/internal/util/isPromise.ts", "node_modules/rxjs/src/internal/util/isInteropObservable.ts", "node_modules/rxjs/src/internal/util/isAsyncIterable.ts", "node_modules/rxjs/src/internal/util/throwUnobservableError.ts", "node_modules/rxjs/src/internal/symbol/iterator.ts", "node_modules/rxjs/src/internal/util/isIterable.ts", "node_modules/rxjs/src/internal/util/isReadableStreamLike.ts", "node_modules/rxjs/src/internal/observable/innerFrom.ts", "node_modules/rxjs/src/internal/util/executeSchedule.ts", "node_modules/rxjs/src/internal/operators/observeOn.ts", "node_modules/rxjs/src/internal/operators/subscribeOn.ts", "node_modules/rxjs/src/internal/scheduled/scheduleObservable.ts", "node_modules/rxjs/src/internal/scheduled/schedulePromise.ts", "node_modules/rxjs/src/internal/scheduled/scheduleArray.ts", "node_modules/rxjs/src/internal/scheduled/scheduleIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleAsyncIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleReadableStreamLike.ts", "node_modules/rxjs/src/internal/scheduled/scheduled.ts", "node_modules/rxjs/src/internal/observable/from.ts", 
"node_modules/rxjs/src/internal/observable/of.ts", "node_modules/rxjs/src/internal/observable/throwError.ts", "node_modules/rxjs/src/internal/util/EmptyError.ts", "node_modules/rxjs/src/internal/util/isDate.ts", "node_modules/rxjs/src/internal/operators/map.ts", "node_modules/rxjs/src/internal/util/mapOneOrManyArgs.ts", "node_modules/rxjs/src/internal/util/argsArgArrayOrObject.ts", "node_modules/rxjs/src/internal/util/createObject.ts", "node_modules/rxjs/src/internal/observable/combineLatest.ts", "node_modules/rxjs/src/internal/operators/mergeInternals.ts", "node_modules/rxjs/src/internal/operators/mergeMap.ts", "node_modules/rxjs/src/internal/operators/mergeAll.ts", "node_modules/rxjs/src/internal/operators/concatAll.ts", "node_modules/rxjs/src/internal/observable/concat.ts", "node_modules/rxjs/src/internal/observable/defer.ts", "node_modules/rxjs/src/internal/observable/fromEvent.ts", "node_modules/rxjs/src/internal/observable/fromEventPattern.ts", "node_modules/rxjs/src/internal/observable/timer.ts", "node_modules/rxjs/src/internal/observable/merge.ts", "node_modules/rxjs/src/internal/observable/never.ts", "node_modules/rxjs/src/internal/util/argsOrArgArray.ts", "node_modules/rxjs/src/internal/operators/filter.ts", "node_modules/rxjs/src/internal/observable/zip.ts", "node_modules/rxjs/src/internal/operators/audit.ts", "node_modules/rxjs/src/internal/operators/auditTime.ts", "node_modules/rxjs/src/internal/operators/bufferCount.ts", "node_modules/rxjs/src/internal/operators/catchError.ts", "node_modules/rxjs/src/internal/operators/scanInternals.ts", "node_modules/rxjs/src/internal/operators/combineLatest.ts", "node_modules/rxjs/src/internal/operators/combineLatestWith.ts", "node_modules/rxjs/src/internal/operators/debounce.ts", "node_modules/rxjs/src/internal/operators/debounceTime.ts", "node_modules/rxjs/src/internal/operators/defaultIfEmpty.ts", "node_modules/rxjs/src/internal/operators/take.ts", "node_modules/rxjs/src/internal/operators/ignoreElements.ts", "node_modules/rxjs/src/internal/operators/mapTo.ts", "node_modules/rxjs/src/internal/operators/delayWhen.ts", "node_modules/rxjs/src/internal/operators/delay.ts", "node_modules/rxjs/src/internal/operators/distinctUntilChanged.ts", "node_modules/rxjs/src/internal/operators/distinctUntilKeyChanged.ts", "node_modules/rxjs/src/internal/operators/throwIfEmpty.ts", "node_modules/rxjs/src/internal/operators/endWith.ts", "node_modules/rxjs/src/internal/operators/finalize.ts", "node_modules/rxjs/src/internal/operators/first.ts", "node_modules/rxjs/src/internal/operators/takeLast.ts", "node_modules/rxjs/src/internal/operators/merge.ts", "node_modules/rxjs/src/internal/operators/mergeWith.ts", "node_modules/rxjs/src/internal/operators/repeat.ts", "node_modules/rxjs/src/internal/operators/scan.ts", "node_modules/rxjs/src/internal/operators/share.ts", "node_modules/rxjs/src/internal/operators/shareReplay.ts", "node_modules/rxjs/src/internal/operators/skip.ts", "node_modules/rxjs/src/internal/operators/skipUntil.ts", "node_modules/rxjs/src/internal/operators/startWith.ts", "node_modules/rxjs/src/internal/operators/switchMap.ts", "node_modules/rxjs/src/internal/operators/takeUntil.ts", "node_modules/rxjs/src/internal/operators/takeWhile.ts", "node_modules/rxjs/src/internal/operators/tap.ts", "node_modules/rxjs/src/internal/operators/throttle.ts", "node_modules/rxjs/src/internal/operators/throttleTime.ts", "node_modules/rxjs/src/internal/operators/withLatestFrom.ts", "node_modules/rxjs/src/internal/operators/zip.ts", 
"node_modules/rxjs/src/internal/operators/zipWith.ts", "src/templates/assets/javascripts/browser/document/index.ts", "src/templates/assets/javascripts/browser/element/_/index.ts", "src/templates/assets/javascripts/browser/element/focus/index.ts", "src/templates/assets/javascripts/browser/element/hover/index.ts", "src/templates/assets/javascripts/utilities/h/index.ts", "src/templates/assets/javascripts/utilities/round/index.ts", "src/templates/assets/javascripts/browser/script/index.ts", "src/templates/assets/javascripts/browser/element/size/_/index.ts", "src/templates/assets/javascripts/browser/element/size/content/index.ts", "src/templates/assets/javascripts/browser/element/offset/_/index.ts", "src/templates/assets/javascripts/browser/element/offset/content/index.ts", "src/templates/assets/javascripts/browser/element/visibility/index.ts", "src/templates/assets/javascripts/browser/toggle/index.ts", "src/templates/assets/javascripts/browser/keyboard/index.ts", "src/templates/assets/javascripts/browser/location/_/index.ts", "src/templates/assets/javascripts/browser/location/hash/index.ts", "src/templates/assets/javascripts/browser/media/index.ts", "src/templates/assets/javascripts/browser/request/index.ts", "src/templates/assets/javascripts/browser/viewport/offset/index.ts", "src/templates/assets/javascripts/browser/viewport/size/index.ts", "src/templates/assets/javascripts/browser/viewport/_/index.ts", "src/templates/assets/javascripts/browser/viewport/at/index.ts", "src/templates/assets/javascripts/browser/worker/index.ts", "src/templates/assets/javascripts/_/index.ts", "src/templates/assets/javascripts/components/_/index.ts", "src/templates/assets/javascripts/components/announce/index.ts", "src/templates/assets/javascripts/components/consent/index.ts", "src/templates/assets/javascripts/templates/tooltip/index.tsx", "src/templates/assets/javascripts/templates/annotation/index.tsx", "src/templates/assets/javascripts/templates/clipboard/index.tsx", "src/templates/assets/javascripts/templates/search/index.tsx", "src/templates/assets/javascripts/templates/source/index.tsx", "src/templates/assets/javascripts/templates/tabbed/index.tsx", "src/templates/assets/javascripts/templates/table/index.tsx", "src/templates/assets/javascripts/templates/version/index.tsx", "src/templates/assets/javascripts/components/tooltip2/index.ts", "src/templates/assets/javascripts/components/content/annotation/_/index.ts", "src/templates/assets/javascripts/components/content/annotation/list/index.ts", "src/templates/assets/javascripts/components/content/annotation/block/index.ts", "src/templates/assets/javascripts/components/content/code/_/index.ts", "src/templates/assets/javascripts/components/content/details/index.ts", "src/templates/assets/javascripts/components/content/mermaid/index.css", "src/templates/assets/javascripts/components/content/mermaid/index.ts", "src/templates/assets/javascripts/components/content/table/index.ts", "src/templates/assets/javascripts/components/content/tabs/index.ts", "src/templates/assets/javascripts/components/content/_/index.ts", "src/templates/assets/javascripts/components/dialog/index.ts", "src/templates/assets/javascripts/components/tooltip/index.ts", "src/templates/assets/javascripts/components/header/_/index.ts", "src/templates/assets/javascripts/components/header/title/index.ts", "src/templates/assets/javascripts/components/main/index.ts", "src/templates/assets/javascripts/components/palette/index.ts", "src/templates/assets/javascripts/components/progress/index.ts", 
"src/templates/assets/javascripts/integrations/clipboard/index.ts", "src/templates/assets/javascripts/integrations/sitemap/index.ts", "src/templates/assets/javascripts/integrations/instant/index.ts", "src/templates/assets/javascripts/integrations/search/highlighter/index.ts", "src/templates/assets/javascripts/integrations/search/worker/message/index.ts", "src/templates/assets/javascripts/integrations/search/worker/_/index.ts", "src/templates/assets/javascripts/integrations/version/findurl/index.ts", "src/templates/assets/javascripts/integrations/version/index.ts", "src/templates/assets/javascripts/components/search/query/index.ts", "src/templates/assets/javascripts/components/search/result/index.ts", "src/templates/assets/javascripts/components/search/share/index.ts", "src/templates/assets/javascripts/components/search/suggest/index.ts", "src/templates/assets/javascripts/components/search/_/index.ts", "src/templates/assets/javascripts/components/search/highlight/index.ts", "src/templates/assets/javascripts/components/sidebar/index.ts", "src/templates/assets/javascripts/components/source/facts/github/index.ts", "src/templates/assets/javascripts/components/source/facts/gitlab/index.ts", "src/templates/assets/javascripts/components/source/facts/_/index.ts", "src/templates/assets/javascripts/components/source/_/index.ts", "src/templates/assets/javascripts/components/tabs/index.ts", "src/templates/assets/javascripts/components/toc/index.ts", "src/templates/assets/javascripts/components/top/index.ts", "src/templates/assets/javascripts/patches/ellipsis/index.ts", "src/templates/assets/javascripts/patches/indeterminate/index.ts", "src/templates/assets/javascripts/patches/scrollfix/index.ts", "src/templates/assets/javascripts/patches/scrolllock/index.ts", "src/templates/assets/javascripts/polyfills/index.ts"], + "sourcesContent": ["(function (global, factory) {\n typeof exports === 'object' && typeof module !== 'undefined' ? factory() :\n typeof define === 'function' && define.amd ? define(factory) :\n (factory());\n}(this, (function () { 'use strict';\n\n /**\n * Applies the :focus-visible polyfill at the given scope.\n * A scope in this case is either the top-level Document or a Shadow Root.\n *\n * @param {(Document|ShadowRoot)} scope\n * @see https://github.com/WICG/focus-visible\n */\n function applyFocusVisiblePolyfill(scope) {\n var hadKeyboardEvent = true;\n var hadFocusVisibleRecently = false;\n var hadFocusVisibleRecentlyTimeout = null;\n\n var inputTypesAllowlist = {\n text: true,\n search: true,\n url: true,\n tel: true,\n email: true,\n password: true,\n number: true,\n date: true,\n month: true,\n week: true,\n time: true,\n datetime: true,\n 'datetime-local': true\n };\n\n /**\n * Helper function for legacy browsers and iframes which sometimes focus\n * elements like document, body, and non-interactive SVG.\n * @param {Element} el\n */\n function isValidFocusTarget(el) {\n if (\n el &&\n el !== document &&\n el.nodeName !== 'HTML' &&\n el.nodeName !== 'BODY' &&\n 'classList' in el &&\n 'contains' in el.classList\n ) {\n return true;\n }\n return false;\n }\n\n /**\n * Computes whether the given element should automatically trigger the\n * `focus-visible` class being added, i.e. 
whether it should always match\n * `:focus-visible` when focused.\n * @param {Element} el\n * @return {boolean}\n */\n function focusTriggersKeyboardModality(el) {\n var type = el.type;\n var tagName = el.tagName;\n\n if (tagName === 'INPUT' && inputTypesAllowlist[type] && !el.readOnly) {\n return true;\n }\n\n if (tagName === 'TEXTAREA' && !el.readOnly) {\n return true;\n }\n\n if (el.isContentEditable) {\n return true;\n }\n\n return false;\n }\n\n /**\n * Add the `focus-visible` class to the given element if it was not added by\n * the author.\n * @param {Element} el\n */\n function addFocusVisibleClass(el) {\n if (el.classList.contains('focus-visible')) {\n return;\n }\n el.classList.add('focus-visible');\n el.setAttribute('data-focus-visible-added', '');\n }\n\n /**\n * Remove the `focus-visible` class from the given element if it was not\n * originally added by the author.\n * @param {Element} el\n */\n function removeFocusVisibleClass(el) {\n if (!el.hasAttribute('data-focus-visible-added')) {\n return;\n }\n el.classList.remove('focus-visible');\n el.removeAttribute('data-focus-visible-added');\n }\n\n /**\n * If the most recent user interaction was via the keyboard;\n * and the key press did not include a meta, alt/option, or control key;\n * then the modality is keyboard. Otherwise, the modality is not keyboard.\n * Apply `focus-visible` to any current active element and keep track\n * of our keyboard modality state with `hadKeyboardEvent`.\n * @param {KeyboardEvent} e\n */\n function onKeyDown(e) {\n if (e.metaKey || e.altKey || e.ctrlKey) {\n return;\n }\n\n if (isValidFocusTarget(scope.activeElement)) {\n addFocusVisibleClass(scope.activeElement);\n }\n\n hadKeyboardEvent = true;\n }\n\n /**\n * If at any point a user clicks with a pointing device, ensure that we change\n * the modality away from keyboard.\n * This avoids the situation where a user presses a key on an already focused\n * element, and then clicks on a different element, focusing it with a\n * pointing device, while we still think we're in keyboard modality.\n * @param {Event} e\n */\n function onPointerDown(e) {\n hadKeyboardEvent = false;\n }\n\n /**\n * On `focus`, add the `focus-visible` class to the target if:\n * - the target received focus as a result of keyboard navigation, or\n * - the event target is an element that will likely require interaction\n * via the keyboard (e.g. 
a text box)\n * @param {Event} e\n */\n function onFocus(e) {\n // Prevent IE from focusing the document or HTML element.\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (hadKeyboardEvent || focusTriggersKeyboardModality(e.target)) {\n addFocusVisibleClass(e.target);\n }\n }\n\n /**\n * On `blur`, remove the `focus-visible` class from the target.\n * @param {Event} e\n */\n function onBlur(e) {\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (\n e.target.classList.contains('focus-visible') ||\n e.target.hasAttribute('data-focus-visible-added')\n ) {\n // To detect a tab/window switch, we look for a blur event followed\n // rapidly by a visibility change.\n // If we don't see a visibility change within 100ms, it's probably a\n // regular focus change.\n hadFocusVisibleRecently = true;\n window.clearTimeout(hadFocusVisibleRecentlyTimeout);\n hadFocusVisibleRecentlyTimeout = window.setTimeout(function() {\n hadFocusVisibleRecently = false;\n }, 100);\n removeFocusVisibleClass(e.target);\n }\n }\n\n /**\n * If the user changes tabs, keep track of whether or not the previously\n * focused element had .focus-visible.\n * @param {Event} e\n */\n function onVisibilityChange(e) {\n if (document.visibilityState === 'hidden') {\n // If the tab becomes active again, the browser will handle calling focus\n // on the element (Safari actually calls it twice).\n // If this tab change caused a blur on an element with focus-visible,\n // re-apply the class when the user switches back to the tab.\n if (hadFocusVisibleRecently) {\n hadKeyboardEvent = true;\n }\n addInitialPointerMoveListeners();\n }\n }\n\n /**\n * Add a group of listeners to detect usage of any pointing devices.\n * These listeners will be added when the polyfill first loads, and anytime\n * the window is blurred, so that they are active when the window regains\n * focus.\n */\n function addInitialPointerMoveListeners() {\n document.addEventListener('mousemove', onInitialPointerMove);\n document.addEventListener('mousedown', onInitialPointerMove);\n document.addEventListener('mouseup', onInitialPointerMove);\n document.addEventListener('pointermove', onInitialPointerMove);\n document.addEventListener('pointerdown', onInitialPointerMove);\n document.addEventListener('pointerup', onInitialPointerMove);\n document.addEventListener('touchmove', onInitialPointerMove);\n document.addEventListener('touchstart', onInitialPointerMove);\n document.addEventListener('touchend', onInitialPointerMove);\n }\n\n function removeInitialPointerMoveListeners() {\n document.removeEventListener('mousemove', onInitialPointerMove);\n document.removeEventListener('mousedown', onInitialPointerMove);\n document.removeEventListener('mouseup', onInitialPointerMove);\n document.removeEventListener('pointermove', onInitialPointerMove);\n document.removeEventListener('pointerdown', onInitialPointerMove);\n document.removeEventListener('pointerup', onInitialPointerMove);\n document.removeEventListener('touchmove', onInitialPointerMove);\n document.removeEventListener('touchstart', onInitialPointerMove);\n document.removeEventListener('touchend', onInitialPointerMove);\n }\n\n /**\n * When the polfyill first loads, assume the user is in keyboard modality.\n * If any event is received from a pointing device (e.g. 
mouse, pointer,\n * touch), turn off keyboard modality.\n * This accounts for situations where focus enters the page from the URL bar.\n * @param {Event} e\n */\n function onInitialPointerMove(e) {\n // Work around a Safari quirk that fires a mousemove on whenever the\n // window blurs, even if you're tabbing out of the page. \u00AF\\_(\u30C4)_/\u00AF\n if (e.target.nodeName && e.target.nodeName.toLowerCase() === 'html') {\n return;\n }\n\n hadKeyboardEvent = false;\n removeInitialPointerMoveListeners();\n }\n\n // For some kinds of state, we are interested in changes at the global scope\n // only. For example, global pointer input, global key presses and global\n // visibility change should affect the state at every scope:\n document.addEventListener('keydown', onKeyDown, true);\n document.addEventListener('mousedown', onPointerDown, true);\n document.addEventListener('pointerdown', onPointerDown, true);\n document.addEventListener('touchstart', onPointerDown, true);\n document.addEventListener('visibilitychange', onVisibilityChange, true);\n\n addInitialPointerMoveListeners();\n\n // For focus and blur, we specifically care about state changes in the local\n // scope. This is because focus / blur events that originate from within a\n // shadow root are not re-dispatched from the host element if it was already\n // the active element in its own scope:\n scope.addEventListener('focus', onFocus, true);\n scope.addEventListener('blur', onBlur, true);\n\n // We detect that a node is a ShadowRoot by ensuring that it is a\n // DocumentFragment and also has a host property. This check covers native\n // implementation and polyfill implementation transparently. If we only cared\n // about the native implementation, we could just check if the scope was\n // an instance of a ShadowRoot.\n if (scope.nodeType === Node.DOCUMENT_FRAGMENT_NODE && scope.host) {\n // Since a ShadowRoot is a special kind of DocumentFragment, it does not\n // have a root element to add a class to. So, we add this attribute to the\n // host element instead:\n scope.host.setAttribute('data-js-focus-visible', '');\n } else if (scope.nodeType === Node.DOCUMENT_NODE) {\n document.documentElement.classList.add('js-focus-visible');\n document.documentElement.setAttribute('data-js-focus-visible', '');\n }\n }\n\n // It is important to wrap all references to global window and document in\n // these checks to support server-side rendering use cases\n // @see https://github.com/WICG/focus-visible/issues/199\n if (typeof window !== 'undefined' && typeof document !== 'undefined') {\n // Make the polyfill helper globally available. 
This can be used as a signal\n // to interested libraries that wish to coordinate with the polyfill for e.g.,\n // applying the polyfill to a shadow root:\n window.applyFocusVisiblePolyfill = applyFocusVisiblePolyfill;\n\n // Notify interested libraries of the polyfill's presence, in case the\n // polyfill was loaded lazily:\n var event;\n\n try {\n event = new CustomEvent('focus-visible-polyfill-ready');\n } catch (error) {\n // IE11 does not support using CustomEvent as a constructor directly:\n event = document.createEvent('CustomEvent');\n event.initCustomEvent('focus-visible-polyfill-ready', false, false, {});\n }\n\n window.dispatchEvent(event);\n }\n\n if (typeof document !== 'undefined') {\n // Apply the polyfill to the global document, so that no JavaScript\n // coordination is required to use the polyfill in the top-level document:\n applyFocusVisiblePolyfill(document);\n }\n\n})));\n", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? 
html + str.substring(lastIndex, index)\n : html;\n}\n", "/*!\n * clipboard.js v2.0.11\n * https://clipboardjs.com/\n *\n * Licensed MIT \u00A9 Zeno Rocha\n */\n(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"ClipboardJS\"] = factory();\n\telse\n\t\troot[\"ClipboardJS\"] = factory();\n})(this, function() {\nreturn /******/ (function() { // webpackBootstrap\n/******/ \tvar __webpack_modules__ = ({\n\n/***/ 686:\n/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n\n// EXPORTS\n__webpack_require__.d(__webpack_exports__, {\n \"default\": function() { return /* binding */ clipboard; }\n});\n\n// EXTERNAL MODULE: ./node_modules/tiny-emitter/index.js\nvar tiny_emitter = __webpack_require__(279);\nvar tiny_emitter_default = /*#__PURE__*/__webpack_require__.n(tiny_emitter);\n// EXTERNAL MODULE: ./node_modules/good-listener/src/listen.js\nvar listen = __webpack_require__(370);\nvar listen_default = /*#__PURE__*/__webpack_require__.n(listen);\n// EXTERNAL MODULE: ./node_modules/select/src/select.js\nvar src_select = __webpack_require__(817);\nvar select_default = /*#__PURE__*/__webpack_require__.n(src_select);\n;// CONCATENATED MODULE: ./src/common/command.js\n/**\n * Executes a given operation type.\n * @param {String} type\n * @return {Boolean}\n */\nfunction command(type) {\n try {\n return document.execCommand(type);\n } catch (err) {\n return false;\n }\n}\n;// CONCATENATED MODULE: ./src/actions/cut.js\n\n\n/**\n * Cut action wrapper.\n * @param {String|HTMLElement} target\n * @return {String}\n */\n\nvar ClipboardActionCut = function ClipboardActionCut(target) {\n var selectedText = select_default()(target);\n command('cut');\n return selectedText;\n};\n\n/* harmony default export */ var actions_cut = (ClipboardActionCut);\n;// CONCATENATED MODULE: ./src/common/create-fake-element.js\n/**\n * Creates a fake textarea element with a value.\n * @param {String} value\n * @return {HTMLElement}\n */\nfunction createFakeElement(value) {\n var isRTL = document.documentElement.getAttribute('dir') === 'rtl';\n var fakeElement = document.createElement('textarea'); // Prevent zooming on iOS\n\n fakeElement.style.fontSize = '12pt'; // Reset box model\n\n fakeElement.style.border = '0';\n fakeElement.style.padding = '0';\n fakeElement.style.margin = '0'; // Move element out of screen horizontally\n\n fakeElement.style.position = 'absolute';\n fakeElement.style[isRTL ? 
'right' : 'left'] = '-9999px'; // Move element to the same position vertically\n\n var yPosition = window.pageYOffset || document.documentElement.scrollTop;\n fakeElement.style.top = \"\".concat(yPosition, \"px\");\n fakeElement.setAttribute('readonly', '');\n fakeElement.value = value;\n return fakeElement;\n}\n;// CONCATENATED MODULE: ./src/actions/copy.js\n\n\n\n/**\n * Create fake copy action wrapper using a fake element.\n * @param {String} target\n * @param {Object} options\n * @return {String}\n */\n\nvar fakeCopyAction = function fakeCopyAction(value, options) {\n var fakeElement = createFakeElement(value);\n options.container.appendChild(fakeElement);\n var selectedText = select_default()(fakeElement);\n command('copy');\n fakeElement.remove();\n return selectedText;\n};\n/**\n * Copy action wrapper.\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @return {String}\n */\n\n\nvar ClipboardActionCopy = function ClipboardActionCopy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n var selectedText = '';\n\n if (typeof target === 'string') {\n selectedText = fakeCopyAction(target, options);\n } else if (target instanceof HTMLInputElement && !['text', 'search', 'url', 'tel', 'password'].includes(target === null || target === void 0 ? void 0 : target.type)) {\n // If input type doesn't support `setSelectionRange`. Simulate it. https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputElement/setSelectionRange\n selectedText = fakeCopyAction(target.value, options);\n } else {\n selectedText = select_default()(target);\n command('copy');\n }\n\n return selectedText;\n};\n\n/* harmony default export */ var actions_copy = (ClipboardActionCopy);\n;// CONCATENATED MODULE: ./src/actions/default.js\nfunction _typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { _typeof = function _typeof(obj) { return typeof obj; }; } else { _typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return _typeof(obj); }\n\n\n\n/**\n * Inner function which performs selection from either `text` or `target`\n * properties and then executes copy or cut operations.\n * @param {Object} options\n */\n\nvar ClipboardActionDefault = function ClipboardActionDefault() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n // Defines base properties passed from constructor.\n var _options$action = options.action,\n action = _options$action === void 0 ? 'copy' : _options$action,\n container = options.container,\n target = options.target,\n text = options.text; // Sets the `action` to be performed which can be either 'copy' or 'cut'.\n\n if (action !== 'copy' && action !== 'cut') {\n throw new Error('Invalid \"action\" value, use either \"copy\" or \"cut\"');\n } // Sets the `target` property using an element that will be have its content copied.\n\n\n if (target !== undefined) {\n if (target && _typeof(target) === 'object' && target.nodeType === 1) {\n if (action === 'copy' && target.hasAttribute('disabled')) {\n throw new Error('Invalid \"target\" attribute. Please use \"readonly\" instead of \"disabled\" attribute');\n }\n\n if (action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) {\n throw new Error('Invalid \"target\" attribute. 
You can\\'t cut text from elements with \"readonly\" or \"disabled\" attributes');\n }\n } else {\n throw new Error('Invalid \"target\" value, use a valid Element');\n }\n } // Define selection strategy based on `text` property.\n\n\n if (text) {\n return actions_copy(text, {\n container: container\n });\n } // Defines which selection strategy based on `target` property.\n\n\n if (target) {\n return action === 'cut' ? actions_cut(target) : actions_copy(target, {\n container: container\n });\n }\n};\n\n/* harmony default export */ var actions_default = (ClipboardActionDefault);\n;// CONCATENATED MODULE: ./src/clipboard.js\nfunction clipboard_typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { clipboard_typeof = function _typeof(obj) { return typeof obj; }; } else { clipboard_typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return clipboard_typeof(obj); }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }\n\nfunction _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function\"); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, writable: true, configurable: true } }); if (superClass) _setPrototypeOf(subClass, superClass); }\n\nfunction _setPrototypeOf(o, p) { _setPrototypeOf = Object.setPrototypeOf || function _setPrototypeOf(o, p) { o.__proto__ = p; return o; }; return _setPrototypeOf(o, p); }\n\nfunction _createSuper(Derived) { var hasNativeReflectConstruct = _isNativeReflectConstruct(); return function _createSuperInternal() { var Super = _getPrototypeOf(Derived), result; if (hasNativeReflectConstruct) { var NewTarget = _getPrototypeOf(this).constructor; result = Reflect.construct(Super, arguments, NewTarget); } else { result = Super.apply(this, arguments); } return _possibleConstructorReturn(this, result); }; }\n\nfunction _possibleConstructorReturn(self, call) { if (call && (clipboard_typeof(call) === \"object\" || typeof call === \"function\")) { return call; } return _assertThisInitialized(self); }\n\nfunction _assertThisInitialized(self) { if (self === void 0) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return self; }\n\nfunction _isNativeReflectConstruct() { if (typeof Reflect === \"undefined\" || !Reflect.construct) return false; if (Reflect.construct.sham) return false; if (typeof Proxy === \"function\") return true; try { Date.prototype.toString.call(Reflect.construct(Date, [], function () {})); return true; } catch (e) { return false; } }\n\nfunction _getPrototypeOf(o) { _getPrototypeOf = Object.setPrototypeOf ? 
Object.getPrototypeOf : function _getPrototypeOf(o) { return o.__proto__ || Object.getPrototypeOf(o); }; return _getPrototypeOf(o); }\n\n\n\n\n\n\n/**\n * Helper function to retrieve attribute value.\n * @param {String} suffix\n * @param {Element} element\n */\n\nfunction getAttributeValue(suffix, element) {\n var attribute = \"data-clipboard-\".concat(suffix);\n\n if (!element.hasAttribute(attribute)) {\n return;\n }\n\n return element.getAttribute(attribute);\n}\n/**\n * Base class which takes one or more elements, adds event listeners to them,\n * and instantiates a new `ClipboardAction` on each click.\n */\n\n\nvar Clipboard = /*#__PURE__*/function (_Emitter) {\n _inherits(Clipboard, _Emitter);\n\n var _super = _createSuper(Clipboard);\n\n /**\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n * @param {Object} options\n */\n function Clipboard(trigger, options) {\n var _this;\n\n _classCallCheck(this, Clipboard);\n\n _this = _super.call(this);\n\n _this.resolveOptions(options);\n\n _this.listenClick(trigger);\n\n return _this;\n }\n /**\n * Defines if attributes would be resolved using internal setter functions\n * or custom functions that were passed in the constructor.\n * @param {Object} options\n */\n\n\n _createClass(Clipboard, [{\n key: \"resolveOptions\",\n value: function resolveOptions() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n this.action = typeof options.action === 'function' ? options.action : this.defaultAction;\n this.target = typeof options.target === 'function' ? options.target : this.defaultTarget;\n this.text = typeof options.text === 'function' ? options.text : this.defaultText;\n this.container = clipboard_typeof(options.container) === 'object' ? options.container : document.body;\n }\n /**\n * Adds a click event listener to the passed trigger.\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n */\n\n }, {\n key: \"listenClick\",\n value: function listenClick(trigger) {\n var _this2 = this;\n\n this.listener = listen_default()(trigger, 'click', function (e) {\n return _this2.onClick(e);\n });\n }\n /**\n * Defines a new `ClipboardAction` on each click event.\n * @param {Event} e\n */\n\n }, {\n key: \"onClick\",\n value: function onClick(e) {\n var trigger = e.delegateTarget || e.currentTarget;\n var action = this.action(trigger) || 'copy';\n var text = actions_default({\n action: action,\n container: this.container,\n target: this.target(trigger),\n text: this.text(trigger)\n }); // Fires an event based on the copy operation result.\n\n this.emit(text ? 
'success' : 'error', {\n action: action,\n text: text,\n trigger: trigger,\n clearSelection: function clearSelection() {\n if (trigger) {\n trigger.focus();\n }\n\n window.getSelection().removeAllRanges();\n }\n });\n }\n /**\n * Default `action` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultAction\",\n value: function defaultAction(trigger) {\n return getAttributeValue('action', trigger);\n }\n /**\n * Default `target` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultTarget\",\n value: function defaultTarget(trigger) {\n var selector = getAttributeValue('target', trigger);\n\n if (selector) {\n return document.querySelector(selector);\n }\n }\n /**\n * Allow fire programmatically a copy action\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @returns Text copied.\n */\n\n }, {\n key: \"defaultText\",\n\n /**\n * Default `text` lookup function.\n * @param {Element} trigger\n */\n value: function defaultText(trigger) {\n return getAttributeValue('text', trigger);\n }\n /**\n * Destroy lifecycle.\n */\n\n }, {\n key: \"destroy\",\n value: function destroy() {\n this.listener.destroy();\n }\n }], [{\n key: \"copy\",\n value: function copy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n return actions_copy(target, options);\n }\n /**\n * Allow fire programmatically a cut action\n * @param {String|HTMLElement} target\n * @returns Text cutted.\n */\n\n }, {\n key: \"cut\",\n value: function cut(target) {\n return actions_cut(target);\n }\n /**\n * Returns the support of the given action, or all actions if no action is\n * given.\n * @param {String} [action]\n */\n\n }, {\n key: \"isSupported\",\n value: function isSupported() {\n var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut'];\n var actions = typeof action === 'string' ? 
[action] : action;\n var support = !!document.queryCommandSupported;\n actions.forEach(function (action) {\n support = support && !!document.queryCommandSupported(action);\n });\n return support;\n }\n }]);\n\n return Clipboard;\n}((tiny_emitter_default()));\n\n/* harmony default export */ var clipboard = (Clipboard);\n\n/***/ }),\n\n/***/ 828:\n/***/ (function(module) {\n\nvar DOCUMENT_NODE_TYPE = 9;\n\n/**\n * A polyfill for Element.matches()\n */\nif (typeof Element !== 'undefined' && !Element.prototype.matches) {\n var proto = Element.prototype;\n\n proto.matches = proto.matchesSelector ||\n proto.mozMatchesSelector ||\n proto.msMatchesSelector ||\n proto.oMatchesSelector ||\n proto.webkitMatchesSelector;\n}\n\n/**\n * Finds the closest parent that matches a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @return {Function}\n */\nfunction closest (element, selector) {\n while (element && element.nodeType !== DOCUMENT_NODE_TYPE) {\n if (typeof element.matches === 'function' &&\n element.matches(selector)) {\n return element;\n }\n element = element.parentNode;\n }\n}\n\nmodule.exports = closest;\n\n\n/***/ }),\n\n/***/ 438:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar closest = __webpack_require__(828);\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction _delegate(element, selector, type, callback, useCapture) {\n var listenerFn = listener.apply(this, arguments);\n\n element.addEventListener(type, listenerFn, useCapture);\n\n return {\n destroy: function() {\n element.removeEventListener(type, listenerFn, useCapture);\n }\n }\n}\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element|String|Array} [elements]\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction delegate(elements, selector, type, callback, useCapture) {\n // Handle the regular Element usage\n if (typeof elements.addEventListener === 'function') {\n return _delegate.apply(null, arguments);\n }\n\n // Handle Element-less usage, it defaults to global delegation\n if (typeof type === 'function') {\n // Use `document` as the first parameter, then apply arguments\n // This is a short way to .unshift `arguments` without running into deoptimizations\n return _delegate.bind(null, document).apply(null, arguments);\n }\n\n // Handle Selector-based usage\n if (typeof elements === 'string') {\n elements = document.querySelectorAll(elements);\n }\n\n // Handle Array-like based usage\n return Array.prototype.map.call(elements, function (element) {\n return _delegate(element, selector, type, callback, useCapture);\n });\n}\n\n/**\n * Finds closest match and invokes callback.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Function}\n */\nfunction listener(element, selector, type, callback) {\n return function(e) {\n e.delegateTarget = closest(e.target, selector);\n\n if (e.delegateTarget) {\n callback.call(element, e);\n }\n }\n}\n\nmodule.exports = delegate;\n\n\n/***/ }),\n\n/***/ 879:\n/***/ (function(__unused_webpack_module, exports) {\n\n/**\n * Check if argument is a HTML element.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.node = function(value) {\n return value !== undefined\n && 
value instanceof HTMLElement\n && value.nodeType === 1;\n};\n\n/**\n * Check if argument is a list of HTML elements.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.nodeList = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return value !== undefined\n && (type === '[object NodeList]' || type === '[object HTMLCollection]')\n && ('length' in value)\n && (value.length === 0 || exports.node(value[0]));\n};\n\n/**\n * Check if argument is a string.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.string = function(value) {\n return typeof value === 'string'\n || value instanceof String;\n};\n\n/**\n * Check if argument is a function.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.fn = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return type === '[object Function]';\n};\n\n\n/***/ }),\n\n/***/ 370:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar is = __webpack_require__(879);\nvar delegate = __webpack_require__(438);\n\n/**\n * Validates all params and calls the right\n * listener function based on its target type.\n *\n * @param {String|HTMLElement|HTMLCollection|NodeList} target\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listen(target, type, callback) {\n if (!target && !type && !callback) {\n throw new Error('Missing required arguments');\n }\n\n if (!is.string(type)) {\n throw new TypeError('Second argument must be a String');\n }\n\n if (!is.fn(callback)) {\n throw new TypeError('Third argument must be a Function');\n }\n\n if (is.node(target)) {\n return listenNode(target, type, callback);\n }\n else if (is.nodeList(target)) {\n return listenNodeList(target, type, callback);\n }\n else if (is.string(target)) {\n return listenSelector(target, type, callback);\n }\n else {\n throw new TypeError('First argument must be a String, HTMLElement, HTMLCollection, or NodeList');\n }\n}\n\n/**\n * Adds an event listener to a HTML element\n * and returns a remove listener function.\n *\n * @param {HTMLElement} node\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNode(node, type, callback) {\n node.addEventListener(type, callback);\n\n return {\n destroy: function() {\n node.removeEventListener(type, callback);\n }\n }\n}\n\n/**\n * Add an event listener to a list of HTML elements\n * and returns a remove listener function.\n *\n * @param {NodeList|HTMLCollection} nodeList\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNodeList(nodeList, type, callback) {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.addEventListener(type, callback);\n });\n\n return {\n destroy: function() {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.removeEventListener(type, callback);\n });\n }\n }\n}\n\n/**\n * Add an event listener to a selector\n * and returns a remove listener function.\n *\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenSelector(selector, type, callback) {\n return delegate(document.body, selector, type, callback);\n}\n\nmodule.exports = listen;\n\n\n/***/ }),\n\n/***/ 817:\n/***/ (function(module) {\n\nfunction select(element) {\n var selectedText;\n\n if (element.nodeName === 'SELECT') {\n element.focus();\n\n selectedText = element.value;\n }\n else if (element.nodeName === 'INPUT' || element.nodeName 
=== 'TEXTAREA') {\n var isReadOnly = element.hasAttribute('readonly');\n\n if (!isReadOnly) {\n element.setAttribute('readonly', '');\n }\n\n element.select();\n element.setSelectionRange(0, element.value.length);\n\n if (!isReadOnly) {\n element.removeAttribute('readonly');\n }\n\n selectedText = element.value;\n }\n else {\n if (element.hasAttribute('contenteditable')) {\n element.focus();\n }\n\n var selection = window.getSelection();\n var range = document.createRange();\n\n range.selectNodeContents(element);\n selection.removeAllRanges();\n selection.addRange(range);\n\n selectedText = selection.toString();\n }\n\n return selectedText;\n}\n\nmodule.exports = select;\n\n\n/***/ }),\n\n/***/ 279:\n/***/ (function(module) {\n\nfunction E () {\n // Keep this empty so it's easier to inherit from\n // (via https://github.com/lipsmack from https://github.com/scottcorgan/tiny-emitter/issues/3)\n}\n\nE.prototype = {\n on: function (name, callback, ctx) {\n var e = this.e || (this.e = {});\n\n (e[name] || (e[name] = [])).push({\n fn: callback,\n ctx: ctx\n });\n\n return this;\n },\n\n once: function (name, callback, ctx) {\n var self = this;\n function listener () {\n self.off(name, listener);\n callback.apply(ctx, arguments);\n };\n\n listener._ = callback\n return this.on(name, listener, ctx);\n },\n\n emit: function (name) {\n var data = [].slice.call(arguments, 1);\n var evtArr = ((this.e || (this.e = {}))[name] || []).slice();\n var i = 0;\n var len = evtArr.length;\n\n for (i; i < len; i++) {\n evtArr[i].fn.apply(evtArr[i].ctx, data);\n }\n\n return this;\n },\n\n off: function (name, callback) {\n var e = this.e || (this.e = {});\n var evts = e[name];\n var liveEvents = [];\n\n if (evts && callback) {\n for (var i = 0, len = evts.length; i < len; i++) {\n if (evts[i].fn !== callback && evts[i].fn._ !== callback)\n liveEvents.push(evts[i]);\n }\n }\n\n // Remove event from queue to prevent memory leak\n // Suggested by https://github.com/lazd\n // Ref: https://github.com/scottcorgan/tiny-emitter/commit/c6ebfaa9bc973b33d110a84a307742b7cf94c953#commitcomment-5024910\n\n (liveEvents.length)\n ? 
e[name] = liveEvents\n : delete e[name];\n\n return this;\n }\n};\n\nmodule.exports = E;\nmodule.exports.TinyEmitter = E;\n\n\n/***/ })\n\n/******/ \t});\n/************************************************************************/\n/******/ \t// The module cache\n/******/ \tvar __webpack_module_cache__ = {};\n/******/ \t\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(__webpack_module_cache__[moduleId]) {\n/******/ \t\t\treturn __webpack_module_cache__[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = __webpack_module_cache__[moduleId] = {\n/******/ \t\t\t// no module.id needed\n/******/ \t\t\t// no module.loaded needed\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/ \t\n/******/ \t\t// Execute the module function\n/******/ \t\t__webpack_modules__[moduleId](module, module.exports, __webpack_require__);\n/******/ \t\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/ \t\n/************************************************************************/\n/******/ \t/* webpack/runtime/compat get default export */\n/******/ \t!function() {\n/******/ \t\t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t\t__webpack_require__.n = function(module) {\n/******/ \t\t\tvar getter = module && module.__esModule ?\n/******/ \t\t\t\tfunction() { return module['default']; } :\n/******/ \t\t\t\tfunction() { return module; };\n/******/ \t\t\t__webpack_require__.d(getter, { a: getter });\n/******/ \t\t\treturn getter;\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/define property getters */\n/******/ \t!function() {\n/******/ \t\t// define getter functions for harmony exports\n/******/ \t\t__webpack_require__.d = function(exports, definition) {\n/******/ \t\t\tfor(var key in definition) {\n/******/ \t\t\t\tif(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {\n/******/ \t\t\t\t\tObject.defineProperty(exports, key, { enumerable: true, get: definition[key] });\n/******/ \t\t\t\t}\n/******/ \t\t\t}\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/hasOwnProperty shorthand */\n/******/ \t!function() {\n/******/ \t\t__webpack_require__.o = function(obj, prop) { return Object.prototype.hasOwnProperty.call(obj, prop); }\n/******/ \t}();\n/******/ \t\n/************************************************************************/\n/******/ \t// module exports must be returned from runtime so entry inlining is disabled\n/******/ \t// startup\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(686);\n/******/ })()\n.default;\n});", "/*\n * Copyright (c) 2016-2024 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF 
ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"focus-visible\"\n\nimport {\n EMPTY,\n NEVER,\n Observable,\n Subject,\n defer,\n delay,\n filter,\n map,\n merge,\n mergeWith,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"./_\"\nimport {\n at,\n getActiveElement,\n getOptionalElement,\n requestJSON,\n setLocation,\n setToggle,\n watchDocument,\n watchKeyboard,\n watchLocation,\n watchLocationTarget,\n watchMedia,\n watchPrint,\n watchScript,\n watchViewport\n} from \"./browser\"\nimport {\n getComponentElement,\n getComponentElements,\n mountAnnounce,\n mountBackToTop,\n mountConsent,\n mountContent,\n mountDialog,\n mountHeader,\n mountHeaderTitle,\n mountPalette,\n mountProgress,\n mountSearch,\n mountSearchHiglight,\n mountSidebar,\n mountSource,\n mountTableOfContents,\n mountTabs,\n watchHeader,\n watchMain\n} from \"./components\"\nimport {\n SearchIndex,\n setupClipboardJS,\n setupInstantNavigation,\n setupVersionSelector\n} from \"./integrations\"\nimport {\n patchEllipsis,\n patchIndeterminate,\n patchScrollfix,\n patchScrolllock\n} from \"./patches\"\nimport \"./polyfills\"\n\n/* ----------------------------------------------------------------------------\n * Functions - @todo refactor\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch search index\n *\n * @returns Search index observable\n */\nfunction fetchSearchIndex(): Observable {\n if (location.protocol === \"file:\") {\n return watchScript(\n `${new URL(\"search/search_index.js\", config.base)}`\n )\n .pipe(\n // @ts-ignore - @todo fix typings\n map(() => __index),\n shareReplay(1)\n )\n } else {\n return requestJSON(\n new URL(\"search/search_index.json\", config.base)\n )\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Application\n * ------------------------------------------------------------------------- */\n\n/* Yay, JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Set up navigation observables and subjects */\nconst document$ = watchDocument()\nconst location$ = watchLocation()\nconst target$ = watchLocationTarget(location$)\nconst keyboard$ = watchKeyboard()\n\n/* Set up media observables */\nconst viewport$ = watchViewport()\nconst tablet$ = watchMedia(\"(min-width: 960px)\")\nconst screen$ = watchMedia(\"(min-width: 1220px)\")\nconst print$ = watchPrint()\n\n/* Retrieve search index, if search is enabled */\nconst config = configuration()\nconst index$ = document.forms.namedItem(\"search\")\n ? 
fetchSearchIndex()\n : NEVER\n\n/* Set up Clipboard.js integration */\nconst alert$ = new Subject()\nsetupClipboardJS({ alert$ })\n\n/* Set up progress indicator */\nconst progress$ = new Subject()\n\n/* Set up instant navigation, if enabled */\nif (feature(\"navigation.instant\"))\n setupInstantNavigation({ location$, viewport$, progress$ })\n .subscribe(document$)\n\n/* Set up version selector */\nif (config.version?.provider === \"mike\")\n setupVersionSelector({ document$ })\n\n/* Always close drawer and search on navigation */\nmerge(location$, target$)\n .pipe(\n delay(125)\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n setToggle(\"search\", false)\n })\n\n/* Set up global keyboard handlers */\nkeyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getOptionalElement(\"link[rel=prev]\")\n if (typeof prev !== \"undefined\")\n setLocation(prev)\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getOptionalElement(\"link[rel=next]\")\n if (typeof next !== \"undefined\")\n setLocation(next)\n break\n\n /* Expand navigation, see https://bit.ly/3ZjG5io */\n case \"Enter\":\n const active = getActiveElement()\n if (active instanceof HTMLLabelElement)\n active.click()\n }\n })\n\n/* Set up patches */\npatchEllipsis({ viewport$, document$ })\npatchIndeterminate({ document$, tablet$ })\npatchScrollfix({ document$ })\npatchScrolllock({ viewport$, tablet$ })\n\n/* Set up header and main area observable */\nconst header$ = watchHeader(getComponentElement(\"header\"), { viewport$ })\nconst main$ = document$\n .pipe(\n map(() => getComponentElement(\"main\")),\n switchMap(el => watchMain(el, { viewport$, header$ })),\n shareReplay(1)\n )\n\n/* Set up control component observables */\nconst control$ = merge(\n\n /* Consent */\n ...getComponentElements(\"consent\")\n .map(el => mountConsent(el, { target$ })),\n\n /* Dialog */\n ...getComponentElements(\"dialog\")\n .map(el => mountDialog(el, { alert$ })),\n\n /* Header */\n ...getComponentElements(\"header\")\n .map(el => mountHeader(el, { viewport$, header$, main$ })),\n\n /* Color palette */\n ...getComponentElements(\"palette\")\n .map(el => mountPalette(el)),\n\n /* Progress bar */\n ...getComponentElements(\"progress\")\n .map(el => mountProgress(el, { progress$ })),\n\n /* Search */\n ...getComponentElements(\"search\")\n .map(el => mountSearch(el, { index$, keyboard$ })),\n\n /* Repository information */\n ...getComponentElements(\"source\")\n .map(el => mountSource(el))\n)\n\n/* Set up content component observables */\nconst content$ = defer(() => merge(\n\n /* Announcement bar */\n ...getComponentElements(\"announce\")\n .map(el => mountAnnounce(el)),\n\n /* Content */\n ...getComponentElements(\"content\")\n .map(el => mountContent(el, { viewport$, target$, print$ })),\n\n /* Search highlighting */\n ...getComponentElements(\"content\")\n .map(el => feature(\"search.highlight\")\n ? mountSearchHiglight(el, { index$, location$ })\n : EMPTY\n ),\n\n /* Header title */\n ...getComponentElements(\"header-title\")\n .map(el => mountHeaderTitle(el, { viewport$, header$ })),\n\n /* Sidebar */\n ...getComponentElements(\"sidebar\")\n .map(el => el.getAttribute(\"data-md-type\") === \"navigation\"\n ? 
at(screen$, () => mountSidebar(el, { viewport$, header$, main$ }))\n : at(tablet$, () => mountSidebar(el, { viewport$, header$, main$ }))\n ),\n\n /* Navigation tabs */\n ...getComponentElements(\"tabs\")\n .map(el => mountTabs(el, { viewport$, header$ })),\n\n /* Table of contents */\n ...getComponentElements(\"toc\")\n .map(el => mountTableOfContents(el, {\n viewport$, header$, main$, target$\n })),\n\n /* Back-to-top button */\n ...getComponentElements(\"top\")\n .map(el => mountBackToTop(el, { viewport$, header$, main$, target$ }))\n))\n\n/* Set up component observables */\nconst component$ = document$\n .pipe(\n switchMap(() => content$),\n mergeWith(control$),\n shareReplay(1)\n )\n\n/* Subscribe to all components */\ncomponent$.subscribe()\n\n/* ----------------------------------------------------------------------------\n * Exports\n * ------------------------------------------------------------------------- */\n\nwindow.document$ = document$ /* Document observable */\nwindow.location$ = location$ /* Location subject */\nwindow.target$ = target$ /* Location target observable */\nwindow.keyboard$ = keyboard$ /* Keyboard observable */\nwindow.viewport$ = viewport$ /* Viewport observable */\nwindow.tablet$ = tablet$ /* Media tablet observable */\nwindow.screen$ = screen$ /* Media screen observable */\nwindow.print$ = print$ /* Media print observable */\nwindow.alert$ = alert$ /* Alert subject */\nwindow.progress$ = progress$ /* Progress indicator subject */\nwindow.component$ = component$ /* Component observable */\n", "/******************************************************************************\nCopyright (c) Microsoft Corporation.\n\nPermission to use, copy, modify, and/or distribute this software for any\npurpose with or without fee is hereby granted.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH\nREGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY\nAND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,\nINDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\nLOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR\nOTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\nPERFORMANCE OF THIS SOFTWARE.\n***************************************************************************** */\n/* global Reflect, Promise, SuppressedError, Symbol, Iterator */\n\nvar extendStatics = function(d, b) {\n extendStatics = Object.setPrototypeOf ||\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\n function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };\n return extendStatics(d, b);\n};\n\nexport function __extends(d, b) {\n if (typeof b !== \"function\" && b !== null)\n throw new TypeError(\"Class extends value \" + String(b) + \" is not a constructor or null\");\n extendStatics(d, b);\n function __() { this.constructor = d; }\n d.prototype = b === null ? 
Object.create(b) : (__.prototype = b.prototype, new __());\n}\n\nexport var __assign = function() {\n __assign = Object.assign || function __assign(t) {\n for (var s, i = 1, n = arguments.length; i < n; i++) {\n s = arguments[i];\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\n }\n return t;\n }\n return __assign.apply(this, arguments);\n}\n\nexport function __rest(s, e) {\n var t = {};\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\n t[p] = s[p];\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\n t[p[i]] = s[p[i]];\n }\n return t;\n}\n\nexport function __decorate(decorators, target, key, desc) {\n var c = arguments.length, r = c < 3 ? target : desc === null ? desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? d(target, key, r) : d(target, key)) || r;\n return c > 3 && r && Object.defineProperty(target, key, r), r;\n}\n\nexport function __param(paramIndex, decorator) {\n return function (target, key) { decorator(target, key, paramIndex); }\n}\n\nexport function __esDecorate(ctor, descriptorIn, decorators, contextIn, initializers, extraInitializers) {\n function accept(f) { if (f !== void 0 && typeof f !== \"function\") throw new TypeError(\"Function expected\"); return f; }\n var kind = contextIn.kind, key = kind === \"getter\" ? \"get\" : kind === \"setter\" ? \"set\" : \"value\";\n var target = !descriptorIn && ctor ? contextIn[\"static\"] ? ctor : ctor.prototype : null;\n var descriptor = descriptorIn || (target ? Object.getOwnPropertyDescriptor(target, contextIn.name) : {});\n var _, done = false;\n for (var i = decorators.length - 1; i >= 0; i--) {\n var context = {};\n for (var p in contextIn) context[p] = p === \"access\" ? {} : contextIn[p];\n for (var p in contextIn.access) context.access[p] = contextIn.access[p];\n context.addInitializer = function (f) { if (done) throw new TypeError(\"Cannot add initializers after decoration has completed\"); extraInitializers.push(accept(f || null)); };\n var result = (0, decorators[i])(kind === \"accessor\" ? { get: descriptor.get, set: descriptor.set } : descriptor[key], context);\n if (kind === \"accessor\") {\n if (result === void 0) continue;\n if (result === null || typeof result !== \"object\") throw new TypeError(\"Object expected\");\n if (_ = accept(result.get)) descriptor.get = _;\n if (_ = accept(result.set)) descriptor.set = _;\n if (_ = accept(result.init)) initializers.unshift(_);\n }\n else if (_ = accept(result)) {\n if (kind === \"field\") initializers.unshift(_);\n else descriptor[key] = _;\n }\n }\n if (target) Object.defineProperty(target, contextIn.name, descriptor);\n done = true;\n};\n\nexport function __runInitializers(thisArg, initializers, value) {\n var useValue = arguments.length > 2;\n for (var i = 0; i < initializers.length; i++) {\n value = useValue ? initializers[i].call(thisArg, value) : initializers[i].call(thisArg);\n }\n return useValue ? value : void 0;\n};\n\nexport function __propKey(x) {\n return typeof x === \"symbol\" ? 
x : \"\".concat(x);\n};\n\nexport function __setFunctionName(f, name, prefix) {\n if (typeof name === \"symbol\") name = name.description ? \"[\".concat(name.description, \"]\") : \"\";\n return Object.defineProperty(f, \"name\", { configurable: true, value: prefix ? \"\".concat(prefix, \" \", name) : name });\n};\n\nexport function __metadata(metadataKey, metadataValue) {\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\n}\n\nexport function __awaiter(thisArg, _arguments, P, generator) {\n function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }\n return new (P || (P = Promise))(function (resolve, reject) {\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\n function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\n step((generator = generator.apply(thisArg, _arguments || [])).next());\n });\n}\n\nexport function __generator(thisArg, body) {\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g = Object.create((typeof Iterator === \"function\" ? Iterator : Object).prototype);\n return g.next = verb(0), g[\"throw\"] = verb(1), g[\"return\"] = verb(2), typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\n function verb(n) { return function (v) { return step([n, v]); }; }\n function step(op) {\n if (f) throw new TypeError(\"Generator is already executing.\");\n while (g && (g = 0, op[0] && (_ = 0)), _) try {\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\n if (y = 0, t) op = [op[0] & 2, t.value];\n switch (op[0]) {\n case 0: case 1: t = op; break;\n case 4: _.label++; return { value: op[1], done: false };\n case 5: _.label++; y = op[1]; op = [0]; continue;\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\n default:\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\n if (t[2]) _.ops.pop();\n _.trys.pop(); continue;\n }\n op = body.call(thisArg, _);\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\n }\n}\n\nexport var __createBinding = Object.create ? (function(o, m, k, k2) {\n if (k2 === undefined) k2 = k;\n var desc = Object.getOwnPropertyDescriptor(m, k);\n if (!desc || (\"get\" in desc ? 
!m.__esModule : desc.writable || desc.configurable)) {\n desc = { enumerable: true, get: function() { return m[k]; } };\n }\n Object.defineProperty(o, k2, desc);\n}) : (function(o, m, k, k2) {\n if (k2 === undefined) k2 = k;\n o[k2] = m[k];\n});\n\nexport function __exportStar(m, o) {\n for (var p in m) if (p !== \"default\" && !Object.prototype.hasOwnProperty.call(o, p)) __createBinding(o, m, p);\n}\n\nexport function __values(o) {\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\n if (m) return m.call(o);\n if (o && typeof o.length === \"number\") return {\n next: function () {\n if (o && i >= o.length) o = void 0;\n return { value: o && o[i++], done: !o };\n }\n };\n throw new TypeError(s ? \"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\n}\n\nexport function __read(o, n) {\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\n if (!m) return o;\n var i = m.call(o), r, ar = [], e;\n try {\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\n }\n catch (error) { e = { error: error }; }\n finally {\n try {\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\n }\n finally { if (e) throw e.error; }\n }\n return ar;\n}\n\n/** @deprecated */\nexport function __spread() {\n for (var ar = [], i = 0; i < arguments.length; i++)\n ar = ar.concat(__read(arguments[i]));\n return ar;\n}\n\n/** @deprecated */\nexport function __spreadArrays() {\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\n r[k] = a[j];\n return r;\n}\n\nexport function __spreadArray(to, from, pack) {\n if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {\n if (ar || !(i in from)) {\n if (!ar) ar = Array.prototype.slice.call(from, 0, i);\n ar[i] = from[i];\n }\n }\n return to.concat(ar || Array.prototype.slice.call(from));\n}\n\nexport function __await(v) {\n return this instanceof __await ? (this.v = v, this) : new __await(v);\n}\n\nexport function __asyncGenerator(thisArg, _arguments, generator) {\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\n return i = Object.create((typeof AsyncIterator === \"function\" ? AsyncIterator : Object).prototype), verb(\"next\"), verb(\"throw\"), verb(\"return\", awaitReturn), i[Symbol.asyncIterator] = function () { return this; }, i;\n function awaitReturn(f) { return function (v) { return Promise.resolve(v).then(f, reject); }; }\n function verb(n, f) { if (g[n]) { i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; if (f) i[n] = f(i[n]); } }\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\n function step(r) { r.value instanceof __await ? Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\n function fulfill(value) { resume(\"next\", value); }\n function reject(value) { resume(\"throw\", value); }\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\n}\n\nexport function __asyncDelegator(o) {\n var i, p;\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? 
{ value: __await(o[n](v)), done: false } : f ? f(v) : v; } : f; }\n}\n\nexport function __asyncValues(o) {\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\n var m = o[Symbol.asyncIterator], i;\n return m ? m.call(o) : (o = typeof __values === \"function\" ? __values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\n}\n\nexport function __makeTemplateObject(cooked, raw) {\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\n return cooked;\n};\n\nvar __setModuleDefault = Object.create ? (function(o, v) {\n Object.defineProperty(o, \"default\", { enumerable: true, value: v });\n}) : function(o, v) {\n o[\"default\"] = v;\n};\n\nexport function __importStar(mod) {\n if (mod && mod.__esModule) return mod;\n var result = {};\n if (mod != null) for (var k in mod) if (k !== \"default\" && Object.prototype.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);\n __setModuleDefault(result, mod);\n return result;\n}\n\nexport function __importDefault(mod) {\n return (mod && mod.__esModule) ? mod : { default: mod };\n}\n\nexport function __classPrivateFieldGet(receiver, state, kind, f) {\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a getter\");\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot read private member from an object whose class did not declare it\");\n return kind === \"m\" ? f : kind === \"a\" ? f.call(receiver) : f ? f.value : state.get(receiver);\n}\n\nexport function __classPrivateFieldSet(receiver, state, value, kind, f) {\n if (kind === \"m\") throw new TypeError(\"Private method is not writable\");\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a setter\");\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot write private member to an object whose class did not declare it\");\n return (kind === \"a\" ? f.call(receiver, value) : f ? f.value = value : state.set(receiver, value)), value;\n}\n\nexport function __classPrivateFieldIn(state, receiver) {\n if (receiver === null || (typeof receiver !== \"object\" && typeof receiver !== \"function\")) throw new TypeError(\"Cannot use 'in' operator on non-object\");\n return typeof state === \"function\" ? 
receiver === state : state.has(receiver);\n}\n\nexport function __addDisposableResource(env, value, async) {\n if (value !== null && value !== void 0) {\n if (typeof value !== \"object\" && typeof value !== \"function\") throw new TypeError(\"Object expected.\");\n var dispose, inner;\n if (async) {\n if (!Symbol.asyncDispose) throw new TypeError(\"Symbol.asyncDispose is not defined.\");\n dispose = value[Symbol.asyncDispose];\n }\n if (dispose === void 0) {\n if (!Symbol.dispose) throw new TypeError(\"Symbol.dispose is not defined.\");\n dispose = value[Symbol.dispose];\n if (async) inner = dispose;\n }\n if (typeof dispose !== \"function\") throw new TypeError(\"Object not disposable.\");\n if (inner) dispose = function() { try { inner.call(this); } catch (e) { return Promise.reject(e); } };\n env.stack.push({ value: value, dispose: dispose, async: async });\n }\n else if (async) {\n env.stack.push({ async: true });\n }\n return value;\n}\n\nvar _SuppressedError = typeof SuppressedError === \"function\" ? SuppressedError : function (error, suppressed, message) {\n var e = new Error(message);\n return e.name = \"SuppressedError\", e.error = error, e.suppressed = suppressed, e;\n};\n\nexport function __disposeResources(env) {\n function fail(e) {\n env.error = env.hasError ? new _SuppressedError(e, env.error, \"An error was suppressed during disposal.\") : e;\n env.hasError = true;\n }\n var r, s = 0;\n function next() {\n while (r = env.stack.pop()) {\n try {\n if (!r.async && s === 1) return s = 0, env.stack.push(r), Promise.resolve().then(next);\n if (r.dispose) {\n var result = r.dispose.call(r.value);\n if (r.async) return s |= 2, Promise.resolve(result).then(next, function(e) { fail(e); return next(); });\n }\n else s |= 1;\n }\n catch (e) {\n fail(e);\n }\n }\n if (s === 1) return env.hasError ? Promise.reject(env.error) : Promise.resolve();\n if (env.hasError) throw env.error;\n }\n return next();\n}\n\nexport default {\n __extends,\n __assign,\n __rest,\n __decorate,\n __param,\n __metadata,\n __awaiter,\n __generator,\n __createBinding,\n __exportStar,\n __values,\n __read,\n __spread,\n __spreadArrays,\n __spreadArray,\n __await,\n __asyncGenerator,\n __asyncDelegator,\n __asyncValues,\n __makeTemplateObject,\n __importStar,\n __importDefault,\n __classPrivateFieldGet,\n __classPrivateFieldSet,\n __classPrivateFieldIn,\n __addDisposableResource,\n __disposeResources,\n};\n", "/**\n * Returns true if the object is a function.\n * @param value The value to check\n */\nexport function isFunction(value: any): value is (...args: any[]) => any {\n return typeof value === 'function';\n}\n", "/**\n * Used to create Error subclasses until the community moves away from ES5.\n *\n * This is because compiling from TypeScript down to ES5 has issues with subclassing Errors\n * as well as other built-in types: https://github.com/Microsoft/TypeScript/issues/12123\n *\n * @param createImpl A factory function to create the actual constructor implementation. 
The returned\n * function should be a named function that calls `_super` internally.\n */\nexport function createErrorClass(createImpl: (_super: any) => any): T {\n const _super = (instance: any) => {\n Error.call(instance);\n instance.stack = new Error().stack;\n };\n\n const ctorFunc = createImpl(_super);\n ctorFunc.prototype = Object.create(Error.prototype);\n ctorFunc.prototype.constructor = ctorFunc;\n return ctorFunc;\n}\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface UnsubscriptionError extends Error {\n readonly errors: any[];\n}\n\nexport interface UnsubscriptionErrorCtor {\n /**\n * @deprecated Internal implementation detail. Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (errors: any[]): UnsubscriptionError;\n}\n\n/**\n * An error thrown when one or more errors have occurred during the\n * `unsubscribe` of a {@link Subscription}.\n */\nexport const UnsubscriptionError: UnsubscriptionErrorCtor = createErrorClass(\n (_super) =>\n function UnsubscriptionErrorImpl(this: any, errors: (Error | string)[]) {\n _super(this);\n this.message = errors\n ? `${errors.length} errors occurred during unsubscription:\n${errors.map((err, i) => `${i + 1}) ${err.toString()}`).join('\\n ')}`\n : '';\n this.name = 'UnsubscriptionError';\n this.errors = errors;\n }\n);\n", "/**\n * Removes an item from an array, mutating it.\n * @param arr The array to remove the item from\n * @param item The item to remove\n */\nexport function arrRemove(arr: T[] | undefined | null, item: T) {\n if (arr) {\n const index = arr.indexOf(item);\n 0 <= index && arr.splice(index, 1);\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { UnsubscriptionError } from './util/UnsubscriptionError';\nimport { SubscriptionLike, TeardownLogic, Unsubscribable } from './types';\nimport { arrRemove } from './util/arrRemove';\n\n/**\n * Represents a disposable resource, such as the execution of an Observable. A\n * Subscription has one important method, `unsubscribe`, that takes no argument\n * and just disposes the resource held by the subscription.\n *\n * Additionally, subscriptions may be grouped together through the `add()`\n * method, which will attach a child Subscription to the current Subscription.\n * When a Subscription is unsubscribed, all its children (and its grandchildren)\n * will be unsubscribed as well.\n *\n * @class Subscription\n */\nexport class Subscription implements SubscriptionLike {\n /** @nocollapse */\n public static EMPTY = (() => {\n const empty = new Subscription();\n empty.closed = true;\n return empty;\n })();\n\n /**\n * A flag to indicate whether this Subscription has already been unsubscribed.\n */\n public closed = false;\n\n private _parentage: Subscription[] | Subscription | null = null;\n\n /**\n * The list of registered finalizers to execute upon unsubscription. Adding and removing from this\n * list occurs in the {@link #add} and {@link #remove} methods.\n */\n private _finalizers: Exclude[] | null = null;\n\n /**\n * @param initialTeardown A function executed first as part of the finalization\n * process that is kicked off when {@link #unsubscribe} is called.\n */\n constructor(private initialTeardown?: () => void) {}\n\n /**\n * Disposes the resources held by the subscription. 
May, for instance, cancel\n * an ongoing Observable execution or cancel any other type of work that\n * started when the Subscription was created.\n * @return {void}\n */\n unsubscribe(): void {\n let errors: any[] | undefined;\n\n if (!this.closed) {\n this.closed = true;\n\n // Remove this from it's parents.\n const { _parentage } = this;\n if (_parentage) {\n this._parentage = null;\n if (Array.isArray(_parentage)) {\n for (const parent of _parentage) {\n parent.remove(this);\n }\n } else {\n _parentage.remove(this);\n }\n }\n\n const { initialTeardown: initialFinalizer } = this;\n if (isFunction(initialFinalizer)) {\n try {\n initialFinalizer();\n } catch (e) {\n errors = e instanceof UnsubscriptionError ? e.errors : [e];\n }\n }\n\n const { _finalizers } = this;\n if (_finalizers) {\n this._finalizers = null;\n for (const finalizer of _finalizers) {\n try {\n execFinalizer(finalizer);\n } catch (err) {\n errors = errors ?? [];\n if (err instanceof UnsubscriptionError) {\n errors = [...errors, ...err.errors];\n } else {\n errors.push(err);\n }\n }\n }\n }\n\n if (errors) {\n throw new UnsubscriptionError(errors);\n }\n }\n }\n\n /**\n * Adds a finalizer to this subscription, so that finalization will be unsubscribed/called\n * when this subscription is unsubscribed. If this subscription is already {@link #closed},\n * because it has already been unsubscribed, then whatever finalizer is passed to it\n * will automatically be executed (unless the finalizer itself is also a closed subscription).\n *\n * Closed Subscriptions cannot be added as finalizers to any subscription. Adding a closed\n * subscription to a any subscription will result in no operation. (A noop).\n *\n * Adding a subscription to itself, or adding `null` or `undefined` will not perform any\n * operation at all. (A noop).\n *\n * `Subscription` instances that are added to this instance will automatically remove themselves\n * if they are unsubscribed. Functions and {@link Unsubscribable} objects that you wish to remove\n * will need to be removed manually with {@link #remove}\n *\n * @param teardown The finalization logic to add to this subscription.\n */\n add(teardown: TeardownLogic): void {\n // Only add the finalizer if it's not undefined\n // and don't add a subscription to itself.\n if (teardown && teardown !== this) {\n if (this.closed) {\n // If this subscription is already closed,\n // execute whatever finalizer is handed to it automatically.\n execFinalizer(teardown);\n } else {\n if (teardown instanceof Subscription) {\n // We don't add closed subscriptions, and we don't add the same subscription\n // twice. Subscription unsubscribe is idempotent.\n if (teardown.closed || teardown._hasParent(this)) {\n return;\n }\n teardown._addParent(this);\n }\n (this._finalizers = this._finalizers ?? 
May be passed\n * some context object, `state`, which will be passed to the `work` function.\n *\n * The given arguments will be processed an stored as an Action object in a\n * queue of actions.\n *\n * @param {function(state: ?T): ?Subscription} work A function representing a\n * task, or some unit of work to be executed by the Scheduler.\n * @param {number} [delay] Time to wait before executing the work, where the\n * time unit is implicit and defined by the Scheduler itself.\n * @param {T} [state] Some contextual data that the `work` function uses when\n * called by the Scheduler.\n * @return {Subscription} A subscription in order to be able to unsubscribe\n * the scheduled work.\n */\n public schedule(work: (this: SchedulerAction, state?: T) => void, delay: number = 0, state?: T): Subscription {\n return new this.schedulerActionCtor(this, work).schedule(state, delay);\n }\n}\n", "import { Scheduler } from '../Scheduler';\nimport { Action } from './Action';\nimport { AsyncAction } from './AsyncAction';\nimport { TimerHandle } from './timerHandle';\n\nexport class AsyncScheduler extends Scheduler {\n public actions: Array> = [];\n /**\n * A flag to indicate whether the Scheduler is currently executing a batch of\n * queued actions.\n * @type {boolean}\n * @internal\n */\n public _active: boolean = false;\n /**\n * An internal ID used to track the latest asynchronous task such as those\n * coming from `setTimeout`, `setInterval`, `requestAnimationFrame`, and\n * others.\n * @type {any}\n * @internal\n */\n public _scheduled: TimerHandle | undefined;\n\n constructor(SchedulerAction: typeof Action, now: () => number = Scheduler.now) {\n super(SchedulerAction, now);\n }\n\n public flush(action: AsyncAction): void {\n const { actions } = this;\n\n if (this._active) {\n actions.push(action);\n return;\n }\n\n let error: any;\n this._active = true;\n\n do {\n if ((error = action.execute(action.state, action.delay))) {\n break;\n }\n } while ((action = actions.shift()!)); // exhaust the scheduler queue\n\n this._active = false;\n\n if (error) {\n while ((action = actions.shift()!)) {\n action.unsubscribe();\n }\n throw error;\n }\n }\n}\n", "import { AsyncAction } from './AsyncAction';\nimport { AsyncScheduler } from './AsyncScheduler';\n\n/**\n *\n * Async Scheduler\n *\n * Schedule task as if you used setTimeout(task, duration)\n *\n * `async` scheduler schedules tasks asynchronously, by putting them on the JavaScript\n * event loop queue. 
It is best used to delay tasks in time or to schedule tasks repeating\n * in intervals.\n *\n * If you just want to \"defer\" task, that is to perform it right after currently\n * executing synchronous code ends (commonly achieved by `setTimeout(deferredTask, 0)`),\n * better choice will be the {@link asapScheduler} scheduler.\n *\n * ## Examples\n * Use async scheduler to delay task\n * ```ts\n * import { asyncScheduler } from 'rxjs';\n *\n * const task = () => console.log('it works!');\n *\n * asyncScheduler.schedule(task, 2000);\n *\n * // After 2 seconds logs:\n * // \"it works!\"\n * ```\n *\n * Use async scheduler to repeat task in intervals\n * ```ts\n * import { asyncScheduler } from 'rxjs';\n *\n * function task(state) {\n * console.log(state);\n * this.schedule(state + 1, 1000); // `this` references currently executing Action,\n * // which we reschedule with new state and delay\n * }\n *\n * asyncScheduler.schedule(task, 3000, 0);\n *\n * // Logs:\n * // 0 after 3s\n * // 1 after 4s\n * // 2 after 5s\n * // 3 after 6s\n * ```\n */\n\nexport const asyncScheduler = new AsyncScheduler(AsyncAction);\n\n/**\n * @deprecated Renamed to {@link asyncScheduler}. Will be removed in v8.\n */\nexport const async = asyncScheduler;\n", "import { AsyncAction } from './AsyncAction';\nimport { Subscription } from '../Subscription';\nimport { QueueScheduler } from './QueueScheduler';\nimport { SchedulerAction } from '../types';\nimport { TimerHandle } from './timerHandle';\n\nexport class QueueAction extends AsyncAction {\n constructor(protected scheduler: QueueScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n public schedule(state?: T, delay: number = 0): Subscription {\n if (delay > 0) {\n return super.schedule(state, delay);\n }\n this.delay = delay;\n this.state = state;\n this.scheduler.flush(this);\n return this;\n }\n\n public execute(state: T, delay: number): any {\n return delay > 0 || this.closed ? super.execute(state, delay) : this._execute(state, delay);\n }\n\n protected requestAsyncId(scheduler: QueueScheduler, id?: TimerHandle, delay: number = 0): TimerHandle {\n // If delay exists and is greater than 0, or if the delay is null (the\n // action wasn't rescheduled) but was originally scheduled as an async\n // action, then recycle as an async action.\n\n if ((delay != null && delay > 0) || (delay == null && this.delay > 0)) {\n return super.requestAsyncId(scheduler, id, delay);\n }\n\n // Otherwise flush the scheduler starting with this action.\n scheduler.flush(this);\n\n // HACK: In the past, this was returning `void`. However, `void` isn't a valid\n // `TimerHandle`, and generally the return value here isn't really used. So the\n // compromise is to return `0` which is both \"falsy\" and a valid `TimerHandle`,\n // as opposed to refactoring every other instanceo of `requestAsyncId`.\n return 0;\n }\n}\n", "import { AsyncScheduler } from './AsyncScheduler';\n\nexport class QueueScheduler extends AsyncScheduler {\n}\n", "import { QueueAction } from './QueueAction';\nimport { QueueScheduler } from './QueueScheduler';\n\n/**\n *\n * Queue Scheduler\n *\n * Put every next task on a queue, instead of executing it immediately\n *\n * `queue` scheduler, when used with delay, behaves the same as {@link asyncScheduler} scheduler.\n *\n * When used without delay, it schedules given task synchronously - executes it right when\n * it is scheduled. 
However when called recursively, that is when inside the scheduled task,\n * another task is scheduled with queue scheduler, instead of executing immediately as well,\n * that task will be put on a queue and wait for current one to finish.\n *\n * This means that when you execute task with `queue` scheduler, you are sure it will end\n * before any other task scheduled with that scheduler will start.\n *\n * ## Examples\n * Schedule recursively first, then do something\n * ```ts\n * import { queueScheduler } from 'rxjs';\n *\n * queueScheduler.schedule(() => {\n * queueScheduler.schedule(() => console.log('second')); // will not happen now, but will be put on a queue\n *\n * console.log('first');\n * });\n *\n * // Logs:\n * // \"first\"\n * // \"second\"\n * ```\n *\n * Reschedule itself recursively\n * ```ts\n * import { queueScheduler } from 'rxjs';\n *\n * queueScheduler.schedule(function(state) {\n * if (state !== 0) {\n * console.log('before', state);\n * this.schedule(state - 1); // `this` references currently executing Action,\n * // which we reschedule with new state\n * console.log('after', state);\n * }\n * }, 0, 3);\n *\n * // In scheduler that runs recursively, you would expect:\n * // \"before\", 3\n * // \"before\", 2\n * // \"before\", 1\n * // \"after\", 1\n * // \"after\", 2\n * // \"after\", 3\n *\n * // But with queue it logs:\n * // \"before\", 3\n * // \"after\", 3\n * // \"before\", 2\n * // \"after\", 2\n * // \"before\", 1\n * // \"after\", 1\n * ```\n */\n\nexport const queueScheduler = new QueueScheduler(QueueAction);\n\n/**\n * @deprecated Renamed to {@link queueScheduler}. Will be removed in v8.\n */\nexport const queue = queueScheduler;\n", "import { AsyncAction } from './AsyncAction';\nimport { AnimationFrameScheduler } from './AnimationFrameScheduler';\nimport { SchedulerAction } from '../types';\nimport { animationFrameProvider } from './animationFrameProvider';\nimport { TimerHandle } from './timerHandle';\n\nexport class AnimationFrameAction extends AsyncAction {\n constructor(protected scheduler: AnimationFrameScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n protected requestAsyncId(scheduler: AnimationFrameScheduler, id?: TimerHandle, delay: number = 0): TimerHandle {\n // If delay is greater than 0, request as an async action.\n if (delay !== null && delay > 0) {\n return super.requestAsyncId(scheduler, id, delay);\n }\n // Push the action to the end of the scheduler queue.\n scheduler.actions.push(this);\n // If an animation frame has already been requested, don't request another\n // one. If an animation frame hasn't been requested yet, request one. Return\n // the current animation frame request id.\n return scheduler._scheduled || (scheduler._scheduled = animationFrameProvider.requestAnimationFrame(() => scheduler.flush(undefined)));\n }\n\n protected recycleAsyncId(scheduler: AnimationFrameScheduler, id?: TimerHandle, delay: number = 0): TimerHandle | undefined {\n // If delay exists and is greater than 0, or if the delay is null (the\n // action wasn't rescheduled) but was originally scheduled as an async\n // action, then recycle as an async action.\n if (delay != null ? 
delay > 0 : this.delay > 0) {\n return super.recycleAsyncId(scheduler, id, delay);\n }\n // If the scheduler queue has no remaining actions with the same async id,\n // cancel the requested animation frame and set the scheduled flag to\n // undefined so the next AnimationFrameAction will request its own.\n const { actions } = scheduler;\n if (id != null && actions[actions.length - 1]?.id !== id) {\n animationFrameProvider.cancelAnimationFrame(id as number);\n scheduler._scheduled = undefined;\n }\n // Return undefined so the action knows to request a new async id if it's rescheduled.\n return undefined;\n }\n}\n", "import { AsyncAction } from './AsyncAction';\nimport { AsyncScheduler } from './AsyncScheduler';\n\nexport class AnimationFrameScheduler extends AsyncScheduler {\n public flush(action?: AsyncAction): void {\n this._active = true;\n // The async id that effects a call to flush is stored in _scheduled.\n // Before executing an action, it's necessary to check the action's async\n // id to determine whether it's supposed to be executed in the current\n // flush.\n // Previous implementations of this method used a count to determine this,\n // but that was unsound, as actions that are unsubscribed - i.e. cancelled -\n // are removed from the actions array and that can shift actions that are\n // scheduled to be executed in a subsequent flush into positions at which\n // they are executed within the current flush.\n const flushId = this._scheduled;\n this._scheduled = undefined;\n\n const { actions } = this;\n let error: any;\n action = action || actions.shift()!;\n\n do {\n if ((error = action.execute(action.state, action.delay))) {\n break;\n }\n } while ((action = actions[0]) && action.id === flushId && actions.shift());\n\n this._active = false;\n\n if (error) {\n while ((action = actions[0]) && action.id === flushId && actions.shift()) {\n action.unsubscribe();\n }\n throw error;\n }\n }\n}\n", "import { AnimationFrameAction } from './AnimationFrameAction';\nimport { AnimationFrameScheduler } from './AnimationFrameScheduler';\n\n/**\n *\n * Animation Frame Scheduler\n *\n * Perform task when `window.requestAnimationFrame` would fire\n *\n * When `animationFrame` scheduler is used with delay, it will fall back to {@link asyncScheduler} scheduler\n * behaviour.\n *\n * Without delay, `animationFrame` scheduler can be used to create smooth browser animations.\n * It makes sure scheduled task will happen just before next browser content repaint,\n * thus performing animations as efficiently as possible.\n *\n * ## Example\n * Schedule div height animation\n * ```ts\n * // html:
\n * import { animationFrameScheduler } from 'rxjs';\n *\n * const div = document.querySelector('div');\n *\n * animationFrameScheduler.schedule(function(height) {\n * div.style.height = height + \"px\";\n *\n * this.schedule(height + 1); // `this` references currently executing Action,\n * // which we reschedule with new state\n * }, 0, 0);\n *\n * // You will see a div element growing in height\n * ```\n */\n\nexport const animationFrameScheduler = new AnimationFrameScheduler(AnimationFrameAction);\n\n/**\n * @deprecated Renamed to {@link animationFrameScheduler}. Will be removed in v8.\n */\nexport const animationFrame = animationFrameScheduler;\n", "import { Observable } from '../Observable';\nimport { SchedulerLike } from '../types';\n\n/**\n * A simple Observable that emits no items to the Observer and immediately\n * emits a complete notification.\n *\n * Just emits 'complete', and nothing else.\n *\n * ![](empty.png)\n *\n * A simple Observable that only emits the complete notification. It can be used\n * for composing with other Observables, such as in a {@link mergeMap}.\n *\n * ## Examples\n *\n * Log complete notification\n *\n * ```ts\n * import { EMPTY } from 'rxjs';\n *\n * EMPTY.subscribe({\n * next: () => console.log('Next'),\n * complete: () => console.log('Complete!')\n * });\n *\n * // Outputs\n * // Complete!\n * ```\n *\n * Emit the number 7, then complete\n *\n * ```ts\n * import { EMPTY, startWith } from 'rxjs';\n *\n * const result = EMPTY.pipe(startWith(7));\n * result.subscribe(x => console.log(x));\n *\n * // Outputs\n * // 7\n * ```\n *\n * Map and flatten only odd numbers to the sequence `'a'`, `'b'`, `'c'`\n *\n * ```ts\n * import { interval, mergeMap, of, EMPTY } from 'rxjs';\n *\n * const interval$ = interval(1000);\n * const result = interval$.pipe(\n * mergeMap(x => x % 2 === 1 ? of('a', 'b', 'c') : EMPTY),\n * );\n * result.subscribe(x => console.log(x));\n *\n * // Results in the following to the console:\n * // x is equal to the count on the interval, e.g. (0, 1, 2, 3, ...)\n * // x will occur every 1000ms\n * // if x % 2 is equal to 1, print a, b, c (each on its own)\n * // if x % 2 is not equal to 1, nothing will be output\n * ```\n *\n * @see {@link Observable}\n * @see {@link NEVER}\n * @see {@link of}\n * @see {@link throwError}\n */\nexport const EMPTY = new Observable((subscriber) => subscriber.complete());\n\n/**\n * @param scheduler A {@link SchedulerLike} to use for scheduling\n * the emission of the complete notification.\n * @deprecated Replaced with the {@link EMPTY} constant or {@link scheduled} (e.g. `scheduled([], scheduler)`). Will be removed in v8.\n */\nexport function empty(scheduler?: SchedulerLike) {\n return scheduler ? emptyScheduled(scheduler) : EMPTY;\n}\n\nfunction emptyScheduled(scheduler: SchedulerLike) {\n return new Observable((subscriber) => scheduler.schedule(() => subscriber.complete()));\n}\n", "import { SchedulerLike } from '../types';\nimport { isFunction } from './isFunction';\n\nexport function isScheduler(value: any): value is SchedulerLike {\n return value && isFunction(value.schedule);\n}\n", "import { SchedulerLike } from '../types';\nimport { isFunction } from './isFunction';\nimport { isScheduler } from './isScheduler';\n\nfunction last(arr: T[]): T | undefined {\n return arr[arr.length - 1];\n}\n\nexport function popResultSelector(args: any[]): ((...args: unknown[]) => unknown) | undefined {\n return isFunction(last(args)) ? 
args.pop() : undefined;\n}\n\nexport function popScheduler(args: any[]): SchedulerLike | undefined {\n return isScheduler(last(args)) ? args.pop() : undefined;\n}\n\nexport function popNumber(args: any[], defaultValue: number): number {\n return typeof last(args) === 'number' ? args.pop()! : defaultValue;\n}\n", "export const isArrayLike = ((x: any): x is ArrayLike => x && typeof x.length === 'number' && typeof x !== 'function');", "import { isFunction } from \"./isFunction\";\n\n/**\n * Tests to see if the object is \"thennable\".\n * @param value the object to test\n */\nexport function isPromise(value: any): value is PromiseLike {\n return isFunction(value?.then);\n}\n", "import { InteropObservable } from '../types';\nimport { observable as Symbol_observable } from '../symbol/observable';\nimport { isFunction } from './isFunction';\n\n/** Identifies an input as being Observable (but not necessary an Rx Observable) */\nexport function isInteropObservable(input: any): input is InteropObservable {\n return isFunction(input[Symbol_observable]);\n}\n", "import { isFunction } from './isFunction';\n\nexport function isAsyncIterable(obj: any): obj is AsyncIterable {\n return Symbol.asyncIterator && isFunction(obj?.[Symbol.asyncIterator]);\n}\n", "/**\n * Creates the TypeError to throw if an invalid object is passed to `from` or `scheduled`.\n * @param input The object that was passed.\n */\nexport function createInvalidObservableTypeError(input: any) {\n // TODO: We should create error codes that can be looked up, so this can be less verbose.\n return new TypeError(\n `You provided ${\n input !== null && typeof input === 'object' ? 'an invalid object' : `'${input}'`\n } where a stream was expected. You can provide an Observable, Promise, ReadableStream, Array, AsyncIterable, or Iterable.`\n );\n}\n", "export function getSymbolIterator(): symbol {\n if (typeof Symbol !== 'function' || !Symbol.iterator) {\n return '@@iterator' as any;\n }\n\n return Symbol.iterator;\n}\n\nexport const iterator = getSymbolIterator();\n", "import { iterator as Symbol_iterator } from '../symbol/iterator';\nimport { isFunction } from './isFunction';\n\n/** Identifies an input as being an Iterable */\nexport function isIterable(input: any): input is Iterable {\n return isFunction(input?.[Symbol_iterator]);\n}\n", "import { ReadableStreamLike } from '../types';\nimport { isFunction } from './isFunction';\n\nexport async function* readableStreamLikeToAsyncGenerator(readableStream: ReadableStreamLike): AsyncGenerator {\n const reader = readableStream.getReader();\n try {\n while (true) {\n const { value, done } = await reader.read();\n if (done) {\n return;\n }\n yield value!;\n }\n } finally {\n reader.releaseLock();\n }\n}\n\nexport function isReadableStreamLike(obj: any): obj is ReadableStreamLike {\n // We don't want to use instanceof checks because they would return\n // false for instances from another Realm, like an