
An introduction to Linux through Windows Subsystem for Linux

I'm working as an Undergraduate Learning Assistant and wrote this guide to help out students who were in the same boat I was in when I first took my university's intro to computer science course. It provides an overview of how to get started using Linux, guides you through setting up Windows Subsystem for Linux to run smoothly on Windows 10, and provides a very basic introduction to Linux. Students seemed to dig it, so I figured it'd help some people in here as well. I've never posted here before, so apologies if I'm unknowingly violating subreddit rules.

An introduction to Linux through Windows Subsystem for Linux

GitHub Pages link

Introduction and motivation

tl;dr skip to next section
So you're thinking of installing a Linux distribution, and are unsure where to start. Or you're an unfortunate soul using Windows 10 in CPSC 201. Either way, this guide is for you. In this section I'll give a very basic intro to some of the options you've got at your disposal, and explain why I chose Windows Subsystem for Linux among them. All of these have plenty of documentation online, so Google if in doubt.

Setting up WSL

So if you've read this far I've convinced you to use WSL. Let's get started with setting it up. The very basics are outlined in Microsoft's guide here; I'll be covering what they talk about and diving into some other stuff.

1. Installing WSL

Press the Windows key (henceforth Winkey) and type in PowerShell. Right-click the icon and select run as administrator. Next, paste in this command:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart 
Now you'll want to perform a hard shutdown on your computer. This can become unnecessarily complicated because of Windows' fast startup feature, but here we go. First try pressing the Winkey, clicking on the power icon, and selecting Shut Down while holding down the shift key. Let go of the shift key and the mouse, and let it shut down. Great! Now open up Command Prompt and type in
wsl --help 
If you get a large text output, WSL has been successfully enabled on your machine. If nothing happens, your computer failed at performing a hard shutdown, in which case you can try the age-old technique of just holding down your computer's power button until the computer turns itself off. Make sure you don't have any unsaved documents open when you do this.

2. Installing Ubuntu

Great! Now that you've got WSL installed, let's download a Linux distro. Press the Winkey and type in Microsoft Store. Now use the store's search icon and type in Ubuntu. Ubuntu is a Debian-based Linux distribution, and seems to have the best integration with WSL, so that's what we'll be going for. If you want to be quirky, here are some other options. Once you type in Ubuntu three options should pop up: Ubuntu, Ubuntu 20.04 LTS, and Ubuntu 18.04 LTS.
![Windows Store](https://theshepord.github.io/intro-to-WSL/docs/images/winstore.png) Installing plain-old "Ubuntu" will mean the app updates whenever a new major Ubuntu distribution is released. The current version (as of 09/02/2020) is Ubuntu 20.04.1 LTS. The other two are older distributions of Ubuntu. For most use-cases, i.e. unless you're running some software that will break when upgrading, you'll want to pick the regular Ubuntu option. That's what I did.
Once that's done installing, again hit Winkey and open up Ubuntu. A console window should open up, asking you to wait a minute or two for files to de-compress and be stored on your PC. All future launches should take less than a second. It'll then prompt you to create a username and password. I'd recommend sticking to whatever your Windows username and password is so that you don't have to juggle two different username/password combinations, but it's up to you.
Finally, to upgrade all your packages, type in
sudo apt-get update 
And then
sudo apt-get upgrade 
apt-get is the Ubuntu package manager; this is what you'll be using to install additional programs on WSL.
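For example, if you wanted to install the git version control system (just an example package, swap in whatever you need), you'd run:
sudo apt-get install git 
apt-get figures out the dependencies for you and asks for confirmation before installing.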

3. Making things nice and crispy: an introduction to UNIX-based filesystems

tl;dr skip to the next section
The two above steps are technically all you need for running WSL on your system. However, you may notice that whenever you open up the Ubuntu app your current folder seems to be completely random. If you type in pwd (for Print Working Directory; 'directory' is synonymous with 'folder') inside Ubuntu and hit enter, you'll likely get some output akin to /home/<username>. Where is this folder? Is it my home folder? Type in ls (for LiSt) to see what files are in this folder. Probably you won't get any output, because surprise surprise this folder is not your Windows home folder and is in fact empty. (Okay, it's actually not empty, which we'll see in a bit: if you type in ls -a, a for All, you'll see other files, but notice they have a period in front of them. This is a convention for specifying files that should be hidden by default, and ls, as well as most other commands, will honor this convention. Anyways.)
So where is my Windows home folder? Is WSL completely separate from Windows? Nope! This is Windows Subsystem for Linux after all. Notice how, when you typed pwd earlier, the address you got was /home/<username>. Notice that forward-slash right before home. That forward-slash indicates the root directory (not to be confused with the /root directory), which is the directory at the top of the directory hierarchy and contains all other directories in your system. So if we type ls /, you'll see the top-most directories in your system. Okay, great. They have a bunch of seemingly random names. Except, shocker, they aren't random. I've provided a quick run-down in Appendix A.
For now, though, we'll focus on /mnt, which stands for mount. This is where your C drive, which contains all your Windows stuff, is mounted. So if you type ls /mnt/c, you'll begin to notice some familiar folders. Type in ls /mnt/c/Users, and voilà, there's your Windows home folder. Remember this filepath, /mnt/c/Users/<username>. When we open up Ubuntu, we don't want it tossing us in this random /home/<username> directory, we want our Windows home folder. Let's change that!
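Here's roughly what that exploration looks like in practice (with <username> standing in for your actual usernames, since yours will differ):
pwd                          # prints something like /home/<username> 
ls /mnt/c                    # familiar Windows folders: Users, Windows, Program Files... 
ls /mnt/c/Users              # your Windows home folder lives in here 
cd /mnt/c/Users/<username>   # hop into it 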

4. Changing your default home folder

Type in sudo vim /etc/passwd. You'll likely be prompted for your Ubuntu password. sudo is a command that gives you root privileges in bash (akin to right-clicking and selecting 'Run as administrator' in Windows). vim is a command-line text-editing tool, which out-of-the-box functions kind of like a crummy Notepad (you can customize it infinitely though, and some people have insane vim setups. Appendix B has more info). /etc/passwd is a plaintext file that historically was used to store passwords back when encryption wasn't a big deal, but now instead stores essential user info used every time you open up WSL.
Anyway, once you've typed that in, your shell should look something like this: ![vim /etc/passwd](https://theshepord.github.io/intro-to-WSL/docs/images/vim-etc-passwd.png)
Using arrow-keys, find the entry that begins with your Ubuntu username. It should be towards the bottom of the file. In my case, the line looks like
theshep:x:1000:1000:,,,:/home/pizzatron3000:/bin/bash 
See that cringy, crummy /home/pizzatron3000? Not only do I regret that username to this day, it's also not where we want our home directory. Let's change that! Press i to initiate vim's -- INSERT -- mode. Use arrow-keys to navigate to that section, and delete the /home/pizzatron3000 part by holding down backspace. Remember that filepath I asked you to remember? /mnt/c/Users/<username>. Type that in. For me, the line now looks like
theshep:x:1000:1000:,,,:/mnt/c/Users/lucas:/bin/bash 
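For reference, here's a quick breakdown of the colon-separated fields in an /etc/passwd entry, using my line above as the example:
# username : password placeholder : user ID : group ID : GECOS (comment) : home directory     : login shell 
# theshep  : x                    : 1000    : 1000     : ,,,             : /mnt/c/Users/lucas : /bin/bash 
The only field we're touching is the home directory.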
Next, press esc to exit insert mode, then type in the following:
:wq 
The : tells vim you're inputting a command, w means write, and q means quit. If you've screwed up any of the above sections, you can also type in :q! to exit vim without saving the file. Just remember to exit insert mode by pressing esc before inputting commands, else you'll instead be writing to the file.
Great! If you now open up a new terminal and type in pwd, you should be in your Windows home folder! However, things seem to be lacking their usual color...

5. Importing your configuration files into the new home directory

Your home folder contains all your Ubuntu and bash configuration files. However, since we just changed the home folder to your Windows home folder, we've lost these configuration files. Let's bring them back! These configuration files are hidden inside /home/<username>/, and they all start with a . in front of the filename. So let's copy them over into your new home directory! Type in the following:
cp -r /home/<username>/. ~ 
cp stands for CoPy, -r stands for recursive (i.e. descend into directories), the /. at the end is cp-specific syntax that lets it copy everything, including hidden files, and the ~ is a quick way of writing your home directory's filepath (which would be /mnt/c/Users/<username>) without having to type all that in again. Once you've run this, all your configuration files should now be present in your new home directory. Configuration files like .bashrc, .profile, and .bash_profile essentially provide commands that are run whenever you open a new shell. So now, if you open a new shell, everything should be working normally. Amazing. We're done!
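You can sanity-check that the copy worked by listing the hidden files in your new home directory:
ls -a ~    # should now show .bashrc, .profile, and friends 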

6. Tips & tricks

Here are two handy commands you can add to your .profile file. Run vim ~/.profile, then type these in at the top of the .profile file, one per line, using the commands we discussed previously (i to enter insert mode, esc to exit insert mode, :wq to save and quit).
alias rm='rm -i' makes it so that the rm command will always ask for confirmation when you're deleting a file. rm, for ReMove, is like a Windows delete except literally permanent: you will lose that data for good, so it's nice to have this extra safeguard. You can type rm -f to bypass the confirmation. Linux can be super powerful, but with great power comes great responsibility. NEVER NEVER NEVER type in rm -rf /, this is saying 'delete literally everything and don't ask for confirmation', and your computer will die. Newer versions of rm fail when you type this in, but don't push your luck. You've been warned. Be careful.
export DISPLAY=:0: if you install VcXsrv (launched through its XLaunch utility), this line allows you to open graphical interfaces through Ubuntu. The export sets the environment variable DISPLAY, and the :0 tells Ubuntu that it should use the localhost display.
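Putting the two together, the top of your ~/.profile would look something like this sketch:
# ask for confirmation before permanently deleting files 
alias rm='rm -i' 
# let graphical programs find the X server provided by VcXsrv 
export DISPLAY=:0 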

Appendix A: brief intro to top-level UNIX directories

tl;dr only mess with /mnt, /home, and maybe maybe /usr. Don't touch anything else.
  • bin: binaries, contains Ubuntu binary (aka executable) files that are used in bash. Here you'll find the binaries that execute commands like ls and pwd. Similar to /usr/bin, but bin gets loaded earlier in the booting process so it contains the most important commands.
  • boot: contains information for operating system booting. Empty in WSL, because WSL isn't an operating system.
  • dev: devices, provides files that allow Ubuntu to communicate with I/O devices. One useful file here is /dev/null, which is basically an information black hole that automatically deletes any data you pass it (there's a small example of this after the list).
  • etc: historically short for et cetera; it contains system-wide configuration files
  • home: equivalent to Windows' C:\Users folder, contains home folders for the different users. In an Ubuntu system, under /home/<username> you'd find the Documents folder, Downloads folder, etc.
  • lib: libraries used by the system
  • lib64: 64-bit libraries used by the system
  • mnt: mount, where your drives are located
  • opt: third-party applications that (usually) don't have any dependencies outside the scope of their own package
  • proc: process information, contains runtime information about your system (e.g. memory, mounted devices, hardware configurations, etc)
  • run: directory for programs to store runtime information.
  • srv: server folder, holds data to be served in protocols like ftp, www, cvs, and others
  • sys: system, provides information about different I/O devices to the Linux kernel. If dev files allow you to access I/O devices, sys files tell you information about these devices.
  • tmp: temporary, these are system runtime files that are (in most Linux distros) cleared out after every reboot. It's also sort of deprecated for security reasons, and programs will generally prefer to use run.
  • usr: contains additional UNIX commands, header files for compiling C programs, among other things. Kind of like bin but for less important programs. Most of everything you install using apt-get ends up here.
  • var: variable, contains variable data such as logs, databases, e-mail etc, but that persist across different boots.
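If you want to poke at a few of these directories yourself, here's a harmless little tour (including the /dev/null example mentioned above):
ls /                                 # list the top-level directories described above 
echo "into the void" > /dev/null     # anything written to /dev/null just disappears 
head /proc/meminfo                   # /proc exposes live system info, like memory usage 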
Also keep in mind that all of this is just convention. No Linux distribution needs to follow this file structure, and in fact almost all will deviate from what I just described. Hell, you could make your own Linux fork where /mnt/c information is stored in tmp.

Appendix B: random resources

EDIT: implemented various changes suggested in the comments. Thanks all!
submitted by HeavenBuilder to linux4noobs

Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

Introduction to the manual
This manual is made to guide you step by step in setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing and why you will be doing it, all in one convenient manual that is made for Windows users. If you'd rather try it on Linux or macOS, we did add the commands necessary to get the CodeReady Containers to run on those operating systems. Be warned, however: there are some system requirements that are necessary to run the CodeReady Containers that we will be using. These requirements are specified within the chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform and has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual for Linux or MacOS we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container Platform makes use of CodeReady Containers. CodeReady Containers are pre-configured containers that can be used for developing and testing purposes. There are also CodeReady Workspaces; these workspaces are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because it helps programmers and developers build their applications faster thanks to CodeReady Containers and CodeReady Workspaces, and it also allows them to test their applications in the same environment. One of the advantages provided by OpenShift is efficient container orchestration, which allows for faster container provisioning, deployment and management. It does this by streamlining and automating the management process.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual some knowledge is mandatory. Because most of the commands are done within the command line interface, it is necessary to know how it works and how you can browse through files/folders. If you either don't have this basic knowledge or have trouble with the basic command line interface commands from PowerShell, then a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:
https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system’s documentation or introduction guides. Though the documentation can be overwhelming by the sheer amount of commands.
Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
macOS
https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge there are also some things that can be helpful to know, just to make the use of OpenShift a bit simpler. This consists of some general knowledge on PaaS and related tooling like Docker and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

The Red Hat OpenShift CodeReady Containers have the following minimum hardware requirements:
Hardware requirements
Code Ready Containers requires the following system resources:
● 4 virtual CPUs (vCPUs)
● 9 GB of free random-access memory
● 35 GB of storage space
● A physical CPU with Hyper-V (Intel) or SVM mode (AMD); this has to be enabled in the BIOS
Software requirements
The Red Hat OpenShift CodeReady Containers have the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

Required additional software packages for Linux

The CodeReady Containers on Linux require the libvirt and Network Manager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
  • Fedora: sudo dnf install NetworkManager
  • Red Hat Enterprise Linux/CentOS: su -c 'yum install NetworkManager'
  • Debian/Ubuntu: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager

Installation

Getting started with the installation

To install CodeReady Containers a few steps must be undertaken. Because an OpenShift account is necessary to use the application, this will be the first step. An account can be made on "https://www.openshift.com/", where you need to press login and after that select the option "Create one now".
After making an account the next step is to download the latest release of CodeReady Containers and the pull secret on "https://cloud.redhat.com/openshift/install/crc/installer-provisioned". Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved because it is needed later.
The command line interface has to be opened before we can continue with the installation. For Windows we will use PowerShell. All the commands we use during the installation procedure of this guide are going to be done in this command line interface unless stated otherwise. To be able to run the commands within the command line interface, use the command line interface to go to the location in your $PATH where you extracted the CodeReady zip.
If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps, please confirm that the correct and up-to-date crc binary is in use by checking it with the $crc version command; this should provide you with the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run the CodeReady Containers and create the ~/.crc directory if it did not previously exist. In the process you have to supply your pull secret; once this process is completed you have to reboot your system. When the system has restarted you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes you need to delete the virtual machine with the $crc delete command and create a new virtual machine and start that one with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers. So, to prevent data loss we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, you need to keep in mind that it is not possible to make any changes to the virtual machine. For this tutorial however it is not necessary to change the configuration, if you don’t want to make any changes please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
*It is possible that you will get a nameserver error later on; if this is the case, please start it with* crc start -n 1.1.1.1

Configuration

It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those that wish to do so and know what they are doing. However, for macOS and Linux it is necessary to change the DNS settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command requires a subcommand; the available subcommands for this binary and virtual machine are:
● get, this command allows you to see the values of a configurable property
● set/unset, this command can be used for 2 things: to display the names of, or to set and/or unset the values of, several options and parameters. These parameters being:
○ Shell options
○ Shell attributes
○ Positional parameters
● view, this command starts the configuration in read-only mode.
These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks that are done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this potential issue, you can set the value of a property that starts with skip-check or warn-check to true to skip the check or issue a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get 
C:\Users\[username]\$PATH>crc config set 
C:\Users\[username]\$PATH>crc config unset 
C:\Users\[username]\$PATH>crc config view 
C:\Users\[username]\$PATH>crc config --help 

Configuring the Virtual Machine

You can use the cpus and memory properties to configure the default number of vCPUs and the amount of memory available for the virtual machine.
To increase the number of vCPUs available to the virtual machine, use $crc config set cpus <number>. Keep in mind that the default number of vCPUs is 4, and the number of vCPUs you wish to assign must be equal to or greater than the default value.
To increase the memory available to the virtual machine, use $crc config set memory <size-in-MiB>. Keep in mind that the default amount of memory is 9216 mebibytes (MiB), and the amount of memory you wish to assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set cpus <number> 
C:\Users\[username]\$PATH>crc config set memory <size-in-MiB> 
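For example, to give the virtual machine 6 vCPUs and 12 GiB of memory before starting it (a sketch; the lowercase property names here follow current crc releases, so double-check with $crc config --help if yours differs):
crc config set cpus 6        # must be >= the default of 4 
crc config set memory 12288  # in MiB, must be >= the default of 9216 
crc start 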

Configuring the DNS

Window / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers, these are:
● crc.testing, this is the domain for the core OpenShift services.
● apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing crc setup. This command automatically adjusts the DNS configuration on the system. When executing crc start, additional checks will be executed to verify the configuration.

macOS DNS setup

macOS expects the following DNS configuration for the CodeReady Containers:
● The CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires the following CodeReady Containers entry to function properly: api.crc.testing, an entry added to /etc/hosts pointing at the VM IP address.

Linux DNS setup

CodeReady Containers expects a slightly different DNS configuration on Linux. CodeReady Containers expects NetworkManager to manage networking. On Linux, NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf.
To set it up properly, the dnsmasq instance has to forward the requests for the crc.testing and apps-crc.testing domains to "192.168.130.11". In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following:
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11
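Put together, a minimal sketch of that configuration file (using the 192.168.130.11 address given above) would be:
# /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf 
server=/crc.testing/192.168.130.11 
server=/apps-crc.testing/192.168.130.11 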

Accessing the Openshift Cluster

Accessing the Openshift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine, you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift cluster can be accessed through the OpenShift web console or the client binary (oc).
First you need to execute the $crc console command; this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as a developer user with the output provided by the crc start command.
It is also possible to view the password for kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through the kubeadmin and developer users, it should be noted that the kubeadmin user should only be used for administrative tasks such as user management and the developer user for creating projects or OpenShift applications and the deployment of these applications.
C:\Users\[username]\$PATH>crc console 
C:\Users\[username]\$PATH>crc console --credentials 

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env 
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH" 
# Run this command to configure your shell: 
# & crc oc-env | Invoke-Expression 
This means we have to execute the command that the output gives us; in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression 
*This has to be executed every time you start; a solution is to move the oc binary to the same path as the crc binary.*
To test if this step went correctly, execute the following command; if it returns without errors, oc is set up properly.
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to log in as a developer user; this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that the $crc start will provide you with the password that is needed to login with the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
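To confirm the login worked, you can ask the cluster who you are (a quick sanity check, not part of the original steps):
oc whoami    # should print: developer 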
Step 4
The oc can now be used to interact with your OpenShift cluster. If you for instance want to verify if the OpenShift cluster Operators are available, you can execute the command
$oc get co 
Keep in mind that by default the CodeReady Containers disables the functions provided by the machine-config and monitoring Operators.
C:\Users\[username]\$PATH>oc get co 

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform, however within the current version of CodeReady Containers this has been disabled.
Lastly, we will show the user how to use user management within the platform.

Creating a project

To be able to create a project within the console you have to login on the cluster. If you have not yet done this, this can be done by running the command crc console in the command line and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching between users can be done with the drop-down menu at the top left.
Now that you are properly logged in press the dropdown menu shown in the image below, from there click on create a project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with the display name CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The Containers in OpenShift Container Platform are based on OCI or Docker formatted images. An image is a binary that contains everything needed to run a container as well as the metadata of the requirements needed for the container.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- https://hub.docker.com/
- https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to “create a registry service account”, after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied we will go to the topology view and click on the YAML button
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, put in the name, namespace and your pull secret name (which you created through your registry account) and click on create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within powershell
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm 
imagestream.image.openshift.io/mediawiki imported 
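To check that the import succeeded, you can list the image stream it created (a quick verification step, not part of the original instructions):
oc get imagestream mediawiki    # should list the imported image stream and its tags 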

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an application from the previously imported image, go back to the console and the topology view. From here on, select container image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the option image you'll want to select the “image stream tag from internal registry” option. Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creating process you should see the following, this means that the application is successfully running.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling, namely vertical scaling and horizontal scaling. Vertical scaling means adding more CPU and disk to the same machine, and is no longer supported by OpenShift. Horizontal scaling means increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By either pressing the up or down arrow more pods of the same application can be added. This is similar to horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application, the more you scale it up, the more resources it will take up.
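If you prefer the command line over the console, the same horizontal scaling can be done with oc scale. A sketch, assuming the deployment created earlier is named mediawiki (yours may differ):
oc scale deployment/mediawiki --replicas=3   # run three pods of the application 
oc get pods                                  # watch the extra pods come up 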

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94

Network

Since the OpenShift Container Platform is built on Kubernetes, it might be interesting to know some theory about its networking. Kubernetes ensures that the Pods within OpenShift can communicate with each other via the network and assigns them their own IP address. This makes all containers within the Pod behave as if they were on the same host. By giving each pod its own IP address, pods can be treated as physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services such as front-end and back-end services, the OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The Route is not the only thing that can be changed and or configured. Two other options that might be interesting but will not be demonstrated in this manual are:
- Ingress controller: within OpenShift it is possible to set your own certificate. A user must have a certificate / key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies: by default all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create Network Policy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete Network Policy objects within their own project.
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
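The same kind of route can also be created from the command line. A sketch, assuming the service is named mediawiki and the route name is up to you:
oc expose service/mediawiki --name=mediawiki-route   # create a route exposing the service 
oc get routes                                        # list routes and their hostnames 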
Storage
OpenShift makes use of Persistent Storage; this type of storage uses persistent volume claims (PVCs). PVCs allow the developer to make persistent volumes without needing any knowledge about the underlying infrastructure.
Within this storage there are a few configuration options.
It is, however, important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data will not be automatically deleted with it, and you therefore cannot reassign the storage to another PV yet.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV, this can be done by executing the following command
$oc delete pv <pv-name> 
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset or, if you wish to reuse the same storage asset, you can now create a PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift, to do this you would need to follow the following steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PVs in your cluster and will display the following attributes: Name, Capacity, Access Modes, Reclaim Policy, Status, Claim, Storage Class, Reason and Age.
Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' 
In this example the reclaim policy will be changed to Retain.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}' 
In this example the reclaim policy will be changed to Recycle.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' 
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner. Check if you are logged in as Developer and click on "Monitoring". Normally this function is not activated within the CodeReady Containers, because it uses a lot of resources (RAM and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2

User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. This can be a developer developing applications or an administrator managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group's members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform. This default denies access for all the usernames and passwords.
First, we're going to create a new user. The way this is done depends on the identity provider, and on the mapping method used as part of the identity provider configuration.
For more information on what mapping methods are and how they function, see:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps will be as follows:
$oc create user <username> 
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity <identity-provider>:<identity-provider-username> 
The <identity-provider> is the name of the identity provider in the master configuration. For example, the following commands create an Identity with identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s 
Create a user/identity mapping for the created user and identity:
$oc create useridentitymapping <identity-provider>:<identity-provider-username> <username> 
For example, the following command maps the identity to the user:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
Now we're going to assign a role to this new user; this can be done by executing the following command:
$oc create clusterrolebinding <binding-name> \ --clusterrole=<role> --user=<username> 
There is a --clusterrole option that can be used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller \ --clusterrole=cluster-admin --user=admin 
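To verify that the binding exists and see which role and users it ties together (a quick check, not part of the original steps):
oc get clusterrolebinding registry-controller 
oc describe clusterrolebinding registry-controller 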

What did you achieve?

If you followed all the steps within this manual you now should have a functioning Mediawiki Application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
There is the possibility that your CodeReady Containers can't connect to the internet due to a nameserver error. When this is encountered, a working fix for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V it might be because your user is not an admin and therefore can’t access the Hyper-V admin user group.
  1. Click Start > Control Panel > Administration Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.

Terms and definitions

These terms and definitions will be expanded upon; below you can see an example of how this is going to look, together with a few terms that will require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. OpenShift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenter/en/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to openshift

CLI & GUI v0.16.0.3 'Nitrogen Nebula' released!

This is the CLI & GUI v0.16.0.3 'Nitrogen Nebula' point release. This release predominantly features bug fixes and performance improvements.

(Direct) download links (GUI)

(Direct) download links (CLI)

GPG signed hashes

We encourage users to check the integrity of the binaries and verify that they were signed by binaryFate's GPG key. A guide that walks you through this process can be found here for Windows and here for Linux and Mac OS X.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

# This GPG-signed message exists to confirm the SHA256 sums of Monero binaries.
#
# Please verify the signature against the key for binaryFate in the
# source code repository (/utils/gpg_keys).
#
#
## CLI
75b198869a3a117b13b9a77b700afe5cee54fd86244e56cb59151d545adbbdfd monero-android-armv7-v0.16.0.3.tar.bz2
b48918a167b0961cdca524fad5117247239d7e21a047dac4fc863253510ccea1 monero-android-armv8-v0.16.0.3.tar.bz2
727a1b23fbf517bf2f1878f582b3f5ae5c35681fcd37bb2560f2e8ea204196f3 monero-freebsd-x64-v0.16.0.3.tar.bz2
6df98716bb251257c3aab3cf1ab2a0e5b958ecf25dcf2e058498783a20a84988 monero-linux-armv7-v0.16.0.3.tar.bz2
6849446764e2a8528d172246c6b385495ac60fffc8d73b44b05b796d5724a926 monero-linux-armv8-v0.16.0.3.tar.bz2
cb67ad0bec9a342b0f0be3f1fdb4a2c8d57a914be25fc62ad432494779448cc3 monero-linux-x64-v0.16.0.3.tar.bz2
49aa85bb59336db2de357800bc796e9b7d94224d9c3ebbcd205a8eb2f49c3f79 monero-linux-x86-v0.16.0.3.tar.bz2
16a5b7d8dcdaff7d760c14e8563dd9220b2e0499c6d0d88b3e6493601f24660d monero-mac-x64-v0.16.0.3.tar.bz2
5d52712827d29440d53d521852c6af179872c5719d05fa8551503d124dec1f48 monero-win-x64-v0.16.0.3.zip
ff094c5191b0253a557be5d6683fd99e1146bf4bcb99dc8824bd9a64f9293104 monero-win-x86-v0.16.0.3.zip
#
## GUI
50fe1d2dae31deb1ee542a5c2165fc6d6c04b9a13bcafde8a75f23f23671d484 monero-gui-install-win-x64-v0.16.0.3.exe
20c03ddb1c82e1bcb73339ef22f409e5850a54042005c6e97e42400f56ab2505 monero-gui-linux-x64-v0.16.0.3.tar.bz2
574a84148ee6af7119fda6b9e2859e8e9028fe8a8eec4dfdd196aeade47e9c90 monero-gui-mac-x64-v0.16.0.3.dmg
371cb4de2c9ccb5ed99b2622068b6aeea5bdfc7b9805340ea7eb92e7c17f2478 monero-gui-win-x64-v0.16.0.3.zip
#
#
# ~binaryFate
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAl81bL8ACgkQ8K9NRioL
35J+UA//bgY6Mhikh8Cji8i2bmGXEmGvvWMAHJiAtAG2lgW3BT9BHAFMfEpUP5rk
svFNsUY/Uurtzxwc/myTPWLzvXVMHzaWJ/EMKV9/C3xrDzQxRnl/+HRS38aT/D+N
gaDjchCfk05NHRIOWkO3+2Erpn3gYZ/VVacMo3KnXnQuMXvAkmT5vB7/3BoosOU+
B1Jg5vPZFCXyZmPiMQ/852Gxl5FWi0+zDptW0jrywaS471L8/ZnIzwfdLKgMO49p
Fek1WUUy9emnnv66oITYOclOKoC8IjeL4E1UHSdTnmysYK0If0thq5w7wIkElDaV
avtDlwqp+vtiwm2svXZ08rqakmvPw+uqlYKDSlH5lY9g0STl8v4F3/aIvvKs0bLr
My2F6q9QeUnCZWgtkUKsBy3WhqJsJ7hhyYd+y+sBFIQH3UVNv5k8XqMIXKsrVgmn
lRSolLmb1pivCEohIRXl4SgY9yzRnJT1OYHwgsNmEC5T9f019QjVPsDlGNwjqgqB
S+Theb+pQzjOhqBziBkRUJqJbQTezHoMIq0xTn9j4VsvRObYNtkuuBQJv1wPRW72
SPJ53BLS3WkeKycbJw3TO9r4BQDPoKetYTE6JctRaG3pSG9VC4pcs2vrXRWmLhVX
QUb0V9Kwl9unD5lnN17dXbaU3x9Dc2pF62ZAExgNYfuCV/pTJmc=
=bbBm
-----END PGP SIGNATURE-----
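As a sketch of how that verification might look on Linux (assuming you saved the signed message above to a file named hashes.txt and have already imported binaryFate's key from the source repository):
gpg --verify hashes.txt                            # checks binaryFate's signature on the hash list 
sha256sum monero-gui-linux-x64-v0.16.0.3.tar.bz2   # compare the output against the hash listed above 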

Upgrading (GUI)

Note that you should be able to utilize the automatic updater in the GUI that was recently added. A pop-up will appear with the new binary.
In case you want to update manually, you ought to perform the following steps:
  1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the direct download links in this thread or from the official website. If you run active AV (AntiVirus) software, I'd recommend applying this guide -> https://monero.stackexchange.com/questions/10798/my-antivirus-av-software-blocks-quarantines-the-monero-gui-wallet-is-there
  2. Extract the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux) you just downloaded) to a new directory / folder of your liking.
  3. Open monero-wallet-gui. It should automatically load your "old" wallet.
If, for some reason, the GUI doesn't automatically load your old wallet, you can open it as follows:
[1] On the second page of the wizard (first page is language selection) choose Open a wallet from file
[2] Now select your initial / original wallet. Note that, by default, the wallet files are located in Documents\Monero\ (Windows), Users/<username>/Monero/ (Mac OS X), or home/<username>/Monero/ (Linux).
Lastly, note that a blockchain resync is not needed, i.e., it will simply pick up where it left off.

Upgrading (CLI)

You ought to perform the following steps:
  1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the official website, the direct download links in this thread, or Github.
  2. Extract the new binaries to a new directory of your liking.
  3. Copy over the wallet files from the old directory (i.e. the v0.15.x.x or v0.16.0.x directory).
  4. Start monerod and monero-wallet-cli (in case you have to use your wallet).
Note that a blockchain resync is not needed. Thus, if you open monerod-v0.16.0.3, it will simply pick up where it left off.

Release notes (GUI)

  • macOS app is now notarized by Apple
  • CMake improvements
  • Add support for IPv6 remote nodes
  • Add command history to Logs page
  • Add "Donate to Monero" button
  • Indicate probability of finding a block on Mining page
  • Minor bug fixes
Note that you can find a full change log here.

Release notes (CLI)

  • DoS fixes
  • Add option to print daily coin emission and fees in monero-blockchain-stats
  • Minor bug fixes
Note that you can find a full change log here.

Further remarks

  • A guide on pruning can be found here.
  • Ledger Monero users, please be aware that version 1.6.0 of the Ledger Monero App is required in order to properly use CLI or GUI v0.16.

Guides on how to get started (GUI)

https://github.com/monero-ecosystem/monero-GUI-guide/blob/master/monero-GUI-guide.md
Older guides: (These were written for older versions, but are still somewhat applicable)
Sheep’s Noob guide to Monero GUI in Tails
https://medium.com/@Electricsheep56/the-monero-gui-wallet-broken-down-in-plain-english-bd2889b8c202

Ledger GUI guides:

How do I generate a Ledger Monero wallet with the GUI (monero-wallet-gui)?
How do I restore / recreate my Ledger Monero wallet?

Trezor GUI guides:

How do I generate a Trezor Monero wallet with the GUI (monero-wallet-gui)?
How to use Monero with Trezor - by Trezor
How do I restore / recreate my Trezor Monero wallet?

Ledger & Trezor CLI guides

Guides to resolve common issues (GUI)

My antivirus (AV) software blocks / quarantines the Monero GUI wallet, is there a work around I can utilize?
I am missing (not seeing) a transaction to (in) the GUI (zero balance)
Transaction stuck as “pending” in the GUI
How do I move the blockchain (data.mdb) to a different directory during (or after) the initial sync without losing the progress?
I am using the GUI and my daemon doesn't start anymore
My GUI feels buggy / freezes all the time
The GUI uses all my bandwidth and I can't browse anymore or use another application that requires internet connection
How do I change the language of the 25 word mnemonic seed in the GUI or CLI?
I am using remote node, but the GUI still syncs blockchain?

Using the GUI with a remote node

In the wizard, you can either select Simple mode or Simple mode (bootstrap) to utilize this functionality. Note that the GUI developers / contributors recommend to use Simple mode (bootstrap) as this mode will eventually use your own (local) node, thereby contributing to the strength and decentralization of the network. Lastly, if you manually want to set a remote node, you ought to use Advanced mode. A guide can be found here:
https://www.getmonero.org/resources/user-guides/remote_node_gui.html

Adding a new language to the GUI

https://github.com/monero-ecosystem/monero-translations/blob/master/weblate.md
If, after reading all these guides, you still require help, please post your issue in this thread and describe it in as much detail as possible. Also, feel free to post any other guides that could help people.
submitted by dEBRUYNE_1 to Monero

VFXALERT

BUSINESS ADDRESS:
51 Bakehouse Rd
Kensington,Victoria
3031
Phone:
0381189320
Website:
https://vfxalert.com/
Keywords:
vfx binary signals,vfx binary options signals,vfx signals,vfx crypto signals,vfx forex signals,binary option trading signals,vfx software,free binary signals,free binary options signals,binary option signals,binary signal,forex signals live,binary options trading signals,free binary options signals providers,best binary options signals,free signals for binary options,binary options,signals for binary options,binary options signals,tradingsignals for binary options,binary options trading signals,binary options trading,signals for binary options online,free binary options signals,binary options trading on the news
BUSINESS DESCRIPTION:
The vfxAlert software provides a full range of analytical tools online, a convenient interface for working with any broker. In one working window, we show the most necessary data in order to correctly assess the situation on the market. The vfxAlert signals include direct binary signals, online charts, trend indicator, market news. You can use binary options signals online, in a browser window, without downloading the vfxAlert application.
submitted by VFXALERT5 to u/VFXALERT5

An introduction to Linux through Windows Subsystem for Linux

I'm working as an Undergraduate Learning Assistant and wrote this guide to help out students who were in the same boat I was in when I first took my university's intro to computer science course. It provides an overview of how to get started using Linux, guides you through setting up Windows Subsystem for Linux to run smoothly on Windows 10, and provides a very basic introduction to Linux. Students seemed to dig it, so I figured it'd help some people in here as well. I've never posted here before, so apologies if I'm unknowingly violating subreddit rules.

Getting Windows Subsystem for Linux running smoothly on Windows 10

GitHub Pages link

Introduction and motivation

tl;dr skip to next section
So you're thinking of installing a Linux distribution, and are unsure where to start. Or you're an unfortunate soul using Windows 10 in CPSC 201. Either way, this guide is for you. In this section I'll give a very basic intro to some of the options you've got at your disposal, and explain why I chose Windows Subsystem for Linux among them. All of these have plenty of documentation online, so Google if in doubt.

Setting up WSL

So if you've read this far I've convinced you to use WSL. Let's get started with setting it up. The very basics are outlined in Microsoft's guide here, I'll be covering what they talk about and diving into some other stuff.

1. Installing WSL

Press the Windows key (henceforth Winkey) and type in PowerShell. Right-click the icon and select run as administrator. Next, paste in this command:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart 
Now you'll want to perform a hard shutdown on your computer. This can become unnecessarily complicated because of Windows' fast startup feature, but here we go. First try pressing the Winkey, clicking on the power icon, and selecting Shut Down while holding down the shift key. Let go of the shift key and the mouse, and let it shut down. Great! Now open up Command Prompt and type in
wsl --help 
If you get a large text output, WSL has been successfully enabled on your machine. If nothing happens, your computer failed at performing a hard shutdown, in which case you can try the age-old technique of just holding down your computer's power button until the computer turns itself off. Make sure you don't have any unsaved documents open when you do this.

2. Installing Ubuntu

Great! Now that you've got WSL installed, let's download a Linux distro. Press the Winkey and type in Microsoft Store. Now use the store's search icon and type in Ubuntu. Ubuntu is a Debian-based Linux distribution, and seems to have the best integration with WSL, so that's what we'll be going for. If you want to be quirky, here are some other options. Once you type in Ubuntu three options should pop up: Ubuntu, Ubuntu 20.04 LTS, and Ubuntu 18.04 LTS.
![Windows Store](https://theshepord.github.io/intro-to-WSL/docs/images/winstore.png) Installing plain old "Ubuntu" will mean the app updates whenever a new major Ubuntu distribution is released. The current version (as of 09/02/2020) is Ubuntu 20.04.1 LTS. The other two are older distributions of Ubuntu. For most use cases (i.e. unless you're running some software that will break when upgrading), you'll want to pick the regular Ubuntu option. That's what I did.
Once that's done installing, again hit Winkey and open up Ubuntu. A console window should open up, asking you to wait a minute or two for files to decompress and be stored on your PC. All future launches should take less than a second. It'll then prompt you to create a username and password. I'd recommend sticking to whatever your Windows username and password is so that you don't have to juggle around two different user/password combinations, but it's up to you.
Finally, to upgrade all your packages, type in
sudo apt-get update 
And then
sudo apt-get upgrade 
apt-get is Ubuntu's package manager; it's what you'll be using to install additional programs on WSL.
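For example, a couple of hedged one-liners (gcc and git are arbitrary example packages, not ones this guide requires):
sudo apt-get update            # refresh the package index first
sudo apt-get install gcc git   # then install whatever you need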

3. Making things nice and crispy: an introduction to UNIX-based filesystems

tl;dr skip to the next section
The two above steps are technically all you need to run WSL on your system. However, you may notice that whenever you open up the Ubuntu app, your current folder seems to be completely random. If you type in pwd (for Print Working Directory; 'directory' is synonymous with 'folder') inside Ubuntu and hit enter, you'll likely get some output akin to /home/<username>. Where is this folder? Is it my home folder? Type in ls (for LiSt) to see what files are in this folder. You probably won't get any output, because surprise surprise this folder is not your Windows home folder and is in fact empty (okay, it's actually not empty, which we'll see in a bit: if you type in ls -a, -a for All, you'll see other files, but notice they have a period in front of them, which tells bash that they should be hidden by default. Anyways).
So where is my Windows home folder? Is WSL completely separate from Windows? Nope! This is Windows Subsystem for Linux, after all. Notice how, when you typed pwd earlier, the address you got was /home/<username>. Notice that forward-slash right before home. That forward-slash indicates the root directory (not to be confused with the /root directory), which is the directory at the top of the directory hierarchy and contains all other directories in your system. So if we type ls /, you'll see the top-most directories in your system. Okay, great. They have a bunch of seemingly random names. Except, shocker, they aren't random. I've provided a quick run-down in Appendix A.
For now, though, we'll focus on /mnt, which stands for mount. This is where your C drive, which contains all your Windows stuff, is mounted. So if you type ls /mnt/c, you'll begin to notice some familiar folders. Type in ls /mnt/c/Users, and voilà, there's your Windows home folder. Remember this filepath, /mnt/c/Users/<username>. When we open up Ubuntu, we don't want it tossing us into that random /home/<username> directory, we want our Windows home folder. Let's change that!
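Putting this section together, a quick sanity-check session might look like this (lucas is just an example username, and your output will differ):
pwd                  # /home/lucas -- the WSL-internal home
ls /                 # bin  boot  dev  etc  home  lib  mnt  ...
ls /mnt/c/Users      # Public  lucas  ...your Windows user folders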

4. Changing your default home folder

Type in sudo vim /etc/passwd. You'll likely be prompted for your Ubuntu password. sudo is a command that gives you root privileges in bash (akin to Windows's right-click then selecting 'Run as administrator'). vim is a command-line text-editing tool, kinda like an even crummier Notepad, which is a pain to use at first, but bear with me and we can pull through. /etc/passwd is a plaintext file that does not store passwords, as the name would suggest, but rather stores essential user info used every time you open up WSL.
Anyway, once you've typed that in, your shell should look something like this: ![vim /etc/passwd](https://theshepord.github.io/intro-to-WSL/docs/images/vim-etc-passwd.png)
Using arrow-keys, find the entry that begins with your Ubuntu username. It should be towards the bottom of the file. In my case, the line looks like
theshep:x:1000:1000:,,,:/home/pizzatron3000:/bin/bash 
See that cringy, crummy /home/pizzatron3000? Not only do I regret that username to this day, it's also not where we want our home directory. Let's change that! Press i to initiate vim's -- INSERT -- mode. Use arrow-keys to navigate to that section, and delete /home/pizzatron3000 (your old home path) by holding down backspace. Remember that filepath I asked you to remember? /mnt/c/Users/<username>. Type that in. For me, the line now looks like
theshep:x:1000:1000:,,,:/mnt/c/Users/lucas:/bin/bash 
Next, press esc to exit insert mode, then type in the following:
:wq 
The : tells vim you're inputting a command, w means write, and q means quit. If you've screwed up any of the above steps, you can also type in :q! to exit vim without saving the file. Just remember to exit insert mode by pressing esc before inputting commands, or else you'll instead be writing into the file.
Great! If you now open up a new terminal and type in pwd, you should be in your Windows home folder! However, things seem to be lacking their usual color...
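As an aside, the same change can usually be made with a single command instead of hand-editing /etc/passwd. This is a sketch, not part of the original guide: usermod is a standard Linux tool, but it refuses to modify a logged-in user, so you'd need to start the distro as root first (e.g. wsl -u root from PowerShell):
usermod -d /mnt/c/Users/<username> <your-ubuntu-username>   # repoint the account's home directory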

5. Importing your configuration files into the new home directory

Your home folder contains all your Ubuntu and bash configuration files. However, since we just changed the home folder to your Windows home folder, we've lost track of these configuration files. Let's bring them back! These configuration files live inside /home/<username>, and they all start with a . in front of the filename. So let's copy them over into your new home directory! Type in the following:
cp -r /home/<username>/. ~ 
cp stands for CoPy, -r stands for recursive (i.e. descend into directories), and the trailing /. means "everything inside that folder, including the hidden dot-files" (a bare * would actually skip files whose names start with ., which is exactly what we're trying to copy). The ~ is a quick way of writing your home directory's filepath (which would be /mnt/c/Users/<username>) without having to type all that in again. Once you've run this, all your configuration files should be present in your new home directory. Configuration files like .bashrc, .profile, and .bash_profile essentially provide commands that are run whenever you open a new shell. So now, if you open a new shell, everything should be working normally. Amazing. We're done!
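To double-check that the copy worked, you can list the hidden files in your new home; something like this (output will vary):
ls -a ~   # should now show .bashrc, .profile, and friends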

6. Tips & tricks

Here are two handy commands you can add to your .profile file. Run vim ~/.profile, then type these in at the top of the .profile file, one per line, using the commands we discussed previously (i to enter insert mode, esc to exit insert mode, :wq to save and quit).
alias rm='rm -i' makes it so that the rm command will always ask for confirmation when you're deleting a file. rm, for ReMove, is like a Windows delete except literally permanent: you will lose that data for good, so it's nice to have this extra safeguard. You can type rm -f to bypass the confirmation. Linux can be super powerful, but with great power comes great responsibility. NEVER NEVER NEVER type in rm -rf /; this says 'delete literally everything and don't ask for confirmation', and your computer will die. You've been warned. Be careful.
export DISPLAY=:0 lets you open graphical interfaces through Ubuntu if you install VcXsrv (launched via its XLaunch wizard). The export sets the environment variable DISPLAY, and the :0 tells Ubuntu that it should use the localhost display.
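Put together, the top of your ~/.profile would gain these two lines (a sketch; the comments are just annotations):
alias rm='rm -i'    # always ask before deleting
export DISPLAY=:0   # route GUI apps to the local X server (VcXsrv)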

Appendix A: overview of top-level UNIX directories

tl;dr only mess with /mnt, /home, and maybe maybe /usr. Don't touch anything else.
  • bin: binaries, contains Ubuntu binary (aka executable) files that are used in bash. Here you'll find the binaries that execute commands like ls and pwd. Similar to /usr/bin, but bin gets loaded earlier in the booting process so it contains the most important commands.
  • boot: contains information for operating system booting. Empty in WSL, because WSL isn't an operating system.
  • dev: devices, contains information for Ubuntu to communicate with I/O devices. One useful file here is /dev/null, which is basically an information black hole that automatically discards any data you pass it (see the short example after this list).
  • etc: short for et cetera (historically a catch-all directory); it contains system-wide configuration files
  • home: equivalent to Windows's C:\Users folder, contains home folders for the different users. In an Ubuntu system, under /home/<username> you'd find the Documents folder, Downloads folder, etc.
  • lib: libraries used by the system
  • lib64: 64-bit libraries used by the system
  • mnt: mount, where your drives are located
  • opt: third-party applications that don't have any dependencies outside the scope of their own package
  • proc: process information, contains details about your Linux system, kind of like Windows's C:/Windows folder
  • run: directory for programs to store runtime information. Similarly to /bin vs /usr/bin, run has the same function as /var/run, but gets loaded sooner in the boot process.
  • srv: server folder, holds data to be served in protocols like ftp, www, cvs, and others
  • sys: system, used by the Linux kernel to set or obtain information about the host system
  • tmp: temporary, runtime files that are cleared out after every reboot. Kinda like RAM in that way.
  • usr: contains additional UNIX commands, header files for compiling C programs, among other things. Most of everything you install using apt-get ends up here.
  • var: variable, contains variable data such as logs, databases and e-mail that persists across different boots.
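As promised in the /dev entry above, here's a tiny /dev/null example; it just silences a noisy command (the command itself is an arbitrary stand-in):
ls /etc > /dev/null 2>&1   # both output and errors vanish into the black hole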

Appendix B: random resources

submitted by HeavenBuilder to learnprogramming [link] [comments]

[META] The Rules and their Entirety

These are the rules: everything that pertains to everyone who wishes to make any sort of interaction within this sub. Per the last META, clarity has been given in regards to bulk-type sales. Since EVERYTHING is here for you all to read, we expect there to be fewer issues with rule infractions and less general confusion as to what's acceptable and what isn't. We devote our time and energy to this sub, which never quite manages to reach a balance amongst its users.
Our goal is to ensure the subreddit itself sticks around, along with trying to keep the userbase from being taken advantage of. Our rules make sense to some and none to others, but they serve a purpose. Regardless of how you feel, these are the rules, and it is expected that they be followed. At the time this post becomes visible, all of what's listed below will be enforced as hard rules: no more wrist slaps or babysitting.

Reporting Rules

Here are the Subreddit Reportable violations. Violating these rules will get you a ban.

Reddit Rules:

Reddit Rules regarding Firearms
No firearm sales. No Ammunition sales. No primers or gunpowder, as they are considered explosives.
No selling or distributing of files related to 3D printed firearms.
If you have no idea what this is referring to, please educate yourself before posting anything related to 3D printing files by reading up on them at the following websites:
Firearms: A Firearm is considered the serialized receiver or assembly of a working firearm. If you are unsure if an item is prohibited, contact the mods prior to posting it.
80% lowers and completion kits are not included in this prohibition as they are not firearms yet.
Bump-Stocks are considered Machine Guns by the ATF and are therefore prohibited from trading on the sub.
Binary Triggers, Cranks, and Rubber bands and other such items are not (currently) affected by this prohibition (unless Admins change their minds later).
Explosives & Hazmat: Gunpowder and Live Primers are considered as explosives and Hazardous Materials and are therefore prohibited from trade.
Ammunition: Reddit Admins use the ATF definition of ammunition which is as follows:
The term “Ammunition” means ammunition or cartridge cases, primers, bullets, or propellant powder designed for use in any firearm. The term shall not include (a) any shotgun shot or pellet not designed for use as the single, complete projectile load for one shotgun hull or casing, nor (b) any unloaded, non-metallic shotgun hull or casing not having a primer. 27 CFR § 478.11
Brass and projectiles posted here will result in an immediate suspension by Reddit Admins, so if we find it first we will remove it.
Any violation of these above rules will result in a ban by us, or a site-wide suspension by Admins and their Anti-Evil goosesteppers.
Anyone attempting to skirt Reddit Rules will be given a 7 day ban on the first offense, a 30 day ban on the second offense, and a permaban thereafter due to the fact that Admins will use the bad behavior of a few to justify shutting down the sub for good.

Posting Rules:

This sub is for private sales only. Vendors must post in r/GunDeals or r/GunAccessoryVendors.
Clarification on the vendor rule: Don't include links to your business website; we are not a referral system, so do your business on here. Please see the Reddit Self Promotion page for details on that; Reddit admins don't like you cutting in on their ad revenue. We do not support VENDORS, i.e. if you buy another company's products in bulk (such as Magpul) and just act as a distributor/reseller, your business is not welcome here. That is r/GunDeals territory. If you have an FFL, you cannot do business on here, because you are considered a firearm business and cannot solicit any transactions involving firearms.
The limit on bulk sales/bulk items is 10: at most 10 of the same individual item can be posted for sale or trade. If you have 10 Geissele triggers, but only 4 are flat and 6 are curved, that still counts as 10, as they're the same branded trigger and likely purchased at the same time. If there are 3 OD Green items and 7 FDE that are otherwise the same item, those still count as 10. If you post 10 of the same item one day, 10 the next, and 10 the following day after that, that will be viewed as vendor activity. To keep such things from happening, sales of this type are limited to one per user, per week. The ONLY EXCEPTION to this rule is old magazines, as it is common for users to purge off part of their mag collection.
Please follow these rules when creating a listing: Prefix your title with the transaction type:
[WTS] - Want To Sell
[WTB] - Want To Buy
[WTT] - Want To Trade
[GIFT] - Gun It Forward Tactically
Suffix your title with your state (e.g. (GA) or (NY)). This will help incentivize local sales and could impact shipping costs. It could also affect the legality of some items, such as magazines and those accessories deemed "assault weapon" parts by certain states.
Postings should all follow this general format as an example: "[WTS] M16A2 Carry Handle - $60 (VA)". If you do not list the price in the title, ensure that it is listed in the comments. Include a Dollar sign ($) or the bot will remove it.
Postings without a price value may be removed after a period of time. WTB posts require valid offering prices, and will be removed if they do not have one.
Postings with prices such as "$1 for the bot" or "$1,000,000 for the bot" that are intended to bypass our rules and automated removal system instead of posting a valid price, will be removed and a temporary ban will be issued immediately.
Postings without pictures will be removed immediately, unless these posts are WTB.
Do not post an item for sale if you do not have it in your possession at the time of posting. This includes an item you may have purchased elsewhere, you decided you don’t want it and it’s on its way to you, but it has yet to arrive. If you don’t have it, don’t post it.
If you post stock images of an item in your WTS/WTT post, that will result in a temp ban if it is your first time doing so, possibly permanent if done on multiple occasions. If you post images of someone else’s photos for “your” item, this will be viewed as scamming tactics and you will receive a permaban, immediately.
If you drop your price, use the Price Drop/NSFW Tag. If your items sell, use the Complete/Spoiler tag. Please don't delete the price of an item if it sells, because that can be used by people in the future to gauge what similar items may be worth.
If your post does not receive the traction you're wanting, refrain from reposting within a 24 hour time frame. You may repost after the 24 hours has passed, and a price drop is not required, but encouraged. Deleting your post and reposting afterwards is viewed as trying to evade this rule. It will be met with removal and a temp ban, possibly longer if done more than once.
Want to Buy/Sell/Trade (WTB/WTS/WTT): These transactions all require a price value for the item. If a listing does not include a price it may be removed and re-listed once it is in compliance. Giving an unrealistic price to avoid this rule will be treated as a rule violation. Examples of this are "WTB scope, $1" or "WTT Upper, $9999". Additionally, you must list what you are looking for in [WTT] posts. Fielding offers, testing the waters or any other post attempt to try and skirt this rule will result in the post being removed.
Gifting items forward: (GIFT) If you have small odds and ends that aren't worth much and the cost of shipping is prohibitive, you are allowed to offer items for free. The gifter is allowed to request compensation for shipping only, and can request a flair upgrade in the feedback thread for the transaction. If the receiver pays for shipping, they can also request a flair upgrade, but if they get the item for free, no flair upgrades for the recipient. Flair upgrades of this type are limited in order to avoid abuse, i.e. giving away 20 A2 grips in order to get +20 rep is not authorized.
Accounts with 5 or less flair (you must have at least 6) on GAFS are NOT eligible to participate in giveaways, due to users from other subs coming to win stuff without ever participating in GAFS, or GAFS users making multiple new burner accounts to enter giveaways.
New accounts (under 30 days of age) are not able to create WTS or WTT ads, nor should they offer things for sale in the comments of other peoples' posts. To prevent scams, new users can only post Want to Buy threads. If you want to attempt to bypass this account age requirement, you must be able to provide moderators evidence of a good trading history on another reputable online forum, such as Calgunner or AR15.com where you can show a longstanding history of positive trade feedback. If this is completed, moderators may provide an exception and allow WTS/WTT posts to be submitted by new users, with a warning caveat to any potential buyers to avoid using risky payment methods until the seller has had a chance to develop a positive trading reputation.
Any new accounts that utilize this subreddit and create names similar to a mod's (i.e. sxbbzxro, sxbzxxro, subzxro, etc.) may be removed from participating here, due to the possibility of confusing users or manipulating them into thinking they are in fact a mod.
Price Checks (PC): Because PC listings were abused by many to bypass the price rule, fish for "best offers", and otherwise snipe sales, they have been disabled after overwhelming support from the community.
We have a feedback system in place. The current month's flair thread is on the sidebar, and is usually stickied at the top as well. Check there for the specific directions. DO NOT create a thread for a sale that has already happened, or that happened in a different sub/website/forum etc. The flair system is only for feedback on exchanges in r/GAFS. Any attempts to game the flair system will be seen as an attempt to establish trust for scam purposes, and will be banned accordingly.
Law Enforcement: Be aware, we do not offer exemptions to any individuals who may have LE credentials. Due to the difficulty of verifying employment, possible job changes, leaving/termination from said job, etc. we treat all users as civilians. Any local and federal laws apply to all individuals who utilize this subreddit. Read up and stay up-to-date on these laws and regulations, you will be expected to know and abide by them. Failure to do so may lead to a ban.
External Sales:
NO LINKS to your external sales on TacSwap, eBay, Facebook, Armslist, Gunbroker, etc. Sales in multiple locations are allowed, but don't just provide a link to a sale elsewhere; make your listing here. The only caveat to these rules is to show a price point elsewhere if someone here has an item that is grossly overpriced, or is looking for an item.
This sub is not a "highest bid gets the item" format. There are also no lotteries for items i.e. 10 chances at $10 each to purchase a $75 flashlight with a random number generated to pick the winner.
High Value or Counterfeit Items:
To deter the sale of counterfeit products, any item that is serialized must have a picture of the serial. As firearms are not allowed for sale here, this shouldn't present a privacy issue to anyone. This policy covers items such as EOTechs, Aimpoints, Trijicons, etc. Along with this, if you're selling anything that's "new-in-box", you must unseal it and show the contents of said box/package.
No Stolen Property. If you are selling a knockoff item, indicate that fact. Items such as bipods, BUIS, flashlights, holsters, and scopes/optics are known to have some gray market options. KAC USMC Stamped Rear Sights are not stolen property and are allowed on here, unless another member can provide proof from a DoD source that they are in fact considered stolen government property.
All GAFS logos, icons, banners and visual content related to this subreddit, belong to the moderator team. Do not create/manufacture/produce items with this content onto itself. It is forbidden to profit off the GAFS name, unless discussed with the modteam in advance and given permission.

Shipping/Insurance Rules:

The official policy is for the mods to not get involved with issues regarding lost packages, provided that the parties can prove it was actually lost. If you feel like insurance should be added to your transaction, please take care to add that before finalizing terms.

General Rules:

WARNING: Be aware of all state and federal laws that apply to you and any parties involved in a firearms-related transaction. You are responsible for knowing and following the law. This Subreddit and its staff are in no way responsible for informing you of the law, but will make every effort to do so. As a buyer, be familiar with your state/county/city rules. As a seller, do not knowingly sell prohibited items to areas that have laws against your items, such as certain capacity magazines. Any person, buyer, or seller, who knowingly solicits a trade that is illegal for them may be subject to a ban.
Respect all federal and local laws for any transaction you take part in. This includes federal drug laws. Drug activity tied to your account, or tied to any other issues, is sufficient grounds for banning. Here is the ATF letter that explains why any suspected drug activity, including marijuana, is grounds for immediate banning from the sub. Illegal gun activity such as unregistered SBRs, AOWs, destructive devices, or DIAS or lightning links in your Reddit profile (in or outside the sub) can be reason for banning. Do not spread bad information regarding laws.
Any item you post for sale is expected to be in your current possession. If this is not the case, you must specify this in the listing. Circumstances such as selling for a friend is allowed, but pictures of your items are required to be shared to the public. You do not need an imgur.com account in order to host pictures of your item on imgur, so that is not an excuse.
If you are scammed, inform the mods as soon as you can so that we may investigate and ban the offending parties if necessary.
Do not post the personal information of any Reddit users. The exception to this is if someone uses PayPal to scam a member, this information may be sent to the mods to prevent others from also being scammed. Doxxing people will not be tolerated.
Do not antagonize posters about their price, opinion, or sexual orientation (etc). This translates to be a general rule of "no dickish behavior". If you disagree with someone's price, and can post evidence that their item has a current or recent better price elsewhere such as a link to a vendor, that information is authorized to be posted. That is not antagonism. People may comment on prices and offer counter-offers, as long as behavior is not insulting or unprofessional. If you feel that someone is being unprofessional regarding pricing, report it and the mods will evaluate the case. They are the determining factor whether behavior warrants muting, temporary banning, or permanent banning based on severity of incident, past behavior, and other factors. If your behavior does not contribute towards the positive image of firearms ownership, your participation in this subreddit may not be welcome.
Soliciting any type of transaction regarding prohibited items may result in a ban. This includes Price Checks of firearms and other prohibited items, as this can be seen as an attempt to garner PM offers for prohibited items. Remember that there is no expectation of privacy from Reddit Admins, and that they have shown in the past that they have access to private message histories.
As a general guideline, if a buyer wants to use PayPal Goods and Services (G&S) rather than Friends and family (F&F), it is expected that they will absorb the ~3% fee for the increased protections. However, PayPal F&F, Zelle, and Venmo and similar payment methods are discouraged here due to a lack of protections.
All rules and guidelines are subject to change. The moderators have the final say in all issues in relation to the rules and how to enforce them.
submitted by SxbZxro to GunAccessoriesForSale [link] [comments]

Steam client update for 6/1/20 (6/2/20 UTC)

Via the Steam store:

Remote Play

Windows

Linux

Linux Shader Pre-Caching

SteamNetworkingSockets

Steam Input

SteamVR

submitted by wickedplayer494 to Steam [link] [comments]

Under-represented and overlooked: Māori and Pasifika scientists in Aotearoa New Zealand’s universities and crown-research institutes

https://www.tandfonline.com/doi/full/10.1080/03036758.2020.1796103

"Under-represented and overlooked: Māori and Pasifika scientists in Aotearoa New Zealand’s universities and crown-research institutes

Tara G. McAllister, Sereana Naepi, Elizabeth Wilson, Daniel Hikuroa & Leilani A. Walker

ABSTRACT

This article provides insights into the ethnicity of people employed in Aotearoa New Zealand’s publicly-funded scientific workforce, with a particular focus on Māori and Pasifika scientists. We show that between 2008 and 2018, Māori and Pasifika scientists were severely under-represented in Aotearoa New Zealand’s universities and crown-research institutes. Despite espousals by these institutions of valuing diversity, te Tiriti o Waitangi and Māori research, there has been very little change in the overall percentage of Māori and Pasifika scientists employed over a period of at least 11 years. Notably, one university reported having not employed a single Māori or Pasifika academic in their science department from 2008 to 2018. We highlight the urgent need for institutions to improve how they collect and disseminate data that speaks to the diversity of their employees. We present data illustrating that universities and crown-research institutes are failing to build a sustainable Māori and Pasifika scientific workforce and that these institutions need to begin to recruit, retain and promote Māori and Pasifika scientists.

Introduction

In 2018, Dr Megan Woods (Minister of Research, Science and Innovation) launched the Ministry of Business, Innovation and Employment’s (MBIE) diversity in science statement, which states that ‘Diversity is vital for our science system to realise its full potential’ (MBIE 2018). Whilst this statement is a step towards raising awareness of the importance of diversity in science, it needs to be followed by institutional changes, targeted programmes and directed responses from institutions. A vital component of achieving the aspirations espoused in this statement includes open reporting on the diversity of ‘applicants, award holders, and advisory, assessment and decision making bodies’ (MBIE 2018). In two recent papers, McAllister et al. (2019) and Naepi (2019) spoke to the lack of diversity in Aotearoa New Zealand’s eight universities and provided evidence of the severe under-representation of Māori and Pasifika scholars, who comprise 16.5% and 7.5% respectively of the total population of Aotearoa. The authors showed that Māori and Pasifika comprise 4.8% and 1.7% respectively of academics, despite the espousals by universities of valuing diversity and their obligations to equity as outlined in te Tiriti o Waitangi (McAllister et al. 2019; Naepi 2019). The data used in these two studies, obtained from the Ministry of Education (MoE), provided information on the ethnicity of academic staff university-wide and was not disaggregated by faculty. Consequently, data on the number of Māori and Pasifika academics in each faculty or department is currently not openly available. Previous research has shown that very few Māori academics exist outside of Māori departments, and it remains difficult to access quantitative data on their lived experience as universities continue to silence reports (Kidman et al. 2015; UoO date unknown).
To ensure that the aspirations championed within MBIE’s diversity statement can be met, we first need open and accurate reporting on the diversity of people employed within Aotearoa New Zealand’s scientific workforce, and there is currently a significant gap in the openly available data that investigates this. Some annual reports and equity profiles of crown-research institutes (CRIs) and universities do contain selected ethnicity data (i.e. MWLR 2018; UoA 2018). However, these reports do not always present data in a meaningful and consistent way and are not always publicly available. For example, the University of Otago’s annual report does not contain any information on the ethnicity of staff and instead focuses only on the gender of staff and the ethnicity of students (UoO 2018). Instead, the ethnicity data for staff is presented in the equity report, which is only available to staff, and access must be requested from the Head of Organisational Development (UoO date unknown).
A survey of Aotearoa New Zealand’s scientists and technologists in 2008 provides the most recent quantitative indication of the diversity of Aotearoa New Zealand’s scientific workforce, despite being conducted 12 years ago (Sommer 2010). The author indicated that there was very little change in ethnicity of Aotearoa New Zealand’s scientific workforce between the 1996 and 2008 surveys, with ‘European’ scientists making up 82.3% and 80.9% respectively (Sommer 2010). According to the author, there was a ‘modest increase’ in Māori scientists from 0.7% (1996) to 1.7% (2008) and this increase ‘represents a glimmer of success for those who have sought to develop policies to bring more Māori into the science and technology workforce’ (Sommer 2010, p. 10). However, an increase of 1% over a period of 15 years (i.e. an average increase of 0.07% per year) should be viewed as a significant failure. The percentage of Pasifika scientists also increased very slightly from 0.5% in 1996 to 0.6% in 2010 (Sommer 2010). McKinley (2002, p. 109) provided an insight into the extremely low numbers of Māori women employed by CRIs in 1998:
‘Of the 3,839 people employed by seven Crown Research Institutes (CRIs) in New Zealand, 57 women or approximately 1.5% of the total identified as Māori women. At the time these data were collected in 1998 there were no Māori women in management positions, two were categorised as scientists, 15 as science technicians, and 40 as ‘support’ staff that includes cafeteria staff, administration staff and cleaners’
The data presented by both McKinley (2002) and Sommer (2010) highlight the urgent need for institutions and government to move away from ‘business as usual’ and make a serious commitment to firstly collecting data on diversity, openly and transparently presenting it and secondly increasing the hiring, promoting and retention of Māori and Pasifika scientists.
The present paper aims to begin to address the gap in knowledge by collating data and investigating how diverse Aotearoa New Zealand’s scientific workforce is. An intersectional lens must be applied when thinking critically about diversity and equity, however policies, actions and research often privilege gender (i.e. Bhopal and Henderson 2019; Brower and James 2020) over ethnicity whilst ignoring other intersectional identities that go beyond white, cis women. Here, we focus on the intersectional identities of Māori and Pasifika scientists, while acknowledging that people who have other intersectional identities including those with disabilities, LGBTQIA, non-binary and women of colour are likely to be disproportionately affected and disadvantaged within Aotearoa New Zealand’s science system, which like universities, was arguably created by and made for white, cis men (Ahmed 2012; Osei-Kofi 2012; Naepi et al. 2017; Akenahew and Naepi 2015). This paper examines the current diversity of Aotearoa New Zealand’s scientific workforce, with a particular focus on Māori and Pasifika. We will address the following questions:
  1. How many Māori and Pasifika scientists are employed in Aotearoa New Zealand’s universities and CRIs?
  2. How has the percentage of Māori and Pasifika scientists in these institutions changed between 2008 and 2018?

Methods

Data collection

Data was requested from universities and CRIs by emailing key individuals within each organisation in 2019. Data from 2008 to 2018 on the percentage of scientists, relative to both the total headcount and the total number of full-time equivalents (FTEs), for each recorded ethnicity was requested from CRIs and universities. Both the nature of responses to this request and the time it took to receive a response varied among institutions. Responses ranged from an openness and willingness to contribute data to this project, to hostility and racist remarks. Several institutions did not respond to multiple email requests. A subsequent email sent by a Principal Advisor from the Office of the Prime Minister’s Chief Science Advisor elicited a prompt response from all remaining institutions. After initial conversations with staff from HR departments and university management, it was agreed that all institutions would remain anonymous, and we believe that this contributed significantly to increasing the willingness of institutions to contribute data. Overall, data was obtained from 14 out of 15 of Aotearoa New Zealand’s universities and CRIs. At most of these institutions staff self-declare their ethnicities and are given multiple choices; where data was provided for multiple ethnicities, we used the first reported ethnicity.

Data from universities

Seven out of eight universities contributed data directly to this project, whereas data for university B was extracted from annual reports. Ethnicity data in the form of FTEs and headcount data was provided by most universities. Māori and Pasifika academics are more likely to be employed on contracts of less than one FTE compared to Pākehā academics (unpublished data). We therefore present the percentage of FTEs of staff for each recorded ethnicity, rather than headcount data as it is likely to be a more accurate measure of diversity. Recorded ethnicity groups differed among some universities, mainly in the fact that some distinguished between ‘European’ and ‘NZ European/Pākehā’, whereas at others these two ethnicities were combined.
It is important to note that the data from universities presented in this paper includes academic staff and excludes research staff, including post-doctoral fellows and laboratory technicians. Data on the number of scientists employed at universities also only includes scientists employed in science departments (i.e. excludes Māori scientists in health departments). However, a recent paper published by Naepi et al. (2020) showed that in 2017, there were only 55 Māori and 20 Pasifika postdoctoral fellows across all faculties in all of Aotearoa New Zealand’s universities. The number of Māori and Pasifika postdoctoral fellows employed in science faculties is, therefore, likely to be very small. Academic staff includes other academic staff, senior tutors, tutors, tutorial assistants, lecturers, senior lecturers, associate professors and professors. Previous research has shown that a large proportion of Māori and Pasifika academics are employed as tutors and other academic staff rather than in permanent senior academic positions (see Naepi 2019), so this is also likely to be the case within science faculties.
Concerningly, two universities (universities E and H) were unable to provide data for the requested 11-year period (i.e. from 2008 to 2018). Upon querying this with human resources (HR) departments, their reasons included but were not limited to the following:

Data from crown-research institutes

Data, in some shape or form, was obtained from six out of seven of Aotearoa New Zealand’s CRIs. Obtaining accurate and consistent temporal data from CRIs was, despite their willingness, much more difficult than from universities. The MoE requires certain ethnicity data from universities in a particular format (see MoE date unknown), however the diversity of staff employed at Aotearoa New Zealand’s seven CRIs is currently not required by an external organisation. Most CRIs were unable to provide FTE data but were able to provide headcount data, consequently we present the headcount data in this report. Because the data from CRIs was highly variable, we were not prescriptive about how they defined a scientist, however at most institutions this included post-doctoral fellows and scientists.
Data on the percentage of Māori and Pasifika scientists employed from 2008 to 2018 could only be obtained from four out of seven of the CRIs. CRI F could only provide ethnicity for staff that were recent hires from 2016 to 2018, meaning we are unable to differentiate between science and non-science staff and data on staff employed prior to 2016 was unavailable. CRI E could only provide data for 2019, the year that we had asked for it, due to their HR system overwriting data and therefore having no historical record of staff ethnicity.
The ethnicity data from CRIs, with the exception of CRI B, can only be viewed as indicative due to inconsistencies in how CRIs collect data. Data from most institutions was therefore not conducive to any temporal or statistical analyses. For example, at CRI A over the 11-year period, the ethnicity categories offered to staff changed four times. Māori and Pasifika were consistently given as options, which provides some level of confidence in CRI A’s ethnicity data.

Results

Māori scientists employed in Aotearoa New Zealand’s universities

Before even considering the data presented below, we must acknowledge and highlight that science faculties within universities are generally not safe and inclusive environments for Māori and Pasifika academic staff. Reasons for this include that being the only Indigenous person in a faculty puts that one person under extreme pressure to help colleagues, indigenise the curriculum, and support Indigenous students, while also advancing their own career (Mercier et al. 2011; Kidman et al. 2015). It is well established that the job satisfaction of Māori academics is influenced by their proximity to other Māori academics (Mercier et al. 2011; Kidman et al. 2015). The interdisciplinary work of Māori scientists also often does not align with what the academy and their Pākehā counterparts define as ‘science’, and many scholars have explored this (see for example, McKinley 2005; Mercier 2014; Hikuroa 2017). Consequently, of the few Māori scientists who exist and survive within academia, several are employed outside of science faculties (see for example, Mercier 2014). This data is therefore likely to slightly underestimate the number of Māori scientists within the academy. Furthermore, in the present paper we focus on Māori and Pasifika scientists in science faculties, but there will also be Māori and Pasifika scientists in social science, humanities and health faculties, which will not be captured by the data reported below.
Māori are under-represented in science faculties at all of Aotearoa New Zealand’s eight universities (Table 1). University A had the highest level of representation, which may be attributed to the science faculty being combined with another discipline at this particular university (Table 1). From 2008 to 2018, University D never employed a Māori academic in their science faculty (Table 1). Māori comprised less than 5% of the total FTEs in science faculties at all other universities between 2008 and 2018; the averages were 4.3, 1.4, 1.6, 3.7 and 0.6% respectively at Universities B, C, E, F and H (Table 1). Importantly, there was no significant difference between the percentage of Māori FTEs in 2008 and 2018 (paired t-test: t(10) = −0.24, p = 0.82), meaning that over 11 years there has been no improvement in Māori representation in science faculties (Table 1).

Table 1. The percentage of Māori and Pasifika full-time equivalents (FTEs) of academic staff in science faculties at each of Aotearoa New Zealand’s eight universities. University A and G both have a combined faculty (i.e. science and another discipline) whereas all other universities have separate faculties and data is solely for science faculties. University E was unable to provide FTE data prior to 2011 and university H was only able to provide data from 2015.


Māori scientists employed in Aotearoa New Zealand’s crown-research institutes

Promisingly, and in contrast with patterns of Māori scientists at universities the percentage of Māori scientists (i.e. of the total headcount) employed by CRIs has increased from 2008 to 2018 at half (2/4) of the CRIs that were able to provide temporal data (Table 2). At CRI A, Māori comprised 1.8% of the scientists employed in 2008 and this steadily increased to 3.8% in 2018 (Table 2). Similarly at CRI B, the percentage of Māori scientists have increased from 3.4% to 7.8% respectively (Table 2). At CRI C, Māori have comprised between 0.01% and 0.03% of scientists employed over a period of 11 years and at CRI D it has varied between 0% and 0.6% (Table 2).

Table 2. The percentage of Māori and Pasifika scientists of the total headcount employed by each of Aotearoa New Zealand’s crown-research institutes. CRI E could only provide data for 2019 and CRI F only had data for new recruits from 2016–2018. CRI G did not contribute data to this research.

Certain CRIs are doing better than others. It is, however, important to note, particularly given CRIs' outward espousals of commitment to and valuing of ‘Māori research’ and mātauranga (i.e. GNS 2018), that Māori remain under-represented in all CRIs in Aotearoa New Zealand, including CRI A and B (Table 2). Additionally, the fact that three out of seven of the CRIs could not provide sufficient data suggests that these institutions have a lot of work to do in collecting data on the diversity of the staff that they employ.

Pasifika scientists employed in Aotearoa New Zealand’s universities and crown-research institutes

There is currently an absence of research into the experiences of Pasifika scientists in Aotearoa New Zealand’s science system. However, like Māori scientists, Pasifika scientists are likely to be marginalised and under-valued within the current science system. Pasifika scientists in both universities and CRIs are extremely under-represented (Tables 1 and 2). Notably, of the 11 institutions (inclusive of universities and CRIs) that provided data, only three reported Pasifika representation exceeding 1% of either the total headcount or the total number of FTEs in more than one year (Tables 1 and 2). Four institutions (one university and three CRIs) reported having employed zero Pasifika scientists for 11 consecutive years (Tables 1 and 2). Importantly, there was no significant difference between the percentage of Pasifika FTEs in universities in 2008 and 2018 (paired t-test: t(8) = 0.36, p = 0.73), meaning that over 11 years there has been no improvement in Pasifika representation in science faculties (Table 2).
The patterns in the percentage of both Māori and Pasifika scientists employed at university G were very different from all other institutions (Table 1). Firstly, university G was the only university that in some years employed more Pasifika than Māori scientists (Table 1). In 2008, 7.4% of FTEs in the science faculty of university G belonged to Pasifika scientists, which was the highest recorded in all eight institutions over 11 years (Table 1). However, Pasifika scientists in this faculty had only 4.4 FTEs in 2008, meaning that 7.4% equated to five Pasifika staff (data not shown).

The diversity of scientists employed in science faculties in Aotearoa New Zealand’s universities

Between 2008 and 2018, the majority of academics in the Computing and Mathematical Sciences, Engineering and Science departments at university D were European comprising between 58.7% and 85.2% of the total FTEs (Figure 1(A)). University D distinguishes between ‘European’ and ‘New Zealand European/Pākehā’ and the data presented in Figure 1(A) suggests that not many academics in these departments associate with the latter group. Thus, suggesting that most academics employed within these departments are from overseas. In these departments (i.e. Computing and Mathematical Sciences, Engineering and Science) between 2008 and 2018 there was a consistent increase in the percentage of FTEs of Asian ethnicities (12.3% increase in Computing and Mathematical Sciences, 6.8% in Engineering, 2.4% in Science; Figure 1(A)).
Figure 1. (A) The percentage of full-time equivalents (FTEs) for each recorded ethnicity in three science faculties at university D in 2008 and 2018, and (B) the percentage of Māori and Pasifika FTEs in those three faculties for academic staff from 2008–2018.
Note: In both the Engineering and Science departments there were no Māori or Pasifika employed between 2008 and 2018.
The data provided by university D clearly illustrates a severe lack of Māori and Pasifika academic staff representation in sciences faculties (Figure 1(B)). It shows that in two of the three departments, there have never been any Māori academics employed (Figure 1(B)). Furthermore, in those three departments no Pasifika academic staff have been employed in 11 years (2008–2018). Māori academics have comprised 4.1%–7.5% of the total FTEs in the Computing and Mathematical Science department (Figure 1).
NZ European/Pākehā formed the majority (52.8%–63.6%) of academic staff employed in the science faculty of university B and this percentage has decreased by 11.8% between 2008 and 2018 (Figure 2). People who did not declare their ethnicity (unknown) comprised a small percentage (average = 3.2% of the total FTEs; Figure 2). European academics made up on average 20% of the total FTEs employed in this faculty between 2008 and 2018 (Figure 2). Māori and Pasifika scientists were under-represented, comprising on average 6.0% and 2.6% respectively (Figure 2). The percentage of Māori FTEs has decreased from 7.3% (2008) to 6.4% (2018), whereas the percentage Pasifika FTEs has increased from 2.0% to 4.8% over the 11-year period (2008–2018; Figure 2). However, there was no statistically significant difference between both Māori and Pasifika FTEs over time (p > 0.05).
Figure 2. The percentage of full-time equivalents (FTEs) for each recorded ethnicity at university B from 2008 to 2018.
Note: University B has a combined science faculty (i.e. science and another discipline).
The importance of department by department analysis of universities ethnicity data is highlighted when comparing the percentage of Māori FTEs university-wide and the science faculty (Figure 3). The average percentage of Māori FTEs university wide at university F was 4.7% from 2008 to 2018, whereas it was consistently lower within the science faculty (Figure 3). Similarly, representation of Pasifika academics in the science faculty at university F was much lower compared to the entire university (Figure 4). The average between 2008 and 2018 was 1.5% of Pasifika FTEs across the university whereas it was only 0.4% in the science faculty (Figure 4).
Figure 3. The percentage of Māori full-time equivalents (FTEs) of academics in both the science faculty and across the entire university at university F.
Note: y axis is limited to 15%.
Figure 4. The percentage of Pasifika full-time equivalents (FTEs) for academic staff in both the science faculty and across the entire university at university F.
Note: The y axis is limited to 15%.

The diversity of scientists employed in Aotearoa New Zealand’s crown-research institutes

CRI B was the only CRI that was able to provide relatively good-quality temporal data. Data from this institution indicated that African scientists made up approximately 1% of scientists employed from 2016 to 2018, and both Asian and Australian scientists have made up on average 5.4% and 5.0% respectively of the total headcount from 2008 to 2018 (Figure 5). The percentage of European scientists has increased steadily from 16.1% in 2008 to 23.5% in 2018 (Figure 5). The percentage of Māori scientists employed has also increased, from 3.4% in 2008 to 7.8% in 2018 (Figure 5). Although this increase is promising, Māori remain under-represented within this institution. Interestingly, the percentage of NZ European/Pākehā employed at CRI B has decreased from 64.9% (2008) to 45.3% (2018; Figure 5). This may speak to the increasing value the science system places on international expertise, whereby scientists from overseas or with international experience are valued more than those from Aotearoa, driven in large part by the importance placed on international university ranking systems that reward international staff recruitment (Stack 2016). Importantly, scientists coming from overseas will likely have very little understanding of things that are highly important within the context of Aotearoa (e.g. te Tiriti o Waitangi). Considering the data presented, urgent action is required to address this apparent selection of international scientists over Māori and Pasifika scientists. Rather than copying and pasting a blanket statement of empty words in job advertisements, like the following: ‘The University of Canterbury actively seeks to meet its obligation under the Treaty of Waitangi | Te Tiriti o Waitangi’ (UoC date unknown), CRIs and universities need to be actively recruiting Māori and Pasifika scientists, and hence need to consider the following questions when hiring new staff:
  1. How is this person likely to contribute to the uplifting of Māori communities in a meaningful way?
  2. Do they have any experience working with Indigenous communities?
  3. What is their understanding of Te Tiriti o Waitangi and the Treaty of Waitangi?
  4. How do you see your role as supporting our institution's commitments to Pasifika communities?
Figure 5. Percentage of the total headcount for each recorded ethnicity at crown-research institute (CRI) B from 2008 to 2018.
Note: Ethnicity groups in this graph differ from previous graphs.
CRI E was only able to supply data for the year in which it was requested (i.e. 2019), due to their HR systems. In 2019, this particular CRI employed zero Pasifika scientists, and 1.6% of its scientists were Māori (Figure 6). The majority of scientists employed at CRI E in 2019 were NZ European/Pākehā (55.0%), and a further 21.5% were ‘European’ (Figure 6).
Figure 6. The percentage of the total headcount of each recorded ethnicity at crown-research institute (CRI) E in 2019.
Note: Ethnicity groupings differ from previous graphs.
CRI F only began collecting ethnicity data in 2016, despite previously collecting gender data, and their data was only collected for new recruits. We were, therefore, unable to disaggregate science staff from general and non-science staff. From 2016 to 2018, the majority (59%–66%) of new recruits were ‘NZ European’. In 2017, 14% of new recruits were Pasifika, whereas in 2016 and 2018 there were no Pasifika recruits. Māori comprised 2% of new recruits in 2017 and 2018, but 8% in 2016 (data not shown)...."
submitted by lolpolice88 to Maori [link] [comments]

ShardingSphere 4.x FAQ

1. How to debug when SQL can not be executed rightly in ShardingSphere?

Answer:
The sql.show configuration is provided in Sharding-Proxy and in post-1.5.0 versions of Sharding-JDBC. It enables printing the parsing context, the rewritten SQL, and the routed data source to the info log. sql.show is off by default, and users can turn it on in their configuration.
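For instance, in a Sharding-JDBC YAML configuration this should look roughly like the following (a sketch based on the 4.x property name mentioned above):
props:
  sql.show: true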

2. Why do some compiling errors appear?

Answer:
ShardingSphere uses Lombok to keep the code minimal. For details about usage and installation, please refer to the official Lombok website.
The sharding-orchestration-reg module needs the mvn install command to be executed first, which generates the gRPC Java files from the protobuf files.

3. Why is xsd unable to be found when Spring Namespace is used?

Answer:
The Spring Namespace convention does not strictly require xsd files to be deployed to the official website, but considering some users' needs, we deploy them to ShardingSphere's official website as well.
Actually, META-INF\spring.schemas in the jar package of sharding-jdbc-spring-namespace is configured with the location of the xsd files: META-INF\namespace\sharding.xsd and META-INF\namespace\master-slave.xsd, so you only need to make sure that those files are in the jar package.

4. How to solve the Could not resolve placeholder … in string value … error?

Answer:
${...} or $->{...} can be used in inline expression identifiers, but the former clashes with the placeholder syntax in Spring property files, so $->{...} is the recommended inline expression identifier when working with Spring.
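As an illustration, a sharding rule in a Spring Boot properties file might use the arrow form like this (a sketch; t_order, order_id, and the two-way split are made-up values):
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}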

5. Why does float number appear in the return result of inline expression?

Answer:
The division result of Java integers is also an integer, but in the Groovy syntax of inline expressions, the division result of integers is a floating-point number. To obtain an integer division result, A/B needs to be written as A.intdiv(B).
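Concretely, if order_id is odd, the first expression below evaluates to a name like t_order_0.5, which matches no real table; the intdiv form is what you want (the table names are made up):
t_order_$->{order_id / 2}          # Groovy division yields 0.5, 1.5, ... -- broken names
t_order_$->{order_id.intdiv(2)}    # integer division yields 0, 1, ... as intended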

6. If only some databases are sharded, do tables that are neither database- nor table-sharded need to be configured in the sharding rules?

Answer:
Yes. ShardingSphere merges multiple data sources into a single logical data source. Therefore, for tables without database or table sharding, ShardingSphere cannot decide which data source to route to unless rules are provided. However, ShardingSphere provides two options to simplify the configuration.
Option 1: configure default-data-source. Tables in the default data source do not need to be configured in the sharding rules; ShardingSphere routes a table to the default data source when it cannot find a sharding rule for it (see the sketch below).
Option 2: keep data sources without database or table sharding out of ShardingSphere entirely, and use separate data sources to handle sharded and non-sharded workloads.
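A minimal sketch of Option 1 through the Java API (the name ds_default is invented; the 4.x setter on ShardingRuleConfiguration is assumed):

```java
import org.apache.shardingsphere.api.config.sharding.ShardingRuleConfiguration;

public final class DefaultDataSourceExample {

    public static ShardingRuleConfiguration ruleConfig() {
        ShardingRuleConfiguration ruleConfig = new ShardingRuleConfiguration();
        // Tables that appear in no sharding rule are routed to ds_default.
        ruleConfig.setDefaultDataSourceName("ds_default");
        return ruleConfig;
    }
}
```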

7. In addition to its internal distributed primary key, does ShardingSphere support native auto-increment keys?

Answer:
Yes. However, there is a restriction: a native auto-increment key cannot also be used as the sharding key.
Since ShardingSphere does not know the database table structure, and a native auto-increment key is not included in the original SQL, it cannot parse that field as the sharding field. If the auto-increment key is not the sharding key, it is returned normally and requires no special attention. But if it is also used as the sharding key, ShardingSphere cannot parse its sharding value, so the SQL would be routed to multiple tables, compromising the correctness of the application.
Returning a native auto-increment key requires the INSERT SQL to be routed to exactly one table; if the INSERT routes to multiple tables, the returned auto-increment key will be zero.

8. When a SingleKeyTableShardingAlgorithm with generic type Long is used, why does a ClassCastException: Integer can not cast to Long exception appear?

Answer:
You must make sure the field type in the database table is consistent with the generic type in the sharding algorithm. For example, a database field of type int(11) corresponds to the generic type Integer; if you want to use Long, make sure the database field type is bigint.
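A minimal sketch against the 4.x standard sharding interface (class name and routing logic invented for illustration); the point is that the algorithm's generic type must match the Java type the driver returns for the column:

```java
import java.util.Collection;
import org.apache.shardingsphere.api.sharding.standard.PreciseShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.PreciseShardingValue;

// Long matches a bigint column; an int(11) column arrives as Integer and would
// fail the cast inside an algorithm declared with the wrong generic type.
public final class ModuloTableAlgorithm implements PreciseShardingAlgorithm<Long> {

    @Override
    public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<Long> shardingValue) {
        long suffix = shardingValue.getValue() % availableTargetNames.size();
        for (String each : availableTargetNames) {
            if (each.endsWith("_" + suffix)) {
                return each;
            }
        }
        throw new UnsupportedOperationException("no target for suffix " + suffix);
    }
}
```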

9. In SQLServer and PostgreSQL, why does an aggregation column without an alias throw an exception?

Answer:
SQLServer and PostgreSQL rename aggregation columns acquired without an alias, as in the following SQL:

```sql
SELECT SUM(num), SUM(num2) FROM tablexxx;
```

The columns returned by SQLServer are an empty string and (2); the columns returned by PostgreSQL are sum and sum(2). This causes an error because ShardingSphere is unable to find the corresponding columns.
The SQL should instead be written as:

```sql
SELECT SUM(num) AS sum_num, SUM(num2) AS sum_num2 FROM tablexxx;
```

10. Why does the Oracle database throw an “Order by value must implements Comparable” exception when using ORDER BY on a Timestamp column?

Answer:
There are two solutions to this problem:
  1. Configure the JVM parameter “-Doracle.jdbc.J2EE13Compliant=true”.
  2. Call System.getProperties().setProperty(“oracle.jdbc.J2EE13Compliant”, “true”) during project initialization.
Reasons:
com.dangdang.ddframe.rdb.sharding.merger.orderby.OrderByValue#getOrderValues():
```java
private List<Comparable<?>> getOrderValues() throws SQLException {
    List<Comparable<?>> result = new ArrayList<>(orderByItems.size());
    for (OrderItem each : orderByItems) {
        Object value = resultSet.getObject(each.getIndex());
        Preconditions.checkState(null == value || value instanceof Comparable, "Order by value must implements Comparable");
        result.add((Comparable) value);
    }
    return result;
}
```
When resultSet.getObject(int index) is called for an Oracle TIMESTAMP, the driver decides whether to return java.sql.Timestamp or oracle.sql.TIMESTAMP according to the oracle.jdbc.J2EE13Compliant property. See the oracle.jdbc.driver.TimestampAccessor#getObject(int var1) method in the ojdbc code for more detail:
```java
Object getObject(int var1) throws SQLException {
    Object var2 = null;
    if (this.rowSpaceIndicator == null) {
        DatabaseError.throwSqlException(21);
    }
    if (this.rowSpaceIndicator[this.indicatorIndex + var1] != -1) {
        if (this.externalType != 0) {
            switch (this.externalType) {
                case 93:
                    return this.getTimestamp(var1);
                default:
                    DatabaseError.throwSqlException(4);
                    return null;
            }
        }
        if (this.statement.connection.j2ee13Compliant) {
            var2 = this.getTimestamp(var1);
        } else {
            var2 = this.getTIMESTAMP(var1);
        }
    }
    return var2;
}
```

11. Why is the database sharding result not correct when using Proxool?

Answer:
When using Proxool to configure multiple data sources, each of them must be given an alias. Proxool checks whether an alias already exists in the connection pool when acquiring a connection, so without aliases every connection is acquired from the same data source.
The following is the core code of Proxool's ProxoolDataSource.getConnection() method:

```java
if (!ConnectionPoolManager.getInstance().isPoolExists(this.alias)) {
    this.registerPool();
}
```

For more on alias usage, please refer to the Proxool official website.

12. Why are the keys generated by ShardingSphere's default distributed auto-increment key strategy not continuous, and why do most of them end in even numbers?

Answer:
ShardingSphere uses the snowflake algorithm as its default distributed auto-increment key strategy to guarantee that unique, decentralized, increasing sequences are generated in distributed setups. Therefore, the generated keys are incremental but not continuous.
The low-order part of a snowflake ID is a sequence number that increments within a single millisecond. If concurrency within one millisecond is low, that sequence is usually zero, which is why IDs ending in even numbers dominate; a sketch follows below.
The problem of always ending in even numbers was fully resolved in version 3.1.0; please refer to: https://github.com/sharding-sphere/sharding-sphere/issues/1617
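A rough sketch of why low concurrency produces even IDs, assuming the commonly used snowflake layout (41-bit timestamp, 10-bit worker ID, 12-bit sequence; the exact widths of ShardingSphere's implementation are not taken from this FAQ):

```java
public final class SnowflakeSketch {

    // Compose an ID from its three fields; the sequence occupies the lowest 12 bits.
    static long compose(long timestampDelta, long workerId, long sequence) {
        return (timestampDelta << 22) | (workerId << 12) | sequence;
    }

    public static void main(String[] args) {
        // Under low concurrency the per-millisecond sequence stays at 0,
        // so the low bits are zero and the ID comes out even.
        long id = compose(123_456_789L, 1L, 0L);
        System.out.println(id % 2 == 0); // true
    }
}
```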

13. On Windows, why does Git prompt "filename too long" when cloning the ShardingSphere source code, and how can it be solved?

Answer:
To ensure the readability of the source code, the ShardingSphere coding specification requires that classes, methods, and variables be named descriptively rather than with abbreviations, which results in some source files having long names.
Since the Windows version of Git is compiled with msys, it uses an older Windows API that limits file names to no more than 260 characters.
The solutions are as follows:
Open cmd.exe (you need git on your PATH) and execute the following command to allow Git to support long paths: git config --global core.longpaths true
On Windows 10, you also need to enable Win32 long paths in the registry editor or via group policy (a reboot is required):
Create the value LongPathsEnabled (type: REG_DWORD) under the registry key HKLM\SYSTEM\CurrentControlSet\Control\FileSystem and set it to 1. Alternatively, open the Settings menu, search for "Group Policy" to open the "Edit Group Policy" window, navigate to 'Computer Configuration' > 'Administrative Templates' > 'System' > 'Filesystem', and turn on the 'Enable Win32 long paths' option.
Reference material:
https://docs.microsoft.com/zh-cn/windows/desktop/FileIO/naming-a-file
https://ourcodeworld.com/articles/read/109/how-to-solve-filename-too-long-error-in-git-powershell-and-github-application-for-windows

14. On Windows, how to solve the error "could not find or load main class org.apache.shardingsphere.shardingproxy.Bootstrap"?

Answer:
Some decompression tools may truncate file names when unpacking the Sharding-Proxy binary package, resulting in some classes not being found.
The solution:
Open cmd.exe and execute the following command: tar zxvf apache-shardingsphere-${RELEASE.VERSION}-sharding-proxy-bin.tar.gz

15. How to solve the "Type is required" error?

Answer:
In Apache ShardingSphere, many features, such as the distributed primary key, are loaded through SPI. These features load their SPI implementation according to the configured type, so the type must be specified in the configuration file.

16. Why does my custom distributed primary key not work after implementing the ShardingKeyGenerator interface and configuring the type property?

Answer:
Service Provider Interface (SPI) is an API intended to be implemented or extended by third parties. Besides implementing the interface, you also need to create a corresponding file under META-INF/services so that the JVM can load the SPI implementation, as sketched below.
For more detail on SPI usage, please consult the Java documentation.
Other ShardingSphere functionality extensions take effect in the same way.
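A minimal sketch, assuming the 4.x SPI interface org.apache.shardingsphere.spi.keygen.ShardingKeyGenerator (the class below and its package are invented for illustration):

```java
package com.example.keygen; // hypothetical package

import java.util.Properties;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.shardingsphere.spi.keygen.ShardingKeyGenerator;

public final class MyKeyGenerator implements ShardingKeyGenerator {

    private final AtomicLong counter = new AtomicLong();

    private Properties properties = new Properties();

    @Override
    public Comparable<?> generateKey() {
        return counter.incrementAndGet(); // trivial stand-in for a real strategy
    }

    @Override
    public String getType() {
        return "MYKEY"; // the value you reference as 'type' in the configuration
    }

    @Override
    public Properties getProperties() {
        return properties;
    }

    @Override
    public void setProperties(final Properties properties) {
        this.properties = properties;
    }
}
```

For the JVM to discover it, create the file META-INF/services/org.apache.shardingsphere.spi.keygen.ShardingKeyGenerator on the classpath, containing the single line com.example.keygen.MyKeyGenerator.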

17. How to make DATA MASKING work with JPA?

Answer:
Because the DDL for data masking is not yet finished, a JPA entity cannot satisfy both the DDL and the DML at the same time when JPA with automatically generated DDL is used together with data masking.
The solutions are as follows:
  1. Create the JPA entity with the logicColumn that needs to be encrypted (see the sketch after this list).
  2. Disable JPA auto-ddl, for example by setting auto-ddl=none.
  3. Create the table manually. The table structure should use cipherColumn, plainColumn, and assistedQueryColumn in place of the logicColumn.
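A hypothetical JPA entity following step 1 (all names invented): the entity maps the logicColumn, while the hand-written table from step 3 contains the cipher/plain/assisted-query columns instead.

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "t_user")
public class User {

    @Id
    private Long id;

    // Maps the logicColumn "pwd" from the encrypt rule; the physical table,
    // created manually, holds cipher_pwd (and optionally plain_pwd /
    // assisted_query_pwd) instead of a "pwd" column.
    @Column(name = "pwd")
    private String pwd;

    // Getters and setters omitted for brevity.
}
```

With auto-ddl disabled (step 2), JPA never tries to create a table containing the logic column, so the mismatch between the entity and the physical table causes no DDL conflict.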

18. How to speed up metadata loading at service start-up?

Answer:
  1. Upgrade to 4.0.1 or above, which speeds up loading table metadata from the default data source.
  2. Increase max.connections.size.per.query (default value: 1) according to the connection pool you use (version >= 3.0.0.M3).

19. How to allow range queries (BETWEEN AND, >, <, >=, <=) when using the inline sharding strategy?

Answer:
  1. Upgrade to 4.0.1 or above.
  2. Configure allow.range.query.with.inline.sharding to true (default value is false).
  3. Note that every range query will then be broadcast to every sharding table.

20. Why can an error occur when configuring both sharding-jdbc-spring-boot-starter and the spring-boot-starter of a certain data source pool (such as Druid)?

Answer:
  1. Because the spring-boot-starter of certain data source pools (such as Druid) is configured before sharding-jdbc-spring-boot-starter and creates a default data source, a conflict occurs when sharding-jdbc creates its own data sources.
  2. A simple way to solve this issue is to remove the spring-boot-starter of that data source pool; sharding-jdbc will then create data sources with suitable pools itself.

21. How to add a new logic schema dynamically when using Sharding-Proxy?

Answer:
  1. Before version 4.1.0, Sharding-Proxy did not support adding a new logic schema dynamically: a proxy started with two logic schemas always held those two schemas and was only notified of table/rule change events within them.
  2. Since version 4.1.0, Sharding-Proxy supports adding a new logic schema dynamically via sharding-ui or ZooKeeper, and support for removing an existing logic schema at runtime is planned.

22. Which database tools are suitable for connecting to Sharding-Proxy?

Answer:
  1. Sharding-Proxy presents itself as a MySQL server, so we recommend using the mysql command-line tool to connect to and operate it.
  2. If you would like to use a third-party database tool, there may be errors caused by particular implementations or options. For example, we recommend Navicat version 11.1.13 (not 12.x), and in IDEA or DataGrip you should turn on "introspect using JDBC metadata" (otherwise the tool will fetch all real-table info from information_schema).
submitted by Sharding-Sphere to u/Sharding-Sphere

A guide on hitting Legend in Comp

Crossposting this from /crucibleplaybook, figured some people on here might find this helpful as well. A lot of this post applies to all Comp, not just hitting Legend.
I’ve been seeing a lot of posts lately asking for tips for hitting Legend in Comp so I figured I’d put together a brief guide for anyone that’s interested. I’m happy to see so many people interested in hitting Legend!
Intro
First off, a little bit about me. I never played D1, so I had a rough first few months of D2Y1 (and a rough first few weeks of Y2 with the new special weapon uptime and TTK) as it was my first time with the Destiny franchise. But even then I had a blast in Crucible, and I always wanted to get better. I'm also an extremely competitive person, so that helped fuel my desire for improvement.
I play on Xbox and I just got my Unbroken title this season so I’ve been to Legend 3 times (S4, S6, S7). I’ve learned a ton along the way and hitting Legend each season has been easier and more enjoyable than the previous one for a variety of reasons that I’ll share in this post.
Improvement Mindset
While your end goal is to hit Legend, focusing on this binary goal isn’t a good idea. A better approach is to think of playing Comp with the main goal of improving both as a player and a team. With this more open and long-term mindset, you will improve rapidly as a player, win more often, and have a much more enjoyable experience as a result.
When you focus on something as binary as hitting a certain rank, every game or even every decision within a game starts to feel tense, and you put an immense amount of artificial pressure on yourself. This often builds over the course of a game. Even if it's subconscious, it will affect your play. You'll play too passively, too aggressively, and/or make bad decisions. Your brain will be too wrapped up playing out the consequences of failure to focus on what you should be doing to give yourself and your team the best chance of winning. It's been shown time and again, in both real sports and e-sports, that tension leads to poor performance.
Instead, take every engagement and every game as an opportunity to learn something and to improve. You WILL start getting your ass kicked at some point, it’s just a matter of when. It might be at 3k and it might not be until 5k, but at some point it’ll happen. And when it does, the best thing to do is to record your gameplay and watch it back.
Gameplay Review
You can easily record your gameplay via Twitch by streaming and having it save past broadcasts. Then you can watch your gameplay there, or you can take it a step further and download your gameplay and run it through a free video editing program such as DaVinci Resolve or iMovie. The advantage of doing it this way is you can better control the playback and even view it frame-by-frame.
I’d recommend picking a game that you performed poorly and watch it once all the way through and take some mental notes. Then I’d watch it again, noting each death with why you died and what you could have done better to either kill your opponent first or escape safely. Even if you died to a Wardcliff or a solo super, write down something you could have done differently to prevent dying. Then categorize and tally them the best you can. The most frequent ones are what you should focus on getting better at. This can be during your next Comp session or QP/Rumble.
The reason reviewing your gameplay is so important is it’ll help speed up your rate of improvement and help you get past your current plateau a bit faster. Games in high comp tend to be very fast paced so it can be hard to think about or remember exactly what happened. Or what you think happened in the moment wasn’t what really happened and the gameplay review will show you this.
While it’s certainly possible to improve naturally and over time, recording and reviewing your gameplay will make you improve faster.
Playing the meta
A lot of people seem reluctant to use meta loadouts for whatever reason. I think most of it boils down to either wanting to be unique, or having a superiority complex by refusing to use certain good or easy to use weapons and strategies because they’re “cheap” or too easy. Throw all of this out the window.
There’s nothing cheap in Comp (other than DDoSing which is actual cheating and we won’t discuss it). There’s nothing that takes “no skill” to use. If it’s in the game then it’s fair game to be used as much and as effectively as possible. Everything has a counter. If you don’t believe this then you probably have a scrub mentality and it’s going to hold you back. There are some great posts about scrub mentality on this very sub.
Meta loadouts or weapons are usually the perfect cross section of both lethality and ease-of-use - USE THEM. This is the time and the place. Your opponents are trying to win at all costs and so should you.
I don’t want to go too much into detail here or debate here, but in general these are the best options for high comp on Console (4k+). They’re ranked in terms of effectiveness, so it’s probably better to improve with something at the top of the list than use something at the bottom.
Primary Weapons:
  * Luna (NF if you have it already)
  * Adaptive or Aggressive pulse rifles
  * Ace/Thorn/TLW
  * Very well rolled Legendary HC
  * Jade Rabbit/Mida/Polaris Lance (large maps only)
Special Weapons:
  * Aggressive or Precision frame Shotgun (Mindbender/Toil/Imperial Decree/DRB/Retold Tale)
  * Erentil or Wizened Rebuke
  * Beloved/Twilight Oath/Supremacy/Revoker
Heavy Weapons:
  * Wardcliff
  * Truth
  * PotG
  * Any rocket launcher
Subclasses:
  * Hunter - middle void, middle or bottom arc
  * Titan - bottom void or bottom arc
  * Warlock - top arc or bottom solar
Exotics:
  * Stompees for Hunter
  * OEM or Antaeus Wards for Titan
  * Transversive Steps for Warlock
Mods:
  * 3+ super mods
  * 1-2 paragon mods for Hunter if desired
  * 1-2 grenade mods for Stormcaller or Sentinel if desired
  * Otherwise 5 super mods
Stats:
  * Keep resilience as low as possible (minimum is 1; Titans' minimum is 3 or 4, I think). Put the rest into mobility and/or recovery. I'd recommend 6+ mobility for most people, but some prefer lower mobility and higher recovery.
I don’t really want to debate what else is meta or what’s the best or other specifics. But in my experience both playing and watching others play high comp, this is the meta.
For weapons, Luna and a shotgun is still the best and most versatile loadout for most people and most maps. Consider swapping to a pulse or scout instead of Luna (or a sniper instead of a shotgun) for larger maps. Especially for countdown, consider having at least one sniper on your team as being able to get a pick and play 4v3 puts your team at a huge advantage. Fusion rifles are also incredibly strong right now. You can basically treat one like your primary weapon and just use your actual primary to clean people up or shoot people past ~30m.
In the current meta supers are incredibly important. You want to use them frequently and make orbs for your teammates for them to pick up and vice versa. Try to use your super when the enemy team doesn’t have any supers ready or heavy ammo is about to be up. Coordinate with your teammates on who’s popping a super and when so you don’t double pop and your teammates can get heavy, map control, and shoot the enemies running away from you.
I’ve gotten some questions on why so little resilience so I’ll answer it here. You’re going to die to supers, heavy ammo, and special weapons a lot more than primaries. Your resilience won’t really matter against those things. Plus the primaries you do see in high comp (mostly NF) don’t get effected by resilience. And even the other ones that you’ll occasionally see, resilience doesn’t really change the TTK, it only requires more headshots instead of body shots. At this level most players will be hitting their headshots anyways. Resilience was much more important in Y1 when there was a lot of primary weapon uptime.
The only time I’d recommend a higher resilience is if you’re on a Titan with OEM (to supplement recovery) and prefer low mobility. 7+ resilience will cause Erentil to take 5 bolts instead of 4 and might occasionally make a shotgun need to hit an extra pellet out of the spread to kill you (10 pellets of the 12, instead of 9 of 12 for example), among a couple of other minor advantages. I still wouldn’t really recommend it as I think you get more overall usage out of high recovery, but I’ve seen some people in high comp make it work.
Controlling heavy ammo wins games. Titans can use their barricade to pull heavy even while the other team is laning it. Prioritize getting the heavy and preventing your opponents from getting it. Once you get it, use it and don’t die with it. I’d recommend using it quickly but if you’re running Wardcliff it’s not a bad idea to save a rocket for an opponents super.
Finding Teammates
One of the most important parts of hitting Legend is having quality teammates. And by quality I mean both skill and temperament. Unless you already have a large friends list filled with quality teammates, you’ll need to network to find some. You can do this both in-game and using LFG. You can solo queue with a decent amount of success until about 3.5k or so, then you’ll want to start forming a team. If you seem to gel with teammates when solo queuing, shoot them a message and ask if they’d like to team up.
As far as LFG goes, there are lots of LFG websites these days. I've personally had a lot of success with Xbox's built-in LFG system. LFG can get a bad rep at times, which is understandable: some people are toxic, tilt easily, blame teammates, complain all the time, or just aren't very skilled. You obviously want to avoid these types of people and instead find teammates who are skilled, chill, encouraging, and fun to play with. The best way to do this is to host the LFG group yourself by making the post and weeding people out. I'm not going to debate if/how important KD is in determining someone's skill or what minimum you should ask for; use your own discretion here.
Once you get a team, just start playing. It might take a game or two for everyone to start to feel more comfortable with one another based on playstyles, tendencies, personalities, communication, etc. If things are going well after 4 or 5 games, keep playing. If they keep going well, add them to your friends list and ask them to do the same. If the games are not going well, you don’t seem to be playing together well as a team, and/or your personalities don’t seem to fit, consider politely excusing yourself and forming a new group. There’s absolutely nothing wrong with doing this. Sometimes the team is just not a good fit for whatever reason, it’s best for everyone to just move on with no hard feelings.
And by games going well I don't necessarily mean winning. Are you teamshotting well? Baiting and switching effectively? Controlling the power ammo? Timing super usage? Moving together as a team? Playing complementary angles and watching each other's backs? All of these are good signs of a team working well. One of the best indicators is the number of assists you're getting as a team (these can be looked up on any 3rd party website).
If your team is playing well together over a long session, like I said, add them and ask if they’ll do the same. Next time you get on, ask if they want to play before looking for a group via LFG. Sometimes they’ll even have friends that want to play as well which is great! Add anyone and everyone you play well with and seem to be on the same page with both in-game and personality wise. Rinse and repeat and you’ll have a solid list of friends to play Comp with. If you keep networking you can grow your friends list very quickly and effectively. You can also use Discord to schedule comp sessions.
The best way to attract good teammates is to be the best teammate you can. Be the teammate that you’d want on your team every single game and make things easy on your teammates. Hype them up for making good plays and encourage them if they make a bad one. Team shoot, make good callouts, don’t tilt, etc. Anything you’d look for in a good teammate, try to do that yourself and you’ll attract some great people to play with.
Always warm up before playing Comp and make sure your teammates have too. Rumble or QP is fine, but even a quick 10 minute private match rumble with your comp team can help warm up and build some camaraderie.
Closing Thoughts
Reaching Legend in Comp is seen by most as a daunting task rather than what it should be seen as - a huge accomplishment. Most people won't even attempt it for a variety of reasons, ranging from pride to insufficient reward to the time and effort involved. High Comp is very challenging and honestly a much different game than QP or low Comp. It can be frustrating and stressful. But if you think of it as playing to improve and become the best player you can be instead of just hitting Legend, it'll be very well worth it. Drastically improving as a player and eventually hitting Legend as a result is by far the best feeling in the entire game.
You might not even get there this season, and that's okay! By having an improvement mindset and improving as a player, you'll have a leg up next season - just stick with it and you'll get there.
My final parting piece of advice is to just enjoy the journey. You’ll lose some close games and you’ll win some close games. You’ll get blown out by streamers or recovs and you’ll surprise yourself and beat some teams that are much better than you. Don’t sweat any of the losses, just enjoy playing the game. At the end of the day, this is a video game that we all play for fun.
One thing to keep in mind, especially once you get past 5k and are making that final push, you’re playing against some of the best players in the world and many of them play Comp for a living or it’s literally all they do. For most of us this is just one of many hobbies that we do for fun in our spare time, so don’t get too upset when you lose to these teams.
Thanks for reading - good luck and have fun! I’d be happy to answer any questions that anybody has.
Cheers!
submitted by Keetonicc to DestinyTheGame
