chore: migrate our headers to use more standard ID setting

This migration was done using a regex:

`(#\s+)([^{]+?)\s+\{(#.*?)\}`

And replacing with:

`$1[$2]($3)`
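
For reference, a migration like this can be scripted. Below is a minimal Node sketch (written as TypeScript) of how the regex could be applied across a content directory; the `content` root and the `.md` filter are assumptions about the repository layout, not part of the actual commit.

```typescript
import { readdirSync, readFileSync, statSync, writeFileSync } from "fs";
import { join } from "path";

// The regex and replacement from the commit message, applied file-by-file.
const HEADING_WITH_ID = /(#\s+)([^{]+?)\s+\{(#.*?)\}/g;

// "content" is a hypothetical root directory for the markdown posts.
function walk(dir: string): string[] {
  return readdirSync(dir).flatMap((entry) => {
    const full = join(dir, entry);
    return statSync(full).isDirectory() ? walk(full) : [full];
  });
}

for (const file of walk("content").filter((f) => f.endsWith(".md"))) {
  const source = readFileSync(file, "utf8");
  // Rewrite `# Title {#id}` into `# [Title](#id)`
  const migrated = source.replace(HEADING_WITH_ID, "$1[$2]($3)");
  if (migrated !== source) writeFileSync(file, migrated);
}
```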
This commit is contained in:
Corbin Crutchley
2022-07-09 03:06:56 -07:00
committed by Corbin Crutchley
parent 84c65b3816
commit 87a7e66132
34 changed files with 372 additions and 372 deletions

View File

@@ -14,7 +14,7 @@ In the past, Android Studio did not support AMD's CPUs for hardware emulation of
However, while working on my Ryzen CPU powered desktop, I had difficulties getting the program working on my machine.
# BIOS Setup {#bios}
# [BIOS Setup](#bios)
To use Hyper-V, we have to have various settings configured on our motherboards.
@@ -25,7 +25,7 @@ Two of the settings we need to enable are:
I personally have a Gigabyte motherboard (the Gigabyte GA-AB350M-Gaming 3), so I'll showcase the places I had to find the options for these motherboard settings.
## SVM Mode {#gigabyte-svm}
## [SVM Mode](#gigabyte-svm)
To enable SVM mode, first start at the first screen to the left, labeled **"M.I.T"**.
@@ -41,7 +41,7 @@ Finally, open the **"Advanced CPU Core Settings"**.
Once on this page, you should see **"SVM Mode"** as the fourth option from the bottom. _Toggle that to **"Enabled"**_, then move on to enabling IOMMU.
## IOMMU {#gigabyte-iommu}
## [IOMMU](#gigabyte-iommu)
Enabling IOMMU on a Gigabyte AMD motherboard is much easier than enabling SVM mode. Simply _go to the **"Chipset"** root tab, and it should be the first option at the top_. Even if it's set to "Auto", go ahead and _update that to be **"Enabled"**_.
@@ -51,7 +51,7 @@ Enabling IOMMU on a Gigabyte AMD motherboard is much easier than enabling SVM mo
Once changed, tab over to "Save & Exit" and select "Exit and save changes".
# Windows Features Setup {#windows-features}
# [Windows Features Setup](#windows-features)
Now that we have our BIOS (UEFI, really) configured correctly, we can enable the Windows features we need for the Android Emulator.
@@ -71,7 +71,7 @@ You'll want to turn on the following options:
After these three settings are selected, press **"OK"** and allow the features to install. After your features are installed, your machine will need a reboot. Go ahead and restart your computer before proceeding to install Android Studio.
# Setup Android Studio {#android-studio}
# [Setup Android Studio](#android-studio)
You have a few different methods for installing Android Studio. You can choose to use [Google's installer directly](https://developer.android.com/studio/install), you can [utilize the Chocolatey CLI installer](https://chocolatey.org/packages/AndroidStudio), or even use [JetBrains' Toolbox utility to install and manage an instance of Android Studio](https://www.jetbrains.com/toolbox-app/). _Any of these methods work perfectly well_; it's down to preference, really.
@@ -89,7 +89,7 @@ Once you see the popup dialog, you'll want to _select the "SDK Tools" tab_. Ther
Once you've selected it, press **"Apply"** to download the installer. _Because the "Apply" button only downloads the installer, we'll need to run it manually._
## Run the Installer {#amd-hypervisor-installer}
## [Run the Installer](#amd-hypervisor-installer)
To find the location of the installer, you'll want to go to the install location for your Android SDK. For me (who used the JetBrains Toolbox to install Android Studio), that path was: `%AppData%/../Local/Android/Sdk`.
@@ -113,7 +113,7 @@ You should see the message _"DeleteService SUCCESS"_ if everything ran as expect
> If you get an error `[SC] StartService FAILED with error 4294967201.`, make sure you've followed the steps to [enable BOTH settings in your BIOS](#bios) as well as ALL of the [features mentioned in Windows](#windows-features)
## AVD Setup {#avd}
## [AVD Setup](#avd)
To run the emulator, you need to set up a device itself. You do this through the **"AVD Manager"** in the "configure" menu.

View File

@@ -18,7 +18,7 @@ More than that, if you want more powerful functionality, such as disabling an en
These features are hugely helpful when dealing with complex form logic throughout your application. Luckily for us, they're not just exclusive to native elements - we can implement this functionality into our own form!
# Example {#code-demo}
# [Example](#code-demo)
It's hard for us to talk about the potential advantages to a component without taking a look at it. Let's start with this component, just for fun.
@@ -74,7 +74,7 @@ With only a bit of CSS, we have a visually appealing, A11Y friendly, and quirky
Now, this component is far from feature complete. There's no way to `disable` the input, there's no way to extract data out from the typed input, there's not a lot of functionality you'd typically expect to see from an input component. Let's change that.
# ControlValueAccessor {#intro-concept}
# [ControlValueAccessor](#intro-concept)
Most of the expected form functionality will come as a complement of [the `ControlValueAccessor` interface](https://angular.io/api/forms/ControlValueAccessor). Much like you implement `ngOnInit` by implementing class methods, you do the same with ControlValueAccessor to gain functionality for form components.
@@ -87,7 +87,7 @@ The methods you need to implement are the following:
Let's go through these one-by-one and see how we can introduce change to our component to support each one.
## Setup {#forwardRef}
## [Setup](#forwardRef)
To use these four methods, you'll first need to `provide` them somehow. To do this, we use a combination of the component's `providers` array, `NG_VALUE_ACCESSOR`, and `forwardRef`.
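
Roughly, the wiring looks something like the following sketch; the selector, template, and method bodies here are placeholders rather than the article's exact code:

```typescript
import { Component, forwardRef } from "@angular/core";
import { ControlValueAccessor, NG_VALUE_ACCESSOR } from "@angular/forms";

@Component({
  // The selector and template are placeholders for the article's component.
  selector: "example-input",
  template: `<!-- ... -->`,
  providers: [
    {
      // Register this component as a value accessor so ngModel/formControl can find it.
      provide: NG_VALUE_ACCESSOR,
      // forwardRef lets us reference the class before it's fully defined.
      useExisting: forwardRef(() => ExampleInputComponent),
      multi: true,
    },
  ],
})
export class ExampleInputComponent implements ControlValueAccessor {
  writeValue(value: any): void {}
  registerOnChange(fn: (value: any) => void): void {}
  registerOnTouched(fn: () => void): void {}
  setDisabledState(isDisabled: boolean): void {}
}
```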
@@ -131,7 +131,7 @@ With this, we'll finally be able to use these methods to control our component.
> If you're wondering why you don't need to do something like this with `ngOnInit`, it's because that functionality is baked right into Angular. Angular _always_ looks for an `onInit` function and tries to call it when the respective lifecycle method is run. `implements` is just a type-safe way to ensure that you're explicitly wanting to call that method.
## `writeValue` {#write-value}
## [`writeValue`](#write-value)
`writeValue` is a method that acts exactly as you'd expect it to: it simply writes a value to your component. Because your value can be written from more than one place (your component and the parent), it's suggested to have a setter, getter, and private internal value for your property.
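
As a rough sketch of that setter/getter pattern (the template and property names are illustrative, and the `NG_VALUE_ACCESSOR` provider from the earlier snippet is assumed):

```typescript
import { ChangeDetectorRef, Component } from "@angular/core";

@Component({
  selector: "example-input", // placeholder selector
  template: `<input [value]="value || ''" />`,
})
export class ExampleInputComponent {
  // Internal backing field, only written through the setter below.
  private _value: string | null = null;

  get value() {
    return this._value;
  }

  set value(newValue: string | null) {
    this._value = newValue;
    // Let change detection know the view needs a refresh.
    this.cd.markForCheck();
  }

  constructor(private cd: ChangeDetectorRef) {}

  // Called by the forms API whenever the parent writes a value.
  writeValue(value: string | null): void {
    this.value = value;
  }
}
```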
@@ -171,7 +171,7 @@ export class ExampleInputComponent implements ControlValueAccessor {
Now, when we use a value like `new FormControl('test')` and pass it as `[formControl]` to our component, it will render the correct default value
## `setDisabledState` {#disabled-state}
## [`setDisabledState`](#disabled-state)
Implementing the disabled state check is extremely similar to [implementing value writing](#write-value). Simply add a setter, getter, and `setDisabledState` to your component, and you should be good-to-go:
@@ -194,7 +194,7 @@ Just as we did with value writing, we want to run a `markForCheck` to allow chan
> It's worth mentioning that unlike the other three methods, this one is entirely optional for implementing a `ControlValueAccessor`. This allows us to disable the component or keep it enabled but is not required for usage with the other methods. `ngModel` and `formControl` will work without this method implemented.
## `registerOnChange` {#register-on-change}
## [`registerOnChange`](#register-on-change)
While the previous methods have been implemented in a way that required usage of `markForCheck`, these last two methods are implemented in a bit of a different way. You need only look at the type of the methods on the interface to see as much:
@@ -224,7 +224,7 @@ While this code sample shows you how to store the function, it doesn't outline h
/>
```
## `registerOnTouched` {#register-on-touched}
## [`registerOnTouched`](#register-on-touched)
Like how you [store a function and call it to register changes](#register-on-change), you do much of the same to register when a component has been "touched" or not. This tells your consumer whether a component has been interacted with.
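
A sketch of what that might look like; the `(blur)` binding is one reasonable place to mark the control as touched, though the article's component may do it elsewhere:

```typescript
import { Component } from "@angular/core";

@Component({
  selector: "example-input", // placeholder
  template: `
    <!-- Mark the control as touched once the user leaves the field -->
    <input (blur)="onTouched()" />
  `,
})
export class ExampleInputComponent {
  // Default to a no-op so the template can call it before registration happens.
  onTouched: () => void = () => {};

  registerOnTouched(fn: () => void): void {
    this.onTouched = fn;
  }
}
```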
@@ -248,7 +248,7 @@ You'll want to call this `onTouched` method any time that your user "touches" (o
/>
```
# Consumption {#consume-demo}
# [Consumption](#consume-demo)
Now that we've done that work let's put it all together, apply [the styling from before](#code-demo), and consume the component we've built!
@@ -319,7 +319,7 @@ These classes include:
They reflect states so that you can update the visuals in CSS to reflect them. When using `[(ngModel)]`, they won't appear, since nothing is tracking when a component is `pristine` or `dirty`. However, when using `[formControl]` or `[formControlName]`, these classes _will_ appear and act accordingly, thanks to the `registerOnChange` and `registerOnTouched` functions. As such, you're able to display custom CSS logic for when each of these states is met.
# Gain Access To Form Control Errors {#form-control-errors}
# [Gain Access To Form Control Errors](#form-control-errors)
Something you'll notice that wasn't implemented in the `ControlValueAccessor` implementation is support for checking whether validators are applied. If you're a well-versed Angular Form-ite, you'll recall the ability to [validate forms using validators appended to `FormControl`s](https://angular.io/guide/form-validation). Although a niche situation — since most validation happens at the page level, not the component level — wouldn't it be nice to check when a form is valid or not directly from the component to which the form is attached?
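
One common pattern for getting at the attached control (not necessarily the exact approach the article takes) is to look up `NgControl` lazily through the injector, since injecting it directly in the constructor can create a circular dependency with `NG_VALUE_ACCESSOR`:

```typescript
import { Component, Injector, OnInit } from "@angular/core";
import { NgControl } from "@angular/forms";

@Component({
  selector: "example-input", // placeholder
  template: `<!-- ... -->`,
})
export class ExampleInputComponent implements OnInit {
  control: NgControl | null = null;

  constructor(private injector: Injector) {}

  ngOnInit() {
    // Look the directive up lazily to avoid a circular DI chain with NG_VALUE_ACCESSOR.
    this.control = this.injector.get(NgControl, null);
  }

  get showError(): boolean {
    // e.g. surface validator errors only after the user has interacted with the field
    return !!this.control?.invalid && !!this.control?.touched;
  }
}
```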
@@ -412,7 +412,7 @@ export class AppComponent {
Not only do you have [a wide range of Angular-built validators at your disposal](https://angular.io/api/forms/Validators), but you're even able to [make your own validator](https://angular.io/api/forms/Validator)!
# Conclusion {#conclusion}
# [Conclusion](#conclusion)
Enabling `formControl` and `ngModel` usage is an extremely powerful tool that enables you to have feature-rich and consistent APIs across your form components. Using them, you can ensure that your consumers are provided with the functionality they'd expect in a familiar API to native elements. Hopefully, this article has provided you with more in-depth insight that you're able to use with your own components.

View File

@@ -18,7 +18,7 @@ However, we have multiple teams that rely on our shared component system, and we
Let's walk through how we did that.
# Setup Assets Package {#assets-package}
# [Setup Assets Package](#assets-package)
As we're wanting to ship our packages separately, we opted for two Git repositories for the component system and private assets. In a new repository, I have the following for the `package.json`:
@@ -51,7 +51,7 @@ As we're wanting to ship our packages separately, we opted for two Git repositor
While this package will not contain code, I still believe it's important to maintain a semver for the package. If a path in the package changes, the semver will communicate that to your package's consumers alongside the changelog. As such, this `package.json` utilizes [Conventional Commit and `commitlint` to auto-generate changelogs and maintain version history](/posts/setup-standard-version/).
## Add Font Files {#font-files}
## [Add Font Files](#font-files)
The "Foundry Stirling" font that I'm shipping is a combination of 7 `.otf` files. I start by creating a `fonts` directory. Inside that directory, I place the `.otf` files in the `fonts` directory.
@@ -74,7 +74,7 @@ Once done, your project repo should look something like this:
└── package.json
```
## `@font-face` CSS Definition {#css-declare}
## [`@font-face` CSS Definition](#css-declare)
Now that we have the fonts in their place, we need to create a common `foundry_stirling.css` file to access those fonts from CSS.
@@ -124,7 +124,7 @@ Because we're planning on using Angular CLI, we'll want to set the `src` propert
> @include foundry_sterling("/assets")
> ```
### Font Name Value Mapping {#font-val-mapping}
### [Font Name Value Mapping](#font-val-mapping)
Because our font had multiple files to cover the different CSS font-weight values, we had to declare an `@font-face` for each of the font files. This is the mapping we used:
@@ -140,7 +140,7 @@ Because our font had multiple files to declare the different CSS values weights,
| 800 | Extra-Bold / Ultra-Bold | `foundry_sterling_extra_bold.otf` |
| 900 | Black / Heavy | N/A |
# Consume Assets Package in Angular CLI {#angular-cli}
# [Consume Assets Package in Angular CLI](#angular-cli)
Now that we have our `npm` package configured for usage, we'll start preparing for consuming that package by installing it into our app's `package.json`:
@@ -150,7 +150,7 @@ npm i ecp-private-assets
> Remember, `ecp-private-assets` is the name of our internal package. You'll need to replace this `npm i` command with your own package name
## `angular.json` modification {#angular-json}
## [`angular.json` modification](#angular-json)
Once this is done, two steps are required. First, add the following to `angular.json`'s `assets` property. This will copy the files from `ecp-private-assets` to `/assets` once you set up a build.
@@ -197,7 +197,7 @@ This way, when we use the CSS `url('/assets/')`, it will point to our newly appo
}
```
## Import CSS {#css-import}
## [Import CSS](#css-import)
Now that we have our assets in place, we need to import the CSS file into our app.

View File

@@ -10,7 +10,7 @@
}
---
# Article Overview {#overview}
# [Article Overview](#overview)
> This article was written with the idea that the reader is at least somewhat familiar with the introductory concepts of Angular. As a result, if you haven't done so already, it is highly suggested that you make your way through the fantastic [Angular getting started guide](https://angular.io/start).
@@ -39,9 +39,9 @@ Sound like a fun time? Let's goooo! 🏃🌈
> The contents of this post were also presented in a talk under the same name. You can [find the slides here](./slides.pptx) or a live recording of that talk given by the post's author [on our YouTube channel](https://www.youtube.com/watch?v=7AilTMFPxqQ).
# Introduction To Templates {#intro}
# [Introduction To Templates](#intro)
## `ng-template` {#ng-template}
## [`ng-template`](#ng-template)
Before we dive into the meat of this article, let's do a quick recap of what templates are and what they look like.
@@ -66,7 +66,7 @@ We are then adding the [`ngIf`](https://angular.io/api/common/NgIf) structural d
If you had forgotten to include the `ngIf`, it would never render the `False` element because **a template is not rendered to the view unless explicitly told to — this includes templates created with `ng-template`**
## Rendering Manually with `ngTemplateOutlet` {#ng-template-outlet}
## [Rendering Manually with `ngTemplateOutlet`](#ng-template-outlet)
But there's a ~~simpler~~ ~~much more complex~~ another way to show the same template code above!
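
As a quick sketch of what that looks like (an illustrative component, not the article's exact example; `CommonModule` is assumed to be imported for `ngTemplateOutlet`):

```typescript
import { Component } from "@angular/core";

@Component({
  selector: "app-root", // illustrative
  template: `
    <ng-template #falseTemp>
      <p>False</p>
    </ng-template>
    <!-- Render the template above explicitly instead of relying on *ngIf -->
    <ng-template [ngTemplateOutlet]="falseTemp"></ng-template>
  `,
})
export class AppComponent {}
```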
@@ -101,7 +101,7 @@ Knowing that, you can see that the following example would show the user three o
With this, combined with template reference variables, you may find it easier to use a ternary operator to pass the correct template based on the value of `bool` to create an embedded view of that template.
## Pass Data To Templates — The Template Context {#template-context}
## [Pass Data To Templates — The Template Context](#template-context)
Do you know how I mentioned that you can pass data between templates (at the start of the article)? This can be accomplished by defining the _context_ of the template. This context is defined by a JavaScript object you pass to the template with your desired key/value pairs (just like any other object). When looking at an example below, **think of it in terms of passing data from a parent component to a child component through property binding**. When you define the context of a template, you're simply giving it the data it needs to fulfill its purpose in much the same way.
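
A minimal sketch of a context being passed along (illustrative names, not the article's exact example):

```typescript
import { Component } from "@angular/core";

@Component({
  selector: "app-root", // illustrative
  template: `
    <!-- "let-name" declares a template input variable fed by the context below -->
    <ng-template #greet let-name="personName">
      <p>Hello there, {{ name }}!</p>
    </ng-template>
    <ng-template
      [ngTemplateOutlet]="greet"
      [ngTemplateOutletContext]="{ personName: 'Corbin' }"
    ></ng-template>
  `,
})
export class AppComponent {}
```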
@@ -133,9 +133,9 @@ Now let's see it in action!
As a quick note, _I only named these template input variables differently from the context value key to make it clear that you may do so_. `let-personName="personName"` is not only valid, but it also can make the code's intentions clearer to other developers.
# View References — `ViewChild`/`ContentChild` {#view-references}
# [View References — `ViewChild`/`ContentChild`](#view-references)
## Keeping Logic In Your Controller using `ViewChild` {#viewchild}
## [Keeping Logic In Your Controller using `ViewChild`](#viewchild)
While template reference variables are very useful for referencing values within the template itself, there may be times when you'll want to access a reference to an item in the template from the component logic. Luckily, there's a way to get a reference to any component, directive, or view within a component template.
@@ -163,7 +163,7 @@ export class AppComponent {
_`ViewChild` is a "property decorator" utility for Angular that searches the component tree to find what you pass it as a query._ In the example above, when we pass the string `'helloMsg'`, we are looking for something in the tree that is marked with the template reference variable `helloMsg`. In this case, it's an `ng-template`, which is then stored to the `helloMessageTemplate` property when it is found. Because it is a reference to a template, we are typing it as `TemplateRef<any>` to have TypeScript understand the typings whenever it sees this variable.
### Not Just for Templates! {#viewchild-not-just-templates}
### [Not Just for Templates!](#viewchild-not-just-templates)
`ViewChild` isn't just for templates, either. You can get references to anything in the view tree:
@@ -199,7 +199,7 @@ Despite the examples thus far having only used a string as the query for `ViewCh
For the particular example listed above, this code change would still yield the same results. _When using `ViewChild`, it might be dangerous to do this if you have many components with that class._ This is because _`ViewChild` only returns the first result that Angular can find_ — that could yield unexpected results if you're not aware of it.
### My Name is ~~Inigo Montoya~~ the `read` Prop {#viewchild-read-prop}
### [My Name is ~~Inigo Montoya~~ the `read` Prop](#viewchild-read-prop)
Awesome! But I wanted to get the value of the `data-unrelatedAttr` attribute dataset, and my component definition doesn't have an input for that. How do I get the dataset value?
@@ -226,7 +226,7 @@ console.log(myComponent.nativeElement.dataset.getAttribute('data-unrelatedAttr')
`ViewChild` isn't an only child, though (get it?). There are other APIs similar to it that allow you to get references to other items in your templates from your component logic.
## `ViewChildren`: More references than your nerdy pop culture friend {#viewchildren}
## [`ViewChildren`: More references than your nerdy pop culture friend](#viewchildren)
`ViewChildren` allows you to get a reference to any items in the view that match your `ViewChildren` query as an array of each item that matches:
@@ -249,7 +249,7 @@ export class AppComponent {
Would give you a list of all components with that base class. You're also able to use the `{read: ElementRef}` property from the `ViewChild` property decorator to get a `QueryList<ElementRef>` (to be able to get a reference to the DOM [Elements](https://developer.mozilla.org/en-US/docs/Web/API/Element) themselves) instead of a query list of `MyComponentComponent` types.
### What is `QueryList` {#viewchildren-querylist}
### [What is `QueryList`](#viewchildren-querylist)
`QueryList` (from `@angular/core`) is array-like: the core team has done an outstanding job of adding in all the usual methods (`reduce`, `map`, etc.), and it _extends an iterator interface_ (so it works with `*ngFor` in Angular templates and `for (let i of _)` in TypeScript/JavaScript logic). Still, _it is not an array_. [A similar situation occurs when using `document.querySelectorAll` in plain JavaScript](https://developer.mozilla.org/en-US/docs/Web/API/NodeList). _If you're expecting an array from an API that returns `QueryList`, it might be best to use `Array.from`_ on the value (in this case the `myComponents` component prop) when you access it in logic later.
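
For example, a sketch of converting the `QueryList` before using array-only APIs (the selector and import path for `MyComponentComponent` are assumptions):

```typescript
import {
  AfterViewInit,
  Component,
  QueryList,
  ViewChildren,
} from "@angular/core";
// Path and selector are assumptions about the article's example component.
import { MyComponentComponent } from "./my-component.component";

@Component({
  selector: "app-root",
  template: `
    <my-component></my-component>
    <my-component></my-component>
  `,
})
export class AppComponent implements AfterViewInit {
  @ViewChildren(MyComponentComponent)
  myComponents!: QueryList<MyComponentComponent>;

  ngAfterViewInit() {
    // QueryList is array-like but not an Array; convert before using Array-only APIs.
    const asArray = Array.from(this.myComponents);
    console.log(asArray.length); // 2
  }
}
```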
@@ -282,7 +282,7 @@ It might be a good idea to gain familiarity of doing this as the Angular docs gi
> NOTE: In the future this class will implement an Observable interface.
## `ContentChildren`: If this article had kids {#contentchildren}
## [`ContentChildren`: If this article had kids](#contentchildren)
Author's note:
@@ -376,7 +376,7 @@ If we change the `ViewChildren` line to read:
We'll see that the code now runs as expected. The cards are recolored, the `console.log`s ran, and the developers are happy.
### The Content Without the `ng` {#viewchildren-without-ng-content}
### [The Content Without the `ng`](#viewchildren-without-ng-content)
`ContentChild` even works when you're not using `ng-content` but still passing components and elements as children to the component. So, for example, if you wanted to pass a template as a child but wanted to render it in a very specific way, you could do so:
@@ -408,13 +408,13 @@ export class AppComponent {
This is a perfect example of where you might want `@ContentChild` — not only are you unable to use `ng-content` to render this template without a template reference being passed to an outlet, but you're able to create a context that can pass information to the template being passed as a child.
# How Does Angular Track the UI {#understand-the-tree}
# [How Does Angular Track the UI](#understand-the-tree)
Awesome! We've been blowing through some of the real-world uses of templates like a bullet-train through a tunnel. 🚆 But I have something to admit: I feel like I've been doing a pretty bad job at explaining the "nitty-gritty" of how this stuff works. While that can often be a bit more dry of a read, I think it's very important to be able to use these APIs to their fullest. As such, let's take a step back and read through some of the more abstract concepts behind them.
One of these abstract concepts comes from how Angular tracks what's on-screen; just like the browser has the _Document Object Model_ tree (often called the DOM), Angular has the _View Hierarchy Tree_.
## The DOM Tree {#the-dom}
## [The DOM Tree](#the-dom)
Okay, I realize I just dropped some vocab on you without explaining first. Let's change that.
@@ -523,7 +523,7 @@ Little has changed, yet there's something new! A _view container_ is just what i
_It is because Angular's view containers can be attached to views, templates, and elements that the dependency injection system is able to provide a `ViewContainerRef` regardless of what you've requested the `ViewContainerRef` on_.
## Host Views {#components-are-directives}
## [Host Views](#components-are-directives)
If you're looking for them, you might notice a few similarities between a component declaration's `template` and `ng-template`s:
@@ -647,9 +647,9 @@ In order to fix this behavior, we'd need to move the second `ng-template` into t
<iframe src="https://stackblitz.com/edit/start-to-source-12-fixed-template-var?embed=1&file=src/app/app.component.ts" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
# The Bane of All JavaScript Developers: Timings {#timings}
# [The Bane of All JavaScript Developers: Timings](#timings)
## Understanding timings with `ViewChildren` {#viewchildren-timings}
## [Understanding timings with `ViewChildren`](#viewchildren-timings)
But the example immediately above doesn't have the same behavior as the one we likely intended. We wanted to get:
@@ -697,7 +697,7 @@ Why is this error happening? What can we do to fix it?
This, my friends, is where the conversation regarding change detection, lifecycle methods, and the `static` prop comes into play.
## Change Detection, How Does It Work {#change-detection}
## [Change Detection, How Does It Work](#change-detection)
> Change detection in Angular is deserving of its own massive article: This is not that article. That said, understanding how change detection works and how it affects the availability of templates is imperative to understanding some of the more ambiguous aspects of Angular template behaviors.
>
@@ -755,7 +755,7 @@ Because of this — when using the `ngDoCheck` — you're manually running the v
> If there's more interest in an article from me about Angular change detection, reach out — I'd love to gauge interest!
### Great Scott — You Control The Timing! The `static` Prop {#static-prop}
### [Great Scott — You Control The Timing! The `static` Prop](#static-prop)
That said, there might be times when having the value right off the bat in `ngOnInit` would be useful. After all, if you're not embedding a view into a view, it would be extremely useful to be able to get the reference before `ngAfterViewInit` and be able to avoid the fix mentioned above.
@@ -797,9 +797,9 @@ When taking the example with the `testingMessageCompVar` prop and changing the v
<iframe src="https://stackblitz.com/edit/start-to-source-15-static-first-check?ctl=1&embed=1&file=src/app/app.component.ts" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
# View Manipulation {#view-manipulation}
# [View Manipulation](#view-manipulation)
## View Limitations {#view-limitations}
## [View Limitations](#view-limitations)
Having covered views in the last section, it's worth mentioning an important limitation regarding them:
@@ -807,7 +807,7 @@ Having covered views in the last section, it's important to mention an important
>
>\- Angular Docs
## Embed Views {#embed-views}
## [Embed Views](#embed-views)
While we've covered how to insert a component using `ngTemplate`, Angular also allows you to find, reference, modify, and create them yourself in your component/directive logic! 🤯
@@ -994,7 +994,7 @@ EmbeddedViewRef<C> {
}
```
# Accessing Templates from a Directive {#directives}
# [Accessing Templates from a Directive](#directives)
Thus far, we've only used components to change and manipulate templates. However, [as we've covered before, directives and components are the same under-the-hood](#components-are-directives). As a result, _we have the ability to manipulate templates in the same way using directives rather than components_. Let's see what that might look like:
@@ -1030,7 +1030,7 @@ export class AppComponent {}
You'll notice this code is almost exactly the same from some of our previous component code.
## Reference More Than View Containers {#directive-template-ref}
## [Reference More Than View Containers](#directive-template-ref)
However, the lack of a template associated with the directive enables some fun stuff. For example, _we can use the same dependency injection trick we've been using to get the view container reference_ to get a reference to the template element that the directive is attached to and render it in the `ngOnInit` method like so:
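
A sketch of that trick (the directive name here is illustrative and may differ from the article's own example):

```typescript
import { Directive, OnInit, TemplateRef, ViewContainerRef } from "@angular/core";

@Directive({
  selector: "[renderTheTemplate]", // illustrative selector
})
export class RenderTheTemplateDirective implements OnInit {
  constructor(
    // Because the directive sits on an ng-template, DI hands us that template.
    private templ: TemplateRef<any>,
    private parentViewRef: ViewContainerRef
  ) {}

  ngOnInit(): void {
    // Render the template into the directive's own view container.
    this.parentViewRef.createEmbeddedView(this.templ);
  }
}
```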
@@ -1060,7 +1060,7 @@ export class AppComponent {}
<iframe src="https://stackblitz.com/edit/start-to-source-22-directive-template-reference?ctl=1&embed=1&file=src/app/app.component.ts" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
## Input Shorthand {#directive-same-name-input}
## [Input Shorthand](#directive-same-name-input)
With directives, we can even create an input with the same name, and just pass that input value directly to the template using a context:
@@ -1154,7 +1154,7 @@ export class NgTemplateOutlet implements OnChanges {
}
```
# Structural Directives — What Sorcery is this? {#structural-directives}
# [Structural Directives — What Sorcery is this?](#structural-directives)
If you've used Angular in any scale of application, you've run into Angular helpers that look a lot like directives and start with a `*`, such as `*ngIf` and `*ngFor`. These helpers are known as **structural directives** and are built upon all of the things we've learned to this point.
@@ -1298,7 +1298,7 @@ update(): void {
Here, we're using the `clear` method on the parent view ref to remove the previous view when the value is false. Because our structural directive will contain a template only used for this directive, we can safely assume that `clear` will only remove templates created within this directive and not from an external source.
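
Putting that together, a stripped-down `ngIf`-style directive might look like this sketch (not Angular's actual source, and simpler than the article's own structural-directive example):

```typescript
import { Directive, Input, TemplateRef, ViewContainerRef } from "@angular/core";

@Directive({
  selector: "[myIf]", // a simplified ngIf stand-in
})
export class MyIfDirective {
  private _condition = false;

  constructor(
    private templ: TemplateRef<any>,
    private parentViewRef: ViewContainerRef
  ) {}

  @Input() set myIf(condition: boolean) {
    this._condition = condition;
    this.update();
  }

  private update(): void {
    if (this._condition && this.parentViewRef.length === 0) {
      // Only create the embedded view once per "true" stretch.
      this.parentViewRef.createEmbeddedView(this.templ);
    } else if (!this._condition) {
      // Safe here: this container only ever holds views created by this directive.
      this.parentViewRef.clear();
    }
  }
}
```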
#### How Angular Built It {#angular-ngif-source}
#### [How Angular Built It](#angular-ngif-source)
While Angular goes for a more verbose pattern due to additional features available in their structural directive, the implementation is not too different from our own.
@@ -1636,11 +1636,11 @@ If we [go back to the original section where we showed `ngIf` code from the Angu
this._context.$implicit = this._context.ngIf = condition;
```
## Syntax Rules {#microsyntax-rules}
## [Syntax Rules](#microsyntax-rules)
Thus far, I've been doing my best to keep the examples using a fairly consistent microsyntax. Because of this, you might think that you must use `;` to separate the calls, you need to have things in a certain order, or that there might be more rules you don't yet understand about how to use the syntax. This is not the case — the syntax is fairly loose, actually, although it can be hard to understand.
### Parts Make Up The Whole {#microsyntax-parts}
### [Parts Make Up The Whole](#microsyntax-parts)
The rules behind microsyntax can seem overwhelming, so let's take a look at each part on its own before combining them.
@@ -1653,7 +1653,7 @@ Angular's microsyntax has 4 building blocks, that when combined in a particular
![A chart taking a microsyntax and turning it into a diagram. This diagram will be explained thoroughly via text in this section](./microsyntax.svg "A diagram showing the different parts of the microsyntax")
#### Expressions {#microsyntax-explain-expressions}
#### [Expressions](#microsyntax-explain-expressions)
The way I describe expressions in simple terms is "anything that, when referenced, returns a value". Like the example above, it could mean using an operator (`5 + 3`), calling a function (`Math.random()`), a variable (assuming `const numberHere = 12`, `numberHere`) or just a value itself (`'a string here'`).
@@ -1669,7 +1669,7 @@ While "what is and isnt an expression in JavaScript" could be its own post, s
<p *makePigLatin="functionsAsWell()"></p>
```
#### The `as` keyword {#microsyntax-explain-as}
#### [The `as` keyword](#microsyntax-explain-as)
The rules behind the `as` keyword as an alternative to `let` are fairly straightforward:
@@ -1678,7 +1678,7 @@ The rules behind the `as` keyword as an alternative to `let` are fairly straight
So, if you had the context as `{personName: 'Corbin', personInterests: ['programming']}`, and wanted to save the value from `personInterests` to a template input variable `interestList`, you could use: `personInterests as interestList`.
#### `keyExp` — Key Expressions {#microsyntax-explain-keyexp}
#### [`keyExp` — Key Expressions](#microsyntax-explain-keyexp)
A key expression is simply an expression that you're able to bind to an input on a structural directive.
@@ -1695,7 +1695,7 @@ A key expression is simply an expression that youre able to bind to an input
<p *makePigLatin="inputKey 'This is an expression'"></p>
```
#### `let` bindings {#microsyntax-explain-let}
#### [`let` bindings](#microsyntax-explain-let)
The `let` binding:

View File

@@ -21,7 +21,7 @@ It's important to note that _"networking" is a broad, catch-all term that infers
>
> That said, you need the right binary data to be input into the CPU for it to process, just like our brains need the right input to figure out what to do. Because of this, communication with the CPU is integral.
# Architecture {#network-architectures}
# [Architecture](#network-architectures)
There are a lot of ways that information can be connected and transferred, and we use various types of architecture to connect them.
_Computers speak in `1`s and `0`s, known as binary_. These binary values come in incredibly long strings that combine the two symbols to _construct all of the data used in communication_.
@@ -30,7 +30,7 @@ _Computers speak in `1`s and `0`s, known as binary_. These binary values come in
This is true regardless of the architecture used to send data - it's all binary under-the-hood somewhere in the process. The architecture used to send data is simply a way of organizing the ones and zeros effectively to enable the types of communication required for a specific use-case.
## Bus Architecture {#bus-architecture}
## [Bus Architecture](#bus-architecture)
For example, one of the ways that we can send and receive data is by, well, sending them. _The bus architecture_, often used in low-level hardware such as CPU inter-communication, _simply streams the ones and zeros directly_.
@@ -42,7 +42,7 @@ In this example, the bus icons are similar to binary data - either a one or a ze
Furthermore, because error-handled bi-directional cancelable subscriptions (like the ones you make to servers to connect to the internet) are difficult to implement using the bus architecture, _we typically don't use it for large-scale multi-device networks like the internet_.
## Packet Architecture {#packet-architecture}
## [Packet Architecture](#packet-architecture)
The weaknesses of the bus architecture led to the creation of the packet architecture. The packet architecture requires a somewhat higher-level understanding of how data is sent and received. To explain this concept, we'll use an analogy that fits really well.
@@ -52,7 +52,7 @@ Let's say you want to send a note to your friend that's hours away from you. You
Similarly, a packet is _sent from a single sender, received by a single recipient, addressed where to go, and contains a set of information_.
### Metadata {#packet-metadata}
### [Metadata](#packet-metadata)
Letters may not give you the same kind of continuous stream of consciousness as in-person communications, but they do provide something in return: structure.
@@ -72,7 +72,7 @@ As a result, you might have a middleware packet handler that reads only the head
<video src="./header_routing.mp4" title="An example of a small packet being sent to a small file server and a larger packet being sent to the large file server based on the data in the packet header"></video>
# [It Takes A Village](https://en.wikipedia.org/wiki/It_takes_a_village) To Send A Letter {#osi-layers}
# [[It Takes A Village](https://en.wikipedia.org/wiki/It_takes_a_village) To Send A Letter](#osi-layers)
Understanding what a letter is may be the most important part of communication if you intend to write letters, but if someone asked you to deliver a letter, it helps to have a broader understanding of how the letter gets sent. That's right: _there's a whole structure set in place to send the letters (packets) you want to be sent_. This structure is comprised of many levels, which we'll outline here.
@@ -96,35 +96,35 @@ This breakdown of layers is referred to as the [OSI model](https://en.wikipedia.
Let's start from the bottom and make our way up. Remember that each of these layers builds on top of each other, allowing you to make more complex but efficient processes to send data on each step.
## Physical {#osi-layer-1-physical}
## [Physical](#osi-layer-1-physical)
The physical layer is similar to the trucks, roads, and workers that are driving to send the data. Sure, you could send a letter just by handing letters one-by-one from driver to driver, but without some organization that's usually dispatched to higher levels, things can go wrong (as they often do [in a game of telephone](https://en.wikipedia.org/wiki/Chinese_whispers)).
In the technical world, _this layer refers to the binary bits themselves_ ([which combine to make up letters and the rest of the structure of your data](/posts/non-decimal-numbers-in-tech/)) _and the physical wiring_ constructed to transfer those bits. As it is with the mail world, this layer _can_ be used alone, but it often needs delegation from higher layers to be more effective.
## Data Link {#osi-layer-2-data-link}
## [Data Link](#osi-layer-2-data-link)
Data link would be like UPS or FedEx offices: sending information from post office to post office. These offices don't have mail sorters yet (that's a layer up), but they do provide a means for drivers to arrive and exchange mail at a designated area. As a result, instead of having to meet the drivers in the road to receive my mail, I can simply go to a designated office to receive it.
Likewise, _the data link layer is the layer that transfers binary data between different locations_. This becomes especially helpful when _dealing with local networks that only exchange data within a single physical location_, where you might not need the added complexity that larger-scale packet sorting brings.
## Network {#osi-layer-3-network}
## [Network](#osi-layer-3-network)
The network layer is similar to the mail sorters. Between being transferred from place to place, there may be instances where the mail needs to be sorted and organized. This is _done with packets in the network layer to handle routing_ and other related activities between clients.
## Transport {#osi-layer-4-transport}
## [Transport](#osi-layer-4-transport)
The transport layer delivers it from the post office to my apartment building. This means that not only does the package get delivered from post-office building to post-office building, but it also gets to-and-from its destination as intended.
## Session {#osi-layer-5-session}
## [Session](#osi-layer-5-session)
With newer packages delivered through services like UPS, you may want a tracking number for your package. This is similar to the session layer. This layer includes a back-and-forth that can give you insight into the progress of the delivery or even include information like return-to-sender.
## Presentation {#osi-layer-6-presentation}
## [Presentation](#osi-layer-6-presentation)
But when a package gets received by you, it doesn't stop there, does it? You want to bring the package inside your home. For most packages, this is relatively trivial - you simply take it inside. However, for some specialized instances, this may require hiring movers to get a couch in your house. In this same way, HTTP and other protocols don't typically differentiate between the presentation layer and the application layer, but some networks do. When they do, they use the presentation layer to outline how the data is formed for sending and receiving.
## Application {#osi-layer-7-application}
## [Application](#osi-layer-7-application)
You've just been delivered the fancy new blender you ordered for smoothies. After unwrapping the package, you plug it in and give it a whirl, making the most delicious lunch-time smoothie you've ever had. Congrats, you've just exemplified the application layer. This layer encapsulates what your user (developer and end-user alike) will use: the application that communicates back-and-forth and the reason you wanted to send data in the first place.

View File

@@ -20,7 +20,7 @@ This is a diagram showing all possible knight moves:
The red mark above is an arbitrary starting point, and the green marks are all of the possible places that the knight can jump from that point.
# Solution Method {#solution-method}
# [Solution Method](#solution-method)
At first glance, this may look like a bizarre maze navigation algorithm with complex rules, and it inspires any number of thoughts about the number of possible iterations, how to decide whether a move is constructive or not, etc.
@@ -39,7 +39,7 @@ So, right now, we have all of the squares labelled that we can get to in zero, o
If any of the labeled squares is the desired destination, then we know the minimum number of moves required to reach that square. So, all we have to do is start with our starting square and repeat this process until we happen to fill our ending destination with a number. The number in that square will be the minimal number of moves required to reach that spot.
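
As a sketch of that breadth-first "fill in the move counts" approach (written in TypeScript here rather than the CodePen's JavaScript; the 8×8 board size is an assumption):

```typescript
type Square = [number, number];

const KNIGHT_MOVES: Square[] = [
  [1, 2], [2, 1], [-1, 2], [-2, 1],
  [1, -2], [2, -1], [-1, -2], [-2, -1],
];

function minKnightMoves(start: Square, target: Square, size = 8): number {
  if (start[0] === target[0] && start[1] === target[1]) return 0;

  // moves[r][c] holds the minimum number of moves to reach that square (-1 = unvisited).
  const moves = Array.from({ length: size }, () => new Array<number>(size).fill(-1));
  moves[start[0]][start[1]] = 0;

  let frontier: Square[] = [start];
  while (frontier.length > 0) {
    const next: Square[] = [];
    for (const [row, col] of frontier) {
      for (const [dr, dc] of KNIGHT_MOVES) {
        const r = row + dr;
        const c = col + dc;
        if (r < 0 || c < 0 || r >= size || c >= size || moves[r][c] !== -1) continue;
        moves[r][c] = moves[row][col] + 1;
        if (r === target[0] && c === target[1]) return moves[r][c];
        next.push([r, c]);
      }
    }
    frontier = next;
  }
  return -1; // unreachable on this board size
}

console.log(minKnightMoves([0, 0], [7, 7])); // 6
```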
# JavaScript Execution {#js-execution}
# [JavaScript Execution](#js-execution)
So, let's get started. I hacked this together in CodePen, and I didn't build an interface for it, but that would be an easy enough step. We could do all kinds of animations in D3js, etc, but that's not for this blog post.

View File

@@ -12,7 +12,7 @@
One of the hardest parts of any front-end application (native application or website alike) is the data layer. Where do I store information? That question alone can cause lots of headaches when dealing with large-scale applications. Well, worry not, as we'll be going over some of the different options you have at your disposal in your React Native applications today.
# Key-Value Pair Storage {#default-preference}
# [Key-Value Pair Storage](#default-preference)
Often, while creating settings options, it can be useful to store simple key/value pairings of serializable data (like JSON). In the web world, we'd use `localStorage`. Ideally, we'd like a simple data storage for string-based data that has a `get`, a `set`, and a `clear` method to handle data for us. Luckily for us, there is!
@@ -24,7 +24,7 @@ yarn add react-native-default-preference
Under-the-hood, it utilizes native methods for storing data in a key-value manner. The APIs it employs are the [`SharedPreferences` API on Android](https://developer.android.com/reference/android/content/SharedPreferences) and the [`UserDefaults` API on iOS](https://developer.apple.com/documentation/foundation/userdefaults). This native code utilization should mean that not only is the data straightforward to access, but speedy as well.
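
A usage sketch follows; the `get`/`set`/`clear` methods are the ones described above, but treat the exact signatures as assumptions and double-check the package's README:

```typescript
// Method names are assumptions based on the package's documented key-value API.
import DefaultPreference from "react-native-default-preference";

async function rememberTheme(theme: string) {
  await DefaultPreference.set("app-theme", theme);
}

async function loadTheme(): Promise<string | undefined> {
  const theme = await DefaultPreference.get("app-theme");
  return theme ?? undefined;
}

async function resetTheme() {
  await DefaultPreference.clear("app-theme");
}
```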
# Secure Key-Value Pair Storage {#secure-key-store}
# [Secure Key-Value Pair Storage](#secure-key-store)
There may be an instance where you want to store a piece of sensitive information on the device. For example, in [my mobile Git client I'm currently writing](https://gitshark.dev), I'm grabbing an access token from the GitHub API. This type of sensitive data introduces a new set of problems when it comes to storage; conventional means of storing data are easily accessed from external sources, leading to a security vulnerability with such sensitive data. That said, both major mobile platforms have solved this problem: [iOS has its Keychain API](https://developer.apple.com/documentation/security/keychain_services) while [Android provides a KeyStore API](https://developer.android.com/reference/java/security/KeyStore). Both can be accessed using the [`react-native-secure-key-store` npm package](https://github.com/pradeep1991singh/react-native-secure-key-store#readme):
@@ -34,7 +34,7 @@ yarn add react-native-secure-key-store
This package provides an easy-to-use key-value pattern, not entirely dissimilar to the one we used [in the last section](#default-preference).
## Local Database Usage {#sqlite-storage}
## [Local Database Usage](#sqlite-storage)
There may be times when having simple key-value storage isn't enough. Sometimes, you need the power and flexibility that a full-scale database provides. That said, not all of the data you need always requires a database to be hosted on the server. This is where having a local SQL database comes into play. React Native has a few different options for utilizing an on-device SQL database, but the most popular is the [`react-native-sqlite-storage`](https://github.com/andpor/react-native-sqlite-storage) package:
@@ -44,23 +44,23 @@ yarn add react-native-sqlite-storage
This package allows you to use full SQL syntax for querying and creating.
## ORM Options {#orms}
## [ORM Options](#orms)
Want the power and utility of a SQL database, but don't want to play with any of the SQL syntax yourself? No problem, there is a myriad of options to build on top of SQLite using React Native. One of my favorites is [TypeORM](http://typeorm.io/). Useful for both TypeScript and vanilla JS usage, it provides a bunch of functionality that maps relatively directly to SQL.
Alternatively, if you're looking for something with more of a framework feel, there's [WatermelonDB](https://github.com/Nozbe/WatermelonDB). WatermelonDB utilizes [RxJS](https://rxjs.dev/) to provide an event-based, fast-as-fusion alternative to more conventional ORMs.
# Remote Database Usage {#serverless}
# [Remote Database Usage](#serverless)
While you're able to utilize [something like Fetch or Axios to make calls to your remote API for data](https://reactnative.dev/docs/network#using-fetch), you might want to utilize a serverless database to provide data to your apps. React Native's got you covered whether you want to use [MongoDB Stitch](https://www.npmjs.com/package/mongodb-stitch-react-native-sdk), [Firebase's Firestore or Realtime Database](https://rnfirebase.io/), or others.
# Synchronized Database Usage {#realm}
# [Synchronized Database Usage](#realm)
While you're more than able to cache database calls manually, sometimes it's convenient to have your data synchronized with your backend. This convenience is one of the selling points of [Realm](https://realm.io/). Realm is an unconventional database in that it's written natively and is not SQL based. You're able to [integrate it with React Native as a local database](https://realm.io/docs/javascript/latest#getting-started) and connect to their [Realm Sync platform](https://docs.realm.io/sync/getting-started-1/getting-a-realm-object-server-instance) to provide simple-to-use synchronization between your database backend and mobile client.
> A note about RealmDB: [MongoDB acquired Realm in 2019](https://techcrunch.com/2019/04/24/mongodb-to-acquire-open-source-mobile-database-realm-startup-that-raised-40m/). While this may seem like an unrelated note to leave here, I mention it because large-scale changes are on the immediate horizon for the platform. MongoDB is open about this. They plan on integrating the platform into a larger-scoped platform _also_ (confusingly) called [Realm](https://www.mongodb.com/realm). I mention this because if you're starting a new project, you may want to be aware of what these changes will have in store for your future. It seems like they have a lot of exciting things coming soon!
# Pros and Cons {#pros-and-cons}
# [Pros and Cons](#pros-and-cons)
| Option | Pros | Cons |
| --------------------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |

View File

@@ -16,7 +16,7 @@ The tool I'm referring to is [the Node Debugger utility](https://nodejs.org/api/
Let's look at how we can do so and how to use the Chrome debugger for such purposes.
# Example Application {#example-code}
# [Example Application](#example-code)
Let's assume we're building an [Express server](https://expressjs.com/) in NodeJS. We want to `GET` an external endpoint and process that data, but we're having issues with the output data. Since it's not clear where the issue resides, we decide to turn to the debugger.
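
As a stand-in for that setup (the names, route, and URL below are hypothetical placeholders rather than the article's actual code), the shape of the app might look like:

```typescript
import express from "express";
import fetch from "node-fetch";

const app = express();

// Hypothetical helper: fetch employee records from an external API and map to ages.
async function getEmployeeAges(): Promise<number[]> {
  const res = await fetch("https://example.com/api/employees");
  const employees = (await res.json()) as Array<{ name: string; age: number }>;
  return employees.map((employee) => employee.age);
}

app.get("/ages", async (_req, res) => {
  res.json(await getEmployeeAges());
});

app.listen(3000);
```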
@@ -66,7 +66,7 @@ Instead of the ages of the employees as we might expect. We'll need to dive deep
> You may have already spotted the error in this small code sample, but I'd still suggest you read on. Having the skillsets to run a debugger can help immeasurably when dealing with large-scale codebases with many moving parts or even when dealing with an unfamiliar or poorly documented API.
# Starting the Debugger {#starting-the-debugger}
# [Starting the Debugger](#starting-the-debugger)
Whereas a typical Express application might have `package.json` file that looks something like this:
@@ -137,7 +137,7 @@ For help, see: https://nodejs.org/en/docs/inspector
At this point, _it will hang and not process the code or run it_. That's okay though, as we'll be running the inspector to get the code to run again in the next step.
# The Debugger {#the-debugger}
# [The Debugger](#the-debugger)
In order to access the debugger, you'll need to open up Chrome and go to the URL `chrome://inspect`. You should see a list of selectable debug devices, including the node instance you just started.
@@ -167,7 +167,7 @@ A race-car needs to drive around the track until the point where the pit-stop is
>
> This way, you should have the margins to add in a breakpoint where you'd like one beforehand.
# Using The Debugging Tools {#using-debug-tools}
# [Using The Debugging Tools](#using-debug-tools)
Once your code runs through a breakpoint, this window should immediately come into focus (even if it's in the background).
@@ -185,7 +185,7 @@ Once you do so, you're in full control of your code and its state. You can:
- _Run arbitrary JavaScript commands_, similar to how a code playground might allow you to:
![A screenshot of indexing the body using "body.slice(0, 100)"](./arbitrary_js.png)
## Running Through Lines {#running-through-lines}
## [Running Through Lines](#running-through-lines)
But that's not all! Once you make changes (or inspect the values), you're also able to control the flow of your application. For example, you may have noticed the following buttons in the debug window:
@@ -221,7 +221,7 @@ We will have to run through the breakpoints we've set by pressing the "play" but
There we go! We're able to get the expected "23"! That said, it was annoying to have to press "play" twice. Maybe there's something else we can do in similar scenarios like this?
## Disabling Breakpoints {#disabling-breakpoints}
## [Disabling Breakpoints](#disabling-breakpoints)
As mentioned previously in an aside, you can disable a breakpoint simply by pressing the created breakpoint once again (pressing the line number will cause the blue arrow to disappear). However, you're also able to temporarily disable all breakpoints if you want to allow code to run normally for a time. To do this, you'll want to look in the same toolbar as the "play" and "skip" buttons. Pressing this button will toggle whether breakpoints are enabled. If breakpoints are disabled, you'll see that the blue color in the arrows next to the line number will become a lighter shade.
@@ -229,7 +229,7 @@ As mentioned previously in an aside, you can disable breakpoints as simply as pr
Whereas code used to pause when reaching breakpoints, it will now ignore your custom set breakpoints and keep running as normal.
## Step Into {#debugger-step-into}
## [Step Into](#debugger-step-into)
In many instances (such as the `map` we use in the following code), you may find yourself wanting to step _into_ a callback function (or an otherwise present function) rather than step over it. For example, [when pressing the "next" button in the previous section](#running-through-lines), it skipped over the `map` instead of running the line in it (line 10). This is because the arrow function that's created and passed to `map` is considered its own level of code. To dive deeper into the layers of code and therefore **into** that line of code, instead of the "next line" button to advance, you'll need to press the "step into" button.
@@ -266,7 +266,7 @@ Once inside the `map` function, there's even a button _to get you outside of tha
>
> You would still be able to "step into" `getEmployeeAges` and, once inside, "step outside" again in the same manner as the `map` function, as shown prior.
# Saving Files {#editing-files-in-chrome}
# [Saving Files](#editing-files-in-chrome)
One more feature I'd like to touch on with the debugger before closing things out is the ability to edit the source files directly from the debugger. This feature can turn the Chrome debugger into a form of lite IDE, which may improve your workflow. So, let's revert our code to [the place it was at before we applied the fix we needed](#example-code) and go from there.
@@ -282,7 +282,7 @@ In order to make your changes persist, you'll need to press `Ctrl + S` or `Comma
Not only does VS Code not recognize your changes, but once you close your debugging window, you won't know what you'd changed in order to get your code to work. While this may help in short debugging sessions, this won't do for a longer session of code changes. To do that, you'll want your changes to save to the local file system.
## Persisting Changes {#chrome-as-ide-persist-changes}
## [Persisting Changes](#chrome-as-ide-persist-changes)
In order to save the changes from inside the Chrome to the file system, you need to permit Chrome access to read and write your files. To do this, you'll want to press the "Add folder to workspace" button off to the left of the code screen.

View File

@@ -19,7 +19,7 @@ The idea of drawing under the navbar intrigued me. After lots of research, I was
Feel free to follow along with the code samples, but if you're looking for the easiest solution, [you might want to read to the bottom to see how to easily integrate it into your app without all of the manual work](#react-native-immersive-bars).
# The Wrong Way {#flag-layout-no-limits}
# [The Wrong Way](#flag-layout-no-limits)
After doing some initial research, I found myself presented with various StackOverflows and official documentation pointing towards a Window flag [`FLAG_LAYOUT_NO_LIMITS`](https://developer.android.com/reference/android/view/WindowManager.LayoutParams#FLAG_LAYOUT_NO_LIMITS) to, quote:
@@ -82,7 +82,7 @@ However, as you can see, it returned a height of `0`, which clearly wasn't the s
After some research, [I found out that the `safe-area-context` package does not work properly when using this flag](https://github.com/th3rdwave/react-native-safe-area-context/issues/8). It doesn't work because [the underlying API that the library uses for Android detection](https://github.com/th3rdwave/react-native-safe-area-context/blob/master/android/src/main/java/com/th3rdwave/safeareacontext/SafeAreaViewManager.java) ([the Insets API](https://medium.com/androiddevelopers/windowinsets-listeners-to-layouts-8f9ccc8fa4d1)) does not support `FLAG_LAYOUT_NO_LIMITS`. This was an automatic no-go for my app: I didn't want the contents of the app to be stuck under the navbar without a way to access it. I had to start over from the drawing board.
# Translucent Bars {#translucentcy}
# [Translucent Bars](#translucentcy)
After even further research, I'd found myself with a potential alternative: Translucent bars! I knew that the ability to draw under navbars was often accompanied by translucent bars in previous versions of Android! If we revert the changes to the `MainActivity.java` file back to how they were initially, and simply update our `styles.xml` file located at:
@@ -120,7 +120,7 @@ Fantastic! It's not only drawing under the navbar, but it's also registering the
Unfortunately for me, there was nothing brought about by this testing.
## Further Tests to no Avail {#fitsSystemWindows}
## [Further Tests to no Avail](#fitsSystemWindows)
Before giving up on the `styles.xml` file, I tried two more flags that I thought might have helped.
@@ -180,7 +180,7 @@ That's done it! Not only is the button being drawn under the navbar fully transp
>
> Then you've forgotten to remove the `fitsSystemWindows` flag that we added to our `styles.xml` file previously. Once that (and the `windowDrawsSystemBarBackgrounds` flag) was removed, it worked for me
# Other API Versions {#api-versions}
# [Other API Versions](#api-versions)
While the code I've mentioned thus far works, it only really works _well_ on Android O (API Level 26) and above. That's only about 60% of Android devices out there! Why does this only work well on Android O? Well, if you have a light background, it only makes sense to have dark buttons in the navigation bar. That functionality has only existed since [Android introduced the `SYSTEM_UI_FLAG_LIGHT_NAVIGATION_BAR` View flag in API 26](https://developer.android.com/reference/android/view/View#SYSTEM_UI_FLAG_LIGHT_NAVIGATION_BAR). To edge-case this, we'll need to add some conditional logic to draw our own dark translucent bar for versions lower than this:
@@ -205,7 +205,7 @@ if (Build.VERSION.SDK_INT <= Build.VERSION_CODES.M) {
When viewing the app on older versions of Android (like M), you'll see the respective bars as a semi-transparent bar:
![The statusbar is transparent while the navbar is translucent](./transparent_m.png)
# The Easy Method {#react-native-immersive-bars}
# [The Easy Method](#react-native-immersive-bars)
Let's not sugar coat it: It's tedious to make changes to native Android code in order to support all of the various API levels there are and the various forms of OEM issues that could arise. Likewise, if your app implements a dark mode, there's now another level of challenge: You have to toggle the light and dark navigation buttons yourself!

View File

@@ -22,7 +22,7 @@ You may notice that our code samples use various libraries from [the Testing Lib
> That said, if you're looking to include Jest and Testing Library in your Angular app,
> but don't know where to start, [we wrote a guide on how to do just that](/posts/writing-better-angular-tests/).
# Don't Include Application Logic in Tests {#dont-include-logic}
# [Don't Include Application Logic in Tests](#dont-include-logic)
I'd like to make a confession: I love metaprogramming. Whether it's typings, complex libraries, or Babel plugins, it's all joyous for me to write.
@@ -82,7 +82,7 @@ When bringing up this point to a coworker, they reminded me of the expression "W
Furthermore, there's another advantage to writing simpler code: error messages. When an error is thrown inside a `for` loop, it's not known what piece of data is not rendering. You only know that _something_ isn't being rendered, but not what data, in particular, is missing. If I dropped the third row in its entirety, the error message from the `for` loop would not indicate which row was throwing the error. However, with the assertions pulled out of the `for` loop, it's immediately clear which row, in particular, is throwing the error.
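To make that concrete, here's a rough sketch of the same idea with and without the loop. The component, queries, and data are hypothetical, and it assumes React Testing Library with Jest, which is not prescribed by the original example:

```jsx
import { render, screen } from "@testing-library/react";
import { PeopleTable } from "./people-table";

const people = [{ name: "Ada" }, { name: "Grace" }, { name: "Margaret" }];

it("renders every person (looped)", () => {
  render(<PeopleTable people={people} />);
  // If one row is missing, the stack trace points at the loop line,
  // and you have to dig to work out which row was the problem.
  for (const person of people) {
    expect(screen.getByText(person.name)).toBeTruthy();
  }
});

it("renders every person (explicit)", () => {
  render(<PeopleTable people={people} />);
  // If one row is missing, the failing line itself tells you which one.
  expect(screen.getByText("Ada")).toBeTruthy();
  expect(screen.getByText("Grace")).toBeTruthy();
  expect(screen.getByText("Margaret")).toBeTruthy();
});
```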
# Hardcode Your Testing Data {#hardcode-data}
# [Hardcode Your Testing Data](#hardcode-data)
While we started our example previously by removing `for` loops, that can be difficult to do without taking this step first. Hard-coding data is one of the most important things you can do to simplify your tests and reduce potential errors in them.
@@ -146,7 +146,7 @@ fs.writeFileSync('mock_data.js', `module.exports = ${rows}`);
You can then run `const mockData = require('./mock_data.js')` inside of your test file. Now you should be able to hardcode your data, knowing exactly what the first, second, and third indices contain.
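A hedged sketch of what that might look like, with made-up file and field names:

```js
// mock_data.js, generated once (for example with the writeFileSync trick
// above), then checked in and treated as plain hardcoded data.
module.exports = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Grace" },
  { id: 3, name: "Margaret" },
];

// people-table.test.js
const mockData = require("./mock_data.js");

it("uses a known value from a known index", () => {
  // We know exactly what index 1 holds, so the assertion is unambiguous.
  expect(mockData[1].name).toBe("Grace");
});
```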
# Keep Tests Focused {#seperate-tests}
# [Keep Tests Focused](#seperate-tests)
While working on tests, it can be easy to group together actions into a single test. For example, let's say we want to test our table component for the following behaviors:
@@ -191,7 +191,7 @@ it('should not render people from page 2 when page 1 is focused', () => {
While this may cause slower tests as a result of duplicating the `render` function's actions, it's worth mentioning that most of these tests should run in milliseconds, so the extra time will have minimal impact on you.
Even further, I would argue that the extended time is a worthwhile trade-off for clearer, more tightly scoped tests. Tests like these are easier to debug and maintain.
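As a rough sketch of what that separation might look like, with a hypothetical paginated table component and made-up names (where "Ada" sits on page 1 of the hardcoded data and "Zahra" on page 2):

```jsx
import { render, screen } from "@testing-library/react";
import { PeopleTable } from "./people-table";
import mockData from "./mock_data";

it("renders people from page 1", () => {
  render(<PeopleTable people={mockData} />);
  expect(screen.getByText("Ada")).toBeTruthy();
});

it("does not render people from page 2 when page 1 is focused", () => {
  // The render call is repeated, but this test now fails for one reason only.
  render(<PeopleTable people={mockData} />);
  expect(screen.queryByText("Zahra")).toBeNull();
});
```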
# Don't Duplicate What You're Testing {#dont-duplicate}
# [Don't Duplicate What You're Testing](#dont-duplicate)
There's another advantage to keeping your tests separated by `it` blocks that I haven't mentioned yet: it frees you to reduce the amount of logic you include in the next test. Let's take the code example from before:
@@ -256,7 +256,7 @@ In this example, I would prefer the second test. It's closer to how I would manu
Ultimately, when writing tests, a good rule to follow is: "They should read like simple instructions that can be run, tested, and understood by a person with no technical knowledge."
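In that spirit, a later test can lean on the earlier ones instead of re-proving them. A hedged sketch, where the "Next" button and the page split are made up for illustration:

```jsx
import { render, screen, fireEvent } from "@testing-library/react";
import { PeopleTable } from "./people-table";
import mockData from "./mock_data";

it("shows page 2 after clicking next", () => {
  render(<PeopleTable people={mockData} />);
  // No need to re-assert everything about page 1 here; the earlier render
  // tests already cover it. Just do what a person would: click and check.
  fireEvent.click(screen.getByText("Next"));
  expect(screen.getByText("Zahra")).toBeTruthy();
});
```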
# Don't Include Network Logic in Your Render Tests {#seperate-network-logic}
# [Don't Include Network Logic in Your Render Tests](#seperate-network-logic)
Let's say in a component we want to include some logic to implement some social features. We'll follow all the best practices and have a wonderful-looking app with GraphQL, using Apollo GraphQL as our integration layer so we don't need to import a bunch of APIs and can hide them behind our server. Now we're writing our tests and we have a _ton_ of mocked network data services and mock providers. Why do we need all of this for our render?
@@ -335,7 +335,7 @@ The tests get drastically simplified and we can write tests with mocks for our s
When using large amounts of network data that you'd like to mock, be sure to [hardcode that data using mock files](#hardcode-data).
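One hedged way to get there, with a made-up component and prop names, is to test a presentational piece with plain props so the render test never touches the network layer:

```jsx
import { render, screen } from "@testing-library/react";
import { FriendCard } from "./friend-card";

it("renders a friend's name from props", () => {
  // No Apollo providers or network mocks needed: the data is just a prop.
  render(<FriendCard friend={{ id: 1, name: "Ada" }} />);
  expect(screen.getByText("Ada")).toBeTruthy();
});
```

The GraphQL query itself can then get its own focused test, rather than being dragged into every render test.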
# Conclusion {#conclusion}
# [Conclusion](#conclusion)
Using these methods, tests can be simplified, often made faster, and typically shortened. While this may sound straightforward on a surface level, writing tests is a skill that's grown like any other. Practice encourages growth, so don't be discouraged if your tests aren't as straightforward as you'd like at first.

View File

@@ -28,7 +28,7 @@ We'll ask and answer the following questions:
> I'm writing this article as a starting point to a developer's journey or even just to learn more about how computers work under-the-hood. I'll make sure to cover as many of the basics as possible before diving into the more complex territory. That said, we all learn in different ways, and I am not a perfect author. If you have questions or find yourself stuck reading through this, drop a comment down below or [join our Discord](https://discord.gg/FMcvc6T) and ask questions there. We have a very friendly and understanding community that would love to explain more in-depth.
# Source Code {#source-code}
# [Source Code](#source-code)
If you've spent any time with developers, you'll likely have heard of the term "source code." Source code simply refers to the text that programmers type to make their programs. Take the following text:
@@ -58,7 +58,7 @@ How does your computer understand what to do when running a programming language
The answer to all of these involves an understanding of how hardware works, and one of the best ways to learn programming is to learn how a computer works in the first place.
# How A Computer Works {#computer-hardware}
# [How A Computer Works](#computer-hardware)
> This section won't be a complete "Computers 101" course. While we _will_ be writing material that dives deeper into these subject matters, this is meant as a short description to supplement explanations later on in the article. If you'd like to see that type of content in the future, be sure to [sign up for our newsletter](https://newsletter.unicorn-utterances.com/)
@@ -72,27 +72,27 @@ Your computer is comprised of many components, but today we'll be focusing on fi
These are used to connect each of these parts together and make up the "brains" of your computer. Whenever you take an action on your computer, these components launch into action to bring you the output you'd expect. Be it auditory, visual, or some other form of output, these components will do the "thinking" required to make it happen.
## Motherboard {#mobo}
## [Motherboard](#mobo)
**A motherboard is the platform in which all other components connect together and communicate through**. There are various integrated components to your motherboard, like storage controllers and chipsets necessary for your computer to work. Fancier motherboards include additional functionality like high-speed connectivity (PCI-E 4.0) and Wi-Fi.
When you turn on your computer, the first thing that will happen is your motherboard doing a "POST": a hardware check to see if everything connected is functioning properly. Then the motherboard will start the boot sequence, which starts with storage.
## Long Term Storage {#hdd}
## [Long Term Storage](#hdd)
**There are two primary types of storage in computers: Solid State Drives (SSD) and Hard Disk Drives (HDD)**. When the boot sequence hits storage, your drive will scan the very first bit of its disk ([also known as the "boot sector"](https://en.wikipedia.org/wiki/Boot_sector)) to find the installed operating system. Once your storage is done finding the relevant files, your computer reads the rest of the information off of the drive to load your system. This includes configuration files that you've updated by setting up your computer (like your username, wallpaper, and more) and the system files set up when you installed your operating system (like Windows). Moreover, this is also where your documents live. If you've written a document in Microsoft Word, downloaded a song from iTunes, or anything in between, it lives on your hard drive.
## Memory {#ram}
## [Memory](#ram)
**While SSDs and HDDs are fantastic for long-term file storage, they're too slow (in terms of reading speeds) to store data needed to run your computer**. This is why we have memory in the form of Registers and Random Access Memory (RAM). **Registers are the closest memory to your processor and are extremely fast, but they are extremely small.** System Memory, or RAM, is outside of the processor but allows us to store entire programs in a responsive manner. Everything from your operating system to your video player utilizes memory to store data while processing. We'll see how the computer utilizes registers and RAM in programs [later in the article](#assembly-code).
**While this information is orders of magnitude faster to access than hard drives, it's volatile.** That means that when you turn off your computer, the data stored in RAM is lost forever. Memory is also much more expensive than storage. This is why we don't store our files in RAM for long-term access.
## GPU {#gpu}
## [GPU](#gpu)
Computers are a marvel, but without some ability to interact with them, their applications are limited. For many, that interaction comes through their computer screens - seeing the results of an action they've taken. **Your computer's "graphics processing unit" (GPU) is the hardware used to calculate the complex maths required to draw things on-screen.** The GPU's complex mathematics prowess can also be utilized for things other than graphics (data analytics, cryptocurrency mining, scientific computation).
## CPU {#cpu}
## [CPU](#cpu)
Your CPU is what does all of the computation needed to perform tasks you do on your computer. **It does the math and logic to figure out what the other components need to be doing, and it coordinates them.** An example of this is telling the GPU what to draw. While your GPU does the calculations for what's to be drawn, the command to do such comes from the CPU. If your interaction requires data to be stored, it's the one that dispatches those actions to your HDD or RAM.
@@ -103,7 +103,7 @@ You can think of these components working together similarly to this:
> For those unaware, the visual cortex is the part of the brain that allows us to perceive and understand the information provided to us by our eyes. Our eyes simply pass the light information gathered to our brains, which makes sense of it all. Likewise, the GPU does the computation but does not display the data it processes; it passes that information to your monitor, which in turn displays the image source to you.
# Assembly: What's that? {#assembly-code}
# [Assembly: What's that?](#assembly-code)
At the start of this article, one of the questions I promised to answer was, "What language does the computer speak natively?". The answer to this question is, as you may have guessed from the section title, assembly.
@@ -185,7 +185,7 @@ addu $1,$2,$1 # Add (+) data from register 1 and 2, store the result back i
> Editor's note: There's a way to add the numbers together without using RAM. We're only doing things this way to demonstrate how you use RAM in assembly. If you can figure out how this is done (hint: move some lines around), leave a comment! 😉
# This (code) Keeps Lifting me Higher {#introducing-c-code}
# [This (code) Keeps Lifting me Higher](#introducing-c-code)
As efficient as assembly code is, you may have noticed that it's not particularly readable. Further, it's impossible to manage a project of any real scale without the abstractions that higher-level languages provide. This is where languages like C or JavaScript come into play.
@@ -216,7 +216,7 @@ int main() {
}
```
## Portability {#compilation}
## [Portability](#compilation)
While the previous example already demonstrates the readability that higher-level languages hold over assembly, when it comes to code complexity, there's no contest: High-level languages make I/O like printing something on-screen readily available.
@@ -284,7 +284,7 @@ Further, some abstractions make higher-level languages easier to build and scale
This is why to run your C code, you need to run the compiler to convert your source code into an executable file to run your program.
## Compiled vs. Runtime {#compiled-vs-runtime}
## [Compiled vs. Runtime](#compiled-vs-runtime)
At [the start of this section](#introducing-c-code), we mentioned that languages like C or JavaScript are higher-level languages than assembly. However, long-time developers will be quick to remind you that these two languages are drastically different. The most significant difference between them is that C is a "compiled" language while JavaScript is a dynamic "runtime" language.
@@ -300,7 +300,7 @@ Simply because a language is compiled at run-time does not mean that there is a
In fact, many J.I.T languages - like Python - contain a way to optimize your code by **running your code through a pre-compiler to generate what's known as "bytecode."** This bytecode is often closer in resemblance to your instruction set, while not going so far as to compile all the way down to assembly. **You can think of this pre-optimization as pre-heating the oven** - you'll be faster to cook your food if much of the prep work is already handled. As such, you still need the runtime to run this optimized code, but because you've done the early optimization, the code will load much faster. In Python, once you [precompile your code](http://effbot.org/zone/python-compile.htm), it gets turned into a `.pyc` file, which is faster to run on first load.
# Introducing the AST {#ast}
# [Introducing the AST](#ast)
While we've talked about compiled languages (A.O.T. and J.I.T. alike), we haven't yet talked about how computers can convert high-level language source code into assembly. How does it know what commands to map to which instructions?
@@ -314,7 +314,7 @@ const magicNumber = 185;
While this code sample is extremely trivial (and doesn't do anything on its own), it contains enough complexity in how the computer understands it to use as an introductory example.
## The Lexer {#lexer}
## [The Lexer](#lexer)
There are (typically) two steps to turning source code into something that the computer can transform into assembly instruction sets.
@@ -344,7 +344,7 @@ Uncaught SyntaxError: Unexpected token '='
Notice how it reports "Unexpected token"? That's because the lexer is converting that symbol into a token before the parser recognizes that it's invalid syntax.
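As a loose sketch (not the exact output of any particular engine's lexer), the tokens for `const magicNumber = 185;` might look something like this:

```js
// A hand-written approximation of lexer output. Real lexers attach more
// metadata to each token, such as its position in the source file.
const tokens = [
  { type: "Keyword", value: "const" },
  { type: "Identifier", value: "magicNumber" },
  { type: "Punctuator", value: "=" },
  { type: "Numeric", value: "185" },
  { type: "Punctuator", value: ";" },
];
```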
## The Parser {#parser}
## [The Parser](#parser)
Now that we've loosely touched on the parser at the end of the last section, let's talk more about it!
@@ -401,7 +401,7 @@ This showcases how much metadata is stored during the parsing process. Everythin
Whether you're using a compiled language or a runtime language, you're using an A.S.T. at some point in using the language.
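For comparison, a heavily trimmed-down, ESTree-flavored sketch of the A.S.T. for that same `const magicNumber = 185;` line might look like this (real parsers store far more metadata, such as source locations):

```js
const ast = {
  type: "VariableDeclaration",
  kind: "const",
  declarations: [
    {
      type: "VariableDeclarator",
      id: { type: "Identifier", name: "magicNumber" },
      init: { type: "Literal", value: 185 },
    },
  ],
};
```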
## Why Not English? {#english-vs-ast}
## [Why Not English?](#english-vs-ast)
An A.S.T. seems like it's a lot of work just to convert code into other code. Wouldn't it make development simpler if we were to cut out the middle-man and build computers that can understand English (or some other human native tongue)?
@@ -425,7 +425,7 @@ I'll make my point by presenting you an extremely confusing grammatically correc
Yes, that's a complete and valid English sentence. Have fun writing a parser for that one.
### The Future {#AI}
### [The Future](#AI)
While writing a parser for the English language is near-impossible to do perfectly, there is some hope for using English in the programming sphere in the future. This hope comes in the form of AI and natural language processing.
@@ -437,7 +437,7 @@ Some folks have even been able to write [React code using GPT-3.0](https://twitt
Only time travelers know precisely how AI will play into using English for our programming in the future, but there are many obstacles to overcome before it becomes the norm.
# Conclusion {#conclusion}
# [Conclusion](#conclusion)
While computers can be incredibly complex, most of their foundation can be understood. We've touched on a lot in this article: a bit about hardware, some language design, even some linguistic parsing! This is both the blessing and the curse when it comes to a field as large as computer science: there are so many avenues to go down. If the path you're looking for is more in-depth explanations of how languages are parsed and understood by the computer, be sure to sign up for our newsletter down below! We want to write an article explaining what "grammars" languages can follow.

View File

@@ -33,7 +33,7 @@ The first thing I do, _before looking at any tech whatsoever is think about it f
These are all of the questions I lay out before even thinking about coding. I start by whiteboarding these things, explaining them to both myself and my partners, and generally doing my due diligence concerning project planning.
# Holistic Vision {#whats-your-vision}
# [Holistic Vision](#whats-your-vision)
My holistic vision would consist of:
@@ -47,7 +47,7 @@ My holistic vision would consist of:
While the first point doesn't inform us of much at this early stage (we'll touch on UI tooling selection later), we can glean from the second point that we'll have to maintain some kind of storage layer. This will be something we'll need to keep in mind as we structure our goals.
# Target Audience {#who-are-you-targetting}
# [Target Audience](#who-are-you-targetting)
In this case, the groups of people I would want to appeal to are:
@@ -57,7 +57,7 @@ In this case, the groups of people I would want to appeal to are:
This potentially broad appeal might be able to drive a lot of business, but without a focused plan and a solid profit model, the project would fall flat.
# Profit Model {#layout-your-profit-model}
# [Profit Model](#layout-your-profit-model)
We'd plan to drive revenue by using the following profit model:
@@ -65,7 +65,7 @@ We'd plan to drive revenue by using the following profit model:
- No students would pay for accounts but might pay for a subscription to course content
- We'd likely take a cut of the subscription or charge for course features in some way
# Budget {#define-your-budget}
# [Budget](#define-your-budget)
Finally, none of this can be done without resources. These resources should be budgeted upfront, so what have we got? We have:
@@ -77,7 +77,7 @@ Our limited budget tells us that we will have to be hyper-focused when it comes
Now that we have a more precise picture of the problem space we're entering, we can more clearly define our goals (next part).
# Goals {#mvp}
# [Goals](#mvp)
Now that we're onto setting goals, I like to start thinking about "What is the bare minimum we need to show this to someone to spark a conversation." _This is often called the "minimum viable product" or "MVP" for short_.
@@ -93,7 +93,7 @@ Looking at what we need to do from the previous section, I can say that we could
While thinking about these features, I want to keep the implementation details to a minimum, just enough to get by with our resources, ignoring the nuances of certain permission features. However, notice how, despite thinking about the features minimally, *I'm also mentally mapping how the data should be structured and thinking about long-term implications* in such a way that we can add them later without refactoring everything. This balance during architecture can be tough to achieve and becomes more and more natural with experience.
# Requirements {#data-requirements}
# [Requirements](#data-requirements)
Finally, I look at the data requirements and features and start thinking about what code requirements I'll run into to implement those data requirements.

View File

@@ -14,11 +14,11 @@ If you're new to web development, it can be difficult to figure out when (and ho
In this article, we'll outline what Node and npm are, how to use both `npm` and `yarn` to install dependencies for your project, and point out some "gotchas" that are good to keep in mind while using them.
# What's Node and `npm`, anyway? {#what-are-they}
# [What's Node and `npm`, anyway?](#what-are-they)
If you're new to web development - well, firstly, welcome! - you may wonder what Node and `npm` are. Great questions!
## Node {#whats-node}
## [Node](#whats-node)
Let's start with Node. Node is a [JavaScript runtime](/posts/how-computers-speak/#compiled-vs-runtime) that allows you to run JavaScript code on your machine without having to run your JavaScript in a browser. This means that you can write JavaScript that interacts with your computer in ways your browser cannot. For example, you can host a REST web server from Node, write files to your hard drive, interact with operating system APIs (like notifications), and more!
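As a tiny illustration (the file name here is made up), a few lines run with Node can write straight to disk, which is the kind of thing browser JavaScript can't do:

```js
// save-note.js, run with `node save-note.js`
const fs = require("fs");

// Write a file directly to the hard drive.
fs.writeFileSync("note.txt", "Hello from Node!");
console.log("Wrote note.txt");
```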
@@ -26,7 +26,7 @@ Let's start with Node. Node is a [JavaScript runtime](/posts/how-computers-speak
Node also comes with an advantage over browsers for running JavaScript: you can interface with lower-level programming languages such as C via [Node's N-API](https://nodejs.org/api/n-api.html#n_api_node_api). This means that libraries you rely on can build on top of this N-API to provide a way to do things like send native desktop notifications, show something particular in your taskbar, or any other action that would require lower-level access to your local machine than JavaScript typically provides.
## `npm` {#whats-npm}
## [`npm`](#whats-npm)
Any sufficiently useful programming language needs an ecosystem to rely on. One of the primary elements for an ecosystem is a collection of libraries that you can use to build out your own libraries and applications.
@@ -41,7 +41,7 @@ When, say, Facebook wants to publish a new version of `react`, someone from the
While the registry is vital to the usage of the CLI utility, most of the time when we say `npm` in this article, we're referring to the CLI tool. We'll make sure to be explicit when talking about the registry itself.
# Setting Up Node {#setup-node}
# [Setting Up Node](#setup-node)
Before we explain how to install Node, let's cover something about the software's release process.
@@ -57,7 +57,7 @@ The "current" release, on the other hand, usually sees new features of JavaScrip
NodeJS switches back and forth between LTS and non-LTS stable releases. For example, Node 12 and 14 were LTS releases, but Node 13 and 15 were not. You can [read more about their release cycle on their website](https://nodejs.org/en/about/releases/)
## Installing Node {#installing-node}
## [Installing Node](#installing-node)
You can find pre-built binaries ready-to-install from [NodeJS' website](https://nodejs.org/en/download/). Simply download the package you want and install it.
@@ -67,7 +67,7 @@ Node installs come pre-packaged with their own version of `npm`, so don't worry
However, the process of upgrading and changing versions of NodeJS can be difficult. This is why I (and many others) recommend using NVM to manage your Node versions.
### NVM {#nvm}
### [NVM](#nvm)
While Node has a fairly stable API (and their LTS releases are often supported for many years at a time), there may be instances where it's beneficial to have the ability to quickly upgrade and change the currently installed Node version.
@@ -85,7 +85,7 @@ Additionally, you can (and, in order to use `nvm`, **must** use `nvm` to do so)
nvm install --lts
```
#### Switching Node Versions {#nvm-switch-node-ver}
#### [Switching Node Versions](#nvm-switch-node-ver)
NVM is a useful tool to switch Node versions, but there is something that should be noted before you do so. When you switch Node versions, it also resets the globally installed packages. This means that if you ran:
@@ -97,7 +97,7 @@ On Node 12, when you switch to Node 14, and attempt to run a `create-react-app`
It's also worth noting that some packages (like `sass`) have native dependencies. This means that they need to run specific commands on install depending on the version of Node you have installed. Because of this, if you switch from Node 12 to Node 14, you may need to re-run `npm i` on your packages before you attempt to re-run your applications.
#### Windows NVM {#windows-nvm}
#### [Windows NVM](#windows-nvm)
It's worth noting that the Windows variant of `nvm` does not support the same commands as the macOS and Linux variants. As such, when you find instructions for `nvm` online, you may have to find the alternative versions of those commands for the Windows version.
@@ -113,7 +113,7 @@ Then, simply declare it as your main version of node:
nvm use 12.16.3
```
### Upgrading NPM {#upgrading-npm}
### [Upgrading NPM](#upgrading-npm)
The version of `npm` that's shipped with Node is typically good enough for 99.99% of use-cases. Like any other software, however, bug fixes and features are added to new versions of `npm`. You can follow [the official `npm` blog](https://blog.npmjs.org/) to read about new features and bug fixes the versions introduce.
@@ -125,7 +125,7 @@ npm i -g npm@latest
> Keep in mind that if you switch Node versions using `nvm`, you will need to re-run this command on every version of installed Node, as switching Node also switches the installed version of `npm`.
## Yarn {#yarn}
## [Yarn](#yarn)
`npm` isn't the only game in town when it comes to installing packages for use in webdev. One of the biggest alternatives to `npm` is the `yarn` package manager.
@@ -149,7 +149,7 @@ However, the ways `npm` and `yarn` install packages on your local machine are di
> Want to learn the differences between `npm` and `yarn` yourself? We're working on an article that covers that exact topic in-depth, both for newcomers and experienced devs alike. Be sure to subscribe to our update emails (at the bottom of the page right above the comments) to catch when that article lands!
## Installing Yarn {#install-yarn}
## [Installing Yarn](#install-yarn)
Once you have Node and `npm` installed, installing `yarn` is as simple as:
@@ -159,7 +159,7 @@ npm i -g yarn
It's worth noting that, just like `npm` and any other globally installed packages, [when you change Node version using `nvm`, you'll need to re-run this command](#nvm-switch-node-ver). However, if you're able to natively install `yarn`, you can sidestep this issue and have `yarn` persist through `nvm` version changes.
### macOS {#yarn-mac}
### [macOS](#yarn-mac)
If you're using macOS and want to utilize `nvm`, you can also use Homebrew (a third party package manager for Macs) to install `yarn` natively:
@@ -169,7 +169,7 @@ brew install yarn
> There are other methods to install Yarn on macOS if you'd rather. [Look through `yarn`'s official docs for more](https://classic.yarnpkg.com/en/docs/install/#mac-stable)
### Windows {#yarn-windows}
### [Windows](#yarn-windows)
Just as there's a method for installing `yarn` natively on macOS, you can do the same on Windows using [the same third-party package manager we suggest using for installing and maintaining Windows programs on your machine, Chocolatey](https://unicorn-utterances.com/posts/ultimate-windows-development-environment-guide/#package-management):
@@ -181,7 +181,7 @@ choco install yarn
> There are other methods to install Yarn on Windows if you'd rather. [Look through `yarn`'s official docs for more](https://classic.yarnpkg.com/en/docs/install/#windows-stable)
# Using Node {#using-node}
# [Using Node](#using-node)
Now that you have it set up, let's walk through how to use Node. First, start by opening your terminal.
@@ -211,7 +211,7 @@ From here, you can type in JavaScript code, and hit "enter" to execute:
This view of Node - where you have an interactive terminal you can type code into - is known as the REPL.
## Executing JS Files {#node-run-file}
## [Executing JS Files](#node-run-file)
While Node's REPL is super useful for application prototyping, the primary usage of Node comes into effect when running JavaScript files.
@@ -278,7 +278,7 @@ You'll need to re-start Node to catch that update.
The way you restart a Node process is the same on Windows as it is on macOS - it's the same way you stop the process: simply type Ctrl+C in your terminal to stop the running process. Then, re-run your Node command.
### Hot Reload on File Edit {#nodemon}
### [Hot Reload on File Edit](#nodemon)
Node being able to run JavaScript files is useful once you have a finished product ready-to-run. However, while you're actively developing a file, it can be frustrating to manually stop and restart Node every time you make a change. I've had so many instances where I've Googled "NodeJS not updating JavaScript file" at some point in my debugging, only to realize that I'd forgotten to restart the process.
@@ -292,7 +292,7 @@ npm i -g nodemon
Then, simply replace your `node index.js` command with `nodemon index.js`.
# Using NPM/Yarn {#using-pkg-manager}
# [Using NPM/Yarn](#using-pkg-manager)
With basic Node usage established, we can expand our abilities by learning how to use `npm`/`yarn` efficiently.
@@ -334,7 +334,7 @@ Or:
yarn init
```
## Dependencies {#deps}
## [Dependencies](#deps)
Most projects you'll run into will have at least one dependency. A dependency is a library that your project depends on for its functionality. For example, if I use the [`classnames` library](https://www.npmjs.com/package/classnames) to generate CSS-friendly class names from a JavaScript object:
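A minimal sketch of that usage might look like this (the class names and flags below are made up for illustration):

```js
const classNames = require("classnames");

const isActive = true;
const isDisabled = false;

// Keys are class names; truthy values include them, falsy values drop them.
const buttonClass = classNames("btn", {
  "btn--active": isActive,
  "btn--disabled": isDisabled,
});

console.log(buttonClass); // "btn btn--active"
```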
@@ -380,7 +380,7 @@ yarn add classnames
> Just because we're using `classnames` as an example here doesn't mean you have to. You can use the name of whatever dependency you want to add.
### Semantic Versioning {#semver}
### [Semantic Versioning](#semver)
For each dependency listed, there is an associated version made up of three dot-separated numbers. These numbers represent the version of the library to install when running commands like `npm i`.
@@ -408,7 +408,7 @@ Because minor and patch releases do not contain breaking changes (when following
Again, this isn't the _only_ way to version a library, but it is an increasingly common method for making sure that new versions won't break your project's functionality.
#### SemVer Setting {#package-json-semver}
#### [SemVer Setting](#package-json-semver)
How can we leverage SemVer in our `package.json`? If you looked at the `dependencies` object in our example previously, you may have noticed an odd character that's not a number: `^`.
@@ -462,7 +462,7 @@ This can be useful when a package isn't following SemVer and instead includes br
There are other modifiers you can use, such as version ranges that cross over major releases, pre-release versions, and more. To learn more about these additional modifiers and to experiment with the tilde and caret modifiers, [NPM has set up a website that teaches you and lets you visually experiment with the modifiers](https://semver.npmjs.com/).
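As a rough illustration (this uses the standalone `semver` package from the registry, not `npm` itself), the caret and tilde ranges resolve like this:

```js
const semver = require("semver");

// "^2.3.1" allows newer minor and patch versions, but not a new major version.
console.log(semver.satisfies("2.4.0", "^2.3.1")); // true
console.log(semver.satisfies("3.0.0", "^2.3.1")); // false

// "~2.3.1" only allows newer patch versions.
console.log(semver.satisfies("2.3.9", "~2.3.1")); // true
console.log(semver.satisfies("2.4.0", "~2.3.1")); // false
```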
### Dev Dependencies {#dev-deps}
### [Dev Dependencies](#dev-deps)
Let's take a closer look at the `package.json` we were using as an example.
@@ -487,7 +487,7 @@ If you include `prettier` and other tools you use to develop the library, it blo
**`devDependency` allows you to keep a list of tools you'll utilize when developing, but which your code itself does not rely on to run.**
### Peer Dependencies {#peer-deps}
### [Peer Dependencies](#peer-deps)
While dependencies are incredibly useful, if you're using a framework like React, having every dependency in your project install a separate version of React would potentially cause issues. Each dep would have a different version, which may act differently, and your `node_modules` would be bloated.
@@ -508,7 +508,7 @@ This would allow you to have `react` installed on your project and able to share
It's worth noting that in `npm 6`, you used to have to install these yourself. However, `npm 7` made the change such that peer deps are installed automatically. If you see an error from a package saying that your peer dep doesn't match, find the project and make a pull request to add the correct versions of the peer deps. These warnings were not significant with `npm 6`, but with `npm 7`, they matter substantially more.
## Ignoring `node_modules` {#gitignore}
## [Ignoring `node_modules`](#gitignore)
Once you have your packages installed (either by using `yarn` or `npm`), **it's important that you _do not commit_ your `node_modules` folder** to your code hosting. By committing `node_modules`, you:
@@ -526,7 +526,7 @@ node_modules/
Worried that your dependencies might not resolve to the same versions on systems like CI, where having replicable, stable dependency installs matters a lot? That's where lock files come into play.
## Lock Files {#package-lock}
## [Lock Files](#package-lock)
Once you run `npm i` on a project with dependencies, you'll notice a new file in your root folder: `package-lock.json`. This file is called your **"lockfile"**. **This file is auto-generated by `npm` and should not be manually modified.**
@@ -540,7 +540,7 @@ While it's imperative not to track your `node_modules` folder, you **want to com
> Something to keep in mind is that different major versions of `npm` use slightly differently formatted lock files. If part of your team is using `npm 6` and the other part uses `npm 7`, you'll find that each team replaces the lockfile every single time `npm i` is run. To avoid this, make sure your team is using the same major version of `npm`.
## Scripts {#npm-scripts}
## [Scripts](#npm-scripts)
You'll notice that the above `package.json` has a `start` script. When `npm run start` or `yarn start` is run, it will execute `node index.js` to run the file with Node. While `node` usage is common, you're also able to leverage any command that's valid on your machine. You could have:

View File

@@ -16,7 +16,7 @@ Luckily for us, Unity has a system of "plugins" that allow us to do just that. U
> ⚠️ Be aware that this information is based on Unity 2018 versions. While this might be relevant for older versions of Unity, I have not tested much of this methodology of integration with older versions.
# Setting up Development Environment {#set-up-a-development-environment}
# [Setting up Development Environment](#set-up-a-development-environment)
[Unity supports using either Java files or Kotlin source files as plugins](https://docs.unity3d.com/Manual/AndroidJavaSourcePlugins.html). This means that you're able to take Android source files (regardless of whether they're written in Java or Kotlin) and treat them as callable compiled library code. Unity will then take these files and include them into its own Gradle build process, allowing you — the developer — to focus on development rather than the build process.
@@ -42,11 +42,11 @@ This will naturally incur a question for developers who have tried to maintain a
**How do you manage dependencies between these two folders?**
## Managing Android Dependencies {#android-dependencies}
## [Managing Android Dependencies](#android-dependencies)
Luckily for us, managing Android code dependencies in Unity has a thought-out solution from a large company: Google. [Because Google writes a Firebase SDK for Unity](https://firebase.google.com/docs/unity/setup), they needed a solid way to manage native dependencies within Unity.
### Installing the Unity Jar Resolver {#installing-jar-resolver}
### [Installing the Unity Jar Resolver](#installing-jar-resolver)
> If you've installed the Unity Firebase SDK already, you may skip this installation step.
@@ -64,7 +64,7 @@ Then, you'll see a dialog screen that'll ask what files you want to import with
> Your screen may look slightly different from the one above. That's okay — so long as all of the files are selected, pressing "Import" is perfectly fine.
### Using the Jar Resolver {#using-jar-resolver}
### [Using the Jar Resolver](#using-jar-resolver)
Using the Jar Resolver is fairly straightforward. Whenever you want to use a dependency in your Android code, you can add it to a file within [the `Assets/AndroidCode` folder](#set-up-a-development-environment) that declares dependencies with the same keys as the ones typically found in a `build.gradle` file.
@@ -97,7 +97,7 @@ After creating the files, in the menubar, go to `Assets > Play Services Resolver
So long as your file ends with `Dependencies.xml`, it should be picked up by the plugin to resolve the AAR files.
#### Adding Support into Android Studio Environment {#add-android-studio-support}
#### [Adding Support into Android Studio Environment](#add-android-studio-support)
But that's only half of the equation. When editing code in Android Studio, you won't be able to use the libraries you've downloaded in Unity. This means that you're stuck manually editing both of the locations for dependencies. This is where a simple trick with build files comes into play.
@@ -117,7 +117,7 @@ For more information on how to manage your app's dependencies from within Unity,
# Call Android code from C# {#call-android-from-c-sharp}
# [Call Android code from C#](#call-android-from-c-sharp)
It's great that we're able to manage those dependencies, but they don't mean much if you're not able to utilize the code from them!
@@ -125,7 +125,7 @@ For example, take the following library: https://github.com/jaredrummler/Android
That library allows you to grab metadata about a user's device. This might be useful for analytics or bug reporters you may be developing yourself. Let's see how we're able to integrate this Java library in our C# code when building for the Android platform.
## Introduction {#intro-call-android-from-c-sharp}
## [Introduction](#intro-call-android-from-c-sharp)
You must make your callback extend the type of callback that is used in the library. For example, take the following code sample from the README of the library mentioned above:
@@ -170,7 +170,7 @@ You can see that we have a few steps here:
For each of these steps, we need to have a mapping from the Java code to C# code. Let's walk through these steps one by one.
## Create `Callback` Instance {#android-c-sharp-callback}
## [Create `Callback` Instance](#android-c-sharp-callback)
In order to create an instance of a `Callback` in C# code, we first need a C# class that maps to the `Java` interface. To do so, let's start by extending the Android library interface. We can do this by using the `base` constructor of `AndroidJavaProxy` and the name of the Java package path. You're able to use `$` to refer to the interface name from within the Java package.
@@ -208,7 +208,7 @@ private class DeviceCallback : AndroidJavaProxy
}
```
## Get Current Context {#get-unity-context}
## [Get Current Context](#get-unity-context)
Just as all Android applications have some context to their running code, so too does the compiled Unity APK. When compiling down to Android, Unity includes a package called the "UnityPlayer" to run the compiled Unity code. The package path for the player in question is `com.unity3d.player.UnityPlayer`.
@@ -239,7 +239,7 @@ var deviceCallback = new DeviceCallback();
withCallback.Call("request", deviceCallback);
```
## Complete Code Example {#android-c-sharp-code-sample}
## [Complete Code Example](#android-c-sharp-code-sample)
Line-by-line explanations are great, but often miss the holistic picture of what we're trying to achieve. The following is a more complete code sample that can be used to get device information from an Android device from Unity.
@@ -278,7 +278,7 @@ class DeviceName : MonoBehaviour {
}
```
# Calling Source Code from Unity {#call-source-from-unity}
# [Calling Source Code from Unity](#call-source-from-unity)
Calling native Android code can be cool, but what if you have existing Android code you want to call from Unity? Well, that's supported as well. Let's take the following Kotlin file:
@@ -303,7 +303,7 @@ var testAndroidObj = new AndroidJavaObject("com.company.example.Test");
testAndroidObj.Call("runDebugLog");
```
# AndroidManifest.XML Overwriting {#manifest-file}
# [AndroidManifest.XML Overwriting](#manifest-file)
Many Android app developers know how important it can be to have the ability to customize their manifest file. By doing so, you're able to assign various metadata to your application that you otherwise would be unable to. Luckily for us, Unity provides the ability to overwrite the default XML file.
@@ -313,13 +313,13 @@ If you want to find what the default manifest file looks like, you'll want to lo
> It's worth mentioning that if you use the Firebase Unity SDK and wish to provide your own manifest file, you'll need to [customize the default manifest file to support Firebase operations](https://firebase.google.com/docs/cloud-messaging/unity/client#configuring_an_android_entry_point_activity).
# Firebase Support {#firebase}
# [Firebase Support](#firebase)
Let's say you're one of the users who utilizes the Firebase SDK for Unity. What happens if you want to send data from Android native code or even use background notification listeners in your mobile app?
You're in luck! Thanks to the Unity Firebase plugin using native code in the background, you're able to share your configuration of Firebase between your native and Unity code. So long as you've [configured Firebase for Unity properly](https://firebase.google.com/docs/cloud-messaging/unity/client#add-config-file) and [added the config change to Android Studio](#add-android-studio-support), you should be able to simply call Firebase code from within your source files and have the project configs carry over. This means that you don't have to go through the tedium of setting up and synchronizing the Unity and Android config files to set up Firebase: simply call Firebase code from your source files, and you should be good to go! No dependency fiddling required!
# Conclusion {#conclusion}
# [Conclusion](#conclusion)
I hope this article has been helpful to anyone hoping to use Android code in their Unity mobile game; I know how frustrating it can be sometimes to get multiple moving parts to mesh together to work. Rest assured, once it does, it's a satisfying result knowing that you're utilizing the tools that Unity and the Firebase team have so graciously provided to game developers.

View File

@@ -66,7 +66,7 @@ By making your services accessible to more people, you are most importantly maki
Accessibility isn't a pure science, however. If you aren't a user of assistive technology, this may be an abstract idea at first. Think of it like this: the colors an app uses or a button's visual placement may convey different messages and meanings depending on their context. This same problem applies to users of screen-readers and other accessible tech as well, just with different constraints. If the screen is visually cluttered, the content may be more difficult to read. Likewise, different accessibility methods will lead to different experiences for users of assistive technology. In both of these scenarios, there may not be objectively correct answers - some may prefer a button placed visually to the left, while others might advocate for it on the right. Similarly, how something is read using a screen reader may make sense to some, but might be confusingly expressed to others.
# Sensible Standards {#wcag}
# [Sensible Standards](#wcag)
While accessibility has some levels of subjectivity, it's important to note that there _are_ standards surrounding a web application's accessibility support. ["Web Content Accessibility Guidelines"](https://www.w3.org/WAI/) (shortened to "WCAG") are guidelines to follow when considering your app's accessibility. These guidelines are published by a subgroup of the [World Wide Web Consortium](https://www.w3.org/) (shortened to "W3C"), the main international standards organization for the Internet. WCAG acts as the de-facto standard for accessibility guidelines.
@@ -100,7 +100,7 @@ Finally, AAA includes support for:
Interested in reading the full list? [Read the quick reference to WCAG 2.1](https://www.w3.org/WAI/WCAG21/quickref/).
# Smartly using Semantic HTML Tags {#html-semantic-tags}
# [Smartly using Semantic HTML Tags](#html-semantic-tags)
One of the easiest things you can do for your application's accessibility is to use semantic HTML tags.
@@ -138,7 +138,7 @@ As you may be able to hear, this screen reader is now able to read out that it's
Not only does this enhance the experience of assistive technology users browsing your list, but because search engine crawlers rely on HTML tags to inform what's what, your site may rank better in search engine queries as well! This is a massive boon to your site's SEO score.
# Understand `aria-` properties {#aria}
# [Understand `aria-` properties](#aria)
In our previous example, we used an HTML attribute [`aria-label`](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/ARIA_Techniques/Using_the_aria-label_attribute) on our `ul`. [ARIA is a collection of HTML attributes that allow you to enhance the accessibility in applications](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA). That said, _**it is highly encouraged to use the suggested HTML tags instead of `aria` attributes whenever possible**_. Think of `aria` as a complex low-level API that can enhance your experience when done properly, but drastically harm user experience when improperly utilized.
@@ -154,7 +154,7 @@ A super small small subsection of `aria-` attributes includes:
Additional to `aria` props, [the `role` property](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/ARIA_Techniques#roles) tells the browser what an element's intended purpose is, thus changing its behavior with accessible tech. Again, this is a highly advanced (and often incorrectly deployed) API for complex apps. To learn more, [read through Mozilla's ARIA basics article.](https://developer.mozilla.org/en-US/docs/Learn/Accessibility/WAI-ARIA_basics)
# Classy CSS {#css}
# [Classy CSS](#css)
While HTML relays a significant amount of information to assistive technologies like screen readers, it's not the only thing used to inform those tools. Certain CSS rules can change the functionality as well. After all, screen readers (and other tools) don't look through the source code of a website. Instead, they're looking at [the accessibility tree](https://developers.google.com/web/fundamentals/accessibility/semantics-builtin/the-accessibility-tree): a modified version of the DOM. The accessibility tree and the DOM are both constructed by the browser from the website's source code.
@@ -180,7 +180,7 @@ For this reason, there's a frequently used CSS class used to hide elements visua
There are many ways which CSS can influence assistive technologies. [Ben Myers covers this more in his blog post](https://benmyers.dev/blog/css-can-influence-screenreaders/).
# Contrast is Cool {#contrast}
# [Contrast is Cool](#contrast)
While screen readers are imperative to frontend accessibility testing, a site's visuals can help provide a good experience for many users. While a certain color palette may be aesthetically pleasing, it may be difficult to read for a colorblind user. Colorblind users aren't the only ones impacted, however.
@@ -200,7 +200,7 @@ This said, not all contrasts are the same. Per [WCAG guidelines](#wcag), you may
In this example you can see that the text passes the WCAG AA requirements for large text, but fails the same requirements for small text.
# Fantastic Fonts {#font-resize}
# [Fantastic Fonts](#font-resize)
One of the most widely used accessibility features is font scaling. While many browsers default to a font size of `16px`, the user is actually able to change settings on their device to configure websites to use a larger font size.
@@ -232,7 +232,7 @@ You can do the same in Firefox in [your preferences](about:preferences#general).
![Font settings in Firefox](./firefox_font_size.png)
## Implementation {#font-rem}
## [Implementation](#font-rem)
While browsers have the ability to set the font size, if you're using `px`, `vw`, `vh`, or other unit values for your fonts, the browser will not update these font sizes for you. In order to have your application rescale the font size to match the browser settings, you'll need to use the `rem` unit.
@@ -256,7 +256,7 @@ Say site "A" sets their font size to `1rem`, and site "B" sets their font size t
Want to learn more about `rem` and font sizing? [Take a look at this in-depth blog post that covers even more](https://www.24a11y.com/2019/pixels-vs-relative-units-in-css-why-its-still-a-big-deal/).
# Keyboard is King {#keyboard}
# [Keyboard is King](#keyboard)
Just as developers have preferences with keyboard or mouse, so too do your end-users. Some people may only be able to utilize the keyboard to navigate the digital world. Not only is keyboard navigation critical for accessibility, but it enables power users of your application to be more efficient as well.
@@ -276,7 +276,7 @@ As such, many sites (including New York Times) include a "Skip to Content" butto
This is far from the only considerations that should be made when considering a site's keyboard navigability, but is a prime example of a solution to a problem that might not be immediately obvious to users that primarily use the mouse.
## Focus Indicators {#focus-indicator}
## [Focus Indicators](#focus-indicator)
Something to keep in mind is that not all keyboard users use screen readers. Because of this, it's important to have an outline around the element you're currently focused on. Without this outline, how would a sighted person know where they are on the page?
@@ -292,7 +292,7 @@ Instead, it's suggested to either:
To learn more about the focus indicator and how to work alongside it, [check out this blog post from The A11Y Project](https://www.a11yproject.com/posts/2013-01-25-never-remove-css-outlines/).
# Humans Can't Be Automated {#no-automation}
# [Humans Can't Be Automated](#no-automation)
The perception for some is that accessibility is something that can be 1:1 adapted from an existing design. This is often untrue. You may want to add a "Skip to contents" button that only shows up with tabbing for some sites, while the visual order and tab order might need to be flipped for a better experience for screen-reader users. Remember, accessibility is a form of user experience that has to be crafted by hand. Each decision has nuance to it, and there are rarely objective answers as to which experience is better than another. Because of this, many companies will have dedicated accessibility specialists alongside their design and engineering teams.
@@ -300,7 +300,7 @@ You also need to make sure to [test your application](#testing) as you would any
If anyone is ever advertising to you that your inaccessible project can be made accessible (or lawsuit-proof) without any changes to your codebase, they're either lying to you or don't understand accessibility.
## Assistance is Amicable {#eslint}
## [Assistance is Amicable](#eslint)
While full automation will never be possible for improving a project's accessibility, not everyone proposing assistance in the process is trying to sell snake oil.
@@ -308,7 +308,7 @@ For example, [Deque's open-source Axe project](https://github.com/dequelabs/axe-
However, keep in mind that these tools are not infallible and are meant to supplement accessibility experts working with your engineering team, not replace them.
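As a small sketch of what that automated supplement can look like in a test suite (this assumes a React component rendered with React Testing Library and the `jest-axe` wrapper around the Axe engine, none of which is prescribed here):

```jsx
import { render } from "@testing-library/react";
import { axe, toHaveNoViolations } from "jest-axe";
import { SignupForm } from "./signup-form";

expect.extend(toHaveNoViolations);

it("has no detectable accessibility violations", async () => {
  const { container } = render(<SignupForm />);
  // Axe scans the rendered markup for rule violations it can detect;
  // it won't catch issues that need human judgment.
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
```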
# Test, Test, Test Again {#testing}
# [Test, Test, Test Again](#testing)
Testing is a critical component of any application release. Whether using automated testing solutions or QA teams, they help ensure that your users are getting the best experience possible without regressions in an application's behavior.
@@ -318,7 +318,7 @@ You're also able to include automated tests that will help with accessibility re
As mentioned in [a previous section](#no-automation), the process to make your app accessible cannot be fully automated. This extends to testing as well. While real-world automated tests are fine and well, you need someone to experience the application on a broader scale to make sure the experience is as fluid as it can be. While a specific component might be accessible by default, perhaps in specific usages, it falls flat. [Displaying an accessibility statement](https://www.w3.org/WAI/planning/statements/) while transforming your users' reported problems into bug tickets and performing user testing with disabled users are great ways to close the loop with the real people affected.
# Fantastic Features {#features}
# [Fantastic Features](#features)
While there is plenty you can do to make existing functionality accessibility-friendly, it's often forgotten that a strongly accessible app may opt to add specific functionality for its users with disabilities.
@@ -326,7 +326,7 @@ Some great examples of things like this are sites with lots of user-generated co
Oftentimes, you'll find that these features benefit everyone, not just assistive technology users. You may want to watch a video in a crowded area; with closed captions, that's a much easier sell than trying to hear over others and interrupting everyone around you.
# Radical Research {#further-reading}
# [Radical Research](#further-reading)
While we've done our best to have this article act as a starting point for accessibility, there's always more to cover. Let's talk about some of the ways you can continue learning more.
@@ -376,7 +376,7 @@ Additionally, there are a few sites that contain extensive lists of additional r
- [A11Y project's list of external resources](https://www.a11yproject.com/resources/)
- [A11Y & Me resource list](https://a11y.me/)
# Conclusion {#conclusion}
# [Conclusion](#conclusion)
We hope you've enjoyed learning from our accolade-worthy alliterative headlines.

View File

@@ -16,13 +16,13 @@ TypeScript's popularity cannot be understated. Either you likely know someone wh
>
> This podcast episode [can be found on one of our sponsor's pages](https://www.thepolyglotdeveloper.com/2019/10/tpdp-e32-getting-familiar-typescript-development/)
# What is TypeScript? {#what}
# [What is TypeScript?](#what)
**TypeScript is a superset of JavaScript**, meaning that _all valid JavaScript is valid TypeScript, but not all TypeScript is valid JavaScript_. Think of it as JavaScript plus some goodies. These goodies _allow developers to add type information to their code that is enforced during a TypeScript to JavaScript compilation step_.
These goodies are enabled by the TypeScript compiler, which takes your TypeScript source code and outputs JavaScript source code, capable of running in any JavaScript environment.
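A minimal sketch of that round trip (file name and function made up for illustration):

```ts
// greet.ts: the `: string` annotations are TypeScript-only "goodies".
function greet(name: string): string {
  return "Hello, " + name + "!";
}

// After running the compiler (`tsc greet.ts`), the emitted greet.js is
// plain JavaScript with the type annotations stripped:
//
// function greet(name) {
//     return "Hello, " + name + "!";
// }
```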
## Doesn't JavaScript Have Types Already? {#javascript-types}
## [Doesn't JavaScript Have Types Already?](#javascript-types)
While JavaScript _does_ have a loose understanding of types, they're not strictly enforced.
@@ -55,11 +55,11 @@ This point is made even more complex when dealing with how both values are handl
>
> We're going to try to go at a high-level, and while we may hint at deeper concepts or knowledge, know that we all learn at our own pace. It's more than okay to take your time feeling comfortable before diving into those topics.
# Why TypeScript? {#why}
# [Why TypeScript?](#why)
You may be asking yourself: "Why use TypeScript if it doesn't change the runtime behavior of your code?" There can be a few reasons you may want to integrate TypeScript in your project, such as the following.
## Type Safety {#type-safety}
## [Type Safety](#type-safety)
As mentioned before, [JavaScript may have rudimentary types, but TypeScript's are far more robust](#javascript-types). This robustness lets developers force their code's inputs and outputs to strictly adhere to the limitations they place on them. Let's take the following code as an example:
@@ -90,17 +90,17 @@ test.ts:5:9 - error TS2345: Argument of type '"5"' is not assignable to paramete
In a smaller codebase, such as the given example, it can be easy to miss how important type checking is. Detecting the error in this small length of code is often trivial; however, it can be much more difficult to do so in larger, more complex codebases or when utilizing code that might not be from your project, such as a library or framework. In these use cases, it can be much easier to identify an edge case where changing a function's parameters would break another part of the codebase. Likewise, being able to quickly identify implementation errors when using a library is a significant factor in identifying problems effectively.
## Developer Quality of Life {#quality-of-life}
## [Developer Quality of Life](#quality-of-life)
While it can be easy to forget in the abstract world of development, developers make the code that we interact with on a daily basis. These developers (yourself included) tend to enjoy certain experiences while working on their code. TypeScript provides a myriad of such quality-of-life improvements.
Let's go over some of the arguments in favor of TypeScript's developer quality of life improvements.
### Improved Tooling Support {#tooling}
### [Improved Tooling Support](#tooling)
Historically, having the ability to make assumptions about code in order to provide developer niceties (such as autocomplete code suggestions) in loosely typed languages such as JavaScript has been incredibly hard to do. As time has gone on, support for these types of actions has gotten better; but due to the nature of JavaScript's type system, there will likely always be limitations on how effectively this can be done. TypeScript's syntax, however, _can provide much of the type data about your source code needed for tools to be able to provide those niceties_ that are otherwise tricky for these tools to build. _The TypeScript team even provides a tool to communicate directly with these IDEs_ so that the work of implementing this syntax data consumption is much more trivial than it otherwise would be. _This is why [many changelogs for TypeScript releases](https://www.typescriptlang.org/docs/handbook/release-notes/overview.html) mention changes to editors such as [Visual Studio Code](https://code.visualstudio.com)_.
#### 3rd Party Library Support {#typing-files}
#### [3rd Party Library Support](#typing-files)
Because of JavaScript's awesome engineering diversity, many widely used projects do not use TypeScript. However, _there are ways we can still utilize TypeScript's tooling capabilities without porting the code_. If you have a good understanding of the given project's codebase and TypeScript, _you can write a definition file that sits separated from the rest of the codebase_. These definition files allow you many of the same tooling abilities native TypeScript source code allows.
@@ -119,17 +119,17 @@ function aNumberToAString(numProp) {
declare function aNumberToAString(numProp: number): string; // Accept a number arg, return a string
```
##### Community Hosting {#definitely-typed}
##### [Community Hosting](#definitely-typed)
Additionally, because TypeScript has a well-established and widely used install base, **there are already many different definition files in the wild for projects that don't ship TypeScript support of their own**. One of the more extensive collections of these typings lives at the [DefinitelyTyped repository](https://github.com/DefinitelyTyped/DefinitelyTyped), which publishes community typings under the package name `@types/your-package-name` (where `your-package-name` is the name of the project you're looking for typings of) that you can look for on your package manager.
### Documented Types {#typing-doc-references}
### [Documented Types](#typing-doc-references)
Another way TypeScript can help with the workflow while coding is in regard to gaining references to APIs and code.
When working on projects with objects that contain many properties that are used variously across files and functions, it can be difficult to track down what properties and methods are available to you without having to refer to the documentation of that scope in your application. With types present in your code, you're often able to reference that type (_often with a "jump to declaration" shortcut feature that is present in many IDEs_) to quickly refer to the properties and methods present on a given value or class.
## Type Information {#reflect-metadata}
## [Type Information](#reflect-metadata)
However, developer quality of life changes and type safety aren't the only positive for utilizing TypeScript in your projects!
@@ -173,11 +173,11 @@ export class User {
And this feature doesn't have an API dissimilar to standards-based APIs; [it's being built with and on top of features proposed for a future version of JavaScript (commonly referred to as ESNext).](https://www.typescriptlang.org/docs/handbook/decorators.html#metadata)
# What isn't TypeScript {#misconceptions}
# [What isn't TypeScript](#misconceptions)
Now that we've covered a bit of what TypeScript _is_, it might be a good idea to quickly summarize what it _isn't_. After all, knowing what something is not is oftentimes just as powerful as knowing what something _is_.
## It's Not the Tower You Think It Is {#typescript-is-not-babel}
## [It's Not the Tower You Think It Is](#typescript-is-not-babel)
One of the things TypeScript is not is a transpiler. What this means is that TypeScript (alone) _will not take TypeScript source code that contains syntax from newer JavaScript versions (ES6+) and output older versions of JavaScript (ES5) in order to improve browser compatibility (IE11)_.
@@ -185,7 +185,7 @@ For anyone who's used TypeScript, this may confuse you, as there are various fla
However, this does mean that you can utilize the entire arsenal of Babel tooling at your disposal, [such as Babel plugins](https://babeljs.io/docs/en/plugins/).
## Logic != Typings {#typings-are-not-logic}
## [Logic != Typings](#typings-are-not-logic)
_TypeScript will not find all your typing errors on its own_. This is because TypeScript is only as useful as your typings are. [Let's look back at an earlier example](#type-safety):
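As a rough sketch (assuming `noImplicitAny` is disabled, so the untyped parameter silently becomes `any`):

```typescript
// With no explicit type on `input`, the compiler treats it as `any`
function addFive(input) {
  return input + 5;
}

addFive("5"); // No compiler error, but at runtime this returns the string "55"
```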
@@ -200,11 +200,11 @@ If you keep `addFive`'s `input` parameter without an explicit type, it will try
Although examples like this are simple, strict typings can also become fairly complex to maintain maximum type strictness. See the [Advanced section of the official handbook](https://www.typescriptlang.org/docs/handbook/advanced-types.html) for examples that illustrate this.
**Also, because typings do nothing to test against the logic of your program, they should not be seen as a replacement for testing, but rather a companion to them.** _With strict typings and proper testing, regressions can be severely limited and improve code quality of life._
##### Typing Mishaps Happen {#typings-can-be-wrong-too}
##### [Typing Mishaps Happen](#typings-can-be-wrong-too)
Remember, because typings are kept separately from the project's logic code, typings can be misleading, incomplete, or otherwise incorrect. While this can also happen with TypeScript logic code, it tends to be more actively mitigated as a project's ability to compile (and therefore distribute) relies on that typing information. This isn't to say that you should immediately mistrust typings, but this is simply a reminder that they too may have their flaws — just as any other part of a codebase.
## Don't Forget To Document {#typescript-is-not-documentation}
## [Don't Forget To Document](#typescript-is-not-documentation)
Just as typings shouldn't replace tests, typings should also not replace documentation or comments. Typings can help communicate what inputs and outputs you're expecting, but just the same as with testing, _they don't explain what the logic does or provide context as to why the data types have specific properties_, what the properties are used for, and so forth. Additionally, typings often do little to explain how to contribute at a larger scale when it comes to documentation. For example, in a large-scale application, there may be some complex data patterns or order-of-operations that are required to do a task. Typings alone will not effectively communicate these design principles that are integral to the usage of the code.

View File

@@ -18,7 +18,7 @@ But there are a few pitfalls out there for the unwary new React developer. One o
What does that mean?
# Browsing in Public {#public}
# [Browsing in Public](#public)
Well, as it turns out, anything that happens in the browser basically happens out in the open. Anyone who knows how to open a developer console can see the output of the JavaScript console, the results of network requests/responses, and anything hidden in the HTML or CSS of the current page. While you are able to mitigate this type of reverse-engineering by randomizing variable names in a build step (often called "Obfuscating" your code), even a fairly quick Google session can often undo all of the efforts you took to muddy the waters. The browser is a terrible place to try to store or use secret information like unencrypted passwords or API keys - and React runs in the browser!
@@ -28,7 +28,7 @@ So, what is the answer? How do you keep your API keys from falling into the hand
We can't keep things like API keys a secret in React because it runs in the browser on the user's computer. The solution is to make sure your React application never sees the API key or uses it at all - that way, it is never sent to the user's local machine. Instead, we have to get a proxy server to make our API calls and send the data back to the React app.
# What is a Proxy Server? {#proxy}
# [What is a Proxy Server?](#proxy)
If you are unfamiliar with the term "proxy server", that's alright! If you think about how a React app would typically interface with an API, you'd have a `GET` call to the API server in order to get the data you want from the API. However, for APIs that require an API key or "client_secret", we have to include an API key along with the `GET` request in order to get the data we want. This is a perfectly understandable method for securing and limiting an API, but it introduces the problem pointed out above: We can't simply bundle the API key in our client-side code. As such, we need a way to keep the API key out of reach of our users but still make data accessible. To do so, we can utilize another server (that we make and host ourselves) that knows the API key and uses it to make the API call _for_ us. Here's what an API call would look like without a proxy server:
@@ -40,7 +40,7 @@ Meanwhile, this is what an API call looks like with a proxy server:
As you can see, the proxy server takes calls that you would like to make, adds the API key, and returns the data from the API server. It's a straightforward concept that we can implement ourselves.
# How to use a Proxy Server {#how-to-use}
# [How to use a Proxy Server](#how-to-use)
It might make more sense to talk about things the other way around and start with the front end. Instead of using React to make a direct request to an API for information, we tell React to send an HTTP request to our proxy server. Since we are writing our front end application in JavaScript, it makes life a little easier to write our server in Node, though you could use Ruby or Python or any other back end friendly language if you want.
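As a rough sketch of the idea (the endpoint URL and `API_KEY` variable below are placeholders, and the built-in `fetch` assumes Node 18+):

```javascript
const express = require('express');

const app = express();

app.get('/api/data', async (req, res) => {
  // The key is read on the server, so the browser never sees it
  const response = await fetch(
    `https://api.example.com/data?api_key=${process.env.API_KEY}`
  );
  const data = await response.json();
  res.json(data);
});

app.listen(3001, () => console.log('Proxy listening on port 3001'));
```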
@@ -112,7 +112,7 @@ app.get('/', async (req, res) => {
With that out of the way, let's get to the good part - keeping your API keys out of your source code!
# Environmental Variables and You {#environment}
# [Environmental Variables and You](#environment)
Most of the time, we want to keep things like API keys and other credentials out of the source code of an app. There are some very good security reasons for this practice. For one thing, if your project is open source and hosted on a place like GitHub, it will be exposed to anyone browsing the website, not to mention the fact that there are some less-than-savory people out there who have written web scraping scripts to look for publicly exposed API keys and exploit them. Furthermore, even for private projects, API keys integrated into the source code are a potential security vulnerability. A hacker could find a way into your system and compromise the usage of the API key. Being able to hide them away in a more configurable manner might keep things safer.
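To illustrate, a typical Node setup keeps these values in a `.env` file and loads them with the `dotenv` package; a small sketch, assuming a hypothetical `API_KEY` entry:

```javascript
// server.js - a sketch; assumes an `API_KEY=...` line in a local .env file
require('dotenv').config();

// Available on the server only; it never ships to the browser
const apiKey = process.env.API_KEY;
```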
@@ -138,7 +138,7 @@ The other potential "gotcha" is to make sure to include the `.env` file in your
Now, it's true that you can use environmental variables in React. But they [will not keep your secrets](https://create-react-app.dev/docs/adding-custom-environment-variables/) the way they do in Node! Those variables will be embedded into your React build, which means that anyone will be able to see them.
# Conclusion {#conclusion}
# [Conclusion](#conclusion)
Now you know how to whip up a simple Node server and use environmental variables to keep your secrets when making API calls with front end libraries/frameworks like React. It's actually pretty easy and can serve as an introduction to the basics of Node and Express if you haven't had a reason to use them before.

View File

@@ -15,7 +15,7 @@ Modern-day remote live communication has never been as efficient or fast as it i
One way they've made extension development easier is by providing an SDK for Node developers to use and create extensions with. This post will outline how we can create a Slack bot to add functionality to chats.
# Initial Signup {#signup-for-dev-account}
# [Initial Signup](#signup-for-dev-account)
To start, we'll need to [sign up for a developer account and create an app to host our application logic using this link](https://api.slack.com/apps). This will allow us to create new Slack apps and bots to add into our workspace.
@@ -96,7 +96,7 @@ Then, in your `package.json`, you can **edit your `start` script** to reflect th
Now, whenever your code uses `process.env.SLACK_SIGNING_SECRET`, it'll represent the value present in your `.env` file.
# Development Hosting {#development-environment-setup}
# [Development Hosting](#development-environment-setup)
In order to have these events called, we'll need to get a public URL to route to our local development server. In order to do this, we can [use `ngrok`](https://github.com/inconshreveable/ngrok) to host a public URL in our local environment:
@@ -154,7 +154,7 @@ I can hear you asking "But here we're requesting `message.channels`, how do we k
You can actually check the event `type` from [the API reference documentation](https://api.slack.com/events/message.channels) to see that the `type`s match up.
# Development App Installation {#development-installation}
# [Development App Installation](#development-installation)
You may notice, as I first did, that if you start your server with `npm start` and then send a message to a public channel, something is off in your terminal. Or, well, rather, something is missing from your terminal. The `console.log` that you would expect to run isn't doing so - why is that?
@@ -172,7 +172,7 @@ Once this is done, you can send a test message to a public channel and see it pr
![A showcase of the message "Hello, World" being sent to the app](./hello_world.png)
# App Interactivity {#interactive-message-package}
# [App Interactivity](#interactive-message-package)
While listening to events alone can be very useful in some circumstances, oftentimes having a way to interact with your application can be very helpful. As a result, the Slack SDK also includes the `@slack/interactive-messages` package to help you provide interactions with the user more directly. Using this package, you can respond to the user's input. For example, let's say we wanted to replicate the [PlusPlus](https://go.pluspl.us/) Slack bot as a way to track a user's score.
@@ -184,7 +184,7 @@ We want to have the following functionality for an MVP:
Each of these messages will prompt the bot to respond with a message in the same channel. Ideally, we'd use a database to store scores for long-term projects, but for now, let's use in-memory storage for an MVP of the interactivity we're hoping for.
## Setup {#interactive-bot-setup}
## [Setup](#interactive-bot-setup)
First and foremost, something you'll need to do is add a new OAuth permission to enable the functionality for the bot to write to the channel. Go into the dashboard and go to the "OAuth & Permissions" tab. The second section of the screen should be called "Scopes", where you can add the `chat:write:bot` permission.
![The permissions searching for "chat" which shows that "chat:write:bot" permission we need to add](./chat_write_bot_oauth.png)
@@ -199,7 +199,7 @@ Once this is done, you can access the OAuth token for the fresh installation of
Copy the token from the top of the screen and store it in our `.env` file so that we can utilize it in our application. I named the environment variable `OAUTH_TOKEN`, so when you see that in code examples, know that this is in reference to this value.
## The Code {#leaderboard-local-code}
## [The Code](#leaderboard-local-code)
To start adding in response functionality, we need to install the package that'll allow us to use the web API:
@@ -265,7 +265,7 @@ As it did before, the code will listen for every message we send. Then, we liste
> Remember, the channel ID is not the same thing as the human-readable channel name. It's a unique ID generated by Slack and as such you'd have to use the API to get the channel ID if you only knew the human-readable name
## Adding State {#interactive-local-state}
## [Adding State](#interactive-local-state)
Luckily for our MVP, we've already outlined that we won't be using a database for the initial version of the bot. As such, we're able to keep a simple stateful object and mutate it to keep track of what's being scored.
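A sketch of what that in-memory store might look like (the names here are illustrative, not the exact code we'll land on):

```javascript
// Maps a subject (e.g. "botsRCool") to its current score
const scores = {};

function addPoint(subject) {
  scores[subject] = (scores[subject] || 0) + 1;
  return scores[subject];
}

function removePoint(subject) {
  scores[subject] = (scores[subject] || 0) - 1;
  return scores[subject];
}

addPoint('botsRCool');      // 1
removePoint('failedDemos'); // -1
```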
@@ -346,7 +346,7 @@ slackEvents.on('message', async event => {
As you can see, we're able to add in the functionality for the score-keeping relatively easily with little additional code. Slightly cheating, but to pretty-print the score table, we're using a `tablize` package that's part of [the "batteries not included" library we've built](https://github.com/unicorn-utterances/batteries-not-included) in order to provide an ASCII table for our output.
# Adding a Database {#mongodb}
# [Adding a Database](#mongodb)
Even though the bot works well so far, it's not ideal to keep a score in memory. If your server crashes or if there's any other form of interruption in the process running, you'll lose all of your data. As such, we'll be replacing our local store with a database. As our data needs are simple and I want to keep this article relatively short, let's use a NoSQL database to avoid having to structure tables. We'll use MongoDB in order to keep our data stored.
@@ -396,7 +396,7 @@ const uri = `mongodb+srv://${mongoUser}:${mongoPass}@cluster0-xxxxx.mongodb.net/
Now that we understand the URI we need to pass to the Node driver to connect to the database, we'll dive into the code we need to change to enable MongoDB.
## The Code {#mongodb-code}
## [The Code](#mongodb-code)
```javascript
const { createEventAdapter } = require('@slack/events-api');
@@ -510,7 +510,7 @@ If you do a diff against the previous code, you'll see that we were able to add
Because we now have a database running the data show, we can be sure that our data will persist - even if or when our server goes down (either for maintenance or a crash). Now that we have the code updates, let's get to deploying the code we had set up.
# Deployment {#deployment}
# [Deployment](#deployment)
Ideally, since our Slack app is a small side project, we'd like to host things in a straightforward manner for cheap/free. One of my favorite hosting solutions for such projects is [Heroku](https://heroku.com/). Heroku is no stranger to Slack apps, either. They have [an official blog post outlining making their own Slack bot using the web notification feature within Slack](https://blog.heroku.com/how-to-deploy-your-slack-bots-to-heroku). That said, our route is going to be a bit different from theirs because we chose to use the event subscriptions instead.
@@ -607,7 +607,7 @@ Run that last `git commit` and `git push heroku master` and congrats! You should
![A demo of the app by adding a point to "botsRCool" and removing one from "failedDemos"](./showcase.png)
# Conclusion {#conclusion}
# [Conclusion](#conclusion)
Slack provides a feature-rich, very useful chat application. Being able to add in your own functionality to said application only makes things more powerful for either your group or your end users. I know many businesses will use Slack bots as another experience for their business users. Now you've been able to see the power of their Node SDK and how easy it is to set up and deploy your very own Slack app using MongoDB and Heroku!

View File

@@ -20,7 +20,7 @@ In this article, we'll outline how to set up a new blog post site using Scully.
Without further ado, let's jump in, shall we?
# Initial Setup {#initial-setup}
# [Initial Setup](#initial-setup)
First, we have some requirements:
@@ -41,7 +41,7 @@ If we pause here and run `ng serve`, we'll find ourselves greeted with the defau
The file that this code lives under is the `app.component.html` file. We'll be modifying that code later on, as we don't want that UI to display on our blog site.
## Adding Scully {#adding-scully}
## [Adding Scully](#adding-scully)
After that, open the `my-scully-blog` directory and run the following command to install and add Scully to the project:
@@ -66,7 +66,7 @@ While Scully [_does_ have a generator to add in blog support](https://github.com
> This isn't a stab at Scully by any means; if anything, I mean it as a compliment. The team consistently improves Scully and I had some suggestions for the blog generator at the time of writing. While I'm unsure of these suggestions making it into future versions, it'd sure stink to throw away an article if they were implemented.
## Angular Routes {#angular-blog-routes}
## [Angular Routes](#angular-blog-routes)
Before we get into adding in the Scully configs, let's first set up the page that we'll want our blog to show up within. We want a `/blog` sub route, allowing us to have a `/blog` for the list of all posts and a `/blog/:postId` for the individual posts.
@@ -102,7 +102,7 @@ const routes: Routes = [
This imports the `blog.module` file to use the further children routes defined there. If we now start serving the site and go to `localhost:4200/blog`, we should see the message "blog works!" at the bottom of the page.
### Routing Fixes {#router-outlet}
### [Routing Fixes](#router-outlet)
That said, you'll still be seeing the rest of the page. That's far from ideal, so let's remove the additional code in `app.component.html` to be only the following:
@@ -137,7 +137,7 @@ const routes: Routes = [
Now, we have both `/blog` and `/` working as-expected!
### Adding Blog Post Route {#blog-post-route}
### [Adding Blog Post Route](#blog-post-route)
Just as we added a new route to the existing `/` route, we're going to do the same thing now, but with `/blog` paths. Let's add a `blog-post` route to match an ID passed to `blog`. While we won't hook up any logic to grab the blog post by ID yet, it'll help to have that route configured.
@@ -157,7 +157,7 @@ const routes: Routes = [
That's it! Now, if you go to `localhost:4200/blog`, you should see the `blog works!` message and on the `/blog/asdf` route, you should see `blog-post works!`. With this, we should be able to move onto the next steps!
## The Markdown Files {#frontmatter}
## [The Markdown Files](#frontmatter)
To start, let's create a new folder at the root of your project called `blog`. It's in this root folder that we'll add our markdown files that our blog posts will live in. Let's create a new markdown file under `/blog/test-post.md`.
@@ -189,7 +189,7 @@ authorTwitter: crutchcorn
It's worth mentioning that the `publish` property has some built-in functionality with Scully that we'll see later on. We'll likely want to leave that field in and keep it `true` for now.
## Scully Routes {#scully-blog-route-config}
## [Scully Routes](#scully-blog-route-config)
Now we'll tell Scully to generate one route for each markdown file inside of our `blog` folder. As such, we'll update our `scully.my-scully-blog.config.js` file to generate a new `/blog/:postId` route for each of the markdown files:
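As a rough sketch of that configuration (option names may vary slightly between Scully versions):

```javascript
// scully.my-scully-blog.config.js - a sketch of the contentFolder route plugin
exports.config = {
  projectRoot: './src',
  projectName: 'my-scully-blog',
  outDir: './dist/static',
  routes: {
    // One rendered route per markdown file found in ./blog
    '/blog/:postId': {
      type: 'contentFolder',
      postId: {
        folder: './blog',
      },
    },
  },
};
```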
@@ -300,7 +300,7 @@ Finally, if we go to [http://localhost:1668/blog/test-post](http://localhost:166
![A preview of the post as seen on-screen](./hello_world_blog_post.png)
## Scully Build Additions {#scully-build-folder}
## [Scully Build Additions](#scully-build-folder)
You'll notice that if you open your `dist` folder, you'll find two folders:
@@ -311,11 +311,11 @@ You'll notice that if you open your `dist` folder, you'll find two folders:
The reason for the two separate folders is that Scully has its own build folder. When you ran `ng build`, you generated the `my-scully-blog` folder, then when you later ran `npm run scully`, it generated the `static` folder. As such, if you want to host your app, you should use the `static` folder.
## Asset Routes {#scully-build-routes}
## [Asset Routes](#scully-build-routes)
If you open the `/src/assets` folder, you'll notice another file you didn't have before running `npm run scully`. This file is generated any time you run Scully and provides you with the routing metadata during an `ng serve` session. [Remember how I mentioned that there was a way to access the Markdown frontmatter data?](#scully-blog-route-config) Well, this is how! After running a Scully build, you'll have that metadata at your disposal. In the next section, we'll walk through how to access it!
# Listing Posts {#scully-route-acess}
# [Listing Posts](#scully-route-acess)
To get a list of posts, we're going to utilize Scully's route information service. To start, let's add that service to the `blog.component.ts` file:
@@ -372,7 +372,7 @@ And that'll give us what we're looking for:
0: {route: "/blog/test-post", title: "Test post", description: "This is a post description", publish: true, authorName: "Corbin Crutchley", }
```
## Final Blog List {#scully-avail-routes}
## [Final Blog List](#scully-avail-routes)
We can clean up the code a bit by using [the Angular `async` pipe](https://angular.io/api/common/AsyncPipe):
@@ -418,7 +418,7 @@ This code should give us a straight list of blog posts and turn them into links
While this isn't a pretty blog, it is a functional one! Now we're able to list routes; we can even get the metadata for a post.
## Final Blog Post Page {#scully-avail-routes-filtered}
## [Final Blog Post Page](#scully-avail-routes-filtered)
But what happens if you want to display metadata about a post on the post page itself? Surely being able to list the author metadata in the post would be useful as well, right?

View File

@@ -14,13 +14,13 @@
In the last article in the series, we outlined what a packet architected network was, what the OSI layers represent, and demonstrated how we could use physical mail as an analogy for how packet-based networks function. Since we've gone to a hundred-mile view in the last series, I figured we'd take a look at what we deliver in an HTTP network. You see, the internet, as you know it, is merely a large scale HTTP network; it's built upon the packet architecture. There are two common types of packets that are delivered in the HTTP network: UDP and TCP.
# Commonalities {#udp-and-tcp-both}
# [Commonalities](#udp-and-tcp-both)
Let's start by talking about what similarities UDP and TCP have. While they do have their distinct differences, they share a lot in common.
Since they're both packet-based, they both require an "address" of sorts to infer where they've come from and where they're going.
## IP Addresses {#ip-address}
## [IP Addresses](#ip-address)
The "address" used to identify the "to" and "from" metadata about a packet is an "IP Address." When you send a packet of data out, you label it with an IP address to go to; then, through a process of various other utilities processing that data, it's sent! An IP address might look something like this: `127.0.0.0`, or something like this: `0:0:0:0:0:0:0:1`
@@ -28,7 +28,7 @@ This IP address is then stored in a packet's header ([if you recall, that's wher
![A packet being directed to the correct client matching the IP in the header](./showing-an-ip-address.svg)
### Different Types of IP Addresses {#ipv4-vs-ipv6}
### [Different Types of IP Addresses](#ipv4-vs-ipv6)
While IP addresses may seem somewhat arbitrary at first glance, there are important rules to abide by to have what's considered a "valid" IP address. What's considered "valid" is defined by the TCP/IP specification, which is led by the [Internet Engineering Task Force](https://en.wikipedia.org/wiki/Internet_Engineering_Task_Force), a group created specifically to manage and handle network protocol standardization. As time has gone on, there have been various revisions to the IP validation methods. What was once valid is now considered outdated and migrated to a newer standard of IP address. The two most commonly used standards for defining IP addresses today are:
@@ -39,13 +39,13 @@ Due to the explosion of internet enabled-devices, we have had to make changes to
![A showcase of an example IPv4 address and an IPv6 address. IPv4 example is "131.198.246.34" while IPv6 is "4131:e0fd:ef8e:ed27:f5b:ac98:640c:bfa5"](./ip-comparison.svg)
#### What Happened to version 5? {#ipv5}
#### [What Happened to version 5?](#ipv5)
As mentioned previously, the Internet Engineering Task Force manages various specifications regarding the standardization of internet communication. Back in 1995, they gathered to attempt to create a new version of the protocol to handle the growing use of live-streamed communication. To make a long story short, IPv5 was abandoned for various reasons, and they moved on to tackle the issue of unique identifiers rapidly diminishing. To avoid confusion with the attempted streaming protocol improvements, when a new version of the protocol was being worked on afterward, it was called IPv6.
> If you'd like to read more about this version for fun, you can read through [the Wikipedia page](https://en.wikipedia.org/wiki/Internet_Stream_Protocol). Unfortunately, there's limited information, and things very quickly get highly technical due to the "in progress" state things were left in.
## Ports {#udp-ports}
## [Ports](#udp-ports)
Continuing with the mail analogy, just like an apartment complex can have a single mailbox for multiple apartments living within the same building, so too can a single machine have multiple landing sites for network packets.
@@ -53,17 +53,17 @@ These separated landing sites are called "ports"; called as such because they op
This method of port address selection even has its own shorthand. For example, if you wanted to send data to IP address `192.168.1.50` on port `3000`, you'd send that data to: `192.168.1.50:3000`, being sure to use a colon to delineate between the IP address and the port number.
### Pre-Assigned Ports {#standard-ports}
### [Pre-Assigned Ports](#standard-ports)
Like an apartment complex may pre-assign individuals to specific rooms, so too does the specification for Internet Protocol pre-assign specific applications to specific ports. For example, port 21 is officially designated to the [File Transfer Protocol (FTP)](https://en.wikipedia.org/wiki/File_Transfer_Protocol), which can be used to transfer files if a server is set up on a machine to handle this protocol. As a result, if you want to use a specific port for networking in your app or project, it's strongly discouraged to pick one of these reserved ports for your application stack.
### A Note On IP Addresses {#localhost}
### [A Note On IP Addresses](#localhost)
You might remember from [the start of this section](#ip-addresses) that I listed `127.0.0.1` and `0:0:0:0:0:0:0:1` as examples of IPv4 and IPv6 addresses. This isn't without reason! These addresses are known as "loopback" addresses, and forward all traffic addressed to those IP addresses back to your machine! Why might this be useful? Let's take the following real-world example:
Let's say you're developing a web application using React and want to see it hosted on your local development environment without deploying it to the public internet to see. In this example, you could spin up a server to host the React code on `127.0.0.1:3000`, and you could then access it via `localhost:3000` in your browser. For programs like React, this functionality is built into [its CLI utility](https://reactjs.org/docs/create-a-new-react-app.html), but this isn't limited to React; it's universal for any form of network communication you need to test locally.
# UDP {#udp}
# [UDP](#udp)
Now that we've explained what IP addresses are and what ports are, let's walk through how UDP is unique. _UDP stands for "User Datagram Protocol."_ You may be familiar with "User" and "Protocol," but the term **"datagram"** may be new.
@@ -81,11 +81,11 @@ Likewise, if you've sent multiple packets at once, you have no way of knowing if
## When is UDP Useful? {#udp-uses}
## [When is UDP Useful?](#udp-uses)
UDP is useful for various low-level communication used to set up networks in ways that we'll touch on later in the series. That said, there are also application-level usages for UDP's core strength: speed. See, because UDP does not engage in any form of delivery confirmation, it tends to be significantly faster than its TCP counterpart. As such, if you require high-speed data throughput and can afford to lose some data, UDP is the way to go. This speed is why it's often utilized in video calling software. You can scale the video quality up or down based on which packets are able to make it through, but keep latency low by pressing forward when packets don't arrive in time.
# TCP {#tcp}
# [TCP](#tcp)
If you've ever sent an expensive package through a mail courier service, you may have opted to have the recipient "sign" for the package, as a method of certifying that they did, in fact, get the package.

View File

@@ -12,7 +12,7 @@
Computers, on a very low level, are built upon binary (ones and zeros). Think about that — all of the text you're reading on your screen started life as either a one or a zero in some form. That's incredible! How can it turn something so simple into a sprawling sheet of characters that you can read on your device? Let's find out together!
# Decimal {#decimal}
# [Decimal](#decimal)
When you or I count, we typically use 10 numbers in some variation of combination to do so: `0`, `1`, `2`, `3`, `4`, `5`, `6`, `7`, `8`, and `9`.
@@ -31,7 +31,7 @@ Remember that the number **`10`** is a combination of **`1`** and **`0`**? That'
![A "9" in the tens column, and a "9" in the ones column which drop down to show "90 + 9" which equals 99](./base_10_99.svg)
# Binary {#binary}
# [Binary](#binary)
Now this may seem rather simplistic, but it's an important distinction to be made to understand binary. Our typical decimal numeral system is known as the _base 10_ system. **It's called as such because there are 10 symbols used to construct all other numbers** (once again, that's: `0`, `1`, `2`, `3`, `4`, `5`, `6`, `7`, `8`, and `9`).
Binary, on the other hand, is _base two_. **This means that there are only two symbols that exist in this numeral system.**
@@ -93,7 +93,7 @@ And voilà, you have the binary representation of `50`: **`0110010`**.
>
> While there are plenty of ways to find the binary representation of a decimal number, this example uses a "greedy" algorithm. I find this algorithm to flow the best with learning the binary number system, but it's not the only way (or even the best way, oftentimes).
# Hexadecimal {#hexadecimal}
# [Hexadecimal](#hexadecimal)
Binary isn't the only non-decimal system. You're able to use any number as your base as long as you have enough symbols to represent the digits. Let's look at another example of a non-decimal system: _hexadecimal_.
@@ -157,7 +157,7 @@ In order to add a number larger than `15` in the hexadecimal system, we need to
>
> _`732`_ for example, in base 10, can be written as (7 × 10<sup>2</sup>) + (3 × 10<sup>1</sup>) + (2 × 10<sup>0</sup>).
## To Binary {#hexadecimal-to-binary}
## [To Binary](#hexadecimal-to-binary)
Remember that at the end of the day, hexadecimal is just another way to represent a value using a specific set of symbols. Just as we're able to convert from binary to decimal, we can convert from hexadecimal to binary and vice versa.
In binary, the set of symbols is much smaller than in hexadecimal, and as a result, the symbolic representation is longer.
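If you want to double-check these conversions yourself, JavaScript's built-in number parsing works as a quick sandbox:

```javascript
// Hexadecimal F3 -> decimal -> binary
const decimal = parseInt('F3', 16); // 243
const binary = decimal.toString(2); // "11110011"

// Each hexadecimal digit maps to exactly four binary digits, since 16 = 2^4
parseInt('F', 16).toString(2); // "1111"
parseInt('3', 16).toString(2); // "11" (0011 once padded to four digits)
```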
@@ -168,7 +168,7 @@ After all, they're just reflections of the numbers that we represent using a spe
# Applications
## CSS Colors {#hex-css}
## [CSS Colors](#hex-css)
Funnily enough, if you've used a "hex" value in HTML and CSS, you may already be loosely familiar with a scenario similar to what we walked through in the hexadecimal section.
@@ -200,7 +200,7 @@ Even without seeing a visual representation, you can tell that this color likely
![A visual representation of the color above, including a color slider to show where it falls in the ROYGBIV spectrum](./F33BC6.png)
## Text Encoding {#ascii}
## [Text Encoding](#ascii)
Although hexadecimal has a much more immediately noticeable application with colors, we started this post off with a question: "How does your computer know what letters to display on the screen from only binary?"

View File

@@ -12,7 +12,7 @@
Every new C/C++ programmer will eventually reach the point at which they are forced to work with pointers and will undoubtedly realize that they extremely dislike using them because they are a little complex. Today, we'll be looking at what pointers are, deconstructing their usage, and hopefully, making the usage of pointers easier to grok.
# What is a Pointer? {#what-is-a-pointer}
# [What is a Pointer?](#what-is-a-pointer)
A pointer is simply a variable or object that, instead of holding a value, holds a memory address to another spot in memory. Pointers are most recognizable by their declaration, which includes the **\*** operator, also known as the **dereference operator**. This operator is called the dereference operator because when you try to access the value that the pointer is referencing, you have to use the **\*** operator to "de-reference" the value, which is just a fancy way of saying "go to that reference".
@@ -53,7 +53,7 @@ As you can see, the pointer p holds the memory address of num, and when you use
Pointers can also get a lot more complex and must be used in certain situations. For example, if you put an object on the heap (Check out my article on Virtual Memory to learn more about heap memory) then you will have to use a pointer because you can't access the heap directly. So, instead of having a pointer to an address on the stack, it will point to an address on the heap. You might even find yourself using double or triple pointers as you get more used to them.
# What is a Reference? {#what-is-a-reference}
# [What is a Reference?](#what-is-a-reference)
In simple terms, a reference is simply the address of whatever you're passing. The difference between a pointer and a reference lies in the fact that a reference is simply the **address** to where a value is being stored and a pointer is simply a variable that has its own address as well as the address it's pointing to. I like to consider the **&** operator the "reference operator" even though I'm pretty sure that's not actually what it is called. I used this operator in the last example, and it's pretty straightforward.
@@ -89,7 +89,7 @@ Heres what this looks like in memory with more easily understandable addresse
**![Memory Example](./memory.png)**
# Pass by Reference vs. Pass by Value {#passing}
# [Pass by Reference vs. Pass by Value](#passing)
This is another more complex topic that we as programmers need to be aware of in almost all languages - even languages without pointers. The idea of the two all stems from functions, sometimes called methods, and their parameters. Whenever you pass something into a function, does the original variable/object that is passed in get updated inside as well as outside the function, or is it hyperlocal and it just creates a copy of the original parameter? "Pass by reference" refers to when the parameter is changed both within the function and outside of it. "Pass by value" refers to when the parameters are merely a copy and have their own memory address, only being updated inside of the function.
@@ -139,6 +139,6 @@ As you can see, when passed by reference, the local value is changed, but when i
This gets confusing after a while if you're not paying attention to your outputs. In fact, Python gets even more confusing, but that's a topic for another day; be sure to sign up for our newsletter to see when that lands 😉
# Review/Conclusion {#conclusion}
# [Review/Conclusion](#conclusion)
Pointers and references are extremely important in your day-to-day work in languages like C/C++. C++ gives you a lot of manual control, the most common example being memory management. Knowing how each one of your pointers or variables is stored will help you write code faster and more efficiently. Knowing how parameters are passed to your functions, as well as how they are updated, will also make your life **so** much easier.

View File

@@ -22,7 +22,7 @@ We'll also be exploring additional functionality to each of those two definition
> As most of this content relies on the `useRef` hook, we'll be using functional components for all of our examples. However, there are APIs such as [`React.createRef`](https://reactjs.org/docs/refs-and-the-dom.html#creating-refs) and [class instance variables](https://www.seanmcp.com/articles/storing-data-in-state-vs-class-variable/) that can be used to recreate `React.useRef` functionality with classes.
# Mutable Data Storage {#use-ref-mutate}
# [Mutable Data Storage](#use-ref-mutate)
While `useState` is the most commonly known hook for data storage, it's not the only one on the block. React's `useRef` hook functions differently from `useState`, but they're both used for persisting data across renders.
@@ -115,7 +115,7 @@ Thanks to the lack of rendering on data storage, it's particularly useful for st
<iframe src="https://stackblitz.com/edit/react-use-ref-mutable-data?ctl=1&embed=1" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
# Visual Timer with Refs {#visual-timers}
# [Visual Timer with Refs](#visual-timers)
While there are usages for timers without rendered values, what would happen if we made the timer render a value in state?
@@ -194,7 +194,7 @@ Because `useRef` relies on passing by reference and mutating that reference, if
> ```
> We're simply using a `useRef` to outline one of the important properties about refs: mutation.
# DOM Element References {#dom-ref}
# [DOM Element References](#dom-ref)
At the start of this article, I mentioned that `ref`s are not just a mutable data storage method but a way to reference DOM nodes from inside of React. The easiest of the methods to track a DOM node is by storing it in a `useRef` hook using any element's `ref` property:
@@ -232,7 +232,7 @@ Because `elRef.current` is now a `HTMLDivElement`, it means we now have access t
<iframe src="https://stackblitz.com/edit/react-use-ref-effect-style?ctl=1&embed=1" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
## Alternative Syntax {#ref-function}
## [Alternative Syntax](#ref-function)
It's worth noting that the `ref` attribute also accepts a function. While [we'll touch on the implications of this more in the future](#callback-refs), just note that this code example does exactly the same thing as `ref={elRef}`:
@@ -248,7 +248,7 @@ It's worth noting that the `ref` attribute also accepts a function. While [we'll
)
```
# Component References {#forward-ref}
# [Component References](#forward-ref)
HTML elements are a great use-case for `ref`s. However, there are many instances where you need a ref for an element that's part of a child's render process. How are we able to pass a ref from a parent component to a child component?
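React's answer is `forwardRef`. Here's a minimal sketch with illustrative component names:

```jsx
import React, { useRef, forwardRef } from 'react';

// The child simply forwards the ref it receives down to the underlying DOM node
const FancyInput = forwardRef((props, ref) => <input ref={ref} {...props} />);

function App() {
  const inputRef = useRef();

  return (
    <>
      <FancyInput ref={inputRef} />
      <button onClick={() => inputRef.current.focus()}>Focus the input</button>
    </>
  );
}
```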
@@ -327,7 +327,7 @@ Now that we are using `forwardRef`, we can use the `ref` property name on the pa
<iframe src="https://stackblitz.com/edit/react-use-ref-effect-style-forward-ref?ctl=1&embed=1" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
# Class Component References {#class-ref}
# [Class Component References](#class-ref)
While I mentioned that we'll be using functional components and hooks for a majority of this article, I think it's important that I cover how class components handle the `ref` property. Take the following class component:
@@ -424,7 +424,7 @@ console.log(this.container.current.render);
ƒ render()
```
## Custom Properties and Methods {#class-ref-methods-props}
## [Custom Properties and Methods](#class-ref-methods-props)
Not only are React Component built-ins (like `render` and `props`) accessible from a class ref, but you can access data that you attach to that class as well. Because the `container.current` is an instance of the `Container` class, when you add custom properties and methods, they're visible from the ref!
@@ -466,7 +466,7 @@ function App() {
<iframe src="https://stackblitz.com/edit/react-class-ref-instance-custom-props?ctl=1&embed=1" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
# Unidirectional Flow {#unidirectional-flow}
# [Unidirectional Flow](#unidirectional-flow)
While the concept of "unidirectional data flow" is a broader subject than what I originally wanted to cover with this article, I think it's important to understand why you shouldn't utilize the pattern outlined above. One of the reasons refs are so useful is also one of the reasons they're so dangerous as a concept: they break unidirectional data flow.
@@ -523,7 +523,7 @@ This is what a proper React component _should_ look like. This pattern of raisin
Now that we have a better understanding of the patterns to follow let's take a look at the wrong way to do things.
## Breaking from Suggested Patterns {#bidirectionality-example}
## [Breaking from Suggested Patterns](#bidirectionality-example)
Doing the inverse of "lifting state," let's lower that state back into the `SimpleForm` component. Then, to access that data from `App`, we can use the `ref` property to access that data from the parent.
@@ -605,7 +605,7 @@ As you can see, while the number of steps is similar between these methods (and
This is why the React core team (and the community at large) highly suggests you use unidirectionality and rightfully shuns breaking away from that pattern when it's not required.
# Add Data to Ref {#use-imperative-handle}
# [Add Data to Ref](#use-imperative-handle)
If you've never heard of the `useImperativeHandle` hook before, this is why. It enables you to add methods and properties to a `ref` forwarded/passed into a component. By doing this, you're able to access data from the child directly within the parent, rather than forcing you to raise state up, which can break unidirectionality.
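Here's a minimal sketch of the hook's shape, with illustrative names:

```jsx
import React, { useRef, useImperativeHandle, forwardRef } from 'react';

const Expandable = forwardRef((props, ref) => {
  // Only the methods listed here are exposed to the parent through the ref
  useImperativeHandle(ref, () => ({
    sayHi: () => console.log('Hi from the child!'),
  }));

  return <div>{props.children}</div>;
});

function App() {
  const childRef = useRef();

  return (
    <>
      <Expandable ref={childRef}>Some content</Expandable>
      <button onClick={() => childRef.current.sayHi()}>Call the child</button>
    </>
  );
}
```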
@@ -708,7 +708,7 @@ That said, you're not limited to simply the names of native APIs. What do you th
> When your focus is set to the `Container` element, try typing in the ["Konami code"](https://en.wikipedia.org/wiki/Konami_Code) using your arrow keys. What does it do when that's done?
# React Refs in `useEffect ` {#refs-in-use-effect}
# [React Refs in `useEffect `](#refs-in-use-effect)
I have to make a confession: I've been lying to you. Not maliciously, but I've repeatedly used code in the previous samples that should not ever be used in production. This is because without hand-waving a bit, teaching these things can be tricky.
@@ -850,7 +850,7 @@ Now, once you've triggered the `useState` "add" button, do the same with the `us
[TL;DR](https://www.dictionary.com/browse/tldr) - Try pressing `useState` "add" twice. The value on-screen will be 2. Then, try pressing the `useRef` "add" button thrice. The value on-screen will be 0. Press `useState`'s button once again and et voilà - both values are 3 again!
## Comments from Core Team {#core-team-comments}
## [Comments from Core Team](#core-team-comments)
Because of the unintended effects of tracking a `ref` in a `useEffect`, the core team has explicitly suggested avoiding doing so.
@@ -870,7 +870,7 @@ Because of the unintended effects of tracking a `ref` in a `useEffect`, the core
These are great points... But what does Dan mean by a "callback ref"?
# Callback Refs {#callback-refs}
# [Callback Refs](#callback-refs)
Towards the start of this article, we mentioned an alternative way to assign refs. Instead of:
@@ -936,7 +936,7 @@ That's true. However, you _can_ combine the two behaviors to make a callback tha
<iframe src="https://stackblitz.com/edit/react-use-ref-callback-and-effect?ctl=1&embed=1" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
# `useState` Refs {#usestate-refs}
# [`useState` Refs](#usestate-refs)
Sometimes the combination of `useRef` and callback refs is not enough. There are rare instances where you need to re-render whenever you get a new value in `.current`. The problem is that the inherent nature of `.current` prevents re-rendering. How do we get around that? Eliminate `.current` entirely by switching your `useRef` out for a `useState`.
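A small sketch of that swap, with illustrative styling:

```jsx
import React, { useState, useEffect } from 'react';

function App() {
  // Passing the state setter as a callback ref means every new node triggers a re-render
  const [el, setEl] = useState(null);

  useEffect(() => {
    if (!el) return;
    el.style.background = 'lightblue';
  }, [el]);

  return <div ref={setEl}>Hello, world!</div>;
}
```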

View File

@@ -16,7 +16,7 @@ This versioning complexity birthed _a set of tools that allows you to generate c
If you _enforce a standardized set of commit messages_ (both header and body), then _a tool can automatically run through each commit_ since your last release _and generate the changelog_. Furthermore, because the commit message standards you'll follow outline when a new feature, bug fix, or breaking change is introduced, _this tooling can assume what portion of SEMVER (major, minor, or patch) to bump_. It can change the version numbers in your files as well!
# Step 0: Commit Rules {#conventional-commit}
# [Step 0: Commit Rules](#conventional-commit)
Before we start setting up tooling (to generate the changelogs, commit message verification, and more), we first need to understand the rules we're signing up for. As mentioned before, we'll need to standardize the way we write our commit messages for our tooling to work effectively. The standardized commit message template we'll be following in this article is called [Conventional Commits](https://www.conventionalcommits.org/). Conventional Commits generally follow an outline as such:
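Roughly speaking (paraphrasing the Conventional Commits spec), each message is shaped like this:

```
<type>[optional scope]: <description>

[optional body]

[optional footer(s), e.g. "BREAKING CHANGE: <explanation>"]
```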
@@ -80,11 +80,11 @@ BREAKING CHANGE: If you're using the `first` or `last` events in the paginator,
The `BREAKING CHANGE:` at the start of your commit body tells your tooling that this should indicate a package bump of a MAJOR version, and will highlight this change at the top of your changelog as such.
## Commit Scope {#lerna-usage}
## [Commit Scope](#lerna-usage)
An immediate question that might be asked is, "why would I put the scope of changes? How could this realistically help me?" One use-case where adding a commit scope is hugely advantageous is when using a monorepo for multiple packages in a single repo. When using [Lerna](https://github.com/lerna/lerna) to help manage a monorepo, there are even addons that enable [restricting your _scope_ to match one of the project's packages names](https://github.com/conventional-changelog/commitlint/tree/master/@commitlint/config-lerna-scopes). By doing so, you're able to generate individual `CHANGELOG.md` files for each package, enabling your tooling to scope with your project's scale.
# Step 1: Commit Message Enforcement {#commit-lint}
# [Step 1: Commit Message Enforcement](#commit-lint)
Any suitable set of tooling should have guide-rails that help you follow the rules you set for yourself (and your team). Like a linter helps keep your codebase syntactically consistent, _Conventional Commit setups often have a linter setup of their own_. This linter isn't concerned about your code syntax, but rather your commit message syntax.
@@ -92,7 +92,7 @@ Just as you have many options regarding what linting ruleset you'd like to enfor
Another similarity to their code syntax contemporaries is that your commit linter has [a myriad of configuration options available](https://commitlint.js.org/#/reference-rules?id=rules). These options allow you to overwrite the existing configuration you're utilizing or even create your configuration from scratch.
## Setup {#install-commit-lint}
## [Setup](#install-commit-lint)
While you can go as in-depth as creating your own configuration, let's assume that we want to stick with the out-of-box settings and that you already have a `package.json` configured. First things first, let's install the dependencies we need:
@@ -114,7 +114,7 @@ npx commitlint --from=HEAD~1
It should either validate or fail, depending on whether the last commit message followed the ruleset.
### Husky Setup {#husky}
### [Husky Setup](#husky)
While you _could_ set up a CI system with something like the `commitlint` command from above, it wouldn't be very effective at making sure you and your team remain vigilant with your commit schema. You're _able to enforce your commit messages directly from your development machine_ at the time of commit. To do so, we'll hook up git hooks to validate our commit messages before they finalize (and prevent a commit when they don't pass the linting rules). While there _are_ ways to do this manually, the easiest (and most shareable) method to do so using `package.json` is by installing a dependency called `husky`.
@@ -134,7 +134,7 @@ By installing `husky`, we can now add the following to our `package.json` to tel
}
```
## Test The Hook {#testing-husky}
## [Test The Hook](#testing-husky)
Now that we have `husky` configured properly, we're able to ensure that the linting is working as expected. Now, if you run `git commit` it will give the following behavior pattern:
@@ -151,7 +151,7 @@ No staged files match any of provided globs.
husky > commit-msg hook failed (add --no-verify to bypass)
```
# Step 2: Manage Your Releases {#standard-version}
# [Step 2: Manage Your Releases](#standard-version)
While contiguous commit consistency is cool (what a mouthful), our end goal is to have easier management of our releases. To this end, we have [`standard-version`](https://github.com/conventional-changelog/standard-version). This tool allows you to generate git tags, changelogs, and bump your `package.json` files. To start, we'll install the package as a developer dependency:
@@ -177,13 +177,13 @@ npm run release -- --first-release
To generate your initial `CHANGELOG.md` file. This will also create a tag of the current state so that every subsequent release can change your version numbers.
## Usage {#use-standard-version}
## [Usage](#use-standard-version)
Having an initial starting point for releases is cool but ultimately useless without understanding how to cut a new release. Once you've made a series of commits, you'll want to re-run `npm run release`. This will do all of the standard release actions. [As mentioned before, the `type` of commits will dictate what number (patch, minor, major) is bumped](#conventional-commits). As all of your changes will make it into your `CHANGELOG.md`, you may want to consider squashing PRs before merging them, so that your changelog is clean and reflective of your public changes (not just the implementation detail).
One thing to note is that you'll want to run `npm run release` _**before**_ running your build or release. This is because it bumps your package version; if you run it afterward, the new version number won't make it into your deployed updates.
## Changelog Customization {#customize-changelog}
## [Changelog Customization](#customize-changelog)
From here, your `CHANGELOG.md` file should look like the following:
@@ -216,7 +216,7 @@ Let's say we introduce a new version that has a set of features and bug fixes:
You might think "Well, this file is auto-generated. I shouldn't modify it, lest it stop working!" Luckily for us, this is not the case! So long as we leave the headers as-is, we're able to customize the `CHANGELOG.md` file with further details. _We can even include images_ using the standard markdown `![]()` syntax! Using this knowledge, we can create extremely robust and explanative changelogs for our consumers.
## Bump Version Files {#bump-package-json}
## [Bump Version Files](#bump-package-json)
While working in a monorepo, I often find myself needing to change the version number in more than a single file at a time. I've also found myself in need of multi-file version bumping when using a different `package.json` for release than the one I use for development.
@@ -247,7 +247,7 @@ You'll want to create a `.versionrc` file and put the following in it:
Multiple different kinds of files can be updated, and you can even [write your own `updater` method to update any file you'd so like](https://github.com/conventional-changelog/standard-version#custom-updaters).
# Conclusion {#conclusion}
# [Conclusion](#conclusion)
Keep in mind, simply because you have a new tool to manage releases doesn't mean that you have a free pass on ignoring your branching strategy. If you're developing a developer tool that has breaking changes every week, you're certainly going to alienate anyone that's not a staunch consumer. You'll want to keep following best practices for your use-cases to ensure that this tool isn't squandered by other project issues.

View File

@@ -36,7 +36,7 @@ In this guide we'll cover:
> [Download our Regex Cheat Sheet](https://coderpad.io/regular-expression-cheat-sheet/)
# What does a regex look like? {#what-does-a-regex-look-like}
# [What does a regex look like?](#what-does-a-regex-look-like)
In its simplest form, a regex in use might look something like this:
@@ -70,9 +70,9 @@ In fact, most regexes can be written in multiple ways, just like other forms of
>
> In this article, we'll focus on the ECMAScript variant of Regex, which is used in JavaScript and shares a lot of commonalities with other languages' implementations of regex as well.
# How to read (and write) regexes {#how-to-read-write-regex}
# [How to read (and write) regexes](#how-to-read-write-regex)
## Quantifiers {#quantifiers}
## [Quantifiers](#quantifiers)
Regex quantifiers specify how many times a given character should be matched.
@@ -159,7 +159,7 @@ H.*?llo
![We're using a regex /H.*?llo/ to look for the words "Hillo", "Hello", and partially match the "Hello" in "Helloollo"](./h_star_question_llo.png)
## Pattern collections {#pattern-collections}
## [Pattern collections](#pattern-collections)
Pattern collections allow you to search for a collection of characters to match against. For example, using the following regex:
@@ -192,7 +192,7 @@ You can even combine these together:
- `[0-9A-Z]` - Match any character that's either a number or a capital letter from "A" to "Z"
- `[^a-z]` - Match any non-lowercase letter
## General tokens {#general-tokens}
## [General tokens](#general-tokens)
Not every character is so easily identifiable. While keys like "a" to "z" make sense to match using regex, what about the newline character?
@@ -319,7 +319,7 @@ Or want to find every instance of this blog post's usage of the "\n" string. Wel
\\n
```
# How to use a regex {#how-to-use-a-regex}
# [How to use a regex](#how-to-use-a-regex)
Regular expressions aren't simply useful for *finding* strings, however. You're also able to use them in other methods to help modify or otherwise work with strings.
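For instance, here's a small sketch of my own (not the article's) of a regex used with `test` and `replace` in JavaScript:

```
const greeting = "Hello world";

// test() returns a boolean for whether the pattern matches anywhere in the string
console.log(/world/.test(greeting)); // true

// replace() swaps the matched text for something else
console.log(greeting.replace(/world/, "there")); // "Hello there"
```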
@@ -388,7 +388,7 @@ Here, we should expect to see both "Hello" and "Hi" matched, but we don't.
This is because we need to utilize a Regex "flag" to match more than once.
# Flags {#flags}
# [Flags](#flags)
A regex flag is a modifier to an existing regex. These flags are always appended after the last forward slash in a regex definition.
@@ -455,7 +455,7 @@ To solve this problem, we can simply assign `lastIndex` to 0 before running each
![If we run `regex.lastIndex = 0` in between each `regex.exec`, then every single `exec` runs as intended](./consistent_regex_fix.png)
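As a rough sketch of that fix in code:

```
const regex = /Hi/g;

regex.lastIndex = 0;
console.log(regex.exec("Hi there")); // matches "Hi" at index 0

regex.lastIndex = 0;
console.log(regex.exec("Hi there")); // matches "Hi" again rather than returning null
```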
# Groups {#groups}
# [Groups](#groups)
When searching with a regex, it can be helpful to capture more than one matched item at a time. This is where "groups" come into play: groups allow you to pull more than a single item out of a match.
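As a quick sketch of what that looks like (my example, not the article's):

```
// Each set of parentheses is a group that can be read out of the match
const dateRegex = /(\d{4})-(\d{2})-(\d{2})/;
const match = "2022-07-09".match(dateRegex);

console.log(match[1]); // "2022"
console.log(match[2]); // "07"
console.log(match[3]); // "09"
```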

View File

@@ -14,7 +14,7 @@ Mientras trabajas en varios proyectos, puedes encontrarte con una sintaxis de as
_Los tipos genéricos son una forma de manejar tipos abstractos en tu función._ **Actúan como una variable para los tipos en el sentido de que contienen información sobre la forma en que funcionarán tus tipos.** Son muy poderosos por derecho propio, y su uso no se limita a TypeScript. Verás muchos de estos conceptos aplicados bajo terminologías muy similares en varios lenguajes. Sin embargo, basta con esto. ¡Vamos a sumergirnos en cómo usarlos! 🏊
# El problema {#generico-usecase-setup}
# [El problema](#generico-usecase-setup)
Los tipos genéricos — en el nivel más alto — _permiten aceptar datos arbitrarios en lugar de una tipificación estricta, lo que hace posible ampliar el alcance de un tipo_.
@@ -32,7 +32,7 @@ returnProp(4); // ❌ Esto falla porque `4` no es un string
```
En este caso, queremos asegurarnos de que todos los tipos de entrada posibles estén disponibles para el tipo prop. Echemos un vistazo a algunas soluciones potenciales, con sus diversos pros y contras, y veamos si podemos encontrar una solución que se ajuste a los requisitos para proporcionar tipado a una función como ésta.
## Solución potencial 1: Unions {#generic-usecase-setup-union-solution}
## [Solución potencial 1: Unions](#generic-usecase-setup-union-solution)
Una posible solución a este problema podrían ser las uniones de TypeScript. _Las uniones nos permiten definir una condición `or` para nuestros tipos_. Como queremos permitir varios tipos para las entradas y salidas, ¡quizás eso pueda ayudarnos!
@@ -57,7 +57,7 @@ const newNumber = shouldBeNumber + 4;
La razón por la que la operación `shouldBeNumber + 4` produce este error es porque le has dicho a TypeScript que `shouldBeNumber` es o bien un número **o** una cadena haciendo que la salida esté explícitamente tipada como una unión. Como resultado, TypeScript es incapaz de hacer la suma entre un número y una cadena (que es uno de los valores potenciales) y por lo tanto arroja un error.
### Soluciones potenciales Descargo de responsabilidad {#silly-examples-disclaimer}
### [Soluciones potenciales Descargo de responsabilidad](#silly-examples-disclaimer)
> Nota del autor:
>
@@ -65,7 +65,7 @@ La razón por la que la operación `shouldBeNumber + 4` produce este error es po
>
> Dicho esto, estamos tratando de construir sobre los conceptos, por lo que estamos tratando de proporcionar algunos ejemplos de donde esto podría ser utilizado y lo que hace. También hay instancias, como los archivos de definición de tipos, donde esta inferencia podría no estar disponible para un autor de tipos, así como otras limitaciones con este método que veremos más adelante.
## Solución potencial 2: Sobrecarga de funciones {#generic-usecase-setup-overloading-solution}
## [Solución potencial 2: Sobrecarga de funciones](#generic-usecase-setup-overloading-solution)
Para evitar los problemas de devolver explícitamente una unión, usted _PODRÍA_ utilizar la sobrecarga de funciones para proporcionar los tipos de retorno adecuados:
@@ -92,7 +92,7 @@ returnProp({}) // El argumento de tipo '{}' no es asignable a un parámetro de t
Esto puede parecer obvio a partir de los tipos, pero _lo ideal es que queramos que `returnProp` acepte CUALQUIER tipo porque **no estamos usando ninguna operación que requiera conocer el tipo**._ (nada de sumas o restas, que requieran un número; nada de concatenación de cadenas que pueda restringir el paso de un objeto).
## Solución potencial 3: Any {#generic-usecase-setup-any-solution}
## [Solución potencial 3: Any](#generic-usecase-setup-any-solution)
Por supuesto, podemos utilizar el tipo `any` para forzar cualquier tipo de entrada y retorno. (¡Dios sabe que he tenido mi parte justa de frustraciones que terminaron con unos cuantos `any`s en mi código base!)
@@ -109,7 +109,7 @@ returnedObject.test(); // esto no retorna un error pero debería 🙁
returnedObject.objProperty; // Esto tambien (correctamente) no arroja un error, pero TS no sabrá que es un número ☹️
```
# La Solución Real {#generics-intro}
# [La Solución Real](#generics-intro)
¿Cuál es la respuesta? ¿Cómo podemos obtener datos de tipo preservado tanto en la entrada como en la salida?
@@ -145,7 +145,7 @@ returnedObject.objProperty;
>
> Recuerde, las variables de tipo son como otras variables en el sentido de que necesita mantenerlas y entender lo que están haciendo en su código.
# Está bien, ¿pero por qué? {#logger-example}
# [Está bien, ¿pero por qué?](#logger-example)
¿Por qué podríamos querer hacer esto? [Devolver un elemento como sí mismo en una función de identidad](#generic-usecase-setup) está bueno, pero no es muy útil en su estado actual. Dicho esto, hay **muchos** usos para los genéricos en las bases de código del mundo real.
@@ -202,7 +202,7 @@ Un ejemplo de esto sería una sintaxis como esta:
logTheValue<number>(3);
```
# Non-Function Generics {#non-function-generics}
# [Non-Function Generics](#non-function-generics)
Como has visto antes con la interfaz `LogTheValueReturnType` - las funciones no son las únicas con genéricos. Además de usarlos dentro de las funciones e interfaces, también puedes usarlos en las clases.
@@ -246,7 +246,7 @@ interface ImageConvertMethods<DataType> {
type ImageTypeWithConvertMethods<DataType> = ImageType<DataType> & ImageConvertMethods<DataType>
```
# De acuerdo, ¿pero por qué? {#polymorphic-functions}
# [De acuerdo, ¿pero por qué?](#polymorphic-functions)
Vaya, parece que no te fías de mi palabra cuando te digo que los genéricos de tipo son útiles. Está bien, supongo; después de todo, la duda mientras se aprende puede llevar a grandes preguntas! 😉 .
@@ -277,7 +277,7 @@ function toPNG(data: DataType): DataType {
Aunque esta función acepta varios tipos de datos, los maneja de forma diferente bajo el capó. Las funciones que tienen este tipo de comportamiento de "aceptar muchos, manejar cada uno ligeramente diferente" se llaman **Funciones Polimórficas**. Son particularmente útiles en las bibliotecas de utilidades.
# Restringiendo los tipos {#extends-keyword}
# [Restringiendo los tipos](#extends-keyword)
Por desgracia, hay un problema con el código anterior: no sabemos qué tipo es `DataType`. ¿Por qué es importante? Bueno, si no es una cadena, un Buffer, o un tipo Array, ¡lanzará un error! Ese no es ciertamente un comportamiento para encontrarse en tiempo de ejecución.
@@ -291,7 +291,7 @@ function toPNG<DataType extends (string | Array<number> | Buffer)>(data: DataTyp
En este ejemplo _estamos usando la palabra clave `extends` para imponer algún nivel de restricción de tipo en la definición, por lo demás amplia, de un tipo genérico_. Estamos usando una unión de TypeScript para decir que puede ser cualquiera de esos tipos, y todavía somos capaces de establecer el valor a la variable de tipo `DataType`.
# Expande tus horizontes {#imperative-casting-extends}
# [Expande tus horizontes](#imperative-casting-extends)
También podemos mantener esa restricción amplia de tipos dentro de sí misma. Digamos que tenemos una función que sólo se preocupa si un objeto tiene una propiedad específica:

View File

@@ -14,7 +14,7 @@ While working in various projects, you may come across a weird looking syntax in
_Type generics are a way to handle abstract types in your function._ **They act as a variable for types in that they contain information about the way your types will function.** They're very powerful in their own right, and their usage is not just restricted to TypeScript. You'll see many of these concepts applied under very similar terminologies in various languages. Enough on that, however. Let's dive into how to use them! 🏊‍
# The Problem {#generic-usecase-setup}
# [The Problem](#generic-usecase-setup)
Type generics — on the highest level — _allow you to accept arbitrary data instead of strict typing, making it possible to broaden a type's scope_.
@@ -33,7 +33,7 @@ returnProp(4); // ❌ This would fail as `4` is not a string
In this case, we want to make sure that every possible input type is available for the prop type. Let's take a look at a few potential solutions, with their various pros and cons, and see if we can find a solution that fits the requirements for providing typing for a function like this.
## Potential Solution 1: Unions {#generic-usecase-setup-union-solution}
## [Potential Solution 1: Unions](#generic-usecase-setup-union-solution)
One potential solution to this problem might be TypeScript unions. _Unions allow us to define an `or` condition of sorts for our types_. As we want to allow various types for inputs and outputs, perhaps that can help us here!
@@ -58,7 +58,7 @@ const newNumber = shouldBeNumber + 4;
The reason that the operation `shouldBeNumber + 4` yields this error is because you've told TypeScript that `shouldBeNumber` is either a number **or** a string by making the output explicitly typed as a union. As a result, TypeScript is unable to do addition between a number and a string (which is one of the potential values) and therefore throws an error.
### Potential Solutions Disclaimer {#silly-examples-disclaimer}
### [Potential Solutions Disclaimer](#silly-examples-disclaimer)
> Author's note:
>
@@ -66,7 +66,7 @@ The reason that the operation `shouldBeNumber + 4` yields this error is because
>
> That said, we're trying to build on concepts, so we're trying to provide some examples of where this might be used and what it does. There are also instances, such as type definition files, where this inference might not be available to an author of typings, as well as other limitations with this method that we'll see later.
## Potential Solution 2: Function Overloading {#generic-usecase-setup-overloading-solution}
## [Potential Solution 2: Function Overloading](#generic-usecase-setup-overloading-solution)
In order to get around the issues with explicitly returning a union, you _COULD_ utilize function overloading to provide the proper return typings:
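A minimal sketch of what those overloads might look like (the exact signatures in the article may differ):

```
function returnProp(input: string): string;
function returnProp(input: number): number;
function returnProp(input: any): any {
  return input;
}

const str = returnProp("hello"); // typed as string
const num = returnProp(4);       // typed as number
```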
@@ -91,7 +91,7 @@ returnProp({}) // Argument of type '{}' is not assignable to parameter of type '
This may seem obvious from the typings, but _we ideally want `returnProp` to accept ANY type because **we aren't using any operations that require knowing the type**._ (no addition or subtraction, requiring a number; no string concatenation that might restrict an object from being passed).
## Potential Solution 3: Any {#generic-usecase-setup-any-solution}
## [Potential Solution 3: Any](#generic-usecase-setup-any-solution)
Of course, we could use the `any` type to force any input and return type. (Goodness knows I've had my fair share of typing frustrations that ended with a few `any`s in my codebase!)
@@ -108,7 +108,7 @@ returnedObject.test(); // This will not return an error but should 🙁
returnedObject.objProperty; // This will also (correctly) not throw an error, but TS won't know it's a number ☹️
```
# The Real Solution {#generics-intro}
# [The Real Solution](#generics-intro)
So what's the answer? How can we get preserved type data on both the input and the output??
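A minimal sketch of the generic version the article is building toward (the type variable name here is my own):

```
function returnProp<InputType>(input: InputType): InputType {
  return input;
}

const returnedString = returnProp("hello"); // inferred as string
const returnedNumber = returnProp(4);       // inferred as number
```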
@@ -144,7 +144,7 @@ returnedObject.objProperty;
>
> Remember, type variables are like other variables in that you need to maintain them and understand what they're doing in your code.
# Okay, but Why? {#logger-example}
# [Okay, but Why?](#logger-example)
Why might we want to do this? [Returning an item as itself in an identity function](#generic-usecase-setup) is cool, but it's not very useful in its current state. That said, there **are** many, many uses for generics in real-world codebases.
@@ -217,7 +217,7 @@ An example of this would be a syntax like this:
logTheValue<number>(3);
```
# Non-Function Generics {#non-function-generics}
# [Non-Function Generics](#non-function-generics)
As you saw before with the `LogTheValueReturnType` interface — functions aren't the only ones with generics. In addition to using them within functions and interfaces, you can also use them in classes.
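For example, here's a tiny generic class sketch of my own (not from the article):

```
class Box<ValueType> {
  constructor(public value: ValueType) {}
}

const numberBox = new Box(123);     // Box<number>
const stringBox = new Box("hello"); // Box<string>
```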
@@ -261,7 +261,7 @@ interface ImageConvertMethods<DataType> {
type ImageTypeWithConvertMethods<DataType> = ImageType<DataType> & ImageConvertMethods<DataType>
```
# Okay, but why-_er_? {#polymorphic-functions}
# [Okay, but why-_er_?](#polymorphic-functions)
My my, you don't seem to take my word for it when I tell you that type generics are useful. That's alright, I suppose; After all, doubt while learning can lead to some great questions! 😉
@@ -292,7 +292,7 @@ function toPNG(data: DataType): DataType {
Even though this function accepts various data types, it handles them differently under the hood! Functions that have this type of "accept many, handle each slightly differently" behavior are called **Polymorphic Functions**. They're particularly useful in utility libraries.
# Restricting The Types {#extends-keyword}
# [Restricting The Types](#extends-keyword)
Unfortunately, there's a problem with the above code: we don't know what type `DataType` is. Why does that matter? Well, if it's not a string, a Buffer, or an Array-like, it will throw an error! That's certainly not behavior to run into at runtime.
@@ -306,7 +306,7 @@ function toPNG<DataType extends (string | Array<number> | Buffer)>(data: DataTyp
In this example _we're using the `extends` keyword to enforce some level of type restriction in the otherwise broad definition of a type generic_. We're using a TypeScript union to say that it can be any one of those types, and we're still able to set the value to the type variable `DataType`.
## Broaden Your Horizons {#imperative-casting-extends}
## [Broaden Your Horizons](#imperative-casting-extends)
We're also able to keep that type restriction broad within itself. Let's say we had a function that only cared if an object had a specific property on it:
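A hedged guess at the shape of such a function; the `length` property here is purely illustrative:

```
function logLength<ItemType extends { length: number }>(item: ItemType): ItemType {
  console.log(item.length);
  return item;
}

logLength("hello");   // strings have a length
logLength([1, 2, 3]); // so do arrays
```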

View File

@@ -24,7 +24,7 @@ What many don't know is that Windows has gained many of these options over the y
Moreover, much of what we'll be taking a look at today is either free, open-source, or both! There will be a few mentions of paid software as alternatives to the free options, but I've personally used every piece of commercial software in this article. None of the paid software mentioned here has been included as part of a sponsorship or financial deal in any way; I just like these programs and use them myself.
# Package Management {#package-management}
# [Package Management](#package-management)
When it comes to CLI package management on Windows, you have two main choices:
@@ -35,7 +35,7 @@ Both of them are incredibly polished and ready-to-use today. While `winget` is M
Let's look through both.
## Winget {#winget}
## [Winget](#winget)
One of the strongest advantages of `winget` is that it's built right into all builds of Windows 11 and most newer builds of Windows 10.
@@ -63,7 +63,7 @@ Finally, you can upgrade all of your `winget` installed packages simply by runni
`winget upgrade --all`
## Chocolatey {#chocolatey}
## [Chocolatey](#chocolatey)
[Chocolatey only takes a single PowerShell command to install](https://chocolatey.org/install), not unlike [Homebrew for macOS](https://brew.sh/). The comparisons with Homebrew don't stop there either. Much like its *nix-y counterparts, Chocolatey is an unofficial repository of software that includes verification checks for a select number of popular packages.
@@ -75,7 +75,7 @@ You can also use `choco list --local-only` to see a list of all locally installe
Finally, `choco upgrade all` will upgrade all locally installed packages.
### Manage Packages via GUI {#chocolatey-gui}
### [Manage Packages via GUI](#chocolatey-gui)
Readers, I won't lie to you. I'm not the kind of person to use a CLI for everything. I absolutely see their worth, but remembering various commands is simply not my strong suit, even if I understand the core concepts entirely. For people like me, you might be glad to hear that _Chocolatey has a GUI for installing, uninstalling, updating, and searching packages_. It's as simple as (Chocolate) pie! More seriously, installing the GUI is as simple as:
```
@@ -88,7 +88,7 @@ You can see that it gives a list of installed packages with a simple at-glance v
![A search result of the Chocolatey GUI](./choco_gui_search.png)
## Suggested Packages {#suggested-packages}
## [Suggested Packages](#suggested-packages)
While Chocolatey has a myriad of useful packages for developers, there are some that I have installed on my local machine that I'd like to highlight in particular.
@@ -104,7 +104,7 @@ Additionally, I know a lot of developers would like to have access to common GNU
choco install git.install --params "/GitAndUnixToolsOnPath"
```
### CLI Utilities {#cli-packages}
### [CLI Utilities](#cli-packages)
| Name | Choco Package | Winget Package | Explanation |
| ------------------------------------------------- | ------------- | -------------- | ------------------------------------------------------------ |
@@ -126,7 +126,7 @@ Or, the ones supported by `winget`:
winget install --id=GitHub.cli -e && winget install --id=Yarn.Yarn -e
```
### IDEs {#ides}
### [IDEs](#ides)
| Name | Choco Package | Winget Package | Explanation |
| ----------------------------------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ---------------------------------------------------------- |
@@ -147,7 +147,7 @@ Or, with `winget`:
winget install --id=Microsoft.VisualStudioCode -e && winget install --id=SublimeHQ.SublimeText.4 -e && winget install --id=Microsoft.VisualStudio.2019.Community -e && winget install --id=JetBrains.Toolbox -e
```
### Others {#utilities}
### [Others](#utilities)
| Name | Choco Package | Winget Package | Explanation |
| ----------------------------------------------------------- | ------------------------------------------ | ------------------------------------------------- | ------------------------------------------------------------ |
@@ -177,11 +177,11 @@ Or, the ones supported by `winget`:
winget install --id=Microsoft.PowerToys -e && winget install --id=Oracle.VirtualBox -e && winget install --id=Typora.Typora -e && winget install --id=Postman.Postman -e && winget install --id=Mozilla.Firefox -e && winget install --id=Cockos.LICEcap -e && winget install --id=NickeManarin.ScreenToGif -e && winget install --id=7zip.7zip -e && winget install --id=Oracle.JDK.17 -e && winget install --id=Oracle.JavaRuntimeEnvironment -e
```
### Missing from the List {#awesome-windows}
### [Missing from the List](#awesome-windows)
Didn't see your favorite utilities or tools? Unfortunately, I can only highlight a few options. That said, there's no shortage of utilities, tools, and customization options for Windows. A great collection of utilities to look through would be [the Awesome Windows list](https://github.com/Awesome-Windows/Awesome). At the time of writing, it includes over 300 programs with a short description and a link to learn more.
## Microsoft Store {#microsoft-store}
## [Microsoft Store](#microsoft-store)
I'm sure some avid Microsoft fans will have pointed out by now that I forgot something. You know, the official solution by Microsoft? Naturally, I haven't forgotten about the Microsoft Store.
@@ -189,17 +189,17 @@ While some of you may be surprised to hear this, the Microsoft Store has put tog
![A preview of the "Downloads and updates" tab in the Microsoft Store](./windows_store_update.png)
# Terminal Usage {#terminal-usage}
# [Terminal Usage](#terminal-usage)
The terminal is essential for most developers. It's a relatively universal utility regardless of what form of programming you're into, so it's important to make sure that your terminal is fully featured, both in functionality and in how far you can customize it to your taste.
## Terminal Emulators {#terminals}
## [Terminal Emulators](#terminals)
One of the most important elements to one's experience with the terminal is, well, the terminal itself! While Windows has not historically had many options in this regard, things have turned around in recent years. In addition to the built-in CMD and PowerShell applications, we now have many newcomers, including one from Microsoft itself.
First, let's start with the third-party offerings. We have many options, but the two I want to highlight are `Cmder` and `Terminus`.
### Cmder {#cmder}
### [Cmder](#cmder)
[Cmder is an open-source terminal offering](https://github.com/cmderdev/cmder) built on top of a long-standing base called [ConEmu](https://conemu.github.io/). Not only is it a terminal window for you to interface with, but it provides a massive set of configurations. It not only provides configurations for the CMD shell backend but for PowerShell as well, meaning you can freely switch between them (and WSL) to suit your current needs. These configurations can even be used without having to utilize the terminal window. I think the config makes the terminal much more useful and pretty. For example, this is the default view of Cmder:
@@ -216,13 +216,13 @@ The terminal itself contains all kinds of functionality:
Those are just the features I can think of off the top of my head! What's nice about Cmder is that even if you don't use the terminal itself, you can use the configurations for CMD and PowerShell with other shells if you like. All of the screenshots for the other terminals will be shown using the Cmder configs.
### Terminus {#terminus}
### [Terminus](#terminus)
Terminus is another excellent option for those looking for alternative terminal shells. Because it's rendered using web tech, its UI is much more customizable. It also has an easy-to-install plugin system to add further functionality to the shell. What you're seeing is the initial out-of-the-box experience [with the Cmder configuration applied](https://github.com/cmderdev/cmder/wiki/Seamless-Terminus-Integration).
![A preview of the Terminus shell with the Cmder config](./terminus.png)
### Windows Terminal {#windows-terminal}
### [Windows Terminal](#windows-terminal)
Last, but certainly not least, we have the newly-introduced Windows Terminal. This is the new terminal that's built by Microsoft itself. [The project is open-source](https://github.com/microsoft/terminal) and is available now [via the Microsoft Store](https://aka.ms/windowsterminal). In fact, in Windows 11, this terminal is now built-in and acts as the default terminal emulator.
@@ -230,7 +230,7 @@ Last, but certainly not least, we have the newly-introduced Windows Terminal. Th
This terminal shell has been the most stable in my experience. It supports tabs, a highly customizable UI, and running multiple terminal applications in different tabs.
#### Cmder Integration {#windows-terminal-cmder}
#### [Cmder Integration](#windows-terminal-cmder)
While Cmder integration with Windows Terminal is relatively trivial, it's not very well documented. Let's walk through how to get it up and running.
@@ -280,7 +280,7 @@ Finally, if you want to set one of these profiles as default (I wanted to make m
"defaultProfile": "{61c54bbd-c2c6-5271-96e7-009a87ff44bf}",
```
#### Color Configuration {#windows-terminal-colors}
#### [Color Configuration](#windows-terminal-colors)
Windows Terminal also supports text and background color customization, among other things. The color settings I used for the screenshot above are from the Dracula color theme. You can add that color theme by adding the following to the `schemes` array in the `profiles.json` file:
@@ -329,7 +329,7 @@ Resulting in the following for my PowerShell config:
}
```
### Comparisons {#compare-different-terminals}
### [Comparisons](#compare-different-terminals)
While each of the three terminals offers something different, they each have their own set of pros and cons. Here's how I see it:
@@ -342,7 +342,7 @@ I only outlined three terminal emulators here. They are my favorites; I've used
- [Fluent Terminal](https://github.com/felixse/FluentTerminal)
- [Hyper Terminal](https://hyper.is/)
## Terminal Styling {#terminal-styling}
## [Terminal Styling](#terminal-styling)
Anyone that's used `ohmyzsh` on Mac or Linux before can tell you that customizing your terminal shell doesn't just stop at picking an emulator.
@@ -354,7 +354,7 @@ In fact, regardless of your emulator, you have a wide swath of customization opt
[While some terminals have a quick single (or zero) config change to add some fancy styling](#windows-terminal-cmder), you can have full control over your terminal styling.
### OhMyPosh {#oh-my-posh}
### [OhMyPosh](#oh-my-posh)
One option to customize your windows shell styling is [OhMyPosh](https://ohmyposh.dev/). Named after the similarly powerful [`OhMyZSH`](https://ohmyz.sh/), it allows you to have themes you can utilize for both PowerShell and CMD alike.
@@ -364,7 +364,7 @@ For example, this is [my terminal theme](https://github.com/crutchcorn/dotfiles/
> That emoji at the start? That's randomized on every shell start with a preselected list of emoji. Pretty 🔥 if you ask me.
### Powerline Fonts {#powerline-fonts}
### [Powerline Fonts](#powerline-fonts)
Once you've set up OhMyPosh in CMD/PowerShell or OhMyZSH in WSL, you may notice that your terminal display looks weird with some themes:
@@ -394,7 +394,7 @@ Then, when you open the terminal, you should see the correct terminal display.
## Make Configuration Changes {#terminal-system-config}
## [Make Configuration Changes](#terminal-system-config)
While terminals are important, another factor to consider is the configuration of those terminal shells. It's important to keep system-level configuration settings in mind as well, such as when you need to [make or modify environmental variables](#env-variables) or [make changes to the system path](#env-path). Luckily for us, they both live in the same place. As such, let's showcase how to reach the dialog that contains both of these settings before explaining each one in depth.
@@ -410,7 +410,7 @@ After this, a dialog should pop up. This dialog should contain as one of the low
![The "environmental variables" dialog](./environmental_variables_dialog.png)
### Environmental Variables {#env-variables}
### [Environmental Variables](#env-variables)
When working with the CLI, it's often important to have environmental variables to customize the functionality of a utility or program. Because Windows has the concept of users, there are two kinds of environment variables that can be set:
@@ -427,7 +427,7 @@ Simply add the name of the variable and the value of the environmental variable
You're able to do the same with editing a variable. Simply find the variable, highlight it, then select "Edit" and follow the same process.
### Adding Items to Path {#env-path}
### [Adding Items to Path](#env-path)
Have you ever run into one of these errors?
@@ -452,9 +452,9 @@ Just as before, you're able to delete and edit a value by highlighting and press
> In order to get SCC running, you may have to close and then re-open an already opened terminal window. Otherwise, running `refreshenv` often updates the path so that you can use the new commands.
## Git Configurations {#git-config}
## [Git Configurations](#git-config)
### Editor {#git-editor}
### [Editor](#git-editor)
Git, by default, uses `vim` to edit files. While I understand and respect the power of `vim`, I have never got the hang of `:!qnoWaitThatsNotRight!qq!helpMeLetMeOut`. As such, I tend to change my configuration to use `micro`, the CLI editor mentioned in [the CLI packages section](#cli-packages). In order to do so, I can just run:
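Presumably something along these lines (a sketch, assuming `micro` is on your PATH):

```
git config --global core.editor "micro"
```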
@@ -468,7 +468,7 @@ However, we can go a step further. Let's say that we want the full power of VSCo
git config --global core.editor "code --wait"
```
### Difftool {#git-difftool}
### [Difftool](#git-difftool)
Not only are you able to set VSCode as your editor for rebase messages, but [you can use it as your difftool as well](https://code.visualstudio.com/docs/editor/versioncontrol#_vs-code-as-git-diff-tool)!
@@ -483,7 +483,7 @@ Simply edit your global git config (typically found under `%UserProfile%/.gitcon
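VS Code's documentation suggests a `.gitconfig` entry roughly like the following (the tool name here is arbitrary):

```
[diff]
    tool = default-difftool
[difftool "default-difftool"]
    cmd = code --wait --diff $LOCAL $REMOTE
```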
And it should take care of the rest for you.
### Line Endings {#git-line-endings}
### [Line Endings](#git-line-endings)
While most high-level language code is interoperable between different OSes, one of the primary differences between high-level codebases on Windows vs. macOS or Linux is the line endings. As you might know, Windows uses the `\r\n` line ending, whereas Linux and macOS end lines with `\n`.
Luckily for us, Git can automatically convert the Windows line-endings before committing them to the repository. To do so, simply run the following command:
@@ -492,7 +492,7 @@ Luckily for us, Git can automatically convert the Windows line-endings before co
git config --global core.autocrlf true
```
## WSL {#wsl}
## [WSL](#wsl)
Alright, alright, I'm sure you've been expecting to see this here. I can't beat around the bush any longer. Windows Subsystem for Linux (WSL) enables users to run commands on a Linux instance without having to dual-boot or run a virtual machine themselves.
@@ -521,7 +521,7 @@ There are even tweaks that are done with Windows to make it easier to use. If yo
The cross-WSL compatibility isn't uni-directional either. You can [open files from your Linux filesystem in Windows](#access-wsl-files), [call Windows executables from WSL](https://docs.microsoft.com/en-us/windows/wsl/interop#run-windows-tools-from-wsl), and much more!
### Shell Configuration {#linux-shell}
### [Shell Configuration](#linux-shell)
If you prefer an alternative shell, such as ZSH or Fish, you can install those in your distro as well. For example, I have an [`oh-my-zsh`](https://ohmyz.sh/) instance that runs anytime I start-up `wsl`.
@@ -544,7 +544,7 @@ You can even able to tell Windows Terminal to use WSL as default! If you open Wi
All you need to do is change the `defaultProfile` to match the `guid` of the WSL profile.
### Accessing Linux Files {#access-wsl-files}
### [Accessing Linux Files](#access-wsl-files)
Since [Windows 10 (1903)](https://devblogs.microsoft.com/commandline/whats-new-for-wsl-in-windows-10-version-1903/), you're able to access your WSL Linux distro files directly from Windows explorer. To do this, simply look to the sidebar panel of your File Explorer.
@@ -552,7 +552,7 @@ Since [Windows 10 (1903)](https://devblogs.microsoft.com/commandline/whats-new-f
Here, you can read and write files to and from your Linux installation in WSL.
### Linux GUI Programs {#wsl-gui}
### [Linux GUI Programs](#wsl-gui)
[In Windows 11, you're now able to run Linux GUI apps with WSL](https://docs.microsoft.com/en-us/windows/wsl/tutorials/gui-apps). Simply install them as you usually would using your distro's package manager and run them from the command line.
@@ -566,17 +566,17 @@ sudo apt install gedit
### USB Pass-thru {#wsl-usb}
### [USB Pass-thru](#wsl-usb)
For some development usage, having USB access from Linux is immensely useful. In particular, when dealing with Linux-only software for flashing microcontrollers or other embedded devices, it's an absolute necessity.
Luckily, as of late, [Microsoft has worked with a third-party project to add support to WSL](https://devblogs.microsoft.com/commandline/connecting-usb-devices-to-wsl/) to directly connect USB devices to Linux. This allows you to do flashing with `dd` and similar tools.
# Keyboard Usage {#keyboard-usage}
# [Keyboard Usage](#keyboard-usage)
When asking many of my Linux-favoring friends why they love Linux so much, I've heard one answer time and time again. They love being able to control their computer front, back, and sideways without having to touch the mouse. Well, dear reader, I assure you that Windows provides the same level of control.
## Built-Ins {#built-in-keyboard-shortcuts}
## [Built-Ins](#built-in-keyboard-shortcuts)
By default, Windows includes a myriad of shortcuts baked right in that allow you to have powerful usage of your system using nothing but your keyboard. Here are just a few that I think are useful to keep in mind:
@@ -598,7 +598,7 @@ By default, Windows includes a myriad of shortcuts baked right in that allow you
| <kbd>Win</kbd> + <kbd>Ctrl</kbd> + <kbd>F4</kbd> | Close current virtual desktop |
## Window Tiling {#window-tiling}
## [Window Tiling](#window-tiling)
"Surely, you can't forget about window tiling!"
@@ -614,11 +614,11 @@ Back at the (Redmond-based) ranch, the [previously mentioned Microsoft made Powe
As you can see, there's an incredible amount of customization available with "FancyZones".
# Customization {#customization}
# [Customization](#customization)
I'm not sure about you, but when I get a new machine, I want it to feel _mine_. This applies just as much to my wallpaper as it does the stickers I plaster my laptops with. The following software enables some new functionality or aesthetic difference that users might enjoy.
## Free {#free-customization-software}
## [Free](#free-customization-software)
| Program Name | What It Is | Windows Compatibility |
| ------------------------------------------------------------ | ------------------------------------------------------------ | --------------------- |
@@ -649,11 +649,11 @@ I'm not sure about you, but when I get a new machine, I want it to feel _mine_.
| [StartAllBack](https://www.startallback.com/) | Windows 11 start menu replacement |Windows 11|$5|
| [StartIsBack](https://www.startisback.com/) | Windows 10 start menu replacement |Windows 10|$5|
# Functionality {#functionality}
# [Functionality](#functionality)
Windows also has some functionality that differs from Linux/macOS in critical ways. Some of the functionality you might be used to simply doesn't have an obvious analog in Windows. Let's take a look at some of the features we do have an alternative for.
## Virtual Desktops {#virtual-desktops}
## [Virtual Desktops](#virtual-desktops)
Longtime users of Linux will be quick to note that they've had virtual desktops for years. While this is a newer feature in the Windows product line, it was actually introduced back in Windows 10!
@@ -675,7 +675,7 @@ Finally, to delete a virtual desktop, you can hover over the preview of the desk
> A feature that's soon-to-release is renaming a virtual desktop! This functionality is [being added in the 2020 stable release of Windows](https://blogs.windows.com/windowsexperience/2019/09/06/announcing-windows-10-insider-preview-build-18975/) launching soon!
### Touchpad Users {#virtual-desktop-touchpad-users}
### [Touchpad Users](#virtual-desktop-touchpad-users)
If you're a laptop user (or have a touchpad for your desktop) that supports Windows gestures, you can configure a three- or four-finger gesture to switch desktops with a simple swipe. Simply go to "Settings > Devices > Touchpad" to see whether this is configurable on your device. If it is, you should see dropdowns that let you pick which gesture you'd like.
@@ -683,7 +683,7 @@ If you're a laptop user (or have a touchpad for your desktop) that supports Wind
## Symbolic Links {#symlinks}
## [Symbolic Links](#symlinks)
Symbolic links are a method of having a shortcut of sorts from one file/folder to another. Think of it as Windows Shortcuts but baked directly into the filesystem level. This may come as a surprise to some developers, but Windows actually has support for symbolic links!
@@ -693,7 +693,7 @@ To use symbolic links from the CLI, you have to first enable developer mode on y
Once done, you're able to run `mklink`, which provides you the ability to make a symbolic link.
### Usage {#using-mklink}
### [Usage](#using-mklink)
By default, it creates a soft link from the first argument to the second.
```
@@ -717,7 +717,7 @@ And `/J` for folders:
mklink /J SymlinkDir SourceFolder
```
### GUI Alternative {#link-shell-extension}
### [GUI Alternative](#link-shell-extension)
While the CLI enables you to make hard and soft symbolic links, it's far from graceful. It would be ideal to have that functionality baked right into the explorer menu options if used frequently. Luckily for us, there's an app for that! [Link Shell Extension](https://schinagl.priv.at/nt/hardlinkshellext/linkshellextension.html) adds the options to the context menu itself. It's even able to be installed using [Chocolatey](#package-management):
@@ -734,7 +734,7 @@ Then you're able to navigate to the folder you're looking for, right click, and
There are a myriad of options to choose from, and they should handle any type of symlink you'd need.
# Additional Configuration {#additional-configuration}
# [Additional Configuration](#additional-configuration)
They may not really count as a customization or making up for a "missing feature," but there are a few more things you can do to configure your Windows 10 installation to make life as a developer just a little bit better.

View File

@@ -14,7 +14,7 @@ Any web application relies on some fundamental technologies: HTML, CSS, and Java
> If you're unfamiliar with HTML, CSS, or JavaScript, you may want to take a look at [our post that introduces these three items](/posts/intro-to-html-css-and-javascript). They'll provide a good foundation for this article for newcomers to the programming scene or folks who may not be familiar with what those languages do.
# The DOM {#the-dom}
# [The DOM](#the-dom)
Just as the source code of JavaScript programs are broken down to abstractions that are more easily understood by the computer, so too is HTML. HTML, initially being derived from [SGML (the basis for XML as well)](https://en.wikipedia.org/wiki/Standard_Generalized_Markup_Language), actually _forms a tree structure in memory_ in order to [describe the relationships, layout, and executable tasks for items in the tree](#how-the-browser-uses-the-dom). This tree structure in memory is _called the Document Object Model_ (or _DOM_ for short).
@@ -71,7 +71,7 @@ There are some rules for the tree that's created from these nodes:
![A chart showing the aforementioned rules of the node relationships](./dom_relationship_rules.svg)
### How It's Used By The Browser {#how-the-browser-uses-the-dom}
### [How It's Used By The Browser](#how-the-browser-uses-the-dom)
This tree tells the browser all of the information it needs to execute tasks in order to display and handle interaction with the user. For example, when the following CSS is applied to this HTML file:
@@ -117,7 +117,7 @@ This tree relationship also enables CSS selectors such as the [general sibling s
# Using The Correct Tags {#accessibility}
# [Using The Correct Tags](#accessibility)
HTML, as a specification, has tons of tags at one's disposal. These tags contain various pieces of metadata internally to provide information to the browser about how they should be rendered in the DOM. This metadata can then be handled by the browser as it sees fit: it may apply default CSS styling, change the default interaction the user has with the element, or even change what behavior that element has upon clicking on it (in the case of a button in a form).
@@ -179,7 +179,7 @@ In fact, the metadata that specific tags have by default can be manually applied
>
> This is all to say, unless you have a **really** good reason for using `role` rather than an appropriate tag, stick with the related tag. Just as any other form of engineering, properly employing HTML requires nuance and logic to be deployed at the hand of the implementing developer.
# Element Metadata {#interacting-with-elements-using-js}
# [Element Metadata](#interacting-with-elements-using-js)
If you've ever written a website that had back-and-forth communication between HTML and JavaScript, you're likely aware that you can access DOM elements from JavaScript: modifying, reading, and creating them to your heart's content.
@@ -189,7 +189,7 @@ Let's look at some of the built-in utilities at our disposal for doing so:
- [The `Element` base class](element-class)
- [The event system](#events)
## Document Global Object {#document-global-object}
## [Document Global Object](#document-global-object)
[As mentioned before, the DOM tree must contain one root node](#the-dom). This node, for any instance of the DOM, is the document entry point. When in the browser, this entry point is exposed to the developer with [the global object `document`](https://developer.mozilla.org/en-US/docs/Web/API/Document). This object has various methods and properties to assist in working with the page in a meaningful way. For example, given a standard HTML5 document:
@@ -248,7 +248,7 @@ console.log(boldedElements[0].innerHTML); // Will output the HTML for that eleme
> It's worth mentioning that the way `querySelector` works is not the same [way that the browser checks a node against the CSS selector data when the browser "visits" that node](#how-the-browser-uses-the-dom). `querySelector` and `querySelectorAll` work from a more top-down perspective where it searches the elements one-by-one against the query. First, it finds the top-most layer of the CSS selector. Then it will move to the next item and so-on-so forth until it returns the expected results.
## Element Base Class {#element-class}
## [Element Base Class](#element-class)
While `innerHTML` has been used to demonstrate that the element that's gathered is in fact the element that was queried, there are many _many_ more properties and methods that can be run on an element reference.
@@ -266,7 +266,7 @@ console.log(mainTextElement.getBoundingClientRect());
>
> This means that all queried elements will have their own `getBoundingClientRect` methods.
### Attributes {#html-attributes}
### [Attributes](#html-attributes)
[As covered earlier, elements are able to have _attributes_ that will apply metadata to an element for the browser to utilize.](#accessibility) However, what I may not have mentioned is that you're able to read and write that metadata, as well as apply new metadata, using JavaScript.
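As a small sketch of my own (the element and attribute names are just illustrative):

```
const button = document.querySelector("button"); // assumes a <button> exists on the page

console.log(button.getAttribute("type"));   // read an attribute (null if it isn't set)
button.setAttribute("aria-label", "Close"); // write (or create) an attribute
```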
@@ -319,7 +319,7 @@ Once this is run, if you inspect the elements tab in your debugger, you should b
... which is significantly more accessible for users that utilize screen readers, [as mentioned previously](#accessibility). You'll notice that despite not having any of the ARIA attributes prior, the `setAttribute` was able to implicitly create them with the newly placed values.
### Properties {#element-properties}
### [Properties](#element-properties)
[As mentioned in a prior section, elements also have properties and methods associated with the instance of the underlying base class](#element-class). These properties are different from attributes as they are not part of the HTML specification. Instead, they're standardized JavaScript `Element` API additions. Some of these properties are able to be exposed to HTML and provide a two-way binding to-and-from the HTML API and the JavaScript `Element` API.
@@ -356,7 +356,7 @@ Will turn the element's background color red, for example.
Somewhat silly, seeing as how the `<div>` is no longer green. 🤭
#### Limitations {#attribute-limitations}
#### [Limitations](#attribute-limitations)
While attributes can be of great use to store data about an element, there's a limitation: Values are always stored as strings. This means that objects, arrays, and other non-string primitives must find a way to go to and from strings when being read and written.
@@ -407,7 +407,7 @@ console.log(element.dataset.userInfo); // "[object Object]"
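One common workaround, sketched here with an assumed `<div>` on the page, is to serialize to a string on the way in and parse on the way out:

```
const element = document.querySelector("div"); // assumes a <div> exists on the page

element.dataset.userInfo = JSON.stringify({ name: "Ada" }); // stored as a string attribute
const userInfo = JSON.parse(element.dataset.userInfo);      // parsed back into an object

console.log(userInfo.name); // "Ada"
```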
## Events {#events}
## [Events](#events)
Just as your browser uses the DOM to handle on-screen content visibility, your browser also utilizes the DOM for knowing how to handle user interactions. The way your browser handles user interaction is by listening for _events_ that occur when the user takes action or when other noteworthy changes occur.
@@ -417,7 +417,7 @@ For example, say you have a form that includes a default `<button>` element. Whe
_Bubbling_, as shown here, is the default behavior of any given event. Its behavior is to move an event up the DOM tree to the nodes above it, moving from child to parent until it hits the root. Parent nodes can respond to these events as expected, stop their upward motion on the tree, and more.
### Event Listening {#event-bubbling}
### [Event Listening](#event-bubbling)
Much like many of the other internal uses of the DOM discussed in this article, you're able to hook into this event system to handle user interaction yourself.
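Here's a rough sketch of my own of listening for a bubbled event on a parent element (assuming a `<form>` with a `<button>` inside it):

```
const form = document.querySelector("form");

form.addEventListener("click", (event) => {
  // A click on the button bubbles up to the form, so this listener fires for it too
  console.log("Something inside the form was clicked:", event.target);
});
```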
@@ -471,7 +471,7 @@ You can see a running example of this here:
<iframe src="https://stackblitz.com/edit/event-bubbling-demo?ctl=1&embed=1&file=index.js&hideExplorer=1&hideNavigation=1" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
### Capturing {#event-capturing}
### [Capturing](#event-capturing)
Bubbling isn't the only way events are able to move. Just as they can move up from the bottom, they can also move from the top down. This method of emitting events is known as _capture mode_.
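You can opt into it with the options argument to `addEventListener`; a minimal sketch:

```
document.body.addEventListener(
  "click",
  () => console.log("body saw the click on the way down"),
  { capture: true } // run during the capture phase instead of the bubble phase
);
```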

View File

@@ -12,7 +12,7 @@
Memory in your standard computer works in a much more abstract and complex way than you might initially expect. I'm writing this from the point of view of someone developing software rather than someone studying each part of the hardware.
# Virtual Memory {#virtual-memory}
# [Virtual Memory](#virtual-memory)
The operating system (OS) is in control of all of the physical memory in your computer. This is a safeguard to make sure that memory is allocated fairly to all processes. The way this is done is through a concept called **Virtual Memory**. As the name suggests, it shrouds the physical hardware memory behind seemingly infinite storage for each process that is created (in reality, you're limited by various elements of your hardware, but the amount of memory you can assign to virtual memory is typically orders of magnitude higher than the amount you can store in physical memory). This is accomplished by using more than just your main memory: if a process needs more storage than physical RAM can provide, the OS can also back that memory with your hard drive or SSD. Even though that may seem like a slower alternative, it allows much more freedom for processes without wrecking your computer.
@@ -20,7 +20,7 @@ Operating systems (OS) are the ones in control of all of the physical memory in
Virtual memory uses what are called **page tables**, which point to a memory map that finally points to either your physical memory or something like an HDD. It works like a cache, where each entry in a page table is only used when absolutely necessary. Whenever a process starts, only the "pages" that the OS thinks the process will need are kept in main memory while the rest stay behind. This reduces the amount of memory that is taken up, as well as speeding up the overall time it takes to complete a process.
## What Virtual Memory looks like in C/C++ {#virtual-memory-cpp}
## [What Virtual Memory looks like in C/C++](#virtual-memory-cpp)
In C/C++, your virtual memory is broken up into roughly four basic "blocks" where different aspects of your program are stored. The four memory areas are Code, Static/Global, Stack, and Heap. The code section, as you can probably guess, is where the program's instructions themselves are held. The Static/Global area is also as expected: it holds your global variables and static data. The last two are the more complex areas, and the two you'll want to understand best if you're working with a language that doesn't have garbage collection.
@@ -29,7 +29,7 @@ In C/C++ your virtual memory is broken up into ~4 basic "blocks" for where diffe
- Static/Global
- Code
# The Stack {#stack}
# [The Stack](#stack)
The stack is where all of your local variables from inner contexts are stored. The more local the variable, the higher it sits on the stack before eventually being popped off. The stack data structure is the same one that is used in memory, and it works by placing objects in **LIFO** (Last In, First Out) order. Just like a stack of papers, you can see the paper on top of the stack but none of the others. You also can't reach inside of the stack; you have to remove the papers on top to see the papers below.
@@ -38,7 +38,7 @@ The stack is where all of your local variables from the inner contexts are store
int num = 12;
```
# The Heap {#heap}
# [The Heap](#heap)
The heap is where you store objects that need to outlive a single local context. In C/C++, when you're swapping between methods and you want an object you're returning to live beyond the local context of the method that created it, you allocate it with the **new** operator or **malloc()**. Despite the shared name, the memory heap isn't organized like the binary-tree "heap" data structure; it's a pool of memory that the allocator hands out on demand, which makes it a very different structure from the stack.
@@ -124,7 +124,7 @@ The other method, example2(), just creates a new local vector and sets vec equal
What happens is you get a segfault. This segfault occurs because you're trying to write through a pointer that isn't pointing anywhere. You're trying to access memory that doesn't exist.
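As a minimal illustration of my own (not the article's `example2()` code):

```
#include <vector>

int main() {
    // A pointer that was never pointed at a real object
    std::vector<int>* vec = nullptr;

    // Writing through a pointer that points nowhere is undefined behavior,
    // and in practice usually crashes with a segfault.
    vec->push_back(1);
    return 0;
}
```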
# Review/Conclusion {#conclusion}
# [Review/Conclusion](#conclusion)
Operating systems protect physical memory by giving each process a seemingly infinite amount of virtual memory where they each have their own address space that doesn't affect any other processes. Understanding this and how the virtual memory is represented is a fundamental building block in becoming a better and more efficient programmer.

View File

@@ -27,7 +27,7 @@ While many sites today are built using a component-based framework like Angular,
This is a reasonably straightforward flow once you get the hang of it. Let's take a look at what happens when you throw a component-based framework into the fray.
# Client Side Rendering {#csr}
# [Client Side Rendering](#csr)
While you may not be familiar with this term, you're more than likely familiar with how you'd implement one of these; After all, this is the default when building an Angular, React, or Vue site. Let's use a React site as an example. When you build a typical React SPA without utilizing a framework like NextJS or Gatsby, you'd:
@@ -41,7 +41,7 @@ While you may not be familiar with this term, you're more than likely familiar w
This is because React's code has to initialize to render the components on screen before it can spit out HTML for the browser to parse. Sure, there's an initial HTML file that might have a loading spinner, but until your components have time to render, that's hardly useful content for your user. _While these load times can be sufficient for smaller applications_, if you have many components loading on-screen, _you may be in trouble if you want to keep your time-to-interactive (TTI) low_. That scenario is where SSR often comes into play.
# Server Side Rendering (SSR) {#ssr}
# [Server Side Rendering (SSR)](#ssr)
Because React has to initialize _somewhere_, what if we were to move the initial rendering off to the server? Imagine - for each request the user sends your way, you spin up an instance of React. Then, you're able to serve up the initial render (also called "fully hydrated") HTML and CSS to the user, ready to roll. That's just what server-side rendering is!
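As a rough sketch of my own of what that looks like with React and Express (the component and file names are assumptions):

```
const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');
const App = require('./App'); // hypothetical root component

const app = express();

app.get('*', (req, res) => {
  // Render the React tree to an HTML string on the server, per request...
  const html = renderToString(React.createElement(App));

  // ...then send it down in a minimal document shell the browser can display immediately.
  res.send(`<!DOCTYPE html><html><body><div id="root">${html}</div></body></html>`);
});

app.listen(3000);
```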
@@ -62,7 +62,7 @@ Moreover, if you have your server and database in the same hosting location, you
That said, because you're relying on server functionality to do this rendering, you have to have a custom server setup. No simple CDN hosting here - your server has to initialize and render each user's page on request.
# Static Site Generation (SSG) {#ssg}
# [Static Site Generation (SSG)](#ssg)
If SSR is ["passing the buck"](https://en.wikipedia.org/wiki/Buck_passing) to the server to generate the initial page, then SSG is passing the buck to you - the developer.
@@ -80,7 +80,7 @@ This simply extends the existing build process that many front-end frameworks ha
Since you're only hosting HTML and CSS again, you're able to host your site as you would a client-side rendered app: using a CDN. This means that you can geo-distribute your hosting much more trivially, but it comes with the caveat that you're no longer able to do rapid network queries to generate the UI as you could with SSR.
# Pros and Cons {#pros-and-cons}
# [Pros and Cons](#pros-and-cons)
It may be tempting to look through these options, find one that you think is the best, and [overfit](https://en.wiktionary.org/wiki/overfit) yourself into a conclusion that one is superior to all the others. That said, each of these methods has its strengths and weaknesses.
@@ -96,7 +96,7 @@ Consider each of these utilities a tool in your toolbox. You may be working on a
In fact, if you're using a framework that supports more than one of these methods ([like NextJS does as-of version 9.3](https://nextjs.org/blog/next-9-3)), knowing which of these utilities to use for which pages can be critical for optimizing your app.
# A Note Regarding Performance Benchmarks {#lighthouse}
# [A Note Regarding Performance Benchmarks](#lighthouse)
I was once tasked with migrating a landing page with an associated blog from CSR to use SSG. Once I had done so, however, I noticed that [my Lighthouse score](https://developers.google.com/web/tools/lighthouse) had gone _down_ despite my page rendering a much more useful initial page significantly faster than it'd taken for my app's spinner to go away.

View File

@@ -13,7 +13,7 @@
Some evangelists say that before code ever exists, there always needs to be a test to know how the code should be written. That frankly isn't true. A test isn't _strictly_ needed to determine how to code. What **is** needed are tests that give confidence that, as code is written, already existing functionality doesn't change and new functionality will behave properly as time goes on. To this end, a lot of testing libraries and frameworks exist. Oftentimes, tests are written with regard to the library or framework used and not to the end product's specifications. For Angular, this is especially true when the default testing implementation tests Angular itself, and not what a developer would use Angular to build. **Tests should be written the same way a user would use the application.** We don't need to test Angular; we need to test what we make with Angular.
# Writing tests for an Angular application does not mean testing Angular {#test-the-web-not-angular}
# [Writing tests for an Angular application does not mean testing Angular](#test-the-web-not-angular)
With regard to Angular and writing tests, we must first understand what the tests are for. For a great many projects, that means testing a webpage. When testing a webpage properly, the underlying library should be swappable at any time for maintainability purposes, and the tests should still work. To that end, we must write tests for the web and not for Angular. The Angular CLI sets up some tests for you, but on closer inspection it becomes apparent that those tests exercise Angular itself and not the rendered output.
@@ -53,13 +53,13 @@ This test no longer even needs Angular to be the library chosen. It just require
Writing tests that rely on the DOM, rather than on testing Angular, allows the application to be tested the way a user would actually use it, instead of the way Angular works internally.
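As a sketch of the difference, a DOM-focused test renders the component and asserts against the markup a user would actually see, rather than against the component instance. The component and copy below are hypothetical, not code from a real project:

```typescript
// A DOM-focused Angular test sketch; AppComponent and its text are stand-ins.
import { TestBed } from "@angular/core/testing";
import { AppComponent } from "./app.component";

describe("AppComponent", () => {
  it("shows the welcome message a user would see", async () => {
    await TestBed.configureTestingModule({
      declarations: [AppComponent],
    }).compileComponents();

    const fixture = TestBed.createComponent(AppComponent);
    fixture.detectChanges();

    // Assert on the rendered DOM, not on internals like `component.title` -
    // the library underneath could change and this expectation still holds.
    const element: HTMLElement = fixture.nativeElement;
    expect(element.querySelector("h1")?.textContent).toContain("Welcome");
  });
});
```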
# Fixing that shortcoming using Testing Library {#testing-library}
# [Fixing that shortcoming using Testing Library](#testing-library)
Thankfully, writing tests like these has been made simple by a testing library aptly called "[Testing Library](https://testing-library.com)." Testing Library is a collection of libraries for various frameworks and applications. One of the supported frameworks is Angular, through the [Angular Testing Library](https://testing-library.com/docs/angular-testing-library/intro). It can be used to test Angular apps in a simple, DOM-focused manner, with some nice helpers to make it even easier to work with. Here it's paired with [Jest](https://jestjs.io/), replacing the default Karma and Jasmine setup, to make testing easier and more focused on end results. With that tooling, a project can have tests much less focused on Angular and much more focused on what is being made.
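To give a feel for it, a test written with Angular Testing Library might look something like the sketch below. The component and its copy are again hypothetical:

```typescript
// A sketch using Angular Testing Library; names and text are illustrative.
import { render, screen } from "@testing-library/angular";
import { AppComponent } from "./app.component";

it("greets the user", async () => {
  await render(AppComponent);

  // Query by what the user perceives (an accessible heading), not by
  // Angular internals - swapping the framework wouldn't break this test.
  expect(screen.getByRole("heading").textContent).toContain("Welcome");
});
```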
## Transitioning to Jest and Angular Testing Library {#transitioning-to-jest}
## [Transitioning to Jest and Angular Testing Library](#transitioning-to-jest)
### Get rid of Karma {#remove-karma}
### [Get rid of Karma](#remove-karma)
Angular ships with Karma alongside Jasmine for running tests and collecting coverage. With Jest, an Angular project no longer needs Karma or the other packages that would be installed by the Angular CLI.
@@ -69,7 +69,7 @@ Angular ships with Karma alongside Jasmine for running tests and collecting cove
npm uninstall karma karma-chrome-launcher karma-coverage-istanbul-reporter karma-jasmine karma-jasmine-html-reporter
```
#### Remove the leftover configurations {#remove-karma-config}
#### [Remove the leftover configurations](#remove-karma-config)
Deleting the following will remove the leftover configuration files from the project:
@@ -119,7 +119,7 @@ tsconfig.spec.json
Now the project is ready for installing any other test runner.
### Setting up Jest {#setup-jest}
### [Setting up Jest](#setup-jest)
Now that the project has no Karma, it can be set up with Jest.
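One common way to wire Jest into an Angular CLI project is through the `jest-preset-angular` package. The config below is only a minimal sketch under that assumption; the exact options, including the test setup file the preset expects, should come from that package's documentation for your Angular version:

```javascript
// jest.config.js - a minimal sketch; these options are assumptions, not the
// article's exact setup. jest-preset-angular bridges Jest and the Angular CLI.
module.exports = {
  preset: "jest-preset-angular",
  roots: ["<rootDir>/src"],
};
```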
@@ -213,6 +213,6 @@ Now the project is ready to have better tests written for it and by using [Angul
npm install --save-dev @testing-library/angular
```
# Ready, Steady, Test! {#conclusion}
# [Ready, Steady, Test!](#conclusion)
Now that the project has a better testing library with some great helpers, better tests can be written. There are plenty of [great examples](https://testing-library.com/docs/angular-testing-library/examples) for learning, [Tim Deschryver](https://timdeschryver.dev/blog/good-testing-practices-with-angular-testing-library) has more examples to help in that endeavor, and the Angular Testing Library will make tests much simpler to write and maintain. With Angular, good tests, and plenty of confidence, anyone would be happy to ship a project with this setup.

View File

@@ -76,7 +76,7 @@ Si quieres aprender más sobre los patrocinios y su impacto en nuestro sitio, pu
In short: no sponsor makes decisions about the content published on the site.
# Statement of Ethics {#ethics}

# [Statement of Ethics](#ethics)
We never want to end up in a place where our educational content, experience,
or community are compromised either by financial influences or by potentially
@@ -88,4 +88,4 @@ También nos comprometemos por mantener transparencia en cuanto a las finanzas q
Not every sponsorship includes a financial contribution, but if one does, we will specify what that
contribution goes toward, as well as what we will do in exchange.
# Contributors {#contributors}

# [Contributors](#contributors)

View File

@@ -76,7 +76,7 @@ If you want to learn more about our sponsorships and how they impact our site, y
TLDR: No sponsor has any say over the content hosted on the site
# Statement of Ethics {#ethics}
# [Statement of Ethics](#ethics)
We never want to end up in a place where our educational content, experience,
or community is compromised by either financial sway or potentially harmful
@@ -89,4 +89,4 @@ through the project. Not every sponsorship contains a financial contribution,
but if one does, we will disclose both what those finances
are going towards and what will be done in exchange.
# Contributors {#contributors}
# [Contributors](#contributors)