chore: lint MD

Corbin Crutchley
2022-08-20 22:14:04 -07:00
parent b07487ebbd
commit d7ac9d23dd
87 changed files with 2940 additions and 2992 deletions


@@ -14,21 +14,21 @@ appearance, race, religion, or sexual identity and orientation.
Examples of behavior that contributes to creating a positive environment
include:
-* Using welcoming and inclusive language
-* Being respectful of differing viewpoints and experiences
-* Gracefully accepting constructive criticism
-* Focusing on what is best for the community
-* Showing empathy towards other community members
+- Using welcoming and inclusive language
+- Being respectful of differing viewpoints and experiences
+- Gracefully accepting constructive criticism
+- Focusing on what is best for the community
+- Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
-* The use of sexualized language or imagery and unwelcome sexual attention or
+- The use of sexualized language or imagery and unwelcome sexual attention or
advances
-* Trolling, insulting/derogatory comments, and personal or political attacks
-* Public or private harassment
-* Publishing others' private information, such as a physical or electronic
+- Trolling, insulting/derogatory comments, and personal or political attacks
+- Public or private harassment
+- Publishing others' private information, such as a physical or electronic
address, without explicit permission
-* Other conduct which could reasonably be considered inappropriate in a
+- Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
@@ -69,10 +69,9 @@ members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
-available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+available at <https://www.contributor-covenant.org/version/1/4/code-of-conduct.html>
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
-https://www.contributor-covenant.org/faq
+<https://www.contributor-covenant.org/faq>


@@ -28,7 +28,7 @@ as a new JSON object in the array.
This information includes:
- A username for your profile (used in your profile URL).
-[IE, our founder's username is `crutchcorn`, and [their page can be found here](https://unicorn-utterances.com/unicorns/crutchcorn)]
+\[IE, our founder's username is `crutchcorn`, and [their page can be found here](https://unicorn-utterances.com/unicorns/crutchcorn)]
- Full name
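As a rough sketch, a new entry in `unicorns.json` might look like the following. The exact schema lives in the file itself; the `id` and `name` key names here are illustrative assumptions, not confirmed by this diff:

```json
{
	"name": "Corbin Crutchley",
	"id": "crutchcorn"
}
```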
@@ -74,7 +74,7 @@ Now that we have your user attribution data, we can move onto the post data itse
#### Save Location
Once you have your `.md` file, we'll need a place to put it. We place a subdirectory in our [`content/blog` folder](./content/blog) for each of the blog posts on the site. The naming of these subdirectories is important to keep in mind, as they reflect the URL path of the article once finished. For example, the folder [`what-is-ssr-and-ssg`](./content/blog/what-is-ssr-and-ssg) will turn into the URL for the article:
-[https://unicorn-utterances.com/posts/what-is-ssr-and-ssg/](https://unicorn-utterances.com/posts/what-is-ssr-and-ssg/)
+<https://unicorn-utterances.com/posts/what-is-ssr-and-ssg/>
Once you've created a subfolder with the URI you'd like your article to have, move the `.md` file into the folder with the name `index.md`. If you have linked images or videos, you'll need to save those files in the same folder and change your markdown file to reference them locally:
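For instance, assuming a hypothetical screenshot saved as `./my_screenshot.png` next to your `index.md`, a local reference would look like:

```markdown
![A short description of the screenshot](./my_screenshot.png)
```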
@@ -121,16 +121,20 @@ The following data **must** be present:
- Title for the article
- We ask that your titles are less than 80 characters.
- A description of the article
- We ask that your descriptions are less than 190 characters.
- A published date
- Please follow the format as seen above
- An array of authors
- This array must have every value match [one of the `id`s of the `unicorns.json` file](./content/data/unicorns.json)
- An array of related tags
- Please try to use existing tags if possible. If you don't find any, that's alright
- We ask that you keep it to 4 tags maximum
- A `license` to be associated with the post
- This must match the `id` field for one of the values [in our `license.json` file](./content/data/licenses.json)
- If you're not familiar with what these licenses mean, view the `explainLink` for each of them in the `license.json` file. It'll help you understand what permissions the public has to the post
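Putting those requirements together, a frontmatter sketch might look like this. The key names and date format follow examples shown earlier in CONTRIBUTING.md that this diff elides, so treat every key and value below as illustrative rather than authoritative:

```yaml
---
title: "What is SSR and SSG?" # under 80 characters
description: "A short explanation of server-side rendering and static site generation." # under 190 characters
published: "2022-08-20"
authors: ["crutchcorn"] # each entry must match an `id` in unicorns.json
tags: ["webdev", "ssg"] # 4 tags maximum
license: "cc-by-4" # must match an `id` in licenses.json
---
```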


@@ -2,198 +2,208 @@ Mozilla Public License, version 2.0
1. Definitions
1.1. “Contributor”
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. “Contributor Version”
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor’s Contribution.
1.3. “Contribution”
means Covered Software of a particular Contributor.
1.4. “Covered Software”
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form,
and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. “Incompatible With Secondary Licenses”
means
```
a. that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the terms
of a Secondary License.
```
1.6. “Executable Form”
means any form of the work other than Source Code Form.
1.7. “Larger Work”
means a work that combines Covered Software with other material,
in a separate file or files, that is not Covered Software.
1.8. “License”
means this document.
1.9. “Licensable”
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently,
any and all of the rights conveyed by this License.
1.10. “Modifications”
means any of the following:
```
a. any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
```
1.11. “Patent Claims” of a Contributor
means any patent claim(s), including without limitation, method, process,
and apparatus claims, in any patent Licensable by such Contributor that
would be infringed, but for the grant of the License, by the making,
using, selling, offering for sale, having made, import, or transfer of
either its Contributions or its Contributor Version.
1.12. “Secondary License”
means either the GNU General Public License, Version 2.0, the
GNU Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those licenses.
1.13. “Source Code Form”
means the form of the work preferred for making modifications.
1.14. “You” (or “Your”)
means an individual or a legal entity exercising rights under this License.
For legal entities, “You” includes any entity that controls,
is controlled by, or is under common control with You. For purposes of
this definition, “control” means (a) the power, direct or indirect,
to cause the direction or management of such entity, whether by contract
or otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
```
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications,
or as part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell,
offer for sale, have made, import, and otherwise transfer either
its Contributions or its Contributor Version.
```
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor
first distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted
under this License. No additional rights or licenses will be implied
from the distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted
by a Contributor:
```
a. for any code that a Contributor has removed from
Covered Software; or
b. for infringements caused by: (i) Your and any other third party’s
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its
Contributor Version); or
c. under Patent Claims infringed by Covered Software in the
absence of its Contributions.
```
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License
(if permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing,
or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the
licenses granted in Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including
any Modifications that You create or to which You contribute, must be
under the terms of this License. You must inform recipients that the
Source Code Form of the Covered Software is governed by the terms
of this License, and how they can obtain a copy of this License.
You may not attempt to alter or restrict the recipients’ rights
in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
```
a. such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more than
the cost of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients’ rights in the Source Code Form under this License.
```
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of
Covered Software with a work governed by one or more Secondary Licenses,
and the Covered Software is not Incompatible With Secondary Licenses,
this License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the
Covered Software under the terms of either this License or such
Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of
Covered Software. However, You may do so only on Your own behalf,
and not on behalf of any Contributor. You must make it absolutely clear
that any such warranty, support, indemnity, or liability obligation is
offered by You alone, and You hereby agree to indemnify every Contributor
for any liability incurred by such Contributor as a result of warranty,
support, indemnity or liability terms You offer. You may include
additional disclaimers of warranty and limitations of liability
specific to any jurisdiction.
4. Inability to Comply Due to Statute or Regulation
@@ -209,31 +219,31 @@ to be able to understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means,
this is the first time You have received notice of non-compliance with
this License from such Contributor, and You become compliant prior to
30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted
to You by any and all Contributors for the Covered Software under
Section 2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
6. Disclaimer of Warranty
@@ -311,9 +321,11 @@ this License against a Contributor.
Exhibit A - Source Code Form License Notice
```
This Source Code Form is subject to the terms of the
Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed
with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
```
If it is not possible or desirable to put the notice in a particular file,
then You may include the notice in a location (such as a LICENSE file in a
@@ -324,6 +336,7 @@ You may add additional accurate notices of copyright ownership.
Exhibit B - “Incompatible With Secondary Licenses” Notice
```
This Source Code Form is “Incompatible With Secondary Licenses”,
as defined by the Mozilla Public License, v. 2.0.
```


@@ -16,9 +16,7 @@ Este repository actúa como la ubicación del código fuente para el [blog de Un
## Patrocinadores
-<a href="https://www.thepolyglotdeveloper.com/" target="_blank" rel="noopener noreferrer sponsored"><img alt="The Polyglot Developer" src="https://unicorn-utterances.com/sponsors/the-polyglot-developer.svg" width="300"/></a>
-<a href="https://oceanbit.dev/" target="_blank" rel="noopener noreferrer sponsored"><img alt="OceanBit" src="https://unicorn-utterances.com/sponsors/oceanbit.svg" width="300"/></a>
-<a href="https://coderpad.io/" target="_blank" rel="noopener noreferrer sponsored"><img alt="CoderPad" src="https://unicorn-utterances.com/sponsors/coderpad.svg" width="300"/></a>
+<a href="https://www.thepolyglotdeveloper.com/" target="_blank" rel="noopener noreferrer sponsored"><img alt="The Polyglot Developer" src="https://unicorn-utterances.com/sponsors/the-polyglot-developer.svg" width="300"/></a> <a href="https://oceanbit.dev/" target="_blank" rel="noopener noreferrer sponsored"><img alt="OceanBit" src="https://unicorn-utterances.com/sponsors/oceanbit.svg" width="300"/></a> <a href="https://coderpad.io/" target="_blank" rel="noopener noreferrer sponsored"><img alt="CoderPad" src="https://unicorn-utterances.com/sponsors/coderpad.svg" width="300"/></a>
[Reconocemos todos los patrocinios que compartimos abiertamente en GitHub](https://github.com/unicorn-utterances/unicorn-utterances/issues?q=is%3Aissue+label%3Adisclosure+is%3Aclosed)


@@ -16,9 +16,7 @@ This repository acts as the source code location for [the Unicorn Utterances blo
## Sponsors
-<a href="https://www.thepolyglotdeveloper.com/" target="_blank" rel="noopener noreferrer sponsored"><img alt="The Polyglot Developer" src="https://unicorn-utterances.com/sponsors/the-polyglot-developer.svg" width="300"/></a>
-<a href="https://oceanbit.dev/" target="_blank" rel="noopener noreferrer sponsored"><img alt="OceanBit" src="https://unicorn-utterances.com/sponsors/oceanbit.svg" width="300"/></a>
-<a href="https://coderpad.io/" target="_blank" rel="noopener noreferrer sponsored"><img alt="CoderPad" src="https://unicorn-utterances.com/sponsors/coderpad.svg" width="300"/></a>
+<a href="https://www.thepolyglotdeveloper.com/" target="_blank" rel="noopener noreferrer sponsored"><img alt="The Polyglot Developer" src="https://unicorn-utterances.com/sponsors/the-polyglot-developer.svg" width="300"/></a> <a href="https://oceanbit.dev/" target="_blank" rel="noopener noreferrer sponsored"><img alt="OceanBit" src="https://unicorn-utterances.com/sponsors/oceanbit.svg" width="300"/></a> <a href="https://coderpad.io/" target="_blank" rel="noopener noreferrer sponsored"><img alt="CoderPad" src="https://unicorn-utterances.com/sponsors/coderpad.svg" width="300"/></a>
[We disclose every sponsorship we share openly on GitHub](https://github.com/unicorn-utterances/unicorn-utterances/issues?q=is%3Aissue+label%3Adisclosure+is%3Aclosed)
@@ -42,4 +40,3 @@ We highly encourage and celebrate others contributing to our site and our commun
Keep in mind that we request developers reach out [via our Discord](https://discord.gg/FMcvc6T) or [via GitHub issue](https://github.com/unicorn-utterances/unicorn-utterances/issues/new) before extensive development is pursued. If you have a feature you'd like to add to the site, let us know! We'd love to do some brainstorming before coding begins!
We extend this invitation to those who may be unfamiliar with our processes. Be sure to check out [our CONTRIBUTING.md](./CONTRIBUTING.md) file first, but don't be afraid to join in and ask questions if you're uncertain of anything.


@@ -12,7 +12,7 @@
In the past, Android Studio did not support AMD's CPUs for hardware emulation of an Android device. [That all changed in 2018 when Google added Hyper-V support to the Android Emulator](https://android-developers.googleblog.com/2018/07/android-emulator-amd-processor-hyper-v.html).
However, while working on my Ryzen CPU-powered desktop, I had difficulties getting the program working on my machine.
# BIOS Setup {#bios}
@@ -47,15 +47,13 @@ Enabling IOMMU on a Gigabyte AMD motherboard is much easier than enabling SVM mo
![The chipset tab](./iommu.jpg)
Once changed, tab over to "Save & Exit" and select "Exit and save changes".
# Windows Features Setup {#windows-features}
Now that we have our BIOS (UEFI, really) configured correctly, we can enable the Windows features we need for the Android Emulator.
To start, press <kbd>Win</kbd> + <kbd>R</kbd>, which should bring up the **"Run"** dialog. Once open, _type `OptionalFeatures` and press **"OK"**_.
![The "run dialog" box with the typed suggestion](./run_dialog.png)
@@ -73,7 +71,7 @@ After these three settings are selected, press **"OK"** and allow the features t
# Setup Android Studio {#android-studio}
You have a few different methods for installing Android Studio. You can choose to use [Google's installer directly](https://developer.android.com/studio/install), you can [utilize the Chocolatey CLI installer](https://chocolatey.org/packages/AndroidStudio), or even use [JetBrains' Toolbox utility to install and manage an instance of Android Studio](https://www.jetbrains.com/toolbox-app/). _Any of these methods work perfectly well_; it's down to preference, really.
Once you get Android Studio installed, go ahead and _open the SDK Manager settings screen_ from the **"Configure"** dropdown.
@@ -83,15 +81,11 @@ Once you see the popup dialog, you'll want to _select the "SDK Tools" tab_. Ther
![The mentioned screen with the AMD hypervisor selected](./select_amd_hypervisor.png)
Once you've selected it, press **"Apply"** to download the installer. _Because the "Apply" button only downloads the installer, we'll need to run it manually._
## Run the Installer {#amd-hypervisor-installer}
To find the location of the installer, you'll want to go to the install location for your Android SDK. For me (having used the JetBrains Toolbox to install Android Studio), that path was: `%AppData%/../Local/Android/Sdk`.
The hypervisor installer is located under the following subpath of that path:
@@ -125,13 +119,13 @@ You'll then see a list of the devices that you currently have setup. I, for exam
You can create a new one by _pressing **"Create Virtual Device"**_.
Upon the dialog creation, you'll see a list of devices that you can use as a baseline for your emulator. This sets the hardware information (screen size and such). Even if you pick a device, it does not restrict the versions of Android you can use with it. I picked Pixel 2 and KitKat for my KK testing device, despite the Pixel 2 being released well after that OS release.
![The "select hardware" screen as mentioned](./select_virtual_device.png)
Once you've selected a device, you can pick the version of Android to run. You'll want to select an `x86` or `x86_64` build of Android you're looking for. I've noticed better performance from `x86_64` emulators myself, so I went with an `x86_64` build of Android Pie.
![The selected image for x86\_64 Pie](./pie_device.png)
Afterward, you'll want to name your emulator. I try to keep the names short and without spaces, so if I need to run the emulator manually from the CLI, I can do so easily by name.
@@ -139,13 +133,12 @@ Afterward, you'll want to name your emulator. I try to keep them without strings
Finally, once you've selected **"Finish"**, it should save the emulator's settings and start the emulator itself.
> You may get an error such as `HAXM is not installed` when trying to set up an emulator. If you get this error, it's most likely that you have not [enabled the settings in BIOS](#bios). I know in my case, I had recently performed a BIOS upgrade, and it had reset my BIOS settings, making me go back and re-enable them.
![The emulator once ran](./device_running.png)
# Conclusion
I've had incredible success with my Ryzen-powered desktop during my Android development. Not only is it cost-efficient for my usage compared to the Intel option, but it's able to run the emulator quickly. Hopefully, this article has been able to help you set up your machine as well.
Let us know your thoughts on this article! Not only do we have comments down below, but we also have [a Discord community](https://discord.gg/FMcvc6T) that we invite you to join! We chat about all kinds of programming and CS-related topics there!
@@ -130,7 +130,6 @@ With this, we'll finally be able to use these methods to control our component.
> If you're wondering why you don't need to do something like this with `ngOnInit`, it's because that functionality is baked right into Angular. Angular _always_ looks for an `onInit` function and tries to call it when the respective lifecycle method is run. `implements` is just a type-safe way to ensure that you're explicitly wanting to call that method.
## `writeValue` {#write-value}
`writeValue` is a method that acts exactly as you'd expect: it writes a value to your component. Because your value has more than a single write path (from your component and from the parent), it's suggested to have a setter, a getter, and a private internal value for your property.
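As a concrete sketch of that setter/getter pattern, here's some plain TypeScript; `ExampleValueAccessor` and its `onChange` callback are illustrative stand-ins rather than Angular's real form wiring:

```typescript
// A minimal sketch of the setter/getter/private-value pattern described
// above. This is not a full ControlValueAccessor; `onChange` stands in
// for the callback Angular would register.
class ExampleValueAccessor {
  private _value = '';
  private onChange: (v: string) => void = () => {};

  get value(): string {
    return this._value;
  }

  set value(v: string) {
    this._value = v;
    this.onChange(v); // component-driven writes notify the parent form
  }

  // Called by the forms API when the parent writes a value in
  writeValue(v: string): void {
    this._value = v; // write the internal value directly, no notification
  }

  registerOnChange(fn: (v: string) => void): void {
    this.onChange = fn;
  }
}
```

With this shape, parent-driven writes go through `writeValue` without echoing back to the form, while component-driven writes flow through the setter and notify the registered callback.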
@@ -302,7 +301,6 @@ Finally, you can pass these options to `ngModel` and `formControl` (or even `for
If done properly, you should see something like this:
<iframe src="https://stackblitz.com/edit/angular-value-accessor-example?ctl=1&embed=1&file=src/app/app.component.ts" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
# Form Control Classes
@@ -311,7 +309,6 @@ Angular CSS masters might point to [classes that's applied to inputs when variou
These classes include:
- `ng-pristine`
- `ng-dirty`
- `ng-untouched`
@@ -409,7 +406,6 @@ export class AppComponent {
<iframe src="https://stackblitz.com/edit/angular-value-accessor-dep-inject?ctl=1&embed=1&file=src/app/app.component.ts" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
Not only do you have [a wide range of Angular-built validators at your disposal](https://angular.io/api/forms/Validators), but you're even able to [make your own validator](https://angular.io/api/forms/Validator)!
# Conclusion {#conclusion}
@@ -160,7 +160,7 @@ In this article we'll learn:
- [How to use a base class in Angular](#base-class-angular)
- [How to simplify Angular base class usage using an abstract class](#abstract-class)
- [Overwriting lifecycle methods in Angular extended classes](#lifecycle-methods)
- [Using dependency injection with your extended class](#dependency-injection)
- [Why you don't want to use base classes with Angular](#dont-extend-base-classes)
# What is an extension class, anyway? {#base-class}
@@ -195,8 +195,6 @@ This class has a few things going on:
When we create an "instance" of this class, it will in turn call the `constructor` and give us an object with all of the properties and methods associated with `HelloMessage` as an "instance" of that class.
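A hedged sketch of what that looks like in plain TypeScript (the exact body of `sayHi` is assumed for illustration):

```typescript
// Illustrative only: the exact contents of the class are assumed here.
class HelloMessage {
  message: string;

  constructor(name: string) {
    this.message = `Hello, ${name}!`;
  }

  sayHi(): string {
    return this.message;
  }
}

// Creating an "instance" runs the constructor and yields an object
// carrying both the properties and the methods of the class.
const instance = new HelloMessage('world');
```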
Now, let's say that we want to reuse the `sayHi` logic in multiple classes at a time.
> Sounds like a familiar problem, doesn't it?
@@ -584,8 +582,6 @@ class AppComponent extends BaseComponent implements OnInit, OnDestroy {
}
```
## Overwriting `constructor` behavior {#overwriting-constructors}
When working with class extension, regardless of being used in Angular or in JavaScript itself, you need to call `super()` when trying to overwrite a constructor:
@@ -837,8 +833,6 @@ class AppComponent extends BaseComponent {
}
```
# Why you don't want to extend Angular base classes {#dont-extend-base-classes}
Now that we've learned how to extend base classes in Angular to share lifecycle methods, allow me to flip the script:
@@ -995,8 +989,6 @@ class AppComponent implements OnDestroy {
}
```
# Conclusion
And that's it! I hope this has been an insightful look into how you can extend component logic.
@@ -113,7 +113,7 @@ Because we're planning on using Angular CLI, we'll want to set the `src` propert
> font-weight: 800;
> src: url("#{$base_path}/foundry_sterling_extra_bold.otf") format('opentype')
> }
>
> // ... Other @font-face declarations
> }
> ```
@@ -152,7 +152,7 @@ npm i ecp-private-assets
## `angular.json` modification {#angular-json}
Once this is done, two steps are required. First, add the following to `angular.json`'s `assets` property. This will copy the files from `ecp-private-assets` to `/assets` once you set up a build.
```json
{
@@ -201,14 +201,13 @@ This way, when we use the CSS `url('/assets/')`, it will point to our newly appo
Now that we have our assets in place, we need to import the CSS file into our app.
If your app utilizes `postcss`'s `import` plugin or if you're using vanilla CSS, add the following line to your `main.scss` file:
```css
@import "ecp-private-assets/fonts/foundry_sterling.css";
```
> Remember to keep the `@import`s at the top of your file, as you will receive an error otherwise.
However, if you're not using `postcss` and have SCSS installed, you can use the following:

View File

@@ -62,7 +62,7 @@ We are then adding the [`ngIf`](https://angular.io/api/common/NgIf) structural d
- If `bool` is true, it renders `<p>True</p>`, and the template containing `<p>False</p>` does not
- If `bool` is false, it then checks if the [`else` condition built into `ngIf`](https://angular.io/api/common/NgIf#showing-an-alternative-template-using-else) has a value assigned to it. If there is a value assigned to the `else` condition, it renders that template.
  - In this example, it does; the template we've assigned to `templHere`. Because of this, `<p>False</p>` is rendered
If you had forgotten to include the `ngIf`, it would never render the `False` element because **a template is not rendered to the view unless explicitly told to — this includes templates created with `ng-template`**
@@ -209,7 +209,6 @@ Fancy that.
When we want to overwrite the type of data we expect `ViewChild` to return, we can use a second property passed to the `ViewChild` decorator with the type we want to be returned. With the use-case mentioned above, we can tell Angular that we want a reference to the element of the component itself by using the `ElementRef`.
```typescript
/* This would replace the previous @ViewChild */
@ViewChild('myComponent', {read: ElementRef, static: false}) myComponent: ElementRef;
@@ -314,7 +313,7 @@ action-card {
}
```
But this is often not the case. _[Angular's `ViewEncapsulation`](https://angular.io/api/core/ViewEncapsulation) prevents styles from one component from affecting the styling of another_. This is especially true if you're using a configuration that allows the native browser to handle the components under the browser's shadow DOM APIs, which restricts stylesheet sharing on a browser level. This is why the [Angular-specific CSS selector `::ng-deep`](https://angular.io/guide/component-styles#deprecated-deep--and-ng-deep) has been marked for deprecation (sorry, old-school Angular developers \[including myself, so much to migrate 😭]).
No matter, though. We have the power of `ViewChildren` on our side! Corbin already showed us how to get a reference to an element of a rendered component! Let's spin up an example:
@@ -407,7 +406,6 @@ export class AppComponent {
This is a perfect example of where you might want `@ContentChild` — not only are you unable to use `ng-content` to render this template without a template reference being passed to an outlet, but you're able to create a context that can pass information to the template being passed as a child.
# How Does Angular Track the UI {#understand-the-tree}
Awesome! We've been blowing through some of the real-world uses of templates like a bullet-train through a tunnel. 🚆 But I have something to admit: I feel like I've been doing a pretty bad job at explaining the "nitty-gritty" of how this stuff works. While that can often be a bit more dry of a read, I think it's very important to be able to use these APIs to their fullest. As such, let's take a step back and read through some of the more abstract concepts behind them.
@@ -516,11 +514,8 @@ Little has changed, yet there's something new! A _view container_ is just what i
</p>
```
![A chart showing an element as the root with two children, a template and a view. The view points towards the template](./hierarchy_view_container_on_element.svg "Diagram showing the above code as a graph")
_It is because Angular's view containers can be attached to views, templates, and elements that the dependency injection system is able to provide a `ViewContainerRef` regardless of what you've requested the `ViewContainerRef` on_.
## Host Views {#components-are-directives}
@@ -566,11 +561,8 @@ export class ChildComponent {}
export class AppComponent {}
```
![A chart showing the hierarchy of the above code. It shows "my-app" having a host view, which has a view container. This view container is the parent to an element and "child-component", which has its own host view, view container, and children](./hierarchy_tree_example.svg "Diagram showing the above code as a graph")
## Template Input Variable Scope
Template input variables are the variables you bind to a template when using context: `<ng-template let-varName>`. _These variables are defined from the context that is applied to the template_. As a result, **these variables are able to be accessed by the children views of the template, but not from a higher level** — as the context is not defined above the template:
@@ -628,8 +620,6 @@ If you look at the output of this example, you'll notice that `testingMessage` i
![Chart showing the above code sample to match the prior visualization aids](./template_reference_scope.svg "Visualization of the hierarchy tree for the prior cod example")
When the view that is trying to render `testingMessage` looks for that template reference variable, it is unable to, as it is bound to the `helloThereMsg` template view. Because it cannot find a template reference variable with the id `testingMessage`, it treats it like any other unfound variable: an `undefined` value. The default behavior of `undefined` being passed to `ngTemplateOutlet` is to not render anything.
In order to fix this behavior, we'd need to move the second `ng-template` into the `helloThereMsg` template view so that the `ngTemplateOutlet` is able to find the matching template reference variable within its view scope.
@@ -709,7 +699,6 @@ One of the default checks that is ran when Angular is starting the initial rende
These checks trigger the lifecycle method `DoCheck`, which you can manually handle. The `DoCheck` lifecycle method will trigger every time Angular detects data changes, regardless of whether the check decides to update the item on-screen.
So let's look at the example we had previously, but let's add some lifecycle methods to evaluate when `ViewChild` is able to give us our value.
```typescript
@@ -743,7 +732,7 @@ ngAfterViewInit | The template is present? true
ngDoCheck | The template is present? true
```
You can see that the `testingMessageCompVar` property is not defined until `ngAfterViewInit`. _The reason we're hitting the error is that the template is not defined in the component logic until `ngAfterViewInit`._ It is not defined until then due to timing issues: **the template is being declared in an embedded view, which takes a portion of time to render to screen**. As a result, the `helloThereMsg` template must render first, then the `ViewChild` can get a reference to the child after the initial update.
When using `ViewChild` by itself, it updates the value of the `testingMessageCompVar` at the same time that the `AfterViewInit` lifecycle method is run. This value update is then in turn reflected in the template itself.
@@ -803,9 +792,9 @@ When taking the example with the `testingMessageCompVar` prop and changing the v
Having covered views in the last section, it's important to mention an important limitation regarding them:
> Properties of elements in a view can change dynamically, in response to user actions; the structure (number and order) of elements in a view can't. You can change the structure of elements by inserting, moving, or removing nested views within their view containers.
>
> \- Angular Docs
## Embed Views {#embed-views}
@@ -846,12 +835,12 @@ Starting with some small recap:
- We're creating a template with the `ng-template` tag and assigning it to a template reference variable `templ`
- We're also creating a `div` tag, assigning it to the template reference variable `viewContainerRef`
- Lastly, `ViewChild` is giving us a reference to the template on the `templ` component class property.
  - We're able to mark both of these as `static: true` as neither of them are obfuscated by non-host-view views as parents
Now the new stuff:
- We're also using `ViewChild` to assign the template reference variable `viewContainerRef` to a component class property.
  - We're using the `read` prop to give it the [`ViewContainerRef`](https://angular.io/api/core/ViewContainerRef) class, which includes some methods to help us create an embedded view.
- Then, in the `ngOnInit` lifecycle, we're running the `createEmbeddedView` method present on the `ViewContainerRef` property to create an embedded view based on the template.
If you take a look at your element debugger, you'll notice that the template is injected as a sibling to the `.testing` div:
@@ -897,7 +886,6 @@ console.log(embeddIndex); // This would print `0`.
The view container keeps track of all of the embedded views in its control, and when you `createEmbeddedView`, it searches for the index to insert the view into.
You're also able to look up an embedded view based on the index you're looking for using `get`. So, if you wanted to get all of the indexes being tracked by `viewContainerRef`, you'd do:
```typescript
@@ -950,7 +938,6 @@ To get around this, we can use the `ng-container` tag, which allows us to get a
<ng-container #viewContainerRef></ng-container>
```
<iframe src="https://stackblitz.com/edit/start-to-source-18-create-embedd-context?ctl=1&embed=1&file=src/app/app.component.ts" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
#### Move/Insert Template
@@ -959,7 +946,6 @@ But oh no! You'll see that the ordering is off. The simplest (and probably most
But this is a blog post, and I needed a contrived example to showcase how we can move views programmatically:
```typescript
const newViewIndex = 0;
this.viewContainerRef.move(embeddRef1, newViewIndex); // This will move this view to index 1, and shift every index greater than or equal to 0 up by 1
@@ -1034,7 +1020,6 @@ You'll notice this code is almost exactly the same from some of our previous com
However, the lack of a template associated with the directive enables some fun stuff, for example, _we can use the same dependency injection trick we've been using to get the view container reference_ to get a reference to the template element that the directive is attached to and render it in the `ngOnInit` method like so:
```typescript
@Directive({
selector: '[renderTheTemplate]'
@@ -1162,7 +1147,6 @@ The main idea behind structural directives is that **they're directives that wil
Let's look at a basic sample to start:
```typescript
@Directive({
selector: '[renderThis]'
@@ -1194,7 +1178,7 @@ export class AppComponent {}
Too much CS (computer science) speak? Me too, let's rephrase that. When you add the `*` to the start of the directive that's being attached to the element, you're essentially telling Angular to wrap that element in an `ng-template` and pass the directive to the newly created template.
From there, the directive can get a reference to that template from the constructor (as Angular is nice enough to pass the template to our directive when we ask for it \[this is what the DI system does]).
The cool part about structural directives, though? Because they're simply directives, **you can remove the `*` and use it with an `ng-template` directly**. Want to use the `renderThis` without a structural directive? No problem! Replace the template with the following code block and you've got yourself a rendered template:
@@ -1216,7 +1200,6 @@ But rendering a template without changing it in any way isn't a very useful stru
So if we added an input with the same name as the directive ([as we did previously](#directive-same-name-input)) to accept a value to check the truthiness of, added an `if` statement to render only if the value is true, we have ourselves the start of an `ngIf` replacement that we've built ourselves!
```typescript
@Directive({
selector: '[renderThisIf]'
@@ -1344,16 +1327,16 @@ export class NgIfContext {
Just to recap, let's run through this line-by-line:
1. `_context` is creating a default of `{$implicit: null, ngIf: null}`
   - The object shape is defined by the `NgIfContext` class below
   - This is to be able to pass as a context to the template. While this is not required to understand how Angular implemented this directive in basic terms, it was left in to avoid editing code elsewhere
2. We're then defining a variable to keep track of the template reference and the view reference ([what `createEmbeddedView` returns](https://angular.io/api/core/EmbeddedViewRef)) for usage later
3. The constructor is then assigning the template reference to the variable, and getting a reference to the view container
4. We're then defining an input with the same name as a setter, as we did with our implementation
   - This setter is also calling an update function, just as we did with our implementation
5. The update function then checks whether the `$implicit` value in the context is truthy (as we're assigning the value of the `ngIf` input to the `$implicit` key on the context)
6. Further checks are made to see if there is a view reference already.
   - If there is not, it will proceed to make one (checking first that there is a template to create off of)
   - If there is, it will not recreate a view, in order to avoid performance issues by recreating views over-and-over again
## Microsyntax
@@ -1428,12 +1411,10 @@ export class MakePigLatinDirective {
export class AppComponent {}
```
<iframe src="https://stackblitz.com/edit/start-to-source-31-structural-named-context?ctl=1&embed=1&file=src/app/app.component.ts" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
Just as before, we would use semicolons to split the definitions, then bind the external (as in: from the directive) context value of `original` to the local (this template) variable of `ogMsg`.
### Additional Attribute Inputs
With a typical — non-structural — directive, you'd have inputs that you could add to your directive. For example, you could have a directive with the following inputs:
@@ -1513,7 +1494,6 @@ The magic in the syntax comes from that input name. I know in previous examples
**This is why we usually call the directive selector the structural directive prefix — it should prefix the names of any of your microsyntax inputs**. Outside of the prefix rule, there's little else that you'll need to keep in mind with these input names. Want to make it `makePiglatinCasingThingHere`? No problem, just change that part of the input syntax to read `casingThingHere: 'upper'`
#### Why not bind like a typical input?
Now, I remember when I was learning a lot of the structural directive stuff, I thought "well this syntax is cool, but it might be a bit ambiguous". I decided I was going to change that a bit:
@@ -1687,7 +1667,6 @@ A key expression is simply an expression that youre able to bind to an input
- You'll then want to **place an expression that will be passed as the input value** for the `key` you started the key expression with
- Finally, _if you'd like to save the input value_, you're able to **use the `as` keyword**, followed by the name you'd like to save the input value to (as a template input variable)
```html
<p *makePigLatin="inputKey: 'This is an expression' as localVar"></p>
<p *makePigLatin="inputKey: 'This is an expression'"></p>
@@ -1702,8 +1681,8 @@ The `let` binding:
- Starts with the `let` reserved keyword
- Then lists the template input variable to save the value to
- You'll then want to put the key of the context you want to save a value of after a `=` operator
  - It's worth mentioning that this is optional. This is because of the `$implicit` key in context.
    EG: a context of `{$implicit: 1, namedKey: 900}` and `let smallNum; let largerNum = namedKey` would assign `1` to `smallNum` and `900` to `largerNum`
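That fallback-to-`$implicit` behavior can be sketched with a small helper; `resolveLets` is purely illustrative, not an Angular API:

```typescript
// Resolves a list of `let` bindings against a context object.
// `bindings` maps each template input variable to a context key, or to
// null for a bare `let x`, which falls back to `$implicit`.
type TemplateContext = { [key: string]: unknown };

function resolveLets(
  bindings: Record<string, string | null>,
  context: TemplateContext
): Record<string, unknown> {
  const resolved: Record<string, unknown> = {};
  for (const [localName, contextKey] of Object.entries(bindings)) {
    resolved[localName] = context[contextKey ?? '$implicit'];
  }
  return resolved;
}

// Mirrors the example above: `let smallNum; let largerNum = namedKey`
const vars = resolveLets(
  { smallNum: null, largerNum: 'namedKey' },
  { $implicit: 1, namedKey: 900 }
);
```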
### Combining Them Together
@@ -1821,14 +1800,16 @@ export class AppComponent {
<iframe src="https://stackblitz.com/edit/start-to-source-39-uni-for?ctl=1&embed=1&file=src/app/app.component.ts" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
- We're starting with enabling `uniFor` as the structural directive name
- Then we're defining an input to accept `of` as a key in the syntax (to match the `ngFor` structural directive syntax).
- We can then reference this value later with `this.uniForOf` just as we are in the `ngAfterViewInit`.
- In that lifecycle method, we're then creating an embedded view for each item in the array
  - This view is passed a context with an implicit value (so that `_var` in `let _var of list` will have the value of this item)
  - We also pass the index to the context to give a boolean if an item is the first in a list
  - Then we pass a `uniForOf` so that we can use `as` to capture the value passed to the `of` portion of the syntax
- Finally, we use the [async pipe](https://angular.io/api/common/AsyncPipe) to get the value of the array that's inside of an observable
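The per-item context described in the list above can be sketched as follows; the key names mirror the text (`$implicit`, a first-item flag, `uniForOf`), but the exact shape is an assumption for illustration:

```typescript
// Builds one context object per list item, as the recap describes.
function buildForContexts<T>(list: T[]) {
  return list.map((item, index) => ({
    $implicit: item,      // what `let _var of list` binds `_var` to
    isFirst: index === 0, // the index-derived boolean mentioned above
    uniForOf: list,       // lets `as` capture the full value of `of`
  }));
}

const contexts = buildForContexts(['a', 'b', 'c']);
```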
# Conclusion
@@ -22,7 +22,8 @@ It's important to note that _"networking" is a broad, catch-all term that infers
> That said, you need the right binary data to be input into the CPU for it to process, just like our brains need the right input to find the answer of what to do. Because of this, communication with the CPU is integral
# Architecture {#network-architectures}
There are a lot of ways that information can be connected and transferred. We use various types of architecture to connect them.
_Computers speak in `1`s and `0`s, known as binary_. These binary values come in incredibly long strings of combinations of one of the two symbols to _construct all of the data used in communication_.
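A quick, purely illustrative sketch of that point: even a short piece of text expands into long strings of `1`s and `0`s before it ever crosses a wire (this snippet is ours, not from the original article).

```typescript
// Converts each character to its 8-bit binary representation.
function toBinary(text: string): string {
  return Array.from(text)
    .map((ch) => ch.charCodeAt(0).toString(2).padStart(8, '0'))
    .join(' ');
}

const bits = toBinary('Hi'); // "01001000 01101001"
```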
@@ -44,7 +45,7 @@ Furthermore, because error-handled bi-directional cancelable subscriptions (like
## Packet Architecture {#packet-architecture}
The weaknesses of the bus architecture led to the creation of the packet architecture. The packet architecture requires a somewhat higher-level understanding of how data is sent and received. To explain this concept, we'll use an analogy that fits really well.
Let's say you want to send a note to your friend that's hours away from you. You don't have the internet, so you decide to send a letter. In a typical correspondence, you'd send off a letter, include a return address, and wait for a response back. That said, _there's nothing stopping someone from sending more than a single letter before receiving a response_. This chart is a good example of that:
@@ -56,7 +57,7 @@ Similarly, a packet is _sent from a single sender, received by a single recipien
Letters may not give you the same kind of continuous stream of consciousness as in-person communications, but they do provide something in return: structure.
The way you might structure your thoughts when speaking is significantly different from how you might organize your thoughts on paper. For example, in this article, there is a clear beginning, end, and structured headings to each of the items in this article. Such verbose metadata (such as overall length) cannot be communicated via in-person talking. _The way you may structure data in a packet may also differ from how you might communicate data via a bus_.
That said, simply because there's a defined start and an end does not mean that you cannot _send large sequences of data through multiple packets and stitch them together_. Neither is true for the written word. This article does not contain the full set of information the series we hope to share, but rather provides a baseline and structure for how the rest of the information is to be consumed. So too can packets provide addendums to other packets, if you so wish.
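To make the stitching idea concrete, here's a toy sketch (our own illustration, with an assumed packet shape) of how sequence metadata lets multiple packets be reassembled into one message:

```typescript
// Illustrative only: a minimal packet shape with sequence numbers.
interface Packet {
  seq: number;     // position of this packet in the sequence
  total: number;   // how many packets make up the whole message
  payload: string;
}

function reassemble(packets: Packet[]): string {
  return packets
    .slice() // avoid mutating the caller's array
    .sort((a, b) => a.seq - b.seq) // packets may arrive out of order
    .map((p) => p.payload)
    .join('');
}
```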
@@ -23,13 +23,13 @@ In order to do this, youll need to install a few things first:
During installation, it will ask you if you want to set up an emulator. You'll want to install all of the related Intel Virtualization packages, as doing so will greatly increase the speed of the emulator.
- Download and install the android-platform-tools. This will include the `adb` command directly on your path for you to utilize:
- For macOS, I suggest using [the Homebrew package manager](https://brew.sh/) and running `brew cask install android-platform-tools`
- For Windows, I suggest using [the Chocolatey package manager](https://chocolatey.org/) and running `choco install adb`
Then, press “Configure” in the bottom right corner. Then press the “AVD Manager” option.
![The sub-menu for configure in the Android Studio startup screen](./2.png)
You’ll see a popup window that will show you the list of virtual devices. _These are devices that will be used in order to run an emulator_. You may already have a virtual device set up from the initial setup of Android Studio. They include the version of the operating system you use when you boot up the device. While the virtual device that was set up out-of-the-box is fine for most operations, we’ll want to set up an older version of the emulator. This will allow us to change the hosts file in Android, which requires root (something the default images won’t allow).
Select **Create Virtual Device**, then select a device type. In my example, I selected **Nexus 5**, but any device definition of a relatively modern phone should work.
![A popup dialog for creating a new virtual device setup](./3.png)
As mentioned before, the default images that are provided will not allow us to replace the hosts file. In order to do so, _we have to download an older Android image_ (one that does not include the Google Play Store). To do this, I selected the **x86\_64 Android 7.1.1** (non-Google API version) image to download and then selected **Next**.
![The selection of the aforementioned Nougat image](./4.png)
It’s worth noting that we specifically must select a non-Google version, otherwise our future commands will not work (per Google’s restrictions on Google API images).
After this step, proceed to name the Android device. _I’d suggest you name it something without any spaces, to simplify a command you’ll need to run later_. In this case, I called the image **Nexus5**.
![My naming of the AVD](./5.png)
Once the AVD is initially set up, open your terminal, and find your installation path of Android Studio.
- For macOS, this should be under **\~/Library/Android/sdk**
- For Windows, this _should_ be **C:\Users\\<username>\AppData\Local\Android\sdk**
Once in that path, you want to run a specific emulator command:
Once you’re done with running the emulator, open a new tab and run the following commands:
![A screenshot of the above commands running](./7.png)
Upon running these commands, you’ll find a **hosts** file. _This file tells your OS what address a given domain maps to._ You can, for example, map `example.com` to go to a specific IP address, similar to how DNS works for most domains.
Inside the emulator, the IP address `10.0.2.2` refers to the _host_ OS. For example, if youre running a local server on your Windows/MacOS/Linux machine on `localhost:3000`, you can access it using `10.0.2.2:3000` from the Android emulator.
Knowing these two things, you can change the hosts file to make `example.com` refer to the host by adding the following to the hosts file:
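The exact entry is trimmed from this excerpt, but given the `10.0.2.2` alias described above, the mapping would look something like this (a hypothetical example line):

```
10.0.2.2    example.com
```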


`console.log(findPath(3, 3, 2, 1));`
I correctly got 1.
One of the spots that we can reach in 2 moves is \[4,6].
```javascript
console.log(findPath(3, 3, 4, 6));
```


Tech recruiting is difficult. Interviews are tricky for candidates - and for interviewers. One of the untold challenges of interviewing is knowing how to set up good candidates for success. After all, you want a process that tests the right skills and filters out the noise that isn’t helpful in evaluating candidates.
This can be done in many ways, but let’s talk about a few today.
# Don’t Be Afraid To Help
Something to keep in mind while interviewing candidates is that they’re just like you and me: people. People make mistakes from time to time or might get stuck on a certain phrasing of a question.
Oftentimes, lending a gentle helping hand can be the ticket to a successful interview. It can be as simple as rephrasing the question in a way that points towards the solution, or providing a structural bit of code that needs tweaking in order to be solved.
This is particularly beneficial for junior engineers, whose interviews should focus more on “thought process” and “ability to learn and communicate” than existing skill sets. However, even senior engineers can have the solution escape them until it finally clicks with some small assistance.
While it might seem counterintuitive to assist a candidate (even in small ways) during an interview, you have to remember that they need support. In their future role with your company, they won’t (and shouldn’t) be working in isolation. Instead, they will have a team to lean on. By giving a small hint here and there, you’re able to understand how they receive feedback and when they need help.
# Allow for Resources
As mentioned earlier, candidates are just people. Because of that, you will never find an all-knowing candidate who relies solely on their existing knowledge to fix an issue (no matter what big-ego Jim says). Time and time again I’ve heard from seasoned developers that research and cheat-sheets are part of their daily engineering work.
While it might not be immediately obvious, knowing how to search for and find the relevant content is incredibly important. Not only that, it’s something that’s developed gradually alongside a developer’s journey - just like any other skill.
After all, the point of coding interviews is to see how capable a developer is at the job they’re applying for. You want to test in real-world situations, not in an isolated environment that doesn’t represent the daily aspects of the job.
# Less Algorithms, More Demos
Speaking of representing a job in a more realistic light: think about the last time you had a ticket in your backlog that required discussion of tree reversal (or a similar algorithm). Now think of the last time you asked a question like that in your interviews. See where I’m going here? I’m not implying that algorithm questions are inherently bad for every position, but in this industry they’ve been used as a stop-gap for more relevant questions.
Many engineers can attest to being asked algorithmic questions in an interview - only to be working with styling and application state management in their day jobs. Usages of complex algorithms are few and far between - especially in front-end engineering.
Even when algorithms **are** relevant, there’s usually a team to discuss with, research to be done, and benchmarking to verify the usage of a given algorithm for key application logic. These discussions can take significantly longer than an hour-long interview.
Not only are these questions rarely representative of the actual job, they’re also easy to cheat on for someone with enough free time to dedicate to algorithm memorization. Googling “interview algorithm questions” provides over 17 million results. Even the first page of results promises to teach you how to instantly solve dozens of common algorithm questions.
While real-world code samples provide many upsides, setting up a real-world example takes more work.
# Take-Homes
We at CoderPad are **strong** advocates of take-home interviews for technical assessments. While [we’ve written about many of the benefits of take-homes before](https://coderpad.io/blog/hire-better-faster-and-in-a-more-human-way-with-take-homes/), we’ll touch on some of the advantages here:
- A lower-stress environment for the candidate
- We’ve heard a lot from the autistic community and those with anxiety that this helps a lot


Every element in the browser has a box model. You can inspect them using browser dev tools.
Nearly every HTML element has some default browser styles, called HTML defaults. These defaults may change depending on the browser’s rendering engine.
> 🤓 Not every browser supports every CSS property! For up-to-date browser support I suggest checking out [Can I Use?](https://www.google.com/search?q=caniuse&rlz=1C1CHBF_enCA963CA963&oq=caniuse&aqs=chrome.0.69i59j69i60l3.1776j0j4&sourceid=chrome&ie=UTF-8)
Every HTML element has a place and a purpose. Some HTML elements are strictly used for grouping content and are generally referred to as containers, while other HTML elements are used for text, images and more.
`justify-content: space-evenly;`
> 🤓 Space your content out with justify-content
Here is a list of CSS properties used to control flexbox properties:
- [`flex-direction`](https://developer.mozilla.org/en-US/docs/Web/CSS/flex-direction) - controls flexbox direction
- [`flex-grow`](https://developer.mozilla.org/en-US/docs/Web/CSS/flex-grow) - controls a flex item’s grow factor
There are five types of element positions:
#### Flexbox:
- Used in headers, lists, tags, or any other block or inline content with the correct flex-direction
- Primary method to align and justify content in small components
- Easy to use
For example, YouTube uses flexbox to space out its header’s child elements:
> 🤓 Mastering the flexbox will take you very far in CSS as it is used everywhere
#### Gridbox:
- Used in creating complex layouts that require both columns and rows
- Provides the easiest and shortest way to center elements
- Verbose and powerful
For example, Spotify uses a gridbox to achieve their playlist player layout:
![spotify.png](./spotify.png)
#### Positioning:
- Used in lightboxes, mobile menus, modal windows, and similar overlaying elements
- Primarily used to remove elements from document flow
For example, the cookie modal on Stack Overflow uses a fixed position to stay on your screen while hovering above other document elements:
There are five basic CSS selectors:
- **Type (`h1`)** - Targets all elements with the given type
- **Attribute (`[type="submit"]`)** - Targets all elements with the given attribute
> 🤓 I recommend using the `.class` selector over the `#id` selector as ID attributes are unique
You can group selectors under one CSS rule using commas to share properties among multiple selectors:
CSS variables allow us to define arbitrary values for reuse across a stylesheet.
It is common to use CSS variables for repeated values such as colors, font-size, padding, etc.
> ⚡ [Live Code Example: CSS Variables](https://codesandbox.io/s/css-variables-tx14z?file=/styles.css)


While you're more than able to cache database calls manually, sometimes it's convenient to have that handled for you.
# Pros and Cons {#pros-and-cons}
| Option | Pros | Cons |
| --------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------- |
| [Key-Value Pair Storage](#default-preference) | <ul><li>Extremely fast</li><li>Useful for simple data storing</li></ul> | <ul><li>Can only store serializable data</li><li>Not very cleanly separated</li><li>Not very secure</li></ul> |
| [Secure Key-Value Storage](#secure-key-store) | <ul><li>Fast</li><li>A secure method of data storing</li></ul> | <ul><li>Can only store serializable data</li><li>Not very cleanly separated</li></ul> |
| [SQLite without ORM](#sqlite-storage) | <ul><li>Cleanly separated data</li></ul> | <ul><li>Difficult to maintain code and table migrations manually</li><li>Not very fast compared to key-value pairs</li></ul> |
| [SQLite with ORM](#orms) | <ul><li>Cleanly separated data</li><li>Much more easy to maintain than writing SQL itself</li></ul> | <ul><li>Often slower than writing SQL by hand</li><li>More work to get setup</li></ul> |
| [Serverless](#serverless) | <ul><li>Simple setup</li><li>No need to schema or migrate a database when data requirements change</li></ul> | <ul><li>Potentially difficult to cache</li><li>Not on-device</li></ul> |
| [RealmDB](#realm) | <ul><li>An easy-to-sync on-device and cloud storage</li></ul> | <ul><li>A heavier requirement of investment than something more standard</li><li>Large migrations on the horizon</li></ul> |
# Conclusion


```javascript
app.get('/', (req, res) => {
  // …
});

app.listen(3000);
```
You'll notice that we're using the dummy endpoint <http://www.mocky.io/v2/5e1a9abe3100004e004f316b>. This endpoint returns an array of values with a shape much like this:
```json
[
  { "employee_age": 23 }
]
```
In order to access the debugger, you'll need to open up Chrome and go to the URL `chrome://inspect`.
Then you'll want to select `inspect` on the node instance.
Doing so will bring up a screen of your entrypoint file with the source code in a window with line numbers.
![The aforementioned code screen](./initial_debugger.png)
Think about running your code like driving an experimental race-car.
It's similar to a debug mode of your program. You can evaluate data using `console.log`, but _to gain greater insight, you may want to pause your application_ and inspect the small details of its state. This is where breakpoints come into play: they allow you to place "pause" points in your code, so that when execution reaches a line with a breakpoint, your code pauses and gives you much better insight into what it is doing.
To set a breakpoint from within the debugging screen, you'll want to select a code line number off to the left side of the screen. This will create a blue arrow on the line number.
> If you accidentally select a line you didn't mean to, that's okay. Pressing this line again will disable the breakpoint you just created
A race-car needs to drive around the track until the point where the pit-stop is in order to be inspected; _your code needs to run through a breakpoint in order to pause and give you the expected debugging experience_. This means that, with only the above breakpoint enabled, the code will not enter into debug mode until you access `localhost:3000` in your browser to run the `app.get('/')` route.
> Some applications may be a bit [quick-on-the-draw](https://en.wiktionary.org/wiki/quick_on_the_draw) in regards to finding an acceptable place to put a breakpoint. If you're having difficulties beating your code running, feel free to replace the `--inspect` flag with `--inspect-brk` which will automatically add in a breakpoint to the first line of code in your running file.
>
> This way, you should have the margins to add in a breakpoint where you'd like one beforehand.
Once you do so, you're in full control of your code and its state. You can:
- _Inspect the values of variables_ (either by highlighting the variable you're interested in, looking under the "scope" tab on the right sidebar, or using the `Console` tab to run inspection commands à la [`console.table`](https://developer.mozilla.org/en-US/docs/Web/API/Console/table) or [`console.log`](https://developer.mozilla.org/en-US/docs/Web/API/Console/log)):
![A screenshot of all three of the mentioned methods to inspect a variable's value](./inspect_variable_value.png)
- _Change the value of a variable:_
![A screenshot of using the Console tab in order to change the value of a variable as you would any other JavaScript variable](./change_variable_value.png)
- _Run arbitrary JavaScript commands_, similar to how a code playground might allow you to:
So, if we want to see what happens after the `body` JSON variable is parsed into `responseList`, we can step through the next few lines.
Knowing this, let's move through the next few lines manually by stepping through each one. The values of the variables as they're assigned should show up to the right of the code itself in a little yellow box; this should help you understand what each line of code runs and returns without `console.log`ging or other manual inspection.
![A screenshot showing ran lines until line 12 of the "console.log". It shows that "employeeAges" is "\[undefined\]"](./next_few_lines.png)
But oh no! You can see, `employeeAges` on line `9` is the one that results in the unintended `[undefined]`. It seems to be occurring during the `map` phase, so let's add in a breakpoint to line `10` and reload the `localhost:3000` page (to re-run the function in question).
Once you hit the first breakpoint on line `7`, you can press "play" once again to continue to the breakpoint on line `10`.
This will allow us to see the value of `employee` to see what's going wrong in our application to cause an `undefined` value.
![A show of the "employee" object that has a property "employee\_age"](./inspect_employee.png)
Oh! As we can see, the name of the field we're trying to query is `employee_age`, not the typo'd `employeeAge` property name we're currently using in our code. Let's stop our server, make the required changes, and then restart the application.
We will have to run through the breakpoints we've set by pressing the "play" button.
![Showing that the console log works out the way expected once the map is changed](./working_ran_debugger_code.png)
There we go! We're able to get the expected "23"! That said, it was annoying to have to press "play" twice. Maybe there's something else we can do in scenarios like this?
## Disabling Breakpoints {#disabling-breakpoints}
Once inside the `map` function, there's even a button _to get you outside of that function_.
> }
> return ageArray;
> };
>
> app.get('/', (req, res) => {
> request('http://www.mocky.io/v2/5e1a9abe3100004e004f316b', (error, response, body) => {
> const responseList = JSON.parse(body);

View File

```javascript
function calculateUserScore({killsArr, deaths, assists}) {
  // …
}
```
While we've seen the function change, remember that your game may be making this calculation in multiple parts of the codebase. On top of this, maybe your API _still_ isn't perfect for this function. What if you want to display the special kills with additional points after a match?
These drastic refactors mean that each iteration requires additional refactor work, likely delaying ticket completion. This can impact release dates or other scheduled launches.
In fact, this _includes_ tests. 😱 Tests are a good way of conveying API examples.
In particular, if you're good about [writing primarily integration tests](https://kentcdodds.com/blog/write-tests), you're actually writing out usage API docs while writing testing code.
This is particularly true when writing developer tooling or libraries. Seeing a usage example of how to do something is extremely helpful, especially with a test to validate its behavior alongside it.
---
Another thing "documentation-driven development" does not prescribe is "write once and done." This idea is a myth and may be harmful to your scope and budgets - time or otherwise.
As we showed with the `calculateUserScore` example, you may need to modify your designs before moving forward for the final release: that's okay. Docs influence code influence docs. The same is true for TDD.
---
DDD isn't just useful for developing code for production, either. In interviews, some good advice to communicate your development workflow is to write code comments and **then** write the solution. This allows you to make mistakes in the documentation phase (of writing comments) that will be less time-costly than if you'd made a mistake in implementation.
By doing this, you can communicate with your interviewer that you know how to work in a team and find well-defined goals. These will allow you to work towards an edgecase-free\* implementation with those understandings.
# Bring it back now y'all
Each of these refers to a form of validating the functionality of code.
# Conclusion
I've been using documentation-driven development as a concept to drive my coding on some projects. Among them was my project [`CLI Testing Library`](https://github.com/crutchcorn/cli-testing-library), which allowed me to write a [myriad of documentation pages](https://github.com/crutchcorn/cli-testing-library/tree/main/docs) as well as [verbose GitHub issues](https://github.com/crutchcorn/cli-testing-library/issues/2).
Both of these forced me to better refine my goals and what I was looking for. The end-product, I believe, is better as a result.
What do you think? Is "DDD" a good idea? Will you be using it for your next project?
Let us know what you think, and [join our Discord](https://discord.gg/FMcvc6T) to talk to us more about it!


Then if you tried the same in Angular:
What happened? Let's compare the custom components.
`MyTextCell.vue`
<iframe src="https://codesandbox.io/embed/async-leftpad-gjxmqv?codemirror=1&fontsize=14&hidenavigation=1&module=%2Fsrc%2Fcomponents%2FMyTextCell.vue&theme=dark&view=editor"
style="width:100%; height:500px; border:0; border-radius: 4px; overflow:hidden;"
title="MyTextCell.vue"
></iframe>
`text-cell.component.ts`
<iframe src="https://codesandbox.io/embed/cranky-shadow-frukfg?codemirror=1&fontsize=14&hidenavigation=1&module=%2Fsrc%2Fapp%2Ftext-cell.component.ts&theme=dark&view=editor"
style="width:100%; height:500px; border:0; border-radius: 4px; overflow:hidden;"
title="text-cell.component.ts"
Vue output:
Angular output:
```html
<my-table _ngcontent-rbt-c270="" _nghost-rbt-c271="">
<table _ngcontent-rbt-c271="">
```
So much more going on. This is because, even though the components look like they are doing the same thing, the renderer must output extra tags. Now, this isn't how I'd make a table in Angular, and there are ways to address this (like using `display: table-cell` and `role` attributes on the host), but it could still be a hindrance.
This doesn't mean Angular is all that bad. Actually, the reason Angular components are written this way is to closely resemble a proposed standard: Web Components.
Speaking of which, check out [Corbin](/unicorns/crutchcorn)'s [series on Web Components](/collections/web-components-101) and also [Angular elements](https://angular.io/guide/elements).


If you didn't know already, when you get the MOD of something, you divide the first number by the second, and the answer is the remainder.
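In Python (the language this article's solver uses later), MOD is the `%` operator:

```python
print(17 % 5)  # 2, because 17 divided by 5 is 3 with a remainder of 2
print(10 % 7)  # 3
```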
## The days of the week
The days of the week have numbers; I have put ways of remembering them in brackets:
```
0 = Sunday (Noneday)
1 = Monday (Oneday)
2 = Tuesday (Twosday)
@@ -36,6 +37,7 @@ The days of the week have numbers, I have put ways of remembering them in bracke
5 = Friday (Fiveday)
6 = Saturday (Six-a-day)
```
This makes it very easy to add numbers to them.
## Anchor Days
For examples of this, let's use the 18th of March 1898 for reference.
- 4th of April
- 6th of June
- 8th of August
- 10th of October
- 12th of December
There is one for every month, but I have only listed the 4th, 6th, 8th, 10th, and 12th months; this leaves January, February, March, May, July, September, and November - or the 1st, 2nd, 3rd, 5th, 7th, 9th, and 11th months. I'll refer to months by their numbers from now on, as it helps with the calculations later.
For the 5th month, the 9th is a doomsday, and for the 9th month, the 5th is. An easy way of remembering it is "working 9 to 5".
Now it's just the 7th month and the 11th month. This is where the rest of the mnemonic comes in. For the 7th month, the 11th is a doomsday, and the same the other way around: for the 11th month, the 7th is a doomsday. So the full mnemonic is "working 9 to 5, at 7-11".
List of all doomsdays (written as day/month):
- 3/1 or 4/1
- 28/2 or 29/2
Then you get the MOD 7 of the offset. Think of MOD like this: if you were to add 7 to Wednesday (3), then run MOD over that new number (10), you are back to Wednesday (3).
So using the example you would do:
```
a = 69 / 12 = 5
b = 69 % 12 = 9
c = 9 / 4 = 2

So, 5 + 9 + 2 = 16
```
Then get the MOD 7 of that, which is 2.
So we now know that the year offset is 2, and we know that the anchor day for the century is Wednesday, or 3. We add the two numbers together to get 5 and match that to our date chart, so for 1969, the doomsday is Friday.
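The arithmetic above can be sketched in Python (a sketch using integer division; `anchor` is the Wednesday anchor for the 1900s from the text):

```python
year = 1969
anchor = 3  # Wednesday, the anchor day for the 1900s

last_two = year % 100     # 69
a = last_two // 12        # 5
b = last_two % 12         # 9
c = b // 4                # 2
offset = (a + b + c) % 7  # 16 % 7 = 2

doomsday = (anchor + offset) % 7
print(doomsday)  # 5, which is Friday in the 0 = Sunday chart
```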
I wrote the script using functions so that it would be easy to edit the code.
### Coding part 1 _The backbone of this project_
First, I thought it would be good to start with the harder bit, which is the actual solver. But what is the first part you need for the calculations? Inputs. Let's build a script to generate those inputs.
```python
Doomsday = Doomsday % 7
```
So far the script looks like:
```python
day = int(input("What day do you want? (number needed) "))
month = int(input("What month do you want? (number needed) "))
# ...
for a in range(len(DoomsdayList)):
    if DoomsdayList[a][1] == month:
        location = a
```
But this causes a problem; `DoomsdayList` has only the dates for a leap year. But what if it's not a leap year? We can solve this problem by adding an if statement after our check. This if statement should check if `year MOD 4` is greater than or equal to 1. If this statement is true, set the 10th item of the list to `[3, 1]` and the 11th item to `[28, 2]`.
```python
if year % 4 >= 1:
    DoomsdayList[10] = [3, 1]
    DoomsdayList[11] = [28, 2]
```
### Quick summary
Just to summarize what we have done so far: we have three inputs (year, month, and day). The script calculates the anchor day for the century. It gets the last two digits of the year, then uses the a, b, and c calculations. Once this is done, it adds them together, gets the MOD 7 of that, then adds the anchor and takes the MOD 7 of that answer; the result of the last MOD is the doomsday. We have a `DoomsdayList` which contains the doomsdays, an if statement that changes the 10th and 11th items if it's not a leap year, and a loop that finds the location of the doomsday for the picked month.
The code should look like this now:
```python
day = int(input("What day do you want? (number needed) "))
month = int(input("What month do you want? (number needed) "))
# ...
for a in range(len(DoomsdayList)):
    if DoomsdayList[a][1] == month:
        location = a
```
### Coding part 2 _Doomsday and printing the right day of the week_
The most difficult bit is now done. Now, we only need to worry about the day of the month. After all, the script now knows the location of the closest doomsday. But this is only half true; we haven't told it to get the item in that location, which is luckily pretty easy to do:
```python
ClosestDoomsday = DoomsdayList[location]
```
Now we need the difference between `day` and `ClosestDoomsday[0]`; the `[0]` is there because that element is the day of the month.
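The code for this step is elided in this excerpt; it presumably looks something like the following sketch (`ClosestDoomsday` and `day` follow the surrounding text; the sample values are mine):

```python
ClosestDoomsday = [7, 11]  # sample entry: the 7th of November
day = 12                   # sample day of the month

# difference between the chosen day and that month's doomsday
Difference = day - ClosestDoomsday[0]
print(Difference)  # 5
```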
```python
# ...
DayOfWeek = DayOfWeek % 7
```
Then we can just output the number:
```python
print("This date falls on a", DayOfWeek)
```
But this would just print out a number, so we will create another list called `weekList`; we put this just above `DoomsdayList`:
```python
# 0 1 2 3 4 5 6
weekList = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
```
Then we can change the output command to:
```python
print("This date falls on a", weekList[DayOfWeek])
```
Finally, the completed script should look like this:
```python
day = int(input("What day do you want? (number needed) "))
month = int(input("What month do you want? (number needed) "))
# ...
print("This date falls on a", weekList[DayOfWeek])
```
With the hardest bit now out of the way and put into functions, we can relax. Make a new file, but keep the old one, as you can look back at it to see how it works and even modify it. We built it to understand what our logic should do, and it can be used as a template for other programs that will use it: like the tester.
I built the tester using `randint` from the module `random`. This should be your first line of code:
```python
from random import randint
```
### Coding part 1: _The functions_
In the previous program, we made three inputs: Year, Month, Day. We still need to pass these to the doomsday program, but we need to do it randomly. To keep our code's structure, we will use functions.
So whenever `randomYear` is called, it will pick a random number between `StartYear` and `EndYear`.
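The `randomYear` function itself is elided in this excerpt; based on the surrounding text, it is presumably just a thin wrapper around `randint` (a sketch; the argument names come from the sentence above):

```python
from random import randint

def randomYear(StartYear, EndYear):
    # pick a random year between the two bounds, inclusive
    return randint(StartYear, EndYear)
```

So `randomYear(1800, 2200)` returns some year in that range.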
#### Random Month
`randomMonth` is the easiest of the three. We don't need any arguments, just a line that picks a random number between one and twelve, as there are 12 months in a year:
```python
def randomMonth():
return randint(1, 12)
```

#### Random Day
`randomDay` is the hardest of the three, as there are 4 possibilities that can come out of the function: `31`, `30`, `29`, and `28`.
We need two arguments, which will be `Month` and `Year`. We need the month to see how many days there are in that month, but if it is February we also need to know whether it has 28 or 29 days, which depends on whether it's a leap year (that's why we need the year). The code looks like this for starters:
```python
def randomDay(Month, Year):
IsLeap = Year % 4 == 0
    # ... (31-day and 30-day months, and leap-year February, handled here)
    elif IsLeap == False and Month == 2:
        DaysInMonth = 28
```
After we have done this, all we need to do is get a random number between 1 and `DaysInMonth`:
```python
Day = randint(1, DaysInMonth)
```
Then we just return `Day`:
```python
return Day
```
So altogether, the `randomDay()` function should look like this:
```python
def randomDay(Month, Year):
IsLeap = Year % 4 == 0
    # ...
```
So now we have a random day, month, and year. We can now make a function from the solver script's code; we need to change it slightly, like getting rid of the inputs and putting a `return` at the end. It will have three arguments, and so that we don't need to go through and change the variable names, we will call them `day`, `month`, and `year`. Then we need a `return` at the end that returns the day of the week.
As I have already talked about how this code works, I will just give you the whole function:
```python
def Finder(day, month, year):
# 0 1 2 3 4 5 6
    # ... (the same calculations as the solver script)
    return weekList[DayOfWeek]
```
### Coding Part 2 _Making the question code **almost there**_
We will start by making a variable called `questions`. This will simply ask the user how many questions they want:
```python
questions = int(input("How many questions? "))
```
Then we will have a `for` loop that repeats for however many questions the user wants:
```python
for a in range(questions):
```
Until I say otherwise, all the code from now on goes inside this loop.
We want to get random `Year`, `Month` and `Day` first:
```python
Year = randomYear(1800, 2200)
Month = randomMonth()
Day = randomDay(Month, Year)
```
So this will get a random year between 1800 and 2200, a random month, then a random day using the data from Month and Year.
We then want the code to figure out the correct day. This is easy, as we have a function that we just built that does this:
```python
DayOfWeek = Finder(Day, Month, Year)
```
So even before the question is asked the code already knows the answer.
Then we ask the user what the day is:
```python
print(a+1, ". What day is ", Day, "/", Month, "/", Year, sep="", end=" ", flush=True)
```
This will show the question number (the `+1` is there because the loop starts at 0), then the date, and then leave space for the input.
We then need to get the input. Very easy. We will then convert the string to uppercase so that if they put `Monday`, `monday` or `mOnday` it would just set it to `MONDAY`:
```python
guess = input()
guessUpper = guess.upper()
```
We also want to do this to the answer that the code got as well:
```python
DayOfWeekUpper = DayOfWeek.upper()
```
Then we can find out if the user was right using an `if` statement:
```python
if guessUpper == DayOfWeekUpper:
    Correct = True
else:
    Correct = False
```
Then we need the code to do something if they got it right; we can use another `if` statement:
```python
if Correct:
    print("That is correct.")
    score = score + 1
else:
    print("That is incorrect. The correct answer is", DayOfWeek)
```
Here we have used a variable called `score` which we haven't set anywhere so just above the `for` loop we will put:
```python
score = 0
```
We put it outside the loop because, if it were inside the loop, `score` would be set back to zero after each question, so it would only ever be 1 or 0.
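To see why the placement matters, here is a minimal sketch (the three-question run is hypothetical):

```python
score = 0  # outside the loop: the total survives across questions
for question in range(3):
    # if `score = 0` were here instead, each pass would wipe the total
    score = score + 1
print(score)  # 3
```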
So now outside the `for` loop we will show the percentage of correct answers:
```python
print("Your percentage is", (score / questions) * 100)
```
And that's it! I will put the whole code here so that you can check errors and stuff:
# If you just want the code and don't care how it works, look here!
```python
from random import randint

# ...
```

---
While working on [my React Native mobile app](https://gitshark.dev), [the super-talented designer for the project](/unicorns/edpratti) raised an interesting question to me:
> "Are we able to draw under the navigation bar and status bar? [Google officially recommends new apps to do so](https://youtu.be/Nf-fP2u9vjI)."
The idea of drawing under the navbar intrigued me. After lots of research, I was finally able to implement it in my app, but not without struggles. Let's walk through how to do it manually and what I ended up doing to solve the issue myself.
Feel free to follow along with the code samples, but if you're looking for the easiest solution, [you might want to read to the bottom to see how to easily integrate it into your app without all of the manual work](#react-native-immersive-bars).
# The Wrong Way {#flag-layout-no-limits}
Once this was done, I loaded my app and "et voilà"!
![The FAB is placed under the navbar as expected](./flag_layout_no_limits.png)
"Success," I'd thought to myself. Since the FAB was drawn under the navbar, I thought the goal had been achieved! However, once I tried [the `safe-area-context` package](https://github.com/th3rdwave/react-native-safe-area-context) to draw margins and paddings (to move the FAB above the navbar), I faced difficulties.
When I utilized the following code:
# The Easy Method {#react-native-immersive-bars}
Let's not sugar-coat it: it's tedious to make changes to native Android code in order to support all of the various API levels there are and the various forms of OEM issues that could arise. Likewise, if your app implements a dark mode, there's now another level of challenge: you have to toggle the light and dark navigation buttons yourself!
Fear not, fellow developer! I've taken my learnings from implementing this into [my mobile Git Client](https://gitshark.dev) and created a package for you to utilize!
<https://github.com/crutchcorn/react-native-immersive-bars>
It's as simple to add as:
It supports dark mode switching, as many API levels as React Native does, and much more.
This feature was not a trivial one for me to implement. Not often is it that such a short article reflects how long I'd spent debugging and researching this issue. I want to make sure to thank [James Fenn](/unicorns/fennifith) and [Sasi Kanth](https://github.com/msasikanth) for helping me debug and research for this. I'm happy I did so, though. I think it adds a nice level of polish to my app, and I think you'll find the same in your app as well. Hopefully, the package I made is able to ease the process for you. If you have any comments or questions regarding the package, please refer to the GitHub issues for said project.
Otherwise, if you have comments or questions about the article, you can leave them in the comments down below. We also have a newsletter that you can subscribe to for more articles like this: I plan on writing much more about React Native as I develop my app.

---

> Ten en cuenta que Jest (y por entonces, Testing Library) no es exclusivo a
> ninguna herramienta o framework. Este artículo pretende dar consejos generales sobre testing
>
> Con eso dicho, si estas planeando en incluir Jest y Testing Library en tu aplicación de Angular,
> pero no sabes por dónde empezar, \[hemos escrito una guía sobre cómo hacerlo]
# No incluyas lógica que pertenece a tu aplicación en tus tests {#dont-include-logic}
El problema con el que me encuentro es que a veces no es un placer para otra gente tener que leer (o depurar) este tipo de código. Esto es más notable cuando escribo tests: cuando no me aseguro de mantenerlos sencillos, mis tests tienden a sufrir.
Para demostrar este argumento, vamos a utilizar un componente de ejemplo: una tabla. Este componente debería tener esta funcionalidad:
- Paginación opcional
- Cuando la paginación está desactivada, debería enumerar todos los elementos
- Mostrar una fila de varios conjuntos de datos
Así que ahora la pregunta es: ¿cómo generamos grandes porciones de datos sin tener que incluirlos de forma manual?
Todavía puedes hacerlo programáticamente como lo hicimos antes, simplemente tienes que guardarlo en un archivo separado. Por ejemplo:
```javascript
const faker = require('faker')
const fs = require('fs')
// ...
```
Ahora los tests para el componente se ven mucho más sencillos:
```javascript
// ComponenteConectado.spec.tsx
it("se muestra sin datos", async () => {

// ...
```

---

We've collected five methods for simplifying your tests while making them easier to read.
You may notice that our code samples use various libraries from [the Testing Library suite of libraries](https://testing-library.com/). This is because we feel that these testing methodologies mesh well with the user-centric testing that the library encourages.
> Keep in mind that Jest (and furthermore, Testing Library) is not exclusive to
> any specific framework or toolset. This article is meant just as general advice for testing.
>
> That said, if you're looking to include Jest and Testing Library into your Angular app,
You can then run `const mockData = require('./mock_data.js')` inside of your tests.
While working on tests, it can be easy to group together actions into a single test. For example, let's say we want to test our table component for the following behaviors:
- Shows all of the column data on users
- Make sure a user on page 2 does not show when looking at page one
While this may cause slower tests as a result of duplicating the `render` function's actions, it's worth mentioning that most of these tests should run in milliseconds, making the extended time minimally impact you.
Even further, I would argue that the extended time is worth the offset of having clearer, more scope-restricted tests. These tests will assist with the debugging and maintainability of your test suite.
# Don't Duplicate What You're Testing {#dont-duplicate}
There's yet another advantage of keeping your tests separated by `it` blocks that I haven't mentioned yet: It frees you to reduce the amount of logic you include in the next test. Let's take the code example from before:
Or, we could write our test like this:
```javascript
it('should render all columns of data', () => {
expect(screen.getByText('Jadyn Larson')).toBeInTheDocument();
  // ...
});
```
Using these methods, tests can be simplified, often made faster, and the length of a testing file typically shortened. While this may sound straightforward on a surface level, writing tests is a skill that's grown like any other. Practice encourages growth, so don't be discouraged if your tests aren't as straightforward as you'd like at first.
If you have any questions about testing, or maybe have a test you're unsure how to simplify, be sure to join [our Discord Server](https://discord.gg/FMcvc6T). We engage in tons of engineering discussions there and even live pair-program solutions when able.

---

While every new tech has its naysayers, and even some on our team are skeptical of its true utility, it's undoubtedly a powerful tool that will, for some, change the way they write code.
GitHub's not alone in this venture of AI-powered code generation, either! Other companies, such as [Kite](https://www.kite.com/) or [TabNine](https://www.tabnine.com/), are working hard at this problem space as well! Whether we want it to or not, AI-generated code helpers are here to stay.
Let's see how Copilot interacts with some common interview questions.
One common question that's a favorite of interviewers hoping to quickly glean mathematical competency is a function to check if a given number is prime or not.
Well, it's been a few years since I've refreshed my math skills, so they might be a little shaky. I should be able to figure it out all the same.
Let's open VSCode and start implementing the function.
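Copilot's actual suggestion is elided from this excerpt, but it was along these lines; this is a sketch of the kind of code it produces for this prompt, not its verbatim output (note the `var`):

```javascript
function isPrime(n) {
  if (n < 2) return false;
  // trial division up to the square root of n
  for (var i = 2; i <= Math.sqrt(n); i++) {
    if (n % i === 0) return false;
  }
  return true;
}

console.log(isPrime(13)); // true
console.log(isPrime(15)); // false
```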
Sure, we could change the `var` to a `const`, but this is code that I wouldn't blink twice at in a code review!
While this may seem like an outlier, Copilot seems to excel in these algorithm-based questions.
But surely `isPrime` would be too trivial for a _real_ interview question, right? Perhaps, but watch what happens when a popular coding YouTuber attempts to utilize Copilot to solve Leetcode interview questions…
<https://www.youtube.com/watch?v=FHwnrYm0mNc>
Without fail, Copilot is able to generate usable code for each difficulty level of algorithm given to it. Further, the solutions are all more performant than the average of submissions. For the permutation question, it's faster than 88% of other submissions!
## Algorithms Are Breaking Your Interviews
On paper, algorithm-based interview questions sound like a great way to assess a candidate's skills. They can help give guidance on a candidate's understanding of logic complexity and how efficient (or inefficient) a specific solution is, and they usually give insight into a candidate's ability to think in abstract ways.
However, in practice, algorithm questions tend to go against the grain of real-world engineering. Ideally, an interview process should act as a way to evaluate a candidate's ability to do the same kind of engineering they'd be using in their projects at your company. While a developer may, with resources, implement an algorithm once in a while, they're more than likely doing things like refactoring significantly more often.
But more than being unrepresentative of the job, algorithm questions are often easy to cheat on, even without the use of GitHub Copilot. Because most algorithm questions are significantly similar to each other, there's often a small selection of tips and tricks a candidate can memorize in order to drastically improve their output in these styles of interviews.
There's even the potential for a candidate to be able to output an algorithm verbatim. There are hundreds of sites that will give a candidate one algorithm question after another in the hopes of improving their understanding of these algorithms.
But with GitHub Copilot, the propensity for cheating on an algorithm question rises significantly. As we've demonstrated previously in the article, it's capable of generating significant portions of code at a time. In fact, Copilot is so proficient at algorithm questions that a [non-trivial number of the algorithm questions we asked it to solve were done](https://github.com/CoderPad/github-copilot-interview-question) before we could even finish the function signature. All it takes is for a candidate to give Copilot the name of the function and paste the results into their assessment editor.
Further, folks wanting to cheat have had the ability to do something similar for some time now in the form of forum questions. Simply look up any algorithm on a code forum or a site like [StackOverflow](http://stackoverflow.com/) and you can find hundreds of answers at your disposal.
In fact, many have pointed out that Copilot's process of looking up code based on its expected constraints is similar to what a developer might experience by searching StackOverflow for code snippets. Funnily, some thought the idea so similar that they decided to build an alternative VSCode plugin to Copilot that simply [looks up StackOverflow answers as suggestions](https://github.com/hieunc229/copilot-clone).
## How to Fix Your Interviews
While you _could_ simply require candidates to use a non-VSCode IDE for your technical assessments, there are no guarantees that your take-homes will be spared the same fate. Further, while VSCode is the launch platform for Copilot, it's more than likely to gain plugins for other IDEs in the future as well.
But let's say your company doesn't do take-homes ([even though you totally should](https://coderpad.io/blog/hire-better-faster-and-in-a-more-human-way-with-take-homes/)); what of it? Well, even if restricting VSCode would work to avoid Copilot for a while, you ideally want to be able to standardize your IDE platform for all candidates.
Plus, as we touched on in the previous section, algorithm-based interview questions can still be gamed, with or without Copilot.

---

Some have taken these advanced algorithm assessment capabilities as a warning sign.
Automation is amazing. Some would argue the whole point of programming is to automate as much as possible.
But when you automate things, you often lose a fair amount of nuance within the problem-space you're trying to automate. This is true for any industry and any problem you run into: especially so with programming.
We all know the meme: the junior engineer asks a question, and the senior mentor answers, "it depends."
Let's first remember what the job of an engineer or developer is. While on the surface, yes, developers do type code into their IDE, the real work is done in the developer's mind. To code something is to consider a problem's expected outcome, its constraints, and its edge cases, and to take those into account to decide on an implementation.
While Copilot is highly capable of generating _a_ solution, it doesn't know your engineering constraints. This is where architecture decisions come into play. Sure, you may know that you want a sorting algorithm, but _which_ sorting algorithm may be more important than being able to implement it. After all, if you want to implement a complex sort on a large dataset with limited memory, your biggest problems are likely to stem from knowing where to store your data in an [external sort](https://en.wikipedia.org/wiki/External_sorting), as opposed to the specific code syntax you'll utilize to make that a reality.
That said, not every engineer is at, or needs to be at, an architectural level. Some of us are most comfortable when we can focus within our IDEs, as opposed to the meeting rooms where those constraints often come to light. However, there is a skill that every developer will need to develop as they code: debugging.
## Refactors
Likewise, a common task in an existing codebase is to refactor it in order to be more secure, efficient, fast, readable, or otherwise better. While Copilot is able to glean context from the current file you're presently in, refactors can often span multiple files as you modify the underlying abstractions in a codebase. Even then, while [GitHub says they're adding support for full project-based context in the future](https://copilot.github.com/#faq-what-context-does-github-copilot-use-to-generate-suggestions), automated refactors would be extremely difficult to attain.
> When I'm talking about automated refactors, I'm _not_ talking about [codemods](https://www.sitepoint.com/getting-started-with-codemods/) powered by AST manipulation to, say, migrate from one version of a library to another. Codemods like those rely on consistent information existing for both versions of the library code being migrated. Further, these codemods don't come for free, and libraries must usually be engineered specifically with automated migrations in mind.
In order to automate refactors, Copilot would not only need to know how things _were_ done, but also what the newer method of doing things is. After all, the previous code exists for a reason: what is it doing, why is it doing what it does, and how can we improve it? When application-wide refactors occur, a team often sits down, discusses the advantages of standards, and sets a level of consistency to strive for. However, refactors often have hidden levels of complexity within. When actually diving into a refactor, there may be constraints in the new technology that were not known previously. When this occurs, the team must make decisions based on many parameters. A machine simply isn't up for the task.
## Code Review
Maybe, but you cant be certain it will get it right every time. However, the same can be said for others: you cant be certain another person on the team will get it right every time.
This nuance brings another point against the concept of developers being fully automated by Copilot: Code review.
Ideally, you shouldn't be allowing developers to push code directly to production on a regular basis. While there will always be emergency scenarios where this doesn't apply, it's dangerous to ignore the code review stage. This isn't to say that you shouldn't trust your developers, but we're only human after all. If [Google can make a single-character typo to wipe every ChromeOS laptop with a certain update installed](https://www.androidpolice.com/2021/07/20/a-new-chrome-os-91-update-is-breaking-chromebooks-like-a-bull-in-a-china-shop/), it's not impossible your team may make a similar mistake.
During this process of code review, your team may discover bugs, realize that an experience is impacted by the planned implementation, or even point out a more optimized or easier-to-read implementation. Having this team environment allows for a more diverse pooled perspective on the code being contributed to a codebase and results in a better product.
# GitHub Copilots Strengths
None of this is to say that Copilot as a tool isnt advantageous. Copilot is often able to make suggestions that impress me. In particular, if I have a somewhat repetitive task or am simply exploring a commonly implemented function, Copilot can fill in the blanks for me with only the name.
All of these utils are generated with Copilot using only the function name as input:
![GitHub copilot making the code suggestions shown above with only a function name](./utils-suggestions.png)
While there may be _faster_ implementations of some of these, they're undoubtedly extremely readable and maintainable - they're how I'd implement these functions myself!
This isn't to say that GitHub Copilot is simply limited to small-scale utility functions, either. I recently wanted to make an implementation of a binary search tree. I was barely a class name into implementing when Copilot made the following suggestion:
![Showcasing GitHub Copilot generating the binary file from a single class name](./binary.png)
This is an impressive range of capabilities for an automated code generation tool. While I would likely want to customize or otherwise modify this exact implementation, it's a very valid base of a binary tree that I could take and expand into something production-ready.
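For reference, a binary search tree along these general lines might look like the following (shown in Python here purely for illustration - a hedged sketch of the shape of such a class, not Copilot's actual output):

```python
class BinarySearchTree:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

    def insert(self, value):
        # Smaller values go left; larger (or equal) values go right
        if value < self.value:
            if self.left is None:
                self.left = BinarySearchTree(value)
            else:
                self.left.insert(value)
        else:
            if self.right is None:
                self.right = BinarySearchTree(value)
            else:
                self.right.insert(value)

    def contains(self, value):
        if value == self.value:
            return True
        if value < self.value:
            return self.left.contains(value) if self.left else False
        return self.right.contains(value) if self.right else False

tree = BinarySearchTree(5)
tree.insert(3)
tree.insert(8)
print(tree.contains(3))  # True
print(tree.contains(7))  # False
```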
## Invisible Helper
I've read a lot of conversations about GitHub Copilot. Participated in a lot of them too. Something that often comes up is how distracting Copilot can be at times.
For a start, while I had significant problems with Copilot going overboard with suggestions early on, I've since noticed two things:
1. It's gotten a **lot** better since then, undoubtedly due to it being an AI learning from its usage, plus some tweaks made by the GitHub team
2. This only _really_ occurs when I sit still in my editor for extended periods of time with an incomplete variable typed out
The first point feels fairly self-explanatory, but let's stop and explore the practical side-effect of the second point being made. When I am programming, I am often doing one of three things:
1. Thinking of what code is doing or how to move forward next
2. Actively typing something I have thought of
3. Making small changes and waiting to see if the compiler/linter is angry at me (it usually is)
For my workflow in particular, I tend to pause for extended periods of time before actually typing something in my main IDE file. It's only really when dealing with unfamiliar codebases, concepts, or naming conventions that I pause with any significant frequency in the middle of typing something. Oftentimes, this is because I'm trying to remember how to do something from memory or looking at some documentation/reference APIs I have pulled up elsewhere.
This capability is so good at the "transparent tools that get out of your way" test that while [streaming on my Twitch](https://twitch.tv/crutchcorn), I was confident I didn't have Copilot enabled and had to check after a particularly clever suggestion, hours after starting work.
<https://clips.twitch.tv/TacitFitIcecreamTriHard-KgJCKYYIEPqxe4dQ>
It's this transparency that I feel is Copilot's _true_ strength. It's important to remember that even GitHub isn't positioning Copilot as a replacement for developers of any kind - simply a tool that developers can utilize to make their jobs easier.
Now, my early career was somewhat formed by what I'd seen others do in codebases I was adjacent to. Because I was assigned tasks, the things I learned tended to pertain specifically to those tasks. Further, I found it somewhat tricky to look up symbols like `!.`. After all, [Google search is not kind to many symbols we use in programming](https://stackoverflow.com/a/3737197).
But honestly, I was lucky to have been taken on as a Junior so early in my code learning experience; not everyone is so privileged.
This is why that Tweet stuck out so much in my mind. Would I have learned to code as quickly if I was doing independent study without such an in-depth reference point into others' code? Likely not.
Even today, as a developer of 7 years professionally - having written compilers and apps with millions of users - I am still oftentimes intimidated to read through a large project's codebase. **I regularly have to psych myself up and reassure myself that it's okay to explore before diving into big source projects.**
The beauty (and, unfortunately, again, [controversy](https://twitter.com/eevee/status/1410037309848752128)) of **GitHub Copilot in this instance is that it doesn't tell you where large chunks of its generated code come from**. You're able to find new ways to do things and want to learn and research more without all the self-imposed stress I mentioned.
I've seen code generated by Copilot that could easily pass for logic within any major framework or application I've ever read through.
What do you think? Let us know [on Twitter](https://twitter.com/UnicornUttrncs) or [join our Discord](https://discord.gg/FMcvc6T) and start a conversation with us! We're an open-source, community-run project with no ads and no spam.
We'd love to hear your thoughts!

---
{
title: "A Guide to Python's Secret Superpower: Magic Methods",
description: "",
attached: [],
license: 'coderpad',
originalLink: 'https://coderpad.io/blog/development/guide-to-python-magic-methods/'
}
---
Python has a secret superpower with a similarly stupendous name: Magic Methods. These methods can fundamentally change the way you code with Python classes and introduce code that seems ✨ magical ✨ to handle complex logic. Theyre more powerful than [list comprehensions](https://coderpad.io/blog/development/python-list-comprehension-guide/) and more exciting than any new [PEP8](https://peps.python.org/pep-0008/) linter.
Today, well be talking about a few things:
- What magic methods are
- Some simple introductory magic method usage
- How to programmatically manage class properties
- How to overwrite operator symbol functionality
- How to make your classes iterable
We also have a cheat sheet for utilizing these magic methods quicker within your projects:
> [Download the related Magic Methods Cheat Sheet](https://coderpad.io/python-magic-methods-cheat-sheet/)
Without further ado, lets dive in!
## What are magic methods?
Magic methods are methods that Python calls on your behalf in specific circumstances. These methods are named in a particular way to quickly distinguish them from other Python methods: theyre preceded and followed by two underscores.
```python
class Speaker:
# This is a magic method
def __init__(self):
print("Hello, world!")
# This will call __init__ and print "Hello, world!"
instance = Speaker()
```
> This is why magic methods are also called “dunder methods,” which is shorthand for “double underscore methods.”
In the above code you can see what Im talking about: Python calls the `__init__` dunder method on your behalf when a new class instance is created.
This barely scratches the surface when it comes to the power that magic methods provide. Lets dive into their usage.
## Simple magic method usage
If youve ever created a class, youre likely familiar with the following method:
`__init__(self, …args)` - `ClassName()`
Its probably the best-known magic method: Pythons `__init__` acts as a class constructor. You can use this to pass initial arguments to a Python class.
For example, take the following:
```python
class Speaker:
    message = ""

    def __init__(self, val):
        self.message = val

    def sayIt(self):
        print(self.message)

instance = Speaker("Hello, world!")
instance.sayIt()
```
Here, whenever the `Speaker` class is initialized, it will assign `self.message` to the passed value. Were then able to use a custom “sayIt” method that utilizes `self.message`.
### Clean up class instantiation with `del`
In addition to a class initializer, theres also a class deletion handler:
`__del__(self)` - `del instance`
This method will run any time you call `del` on a class instance. This is particularly useful for cleaning up I/O operations started in the constructor.
```python
import os

class Test:
    def __init__(self):
        # Start an I/O side effect: a temporary cache file
        self.file = open("cache.txt", "w")

    def __del__(self):
        # Clean up the side effect when the instance is deleted
        self.file.close()
        os.remove("cache.txt")

firstItem = Test()
del firstItem
```
This type of cleanup is integral to ensuring your applications are deterministic on each run, which in turn increases general application stability. After all, if you leave remnants of your cache behind, theyre likely to be picked up by subsequent runs and cause havoc with your application logic.
## How to programmatically manage class properties
Stuff like class constructors and cleanup are par for the course when it comes to class management. Ready for the weird stuff?
What about declaring attributes that dont exist? `__getattr__` has you covered.
`__getattr__(self, key)` - `instance.property` (when `property` doesnt exist)
Simply check what the lookup key is (in this case with the `__name` property) and return a value if you want to create a new property programmatically:
```python
class Test:
    number = 1

    def __getattr__(self, __name: str):
        if __name == "string":
            return "Test"

test = Test()
print(test.number) # Will print `1`
print(test.string) # Will print `"Test"`
```
There also exists a slightly different `__getattribute__` built-in:
`__getattribute__(self, key)` - `instance.property` (regardless of if `property` exists)
```python
class Test:
    number = 1

    def __getattribute__(self, __name: str):
        if __name == "string":
            return "Test"

test = Test()
print(test.number) # `None`
print(test.string) # `"Test"`
```
Notice how instead of `test.number` returning the expected `1` value, it returns a `None`.
This is because while `__getattr__` will resolve the existing variables and fall back to the special method when nothing is found, `__getattribute__` runs first and doesnt fall back to existing values in the class instance.
To have `__getattribute__` behave the same way as `__getattr__`, we need to explicitly tell Python not to get stuck in the `__getattribute__` trap weve set up.
To do this, we can call `super().__getattribute__`:
```python
class Test:
    number = 1

    def __getattribute__(self, __name: str):
        if __name == "string":
            return "Test"
        # Fall back to the default lookup for everything else
        return super().__getattribute__(__name)

test = Test()
print(test.number) # Will print `1`
print(test.string) # Will print `"Test"`
```
### Customize class property dictionary lookup
While `__getattr__` and `__getattribute__` both work wonders for adding in keys programmatically, theres a problem with that approach. When using [the `dir` built-in method](https://docs.python.org/3/library/functions.html#dir), it wont show the new keys.
Lets show you what Im talking about with a code sample. Take the following:
```python
class Test:
    number = 1

    def __getattr__(self, __name: str):
        if __name == "string":
            return "Test"

test = Test()
print(dir(test))
```
This `print` statement will output all of these keys:
```
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattr__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'number']
```
This list of keys includes other magic methods, which muddies the output a bit for our needs. Lets filter those out with the following logic:
```python
def simpledir(obj):
return [x for x in dir(obj) if not x.startswith('__')]
```
Now, when we run `simpledir(test)`, we only see:
```python
['number']
```
But where is our `string` field? It doesnt show up.
This is because while weve told Python how to look up the overwritten values, weve not told Python which keys weve added.
To do this, we can use the `__dir__` magic method.
`__dir__(self)` - `dir(instance)`
```python
class Test:
    number = 1

    def __dir__(self):
        # Advertise the programmatic key alongside the real one
        return ['number', 'string']

    def __getattr__(self, __name: str):
        if __name == "string":
            return "Test"
        pass
```
Customizing `dir` behavior like this will now enable us to treat our dynamic properties as if they existed normally. Now all were missing is a way to set values to those properties…
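Putting the pieces together, the filtered listing now surfaces our dynamic key - a small sketch reusing the `simpledir` helper from earlier:

```python
def simpledir(obj):
    # Filter out dunder methods, as before
    return [x for x in dir(obj) if not x.startswith('__')]

class Test:
    number = 1

    def __dir__(self):
        # Advertise the programmatic key alongside the real one
        return ['number', 'string']

    def __getattr__(self, __name: str):
        if __name == "string":
            return "Test"

test = Test()
print(simpledir(test))  # ['number', 'string']
```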
### Set programmatically created keys
While were now telling Python which keys were programmatically creating and how to lookup the value of those keys, were not telling Python how to store those values.
Take the following code:
```python
class Test:
    number = 1

    def __getattr__(self, __name: str):
        if __name == "string":
            return "Test"

test = Test()
test.string = "Value"
print(test.string)
```
Here, we might expect the `print(test.string)` to output "Test" as well as "Value", since `__getattr__` should be called. But, if we look at the log, we only see the following:
```python
"Value"
```
This is because, once we assign `test.string`, it no longer calls `__getattr__` the way we expect it to.
To solve this problem, we need to use the `__setattr__` magic method to “listen” for property assignment.
`__setattr__(self, key, val)` - `instance.property = newVal`
```python
class Test:
    updateCount = 0
    valid = 1

    def __setattr__(self, key, val):
        super().__setattr__(key, val)
        if key != "updateCount":
            self.updateCount += 1

test = Test()
test.valid = 12
print(test.updateCount)
```
> Notice our usage of `super().__setattr__`. We need to do this similarly to how we utilized the `super()` method in `__getattribute__`, otherwise `self.updateCount += 1` would trigger an infinite loop of calls to `__setattr__`.
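To make that pitfall concrete, here's a minimal sketch (with a hypothetical `Broken` class, not code from above) of what happens when you assign through `self` instead of `super().__setattr__`:

```python
class Broken:
    updateCount = 0

    def __setattr__(self, key, val):
        # Assigning via `self` calls __setattr__ again, recursing forever
        self.updateCount = self.updateCount + 1

try:
    Broken().valid = 12
except RecursionError:
    print("maximum recursion depth exceeded")
```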
### Clean up programmatic property instantiation
Just as we can hook into the setting and getting behavior of an attribute, we can also hook into the `del` behavior of an attribute using `__delattr__`.
For example, what if we wanted to create a class that acted like a dictionary? For each key created in this dictionary, wed want to automatically create a temporary file. Then, on cleanup (using `del`), lets remove that file with `os.remove`:
`__delattr__(self, key)` - `del instance.property`
```python
import os

class FileDictionary:
    def __setattr__(self, key, val):
        # Create a temporary file for each key, storing the value as its contents
        with open(key, "w") as f:
            f.write(val)

    def __delattr__(self, key):
        # Remove the temporary file on cleanup
        os.remove(key)

fileDictionary = FileDictionary()
fileDictionary.README = "Hello"
del fileDictionary.README
```
Remember, if youre not cleaning up your side effects, they may cause havoc with future usage of your app. This is why its so important to add in `__delattr__` when relevant.
### Convert programmatic lookups to index properties
In our most recent `FileDictionary` example, we created a class called “FileDictionary”, but then accessed the child values with the dot accessor:
```python
fileDictionary.README = "Hello"
```
However, this dot syntax causes a minor headache: its not consistent with how you access properties from a dictionary. The reason were not using the standard dictionary syntax is that if you do the following:
```python
fileDictionary['README'] = "Hello"
```
We would quickly get an error from Python:
```
> TypeError: 'FileDictionary' object is not subscriptable
```
To solve this problem, we need to migrate away from `__setattr__`, which only supports dot notation, to `__setitem__`, which only supports the dictionary-style notation.
- `__getitem__(self, key)` - `instance[property]`
- `__setitem__(self, key, val)` - `instance[property] = newVal`
- `__delitem__(self, key)` - `del instance[property]`
```python
import os

class FileDictionary:
    def __getitem__(self, key):
        # Read the file's contents back for `instance[key]`
        with open(key, "r") as f:
            return f.read()

    def __setitem__(self, key, val):
        with open(key, "w") as f:
            f.write(val)

    def __delitem__(self, key):
        os.remove(key)

fileDictionary = FileDictionary()
fileDictionary['README'] = "Hello"
del fileDictionary['README']
```
As a wonderful side effect, youre now able to add in a file extension to the `fileDictionary`. This is because bracket notation supports symbols that arent valid in Python identifiers (like `.`), while the dot notation does not.
```python
fileDictionary['README.md'] = "Hello"
del fileDictionary['README.md']
```
## How to replace operator symbol functionality with custom logic
Theres nothing more Pythonic than the simplicity of using simple mathematical symbols to represent mathematical operations.
After all, what could more clearly represent the sum of two numbers than:
```python
sum = 2 + 2
```
Meanwhile, if we have a wrapper around a number:
```python
sum = numInstance.getNumber() + numInstance.getNumber()
```
It gets a bit harder to read through.
What if we could utilize those symbols to handle this custom class logic for us?
```python
sum = numInstance + numInstance
```
Luckily we can!
For example, heres how we can make the `+` symbol run custom logic:
- `__add__(self, other)` - `instance + other`
```python
class Test:
    __internal = 0

    def __init__(self, val):
        self.__internal = val

    # Runs when the `+` operator is used between two instances
    def __add__(self, other):
        return self.__internal + other.__internal

firstItem = Test(12)
secondItem = Test(31)
# This will call "__add__" instead of the traditional arithmetic operation
print(firstItem + secondItem)
```
There are also other math symbols you can overwrite:
- `__sub__(self, other)` - `instance - other`
- `__mul__(self, other)` - `instance * other`
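As a quick sketch of those two (the `Num` wrapper here is hypothetical, not a class from earlier):

```python
class Num:
    def __init__(self, val):
        self.val = val

    # Runs for `instance - other`
    def __sub__(self, other):
        return Num(self.val - other.val)

    # Runs for `instance * other`
    def __mul__(self, other):
        return Num(self.val * other.val)

print((Num(10) - Num(4)).val)  # 6
print((Num(10) * Num(4)).val)  # 40
```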
### Manage comparison symbol behavior
Addition, subtraction, and multiplication arent the only uses for operator overloading, however. We can also modify the comparison operators in Python to run custom logic.
Lets say we want to check if two strings match, regardless of casing:
- `__eq__(self, other)` - `instance == other`
```python
class Test():
    str = ""

    def __init__(self, val):
        self.str = val

    # Compare instances regardless of casing
    def __eq__(self, other):
        return self.str.lower() == other.str.lower()

firstItem = Test("AB")
secondItem = Test("ab")
print(firstItem == secondItem)
```
You can also have different logic for `==` and `!=` using `__ne__`.
- `__ne__(self, other)` - `instance != other`
However, if you dont provide a `__ne__`, but **do** provide a `__eq__`, Python will simply negate the `__eq__` logic on your behalf when `instance != other` is called.
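A minimal sketch of that fallback, with a hypothetical `Word` class that only defines `__eq__`:

```python
class Word:
    def __init__(self, val):
        self.val = val

    def __eq__(self, other):
        return self.val.lower() == other.val.lower()

# No __ne__ defined: Python negates __eq__ on our behalf
print(Word("AB") != Word("ab"))  # False
print(Word("AB") != Word("cd"))  # True
```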
Theres also a slew of magic methods for customizing other comparison operators:
- `__lt__(self, other)` - `instance < other`
- `__gt__(self, other)` - `instance > other`
- `__le__(self, other)` - `instance <= other`
- `__ge__(self, other)` - `instance >= other`
### Overwrite a classs type casting logic
Python, like any other programming language, has the concept of data types. Similarly, youre able to convert easily from any of those types to another type using built-in methods of type-casting data.
For example, if you call `bool()` on a string, it will cast the truthy value to a Boolean.
What if you could customize the behavior of the `bool()` method? You see where were going with this…
- `__bool__(self)` - `bool(instance)`
```python
print(firstItem == secondItem)
```
You can also have different logic for `==` and `!=` using `__ne__`.
- `__ne__(self, other)` - `instance != other`
However, if you dont provide a `__ne__`, but **do** provide a `__eq__`, Python will simply negate the `__eq__` logic on your behalf when `instance != other` is called.
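A minimal sketch of that fallback behavior (the `Tag` class here is our own illustration, not from the article):

```python
class Tag:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return self.name.lower() == other.name.lower()

# No __ne__ is defined, so Python negates __eq__ for "!=" automatically
print(Tag("python") != Tag("PYTHON"))  # False
print(Tag("python") != Tag("rust"))    # True
```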
Theres also a slew of magic methods for customizing other comparison operators:
- `__lt__(self, other)` - `instance < other`
- `__gt__(self, other)` - `instance > other`
- `__le__(self, other)` - `instance <= other`
- `__ge__(self, other)` - `instance >= other`
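These pay off quickly with built-ins: `sorted()`, `min()`, and `max()` only need `__lt__` to order your instances. A short sketch (the `Money` class is a hypothetical example):

```python
class Money:
    def __init__(self, amount):
        self.amount = amount

    def __lt__(self, other):
        # sorted() compares instances pairwise using "<"
        return self.amount < other.amount

prices = [Money(5), Money(1), Money(3)]
print([m.amount for m in sorted(prices)])  # [1, 3, 5]
```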
### Overwrite a classs type casting logic
Python, like most other programming languages, has the concept of data types, and you can easily convert from one type to another using built-in type-casting methods.
For example, if you call `bool()` on a string, it will cast the truthy value to a Boolean.
What if you could customize the behavior of the `bool()` method? You see where were going with this…
- `__bool__(self)` - `bool(instance)`
```python
from os.path import exists
class File:
    file_name = ""

    def __init__(self, file_name):
        self.file_name = file_name

    def __bool__(self):
        # Treat the File as truthy only if it exists on disk
        return exists(self.file_name)
file = File("temp.txt")
# Will return True or False depending on if file exists
print(bool(file))
```
There are also other type-casting methods you can customize:
- `__int__(self)` - `int(instance)`
- `__str__(self)` - `str(instance)`
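As a small sketch of both casts together (the `Temperature` class is our own example, not from the article):

```python
class Temperature:
    def __init__(self, celsius):
        self.celsius = celsius

    def __int__(self):
        # Called by int(instance)
        return int(self.celsius)

    def __str__(self):
        # Called by str(instance) and, indirectly, by print(instance)
        return f"{self.celsius} degrees Celsius"

t = Temperature(21.5)
print(int(t))  # 21
print(str(t))  # 21.5 degrees Celsius
```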
## How to make your classes iterable
Lets say that weve used a custom class to build a replacement for a List:
```python
class ListLike:
    length = 0

    def append(self, item):
        # Store each item as an attribute named after its index
        setattr(self, str(self.length), item)
        self.length += 1

    def __getitem__(self, item):
        return getattr(self, str(item))

listLike = ListLike()

print(listLike.length) # 0
listLike.append("Hello")
listLike.append("World")
print(listLike.length) # 2
print(listLike[0]) # "Hello"
```
This appears to work amazingly at first glance, until you try to do the following:
```python
[x for x in listLike]
```
Or any other kind of iteration on the `ListLike`. Youll get the following confusing error:
```
'ListLike' object has no attribute '2'
```
This is because Python doesnt know _how_ to iterate through your class, and therefore attempts to access a property in the class. This is where `__iter__` comes into play: It allows you to return an iterable to utilize anytime Python might request iterating through the class, like in [a list comprehension](https://coderpad.io/blog/development/python-list-comprehension-guide/).
- `__iter__(self)` - `[x for x in instance]`
```python
class ListLike:
    length = 0

    def append(self, item):
        setattr(self, str(self.length), item)
        self.length += 1

    def __getitem__(self, item):
        return getattr(self, str(item))

    def __iter__(self):
        # Return a real iterator over the stored items
        return iter([getattr(self, str(i)) for i in range(self.length)])

listLike = ListLike()
listLike.append("Hello")
listLike.append("World")
[print(x) for x in listLike]
```
> Notice that were having to return a real list wrapped in the `iter` method for the `__iter__` return value: This is required by Python.
>
> If you don't do this, you'll get the error:
>
> ```
> iter() returned non-iterator of type 'list'
> ```
### Check if an item exists using the “in” keyword
The `__iter__` magic method isnt the only way to customize traditionally list-like behavior for a class. You can also use the `__contains__` method to add support for simple “is this in the class” checks.
- `__contains__(self, item)` - `key in instance`
Something to keep in mind is that if `__contains__` isn't defined, Python will fall back to `__iter__` to check whether the key is present. However, `__contains__` is usually more efficient, since the `__iter__`-based fallback iterates through every key until it finds a match.
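As a rough sketch of that difference (the `Inventory` class is a hypothetical example), backing `__contains__` with a set turns the membership check into a constant-time lookup instead of a linear scan:

```python
class Inventory:
    def __init__(self, items):
        # Sets provide O(1) membership checks
        self.__items = set(items)

    def __contains__(self, item):
        # Called for: item in instance
        return item in self.__items

inventory = Inventory(["sword", "shield"])
print("sword" in inventory)   # True
print("potion" in inventory)  # False
```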
## Python magic method cheat sheet
Python magic methods can level up your application logic by reducing the amount of boilerplate required to do specific actions, but thats not their only use case. Other times, you might want to use magic methods to provide an API with a nicer development experience for consuming developers.
That said, we know that with so many magic methods it can be difficult to remember them all. This is why we made a cheat sheet that you can download or print out to reference when writing code.
> [Download the related Magic Methods Cheat Sheet](https://coderpad.io/python-magic-methods-cheat-sheet/)
View File
![A further comparison of the above image's demo of spacing around on Figma and spacing between on Android](./under_the_hood_02.png "Another comparison between Figma and Android line-spacing")
Now you might ask yourself, “_How can I calculate the height of each `TextView`, then?_”
When you use a `TextView`, it has one parameter turned on by default: **`includeFontPadding`**. `includeFontPadding` increases the height of a `TextView` to give room to ascenders and descenders that might not fit within the regular bounds.
![A comparison between having "includeFontPadding" on and off. When it's off the height is "19sp" and when it's on it is "21.33sp". It shows the formula "includeFontPadding = TextSize \* 1.33"](includefontpadding.png "A comparison of having the 'includeFontPadding' property enabled")
Now that we know how Androids typography works, lets look at an example.
Heres a simple mockup, detailing the spacing between a title and a subtitle.
![A spec file of a phone dailing application](./specs.png)
![A mockup with spec lines enabled of a call log app](./implementation.png)
_Of course, because its Android, the line height has no effect on the height of the `TextView`, and the layout is therefore `8dp` too short of the mockups._
But even if it did have an effect, the problems wouldnt stop there; the issue is more complex than that.
This means that designers, alongside developers, can force the bounds of a `TextView` to match the design specs and open the door to perfect implementations of their mockups.
This is something Ive personally tested in an app I designed. [**Memoire**, a note-taking app](http://tiny.cc/getmemoire) for Android, is a 1:1 recreation of its mockups — for every single screen. This was made possible due to these APIs — _and because [**@sasikanth**](https://twitter.com/its_sasikanth) is not confrontational_ — since text is what almost always makes baseline alignment and hard grids impossible to implement in production.
<video src="./memoire_bounds_and_baselines.mp4" title="Near-perfect duplication of guidelines against Memoire's mockups and actual app"></video>
_Memoires TextViews are all customized using these APIs._
# What is the purpose of firstBaselineToTopHeight and lastBaselineToBottomHeight?
![A comparison table of Dos and Donts that matches the below table](./dos_donts.png)
| ✅ Good | 🛑 Bad |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Applying `firstBaseline` and `lastBaseline` in styles allows you to know exactly what the distance between baselines is, without having to set them one by one to ensure they properly align to a `4dp` grid. | Without applying `firstBaseline` and `lastBaseline` in styles, you cant detect what the default values are, so you are forced to apply these one by one to every `TextView` to ensure they align to a `4dp` grid. |
<video src="./ios_vs_android.mp4" title="A comparison of how text spacing is applied on iOS and Android"></video>
![A headline 6 within Figma showing "32pt" height](./figma_textbox_size.png "Text box within Figma")
_Text box within Figma._
Here we can see that the text box has a height of `32`. This is inherited from the line height set in Figma, but we need to know the minimum height on Android. We can easily calculate the minimum height in production using _includeFontPadding_.
> Headline 6 = `20` (text size) `* 1.33` (`includeFontPadding`) = `26.667sp`
![An image showcasing the headline height mentioned above](./android_textview_size.png "TextView on Android")
_`TextView` on Android._
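The callouts arithmetic, sketched in Python for clarity (the function name is ours, and the `4 / 3` factor is an approximation of the `includeFontPadding` multiplier):

```python
def min_textview_height(text_size_sp):
    # includeFontPadding reserves roughly 1.33x the text size
    return text_size_sp * (4 / 3)

print(round(min_textview_height(20), 2))  # 26.67, which Figma rounds to 27
```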
Now resize your Figma text box to `26.6`_it will round it to `27`, but thats fine._
**Step 2: With the resized text box, align its baseline with the nearest `4dp` breakpoint in your grid.**
![Baseline now sits on the "4dp" grid.](./step_01.png)
_Baseline now sits on the `4dp` grid._
**Step 3: Measure the distance between the baseline and the top and bottom of the text box.**
![Showcasing the above effect by having 'firstBaselineToTopHeight' set to 20.66 and 'lastBaselineToBottomHeight' to 6.0](step_02.png)
_`firstBaselineToTopHeight`: `20.66` | `lastBaselineToBottomHeight`: `6.0`_
**Step 4: Now right click the text box and select Frame Selection.**
![The right-click dialog hovering over Frame Selection, key binding Ctrl+Alt+G](./step_03.png "The right-click dialog hovering over Frame Selection")
_When created from an object, a frames dimensions are dependent on the content inside it._
**Step 5: While holding Ctrl / Command, drag the frame handles and resize it so that the top and bottom align with the nearest baselines beyond the minimum values.**
![A showcase of the text box being "1sp" down from the frame](./step_07.png)
_The text box is 1sp down from the frame, but thats normal. We no longer care about the text box height._
**Step 7: With the text box selected, set its constraints to _Left & Right_ and _Top & Bottom_.**
![A view of the constraints dialog in Figma on the headline](./step_08.png)
_Now your text box will resize with your frame. This is essential when using the text components._
You would need to find these values for every text style in your app, but if youre taking the Material Design Type Spec as a base for your own, I have already measured and picked the right values for each! _**Resources at the end.**_
Lets use Memoire once again as an example.
![An example of the Memoire codebase showing the headline of 4](./memoire_headline_4_code.png)
For example, _**`textAppearanceCaption`**_, _**`textAppearanceBody1`**_, etc.
![A display of code styling when "TextStyle" is properly applied. See 'styles.xml' at the bottom of the post for an example](text_style_applied_properly.png "A display of code styling when TextStyle is properly applied")
_What happens to a `TextView` when a `TextStyle` is properly applied._
# And now, a couple of warnings
![A showcase of a "button" component not having the text align to the height of the component](./textstyle_buttons.png)
_Uh-oh…_
This happens because Material Components already have padding that _**IS NOT**_ overridden by `firstBaseline` and `lastBaseline` values. Buttons, in particular, have a **maximum height _and_ padding**, meaning were effectively trying to fit a large text box into a very narrow container, causing the text to shrink as a result.
As far as other issues, I havent been able to find any.
Now that youve scrolled all the way down without reading a single word, here are the resources:
![A preview of the Figma document with code and layout samples](./preview.png)
_Figma document with code and layout samples._
## For designers: [Figma Document](https://www.figma.com/file/F1RVpdJh73KmvOi06IJE8o/Hard-Grid-—-Text-Components/duplicate)
Document containing:
- A slight introduction
- All the text components
- A small tutorial on how to use them effectively
- Prebuilt layout examples to get you started
- Customizable code blocks for each style in a text box, so you can change each depending on your theme and hand it to developers
## For developers: [styles.xml](./styles.xml)
A styles.xml file containing:
- All the `TextAppearance`s that can be used with Material Components
- All the `TextStyle`s to theme `TextView`s accordingly
View File
We'll ask and answer the following questions:
- [What is "source code"?](#source-code)
- [What are the major components of a computer, and how do they tie together?](#computer-hardware)
- [What language does the computer speak natively?](#assembly-code)
- [Why do I need a custom program to run some programming languages?](#compiled-vs-runtime)
- [How does a computer turn letters and symbols into instructions that it knows how to run?](#lexer)
- [Why do some programming languages have different rules and look different from one another?](#parser)
- [Why can't we simply give the computer English instructions and have it run those with a special program?](#english-vs-ast)
> I'm writing this article as a starting point to a developer's journey or even just to learn more about how computers work under-the-hood. I'll make sure to cover as many of the basics as possible before diving into the more complex territory. That said, we all learn in different ways, and I am not a perfect author. If you have questions or find yourself stuck reading through this, drop a comment down below or [join our Discord](https://discord.gg/FMcvc6T) and ask questions there. We have a very friendly and understanding community that would love to explain more in-depth.
# Source Code {#source-code}
## Motherboard {#mobo}
**A motherboard is the platform through which all the other components connect and communicate**. There are various components integrated into your motherboard, like storage controllers and chipsets necessary for your computer to work. Fancier motherboards include additional functionality like high-speed connectivity (PCI-E 4.0) and Wi-Fi.
When you turn on your computer, the first thing that happens is your motherboard runs a "POST": a hardware check to see if everything connected is functioning properly. Then the motherboard starts the boot sequence, which starts with storage
You can think of these components working together similarly to this:
> For those unaware, the visual cortex is the part of the brain that allows us to perceive and understand the information provided to us by our eyes. Our eyes simply pass the light information gathered to our brains, which makes sense of it all. Likewise, the GPU does the computation but does not display the data it processes; it passes that information to your monitor, which in turn displays the image source to you.
# Assembly: What's that? {#assembly-code}
At the start of this article, one of the questions I promised to answer was, "What language does the computer speak natively?". The answer to this question is, as you may have guessed from the section title, assembly.
Now that we have that data loaded into registers, we can run the `addu` instruction:
```
addu $1,$2,$1 # Add (+) data from registers 1 and 2, store the result back into register 1
```
Finally, if you were to inspect the value within register 1, you'd find a value representing the number `185`.
This works well, but what happens if we want to add 3 numbers together? We don't have enough registers to store all of these values at once!
Consider a simple C program whose `main` just prints the number 185. This code simply says, "print the number 185 to the screen so the user can see it". **To do the same in assembly requires a massive amount of knowledge about the system** you're intending to run code on, due to the lack of portability granted by higher-level languages.
What do I mean by portability? Well, let's say you want to write code that runs on both low-end Chromebooks and high-end Desktops alike, you need to adapt your code to run on their respective processors. Most low-end Chromebooks use a type of CPU called "ARM", while most high-end Desktops run "x86\_64" processors. **This difference in CPU architecture means an entirely different instruction set, which requires a different set of assembly instructions to be written to do the same thing in both**.
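If you're curious which architecture your own machine uses, most languages can report it; for example, Python's standard library (the exact string varies by OS and CPU, so the values in the comment are typical, not guaranteed):

```python
import platform

# Typically "x86_64" (or "AMD64" on Windows) on desktops,
# and "arm64" or "aarch64" on ARM machines like Chromebooks
print(platform.machine())
```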
Meanwhile (with a few exceptions), simple C code will run on both platforms with some minor changes. This is because of C's _compiler_.
What is a compiler?
```
Uncaught SyntaxError: Unexpected token '='
```
Notice how it reports "Unexpected token"? That's because the lexer converts that symbol into a token before the parser recognizes that it's invalid syntax.
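You can watch a lexer do this conversion yourself. As a sketch, Python's standard-library `tokenize` module exposes the equivalent step for Python source:

```python
import io
import tokenize

code = "number = 185"
for tok in tokenize.generate_tokens(io.StringIO(code).readline):
    # Each token carries a type, the matched text, and its source position
    print(tokenize.tok_name[tok.type], repr(tok.string))
```

The parser then works on this token stream rather than on raw characters.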
## The Parser {#parser}
Now that we've loosely touched on the parser at the end of the last section let's talk more about it!
> Once a set of data is turned into a tree, the computer knows how to "walk" through this tree and utilize the data (and metadata of their relationships) to take actions. In this case, the tree that is created by the parser is traversed to compile the code into instruction sets.
Once the tokenized code is run through the parser, we're left with the "syntax tree" of the code in question. For example, when run through Babel's parser (a JavaScript parser that's itself written in JavaScript), we're left with something like the following:
![](./parser_1.svg)
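As a rough Python analogue of what Babel does for JavaScript, the standard-library `ast` module runs the same lex-then-parse pipeline and hands back structured nodes instead of flat text:

```python
import ast

tree = ast.parse("result = 185 + 10")
assignment = tree.body[0]
# The parser produced structured nodes, not a string of characters
print(type(assignment).__name__)        # Assign
print(type(assignment.value).__name__)  # BinOp
```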
I'll make my point by presenting you with an extremely confusing yet grammatically correct sentence:
> ["Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo."](https://simple.wikipedia.org/wiki/Buffalo_buffalo_Buffalo_buffalo_buffalo_buffalo_Buffalo_buffalo)
Yes, that's a complete and valid English sentence. Have fun writing a parser for that one.
### The Future {#AI}
While writing a parser for the English language is near-impossible to do perfectly, there is some hope for using English in the programming sphere in the future. This hope comes in the form of AI and natural language processing.
AI has been used for years to help computers understand our language. If you've ever used a Google Home, Amazon Alexa, or Apple's Siri, you've utilized an AI that tries its best to parse your language into instructions pre-determined by the parent company's developers.
Likewise, there are projects such as [OpenAI's GPT-3](https://beta.openai.com/) that make some moonshot ideas closer to reality.
Some folks have even been able to write [React code using GPT-3](https://twitter.com/sharifshameem/status/1284807152603820032).
View File
## Introduction
In this day and age, the programming ecosystem has become so rich and complex that asking questions is inevitable for any developer, regardless of what stage of their career they find themselves in.
Now whilst this sounds simple, there are many people who struggle to define their problem.
The most basic way to define your problem is to explain what the expected behaviour is, and what the actual behaviour is. For example:
> _I've written a function `sum` that is meant to return the sum of 2 numbers. When I do `sum(1, 1);` I'm expecting it to return 2, however it's returning 1._
Now whoever is reading my question knows what I'm trying to achieve and what is currently happening.
If you can pinpoint the problematic code, then it's important that you share it.
In my previous example I would do the following:
> _This is where `sum` is defined and how I'm calling `sum`:_
>
> ```javascript
> function sum(a, b) {
> return a * b;
> }
>
> const result = sum(1, 1); // result == 1?
> ```
@@ -62,49 +61,44 @@ Just to save everyone some time, and also get a better understanding of the enti
Sharing what you've tried helps people to rule out anything you've already tried and they can concentrate on thinking of another solution that might work. Following my basic example:
> _I've also tried to pass in 2 and 2 and that seemed to work, but then if I passed in 3 and 2 I would get 6._
>
> _Finally I've tried to do sum like the following but that also didn't work:_
>
> ```javascript
> function sum(a, b) {
> return b * a;
> }
> ```
## Other tips for asking questions
Those are the main tips that are applicable to any programming question, regardless of where / when you're asking the question. But there are other tips that may apply to more specific situations that I want to talk about in this section:
- ###### Choose the right time and the right person
Are you asking someone at work or in your household? Firstly make sure that it's a good time for them. If they're busy working away then you might want to try to find a different time to ask your question.
- ###### Do some research
Are there any terms you're unsure of that are relevant to your question? Look around first and make sure _you_ know the entire problem before expecting other people to.
- ###### If something's not clear then ask about it
It's easy to feel bad when someone explains something to you and you still don't fully understand and need to ask again. There is no shame in asking someone to be clearer as long as you're being respectful about it. They're already helping you which means they're probably happy to give away some of their time to help you out!
- ###### Be understanding of other people's time
Most clearly defined questions can be solved in 15 minutes or less. If you find that a problem is taking longer than that, or you notice the person helping you is taking longer to reply, it's okay to ask to continue another time. You could simply ask them when a more convenient time would be for the two of you to have another go at it. Something like:
> _"This is taking more time than I thought it would. Is it OK if we give it another go when you've got a bit more time?"_
Most times people will be happy to!
- ###### Finally, asking "bad" questions from time to time is fine
Don't feel bad if you've asked a question that you're worried is low quality. It's easy to forget that Google exists and ask a coworker something that could've been answered faster by the internet! Similarly, don't lash out when someone asks you a bad question. Instead, kindly teach them how to ask better questions for future reference!
## Conclusion
Asking questions is easy, and asking _good questions_ can also be easy if you follow the general rules described in this article. But most importantly of all, remember that people answering your questions are often giving up their time for you, so don't forget to be polite, grateful and happy that they're giving you a helping hand.
Similarly, if you're answering a question, make sure to always be polite as well! It only takes a couple of people answering rudely or patronisingly to put someone off asking questions and clearing their doubts. We were all beginners once, and many of us still are!
Thanks for the time you've dedicated to reading this article, I really appreciate it. Have a good one!
It was no different to learning another language, like Python or JavaScript.
First off, we are going to start with something extremely simple: a Console Application. There is just one small but important
step we must take before we can start writing code. Unfortunately, if you use a Linux-based system, you will need to install the .NET Core SDK, which can
be tedious. You can download the SDK from the following link: <https://www.microsoft.com/net/core>
On Windows or MacOS, you can install Visual Studio, which comes with the .NET Core SDK already installed. You can download Visual Studio from the following
link: <https://visualstudio.microsoft.com/downloads>
This article is directed at Windows users, but you can also use Visual Studio on macOS. If you are using Linux, you will need to use another IDE/text editor
and the dotnet CLI. You can download the dotnet CLI from the following link: <https://dotnet.microsoft.com/download>
# Creating a project
then clicking New -> Project. You will be prompted to select the type of project you want, to name the project, and choose the location where you want to save it. Once you have created the project, you will be able to start writing code.
You should be greeted with the default console application template, which starts with:

```cs
using System;
```
We will create a new class called `Film`.
We will also create a List of films, and then we will add a few films to the list.
Let's get started with writing code:
```cs
using System;
using System.Collections.Generic;
```
The `List<Film>` is a data structure that is used to store a list of items.
Inside of the `Film` class, we have three properties, `Title`, `Year`, and `Director`. Properties are like variables, and they are used to store information.
I'm not going to go into the nitty-gritty details of what is going on here, as I expect you to be familiar with some of the concepts already. I really recommend
reading up on the C# language, and reading the documentation, at <https://docs.microsoft.com/dotnet/csharp> to get a better understanding of what is going on here.
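Putting the pieces described above together, a minimal sketch of the full program might look like the following (the `Film` class, its three properties, and the `List<Film>` come from the description above; the namespace, sample films, and printing loop are illustrative assumptions, not the article's exact code):

```csharp
using System;
using System.Collections.Generic;

// Namespace name is illustrative - it will match your project's name
namespace FilmCatalog
{
    // The Film class with the three properties described above
    public class Film
    {
        public string Title { get; set; }
        public int Year { get; set; }
        public string Director { get; set; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            // Build a list of films and add a few entries
            var films = new List<Film>
            {
                new Film { Title = "Alien", Year = 1979, Director = "Ridley Scott" },
                new Film { Title = "Blade Runner", Year = 1982, Director = "Ridley Scott" }
            };

            // Print each film in the list
            foreach (var film in films)
            {
                Console.WriteLine($"{film.Title} ({film.Year}) - {film.Director}");
            }
        }
    }
}
```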
I fully understand that dotnet can be a bit confusing at first, but I hope that this article will help you get started with .NET. I hope you enjoyed the article!
# Contact
If you have any questions, comments, or concerns, feel free to contact me at <https://www.owenboreham.tech/contact>
}
---
Interviewing for frontend engineering positions can be difficult. There's a lot to keep in mind for any interview, but frontend interviews always seem to have so many things to be cognizant of.
While we've discussed [5 tips for tech recruiting](https://coderpad.io/blog/5-tips-for-tech-recruiting/), let's take a look at some of the things we feel are more specific to a frontend technical screening.
Some codebases may follow OOP principles while others will heavily utilize functional programming paradigms. Make sure you ask frontend candidates questions that are relevant to your project. If your app extensively utilizes classes in JavaScript, you might ask about prototype inheritance or focus on the `this` keyword. Likewise, if you're primarily using functional coding, you might check if they're familiar with functions-as-values - asking them to make generic functions that utilize callbacks or returned functions.
Regardless of the code style you utilize, you may want to ask about JavaScript basics like variable scoping between `var`, `let`, or `const` and when each is appropriate. That said, try to avoid asking questions about niche specifics in a language. Unless you're hiring for engineering work on a JavaScript runtime, your candidate doesn't need to know the engine-level specifics of things like “[Temporal Dead Zone](https://2ality.com/2015/10/why-tdz.html)”.
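For instance, a short screening snippet probing `var` versus `let` scoping might look something like this (an illustrative sketch, not a prescribed question):

```javascript
// What do the two arrays of results contain, and why do they differ?
function scopes() {
  const varFns = [];
  for (var i = 0; i < 3; i++) {
    varFns.push(() => i); // `var` is function-scoped: every closure shares one i
  }
  const varResults = varFns.map((fn) => fn()); // [3, 3, 3]

  const letFns = [];
  for (let j = 0; j < 3; j++) {
    letFns.push(() => j); // `let` is block-scoped: each iteration gets its own j
  }
  const letResults = letFns.map((fn) => fn()); // [0, 1, 2]

  return { varResults, letResults };
}
```

A candidate who can explain why the first array is `[3, 3, 3]` and the second is `[0, 1, 2]` likely understands the scoping distinction well enough for day-to-day work.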
Likewise, “gotcha” questions specifically designed to be confusing or obtuse are unhelpful in gauging real-world problem-solving in any technical interview.
With mobile devices becoming more and more predominant in today's society, it's important to know that your candidate can scale the application's UI to non-desktop devices. Even if a design isn't provided, ask your candidate to include a view for smaller screens as well.
While JavaScript is more than able to conditionally render logic based on screen size, it's suggested to utilize CSS's media queries whenever possible. This allows your app to adjust to various-sized screens (and often helps with SEO).
# Frameworks
A major part of collaborating within a team effectively is a candidate's empathy.
## Accessibility
Part of a front-end engineer's role is to make sure that the application they're building is usable by all users. Making sure that users with screen readers, color blindness, or other impairments are able to use your application as easily as other users is important.
This could mean bringing up problems with color contrasts in a provided design, making sure that the candidate is using semantic HTML, or even that they're utilizing the right `aria` attributes. Don't forget that CSS can impact screen-reader support through properties like flexbox's “[order](https://developer.mozilla.org/en-US/docs/Web/CSS/order)”
Empathy isn't just something that should be expressed to users.
For example, code comments that explain how a particular bit of code works can be a form of documentation. If you're using a typed programming language like TypeScript in an interview, maybe explicit typings can be a form of documentation. Both of these can be demonstrated through a representative [Take-Home](https://coderpad.io/blog/hire-better-faster-and-in-a-more-human-way-with-take-homes/). Maybe the candidate is able to provide comments for a particularly confusing bit of code or add typing interfaces where `unknown` or `any` might've otherwise sufficed.
In other scenarios, creating example projects that showcase components' design can act as a bridge of communication between designers, engineers, and product managers. You can help encourage candidates to add a development route that acts as a showcase of their UI components.
# Ignore Style Differences
While most of this article has been focused on things ***to*** do and look for in an interview, let's look at something that shouldn't be focused on: code style. While there are certainly instances where `for` loops make more sense than `forEach`, or `function() {}` declarations are required instead of `() => {}` functions, most of the time they shouldn't be taken as a positive or negative indicator of code quality. Some engineers might prefer newer syntaxes such as object destructuring, but other engineers might have experience prior to the introduction of those syntaxes.
There are exceptions to this - using a single `for` loop may be more performant than multiple. Maybe code maintenance and readability are more important than performance for parts of your application.
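To make that concrete, here are two snippets doing the same work in different styles; neither should count for or against a candidate on its own:

```javascript
// Sum the squares of the even numbers, two ways
const nums = [1, 2, 3, 4];

// Imperative style with an indexed for loop
let sumLoop = 0;
for (let i = 0; i < nums.length; i++) {
  if (nums[i] % 2 === 0) sumLoop += nums[i] * nums[i];
}

// Chained functional style
const sumChained = nums
  .filter((n) => n % 2 === 0)
  .reduce((acc, n) => acc + n * n, 0);

// Both compute 20 (4 + 16)
```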
My holistic vision would consist of:
- Lots of full-filled content, such as video courses or pictures to serve alongside their written content
- A single place to host a course for someone
- An independent creator feeling comfortable enough to host content here without having to make their landing page in a separate service. As such, we'll need to provide a lightweight customization of a page to showcase their own brand/course.
- Focus on groups rather than single courses. Subscribing to a single content group/creator rather than "React course #1" which has no clear distinction from another "React course #1"
While the first point doesn't inform us of much at this early stage (we'll touch on UI tooling selection later), we can glean from the second point that we'll have to maintain some kind of storage layer. This will be something we'll need to keep in mind as we structure our goals.
Looking at what we need to do from the previous section, I can say that we could use the following:
- Courses will need content, so a way to upload/view content on courses
While thinking about these features, I want to keep the implementation details to a minimum, just enough to suffice with our resources by ignoring the nuances of certain permission features. However, notice how, despite thinking about the features minimally, _I'm also mentally mapping how the data should be structured and thinking about long-term implications_ in such a way that we can add them later without refactoring everything. This balance during architecture can be tough to achieve and becomes more and more natural with experience.
# Requirements {#data-requirements}
Finally, I look at the data requirements and features and start thinking about what we'll need:
- My data isn't likely to change structure very much
As a result, I'd feel comfortable using SQL for something like this.
- I need user authentication
I don't like rolling my own auth solution, so I'll probably use [passport](https://www.npmjs.com/package/passport) since it's been well tested and stable. If I want to enable users to sign in from their Google accounts or something in the future, I should keep that in mind even if I'm not building that functionality right away.
- I am going to be focusing on per-user UI (achievements, dashboards, etc.)
As such, my use of something like [Gatsby](https://www.gatsbyjs.org/) for static site generation (SSG) isn't realistically beneficial. We could go with server-side rendering (SSR) with something like [Next.JS](https://nextjs.org/), but due to using a lot of media (video/picture), I'd argue there's not much of a return-on-investment (ROI) by building SSR-first since the content has to be loaded by the DOM regardless.
- I'm not likely to have many forms in my application - primarily focusing on viewing rather than form creation
Sometimes it's important to know what an application is and _isn't_ going to be using. If we were highly focused on forms, I might advocate for [Angular](https://angular.io/) to be used in the front-end (since I have found their form system to be quite robust). However, since I know my team is not as familiar with Angular as other options and we have a limited budget, we likely won't be moving forward with it.
- However, we'll be hoping to have a lot of live-streamed user content in the future
Stuff like "live quizzes," live streaming/playback of video, anything that requires tracking of time/etc is all a great use case for event-based programming. One of the most prominent implementations of this in JavaScript is [RxJS](https://github.com/ReactiveX/rxjs).
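To give a flavor of that event-based style, here is a hand-rolled observable sketch (the real RxJS API is far richer; `createObservable` and the live-quiz framing are illustrative assumptions):

```javascript
// Minimal push-based event stream: subscribers receive every emitted value
function createObservable() {
  const subscribers = new Set();
  return {
    subscribe(fn) {
      subscribers.add(fn);
      return () => subscribers.delete(fn); // returns an unsubscribe handle
    },
    next(value) {
      subscribers.forEach((fn) => fn(value));
    },
  };
}

// e.g. quiz answers streaming in during a live session
const answers = createObservable();
const seen = [];
const unsubscribe = answers.subscribe((a) => seen.push(a));
answers.next("A");
answers.next("C");
unsubscribe();
answers.next("B"); // no longer recorded - seen stays ["A", "C"]
```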
So there we have it - a non-Angular, REST API, Passport-authenticated, SQL DB, non-SSR, RxJS-powered application.
From here, things start becoming a lot more subjective and a lot more social.
While I personally prefer Vue, after talking with my team, it became clear that they're much more comfortable with React. Because React has a large ecosystem with a sturdy backing, I'm not against using it since I feel it can sustain our product's growth over time.
Moving on to CSS was more of the same: it was less "what can support this specific use case" and more "what is familiar and can sustain our growth?".
This example is where things get really tricky because you often are not just picking a framework or library, but often a philosophy of CSS as well. After a long-form discussion with my (front-end focused) team about this, we decided to go with Styled Components and Material UI. These tools were decided on due to their flexibility, general A11Y support (for MUI), themability, and our comfort with the tools. The size and stability also took a role in this discussion.
Each tool and usage will weigh these questions differently.
# Conclusion
To recap, it's a mixture of:
- Proper planning (focusing on features and experience rather than tech)
- This point should take double priority, as understanding what to start building after picking the tech is important
- Expertise
I knew that SQL would suffice for our data thanks to my experience scaffolding various applications with SQL and NoSQL alike.
- Research
The only reason I knew what working with binary data over GQL would involve is the research I did ahead of time, before even writing any product code.
- Communication
This one is often overlooked but is **critical** - especially within teams. _Leverage each other's strengths and weaknesses and be open and receptive to suggestions/concerns_
That's by **no** means an easy feat to do, despite reading as if they were. Don't worry if you're not able to execute these skills flawlessly - goodness knows I can't! I'm sure a lot of the decisions I made here, even with the group I spoke to, could have been better guided in different ways. These are the skills that I think I value the most in senior developers, especially communication. _Communication becomes critical when working with medium/larger teams (or really, groups of any size) since reasonable minds may differ on toolsets that they might see strengths/weaknesses in_.
Have a similar question to the one Lindsey asked? Like conversations like this? Have something to add? [Join us in our Discord server](https://discord.gg/FMcvc6T) to jump into the community and engage in conversations like this! We wouldn't have the quality of our content without our community!
---
{
title: "How to Upgrade to React 18",
description: "React 18 introduces some awesome features that I'm sure you can't wait to try! Here's how you can get started with React 18 today!",
attached: [],
license: 'coderpad',
originalLink: 'https://coderpad.io/blog/how-to-upgrade-to-react-18/'
}
---
React 18 is the latest in a long line of major releases of React. With it you gain access to: [new features for Suspense](https://reactjs.org/docs/concurrent-mode-suspense.html), new [useId](https://github.com/reactwg/react-18/discussions/111), [useSyncExternalStore](https://github.com/reactwg/react-18/discussions/86), and [useDeferredValue](https://github.com/reactwg/react-18/discussions/100) hooks, as well as the new [startTransition](https://github.com/reactwg/react-18/discussions/100) API.
While React 18 is not yet a stable release, testing out your application can be useful.
Like with previous major releases of React, React 18 is a fairly easy migration for most apps.
While [Strict Mode has received some changes](https://github.com/reactwg/react-18/discussions/19) that may impact your app, and [automatic batching](https://github.com/reactwg/react-18/discussions/21) may introduce some new edge cases, these only impact apps that don't [follow the Rules of React properly](https://reactjs.org/docs/hooks-rules.html).
Outside of those considerations, let's upgrade!
## Installation
First, start by installing React 18:
```
npm i react@18.0.0-rc.0 react-dom@18.0.0-rc.0
```
Or, if you use `yarn`:
```
yarn add react@18.0.0-rc.0 react-dom@18.0.0-rc.0
```
If you're using Create React App, you may also want to [upgrade to the newest v5](https://github.com/facebook/create-react-app/releases/tag/v5.0.0) using:
```
npm i react-scripts@5
```
Or
```
yarn add react-scripts@5
```
Then, make sure to upgrade any dependencies that might rely on React.
For example, upgrade [React Redux to v8](https://github.com/reduxjs/react-redux/releases/tag/v8.0.0-beta.2) or [SWR to 1.1.0](https://github.com/vercel/swr/releases/tag/1.1.0).
## Update `render` method
After you install React 18, you may receive an error when your app is running:
> Warning: ReactDOM.render is no longer supported in React 18. Use createRoot instead. Until you switch to the new API, your app will behave as if it's running React 17. Learn more:[ https://reactjs.org/link/switch-to-createroot](https://reactjs.org/link/switch-to-createroot)
This is because previously, in React 17 and before, you'd have a file - usually called `index.js` or `index.ts` - that included the following code:
```
const rootElement = document.getElementById("root");
ReactDOM.render(<App />, rootElement);
```
While this code will continue to function for this release, it will not allow you to leverage most of the new features of React 18. Further, it'll be removed in a future release of React.
To fix this issue, replace this code with the following:
```
const rootElement = document.getElementById("root");
ReactDOM.createRoot(rootElement).render(
<App />
);
```
When finished, you should be able to verify the version of React you're using with `{React.version}`
<iframe src="https://app.coderpad.io/sandbox?question_id=200107" loading="lazy"></iframe>
## Conclusion
As promised, the update to React 18 is fairly straightforward! Most applications should be able to upgrade without too many problems.
If you run into issues during your migration and you're using `StrictMode`, try temporarily removing it to see if that resolves them. [React 18 introduced some changes that may impact some apps.](https://github.com/reactwg/react-18/discussions/19)
We hope you enjoy the new [React concurrent features](https://github.com/reactwg/react-18/discussions/4) and happy hacking!
Any sufficiently useful programming language needs an ecosystem to rely on. One of the cornerstones of the JavaScript ecosystem is `npm`.
`npm` is a combination of two things:
1. The registry - the servers and databases that host the packages with their specific named packages.
2. The client-side CLI utility - the program that runs on your computer in order to install and manage the packages on your local disk
When, say, Facebook wants to publish a new version of `react`, someone from the React team (with publishing credentials) will set up and build the production version of the React source code, then open the client-side utility to run the command `npm publish`, which will send the production code to the registry. From there, when you install `react` using the `npm` command on your device, it will pull the relevant files from the registry onto your local machine for you to use.
While the registry is vital for the usage of the CLI utility, most of the time we'll be interacting with the CLI itself.
# Setting Up Node {#setup-node}
Before we explain how to install Node, let's explain something about the release process of the software.
When it comes to install options, there are two:
1. LTS
2. Current
The "LTS" release stands for "long-term support" and is considered the most "stable" release that is recommended for production usage. This is because LTS releases will receive critical bug fixes and improvements even after a new version comes along. LTS releases often see years of support.
NodeJS switches back and forth between LTS and non-LTS stable releases.
## Installing Node {#installing-node}
You can find pre-built binaries ready-to-install from [NodeJS' website](https://nodejs.org/en/download/). Simply download the package you want and install it.
> If you're unsure which version of Node to go with, stick to the LTS release
Just as there's a method for installing `yarn` natively on macOS, you can do the same on Windows using Chocolatey:

```
choco install yarn
```
> There are other methods to install Yarn on Windows if you'd rather. [Look through `yarn`'s official docs for more](https://classic.yarnpkg.com/en/docs/install/#windows-stable)
# Using Node {#using-node}
Now that you have it set up, let's walk through how to use Node. First, start by opening your terminal.
Then, in your terminal, `cd` into the directory the `index.js` file is in and run `node index.js`:
![Windows Terminal showing the program output](./output-js.png)
This particular program automatically exits Node once it's completed running, but not all do. Some programs, like the following, may run until manually halted:
```javascript
// index.js
setInterval(() => console.log("Running..."), 1000);
```
While you can use these numbers arbitrarily, most projects follow a standard called semantic versioning (SemVer).
The basics of semantic versioning can be broken down into three parts:
1. The major version
2. The minor version
3. The patch version
In SemVer, a package version might look something like `MAJOR.MINOR.PATCH`. A package with `2.1.3` has a "**major** version" of `2`, a "**minor** version" of `1`, and a "**patch** version" of `3`.
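As a quick illustration, a version string splits into those three numeric parts; `parseSemVer` here is a hypothetical helper for demonstration, not part of npm:

```javascript
// Split a "MAJOR.MINOR.PATCH" string into named numeric parts
function parseSemVer(version) {
  const [major, minor, patch] = version.split(".").map(Number);
  return { major, minor, patch };
}

// A package at 2.1.3 has major 2, minor 1, patch 3
const parsed = parseSemVer("2.1.3");
```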
## Lock Files {#package-lock}
Once you run `npm i` on a project with dependencies, you'll notice a new file in your root folder: `package-lock.json`. This file is called your **"lockfile"**. **This file is auto-generated by `npm` and should not be manually modified.**
> If you're using `yarn`, you'll notice instead this file is called `yarn.lock`. It serves the same purpose as `package-lock.json` and should be treated similarly
While your `package.json` describes which versions you'd _prefer_ to be installed, your lockfile nails down exactly which versions of the dependencies (and sub-dependencies) were resolved and installed when it came time to install your packages. This allows you to use commands like `npm ci` to install directly from this lockfile and install the exact same version of packages you had installed previously.
This can be incredibly helpful for debugging package resolution issues as well as making sure your CI/CD pipeline installs the correct versions of deps.
While it's imperative not to track your `node_modules` folder, you **want to commit your `package-lock.json` file in your git repo**. This ensures that things like CI pipelines are able to run the same versions of dependencies you're utilizing on your local machine.


@@ -41,7 +41,6 @@ Once the copying of the files from the Android Studio environment to `Assets` ha
This will naturally incur a question for developers who have tried to maintain a system of duplication of any size:
**How do you manage dependencies between these two folders?**
## Managing Android Dependencies {#android-dependencies}
Luckily for us, managing Android code dependencies in Unity has a thought-out solution from a large company: Google. [Because Google writes a Firebase SDK for Unity](https://firebase.google.com/docs/unity/setup), they needed a solid way to manage native dependencies within Unity.
@@ -111,17 +110,13 @@ dependencies {
This will take all of the AAR files and JAR files and treat them as if they were synced by Android Studio's Gradle sync.
For more information on how to manage your app's dependencies from within Unity, you may want to check out [this article created by the Firebase developers](https://medium.com/firebase-developers/how-to-manage-your-native-ios-and-android-dependencies-in-unity-like-firebase-921659843aef), who coincidentally made the plugin for managing Android dependencies in Unity.
# Call Android code from C# {#call-android-from-c-sharp}
It's great that we're able to manage those dependencies, but they don't mean much if you're not able to utilize the code from them!
For example, take the following library: <https://github.com/jaredrummler/AndroidDeviceNames>
That library allows you to grab metadata about a user's device. This might be useful for analytics or bug reporters you may be developing yourself. Let's see how we're able to integrate this Java library in our C# code when building for the Android platform.
@@ -162,11 +157,11 @@ withInstance.request(handleOnFinished);
You can see that we have a few steps here:
1. Make a new `Callback` instance
- Provide an implementation of `onFinished` for said instance
2. Call `DeviceName.with` to create a request we can use later
- This means that we have to gain access to the currently running context to gain device access. When calling the code from Unity, it means we have to get access to the `UnityPlayer` context that Unity engine runs on
3. Call that request's `request` method with the `Callback` instance
For each of these steps, we need to have a mapping from the Java code to C# code. Let's walk through these steps one-by-one.


@@ -58,7 +58,7 @@ Something to keep in mind is that these disabilities may not be permanent. For i
> Microsoft originally created this chart as part of their [Inclusive Toolkit](https://download.microsoft.com/download/b/0/d/b0d4bf87-09ce-4417-8f28-d60703d672ed/inclusive_toolkit_manual_final.pdf) manual
Creating an application that's accessible means that you're making a better experience for _all_ of your users.
By making your services accessible to more people, you are most importantly making them more equitable, but there is often a business case for accessibility. Opening your doors to more users may create an additional financial incentive, and many organizations have a legal requirement to meet accessibility guidelines. For instance, the U.S. Federal Government is subject to [Section 508](https://www.section508.gov/manage/laws-and-policies), which requires compliance with [Web Content Accessibility Guidelines (also known as WCAG, which we'll touch on later)](#wcag). Likewise, private US companies may be subject to compliance due to the "Americans with Disabilities Act" (shortened to "ADA"). The U.S. isn't the only country with these requirements, either. According to [WCAG's reference page for various legal laws](https://www.w3.org/WAI/policies/), there are at least 40 such laws in place around the world.
@@ -76,7 +76,7 @@ There are different scales of accessibility as well. [WCAG includes three differ
> - Level AA includes all Level A and AA requirements. Many organizations strive to meet Level AA.
> - Level AAA includes all Level A, AA, and AAA requirements.
Meeting AA requirements is typically seen as a good commitment to accessibility, but AAA will open more doors to your users and is the gold standard for accessible user experience.
Far from a comprehensive list, A requires:
@@ -102,7 +102,7 @@ Interested in reading the full list? [Read the quick reference to WCAG 2.1](http
# Smartly using Semantic HTML Tags {#html-semantic-tags}
One of the easiest things you can do for your application's accessibility is to use semantic HTML tags.
Let's say we have HTML to display fruits in a list:
@@ -147,7 +147,7 @@ In our previous example, we used an HTML attribute [`aria-label`](https://develo
A super small subsection of `aria-` attributes includes:
- [`aria-labelledby`](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/ARIA_Techniques/Using_the_aria-labelledby_attribute) — Associate the element with another element's text as the label
- `aria-expanded` — A Boolean value meant to communicate when a dropdown is expanded
- `aria-valuemin` — The minimum allowed value in a numerical input
- `aria-valuemax` — The maximum allowed value of a numerical input
@@ -156,7 +156,7 @@ Additional to `aria` props, [the `role` property](https://developer.mozilla.org/
# Classy CSS {#css}
While HTML relays a significant amount of information to assistive technologies like screen readers, it's not the only thing used to inform those tools. Certain CSS rules can change the functionality as well. After all, screen readers (and other tools) don't look through the source code of a website. Instead, they're looking at [the accessibility tree](https://developers.google.com/web/fundamentals/accessibility/semantics-builtin/the-accessibility-tree): a modified version of the DOM. The accessibility tree and the DOM are both constructed by the browser from the website's source code.
> Want to learn more about the DOM, how the browser constructs it, and what it's used for internally? [This article helps explain this in detail](https://unicorn-utterances.com/posts/understanding-the-dom/).
@@ -178,7 +178,7 @@ For this reason, there's a frequently used CSS class used to hide elements visua
```css
/* A sketch of the commonly used "visually hidden" pattern; the original
   class body is truncated in this diff hunk */
.visually-hidden {
  clip: rect(0 0 0 0);
  clip-path: inset(50%);
  height: 1px;
  overflow: hidden;
  position: absolute;
  white-space: nowrap;
  width: 1px;
}
```
There are many ways which CSS can influence assistive technologies. [Ben Myers covers this more in his blog post](https://benmyers.dev/blog/css-can-influence-screenreaders/).
# Contrast is Cool {#contrast}
@@ -188,7 +188,7 @@ While there are various reasons a user might not be able to see weakly contraste
<img alt="Dark gray text on a black background" src="./color_fail.png" style="max-width: 600px; width: 100%"/>
Now, compare that to highly contrasting colors:
<img alt="White text on a black background" src="./color_pass.png" style="max-width: 600px; width: 100%"/>
@@ -221,7 +221,7 @@ Many phones using iOS and Android allow users to change the font size on their m
</figure>
</div>
Not only do you have these settings on mobile devices, but they're available on desktop as well.
Using Chrome, go to [your settings page](chrome://settings/?search=font+size), and you should be able to set your font size.
@@ -231,22 +231,21 @@ You can do the same in Firefox in [your preferences](about:preferences#general).
![Font settings in Firefox](./firefox_font_size.png)
## Implementation {#font-rem}
While browsers have the ability to set the font size, if you're using `px`, `vw`, `vh`, or other unit values for your fonts, the browser will not update these font sizes for you. In order to have your application rescale the font size to match the browser settings, you'll need to use the `rem` unit.
You can think of `rem` as a multiplier to apply to the default font size. When the browser's font size is set to `16px`:
- `1rem` will be `16px` (1 \* 16px)
- `1.5rem` will be `24px` (1.5 \* 16px)
- `3rem` will be `48px` (3 \* 16px)
Likewise, when the browser's font size is set to `20px`:
- `1rem` will be `20px` (1 \* 20px)
- `1.5rem` will be `30px` (1.5 \* 20px)
- `3rem` will be `60px` (3 \* 20px)
> Something to keep in mind is that `rem` is a _relative_ font size. It's relative to the root element's font size. _This means that you cannot set a default `px` value font size in CSS to the `<html>` tag or to the `:root` selector, as it will disable font scaling, even if the rest of your page is using `rem` values._
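The multiplication above can be sketched as a plain function (illustrative only; the browser performs this scaling itself, and `remToPx` is a name invented here):

```javascript
// Compute the rendered pixel size for a rem value, given the root font size
function remToPx(rem, rootFontSizePx) {
  return rem * rootFontSizePx;
}

console.log(remToPx(1.5, 16)); // 24
console.log(remToPx(3, 20)); // 60
```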
@@ -302,7 +301,7 @@ If anyone is ever advertising to you that your inaccessible project can be made
## Assistance is Amicable {#eslint}
While full automation will never be possible for improving a project's accessibility, not everyone proposing assistance in the process is trying to sell snake oil.
For example, [Deque's open-source Axe project](https://github.com/dequelabs/axe-core) can help identify issues such as common HTML semantic errors, contrast problems, and more. There are even libraries that help integrate Axe into your project's linters, such as one for React called [`eslint-plugin-jsx-a11y`](https://github.com/jsx-eslint/eslint-plugin-jsx-a11y).
@@ -322,7 +321,7 @@ As mentioned in [a previous section](#no-automation), the process to make your a
While there is plenty you can do to make existing functionality accessibility friendly, it's often forgotten that a strongly accessible app may opt to add specific functionality for its users with disabilities.
Some great examples of things like this are sites with lots of user-generated content. For example, Twitter allows its users to [add alternative (alt) text to their uploaded images and GIFs](https://help.twitter.com/en/using-twitter/picture-descriptions). Likewise, YouTube has the ability to [add subtitles and captions](https://support.google.com/youtube/answer/2734796?hl=en) to uploaded videos on their platform.
Oftentimes, you'll find that these features benefit everyone, not just assistive technology users. You may want to watch a video in a crowded area; with closed captions, that's a much easier sell than trying to hear over others and interrupting everyone around you.
@@ -382,4 +381,4 @@ We hope you've enjoyed learning from our accolade-worthy alliterative headlines.
There are so many things that we wanted to include in this article but couldn't. Like most parts of engineering, the field of accessible design and the nuances within can be incredibly complex in fringe scenarios. Getting accessibility in a great place for your users takes active effort - just like any other part of building your app. Because of this, we encourage you to do [further research](#further-reading) on the topic. Don't be afraid to ask questions of community members, either! Many in the community are incredibly helpful and friendly.
Speaking of community, we'd love to hear your thoughts on this article. Did you learn something from it? Have questions about something accessibility-related? Think we missed something? [Join our Slack community](https://bit.ly/coderpad-slack) and chat with us or [send us a Tweet](https://twitter.com/coderpad)!


@@ -35,7 +35,7 @@ While the Shadow DOM and HTML templates are undoubtedly useful in applications,
## What are Custom Elements?
At their core, custom elements essentially allow you to create new HTML tags. These tags are then used to implement custom UI and logic that can be used throughout your application.
```
<!-- page.html -->
<!-- A sketch; the original snippet is truncated in this diff hunk -->
<page-header></page-header>
```
@@ -52,7 +52,7 @@ While we tend to think of HTML tags as directly mapping to a single DOM element,
![Chrome DevTools showing the page header element expand into multiple tags](./chrome.png)
Because of this, we're able to improve an app's organization by reducing the number of tags visible in a single file, allowing it to read with better flow.
But custom elements aren't just made up of HTML - you're able to associate JavaScript logic with these tags as well! This enables you to keep your logic alongside its associated UI. Say your header is a dropdown that's powered by JavaScript. Now you can keep that JavaScript inside of your “page-header” component, keeping your logic consolidated.
@@ -62,16 +62,15 @@ Finally, a significant improvement that components provide is composability. You
While many implementations of components have differences, one concept that is fairly universal is “lifecycle methods”. At their core, lifecycle methods enable you to run code when events occur on an element. Even frameworks like React, which have moved away from classes, still have similar concepts of doing actions when a component is changed in some way.
Let's take a look at some of the lifecycle methods that are baked into the browser's implementation.
Custom elements have 4 lifecycle methods that can be attached to a component.
| connectedCallback        | Ran when attached to the DOM                                                     |
| ------------------------ | -------------------------------------------------------------------------------- |
| disconnectedCallback     | Ran when unattached to the DOM                                                   |
| attributeChangedCallback | Ran when one of the web component's attributes is changed. Must explicitly track |
| adoptedCallback          | Ran when moved from one HTML document to another                                 |
> While each of them has its uses, we'll primarily be focusing on the first 3. `adoptedCallback` is primarily useful in niche circumstances and is therefore difficult to make a straightforward demo of.
@@ -158,7 +157,6 @@ Because this JavaScript object is simple and only utilizes [primitive data types
### Serializing Limitations
While simple objects and arrays can be serialized relatively trivially, there are limitations. For example, take the following code:
```javascript
// A sketch; the original example is truncated in this diff hunk.
// An object holding values that can't be serialized: a method and `window`
const obj = {
  currentWindow: window,
  sayHello() {
    console.log("Hello");
  }
};
```
@@ -175,8 +173,7 @@ If we wanted to send this object to a server from a client remotely with the met
`window`, while available in the browser, is not available in NodeJS, which the server may likely be written in. Should we attempt to serialize the `window` object and pass it along with the method? What about methods on the `window` object? Should we do the same with those methods?
On the other end of the scale, while `console.log` **_is_** implemented in both NodeJS and browsers alike, it's implemented using native code in both runtimes. How would we even begin to serialize native methods, even if we wanted to? _Maybe_ we could pass machine code? Even ignoring the security concerns, how would we handle the differences in machine code between a user's ARM device and a server's x86\_64 architecture?
All of this becomes a problem before you even consider that your server may well not be running NodeJS. How would you even begin to represent the concept of `this` in a language like Java? How would you handle the differences between a dynamically typed language like JavaScript and C++?
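For instance, `JSON.stringify` quietly drops function-valued keys rather than erroring (the object below is illustrative):

```javascript
const data = {
  name: "Corbin",
  sayName() {
    console.log(this.name);
  }
};

// Functions cannot be represented in JSON, so the `sayName` key is dropped
console.log(JSON.stringify(data)); // {"name":"Corbin"}
```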
@@ -198,7 +195,6 @@ It simply omits the key from the JSON string. This is important to keep in mind
### HTML Attribute Strings
Why are we talking about serialization in this article? To answer that, I want to mention two truths about HTML elements.
- HTML attributes are case insensitive
@@ -210,14 +206,12 @@ The first of these truths is simply that for any attribute, you can change the k
```html
<input type="checkbox"/>
```
And:
```html
<input tYpE="checkbox"/>
```
The second truth is much more relevant to us in this discussion. While it might seem like you can assign non-string values to an attribute, they're always parsed as strings under-the-hood.
You might think about being tricky and using JavaScript to assign non-string values to an attribute:
@@ -282,7 +276,7 @@ While this tends to be the extent of HTML elements deserializing of attribute
As we touched on previously, if we simply try to pass an array to an attribute using JavaScript's `setAttribute`, it will not include the brackets. This is due to the output of `Array.toString()`.
If we attempted to pass the array `["test", "another", "hello"]` from JS to an attribute, the output would look like this:
```javascript
<script>
  // A sketch; the original snippet is truncated in this diff hunk.
  // The array is coerced via `toString`, so the attribute receives
  // "test,another,hello" — no brackets, no quotes.
  document
    .querySelector("my-component")
    .setAttribute("todos", ["test", "another", "hello"]);
</script>
```
@@ -313,7 +307,6 @@ If we attempted to pass the array ``["test", "another", "hello"]`` from JS to an
Because of the output of `toString`, it's difficult to convert the attribute value back into an array. As such, we only display the data inside of a `<p>` tag. But lists don't belong in a single paragraph tag! They belong in a `ul` with individual `li`s per item in the list. After all, [semantic HTML is integral for an accessible website](https://coderpad.io/blog/introduction-to-web-accessibility-a11y/)!
Let's instead use `JSON.stringify` to serialize this data, pass that string to the attribute value, then deserialize that in the element using `JSON.parse`.
```html
<script>
  // A sketch; the original snippet is truncated in this diff hunk
  const serialized = JSON.stringify(["test", "another", "hello"]);
  document.querySelector("my-component").setAttribute("todos", serialized);
</script>
```
@@ -350,9 +343,7 @@ Using this method, were able to get an array in our `render` method. From the
## Pass Array of Objects
While an array of strings is a straightforward demonstration of serializing attributes, it's hardly representative of real-world data structures.
Let's start working towards making our data more realistic. A good start might be to turn our array of strings into an array of objects. After all, we want to be able to mark items “completed” in a todo app.
@@ -403,11 +394,9 @@ Lets take a look at how we can display this in a reasonable manner using our
> Remember, checked=”false” will leave a checkbox checked. This is because “false” is a truthy string. Reference our “serializing limitations” section for more reading.
Now that we're displaying these checkboxes, let's add a way to toggle them!
```javascript
var todoList = [];

// A sketch of the truncated function: flip the `completed` flag on every todo
function toggleAll() {
  todoList = todoList.map((todo) => ({ ...todo, completed: !todo.completed }));
}
```
@@ -423,7 +412,6 @@ function changeElement() {
Now, all we need to do is run the function “toggleAll” on a button press and it will update the checkboxes in our custom element.
Now that we have a way to toggle all checkboxes, let's look at how we can toggle individual todo items.
@@ -531,13 +519,13 @@ If this isnt a good long-term solution, what can we do to fix our issue with
## Pass via Props, not Attributes
Attributes provide a simple method of passing primitive data to your custom elements. However, as we've demonstrated, this approach falls flat in more complex usage due to the requirement to serialize your data.
Knowing that we're unable to bypass this limitation using attributes, let's instead take advantage of JavaScript classes to pass data more directly.
Because our components are classes that extend `HTMLElement`, we're able to access our properties and methods from our custom element's parent. Let's say we want to update `todos` and render once the property is changed.
To do this, we'll simply add a method to our component's class called “`setTodos`”. This method will then be accessible when we query for our element using `document.querySelector`.
```javascript
class MyComponent extends HTMLElement {
  // A sketch; the rest of the component is truncated in this diff hunk
  setTodos(todos) {
    this.todos = todos;
    this.render();
  }
}
```
@@ -568,16 +556,15 @@ function changeElement() {
Now, if we toggle items in our todo list, our `h1` tag updates as we would expect: we've solved the mismatch between our DOM and our data layer!
Because we're updating the _properties_ of our custom elements, we call this “passing via properties”, which solves the serialization issues of “passing via attributes”.
But that's not all! Properties have a hidden advantage over attributes for data passing as well: memory size.
When we were serializing our todos into attributes, we were duplicating our data. Not only were we keeping the todo list in-memory within our JavaScript, but the browser keeps loaded DOM elements in memory as well. This means that for every todo we added, not only were we keeping a copy in JavaScript, but in the DOM as well (via attribute string).
But surely, that's the only way memory is improved when migrating to properties, right? Wrong!
Keep in mind that, on top of being loaded in-memory in JS in our main `script` tag and in the browser via the DOM, we were also deserializing it in our custom element as well! This meant that we were keeping a _third_ copy of our data initialized in-memory simultaneously!
While these performance considerations might not matter in a demo application, they would add significant complications in production-scale apps.
@@ -587,7 +574,6 @@ Weve covered a lot today! Weve introduced some of the core concepts at pla
While we spoke a lot about passing data by attributes vs. properties today, there are pros and cons to both. Ideally, we would want the best of both worlds: the ability to pass data via property in order to avoid serialization, but keep the simplicity of attributes by reflecting their value alongside the related DOM element.
Something else we've lost since the start of this article is code readability in element creation. Originally, when we were using `innerHTML`, we were able to see a visual representation of the output DOM. When we needed to add event listeners, however, we were required to switch to `document.createElement`. Preferably, we could attach event listeners without sacrificing the in-code HTML representation of our custom element's rendered output.
While these features may not be baked into the web component specifications themselves, there are other options available. In our next article, we'll take a look at a lightweight framework we can utilize to build better web components that can integrate with many other frontend stacks!


@@ -173,22 +173,27 @@ can interact with it. From here, a few different events can occur...
Let's say that another Activity comes into the foreground (but this activity is still visible behind
it; imagine a popup or something that the user will return to your app from).
- `onPause()` called; stop doing anything significant - playing music or any continuous task not running
  in another component (like a `Service`) should be ceased.
Then, if the user returns to the activity...
- `onResume()` called; resume whatever was paused previously
If the user leaves your activity completely, then you will get:
- `onPause()` called; probably stop doing stuff maybe
- `onStop()` called; okay, REALLY stop doing stuff now
Then, if the user navigates back to your activity...
- `onRestart()` called
- `onStart()` called
- `onResume()` called
When the application is completely closed by the user, then you will receive:
- `onPause()` called
- `onStop()` called
- `onDestroy()` called
@@ -215,7 +220,7 @@ See: [Service documentation](https://developer.android.com/reference/android/app
A broadcast receiver can be seen as more of an "event" that occurs once and is over. They can run
independently from a UI, the same as a Service, but only for a short period of time (I believe they are
terminated by the system after \~10 seconds - citation needed). However, they are given a `Context`,
and can fire an intent to start other components of the app if needed.
Broadcast receivers are a little special in that they don't have to be declared explicitly in the


@@ -151,6 +151,7 @@ What if you had a way to mutate or modify this function? What if this way of mut
> As always, feel free to search more on them (the terms "JavaScript ORM" might help) and always know that not knowing a thing is always okay 🤗
Here's an example [from a library built to do just that](https://typeorm.io/#/) that allows you to preserve the TypeScript type to save data in specified field types in your database:
```typescript
import {Entity, PrimaryGeneratedColumn, Column} from "typeorm";

// A sketch of a TypeORM entity (the original snippet is truncated in this
// diff hunk): decorators map typed class fields onto database column types
@Entity()
export class User {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  name: string;
}
```


@@ -41,7 +41,7 @@ This “intuitive” code comes loaded with assumptions and processes that we re
- What is this doing under the hood?
- Are we able to utilize functions in potentially unexpected ways?
While you _could_ get away with never knowing the answers to these questions, being a great developer often involves understanding how the tools we use actually work, and JavaScript functions are no exception.
For example, do you know what “function currying” is and why it's useful? Or do you know how `[].map()` and `[].filter()` are implemented?
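As a tiny preview of currying (a sketch; the names below are invented for illustration), a curried function takes its arguments one at a time:

```javascript
// Currying: a function that returns a new function until
// all of its arguments have been supplied
const add = (a) => (b) => a + b;

// Partial application: "pre-fill" the first argument
const addFive = add(5);
console.log(addFive(3)); // 8
```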
@@ -49,7 +49,7 @@ Fret not, dear reader, as we will now take a look at all these questions.
# Why are we able to assign a function to a variable?
To understand why we're able to assign a function to a variable, let's analyze what happens when _anything_ is assigned to a variable.
## How memory works
@@ -68,13 +68,13 @@ This will create two sections of memory that your compiler will keep around for
This might be visually represented like so:
![A big block called "memory" with two items in it. One of them has a name of "helloMessage" and is address 0x7de35306 and the other is "byeMessage" with an address of 0x7de35307.](./memory_block.png)
It's important to remember that the memory address itself doesn't store the name, your compiler does. When you create blocks of memory via variables, the compiler gets back a number that it can use to look up the variable's value inside of a "stack" of memory.
You can _loosely_ think of this memory stack as an array that the compiler looks through in order to get the data based on an index. This number can be huge because your computer likely has multiple gigabytes of RAM. Even 16GB is equivalent to 1.28e+11 bytes. Because of this, memory addresses are often colloquially shortened to [hexadecimal representations](https://unicorn-utterances.com/posts/non-decimal-numbers-in-tech).
This means that our _0x7de35306_ memory address is associated with bit number 2112049926, or just over the 0.2GB mark.
> This explanation of memory is a very generalized explanation of how memory allocation works. [You can read more about memory stacks here.](https://en.wikipedia.org/wiki/Stack-based_memory_allocation)
@@ -98,7 +98,7 @@ console.log(memoryBlocks[0x7de35306]);
console.log(memoryBlocks[0x7de35307]);
```
> This code is simply pseudocode and will not actually run. Instead, your computer will compile down to ["machine code" or "assembly code"](https://unicorn-utterances.com/posts/how-computers-speak#assembly-code), which will in turn run on "bare metal". What's more, this is a drastic oversimplification of how your browser's JIT compiler and your system's memory management*actually* works under-the-hood.
> This code is simply pseudocode and will not actually run. Instead, your computer will compile down to ["machine code" or "assembly code"](https://unicorn-utterances.com/posts/how-computers-speak#assembly-code), which will in turn run on "bare metal". What's more, this is a drastic oversimplification of how your browser's JIT compiler and your system's memory management _actually_ work under the hood.
## How does this relate to function storage?
@@ -122,7 +122,7 @@ const sayHello = () => {
sayHello();
```
As you might correctly assume, this means that both of these syntaxes allow a function to be stored in memory.
As you might correctly assume, this means that both of these syntaxes allow a function to be stored in memory.
Using our pseudocode again, this might look like:
@@ -181,7 +181,7 @@ function sayThis(message) {
sayThis("Hello");
```
Here, we're passing a string as a property to the `sayThis` function.
Here, we're passing a string as a parameter to the `sayThis` function.
Just like you can pass in integers, strings, or arrays to a function, you might be surprised to know that you can also pass functions into a function:
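A minimal sketch of that idea, reconstructing the example the surrounding text refers to (the `sayHello` and `doThis` names follow the nearby code):

```javascript
function sayHello() {
  console.log("Hello");
}

// `doThis` accepts a function as its parameter and calls it:
function doThis(fn) {
  fn();
}

// Pass the function itself (no parentheses), not its result:
doThis(sayHello);
```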
@@ -199,7 +199,7 @@ doThis(sayHello);
This will output the same "Hello" as our previous `sayThis` usage.
Not only can you call these functions that are passed as parameters, but you can pass parameters to *those* functions as well.
Not only can you call these functions that are passed as parameters, but you can pass parameters to _those_ functions as well.
```javascript
function callThisFn(callback) {
@@ -225,7 +225,7 @@ In case this isn't clear, let's do our previous trick of calling a function with
# What about returning a function from another function?
As a functions input, parameters are only half of the story of any function's capabilities just as any function can output a regular variable, they can also output another function:
As a function's input, parameters are only half of the story of any function's capabilities; just as any function can output a regular variable, it can also output another function:
```javascript
function getMessage() {
@@ -255,7 +255,7 @@ messageFn();
getMessageFn()();
```
This code block is an extension on the "returned value" idea. Here, we're returning*another* *function* from `getMessageFn`. This function is then assigned to `messageFn` which we can then in turn call itself.
This code block is an extension on the "returned value" idea. Here, we're returning _another function_ from `getMessageFn`. This function is then assigned to `messageFn`, which we can then call in turn.
Meta, right?
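Condensing the pattern above into a compact sketch (the exact message string is an assumption for illustration):

```javascript
function getMessageFn() {
  // Instead of returning a plain value, return a brand-new function:
  return () => "Hello, world";
}

// Assign the returned function to a variable, then call it...
const messageFn = getMessageFn();
console.log(messageFn());

// ...or call the returned function immediately with a second pair of parentheses:
console.log(getMessageFn()());
```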
@@ -292,7 +292,7 @@ sayHello(); // Will log "Hello, world"
# How do you pass data from one function to another? A pipe function!
The concepts we've spoken about today are commonly utilized when programming in a style called "functional programming." Functional programming is a style of programming - similar to["Object Oriented Programming" (OOP)](https://www.educative.io/blog/object-oriented-programming) - that utilizes functions as a method to pass, change, and structure data.
The concepts we've spoken about today are commonly utilized when programming in a style called "functional programming." Functional programming is a style of programming - similar to ["Object Oriented Programming" (OOP)](https://www.educative.io/blog/object-oriented-programming) - that utilizes functions as a method to pass, change, and structure data.
Functional programming relies heavily on the properties of functions that we've looked at today: passing functions to other functions, returning functions from functions, and more.
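As one hedged sketch of the "pipe" idea from the heading above — this is a common minimal implementation, not necessarily the exact one the full article builds:

```javascript
// Each function's output becomes the next function's input:
const pipe = (...fns) => (input) => fns.reduce((value, fn) => fn(value), input);

const addOne = (n) => n + 1;
const double = (n) => n * 2;

// Compose the two steps into a single function:
const addOneThenDouble = pipe(addOne, double);
console.log(addOneThenDouble(3)); // (3 + 1) * 2 = 8
```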
@@ -532,7 +532,6 @@ console.log(sum); // 6
Now that you've mastered the fundamentals of JavaScript functions, you can build more kinds of APIs for your applications. These APIs can help you make debugging easier, consolidate your application logic, and more.
The functional programming paradigms we've touched on today are immensely popular in ecosystems like React applications and library development. In particular, [React uses these concepts alongside its `useEffect` API.](https://coderpad.io/blog/development/rules-of-reacts-useeffect/)
These concepts aren't unique to JavaScript, either! Python utilizes similar ideas in its ["list comprehension" functionality.](https://coderpad.io/blog/development/python-list-comprehension-guide/)
@@ -46,12 +46,13 @@ For a more detailed explanation of connecting to freenode, [Freenode's documenta
First, you'll want to choose a nick. This will be something that all users will see and address you by, so it should be easy to remember. If you have a Twitter or GitHub handle, it is best to make it as similar as possible to that in order to stay consistent. In the following steps, replace the information surrounded by `<>` with the relevant data.
1. Send the command `/nick <username>`, followed by a message to `NickServ` by running `/msg NickServ REGISTER <password> <email@example.com>`.
1. Send the command `/nick <username>`, followed by a message to `NickServ` by running `/msg NickServ REGISTER <password> <email@example.com>`.
2. You should receive an email with another command to run, along the lines of `/msg NickServ VERIFY REGISTER <username> <code>`. This will confirm your identity to freenode and reserve the nickname for your use.
3. If you plan to use your account from multiple devices simultaneously, you will need to have one username for each. You can join them to your current account by:
- Setting your nick to a new username: `/nick <username2>`
- Identifying with your existing credentials: `/msg NickServ IDENTIFY <username> <password>`
- Grouping the nick with your account: `/msg NickServ GROUP`
- Setting your nick to a new username: `/nick <username2>`
- Identifying with your existing credentials: `/msg NickServ IDENTIFY <username> <password>`
- Grouping the nick with your account: `/msg NickServ GROUP`
Each time you reconnect to freenode, you will need to log in. [Freenode's registration docs](https://freenode.net/kb/answer/registration) have more information on this, but it is possible to simply run `/msg NickServ IDENTIFY <username> <password>` each time you connect.
@@ -22,7 +22,7 @@ What does that mean?
Well, as it turns out, anything that happens in the browser basically happens out in the open. Anyone who knows how to open a developer console can see the output of the JavaScript console, the results of network requests/responses, and anything hidden in the HTML or CSS of the current page. While you are able to mitigate this type of reverse-engineering by randomizing variable names in a build step (often called "Obfuscating" your code), even a fairly quick Google session can often undo all of the efforts you took to muddy the waters. The browser is a terrible place to try to store or use secret information like unencrypted passwords or API keys - and React runs in the browser!
In other words, React keeps no secrets from the user which means that it's a terrible place to keep *your* secrets.
In other words, React keeps no secrets from the user, which means that it's a terrible place to keep _your_ secrets.
So, what is the answer? How do you keep your API keys from falling into the hands of vicious web scraping bots in React? It's simple, really. You don't keep your secrets in React at all.
@@ -30,7 +30,7 @@ We can't keep things like API keys a secret in React because it runs in the brow
# What is a Proxy Server? {#proxy}
If you are unfamiliar with the term "proxy server", that's alright! If you think about how a React app would typically interface with an API, you'd have a `GET` call to the API server in order to get the data you want from the API. However, for APIs that require an API key of "client_secret", we have to include an API key along with the `GET` request in order to get the data we want. This is a perfectly understandable method for securing and limiting an API, but it introduces the problem pointed out above: We can't simply bundle the API key in our client-side code. As such, we need a way to keep the API key out of reach of our users but still make data accessible. To do so, we can utilize another server (that we make and host ourselves) that knows the API key and uses it to make the API call _for_ us. Here's what an API call would look like without a proxy server:
If you are unfamiliar with the term "proxy server", that's alright! If you think about how a React app would typically interface with an API, you'd have a `GET` call to the API server in order to get the data you want from the API. However, for APIs that require an API key of "client\_secret", we have to include an API key along with the `GET` request in order to get the data we want. This is a perfectly understandable method for securing and limiting an API, but it introduces the problem pointed out above: We can't simply bundle the API key in our client-side code. As such, we need a way to keep the API key out of reach of our users but still make data accessible. To do so, we can utilize another server (that we make and host ourselves) that knows the API key and uses it to make the API call _for_ us. Here's what an API call would look like without a proxy server:
![API request](./api_request.svg)
@@ -15,21 +15,21 @@ Since I transitioned from working all day on my personal MacBook Pro to receivin
While I loved the feeling of knowing that I could open and run anything on the MacBook Pro — aka a conventional laptop — the idea of moving solely to the efficient machine that is the iPad Pro was appealing for various reasons. Yet the question remained: how would I continue the work on side-projects, whether they be software or hardware? There is a lot of talk these days about how [LumaFusion](https://luma-touch.com/) is real competition to Adobe Premiere, or that [Affinity Photo](https://affinity.serif.com/en-gb/photo/ipad/) has nothing to fear from desktop Photoshop. While I do spend some time with such creative apps, how am I supposed to maintain [my personal webpage](https://pierrejacquier.com), write code for my Raspberry Pi, or create CAD models for 3D printing?
The answer is mostly through remote access. Fear not, dear reader, well try to rely on tools that are native or at least *feel* native on the iPad Pro, not just cheap Teamviewer-ing. What started as just a 9 to 5 setup challenge, not the other way around, is now much more than that.
The answer is mostly through remote access. Fear not, dear reader, we'll try to rely on tools that are native or at least _feel_ native on the iPad Pro, not just cheap TeamViewer-ing. What started as just a 9 to 5 setup challenge, not the other way around, is now much more than that.
*Note 1: This is merely a shoutout to great products Im using daily and isnt sponsored. The links are not affiliated either. Ill try to provide different options as well as keep some focus on open-source software.*
_Note 1: This is merely a shoutout to great products I'm using daily and isn't sponsored. The links are not affiliated either. I'll try to provide different options as well as keep some focus on open-source software._
*Note 2: The new iPad Air now features most of the laptop-like abilities of its Pro brother; therefore, Ill only use the term “iPad” in the following. But bear in mind: the cheapest 2020 8th-gen iPad still has the old form-factor and a Lightning port, making it incompatible with some of the following.*
_Note 2: The new iPad Air now features most of the laptop-like abilities of its Pro brother; therefore, I'll only use the term "iPad" in the following. But bear in mind: the cheapest 2020 8th-gen iPad still has the old form-factor and a Lightning port, making it incompatible with some of the following._
![Photo by [Ernest Ojeh](https://unsplash.com/@namzo) on [Unsplash](https://unsplash.com/s/photos/magic-keyboard?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)](cover.jpeg)
![Photo by Ernest Ojeh on Unsplash](cover.jpeg)
# External Monitor
I was lucky enough to have an [LG Ultrafine 4K](https://www.apple.com/shop/product/HMUA2VC/A/lg-ultrafine-4k-display) display in my possession for use with the MacBook Pro. These fancy displays are designed hand-in-hand with Apple are compatible with both Thunderbolt devices like the MacBook and with standard USB-C devices like the iPad. However, you cant use the same cable, so if you are planning to buy this one, make sure youre using the one with an iPad label on it before returning it out of frustration!
I was lucky enough to have an [LG Ultrafine 4K](https://www.apple.com/shop/product/HMUA2VC/A/lg-ultrafine-4k-display) display in my possession for use with the MacBook Pro. These fancy displays are designed hand-in-hand with Apple and are compatible with both Thunderbolt devices like the MacBook and with standard USB-C devices like the iPad. However, you can't use the same cable, so if you are planning to buy this one, make sure you're using the one with an iPad label on it before returning it out of frustration!
I believe it's common knowledge that you can't just work out of a laptop form-factor all day without destroying your neck. Here, the cheapest option would be to go with one of the pretty cool arm mounts specifically designed to put the iPad right in front of your eyes or just a pile of books. This allows the expensive pixels to get the amount of attention they deserve while also enabling instant video calling.
While an external monitor is really comfortable, one thing to note is the lack of *full* external monitor support with iPadOS at the time of writing (version 14.4). Connecting a USB-C display like the Ultrafine essentially triggers an AirPlay mirroring of the iPads screen and is therefore really not as satisfying as a standard laptop+display setup with an extended desktop for greater multitasking. Depending on its size and resolution, youll most likely end up with black bars around the mirrored video flux. It's annoying even though you do get used to it. But there are ways around it.
While an external monitor is really comfortable, one thing to note is the lack of _full_ external monitor support with iPadOS at the time of writing (version 14.4). Connecting a USB-C display like the Ultrafine essentially triggers an AirPlay mirroring of the iPad's screen and is therefore really not as satisfying as a standard laptop+display setup with an extended desktop for greater multitasking. Depending on its size and resolution, you'll most likely end up with black bars around the mirrored video flux. It's annoying even though you do get used to it. But there are ways around it.
The workaround is indeed in the ability of iOS apps to specify how an external monitor should be used. For instance, [Netflix](http://netflix.com) uses the iPad's display for media controls while broadcasting the content onto the monitor. LumaFusion has a mode for the video editor to stay on the iPad's screen while live previewing on the monitor in full-screen.
@@ -39,23 +39,23 @@ And in a clever workaround, a popular app called [Shiftscreen](http://shiftscree
I have to say, it's an excellent option for specific tasks, but after a while I just learned to love the mirrored interface. Now I'm rarely spending my time in apps that provide full monitor support. App switching via Cmd+Tab or the gestures is hugely satisfying, plus I don't think it has brought my productivity down at all. In fact, it might have improved my focus on the task at hand.
# External Keyboard and Mouse/Trackpad
# External Keyboard and Mouse/Trackpad
Speaking of hitting keys and performing gestures. While the iPad itself has an incredibly mobile form-factor, we owe ourselves a decent desk setup. After all, we are turning it into our personal workstation.
On the high-end of the spectrum lies the incredible [Magic Keyboard for iPad](https://www.apple.com/shop/product/MXQT2LL/A/magic-keyboard-for-ipad-air-4th-generation-and-ipad-pro-11-inch-2nd-generation-us-english). It's heavy. It's stupidly expensive. But it's my best purchase of the year. It effectively turns the iPad into a laptop with its form-factor, Trackpad, and additional charging port. On top of that, the keys have nothing to envy from a real Magic Keyboard. It's so good it made it to my main desk. Thankfully though, Logitech came up with a much more affordable option, the [Folio Touch](https://www.logitech.com/en-ca/products/ipad-keyboards/folio-touch.html) for 11" iPads.
Now even cheaper options are possible, such as just getting a good regular Bluetooth keyboard to complete a stand mount like the mechanical [Keychron K2](https://www.keychron.com/products/keychron-k2-wireless-mechanical-keyboard), as well as a mouse — I cant *not *recommend the [MX Master series](https://www.logitech.com/en-ca/products/mice/mx-master-3.910-005620.html), but the Pebble, for something on the pocketable side, is excellent and very affordable in the [K380 combo](https://www.logitech.com/en-ca/products/combos/k380-m350-keyboard-mouse-combo.html). The external [Magic Trackpad](https://www.apple.com/shop/product/MRMF2LL/A/magic-trackpad-2-space-gray) 2 is incredible to use and works well with iPadOS but is, granted, on the expensive end of the spectrum of pointing devices.
Now even cheaper options are possible, such as just getting a good regular Bluetooth keyboard to complete a stand mount like the mechanical [Keychron K2](https://www.keychron.com/products/keychron-k2-wireless-mechanical-keyboard), as well as a mouse — I can't _not_ recommend the [MX Master series](https://www.logitech.com/en-ca/products/mice/mx-master-3.910-005620.html), but the Pebble, for something on the pocketable side, is excellent and very affordable in the [K380 combo](https://www.logitech.com/en-ca/products/combos/k380-m350-keyboard-mouse-combo.html). The external [Magic Trackpad 2](https://www.apple.com/shop/product/MRMF2LL/A/magic-trackpad-2-space-gray) is incredible to use and works well with iPadOS but is, granted, on the expensive end of the spectrum of pointing devices.
![A quite simple desk setup. Doing some LumaFusion edits!](setup.jpeg)
*Update: After quite some time with the Magic Keyboard for iPad on my desk, I revised the setup, with the tablet laying flat under the monitor to quickly take handwritten notes or drawings, with the Keychron K2 + Magic Trackpad 2 as input devices. Well see if it sticks!*
_Update: After quite some time with the Magic Keyboard for iPad on my desk, I revised the setup, with the tablet lying flat under the monitor to quickly take handwritten notes or drawings, with the Keychron K2 + Magic Trackpad 2 as input devices. We'll see if it sticks!_
# Software Dev: Remote Server and Native Apps
Lets get into some real engineering tools. As I'm sure you already know, theres no walled garden like iOS/iPadOS. Apps are fully contained, *Files* is some kind of file explorer yet remains very limited, and firing up a local command line is pure fantasy.
Let's get into some real engineering tools. As I'm sure you already know, there's no walled garden like iOS/iPadOS. Apps are fully contained, _Files_ is some kind of file explorer yet remains very limited, and firing up a local command line is pure fantasy.
*How the heck do we write and run our beloved code, Pierre? 💁*
_How the heck do we write and run our beloved code, Pierre? 💁_
As is so common these days, the trick is in the cloud. While it comes with its drawbacks such as spotty connections, offloading the work to a remote, safe, always-accessible machine has some nice things going for it. A hosted instance can be fired up in a matter of minutes these days. To stay somewhat minimalist, I chose to keep an old laptop plugged in inside a closet, but there are many options out there.
@@ -67,7 +67,7 @@ For a proper IDE, there are ways to run the undisputed leader of the last years
The big downside here is the [lack of scrolling support](https://github.com/microsoft/vscode/issues/106232). Lucky for me, I'm navigating keyboard-only with the help of the [Vim extension](https://marketplace.visualstudio.com/items?itemName=vscodevim.vim) ([Colemak version](https://marketplace.visualstudio.com/items?itemName=ollyhayes.colmak-vim)), but it's a real barrier to entry. It's related to a bug in the web engine. Nonetheless, there are a few hosted solutions with the same problem, such as [Stackblitz](https://stackblitz.com/) and [GitHub Codespaces](https://github.com/features/codespaces) — which both get honourable mentions yet aren't open source — so I'm confident that issue might get solved soon.
*Update: the scrolling issue is fixed as of iPadOS 14.5 Beta 1, which requires enrolling [here](https://beta.apple.com/sp/betaprogram/). This means more people will be able to enjoy a proper coding experience on the iPad, and is really good news.*
_Update: the scrolling issue is fixed as of iPadOS 14.5 Beta 1, which requires enrolling [here](https://beta.apple.com/sp/betaprogram/). This means more people will be able to enjoy a proper coding experience on the iPad, and is really good news._
The example below shows VS Code running in Safari for iPad. It would be a great use case for Shiftscreen, which could have both VS Code and whatever docs on the side for a true multitasking experience on an external display.
@@ -85,17 +85,17 @@ A great way to make sure some coding work gets done when we get to travel again
# Hardware Dev: Remote Server and Remote Desktop
Here comes the harder part: hardware (*easy)*.
Here comes the harder part: hardware (_easy_).
To put it bluntly, there's no SSHing into Computer Aided Design software. There's no way of vim-editing a SolidWorks file. [OpenSCAD](https://www.openscad.org/) might be an exception to these statements, yet it's most definitely niche.
Two options I have explored:
* use some kind of Remote Desktop software to access our closet computer/server and run the software remotely;
- use some kind of Remote Desktop software to access our closet computer/server and run the software remotely;
* choose a web-based CAD software.
- choose a web-based CAD software.
The first option could apply to a broad range of software beyond just CAD. The real bottleneck here is the quality of the iPad app. Ive gone through many of the free options *à la* TeamViewer or Chrome Remote Desktop. But none provided mandatory things for my use case like full mouse buttons support (including click-and-drag with the wheel, for instance). Jumping into more premium territory, Splashtop remains free for personal and same-network use and has a great iPad app, but has a monthly fee for real remote access. The one that ended up meeting all my needs was the $19.99 [Jump Destkop](https://jumpdesktop.com/). With its outdated app icon and steep price tag, this was clearly not my first choice. But their Fluid Remote Desktop protocol for Windows and macOS has just been a very smooth experience. It works wonders with external mice and Trackpads, and it has full external monitor support on the iPad with automatic resolution matching. On top of that, it supports VNC (even over SSH tunnels) to connect to Linux hosts such as a (local) Raspberry Pi or other instances I use for work on a daily basis.
The first option could apply to a broad range of software beyond just CAD. The real bottleneck here is the quality of the iPad app. I've gone through many of the free options _à la_ TeamViewer or Chrome Remote Desktop. But none provided mandatory things for my use case like full mouse button support (including click-and-drag with the wheel, for instance). Jumping into more premium territory, Splashtop remains free for personal and same-network use and has a great iPad app, but has a monthly fee for real remote access. The one that ended up meeting all my needs was the $19.99 [Jump Desktop](https://jumpdesktop.com/). With its outdated app icon and steep price tag, this was clearly not my first choice. But their Fluid Remote Desktop protocol for Windows and macOS has just been a very smooth experience. It works wonders with external mice and Trackpads, and it has full external monitor support on the iPad with automatic resolution matching. On top of that, it supports VNC (even over SSH tunnels) to connect to Linux hosts such as a (local) Raspberry Pi or other instances I use for work on a daily basis.
And it has stood the test of time: the whole design for my [GeeXY 3D printer project](https://www.notion.so/Geeetech-CoreXY-Conversion-GeeXY-b46d9f7b4b0643faa60bd2f20399c0b6) was created through Jump Desktop on the iPad. No regrets so far!
@@ -174,7 +174,7 @@ Once this is done, you can send a test message to a public channel and see it pr
# App Interactivity {#interactive-message-package}
While listening to events alone can be very useful in some circumstances, oftentimes having a way to interact with your application can be very helpful. As a result, the Slack SDK also includes the `@slack/interactive-messages` package to help you provide interactions with the user more directly. Using this package, you can respond to the user's input. For example, let's say we wanted to replicate the [PlusPlus](https://go.pluspl.us/) Slack bot as a way to track a user's score.
While listening to events alone can be very useful in some circumstances, oftentimes having a way to interact with your application can be very helpful. As a result, the Slack SDK also includes the `@slack/interactive-messages` package to help you provide interactions with the user more directly. Using this package, you can respond to the user's input. For example, let's say we wanted to replicate the [PlusPlus](https://go.pluspl.us/) Slack bot as a way to track a user's score.
We want to have the following functionality for an MVP:
@@ -187,7 +187,7 @@ Each of these messages will prompt the bot to respond with a message in the same
## Setup {#interactive-bot-setup}
First and foremost, something you'll need to do is add a new OAuth permission to enable the functionality for the bot to write to the channel. Go into the dashboard and go to the "OAuth & Permissions" tab. The second section of the screen should be called "Scopes", where you can add the `chat:write:bot` permission.
![The permissions searching for "chat" which shows that "chat:write:bot" permission we need to add](./chat_write_bot_oauth.png)
![The permissions searching for "chat" which shows that "chat:write:bot" permission we need to add](./chat_write_bot_oauth.png)
After enabling the new OAuth permission, you'll need to reinstall your app. This is because you're changing the permissions of your app, and you need to accept the new permissions when you reinstall it. If you scroll to the top of the same OAuth page, you should see a `Reinstall App` button that will help you do this easily.
@@ -281,7 +281,7 @@ console.log(state); // {word1: 2, word2: -1}
Following this pattern, let's go through and add a few lines of code to the last example to fulfill the expected behavior:
```javascript
````javascript
const { tablize } = require('batteries-not-included/utils');
/**
@@ -342,7 +342,7 @@ slackEvents.on('message', async event => {
console.log(`Successfully sent message ${result.ts} in conversation ${event.channel}`);
}
});
```
````
As you can see, we're able to add in the functionality for the score-keeping relatively easily with little additional code. Slightly cheating, but to pretty-print the score table, we're using a `tablize` package that's part of [the "batteries not included" library we've built](https://github.com/unicorn-utterances/batteries-not-included) in order to provide an ASCII table for our output.
@@ -352,7 +352,7 @@ Even though the bot works well so far, it's not ideal to keep a score in memory.
> This section will cover the setup of MongoDB Atlas, if you'd like to [skip ahead to the code section where we switch our in-memory store with a MongoDB database, you can click here](#mongodb-code)
To remain consistent in keeping our app setup as trivial as possible, we'll be using MongoDB Atlas. Atlas enables us to have a serverless MongoDB service at our disposal. In order to use Atlas, you'll need to [sign up for an account](https://cloud.mongodb.com/user#/atlas/register/accountProfile).
To remain consistent in keeping our app setup as trivial as possible, we'll be using MongoDB Atlas. Atlas enables us to have a serverless MongoDB service at our disposal. In order to use Atlas, you'll need to [sign up for an account](https://cloud.mongodb.com/user#/atlas/register/accountProfile).
Once done, you'll need to "Build a new cluster" in order to create a database cluster for your Slack app.
@@ -398,7 +398,7 @@ Now that we understand the URI we need to pass to the Node driver to connect to
## The Code {#mongodb-code}
```javascript
````javascript
const { createEventAdapter } = require('@slack/events-api');
const { WebClient } = require('@slack/web-api');
const { MongoClient } = require('mongodb');
@@ -499,7 +499,7 @@ dbClient.connect(err => {
console.log(`server listening on port ${port}`);
});
});
```
````
If you do a diff against the previous code, you'll see that we were able to add the database using only 4 or 5 new operations. These operations are to:
@@ -576,7 +576,7 @@ We'll want to update it so that the `start` command uses the signing secret from
"start": "npm run verify",
```
We need to allow Heroku to dictate the port to host our verification command as well, to get past their firewall they automatically route to the app's subdomain; hence the `--port` attribute.
We need to allow Heroku to dictate the port that hosts our verification command as well. To get past their firewall, Heroku automatically routes traffic to the app's subdomain; hence the `--port` attribute.
After making this change, we'll run:
@@ -585,7 +585,7 @@ After making this change, we'll run:
And watch as our app gets deployed:
![The app being deployed during the `git push`](./heroku_initial_deploy.png)
![The app being deployed during the git push](./heroku_initial_deploy.png)
After this, we can go back to the Slack app dashboard and change the Event Subscription URL.
@@ -12,7 +12,7 @@
If you've ever used something like [Gatsby](https://www.gatsbyjs.org/) or [NuxtJS](https://nuxtjs.org/), you may already be familiar with Static Site Generation (SSG). If not, here's a quick rundown: You're able to export a React application to simple HTML and CSS during a build-step. This export means that (in some cases), you can disable JavaScript and still navigate your website as if you had it enabled. It also often means much faster time-to-interactive, as you no longer have to run your JavaScript to render your HTML and CSS.
For a long time, React and Vue have had all of the SSG fun... Until now.
For a long time, React and Vue have had all of the SSG fun... Until now.
Recently, a group of extremely knowledgeable developers has created [Scully, a static site generator for Angular projects](https://github.com/scullyio/scully). If you prefer Angular for your stack, you too can join in the fun! You can even trivially migrate existing Angular projects to use Scully!
@@ -62,7 +62,7 @@ We'll need to run the `npm run scully` command later on, but for now, let's focu
# Adding Markdown Support
While Scully [_does_ have a generator to add in blog support](https://github.com/scullyio/scully/blob/master/docs/blog.md), we're going to add it in manually. Not only will this force us to understand our actions a bit more to familiarize ourselves with how Scully works, but it means this article is not reliant on the whims of a changing generator.
> This isn't a stab at Scully by any means; if anything, I mean it as a compliment. The team consistently improves Scully, and I had some suggestions for the blog generator at the time of writing. While I'm unsure of these suggestions making it into future versions, it'd sure stink to throw away an article if they were implemented.
@@ -100,7 +100,7 @@ const routes: Routes = [
]
```
This imports the `blog.module` file to use the further children routes defined there. If we now start serving the site and go to `localhost:4200/blog`, we should see the message "blog works!" at the bottom of the page.
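For reference, a parent route that lazy-loads a child module generally looks something like the following sketch (the path and module names here are assumptions based on the article's naming):

```typescript
import { NgModule } from "@angular/core";
import { RouterModule, Routes } from "@angular/router";

// Sketch only: "blog" and BlogModule are assumed from the article's context.
const routes: Routes = [
	{
		path: "blog",
		// Lazily load the blog module, which defines its own child routes
		loadChildren: () => import("./blog/blog.module").then((m) => m.BlogModule),
	},
];

@NgModule({
	imports: [RouterModule.forRoot(routes)],
	exports: [RouterModule],
})
export class AppRoutingModule {}
```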
### Routing Fixes {#router-outlet}
@@ -119,6 +119,7 @@ ng g component homepage -m App
```
This will create a new `homepage` component under `src/app/homepage`. It's only got a basic HTML file with `homepage works!` present, but it'll suffice for now. Now we just need to update the `app-routing.module.ts` file to tell it that we want this to be our new home route:
```typescript
import { HomepageComponent } from "./homepage/homepage.component";
@@ -156,10 +157,9 @@ const routes: Routes = [
That's it! Now, if you go to `localhost:4200/blog`, you should see the `blog works!` message, and on the `/blog/asdf` route, you should see `blog-post works!`. With this, we should be able to move on to the next steps!
## The Markdown Files {#frontmatter}
To start, let's create a new folder at the root of your project called `blog`. It's in this root folder that we'll add our markdown files that our blog posts will live in. Let's create a new markdown file under `/blog/test-post.md`.
```markdown
---
@@ -175,7 +175,7 @@ How are you doing?
> Keep in mind that the file name will be the URL for the blog post later on. In this case, the URL for this post will be `/blog/test-post`.
-The top of the file `---` block is called the "frontmatter"_. You're able to put metadata in this block with a key/value pair. We're then able to use that metadata in the Angular code to generate specific UI based on this information in the markdown file. Knowing that we can store arbitrary metadata in this frontmatter allows us to expand the current frontmatter with some useful information:
+The top of the file `---` block is called the "frontmatter". You're able to put metadata in this block with a key/value pair. We're then able to use that metadata in the Angular code to generate specific UI based on this information in the markdown file. Knowing that we can store arbitrary metadata in this frontmatter allows us to expand the current frontmatter with some useful information:
```markdown
---
@@ -296,7 +296,7 @@ Now, we can access the server at the bottom of the build output:
The server is available on "http://localhost:1668/"
```
-Finally, if we go to [http://localhost:1668/blog/test-post](http://localhost:1668/blog/test-post), we can see the post contents alongside our header and footer.
+Finally, if we go to <http://localhost:1668/blog/test-post>, we can see the post contents alongside our header and footer.
![A preview of the post as seen on-screen](./hello_world_blog_post.png)
@@ -412,7 +412,7 @@ export class BlogComponent {
</ul>
```
This code should give us a straight list of blog posts and turn them into links for us to access our posts with!
![A preview of the post list as seen on-screen](./post_list_preview.png)
@@ -466,10 +466,6 @@ export class BlogPostComponent {
![A preview of the post list as seen on-screen](./post_page_preview.png)
# Conclusion
While this blog site is far from ready for release, it's functional. It's missing some core SEO functionality as well as general aesthetics, but both could be easily remedied. Using a package like [`ngx-meta`](https://www.npmjs.com/package/@ngx-meta/core) should make short work of the missing SEO meta tags, whereas adding some CSS should go a long way toward improving the visuals of the site.


@@ -35,7 +35,7 @@ As you can see we're passing the `onChange` and value props to `SimpleForm`. Thi
While `SimpleForm` is displaying the data to the user, the logic itself stays within `App`. `SimpleForm` contains no state or application logic; we call components like these "dumb" components. "Dumb" components are utilized for styling and composability, but not for app logic or state.
-This is what a set of proper React components *should* look like. This pattern of raising state out of the component itself and leaving "dumb" component comes from the guidance of the React team itself. This pattern is called[ "lifting state up"](https://reactjs.org/docs/lifting-state-up.html).
+This is what a set of proper React components _should_ look like. This pattern of raising state out of the component itself and leaving "dumb" components comes from the guidance of the React team itself. This pattern is called ["lifting state up"](https://reactjs.org/docs/lifting-state-up.html).
Now that we have a better understanding of the patterns to follow let's take a look at the wrong way to do things.


@@ -24,11 +24,11 @@ I use the [Visual Studio Code](https://code.visualstudio.com) editor with the [l
# Conditional logic with the "/execute" command
-In the previous post, we ended on an interesting question &mdash; how do we write a command that only executes if the player is standing on a particular block?
+In the previous post, we ended on an interesting question — how do we write a command that only executes if the player is standing on a particular block?
Well, Minecraft actually has a specific command for checking preconditions and other attributes of a command before running it - the [`/execute`](https://minecraft.fandom.com/wiki/Commands/execute) command!
-This command can be used with an indefinite number of arguments, which might make it confusing to understand by reading its documentation &mdash; but this effectively means that you can add any number of preconditions to this command.
+This command can be used with an indefinite number of arguments, which might make it confusing to understand by reading its documentation — but this effectively means that you can add any number of preconditions to this command.
For example:
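Judging from the explanation that follows (an `if block ~ ~ ~ air` check combined with a `run say` subcommand), the example elided between these hunks is presumably a command along these lines:

```shell
execute if block ~ ~ ~ air run say "You are standing in air!"
```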
@@ -40,11 +40,11 @@ This uses two subcommands of the `execute` command: `if block ~ ~ ~ air` checks
Try running this command in Minecraft! As long as you're standing in an air block, you should see its message appear in the chat. If you stand underwater or in any block that isn't air (such as bushes/foliage), it should stop executing.
-| Standing in air | Standing in water |
-|-----------------|-------------------|
+| Standing in air | Standing in water |
+| ----------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| ![A Minecraft player standing on land, in a highlighted block of air](./if_block_air.png) | ![A Minecraft player standing in a pond, in a highlighted block of water](./if_block_water.png) |
-If we want to negate this condition, we can replace the `if` subcommand with `unless` &mdash; this will print its message as long as the player *isn't* standing in air.
+If we want to negate this condition, we can replace the `if` subcommand with `unless` — this will print its message as long as the player _isn't_ standing in air.
```shell
execute unless block ~ ~ ~ air run say "You aren't standing in air!"
@@ -54,24 +54,24 @@ You could also change the block identifier to look for a different type of block
# Position syntax
-So what do the tildes (`~ ~ ~`) mean in the previous command? This is referring to *the current position* (in the X, Y, and Z axes) of the player that is executing the command. There are a few different ways to write positions like these in Minecraft, which I'll explain here:
+So what do the tildes (`~ ~ ~`) mean in the previous command? This is referring to _the current position_ (in the X, Y, and Z axes) of the player that is executing the command. There are a few different ways to write positions like these in Minecraft, which I'll explain here:
- ###### Absolute coordinates
  Coordinates can be written as a fixed position in the world - say, `32 60 -94` (these coordinates can be obtained by opening the [F3 debug screen](https://minecraft.fandom.com/wiki/Debug_screen) and finding the "Targeted block" position).
- ###### Current coordinates (tilde notation)
-  Using the tilde symbols (`~ ~ ~`) will reference *the current position* that the command is executed at. This can also be mixed with static values, such as `32 ~ -94`, which will reference the block at (x: 32, z: -94) using the player's current y-axis.
+  Using the tilde symbols (`~ ~ ~`) will reference _the current position_ that the command is executed at. This can also be mixed with static values, such as `32 ~ -94`, which will reference the block at (x: 32, z: -94) using the player's current y-axis.
- ###### Relative coordinates
-  These positions can also be *offset* by a certain number of blocks in any direction by adding a number after the tilde. For example, `~2 ~-4 ~3` will move 2 blocks horizontally from the player's x-axis, 4 blocks down in the y-axis, and 3 blocks horizontally in the z-axis.
+  These positions can also be _offset_ by a certain number of blocks in any direction by adding a number after the tilde. For example, `~2 ~-4 ~3` will move 2 blocks horizontally from the player's x-axis, 4 blocks down in the y-axis, and 3 blocks horizontally in the z-axis.
- ###### Directional coordinates (caret notation)
-  Similar to relative coordinates, directional coordinates (`^ ^ ^`) will start from wherever the command is executed from. However, any offsets will be applied relative to *wherever the current player or entity is looking.* For example, `^2 ^-4 ^3` will move 2 blocks to the left of the player, 4 blocks downward, and 3 blocks in front of the direction the player faces.
+  Similar to relative coordinates, directional coordinates (`^ ^ ^`) will start from wherever the command is executed from. However, any offsets will be applied relative to _wherever the current player or entity is looking._ For example, `^2 ^-4 ^3` will move 2 blocks to the left of the player, 4 blocks downward, and 3 blocks in front of the direction the player faces.
To experiment with the position syntax and see where certain positions end up in the world, we can add coordinates to the `/summon` command to spawn entities at a specific location. `/summon pig ~ ~ ~` would use the current position of the player (its default behavior), while `/summon pig ~ ~-4 ~` would probably spawn the pig underground. If you spawn too many pigs, you can use `/kill @e[type=pig]` to remove them.
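Collected as chat commands, the experiments described above look like this (in `.mcfunction` files, comments go on their own lines starting with `#`):

```shell
# spawns a pig at the player's current position (the default behavior)
summon pig ~ ~ ~
# spawns a pig 4 blocks below the player, probably underground
summon pig ~ ~-4 ~
# clean up any spawned pigs
kill @e[type=pig]
```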
-An important note when using these positions: for players (and most other entities), any positions will actually start *at the player's feet.* If we want to start at the player's head, we can use the `anchored eyes` subcommand to correct this &mdash; using directional coordinates, `/execute anchored eyes run summon pig ^ ^ ^4` should summon a pig 4 blocks forward in the exact center of wherever the player is looking.
+An important note when using these positions: for players (and most other entities), any positions will actually start _at the player's feet._ If we want to start at the player's head, we can use the `anchored eyes` subcommand to correct this — using directional coordinates, `/execute anchored eyes run summon pig ^ ^ ^4` should summon a pig 4 blocks forward in the exact center of wherever the player is looking.
## Positions in an "/execute" subcommand
-> In the following sections, it might help to keep in mind that every command has a specific *context* that it executes in. This context consists of a **position in the world** and a **selected entity** that runs the command. When you type a command in Minecraft's text chat, the **position** is your current location in the world, and the **selected entity** is your player.
+> In the following sections, it might help to keep in mind that every command has a specific _context_ that it executes in. This context consists of a **position in the world** and a **selected entity** that runs the command. When you type a command in Minecraft's text chat, the **position** is your current location in the world, and the **selected entity** is your player.
>
> This context affects what blocks, locations, and entities certain commands and syntax will be referring to. The `/execute` command can change this context for any commands that it runs, which is what you'll see in the following example...
@@ -88,7 +88,7 @@ These two commands do the same thing! When we use `positioned ^ ^ ^4`, we're mov
If you recall the function we created in the previous chapter, we ended up making a single command (`/function fennifith:animals/spawn`) that spawns a bunch of animals at once.
-If we use `/execute` to set the position of this function before it runs, this will also affect the location of *every command in that function.*
+If we use `/execute` to set the position of this function before it runs, this will also affect the location of _every command in that function._
```shell
execute anchored eyes positioned ^ ^ ^4 run function fennifith:animals/spawn
@@ -120,15 +120,17 @@ execute align xz run summon pig ~0.5 ~ ~0.5
# Entity selectors
-So we've figured out how to use the position of the player, but how can we refer to other entities in the world? If you've paid attention to the `/kill @e[type=pig]` command from earlier, this is actually using an *entity selector* to reference all of the pigs in the world. We're using the `@e` variable (all entities in the world), and filtering it by `type=pig` to only select the entities that are pigs.
+So we've figured out how to use the position of the player, but how can we refer to other entities in the world? If you've paid attention to the `/kill @e[type=pig]` command from earlier, this is actually using an _entity selector_ to reference all of the pigs in the world. We're using the `@e` variable (all entities in the world), and filtering it by `type=pig` to only select the entities that are pigs.
Here's a list of some other selector variables we can use:
- `@p` targets only the **nearest player** to the command's execution
- `@a` targets **every player** in the world (useful for multiplayer servers / realms)
- `@e` targets **every player, animal, and entity** in the world
- `@s` targets only **the entity that executed the command**
And here are some of the ways that we can apply the filter attributes:
- `[type=player]` selects the entity type (`pig`, `cow`, `item_frame`, etc.)
- `[gamemode=survival]` can select players in a specific game mode (`creative`, `spectator`, etc.)
- `[limit=1]` will restrict the total number of entities that can be picked by the selector
@@ -136,28 +138,28 @@ And here are some of the ways that we can apply the filter attributes:
Using these selectors, we can use `@e[type=pig,sort=nearest,limit=3]` to reference the three nearest pigs to the player that executes the command.
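As a single command, that example reads:

```shell
# kill the three pigs closest to whoever runs the command
kill @e[type=pig,sort=nearest,limit=3]
```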
-What if we use `/kill @a[type=pig]`? This won't select anything &mdash; because `@a` only selects *player* entities. Similarly, `@s[type=pig]` won't select anything either, because `@s` refers to the entity that runs the command &mdash; which is you, an entity of `type=player`.
+What if we use `/kill @a[type=pig]`? This won't select anything — because `@a` only selects _player_ entities. Similarly, `@s[type=pig]` won't select anything either, because `@s` refers to the entity that runs the command — which is you, an entity of `type=player`.
## Entities in an "/execute" subcommand
-Just like how `/execute positioned <x> <y> <z>` can be used to set the position of the command it runs, the `/execute as <entity>` subcommand can be used to set the entity that runs the command. This will effectively *change the entity that `@s` refers to* in anything it executes. Let's use this with our `/kill @e[type=pig]` command!
+Just like how `/execute positioned <x> <y> <z>` can be used to set the position of the command it runs, the `/execute as <entity>` subcommand can be used to set the entity that runs the command. This will effectively _change the entity that `@s` refers to_ in anything it executes. Let's use this with our `/kill @e[type=pig]` command!
```shell
kill @e[type=pig]
execute as @e[type=pig] run kill @s
```
-An important note about how this feature works is that, after the `as @a[type=pig]` subcommand, it will actually run any following subcommands *once for every entity it selects.* This means that it is individually running `kill @s` once for every entity of `type=pig`.
+An important note about how this feature works is that, after the `as @a[type=pig]` subcommand, it will actually run any following subcommands _once for every entity it selects._ This means that it is individually running `kill @s` once for every entity of `type=pig`.
## Entity positions in an "/execute" subcommand
-So, we *could* use this with our `if block ~ ~ ~ air` command from earlier to select only the pig entities that are standing in a block of air... but that might not work quite as we expect.
+So, we _could_ use this with our `if block ~ ~ ~ air` command from earlier to select only the pig entities that are standing in a block of air... but that might not work quite as we expect.
```shell
execute as @e[type=pig] if block ~ ~ ~ air run kill @s
```
-You'll notice that this is actually affecting *all* pigs in the world... unless you stand underwater or in a block of foliage, in which case it won't do anything. This is because, while the `as <entity>` command changes the executing entity, it doesn't affect the position of the command's execution &mdash; it's still running at your location.
+You'll notice that this is actually affecting _all_ pigs in the world... unless you stand underwater or in a block of foliage, in which case it won't do anything. This is because, while the `as <entity>` command changes the executing entity, it doesn't affect the position of the command's execution — it's still running at your location.
While we can use relative positions with the `positioned ~ ~ ~` subcommand, you'll notice that there isn't any way to refer to a selected entity in this syntax... that's why we'll need to use the `at <entity>` subcommand instead!
@@ -167,7 +169,7 @@ execute as @e[type=pig] at @s if block ~ ~ ~ air run kill @s
This command first selects all `@e[type=pig]` entities, then, for each pig, changes the position of the command to the position of `@s` (the selected entity). As a result, the position at `~ ~ ~` now refers to the position of `@s`.
-This can also be used with functions, same as before! However, I'm going to add a `limit=5` onto our entity selector here &mdash; otherwise it might spawn an increasing number of entities each time it runs, which could cause lag in your game if executed repeatedly.
+This can also be used with functions, same as before! However, I'm going to add a `limit=5` onto our entity selector here — otherwise it might spawn an increasing number of entities each time it runs, which could cause lag in your game if executed repeatedly.
```shell
execute as @e[type=pig,limit=5] at @s run function fennifith:animals/spawn
@@ -183,55 +185,56 @@ Here are a few examples of this in use:
With the `[distance=<range>]` attribute, entities will be selected if they are within a specific radius of a position. However, for this to work as expected, the value needs to be a **range**, not a number. For example, `[distance=6]` will only select entities at a distance of exactly 6 blocks away.
-Ranges can be specified by placing two dots (`..`) as the range between two numbers. If either side is left out, the range is interpreted as *open*, and will accept any number in that direction. By itself, `..` is a range that includes all numbers, `5..` will accept any number above 5, `..5` accepts any number below 5, and `1..5` accepts any number between 1 and 5.
+Ranges can be specified by placing two dots (`..`) as the range between two numbers. If either side is left out, the range is interpreted as _open_, and will accept any number in that direction. By itself, `..` is a range that includes all numbers, `5..` will accept any number above 5, `..5` accepts any number below 5, and `1..5` accepts any number between 1 and 5.
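A couple of concrete selectors using these ranges (a harmless `say` is used here just to show the syntax):

```shell
# greet every player within 5 blocks of the command's position
execute as @a[distance=..5] run say hello
# greet every player more than 5 blocks away
execute as @a[distance=5..] run say hello from afar
```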
-| `@e[distance=..5]` | `@e[distance=5..]` | `@e[distance=2..5]` |
-|--------------------|--------------------|---------------------|
+| `@e[distance=..5]` | `@e[distance=5..]` | `@e[distance=2..5]` |
+| ------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
| ![A circle showing the selected area within a radius of 5 blocks](./select-radius-lt-5.svg) | ![A circle showing the selected area beyond a radius of 5 blocks](./select-radius-gt-5.svg) | ![A circle showing the selected area between a radius of 2 and 5 blocks](./select-radius-2-5.svg) |
## Area selection
-The `[x=]`, `[y=]`, and `[z=]` attributes will filter entities by their exact position. However, since entities can move to positions in-between blocks, their coordinates usually aren't in whole numbers &mdash; so it is unlikely that these filters by themselves will select any entities.
+The `[x=]`, `[y=]`, and `[z=]` attributes will filter entities by their exact position. However, since entities can move to positions in-between blocks, their coordinates usually aren't in whole numbers — so it is unlikely that these filters by themselves will select any entities.
However, these attributes can be paired with `[dx=]`, `[dy=]`, and `[dz=]` to select a range of values on the X, Y, and Z axes. For example, `[y=10,dy=20]` will filter any entity with a position between `Y=10` and `Y=30`.
Using all of these attributes together can create a box area to search for entities within. For example, `@e[x=1,y=2,z=3,dx=10,dy=20,dz=30]` is effectively creating a box that is 10 blocks wide, 20 blocks high, 30 blocks deep, starting at the position (1, 2, 3).
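For instance, that box selector could be used to clear out dropped items in the region (the `type=item` filter is my own addition for illustration):

```shell
# remove all dropped items inside the 10x20x30 box starting at (1, 2, 3)
kill @e[type=item,x=1,y=2,z=3,dx=10,dy=20,dz=30]
```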
-| `@e[x=5,z=1]` | `@e[x=5,dx=10]` | `@e[x=5,z=1,dx=10,dz=5]` |
-|---------------|-----------------|--------------------------|
+| `@e[x=5,z=1]` | `@e[x=5,dx=10]` | `@e[x=5,z=1,dx=10,dz=5]` |
+| ----------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------ |
| ![A point showing the selected position at 5, 1](./select-area-5-1.svg) | ![An area showing the selected range on the X axis from 5 to 15](./select-area-x-5-15.svg) | ![A box showing the selected area from 5, 1 to 15, 6](./select-area-5-1-to-15-6.svg) |
# Challenge: Using "/execute" in our tick.mcfunction
-In the previous post, we got our data pack to print a message on every game tick. Let's try to change that &mdash; see if you can write a command that will check *the block below the player* to see if it is `air`. If the block underneath the player is air, they are probably falling, so let's print "aaaaaaaaaaaaaaaaaaaa" in the text chat.
+In the previous post, we got our data pack to print a message on every game tick. Let's try to change that — see if you can write a command that will check _the block below the player_ to see if it is `air`. If the block underneath the player is air, they are probably falling, so let's print "aaaaaaaaaaaaaaaaaaaa" in the text chat.
<details>
<summary>Need a hint?</summary>
-There is some potential for confusion here, as the `tick` event doesn't actually run with any particular entity or position in the Minecraft world &mdash; by default, the location of `~ ~ ~` will be at (0, 0, 0), and `@s` will not refer to any entity.
+There is some potential for confusion here, as the `tick` event doesn't actually run with any particular entity or position in the Minecraft world — by default, the location of `~ ~ ~` will be at (0, 0, 0), and `@s` will not refer to any entity.
You'll need to use a different selector to find the player and get their position before using the `if block` condition.
</details>
<details>
<summary>Solution</summary>
This command should select the player, get their position, and execute `say aaaaaaaaaaaaa` for every tick when the player is falling down or jumping in the air.
```shell
# at each player position...
# | if the block below is air...
# | | print "aaaaa" in the chat!
execute at @a if block ~ ~-1 ~ air run say "aaaaaaaaaaaaaaaaaaaa!"
```
-There are a few other approaches that could be used here &mdash; if you used `as @a at @s`, you'll notice that `say` actually prints your username before its message. This is because you've changed the selected entity to you, the player; so you're sending the message as yourself.
+There are a few other approaches that could be used here — if you used `as @a at @s`, you'll notice that `say` actually prints your username before its message. This is because you've changed the selected entity to you, the player; so you're sending the message as yourself.
-If you try to flip the order of those two subcommands, `at @a as @s` won't actually select the right entity. You'll need to use `at @a as @p` to get the nearest player to the position of the selected player &mdash; which is a bit redundant when `as @a` could simply select the player entities to begin with.
+If you try to flip the order of those two subcommands, `at @a as @s` won't actually select the right entity. You'll need to use `at @a as @p` to get the nearest player to the position of the selected player — which is a bit redundant when `as @a` could simply select the player entities to begin with.
</details>
-**Note:** If you use the `as` and `at` subcommands together, be aware that both will run any consecutive subcommands *for every entity they select.* So `as @a at @a`, on a multiplayer server, will first select every player entity, then (for every player entity) will run at the position of every player entity. If `n = the number of players`, this will result in the command running `n*n` times in total.
+**Note:** If you use the `as` and `at` subcommands together, be aware that both will run any consecutive subcommands _for every entity they select._ So `as @a at @a`, on a multiplayer server, will first select every player entity, then (for every player entity) will run at the position of every player entity. If `n = the number of players`, this will result in the command running `n*n` times in total.
You can try this with `@e[type=pig]` to see how many times it prints:
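The command elided at this hunk boundary is presumably something along these lines; each pig says the message once per pig selected by `at`, so five pigs would print it twenty-five times:

```shell
execute as @e[type=pig] at @e[type=pig] run say oink
```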
@@ -246,4 +249,4 @@ So far, we've started using conditional logic and covered most of the syntax you
Between articles, feel free to experiment with [other commands](https://minecraft.fandom.com/wiki/Commands), such as `/setblock` or `/playsound`. Most of these won't be directly mentioned in the rest of this series, so it'll be useful to read through this list to figure out what each command can do.
-In the next post, we'll cover an entirely different feature of Minecraft: *player scoreboards!* These will allow us to keep count of different variables, detect certain in-game actions, and store a player-specific or global state in our data packs.
+In the next post, we'll cover an entirely different feature of Minecraft: _player scoreboards!_ These will allow us to keep count of different variables, detect certain in-game actions, and store a player-specific or global state in our data packs.


@@ -20,7 +20,7 @@ The data packs built in this series can be found in the [unicorn-utterances/mc-d
Minecraft's data pack system allows players to fundamentally modify existing behavior of the game by "replacing" or adding to its data files. Data packs typically use `.mcfunction` files to specify their functionality as a list of commands for the game to run, and `.json` files for writing advancements or loot tables.
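As a rough sketch (all names here are placeholders), a minimal data pack's file layout looks something like:

```shell
# my_pack/
# ├── pack.mcmeta                       (pack metadata, JSON)
# └── data/
#     └── my_namespace/
#         └── functions/
#             └── my_function.mcfunction
```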
-One thing to note: While data packs are simple to use and enable a huge amount of functionality, they do have a couple drawbacks. One is that, while data packs allow most game features to be *changed*, they do not allow players to *add new features* into the game (although some can convincingly create that illusion with a few tricks).
+One thing to note: While data packs are simple to use and enable a huge amount of functionality, they do have a couple drawbacks. One is that, while data packs allow most game features to be _changed_, they do not allow players to _add new features_ into the game (although some can convincingly create that illusion with a few tricks).
If you want to add new controls to the game, integrate with external services, or provide a complex user interface, a Minecraft modding framework such as [Fabric](https://fabricmc.net) or [Spigot](https://www.spigotmc.org/wiki/spigot/) might be better for you.
@@ -31,7 +31,7 @@ If you want to add new controls to the game, integrate with external services, o
- ###### Able to modify the user interface and settings menus
Some data packs have used innovative (and highly complex) workarounds to this [using modified item textures](https://www.youtube.com/watch?v=z4tvTrqhBZE), but in general, Minecraft's controls and user interface cannot be fundamentally changed without the use of a mod.
- ###### Can add entirely new functionality to the game
-  While data packs *can* add things like custom mobs or items through a couple workarounds, there are always some limitations. Mods can add *any* code to the game with no restrictions on their behavior.
+  While data packs _can_ add things like custom mobs or items through a couple workarounds, there are always some limitations. Mods can add _any_ code to the game with no restrictions on their behavior.
- ###### More performant than data packs when running large operations
This obviously depends on how well their functionality is written, but mods can provide much better performance with multithreading, asynchronous code, and generally faster access to the data they need. In comparison, data packs are limited by the performance of the commands available to them.
@@ -42,7 +42,7 @@ If you want to add new controls to the game, integrate with external services, o
- ###### Generally simpler to test and write
While some modding tools can provide fairly seamless testing & debugging, they all require programming knowledge in Java and/or Kotlin, and it can be tedious to set up a development environment for that if you don't have one already. Most data pack behavior can be written in any text editor and tested right in the text chat of your game!
- ###### Safer to make mistakes with
Since data packs are restricted to interacting with the commands Minecraft provides, it typically isn't possible to do anything that will entirely break your game. Mods can run any arbitrary code on your system, however &mdash; which means there's a higher chance that things can go wrong.
Since data packs are restricted to interacting with the commands Minecraft provides, it typically isn't possible to do anything that will entirely break your game. Mods can run any arbitrary code on your system, however — which means there's a higher chance that things can go wrong.
- ###### Typically better update compatibility
While some commands do change in new Minecraft updates, I have (anecdotally) found the changes to be less impactful than the work required to bring mods up to date with new versions. Since mods often use [mixins](https://github.com/SpongePowered/Mixin/wiki) and directly interact with Minecraft's internal code, they can be affected by under-the-hood changes that wouldn't make any difference to a data pack.
@@ -50,7 +50,7 @@ If you want to add new controls to the game, integrate with external services, o
I usually prefer to write data packs for most things I work on, as I find them to be more useful to a wider audience because of their easier installation process. Some players simply don't want the trouble of setting up another installation folder or using a different Minecraft loader to play with a specific mod, and data packs can work with almost any combination of other mods and server technology.
With that said, data packs can certainly be tedious to write at times &mdash; while they are easier to build for simple functionality that can be directly invoked through commands, more complex behavior might be better off as a mod if those advantages are more appealing. Nothing is without its drawbacks, and any choice here is a valid one.
With that said, data packs can certainly be tedious to write at times — while they are easier to build for simple functionality that can be directly invoked through commands, more complex behavior might be better off as a mod if those advantages are more appealing. Nothing is without its drawbacks, and any choice here is a valid one.
# Writing our first Minecraft function
@@ -78,7 +78,7 @@ Now let's see if we can put these into a function!
## Building a data pack folder structure
We'll need to make a new folder to build our data pack in &mdash; I'll name mine "1-introduction" to reflect the name of this article. We then need to place a "pack.mcmeta" file inside this folder to describe our pack.
We'll need to make a new folder to build our data pack in — I'll name mine "1-introduction" to reflect the name of this article. We then need to place a "pack.mcmeta" file inside this folder to describe our pack.
```json
{
@@ -92,7 +92,7 @@ We'll need to make a new folder to build our data pack in &mdash; I'll name mine
The `"pack_format": 10` in this file corresponds to Minecraft 1.19; typically, the format changes with each major update, so for newer versions you might need to increase this number...
| Minecraft Version | `"pack_format"` value |
|-------------------|-----------------------|
| ----------------- | --------------------- |
| 1.19 | `"pack_format": 10` |
| 1.18.2 | `"pack_format": 9` |
| 1.18-1.18.1 | `"pack_format": 8` |
@@ -104,9 +104,9 @@ We then need to create a series of folders next to this file, which should be ne
data/fennifith/functions/animals/
```
In this path, the `fennifith/` folder can be called a *namespace* &mdash; this should be unique to avoid potential clashes if someone tries to use multiple data packs at once; if two data packs use exactly the same function name, at least one of them won't work as expected.
In this path, the `fennifith/` folder can be called a _namespace_ — this should be unique to avoid potential clashes if someone tries to use multiple data packs at once; if two data packs use exactly the same function name, at least one of them won't work as expected.
The namespace and the `animals/` folder can be renamed as you like, but the `data/` and `functions/` folders must stay the same for the data pack to work. Additionally, it is important that the "functions" folder is exactly *one level* below the "data" folder. For example, `data/functions/` or `data/a/b/functions/` would **not** be valid structures.
The namespace and the `animals/` folder can be renamed as you like, but the `data/` and `functions/` folders must stay the same for the data pack to work. Additionally, it is important that the "functions" folder is exactly _one level_ below the "data" folder. For example, `data/functions/` or `data/a/b/functions/` would **not** be valid structures.
Finally, we should make our `.mcfunction` file in this folder. I'm going to name mine `spawn.mcfunction`:
@@ -142,7 +142,7 @@ To turn this folder into a data pack, we simply need to convert the "1-introduct
This can be done by holding down the Shift key and selecting both the `pack.mcmeta` and `data/` files in the file explorer. Then, right click and choose "Send to > Compressed (zipped) folder".
This should create a zip file in the same location &mdash; you might want to rename this to the name of your data pack. Right click & copy it so we can move it to the Minecraft world!
This should create a zip file in the same location — you might want to rename this to the name of your data pack. Right click & copy it so we can move it to the Minecraft world!
To find the location of your world save, open Minecraft and find the "testing" world that we created earlier. Click on it, then choose the "Edit" option, and "Open World Folder".
@@ -152,7 +152,7 @@ In the Explorer window that opens, enter the "datapacks" folder. Right click and
This can be done by opening your data pack in Finder and selecting both the `pack.mcmeta` and `data/` files. Control-click or tap the selected files using two fingers, then choose "Compress" from the options menu.
You should now have a file named "Archive.zip" &mdash; you might want to rename this to the name of your data pack. Then, copy this file so we can move it to the Minecraft world!
You should now have a file named "Archive.zip" — you might want to rename this to the name of your data pack. Then, copy this file so we can move it to the Minecraft world!
To find the location of your world save, open Minecraft and find the "testing" world that we created earlier. Click on it, then choose the "Edit" option, and "Open World Folder".
@@ -173,7 +173,7 @@ Then, assuming you named your world "testing", the command `ls ~/.minecraft/save
Now that we've installed the data pack, you should be able to enter the world save again (or use the `/reload` command if you still have it open). But nothing happens!
That's because, while our function exists, it isn't connected to any game events &mdash; we still need to type a command to actually run it. Here's what the command should look like for my function:
That's because, while our function exists, it isn't connected to any game events — we still need to type a command to actually run it. Here's what the command should look like for my function:
```shell
/function fennifith:animals/spawn
@@ -187,7 +187,7 @@ In order to run a function automatically, Minecraft provides two built-in [funct
## Using the "load" event
We'll start with `load` &mdash; for which we'll need to create two new files in our folder structure! Below, I'm creating a new `load.mcfunction` next to our previous function, and a `minecraft/tags/functions/load.json` file for the `load` tag.
We'll start with `load` — for which we'll need to create two new files in our folder structure! Below, I'm creating a new `load.mcfunction` next to our previous function, and a `minecraft/tags/functions/load.json` file for the `load` tag.
```shell
1-introduction/
@@ -204,7 +204,7 @@ We'll start with `load` &mdash; for which we'll need to create two new files in
spawn.mcfunction
```
Note that, while I'm using the `fennifith/` namespace for my functions, the tag file lives under the `minecraft/` namespace. This helps to keep some data isolated from the rest of the game &mdash; any files in the `minecraft/` folder are *modifying Minecraft's functionality,* while anything in a different namespace is creating something that belongs to my data pack.
Note that, while I'm using the `fennifith/` namespace for my functions, the tag file lives under the `minecraft/` namespace. This helps to keep some data isolated from the rest of the game — any files in the `minecraft/` folder are _modifying Minecraft's functionality,_ while anything in a different namespace is creating something that belongs to my data pack.
Inside `load.json`, we can add a JSON array that contains the name of our load function as follows:
@@ -230,16 +230,16 @@ To invoke the "load" tag manually, you can either use the `/reload` command, or
> **Be aware** that when using the tick event, it is very easy to do things that cause humongous amounts of lag in your game. For example, connecting this to our `spawn.mcfunction` from earlier might have some adverse consequences when summoning approximately 100 animals per second.
Now, what if we try adding a file for the `tick` event with the same contents? We could add a `tick.json` file pointing to a `fennifith:animals/tick` function &mdash; and write a `tick.mcfunction` file for it to run.
Now, what if we try adding a file for the `tick` event with the same contents? We could add a `tick.json` file pointing to a `fennifith:animals/tick` function — and write a `tick.mcfunction` file for it to run.
The chat window fills up with "Hello, world" messages! Every time the `tick` function tag is invoked (the game typically runs 20 ticks per second), it adds a new message! This is probably not something we want to do.
Could there be a way to check some kind of condition before running our commands? For example, if we wanted to run our `say` command when the player stands on a specific block...
Try experimenting! See if you can find a command that does this &mdash; and check out the next post in this series for the solution!
Try experimenting! See if you can find a command that does this — and check out the next post in this series for the solution!
# Conclusion
If your data pack hasn't worked first try &mdash; don't worry! There are a lot of steps here, and the slightest typo or misplacement will cause Minecraft to completely ignore your code altogether. If you're ever stuck and can't find the issue, the [Unicorn Utterances discord](https://discord.gg/FMcvc6T) is a great place to ask for help!
If your data pack hasn't worked on the first try — don't worry! There are a lot of steps here, and the slightest typo or misplacement will cause Minecraft to ignore your code altogether. If you're ever stuck and can't find the issue, the [Unicorn Utterances discord](https://discord.gg/FMcvc6T) is a great place to ask for help!
So far, we've covered the basics of data packs and how to write them &mdash; but there's a lot more to get into. Next, we'll start writing conditional behavior using block positions and entity selectors!
So far, we've covered the basics of data packs and how to write them — but there's a lot more to get into. Next, we'll start writing conditional behavior using block positions and entity selectors!
@@ -16,7 +16,7 @@ In the last article in the series, we outlined what a packet architected network
# Commonalities {#udp-and-tcp-both}
Let's start by talking about what similarities UDP and TCP have. While they do have their distinct differences, they share a lot in common.
Let's start by talking about what similarities UDP and TCP have. While they do have their distinct differences, they share a lot in common.
Since they're both packet-based, they both require an "address" of sorts to infer where they've come from and where they're going.
@@ -65,22 +65,20 @@ Let's say you're developing a web application using React and want to see it hos
# UDP {#udp}
Now that we've explained what IP addresses are and what ports are let's walk through how UDP is unique. _UDP stands for "User datagram protocol."_ You may be familiar with "User" and "Protocol," but the term **"datagram"** may be new.
Now that we've explained what IP addresses are and what ports are, let's walk through how UDP is unique. _UDP stands for "User Datagram Protocol."_ You may be familiar with "User" and "Protocol," but the term **"datagram"** may be new.
If you're familiar with how a telegram (like the old-school messaging method, not the new-age messaging platform) used to work, you may already be familiar with how a datagram works.
*A datagram is a unidirectional, non-verifiably-sent piece of communication that contains data.*
_A datagram is a unidirectional, non-verifiably-sent piece of communication that contains data._
Whoa. What's that even mean?
When you send a letter through the mail (barring any additional "protections" you might add to a valuable package. We'll get to that later), you have no way of knowing if it made it to the intended recipient.
When you send a letter through the mail (barring any additional "protections" you might add to a valuable package; we'll get to that later), you have no way of knowing if it made it to the intended recipient.
Because the packet of information could be lost somewhere or sustain damage that makes the data unreadable (say, via data corruption), you are unable to reliably ensure that it was received.
Likewise, if you've sent multiple packets at once, you have no way of knowing if your data is received in the same order they came in. While this isn't much of a problem for small-scale communication, this can become a problem for larger-scale bi-directional data transfer.
## When is UDP Useful? {#udp-uses}
UDP is useful for various low-level communication used to set up networks in ways that we'll touch on later in the series. That said, there are also application-level usages for UDP's core strength: speed. See, because UDP does not engage in any form of delivery confirmation, it tends to be significantly faster than its TCP counterpart. As such, if you require high-speed data throughput and can afford to lose some data, UDP is the way to go. This speed is why it's often utilized in video calling software. You can scale the video quality up or down based on which packets are able to make it through, but keep latency low by pressing forward when packets don't arrive in time.
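To make that trade-off concrete, here's a minimal sketch (my own example over loopback, not code from the article) of UDP's fire-and-forget behavior using Python's standard `socket` module:

```python
import socket

# UDP socket pair on loopback; the sender gets no delivery confirmation.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
receiver.settimeout(5)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello, datagram", addr)  # returns immediately; no ACK expected

data, _ = receiver.recvfrom(1024)        # arrives on loopback, but nothing in
                                         # UDP itself guarantees delivery or order
sender.close()
receiver.close()
```

On a real network, that `recvfrom` might simply never see the datagram — and the sender would be none the wiser.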
@@ -93,11 +91,11 @@ That's what TCP is for HTTP packets. TCP stands for "Transmission Control Protoc
The three-step handshake is broken down to these steps:
1) The client sends a request to the host, asking if it's acceptable to connect. It includes a "Synchronize Sequence Number" (SYN), which tells which packet number the communication is going to start with. This step is formally known as SYN
1. The client sends a request to the host, asking if it's acceptable to connect. It includes a "Synchronize Sequence Number" (SYN), which tells which packet number the communication is going to start with. This step is formally known as SYN
2) The host then acknowledges (ACK) the request, and sends it's own SYN. This step is formally known as SYN/ACK
2. The host then acknowledges (ACK) the request, and sends its own SYN. This step is formally known as SYN/ACK
3) The client acknowledges the SYN from the host, and data starts transmitting. This step is formally known as ACK.
3. The client acknowledges the SYN from the host, and data starts transmitting. This step is formally known as ACK.
When you disconnect from the host, a similar disconnect handshake is done. Once the setup handshake is completed and data starts flowing, every request to the host will be answered by an acknowledgment of delivery. This ACK makes sure that you know your packets are delivered. If a packet's acknowledgment doesn't arrive within a certain time, TCP has the client run timers that will re-send the packet.
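The handshake itself is handled for us by the operating system. As a rough sketch (my own example, not from the article), Python's standard `socket` module kicks it off with `connect()`/`accept()`, and TCP then delivers the bytes intact and in order:

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
server.settimeout(5)
addr = server.getsockname()

received = []

def serve():
    conn, _ = server.accept()            # completes the SYN / SYN-ACK / ACK setup
    while chunk := conn.recv(1024):      # b"" signals the disconnect handshake
        received.append(chunk)
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)                     # SYN -> SYN/ACK -> ACK happens here
for part in (b"first ", b"second ", b"third"):
    client.sendall(part)                 # each send is ACKed behind the scenes
client.close()                           # triggers the disconnect handshake

t.join()
message = b"".join(received)             # chunks may coalesce, but order is kept
```

Note that TCP is a byte stream: the three sends may arrive as one chunk or several, but always in order.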
@@ -10,7 +10,6 @@
}
---
# Defining Mutable and Immutable
Mutable means "can change". Immutable means "cannot change". And these meanings remain the same in the technology world. For example, a mutable string can be changed, and an immutable string cannot be changed.
@@ -54,7 +53,7 @@ The biggest problem with mutable variables is that they are not thread-safe. Thr
What this means is that threads can access a data structure without producing unexpected results.
Take this example from [Statics &amp; Thread Safety: Part I](https://odetocode.com/Articles/313.aspx) for instance: Say I've got a shopping cart with 10 items at my local shop. I go to checkout and the clerk grabs each item and puts it through the register and then computes my cost. Without human error, we would expect the correct total to be shown.
Take this example from [Statics & Thread Safety: Part I](https://odetocode.com/Articles/313.aspx) for instance: Say I've got a shopping cart with 10 items at my local shop. I go to checkout and the clerk grabs each item and puts it through the register and then computes my cost. Without human error, we would expect the correct total to be shown.
Now imagine if we had 5 checkout lanes, each one with one clerk, but only 1 shared register. If multiple clerks are putting in items through the register at the same time, no one would get their correct total.
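In Python terms (a sketch of the analogy, my own example rather than code from the article), giving only one clerk access to the register at a time maps onto a `threading.Lock`:

```python
import threading

register_total = 0
register_lock = threading.Lock()

def checkout(prices):
    # One clerk ringing up a cart; the lock is the single shared register.
    global register_total
    for price in prices:
        with register_lock:              # only one thread mutates at a time
            register_total += price

cart = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]   # one 10-item cart per clerk
clerks = [threading.Thread(target=checkout, args=(cart,)) for _ in range(5)]
for clerk in clerks:
    clerk.start()
for clerk in clerks:
    clerk.join()

print(register_total)   # 5 clerks x 129 = 645, every time
```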
@@ -62,7 +61,7 @@ The solution is to ensure that only 1 clerk will have access at any one time to
# How does immutability solve this issue?
Immutability solves this issue by ensuring that a data structure cannot be modified, only read. Create once, read many times. So what if you need to perform an operation on an immutable data structure? You'd return the result in a *new* immutable instance of the data structure.
Immutability solves this issue by ensuring that a data structure cannot be modified, only read. Create once, read many times. So what if you need to perform an operation on an immutable data structure? You'd return the result in a _new_ immutable instance of the data structure.
So how does this look in typescript?
@@ -96,4 +95,3 @@ testUser = testUser.increaseAgeByOne(); // instance B
Now in the scenario that Thread 1 is reading `instance A` and Thread 2 wants to increase the age, it will have to do so by creating an `instance B` instead of directly modifying `instance A`, so it is assured that Thread 1 will produce expected behaviour.
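The same pattern can be sketched in Python (my own analogue, not the article's TypeScript) with a frozen dataclass, where "changing" a value means producing a new instance:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)   # attribute assignment raises FrozenInstanceError
class User:
    name: str
    age: int

    def increase_age_by_one(self) -> "User":
        # Create once, read many times: return a brand-new instance B
        return replace(self, age=self.age + 1)

instance_a = User("Test", 30)
instance_b = instance_a.increase_age_by_one()
# Any thread still reading instance_a is safe; it was never modified.
```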
Thanks for taking the time to read this article, and make sure to check other Unicorn Utterance's blog posts!
@@ -39,7 +39,7 @@ Binary, on the other hand, is _base two_. **This means that there are only two s
> For the Latin enthusiasts, binary comes from "binarius" meaning "two together". _Deca_, meaning 10, is where "decimal" comes from.
> Additionally, the term "radix" is sometimes used instead of "base" when describing numeral systems, especially in programming.
Instead of using numbers, which can get very confusing very quickly while learning for the first time, let's use **`X`**s and **`O`**s as our two symbols for our first few examples. _An **`X`** represents if a number is present and that we should add it to the final sum; an **`O`** means that the number is not present and that we should not add it_.
Instead of using numbers, which can get very confusing very quickly while learning for the first time, let's use \*\*`X`\*\*s and \*\*`O`\*\*s as our two symbols for our first few examples. _An **`X`** represents if a number is present and that we should add it to the final sum; an **`O`** means that the number is not present and that we should not add it_.
Take the following example:
![A "X" on the two column and a "X" on the ones column which add together to make 3](./base_2_3_symbols.svg)
@@ -66,26 +66,26 @@ Once each of these powers is laid out, we can start adding `1`s where we have th
- Is `64` less than or equal to `50`? No. That's a **`0`**.
- Is `32 <= 50`? Yes, therefore that's a **`1`**.
- `50 - 32 = 18`
- `50 - 32 = 18`
- Moving down the list, is `16 <= 18`? Yes, that's a **`1`**.
-`18 - 16 = 2`
\-`18 - 16 = 2`
- Is `8 <= 2`? No, that's a **`0`**.
- `4 <= 2`? No, that's a **`0`** as well.
- `2 <= 2`? Yes, that's a **`1`**.
- `2 - 2 = 0`
- `2 - 2 = 0`
- Now that we're left with `0`, we know that the rest of the digits will be `0`.
Add up all those numbers:
| Column | Value |
| ------ | ------- |
| `64` | **`0`** |
| `32` | **`1`** |
| `16` | **`1`** |
| `8` | **`0`** |
| `4` | **`0`** |
| `2` | **`1`** |
| `1` | **`0`** |
| Column | Value |
| ------ | ------- |
| `64` | **`0`** |
| `32` | **`1`** |
| `16` | **`1`** |
| `8` | **`0`** |
| `4` | **`0`** |
| `2` | **`1`** |
| `1` | **`0`** |
And voilà, you have the binary representation of `50`: **`0110010`**.
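The greedy walk above can be sketched in a few lines of Python (my own example, not part of the article): start from the largest column and subtract whenever the column's value fits.

```python
def to_binary(n):
    digits = ""
    power = 64                 # the largest column used in the example
    while power >= 1:
        if power <= n:         # "is this column's value <= what's left?"
            digits += "1"
            n -= power
        else:
            digits += "0"
        power //= 2
    return digits

print(to_binary(50))           # matches the table: 0110010
print(format(50, "b"))         # Python's built-in agrees (minus the leading 0)
```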
@@ -115,15 +115,15 @@ Assuming we have a _ones_ column, a _sixteens_ column, and a _two-hundred fifty
- Is `256` less than or equal to `50`? No. That's a **`0`**
- Is `16 <= 50`? Yes. So we know it's _at least `1`_.
- Now, how many times can you put `16` in `50`?
- `16 * 2 = 32` and `32 <= 50`, so it's _at least_ _`2`_.
- `16 * 3 = 48` and `48 <= 50` so it's _at least_ _`3`_.
- `16 * 4 = 64`. However, `64 > 50`, therefor the _sixteenth_ place cannot be _`4`_, therefore it must be **`3`**.
- Now that we know the most we can have in the _sixteenth_ place, we can subtract the sum (`48`) from our result (`50`).
- `50 - 48 = 2`
- Now, how many times can you put `16` in `50`?
- `16 * 2 = 32` and `32 <= 50`, so it's _at least_ _`2`_.
- `16 * 3 = 48` and `48 <= 50` so it's _at least_ _`3`_.
  - `16 * 4 = 64`. However, `64 > 50`, so the _sixteenth_ place cannot be _`4`_; therefore it must be **`3`**.
- Now that we know the most we can have in the _sixteenth_ place, we can subtract the sum (`48`) from our result (`50`).
- `50 - 48 = 2`
- Now onto the _ones_ place: how many _ones_ can fit into _`2`_?
- `1 * 1 = 1` and `1 <= 2`, so it's _at least_ _`1`_.
- `1 * 2 = 2` and `2 <= 2` and because these numbers are equal, we know that there must be **`2`** _twos_.
- `1 * 1 = 1` and `1 <= 2`, so it's _at least_ _`1`_.
  - `1 * 2 = 2` and `2 <= 2`, and because these numbers are equal, we know that there must be **`2`** _ones_.
Now if we add up these numbers:
@@ -153,13 +153,13 @@ In order to add a number larger than `15` in the hexadecimal system, we need to
>
> Binary works in the same manner. The first 5 columns/digits of binary are: `1`, `2`, `4`, `8`, `16`. These numbers align respectively to their binary exponents: 2<sup>0</sup>, 2<sup>1</sup>, 2<sup>2</sup>, 2<sup>3</sup>, 2<sup>4</sup>.
>
> It's also worth noting that decimal numbers can be written out the same way.
> It's also worth noting that decimal numbers can be written out the same way.
>
> _`732`_ for example, in base 10, can be written as (7 × 10<sup>2</sup>) + (3 × 10<sup>1</sup>) + (2 × 10<sup>0</sup>).
## To Binary {#hexadecimal-to-binary}
Remember that at the end of the day, hexadecimal is just another way to represent a value using a specific set of symbols. Just as we're able to convert from binary to decimal, we can convert from hexadecimal to binary and vice versa.
Remember that at the end of the day, hexadecimal is just another way to represent a value using a specific set of symbols. Just as we're able to convert from binary to decimal, we can convert from hexadecimal to binary and vice versa.
In binary, the set of symbols is much smaller than in hexadecimal, and as a result, the symbolic representation is longer.
![The hexadecimal number "032" and the binary number "110010" which both represent the decimal value 50](./binary_vs_hexadecimal.svg)
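Python's built-in base handling offers a quick way to double-check these conversions (a sketch of my own, not from the article):

```python
value = 50

print(format(value, "x"))        # hexadecimal digits: 32
print(format(value, "b"))        # binary digits: 110010
print(int("32", 16))             # hex back to decimal: 50
print(int("110010", 2))          # binary back to decimal: 50
```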
@@ -39,7 +39,7 @@ Here's an example:
// Prints out the value num
cout << num << endl;
```
This should print out something like...
```
@@ -57,7 +57,7 @@ Pointers can also get a lot more complex and must be used in certain situations.
In simple terms, a reference is simply the address of whatever you're passing. The difference between a pointer and a reference lies in the fact that a reference is simply the **address** where a value is being stored, while a pointer is a variable that has its own address as well as the address it's pointing to. I like to consider the **&** operator the "reference operator," even though I'm pretty sure that's not actually what it's called. I used this operator in the last example, and it's pretty straightforward.
```cpp
```cpp
int num = 12;
int *val = &num;
int **doublePointer = &val;
@@ -82,7 +82,7 @@ The output will look something like this...
Address of val: 0xffffcc10
0xffffcc08 : 0xffffcc10 : 0xffffcc1c : 12
```
As you can see, all the **&** operator does is give you the memory address at its specific spot in memory. I also included a small example of a double pointer, which just adds one more layer of abstraction over a single pointer. You can see how the memory addresses line up in the output.
Here's what this looks like in memory with more easily understandable addresses in a “0x…” format.
@@ -39,23 +39,23 @@ So I developed the most simple thing that would work.
There are 2 task types:
* Tasks: You can nest tasks within tasks. They can be anything: feature implementation, refactoring, writing tests, chores, etc. Anything except issues or bugs
- Tasks: You can nest tasks within tasks. They can be anything: feature implementation, refactoring, writing tests, chores, etc. Anything except issues or bugs
* Bugs: They can be nested under tasks, but not under other bugs. They must be atomic, meaning they can't be broken down into smaller pieces
- Bugs: They can be nested under tasks, but not under other bugs. They must be atomic, meaning they can't be broken down into smaller pieces
### Tags
There are a few tags that I like to use when defining bugs and tasks:
* Feature: For tasks the implement a feature
- Feature: For tasks that implement a feature
* Refactor: For tasks that relate to a refactor of some sorts
- Refactor: For tasks that relate to a refactor of some sorts
* Chore: For tasks that may be boring / don't affect the end result too much like internal documentation and configuration
- Chore: For tasks that may be boring / don't affect the end result too much like internal documentation and configuration
* Test: For tasks or bugs about testing
- Test: For tasks or bugs about testing
* Blocker: For bugs that are very urgent
- Blocker: For bugs that are very urgent
By using these tags, it becomes easier to manage tasks and bugs, as you can filter through them. Some people might prefer making each tag its own task type, but personally I prefer to have one type of task with one or more tags instead.
@@ -109,7 +109,7 @@ But I still wanted to put out a general sort of guide to coming up with your own
### 1. How do you want to break down your to-dos?
In my case, I wanted to break them down into Tasks and Bugs, each with optional tags. And it works great for me because it integrates well with Azure DevOps. But if you're using a different system then maybe you want to rethink this. Maybe you want Test Writing to be it's own type of task because you find it feels better in your system. It's *your* methodology so remember to tailor it to yourself.
In my case, I wanted to break them down into Tasks and Bugs, each with optional tags. And it works great for me because it integrates well with Azure DevOps. But if you're using a different system, then maybe you want to rethink this. Maybe you want Test Writing to be its own type of task because you find it feels better in your system. It's _your_ methodology, so remember to tailor it to yourself.
### 2. Which branching strategy do you want?
@@ -11,7 +11,7 @@
}
---
Python list comprehensions allow for powerful and readable list mutations. In this article, we'll learn many different ways in how they can be used and where they're most useful.
Python list comprehensions allow for powerful and readable list mutations. In this article, we'll learn many different ways in how they can be used and where they're most useful.
Python is an incredibly powerful language that's widely adopted across a range of applications. As with any language of sufficient complexity, Python enables multiple ways of doing things. However, the community at large has agreed that code should follow a specific pattern: be “Pythonic”. While “Pythonic” is a community term, the official language defines what they call [“The Zen of Python” in PEP 20](https://www.python.org/dev/peps/pep-0020/). To quote just a small bit of it:
@@ -23,12 +23,10 @@ Python is an incredibly powerful language thats widely adopted across a wide
>
> Flat is better than nested.
Introduced in Python 2.0 with [PEP 202](https://www.python.org/dev/peps/pep-0202/), list comprehensions help align some of these goals for common operations in Python. Let's explore how we can use list comprehensions and where they serve the Zen of Python better than alternatives.
## What is List Comprehension?
Let's say that we want to make an array of numbers, counting up from 0 to 2. We could assign an empty array, use `range` to create a generator, then `append` to that array using a `for` loop:
```python
@@ -234,7 +232,6 @@ safe_numbers = [x for x in range(6) if (x%2==0 or x%3==0) and x is not restricte
## Conclusion & Challenge
We've covered a lot about list comprehension in Python today! We're able to build complex logic into our applications while maintaining readability in most situations. However, like any tool, list comprehension can be abused. When you start including too many logical operations to comfortably read, you should likely migrate away from list comprehension to use full-bodied `for` loops.
For example, given this sandbox code pad of a long and messy list comprehension, how can you refactor to remove all usage of list comprehensions? Avoid using `map`, `filter` or other list helpers, either. Simply use nested `for` loops and `if` conditionals to match the behavior as it was before.
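As a tiny illustration of that kind of refactor (my own example, not the challenge's code pad), here's the same nested logic written both ways:

```python
# As a comprehension: compact, but the logic is packed into one line.
pairs_comprehension = [(x, y) for x in range(3) for y in range(3) if x != y]

# As plain loops and conditionals: the same behavior, spelled out step by step.
pairs_loops = []
for x in range(3):
    for y in range(3):
        if x != y:
            pairs_loops.append((x, y))

print(pairs_comprehension == pairs_loops)   # the two produce identical lists
```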
@@ -14,6 +14,7 @@
Today at work we had a silly bug that exposed how reliant I am on Go's type system and compiler. I personally am too comfortable building a Docker image and assuming that the most egregious bugs were caught simply because the build was successful.
## The Bug
Python doesn't require you to specify a return value. In fact, you can have a function that may not explicitly return at all. Since Python is a scripting language, it will automatically return when it hits the bottom of the function being called. When this happens without returning a specific value, any variable assigned to the function call will be `None`. A silly but illustrative example:
```python
@@ -21,6 +22,7 @@ def change_string(do_it: bool) -> str:
if do_it:
return "changed!"
```
This function accepts a boolean that determines whether to do anything at all. According to the type hints and the function name, a string is the expected return type. You can assign the output of this function to a variable like normal:
```python
@@ -38,6 +40,7 @@ print(type(my_string))
```
## Lesson Learned
We switched to Go for the concurrency benefits, but the type system and compiler also help save us from these runtime errors. The same function in Go would result in a compile time error:
```go
@@ -58,4 +61,4 @@ func main() {
> ./main.go:10:1: missing return
```
This Python service isn't a candidate for rewriting in Go any time soon. Remembering to be more thorough in my code review and testing would have saved me from an embarrassing run time error that had some client impact today.
This Python service isn't a candidate for rewriting in Go any time soon. Remembering to be more thorough in my code review and testing would have saved me from an embarrassing run time error that had some client impact today.
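Short of a rewrite, one defensive pattern (an assumption of mine, not something we did in the incident) is to guard the call site so a silent `None` fails fast instead of propagating:

```python
def change_string(do_it: bool) -> str:
    # Mirrors the buggy function: falls off the end and returns None when
    # do_it is False, despite the -> str hint.
    if do_it:
        return "changed!"

def change_string_checked(do_it: bool) -> str:
    result = change_string(do_it)
    if result is None:       # Python won't enforce the `-> str` hint for us
        return ""            # or raise, depending on what callers can tolerate
    return result

safe = change_string_checked(False)   # "" rather than a surprise None
```

A static checker run in CI would catch the missing return before it ships, much like the Go compiler does.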
@@ -18,7 +18,7 @@ Today, we'll be walking through two definitions of refs:
- A [reference to DOM elements](#dom-ref)
We'll also be exploring additional functionality to each of those two definitions, such as [component refs](#forward-ref), [adding more properties to a ref](#use-imperative-handle), and even exploring [common code gotchas associated with using `useRef`](#refs-in-use-effect).
We'll also be exploring additional functionality to each of those two definitions, such as [component refs](#forward-ref), [adding more properties to a ref](#use-imperative-handle), and even exploring [common code gotchas associated with using `useRef`](#refs-in-use-effect).
> As most of this content relies on the `useRef` hook, we'll be using functional components for all of our examples. However, there are APIs such as [`React.createRef`](https://reactjs.org/docs/refs-and-the-dom.html#creating-refs) and [class instance variables](https://www.seanmcp.com/articles/storing-data-in-state-vs-class-variable/) that can be used to recreate `React.useRef` functionality with classes.
However, that's not the case. [To quote Dan Abramov](https://github.com/facebook
> }
> ```
Because of this implementation, when you mutate the `current` value, it will not cause a re-render.
Thanks to the lack of rendering on data storage, it's particularly useful for storing data that you need to keep a reference to but don't need to render on-screen. One such example of this would be a timer:
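As a framework-free sketch of that idea, the plain object below is only a stand-in for what `React.useRef()` returns, not React's real implementation:

```javascript
// A ref is a stable, mutable box: we stash the interval id in it so we can
// read it later for cleanup, and mutating it never triggers any rendering.
const timerRef = { current: null }; // stand-in for React.useRef(null)
let ticks = 0;

timerRef.current = setInterval(() => {
  ticks += 1; // work we track but never render
}, 10);

setTimeout(() => {
  clearInterval(timerRef.current); // cleanup reads the mutable box
  timerRef.current = null;
  console.log("ticks counted:", ticks);
}, 60);
```

In a component, the `setInterval` would live inside a `useEffect` and the `clearInterval` in that effect's cleanup function.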
Because `useRef` relies on passing by reference and mutating that reference, if
<iframe src="https://stackblitz.com/edit/react-use-ref-mutable-fixed-code?ctl=1&embed=1" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe>
> - I would not solve it this way in production. `useState` accepts a callback which you can use as an alternative (much more recommended) route:
>
> ```jsx
> const dataRef = React.useRef();
>
> const [timerVal, setTimerVal] = React.useState(0);
>
> const clearTimer = () => {
> clearInterval(dataRef.current);
> };
>
> React.useEffect(() => {
> dataRef.current = setInterval(() => {
> setTimerVal(tVal => tVal + 1);
> }, 500);
>
> return () => clearInterval(dataRef.current);
> }, [dataRef]);
> ```
>
> We're simply using a `useRef` to outline one of the important properties about refs: mutation.
# DOM Element References {#dom-ref}
It's worth noting that the `ref` attribute also accepts a function. While [we'll
)
```
# Component References {#forward-ref}
HTML elements are a great use-case for `ref`s. However, there are many instances where you need a ref for an element that's part of a child's render process. How are we able to pass a ref from a parent component to a child component?
> ```jsx
> class App extends React.Component {
> compRef = React.createRef();
>
> componentDidMount() {
> console.log(this.compRef.current);
> }
>
> render() {
> return (
> <Container ref={this.compRef}>
The way `useEffect` _actually_ works is much more passive. During a render, `use
Why does this come into play when `ref`s are used? Well, there are a few things to keep in mind:
- Refs rely on object mutation rather than reassignment
- `useEffect` only does the array check on re-render
- Setting a ref's `current` property doesn't trigger a re-render ([remember how `useRef` is _actually_ implemented](#use-ref-mutate))
Knowing this, let's take a look at an offending example once more:
This code behaves as we might initially expect, not because we've done things pr
Because `useEffect` happens _after_ the first render, `elRef` is already assigned by the time `elRef.current.style` has its new value assigned to it. However, if we somehow broke that timing expectancy, we'd see different behavior.
What do you think will happen if you make the `div` render happen _after_ the initial render?
```jsx
Because of the unintended effects of tracking a `ref` in a `useEffect`, the core
[Dan Abramov Said on GitHub:](https://github.com/facebook/react/issues/14387#issuecomment-503616820)
> As I mentioned earlier, if you put [ref.current] in dependencies, you're likely making a mistake. Refs are for values whose changes don't need to trigger a re-render.
>
> If you want to re-run effect when a ref changes, you probably want a callback ref instead.

---
{
title: "Rules of React's useEffect",
description: "useEffect is prolific in React apps. Here are four rules associated with the hook and in-depth explanations of why they're important.",
attached: [],
license: 'coderpad',
originalLink: 'https://coderpad.io/blog/development/rules-of-reacts-useeffect/'
}
---
Reacts `useEffect` is a powerful API with lots of capabilities, and therefore flexibility. Unfortunately, this flexibility often leads to abuse and misuse, which can greatly damage an apps stability.
The good news is that if you follow a set of rules designed to protect you during coding, your application can be secure and performant.
No, were not talking about Reacts “[Rules of Hooks](https://reactjs.org/docs/hooks-rules.html)”, which include rules such as:
- No conditionally calling hooks
- Only calling hooks inside of other hooks or components
- Always having items inside of the dependency array
These rules are good, but they can be detected automatically with linting rules. It's good that they exist (and are maintained by Meta), but overall, we can assume everyone has them handled because their IDE should throw a warning.
Specifically, I want to talk about the rules that can only be caught during manual code review processes:
- Keep all side effects inside `useEffect`
- Properly clean up side effects
- Don't use `ref` in `useEffect`
- Don't use `[]` as a guarantee that something only happens once
While these rules may seem obvious at first, we'll be taking a deep dive into the "why" of each. As a result, you may learn something about how React works under the hood - even if you're a React pro.
## Keep all side effects inside `useEffect`
For anyone familiar with Reacts docs, youll know that this rule has been repeated over and over again. But why? Why is this a rule?
After all, what would prevent you from storing logic inside of a `useMemo` and simply having an empty dependency array to prevent it from running more than once?
Lets try that out by running a network request inside of a `useMemo`:
```jsx
const EffectComp = () => {
const [activity, setActivity] = React.useState(null);
}, [])
return <p>{activity}</p>
}
```
<iframe src="https://app.coderpad.io/sandbox?question_id=205251" loading="lazy"></iframe>
Huh. It works first try without any immediately noticeable downsides. This works because `fetch` is asynchronous, meaning that it doesnt block the [event loop](https://www.youtube.com/watch?v=8aGhZQkoFbQ&vl=en). Instead, lets change that code to be a synchronous `XHR` request and see if that works too.
```
function getActivity() {
var request = new XMLHttpRequest();
request.open('GET', 'https://www.boredapi.com/api/activity', false); // `false` makes the request synchronous
}, []);
return <p>Hello, world! {data}</p>;
}
```
<iframe src="https://app.coderpad.io/sandbox?question_id=205252" loading="lazy"></iframe>
Here, we can see behavior that we might not expect. When using `useMemo` alongside a blocking method, the entire screen halts before drawing anything. The initial paint is then made only after the fetch finally finishes.
<video src="./useMemoRendering.mp4"></video>
However, if we use `useEffect` instead, this does not occur.
<video src="./useEffectRendering.mp4"></video>
Here, we can see the initial paint occur, drawing the “Hello” message before the blocking network call is made.
Why does this happen?
### Understanding hook lifecycles
The reason `useEffect` is still able to paint but `useMemo` cannot is because of the timing of each of these hooks. You can think of `useMemo` as occurring right in line with the rest of your render code.
In terms of timings, the two pieces of code are very similar:
```jsx
const EffectComp = () => {
const [data, setData] = React.useState(null);
setData(getActivity().activity);
return <p>Hello, world! {data}</p>;
}
```
This inlining behavior occurs because `useMemo` runs during the “render” phase of a component. `useEffect`, on the other hand, runs **after** a component renders out, which allows an initial render before the blocking behavior halts things for us.
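A rough scheduling sketch of that difference, using `queueMicrotask` purely as a stand-in for React deferring effects until after the commit (this is not how React actually schedules work):

```javascript
// "Render-phase" work (useMemo) runs inline; "effect" work (useEffect) is
// deferred until after the render output has been committed.
const log = [];

function renderComponent() {
  log.push("render start");
  log.push("useMemo body");   // inline: blocks the render if it blocks
  log.push("render end");
  queueMicrotask(() => log.push("useEffect body")); // deferred until later
}

renderComponent();
log.push("commit / paint");
queueMicrotask(() => console.log(log.join(" -> ")));
// render start -> useMemo body -> render end -> commit / paint -> useEffect body
```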
Those among you who know of `useLayoutEffect` may think youve found a gotcha in what I just said.
“Ahh, but wouldnt `useLayoutEffect` also prevent the browser from drawing until the network call is completed?”
Not quite! You see, while `useMemo` runs during the render phase, `useLayoutEffect` runs during the *commit* phase and therefore renders the initial contents to screen first.
> [useLayoutEffects signature is identical to useEffect, but it fires synchronously after all DOM mutations.](https://reactjs.org/docs/hooks-reference.html#uselayouteffect)
See, the commit phase is the part of a components lifecycle *after* React is done asking all the components what they want the UI to look like, has done all the diffing, and is ready to update the DOM.
![img](./hooks_lifecycle.png)
> If youd like to learn more about how React does its UI diffing and what this process all looks like under the hood, take a look at [Dan Abramovs wonderful “React as a UI Runtime” post](https://overreacted.io/react-as-a-ui-runtime/).
>
> Theres also [this awesome chart demonstrating how all of the hooks tie in together](https://github.com/Wavez/react-hooks-lifecycle) that our chart is a simplified version of.
Now, this isnt to say that you should optimize your code to work effectively with blocking network calls. After all, while `useEffect` allows you to render your code, a blocking network request still puts you in the uncomfortable position of your user being unable to interact with your page.
Because JavaScript is single-threaded, a blocking function will prevent user interaction from being processed in the event loop.
> If you read the last sentence and are scratching your head, youre not alone. The idea of JavaScript being single-threaded, what an “event loop” is, and what “blocking” means are all quite confusing at first.
>
> We suggest taking a look at [this great explainer talk from Philip Roberts](https://www.youtube.com/watch?v=8aGhZQkoFbQ) to understand more.
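To make the blocking point concrete, here is a minimal sketch in which a busy-wait stands in for a synchronous network call:

```javascript
// While the busy-wait holds the only thread, nothing queued on the event
// loop (timers, clicks, paints) gets a chance to run.
function blockFor(ms) {
  const start = Date.now();
  while (Date.now() - start < ms) {} // synchronous: monopolizes the thread
}

let interactionHandled = false;
setTimeout(() => { interactionHandled = true; }, 0); // a queued "click"

blockFor(50);                    // the synchronous "network call"
console.log(interactionHandled); // false: the queue still hasn't run
```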
That said, this isnt the only scenario where the differences between `useMemo` and `useEffect` cause misbehavior with side effects. Effectively, theyre two different tools with different usages and attempting to merge them often breaks things.
Attempting to use `useMemo` in place of `useEffect` leads to scenarios that can introduce bugs, and it may not be obvious whats going wrong at first. After long enough, with enough of these floating about in your application, its sort of “death by a thousand paper-cuts”.
These papercuts aren't the only problem, however. After all, the APIs for `useEffect` and `useMemo` are not the same. This incongruity between APIs is especially pronounced for network requests because a key feature is missing from the `useMemo` API: effect cleanup.
## Always clean up your side effects
Occasionally, when using `useEffect`, you may be left with something that requires cleanup. A classic example of this might be a network call.
Say you have an application to give bored users an activity to do at home. Lets use a network request that retrieves an activity from an API:
```jsx
const EffectComp = () => {
const [activity, setActivity] = React.useState(null);
}, [])
return <p>{activity}</p>
}
```
While this works for a single activity, what happens when the user completes the activity?
Lets give them a button to rotate between new activities and include a count of how many times the user has requested an activity.
```jsx
const EffectComp = () => {
const [activity, setActivity] = React.useState(null);
const [num, setNum] = React.useState(1);
<button onClick={() => setNum(num + 1)}>Request new activity</button>
</div>
)
}
```
Just as we intended, we get a new network activity if we press the button. We can even press the button multiple times to get a new activity per press.
But wait, what happens if we slow down our network speed and press the “request” button rapidly?
<video src="./before_signal.mp4"></video>
Oh no! Even though weve stopped clicking the button, our network requests are still coming in. This gives us a sluggish-feeling experience, especially when latency times between network calls are high.
Well, this is where our cleanup would come into effect. Lets add an [AbortSignal](https://developer.mozilla.org/en-US/docs/Web/API/AbortSignal) to cancel a request when we request a new one.
```jsx
const EffectComp = () => {
const [activity, setActivity] = React.useState(null);
const [num, setNum] = React.useState(1);
<button onClick={() => setNum(num + 1)}>Request new activity</button>
</div>
)
}
```
If we open our network request tab, youll notice how our network calls are now being canceled when we initialize a new one.
![img](./cancelled_request.png)
This is a good thing! It means that instead of a jarring experience of jumpiness, youll now only see a single activity after the end of a chain of clicking.
<video src="./after_signal.mp4"></video>
While this may seem like a one-off that we created ourselves using artificial network slowdowns, this is the real-world behavior users on slow networks may experience!
Whats more, when you consider API timing differences, this problem may be even more widespread.
Lets say that youre using a [new React concurrent feature](https://coderpad.io/blog/why-react-18-broke-your-app/), which may cause an interrupted render, forcing a new network call before the other has finished.
The first call hangs on the server for slightly longer for whatever reason and takes 500ms, but the second call goes through immediately in 20ms. But oh no, during that 480ms there was a change in the data!
![img](./manual_waterfall.png)
This means that our `.then` which runs `setActivity` will execute on the first network call complete with stale data (showing “10,000”) **after** the second network call.
This is important to catch early, because these shifts in behavior can be immediately noticeable to a user when it happens. These issues are also often particularly difficult to find and work through after the fact.
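Cancellation isn't the only defense against this race. A common alternative is to tag each request and ignore responses that arrive out of order; in this sketch, `fakeFetch` is a hypothetical stand-in for a network call with a given latency:

```javascript
// Ignore stale responses: only the most recent request may update state.
function fakeFetch(value, delayMs) {
  return new Promise(resolve => setTimeout(() => resolve(value), delayMs));
}

let latestRequestId = 0;
let shownActivity = null;

function requestActivity(value, delayMs) {
  const id = ++latestRequestId;
  return fakeFetch(value, delayMs).then(result => {
    if (id !== latestRequestId) return; // superseded by a newer request
    shownActivity = result;             // stand-in for setActivity(result)
  });
}

// The first (slow) response resolves after the second (fast) one; without
// the id check it would clobber the fresh data when it finally lands.
Promise.all([
  requestActivity("stale activity", 50),
  requestActivity("fresh activity", 5),
]).then(() => console.log(shownActivity)); // "fresh activity"
```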
## Dont use refs in useEffect
If youve ever used a useEffect to apply an `addEventListener`, you may have written something like the following:
```jsx
const RefEffectComp = () => {
const buttonRef = React.useRef();
<p>{count}</p>
<button ref={buttonRef}>Click me</button>
</div>
}
```
<iframe src="https://app.coderpad.io/sandbox?question_id=205242" loading="lazy"></iframe>
<video src="./button_incrementing.mp4"></video>
While this might make intuitive sense due to utilizing `useEffect`s cleanup, this code is actually not correct. You should not utilize a `ref` or `ref.current` inside of a dependency array for a hook.
This is because **changing refs does not force a re-render and therefore useEffect never runs when the value changes.**
While most assume that `useEffect` “listens” for changes in this array and runs the effect when it changes, this is an inaccurate mental model.
A more apt mental model might be: “useEffect only runs at most once per render. However, as an optimization, I can pass an array to prevent the side effect from running if the variable references inside of the array have not changed.”
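A sketch of that mental model; the `Object.is` comparison matches how React documents its dependency check, but the surrounding bookkeeping here is simplified:

```javascript
// The effect re-runs only when a render happens AND some dependency entry
// fails an Object.is comparison against the previous render's entry.
function shouldRunEffect(prevDeps, nextDeps) {
  if (prevDeps === null) return true; // first render always runs the effect
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}

const buttonRef = { current: null }; // stand-in for useRef(null)

// Render 1: deps captured as [null]; the effect runs.
let prevDeps = null;
const firstRun = shouldRunEffect(prevDeps, [buttonRef.current]); // true
prevDeps = [buttonRef.current];

// The ref is mutated later, but mutation causes no render, so the check
// above simply never happens again; the effect cannot "see" the new value.
buttonRef.current = "the <button> element";
```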
This shift in understanding is important because the first version can easily lead to bugs in your app. For example, instead of rendering out the button immediately, lets say that we need to defer the rendering for some reason.
Simple enough, well add a `setTimeout` and a boolean to render the button.
```jsx
const RefEffectComp = ()=>{
const buttonRef = React.useRef();
<p>{count}</p>
{shouldRender && <button ref={buttonRef}>Click me</button>}
</div>
}
```
<iframe src="https://app.coderpad.io/sandbox?question_id=205243" loading="lazy"></iframe>
Now, if we wait a second for the button to render and click it, our counter doesnt go up!
<video src="./button_not_incrementing.mp4"></video>
This is because once our `ref` is set after the initial render, it doesnt trigger a re-render and our `useEffect` never runs.
A better way to write this would be to utilize a [“callback ref”](https://unicorn-utterances.com/posts/react-refs-complete-story#callback-refs), and then use a `useState` to force a re-render when its set.
```jsx
const RefEffectComp = ()=>{
const [buttonEl, setButtonEl] = React.useState();
<p>{count}</p>
{shouldRender && <button ref={buttonElRef => setButtonEl(buttonElRef)}>Click me</button>}
</div>
}
```
This will force the re-render when `ref` is set after the initial render and, in turn, cause the `useEffect` to trigger as expected.
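A stripped-down sketch of that flow, with plain functions standing in for React's state setter and render cycle (hypothetical names, not React APIs):

```javascript
// A callback ref sets state; setting state forces a render; the render
// re-runs the dependency check, so the effect finally sees the element.
const effectLog = [];
let renders = 0;
let buttonEl = null; // the useState-held element

function render() {
  renders += 1;
  // effect with [buttonEl] deps: runs once buttonEl becomes non-null
  if (buttonEl !== null && effectLog.length === 0) {
    effectLog.push("listener attached to " + buttonEl);
  }
}

function setButtonEl(el) { // the callback ref, backed by "setState"
  buttonEl = el;
  render();                // stand-in for the re-render setState triggers
}

render();                // initial render: element not mounted, effect skips
setButtonEl("button#1"); // element mounts later; state change re-renders
console.log(renders, effectLog[0]); // 2 "listener attached to button#1"
```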
To be fair, this “rule” is more of a soft rule than anything. There are absolutely instances, such as `setTimeout` timers, where utilizing a ref inside of a `useEffect` makes sense. Just make sure you have a proper mental model about refs and `useEffect` and youll be fine.
> Want to refine your understanding of refs even further? [See my article outlining the important details of refs for more.](https://unicorn-utterances.com/posts/react-refs-complete-story)
## Dont expect an empty dependency array to only run once
While previous versions of React allowed you to utilize an empty array to guarantee that a `useEffect` would only run once, [React 18 changed this behavior](https://coderpad.io/blog/why-react-18-broke-your-app/). As a result, `useEffect` may now run any number of times even when an empty dependency array is passed, in particular when a [concurrent feature is utilized](https://github.com/reactwg/react-18/discussions/46#discussioncomment-846786).
Concurrent features are new to React 18 and allow React to pause, halt, and remount a component whenever React sees it appropriate.
As a result, this may break various aspects of your code.
You can [read more about how an empty dependency array can break in your app from our article about React 18s changes to mounting.](https://coderpad.io/blog/why-react-18-broke-your-app/)
## Conclusion
Reacts useEffect is an instrumental part of modern React applications. Now that you know more about its internals and the rules around it, you can build stronger and more dynamic programs!
If you want to continue learning skills that will help make your React apps better, I suggest taking a look at [our guide to React Unidirectionality](https://coderpad.io/blog/master-react-unidirectional-data-flow/), which outlines a good way to keep your application flow more organized.

---

```rust
fn get_version(_lang: CodeLang) -> &'static str {
}
```
While this code *works*, its not very functional. If you pass in “CodeLang::JavaScript”, the version number isnt correct. Lets take a look at how we can fix that in the next section.
# Matching
While you *could* use `if` statements to detect which enum is passed in, like so:
While you _could_ use `if` statements to detect which enum is passed in, like so:
```rust
fn get_version(lang: CodeLang) -> &'static str {
@@ -138,7 +138,6 @@ fn main() {
}
```
We're able to expand our `if let` expression from before to access the value within:
```rust
@@ -185,7 +184,7 @@ fn get_version<'a>(lang: CodeLang) -> Option<&'a str> {
}
```
By doing this, we can make our logic more representative and check if a value is `None`:
By doing this, we can make our logic more representative and check if a value is `None`:
```rust
fn main() {
@@ -272,7 +271,7 @@ pub fn map<U, F: FnOnce(T) -> U>(self, f: F) -> Option<U> {
}
```
As you can see, we matched our implementation very similarly, matching `Some` to another `Some` and `None` to another `None`.
As you can see, we matched our implementation very similarly, matching `Some` to another `Some` and `None` to another `None`.
## And Then Operator
@@ -490,7 +489,6 @@ All of these features are used regularly in Rust applications: enums, matching,
Let's close with a challenge. If you get stuck anywhere along the way or have comments/questions about this article, you can join our [public chat community, where we talk about general coding topics as well as interviewing](http://bit.ly/coderpad-slack).
Let's say that we have the “patch” version of a piece of software tracked. We want to expand the logic of our code to support checking “5.1.2” and return “2” as the “patch” version. Given the modified regex to support three optional capture groups:
```

View File

@@ -68,7 +68,7 @@ fix(pagination): fixed pagination throwing errors when an odd number of items in
feat(pagination): added new "first" and "last" events when pagination is moved to first or the last page
```
Your tooling knows only to bump the patch release because your first example is listed as a *type* of `fix`. However, in the second example, you have a _type_ of `feat` that tells your tooling to bump your release version by a minor number.
Your tooling knows only to bump the patch release because your first example is listed as a _type_ of `fix`. However, in the second example, you have a _type_ of `feat` that tells your tooling to bump your release version by a minor number.
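That type-to-bump mapping can be sketched in a few lines. This is an illustrative sketch of the convention only, not the internals of any particular release tool:

```typescript
// Illustrative sketch only — not the internals of any real release tool.
type Bump = "major" | "minor" | "patch";

function bumpFor(commits: string[]): Bump {
  // A "!" after the type/scope or a BREAKING CHANGE footer signals a major bump.
  const breaking = commits.some(
    (c) => c.includes("BREAKING CHANGE") || /^\w+(\(.+\))?!:/.test(c)
  );
  if (breaking) return "major";
  // Any `feat` commit bumps the minor version.
  if (commits.some((c) => /^feat(\(.+\))?:/.test(c))) return "minor";
  // Otherwise (e.g. only `fix` commits), bump the patch version.
  return "patch";
}

console.log(bumpFor(["fix(pagination): fixed pagination throwing errors"])); // patch
console.log(bumpFor(["feat(pagination): added new 'first' and 'last' events"])); // minor
```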
Likewise, to tell your tooling that a commit introduces a breaking change, you'll do something along the lines of this:
@@ -86,7 +86,7 @@ An immediate question that might be asked is, "why would I put the scope of chan
# Step 1: Commit Message Enforcement {#commit-lint}
Any suitable set of tooling should have guide-rails that help you follow the rules you set for yourself (and your team). Just as a linter helps keep your codebase syntactically consistent, _Conventional Commit setups often have a linter setup of their own_. This linter isn't concerned with your code syntax, but rather your commit message syntax.
Any suitable set of tooling should have guide-rails that help you follow the rules you set for yourself (and your team). Just as a linter helps keep your codebase syntactically consistent, _Conventional Commit setups often have a linter setup of their own_. This linter isn't concerned with your code syntax, but rather your commit message syntax.
Just as you have many options regarding what linting ruleset you'd like to enforce on your codebase, you have a few options provided to you for your commit messages. You can utilize [the default linting rules out-of-the-box](https://github.com/conventional-changelog/commitlint/tree/master/@commitlint/config-conventional), follow [the Angular Team's guidelines](https://github.com/conventional-changelog/commitlint/tree/master/@commitlint/config-angular), or even [utilize the format that Jira has set out](https://github.com/Gherciu/commitlint-jira).
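For reference, here's a minimal `commitlint.config.ts` sketch. It assumes `@commitlint/cli` and `@commitlint/config-conventional` are installed, and the `header-max-length` tweak is only an example of a rule override:

```typescript
// Minimal commitlint config sketch — assumes @commitlint/cli and
// @commitlint/config-conventional are installed in the project.
export default {
  // Start from the default conventional ruleset.
  extends: ["@commitlint/config-conventional"],
  rules: {
    // Example tweak: error (level 2) when the header line exceeds 72 characters.
    "header-max-length": [2, "always", 72],
  },
};
```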
@@ -175,7 +175,7 @@ Finally, `standard-version` needs to have a starting point to append the CHANGEL
npm run release -- --first-release
```
To generate your initial `CHANGELOG.md` file. This will also create a tag of the current state so that every subsequent release can change your version numbers.
To generate your initial `CHANGELOG.md` file. This will also create a tag of the current state so that every subsequent release can change your version numbers.
## Usage {#use-standard-version}
@@ -197,7 +197,6 @@ All notable changes to this project will be documented in this file. See [standa
Initial release
```
Let's say we introduce a new version that has a set of features and bug fixes:
```markdown
@@ -254,4 +253,3 @@ Keep in mind, simply because you have a new tool to manage releases doesn't mean
While the outline we've provided should suffice for most usage, each of these tools includes many options that you can use to customize the process to your liking.
Are there options you think we should cover in this article? Have questions about how to get `conventional-commit` and `standard-version` working? Let us know! We've got a comments section down below as well as [a Discord Community](https://discord.gg/FMcvc6T) that we use to chat.

View File

@@ -10,7 +10,7 @@
}
---
Last week, I started setting up continuous integration for some of my projects. The basic idea of continuous integration is that you have a server that builds your project on a regular basis, verifies that it works correctly, and deploys it to wherever your project is published. In this case, my project will be deployed to the releases of its GitHub repository and an alpha channel on the Google Play Store. In order to do this, I decided to use [Travis CI](https://travis-ci.com/), as it seems to be the most used and documented solution (though there are others as well). Throughout this blog, I will add small snippets of the files I am editing, but (save for the initial `.travis.yml`) never an entire file. If you get lost or would like to see a working example of this, you can find a sample project [here](/redirects/?t=github&d=TravisAndroidExample).
Last week, I started setting up continuous integration for some of my projects. The basic idea of continuous integration is that you have a server that builds your project on a regular basis, verifies that it works correctly, and deploys it to wherever your project is published. In this case, my project will be deployed to the releases of its GitHub repository and an alpha channel on the Google Play Store. In order to do this, I decided to use [Travis CI](https://travis-ci.com/), as it seems to be the most used and documented solution (though there are others as well). Throughout this blog, I will add small snippets of the files I am editing, but (save for the initial `.travis.yml`) never an entire file. If you get lost or would like to see a working example of this, you can find a sample project [here](/redirects/?t=github\&d=TravisAndroidExample).
A small preface, make sure that you create your account on [travis-ci.com](https://travis-ci.com/), not [travis-ci.org](https://travis-ci.org/). Travis previously had their free plans on their .org site and only took paying customers on .com, but they have since begun [migrating all of their users](https://docs.travis-ci.com/user/open-source-on-travis-ci-com/) to travis-ci.com. However, for some reason they have decided _not to say anything about it_ when you create a new account, so it would be very easy to set up all of your projects on their .org site, then (X months later) realize that you have to move to .com. This isn't a huge issue, but it could be a little annoying if you have _almost 100 repositories_ like I do which you would have to change (though I have only just started using Travis, so it doesn't actually affect me). Just something to note.
@@ -51,7 +51,7 @@ Not a bad idea. This will easily give Travis the ability to sign our APK. Isn't
No, they can't. This is because the values passed to the command are two [environment variables](https://docs.travis-ci.com/user/environment-variables/#defining-variables-in-repository-settings) which are stored only on Travis. As long as you _don't_ check the "show value in log" box when you create an environment variable, they will never be output anywhere in your build logs, and nobody will be able to see them or know what they are.
If you are worried about security (or if you aren't worried enough), I highly recommend that you read [Travis's documentation](https://docs.travis-ci.com/user/best-practices-security/#Steps-Travis-CI-takes-to-secure-your-data) on best practices regarding secure data.
If you are worried about security (or if you aren't worried enough), I highly recommend that you read [Travis's documentation](https://docs.travis-ci.com/user/best-practices-security/#Steps-Travis-CI-takes-to-secure-your-data) on best practices regarding secure data.
## Part A. Encrypting files
@@ -75,7 +75,7 @@ Side-note: if your keystore is a `.keystore` file, it shouldn't make a differenc
Pick a key and a password. They shouldn't be excessively long, but not tiny either. Do not use special characters. In this example, I will use "php" as the key and "aaaaa" as the password.
Add them to Travis CI as environment variables. You can do this by going to your project page in Travis, clicking on "More Options > Settings", then scrolling down to "Environment Variables". I will name mine "enc_keystore_key" and "enc_keystore_pass", respectively.
Add them to Travis CI as environment variables. You can do this by going to your project page in Travis, clicking on "More Options > Settings", then scrolling down to "Environment Variables". I will name mine "enc\_keystore\_key" and "enc\_keystore\_pass", respectively.
Now, time to encrypt the file. Run this command in the terminal:
@@ -95,7 +95,7 @@ That's it! Push your changes to `.travis.yml` as well as `key.jks.enc`, and Jeky
## Part B. Dummy files
This isn't entirely necessary, but you can use some fake "dummy" files to add to version control alongside the "real" encrypted ones. When Travis decrypts your encrypted files, they will be overwritten, but otherwise they serve as quite a nice substitute to prevent anyone from getting their hands on the real files (and to prevent you from uploading the real ones by accident). You can find a few (`key.jks`, `service.json`, and `secrets.tar`) in the sample project [here](/redirects/?t=github&d=TravisAndroidExample).
This isn't entirely necessary, but you can use some fake "dummy" files to add to version control alongside the "real" encrypted ones. When Travis decrypts your encrypted files, they will be overwritten, but otherwise they serve as quite a nice substitute to prevent anyone from getting their hands on the real files (and to prevent you from uploading the real ones by accident). You can find a few (`key.jks`, `service.json`, and `secrets.tar`) in the sample project [here](/redirects/?t=github\&d=TravisAndroidExample).
## Part C. Signing the APK
@@ -103,7 +103,7 @@ Now we want to actually use the key to sign our APKs. This requires a few change
Full credit, this solution was taken from [this wonderful article](https://android.jlelse.eu/using-travisci-to-securely-build-and-deploy-a-signed-version-of-your-android-app-94afdf5cf5b4) that describes almost the same thing that I have been explaining since the start of this article.
I'll create three environment variables that will be used here: the keystore password as "keystore_password", the keystore alias as "keystore_alias", and the alias's password as "keystore_alias_password". Note that special characters cannot be used in these either.
I'll create three environment variables that will be used here: the keystore password as "keystore\_password", the keystore alias as "keystore\_alias", and the alias's password as "keystore\_alias\_password". Note that special characters cannot be used in these either.
```gradle
android {
@@ -152,7 +152,7 @@ deploy:
tags: true
```
Now, you _could_ follow this exactly and place your GitHub token directly in your `.travis.yml`, but that's just asking for trouble. Luckily, you can use MORE ENVIRONMENT VARIABLES! Enter your API key with the name ex. "GITHUB_TOKEN", and write `api_key: "$GITHUB_TOKEN"` instead.
Now, you _could_ follow this exactly and place your GitHub token directly in your `.travis.yml`, but that's just asking for trouble. Luckily, you can use MORE ENVIRONMENT VARIABLES! Enter your API key with the name ex. "GITHUB\_TOKEN", and write `api_key: "$GITHUB_TOKEN"` instead.
This should now create a release with a built (and signed) APK each time there is a new tag. Fair enough; all you have to do for it to deploy is create a new tag.
@@ -192,7 +192,7 @@ before_deploy:
- export APP_VERSION=$(./gradlew :app:printVersionName)
```
This creates an environment variable ("APP_VERSION") containing our app's version name, which we can then reference from the actual deployment as follows...
This creates an environment variable ("APP\_VERSION") containing our app's version name, which we can then reference from the actual deployment as follows...
```yml
deploy:
@@ -212,7 +212,7 @@ Yay! Now we have fully automated releases on each push to master. Because of the
# Step 4. Deploying to the Play Store
Travis doesn't have a deployment for the Play Store, so we will have to use a third party tool. I found [Triple-T/gradle-play-publisher](https://github.com/Triple-T/gradle-play-publisher/), which should work, except there isn't an option to deploy an existing APK without building the project. Not only would a deployment that requires building a project _twice_ be super wasteful and take... well, twice as long, [I ran into problems signing the APK](https://jfenn.me/redirects/?t=twitter&d=status/1061620100409761792) when I tried it, so... let's not. Instead, we'll modify the `script` to run the `./gradlew publish` command when a build is triggered from the master branch.
Travis doesn't have a deployment for the Play Store, so we will have to use a third party tool. I found [Triple-T/gradle-play-publisher](https://github.com/Triple-T/gradle-play-publisher/), which should work, except there isn't an option to deploy an existing APK without building the project. Not only would a deployment that requires building a project _twice_ be super wasteful and take... well, twice as long, [I ran into problems signing the APK](https://jfenn.me/redirects/?t=twitter\&d=status/1061620100409761792) when I tried it, so... let's not. Instead, we'll modify the `script` to run the `./gradlew publish` command when a build is triggered from the master branch.
## Part A. Setup
@@ -222,7 +222,7 @@ You can either encrypt it as a separate file, or you can put them both in a tar
## Part B. Publishing
Now we can modify the `script` section of our `.travis.yml` to run the `./gradlew publish` command when a build is triggered from the master branch. This can be done using the "TRAVIS_BRANCH" environment variable which Travis handily creates for us. In other words...
Now we can modify the `script` section of our `.travis.yml` to run the `./gradlew publish` command when a build is triggered from the master branch. This can be done using the "TRAVIS\_BRANCH" environment variable which Travis handily creates for us. In other words...
```yml
script:
@@ -249,4 +249,4 @@ deploy:
Hopefully this blog has gone over the basics of using Travis to deploy to GitHub and the Play Store. In later blogs, I hope to also cover how to implement UI and Unit tests, though I have yet to actually use them myself so I cannot yet write an article about them.
If you would like to see a working example of all of this, you can find it in a sample project [here](https://jfenn.me/redirects/?t=github&d=TravisAndroidExample).
If you would like to see a working example of all of this, you can find it in a sample project [here](https://jfenn.me/redirects/?t=github\&d=TravisAndroidExample).

View File

@@ -30,6 +30,7 @@ function returnProp(returnProp: string): string {
returnProp('Test'); // ✅ Esto esta bien
returnProp(4); // ❌ Esto falla porque `4` no es un string
```
En este caso, queremos asegurarnos de que todos los tipos de entrada posibles estén disponibles para el tipo prop. Echemos un vistazo a algunas soluciones potenciales, con sus diversos pros y contras, y veamos si podemos encontrar una solución que se ajuste a los requisitos para proporcionar tipado a una función como ésta.
## Solución potencial 1: Unions {#generic-usecase-setup-union-solution}
@@ -69,7 +70,6 @@ La razón por la que la operación `shouldBeNumber + 4` produce este error es po
Para evitar los problemas de devolver explícitamente una unión, usted _PODRÍA_ utilizar la sobrecarga de funciones para proporcionar los tipos de retorno adecuados:
```typescript
function returnProp(returnProp: number): number;
function returnProp(returnProp: string): string;
@@ -85,7 +85,6 @@ Dicho esto, además de tener una odiosa información duplicada del tipo , este m
Por ejemplo, si quisiéramos pasar un objeto de algún tipo (como `{}`, un simple objeto vacío), no sería válido:
```typescript
returnProp({}) // El argumento de tipo '{}' no es asignable a un parámetro de tipo 'string'.
```
@@ -183,6 +182,7 @@ async function logTheValue(item) {
}
}
```
Si quisiéramos tipar la función `logTheValue`, querríamos asegurarnos de utilizar un tipo genérico para el parámetro de entrada `item`. Haciendo esto, podríamos usar ese mismo genérico para el prop de retorno de `loggedValue` para asegurar que ambos tienen la misma tipificación. Para ello, podríamos hacerlo inline:
```typescript
@@ -192,7 +192,7 @@ async function logTheValue<ItemT>(item: ItemT): Promise<{loggedValue: string, or
}
```
Con estas características, somos capaces de utilizar gran parte de la funcionalidad de los genéricos.
Con estas características, somos capaces de utilizar gran parte de la funcionalidad de los genéricos.
Sin embargo, sé que no he respondido para qué sirve realmente el `<>`. Bueno, al igual que las variables de tipo, también existe la posibilidad de pasar tipos como "argumentos de tipo" cuando los genéricos se aplican a una función.
@@ -204,7 +204,7 @@ logTheValue<number>(3);
# Non-Function Generics {#non-function-generics}
Como has visto antes con la interfaz `LogTheValueReturnType` - las funciones no son las únicas con genéricos. Además de usarlos dentro de las funciones e interfaces, también puedes usarlos en las clases.
Como has visto antes con la interfaz `LogTheValueReturnType` - las funciones no son las únicas con genéricos. Además de usarlos dentro de las funciones e interfaces, también puedes usarlos en las clases.
Las clases con genéricos pueden ser especialmente útiles para estructuras de datos como ésta:
@@ -321,8 +321,8 @@ const checkTimeStamp = <T extends {time: Date}>(obj: T): TimestampReturn<T> => {
En este caso, podemos confiar en el casting implícito de tipos para asegurarnos de que podemos pasar `{time: new Date()}` pero no `{}` como valores para `obj`.
# Conclusión
# Conclusión
¡Y eso es todo lo que tengo para los genéricos! Sus usos son muy variados, ¡y ahora puedes aplicar tus conocimientos en el código! Esperamos tener más posts sobre TypeScript pronto - tanto más introductorios como avanzados.
¡Y eso es todo lo que tengo para los genéricos! Sus usos son muy variados, ¡y ahora puedes aplicar tus conocimientos en el código! Esperamos tener más posts sobre TypeScript pronto - tanto más introductorios como avanzados.
¿Preguntas? ¿Opinión? Háblanos en los comentarios de abajo; ¡nos encantaría escucharte!
¿Preguntas? ¿Opinión? Háblanos en los comentarios de abajo; ¡nos encantaría escucharte!

View File

@@ -207,7 +207,7 @@ async function logTheValue<ItemT>(item: ItemT): Promise<LogTheValueReturnType<It
}
```
With these few features, we're able to utilize much of the functionality of generics.
With these few features, we're able to utilize much of the functionality of generics.
However, I know I haven't answered what the `<>` really is for. Well, much like type variables, there's also the ability to pass types as "type arguments" when generics are applied to a function.
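As a quick sketch — a simplified, synchronous stand-in for the article's async `logTheValue` — the same generic function can take its type either by inference or as an explicit "type argument":

```typescript
// Simplified, synchronous take on logTheValue; the article's real version is
// async and returns a Promise. This version only demonstrates type arguments.
function logTheValue<ItemT>(item: ItemT): { loggedValue: string; original: ItemT } {
  const loggedValue = String(item);
  console.log(loggedValue);
  return { loggedValue, original: item };
}

// ItemT is inferred as number from the argument:
const inferred = logTheValue(3);

// ItemT is passed explicitly as a "type argument" inside the angle brackets:
const explicit = logTheValue<number>(3);

console.log(explicit.original === inferred.original); // true
```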
@@ -219,7 +219,7 @@ logTheValue<number>(3);
# Non-Function Generics {#non-function-generics}
As you saw before with the `LogTheValueReturnType` interface — functions aren't the only ones with generics. In addition to using them within functions and interfaces, you can also use them in classes.
As you saw before with the `LogTheValueReturnType` interface — functions aren't the only ones with generics. In addition to using them within functions and interfaces, you can also use them in classes.
Classes with generics can be particularly helpful for data structures like this:
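As one illustrative possibility (a sketch, not the article's original example), a minimal generic stack shows how a single type parameter flows through a whole class:

```typescript
// A minimal generic data structure: T flows through push/pop, so a
// Stack<number> only ever holds numbers.
class Stack<T> {
  private items: T[] = [];

  push(item: T): void {
    this.items.push(item);
  }

  pop(): T | undefined {
    return this.items.pop();
  }

  get size(): number {
    return this.items.length;
  }
}

const numbers = new Stack<number>();
numbers.push(1);
numbers.push(2);
console.log(numbers.pop()); // 2
console.log(numbers.size); // 1
```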
@@ -338,6 +338,6 @@ In this case, we can rely on implicit type casting to ensure that we're able to
# Conclusion
And that's all I have for generics! Their usages are far and wide, and now you're able to apply your knowledge in code! We're hoping to have more posts on TypeScript soon - both more introductory and advanced.
And that's all I have for generics! Their usages are far and wide, and now you're able to apply your knowledge in code! We're hoping to have more posts on TypeScript soon - both more introductory and advanced.
Questions? Feedback? Sound off in the comments below; we'd love to hear from you!

View File

@@ -37,7 +37,7 @@ Let's look through both.
## Winget {#winget}
One of the strongest advantages of `winget` is that it's built right into all builds of Windows 11 and most newer builds of Windows 10.
One of the strongest advantages of `winget` is that it's built right into all builds of Windows 11 and most newer builds of Windows 10.
What's more, you don't need to be in an elevated admin shell to install packages. Instead, each installer will individually prompt you to accept a dialog granting admin rights.
@@ -65,19 +65,20 @@ Finally, you can upgrade all of your `winget` installed packages simply by runni
## Chocolatey {#chocolatey}
[Chocolatey only takes a single PowerShell command to install](https://chocolatey.org/install), not unlike [Homebrew for macOS](https://brew.sh/). The comparisons with Homebrew don't stop there either. Much like its *nix-y counterparts, Chocolatey is an unofficial repository of software that includes verification checks for a select number of popular packages.
[Chocolatey only takes a single PowerShell command to install](https://chocolatey.org/install), not unlike [Homebrew for macOS](https://brew.sh/). The comparisons with Homebrew don't stop there either. Much like its \*nix-y counterparts, Chocolatey is an unofficial repository of software that includes verification checks for a select number of popular packages.
It's also popular amongst sysadmins due to its ease of deployment across multiple devices and stability.
You'll need to run it in an administrator window, but once you do, you'll find the utility straightforward. A simple `choco search package-name` will find packages related to the name you input, whereas `choco install package-name` will install the package.
You can also use `choco list --local-only` to see a list of all locally installed packages.
You can also use `choco list --local-only` to see a list of all locally installed packages.
Finally, `choco upgrade all` will upgrade all locally installed packages.
### Manage Packages via GUI {#chocolatey-gui}
Readers, I won't lie to you. I'm not the kind of person to use a CLI for everything. I absolutely see their worth, but remembering various commands is simply not my strong suit, even if I understand the core concepts entirely. For people like me, you might be glad to hear that _Chocolatey has a GUI for installing, uninstalling, updating, and searching packages_. It's as simple as (Chocolate) pie! More seriously, installing the GUI is as simple as:
```
choco install ChocolateyGUI
```
@@ -106,11 +107,11 @@ choco install git.install--params "/GitAndUnixToolsOnPath"
### CLI Utilities {#cli-packages}
| Name | Choco Package | Winget Package | Explanation |
| ------------------------------------------------- | ------------- | -------------- | ------------------------------------------------------------ |
| [Micro Editor](https://github.com/zyedidia/micro) | `micro` | N/A | A great terminal editor (ala Nano). It even supports using the mouse! |
| [Bat](https://github.com/sharkdp/bat) | `bat` | N/A | A great alternative to `cat` with line numbers and syntax highlighting |
| [GitHub CLI](https://cli.github.com/) | `gh` | `GitHub.cli` | GitHub's official CLI for managing issues, PRs, and more |
| Name | Choco Package | Winget Package | Explanation |
| ------------------------------------------------- | ------------- | -------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| [Micro Editor](https://github.com/zyedidia/micro) | `micro` | N/A | A great terminal editor (ala Nano). It even supports using the mouse! |
| [Bat](https://github.com/sharkdp/bat) | `bat` | N/A | A great alternative to `cat` with line numbers and syntax highlighting |
| [GitHub CLI](https://cli.github.com/) | `gh` | `GitHub.cli` | GitHub's official CLI for managing issues, PRs, and more |
| [NVM](https://github.com/coreybutler/nvm-windows) | `nvm` | N/A | "Node version manager" - Enables users to have multiple installs of different Node versions and dynamically switch between them |
| [Yarn](https://yarnpkg.com/) | `yarn` | `Yarn.Yarn` | An alternative to `npm` with better monorepo support. If installed through `choco`, it will support `nvm` switching seamlessly. |
@@ -128,12 +129,12 @@ winget install --id=GitHub.cli -e && winget install --id=Yarn.Yarn -e
### IDEs {#ides}
| Name | Choco Package | Winget Package | Explanation |
| ----------------------------------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ---------------------------------------------------------- |
| [Visual Studio Code](https://code.visualstudio.com/) | `vscode` | `Microsoft.VisualStudioCode` | Popular Microsoft IDE for many languages |
| [Sublime Text](https://www.sublimetext.com/) | `sublimetext4` | `SublimeHQ.SublimeText.4` | Popular text editor with syntax support for many languages |
| Name | Choco Package | Winget Package | Explanation |
| ----------------------------------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------------------------------ | ---------------------------------------------------------- |
| [Visual Studio Code](https://code.visualstudio.com/) | `vscode` | `Microsoft.VisualStudioCode` | Popular Microsoft IDE for many languages |
| [Sublime Text](https://www.sublimetext.com/) | `sublimetext4` | `SublimeHQ.SublimeText.4` | Popular text editor with syntax support for many languages |
| [Visual Studio](https://visualstudio.microsoft.com/) | `visualstudio2019professional` / `visualstudio2019community` | `Microsoft.VisualStudio.2019.Professional` / `Microsoft.VisualStudio.2019.Community` | Microsoft's flagship IDE |
| [Jetbrains Toolbox](https://www.jetbrains.com/toolbox-app/) | `jetbrainstoolbox` | `JetBrains.Toolbox` | The installer/updater for JetBrains' popular IDEs |
| [Jetbrains Toolbox](https://www.jetbrains.com/toolbox-app/) | `jetbrainstoolbox` | `JetBrains.Toolbox` | The installer/updater for JetBrains' popular IDEs |
You're able to install all of these packages using `choco`:
@@ -149,21 +150,21 @@ winget install --id=Microsoft.VisualStudioCode -e && winget install --id=Sublime
### Others {#utilities}
| Name | Choco Package | Winget Package | Explanation |
| ----------------------------------------------------------- | ------------------------------------------ | ------------------------------------------------- | ------------------------------------------------------------ |
| Name | Choco Package | Winget Package | Explanation |
| ----------------------------------------------------------- | ------------------------------------------ | ------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [PowerToys](https://github.com/microsoft/PowerToys) | `powertoys` | `Microsoft.PowerToys` | Built by MS itself, provides SVG/Markdown previews, provides utility for mass renaming, image resizing all from the file explorer itself. It also allows you to configure tiling and more. We'll talk about this more later |
| [Ext2Fsd](https://sourceforge.net/projects/ext2fsd/) | `ext2fsd` | N/A | A program that enables you to read/write from ex2/ex3/ex4 formatted filesystems |
| [VirtualBox](https://www.virtualbox.org/) | `virtualbox` | `Oracle.VirtualBox` | A program that allows you to create, run, and edit virtual machines |
| [VirtualBox Guest Additions](https://www.virtualbox.org/) | `virtualbox-guest-additions-guest.install` | N/A | The extension to `virtualbox` that provides better USB passthrough support |
| [FiraCode](https://github.com/tonsky/FiraCode) | `firacode` | N/A | A popular programming font that supports ligatures |
| [`scrcpy`](https://github.com/Genymobile/scrcpy) | `scrcpy` | N/A | A utility that allows you to mirror your Android phone screen via ADB |
| [Typora](https://typora.io/) | `typora` | `Typora.Typora` | A paid markdown editor with a "preview edit" mode allowing you to edit markdown files similarly to Word |
| [Postman](https://www.postman.com/) | `postman` | `Postman.Postman` | A REST API tester |
| [Firefox](https://www.mozilla.org/en-US/firefox/new/) | `Firefox` | `Mozilla.Firefox` | The popular web browser by Mozilla |
| [Licecap](https://www.cockos.com/licecap/) | `licecap` | `Cockos.LICEcap` | A quick-and-easy GIF capture software |
| [ScreenToGIF](https://www.screentogif.com/) | `screentogif` | `NickeManarin.ScreenToGif` | Another quick-and-easy GIF capture software with more software options |
| [7Zip](https://www.7-zip.org/) | `7zip` | `7zip.7zip` | Compressed file format manager. Allows you to extract files from various formats |
| [Java](https://www.oracle.com/java/technologies/downloads/) | `jdk` / `jre` | `Oracle.JDK.17` / `Oracle.JavaRuntimeEnvironment` | Java runtime and development kit |
| [Ext2Fsd](https://sourceforge.net/projects/ext2fsd/) | `ext2fsd` | N/A | A program that enables you to read/write from ex2/ex3/ex4 formatted filesystems |
| [VirtualBox](https://www.virtualbox.org/) | `virtualbox` | `Oracle.VirtualBox` | A program that allows you to create, run, and edit virtual machines |
| [VirtualBox Guest Additions](https://www.virtualbox.org/) | `virtualbox-guest-additions-guest.install` | N/A | The extension to `virtualbox` that provides better USB passthrough support |
| [FiraCode](https://github.com/tonsky/FiraCode) | `firacode` | N/A | A popular programming font that supports ligatures |
| [`scrcpy`](https://github.com/Genymobile/scrcpy) | `scrcpy` | N/A | A utility that allows you to mirror your Android phone screen via ADB |
| [Typora](https://typora.io/) | `typora` | `Typora.Typora` | A paid markdown editor with a "preview edit" mode allowing you to edit markdown files similarly to Word |
| [Postman](https://www.postman.com/) | `postman` | `Postman.Postman` | A REST API tester |
| [Firefox](https://www.mozilla.org/en-US/firefox/new/) | `Firefox` | `Mozilla.Firefox` | The popular web browser by Mozilla |
| [Licecap](https://www.cockos.com/licecap/) | `licecap` | `Cockos.LICEcap` | A quick-and-easy GIF capture software |
| [ScreenToGIF](https://www.screentogif.com/) | `screentogif` | `NickeManarin.ScreenToGif` | Another quick-and-easy GIF capture software with more software options |
| [7Zip](https://www.7-zip.org/) | `7zip` | `7zip.7zip` | Compressed file format manager. Allows you to extract files from various formats |
| [Java](https://www.oracle.com/java/technologies/downloads/) | `jdk` / `jre` | `Oracle.JDK.17` / `Oracle.JavaRuntimeEnvironment` | Java runtime and development kit |
You're able to install all of these packages using `choco`:
@@ -205,7 +206,7 @@ First, let's start with the third party offerings. We have many options, but the
![A preview of the cmder terminal open on the UU repo](./cmder.png)
As you can see, there's some custom logic for embedding Git metadata in the prompt, a custom `λ` prompt, and even some logic for more effective tab autocomplete. You're even able to install it via Chocolatey using `choco install cmder`!
The terminal itself contains all kinds of functionality:
@@ -360,7 +361,7 @@ One option to customize your windows shell styling is [OhMyPosh](https://ohmypos
For example, this is [my terminal theme](https://github.com/crutchcorn/dotfiles/blob/master/.myposh.json) that's being used in PowerShell
![My PowerShell terminal with an emoji at the start and a plethora of colors and arrows. It's running wsl exa -l](./my_terminal_example.png)
> That emoji at the start? That's randomized on every shell start with a preselected list of emoji. Pretty 🔥 if you ask me.
@@ -370,7 +371,7 @@ Once setting up OhMyPosh in CMD/PowerShell or OhMyZSH in WSL, you may notice tha
![A preview of ZSH theme "agnoster" without a proper font installed](./no_powerline.png)
To get some of these themes working properly, you may need to install a [powerline](https://github.com/ryanoasis/powerline-extra-symbols)-enabled font. You have a few options to do this.
You can do so by [cloning this repository using PowerShell](https://github.com/powerline/fonts). Then `cd fonts` and `./install.ps1`. This script will install all of the fonts one-by-one on your system, fixing the font issues in your terminal. Find which font is your favorite and remember the name of it.
@@ -392,8 +393,6 @@ Then, when you open the terminal, you should see the correct terminal display.
![A preview of ZSH theme "agnoster" with the proper font installed](./powerline.png)
## Make Configuration Changes {#terminal-system-config}
While terminals are important, another factor to be considered is the configuration of those terminal shells. It's important to keep system-level configuration settings in mind as well, such as when you need to [make or modify environmental variables](#env-variables) or [make changes to the system path](#env-path). Luckily for us, they both live in the same place. As such, let's showcase how to reach the dialog that contains both of these settings before explaining each one in depth.
@@ -417,7 +416,7 @@ When working with the CLI, it's often important to have environmental variables
- User-specific
- System-level
Each of them follows its namesake in usage. If I set a user-specific environmental variable and change users, I will not receive the same value as the user I set the variable for. Likewise, if I set it for the system, it will apply to all users. The top of the "environmental variables" section applies to the user level, whereas the bottom applies to the system.
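Regardless of which level a variable is set at, a launched process sees one merged environment. A quick way to sanity-check what a program actually receives is to read the environment from a script; here's a minimal sketch in Python (`MY_DEMO_VAR` is a hypothetical name used only for illustration):

```python
import os

# Simulate a variable being set for this process; normally it would be
# inherited from the user- or system-level settings described above
os.environ["MY_DEMO_VAR"] = "hello"

# Child processes launched from here inherit this merged environment,
# which is exactly what a program started from your shell sees
value = os.environ.get("MY_DEMO_VAR")
print(value)  # → hello
```

Note that a process only sees the environment as it existed when the process started, which is why changes in the dialog don't appear in already-open terminals.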
In order to add a new one, simply select "New" on whichever level you want to create the environmental variables on. You should see this dialog appear:
@@ -440,14 +439,12 @@ It could be because you don't have the program attached to your system path. You
In order to add the file to the path, I need to edit the `path` environmental variable.
> [Just as there are two sets of environmental variables](#env-path), there are two sets of `path` env variables. As such, you'll have to decide if you want all users to access a variable or if you want to restrict it to your current user. In this example, I'll be adding it to the system.
Find the `path` environmental variable and select "Edit".
![The path dialog value](./path_dialog.png)
Just as before, you're able to delete and edit a value by highlighting and pressing the respective buttons to the left. Otherwise, you can press "new" which will allow you to start typing. Once you're done, you can press "OK" to save your new path settings.
> In order to get SCC running, you may have to close and then re-open an already opened terminal window. Otherwise, running `refreshenv` often updates the path so that you can use the new commands.
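Under the hood, the path is just a single string of directories joined by a platform-specific separator (`;` on Windows, `:` elsewhere), and the shell walks those directories in order when resolving a command. As a sketch, you can inspect that list and perform the same lookup programmatically from Python (the missing-command name below is deliberately fake):

```python
import os
import shutil

# The path variable is one string of directories joined by the
# platform's separator (";" on Windows, ":" on Linux/macOS)
directories = os.environ["PATH"].split(os.pathsep)

# The shell searches these directories in order until it finds a
# matching executable; shutil.which performs the same lookup
print(len(directories) >= 1)  # → True
print(shutil.which("definitely-not-a-real-command-xyz"))  # → None
```

This search order is also why two installs of the same tool can shadow one another: whichever directory appears first in the list wins.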
@@ -458,7 +455,7 @@ Just as before, you're able to delete and edit a value by highlighting and press
Git, by default, uses `vim` to edit files. While I understand and respect the power of `vim`, I have never got the hang of `:!qnoWaitThatsNotRight!qq!helpMeLetMeOut`. As such, I tend to change my configuration to use `micro`, the CLI editor mentioned in [the CLI packages section](#cli-packages). In order to do so, I can just run:
```
git config --global core.editor "micro"
```
@@ -494,9 +491,9 @@ git config --global core.autocrlf true
## WSL {#wsl}
Alright, alright, I'm sure you've been expecting to see this here. I can't beat around the bush any longer. Windows Subsystem for Linux (WSL) enables users to run commands on a Linux instance without having to dual-boot or run a virtual machine themselves.
> While the initial v1 worked by mapping system calls from Windows to Linux in a somewhat complex manner, the new version (WSL2) works differently. WSL2 utilizes a Linux container in the background, enabling you to call into that container.
>
> Because of the foundational differences, compatibility with programs should be better in WSL2. If you last tried WSL when it first launched and were underwhelmed, try it again today.
@@ -564,8 +561,6 @@ sudo apt install gedit
![Gedit running alongside Notepad](./linux_gui.png)
### USB Pass-thru {#wsl-usb}
For some development usage, having USB access from Linux is immensely useful. In particular, when dealing with Linux-only software for flashing microcontrollers or other embedded devices it's an absolute necessity.
@@ -580,23 +575,22 @@ When asking many of my Linux-favoring friends why they love Linux so much, I've
By default, Windows includes a myriad of shortcuts baked right in that allow you to make powerful use of your system with nothing but your keyboard. Here are just a few that I think are useful to keep in mind:
| Key Combo | What It Does |
| --------------------------------------------------- | ------------------------------------------------------------------------------------------- |
| <kbd>Win</kbd> + <kbd>Shift</kbd> + <kbd>S</kbd> | Perform a partial screenshot. Allows you to select what you want screenshotted |
| <kbd>Win</kbd> + <kbd>.</kbd> | Bring up the emoji picker. After pressing, start typing to search. |
| <kbd>Win</kbd> + <kbd>R</kbd> | Bring up the "Run" dialog. Will allow you to type in the internal executable name to run it |
| <kbd>Win</kbd> + <kbd>V</kbd> | Open the Windows clipboard manager |
| <kbd>Win</kbd> + <kbd>X</kbd> | Bring up a list of actions, including "Start PowerShell as Admin" |
| <kbd>Win</kbd> + <kbd>L</kbd> | Lock your screen |
| <kbd>Win</kbd> + <kbd>Tab</kbd> | Bring up the overview mode of all windows |
| <kbd>Win</kbd> + <kbd>E</kbd> | Open file explorer |
| <kbd>Win</kbd> + <kbd>S</kbd> | Open search dialog |
| <kbd>Win</kbd> + <kbd>D</kbd> | Show/hide the desktop |
| <kbd>Shift</kbd> + <kbd>F10</kbd> | Bring up the context menu for the selected item |
| <kbd>Win</kbd> + <kbd>Ctrl</kbd> + <kbd>D</kbd> | Add a new virtual desktop |
| <kbd>Win</kbd> + <kbd>Ctrl</kbd> + <kbd>Arrow</kbd> | Move between virtual desktops |
| <kbd>Win</kbd> + <kbd>Ctrl</kbd> + <kbd>F4</kbd> | Close current virtual desktop |
## Window Tiling {#window-tiling}
@@ -620,17 +614,17 @@ I'm not sure about you, but when I get a new machine, I want it to feel _mine_.
## Free {#free-customization-software}
| Program Name | What It Is | Windows Compatibility |
| --------------------------------------------------------------- | ------------------------------------------------------------------------------------ | --------------------- |
| [Audio Band](https://github.com/dsafa/audio-band) | Adds an interactive music preview to the taskbar. Integrates with Spotify and others | Windows 10 |
| [QuickLook](https://github.com/QL-Win/QuickLook) | Adds a macOS-like file preview on pressing the spacebar | Windows 10, 11 |
| [EarTrumpet](https://github.com/File-New-Project/EarTrumpet) | Provides a more complex audio mixer. Supports per-app volume control | Windows 10, 11 |
| [Rainmeter](https://www.rainmeter.net/) | Enables new interactive desktop widgets | Windows 7, 8, 10, 11 |
| [TranslucentTB](https://github.com/TranslucentTB/TranslucentTB) | Allows for more flexibility of taskbar | Windows 10, 11\* |
| [RoundedTB](https://github.com/torchgm/RoundedTB) | Allows for a rounded, more macOS-dock-like taskbar | Windows 10, 11 |
| [TaskbarX](https://github.com/ChrisAnd1998/TaskbarX) | Like TranslucentTB but also supports centering icons in the TaskBar in Windows 10 | Windows 10, 11\* |
| [Files UWP](https://github.com/duke7553/files-uwp/releases) | A modern rewrite of the file explorer in UWP | Windows 10, 11 |
| [Open-Shell](https://github.com/Open-Shell/Open-Shell-Menu) | An open-source replacement for the start menu | Windows 7, 8, 10 |
> \* Functionality may be limited or require further modification
@@ -638,16 +632,16 @@ I'm not sure about you, but when I get a new machine, I want it to feel _mine_.
> Just a reminder that none of the software mentioned here is included due to a sponsorship or financial arrangement of any kind. Please understand that this is all software that I personally use and wanted to share. I've tried my best to find some form of free/open-source replacement and linked them in the "Free" section.
| Program Name | What It Is | Windows Compatibility | Price |
| ----------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- | ------------- |
| [DisplayFusion](http://www.displayfusion.com/) | A multi-monitor utility program. Enables tons of functionality to help manage multiple monitors | Windows 7, 8, 10, 11 | Starts at $29 |
| [OneCommander](http://onecommander.com/) | A replacement for the File Explorer with various improvements | Windows 10, 11 | $5 |
| [TrayStatus](https://www.traystatus.com/) | Status tray indicators for HDD, CPU, Capslock, and more | Windows 10, 11 | Starts at $10 |
| [Groupy](https://www.stardock.com/products/groupy/) | A replacement for the [now-defunct Sets](https://www.zdnet.com/article/windows-10s-sets-feature-is-gone-and-not-expected-to-return/) functionality. Group unrelated programs into tabs, even if they didn't previously support tabs | Windows 10, 11 | $10 |
| [Start10](https://www.stardock.com/products/start10/) | A replacement for the Windows 10 start menu | Windows 10 | $5 |
| [Start11](https://www.stardock.com/products/start11/) | A replacement for the Windows 11 start menu | Windows 11 | $6 |
| [StartAllBack](https://www.startallback.com/) | Windows 11 start menu replacement | Windows 11 | $5 |
| [StartIsBack](https://www.startisback.com/) | Windows 10 start menu replacement | Windows 10 | $5 |
# Functionality {#functionality}
@@ -655,7 +649,7 @@ Windows also has some differing functionality to Linux/macOS in some critical wa
## Virtual Desktops {#virtual-desktops}
Longtime users of Linux will be quick to note that they've had virtual desktops for years. While it's a newer feature in the Windows product line, it was actually introduced back in Windows 10!
If [you press <kbd>Win</kbd> + <kbd>Tab</kbd>, it will open a task view](#built-in-keyboard-shortcuts). On the top right of your screen, you should see a "New desktop" button. If you press it, it will create a new desktop.
@@ -681,8 +675,6 @@ If you're a laptop user (or have a touchpad for your desktop) that supports Wind
![A preview of the touchpad settings page](./touchpad_virtual_desktop.png)
## Symbolic Links {#symlinks}
Symbolic links are a method of having a shortcut of sorts from one file/folder to another. Think of it as Windows Shortcuts but baked directly into the filesystem level. This may come as a surprise to some developers, but Windows actually has support for symbolic links!
@@ -695,12 +687,14 @@ Once done, you're able to run `mklink`, which provides you the ability to make a
### Usage {#using-mklink}
By default, it creates a soft link from the first argument to the second.
```
mklink Symlink SourceFile
```
You're also able to add `/D` to make a soft link to a directory:
```
mklink /D SymlinkDir SourceFolder
```
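The same concept exists on other platforms and can be scripted; as a sketch, Python exposes symlink creation directly (the file names here are hypothetical, and on Windows, creating symlinks from a script is subject to the same privilege rules as `mklink`):

```python
import os
import tempfile

# Work in a throwaway directory so nothing on the real system is touched
with tempfile.TemporaryDirectory() as tmp:
    source = os.path.join(tmp, "SourceFile.txt")
    link = os.path.join(tmp, "Symlink.txt")

    with open(source, "w") as f:
        f.write("hello from the source")

    # Equivalent in spirit to `mklink Symlink SourceFile`
    os.symlink(source, link)

    # Reading through the link transparently follows it to the source
    with open(link) as f:
        linked_content = f.read()

print(linked_content)  # → hello from the source
```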
@@ -750,13 +744,13 @@ Users that have switched from macOS or Linux can tell you that most systems care
fsutil.exe file setCaseSensitiveInfo C:\path\to\folder enable
```
Once this is done, tada! Your directory is now case-sensitive. That said, be warned that this setting does not trickle down to your subfolders: only the parent will be case-sensitive.
Luckily, any folders you create using WSL will be case sensitive by default, enabling you to have files with the same name present with only casing differences between them.
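To see what case sensitivity means in practice, here's a small sketch of the behavior you'd get inside such a folder (it runs on any case-sensitive filesystem, such as one created inside WSL; the file names are hypothetical):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Create a file with an uppercase name
    with open(os.path.join(tmp, "README.txt"), "w") as f:
        f.write("case matters here")

    # On a case-sensitive filesystem, a differently-cased name refers
    # to a *different* (here, nonexistent) file
    upper_exists = os.path.exists(os.path.join(tmp, "README.txt"))
    lower_exists = os.path.exists(os.path.join(tmp, "readme.txt"))

# On a case-sensitive filesystem this prints: True False
print(upper_exists, lower_exists)
```

On a default (case-insensitive) Windows folder, both lookups would find the same file instead.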
# Conclusion
You'll notice that despite the raw power and capabilities that WSL2 brings, I didn't touch on it until later in the article. That's because, while it's an amazing toolset for those that need it, it's not the only thing you can do to make your Windows instance powerful for development. Windows (and Microsoft as a whole) has come a long way in the past 10 years, and with their continued effort on projects like WSL, VS Code, and the Windows Terminal, the future looks brighter than ever.
I want to take a moment to stop and appreciate all of the hard work that the folks at Microsoft and everyone involved in the projects mentioned have done to enable the kind of work I do daily. Thank you.
View File
@@ -65,8 +65,8 @@ There are some rules for the tree that's created from these nodes:
- There must be one "root" or "trunk" node, and there cannot be more than one root
- There must be a one-to-many relationship with parents and children. A node:
  - May have many children
  - Cannot have more than one parent
- A non-root node may have many siblings as a result of the parent having many children
![A chart showing the aforementioned rules of the node relationships](./dom_relationship_rules.svg)
@@ -115,8 +115,6 @@ This tree relationship also enables CSS selectors such as the [general sibling s
>
> As mentioned before, they start at the root node, keep notes on what they've seen, then move to children. Then, they move to siblings, etc. Specific browsers may have slight deviations on this algorithm, but for the most part, they don't allow for upwards vertical movement of nodes within the DOM.
# Using The Correct Tags {#accessibility}
HTML, as a specification, has tons of tags that are able to be used at one's disposal. These tags contain various pieces of metadata internally to provide information to the browser about how they should be rendered in the DOM. This metadata can then be handled by the browser how it sees fit; it may apply default CSS styling, it may change the default interaction the user has with it, or even what behavior that element has upon clicking on it (in the case of a button in a form).
@@ -405,15 +403,13 @@ console.log(element.dataset.userInfo); // "[object Object]"
>
> For now, it will suffice just to know that you're only able to store strings in an element attribute.
## Events {#events}
Just as your browser uses the DOM to handle on-screen content visibility, your browser also utilizes the DOM for knowing how to handle user interactions. The way your browser handles user interaction is by listening for _events_ that occur when the user takes action or when other noteworthy changes occur.
For example, say you have a form that includes a default `<button>` element. When that button is pressed, it fires a `submit` event that then _bubbles_ up the DOM tree until it finds a [`<form>` element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/form). By default, this `<form>` element sends a [`GET` HTML request](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/GET) to the server once it receives the `submit` event.
![The bubble flow of the submit event](./submit_form.svg)
_Bubbling_, as shown here, is the default behavior of any given event. Its behavior is to move an event up the DOM tree to the nodes above it, moving from child to parent until it hits the root. Parent nodes can respond to these events as expected, stop their upward motion on the tree, and more.
@@ -460,7 +456,6 @@ Let's look at an example of some code doing so:
</html>
```
In this example, we're adding click listeners to three squares, each one smaller than its parent square. This allows us to see the effect of bubbling in our console. If you click on the red square, you'd expect the event to bubble up to `<body>`, but not down to `#green`. Likewise, if you clicked on the green square, you'd expect the event to bubble up to both `#blue` and `#red` as well as `<body>`.
However, as you can see, we're running `stopPropagation` on the event in the blue square. This will make the click event stop bubbling. This means that any click events that are called on `#green` will not make it to `#red` as they will be stopped at `#blue`.
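The bubbling-with-`stopPropagation` flow described above is simple enough to model outside the browser. Here's a minimal sketch of the algorithm (`Node`, `Event`, and `dispatch` are illustrative stand-ins, not real DOM APIs):

```python
class Node:
    """A minimal stand-in for a DOM node: a name, a parent, and listeners."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.listeners = []

class Event:
    def __init__(self):
        self.stopped = False

    def stop_propagation(self):
        self.stopped = True

def dispatch(target, event):
    """Bubble an event from the target up toward the root, stopping
    early once a listener calls stop_propagation."""
    node = target
    visited = []
    while node is not None and not event.stopped:
        for listener in node.listeners:
            listener(event)
        visited.append(node.name)
        node = node.parent
    return visited

# Mirror the nested-squares example: body > #red > #blue > #green
body = Node("body")
red = Node("#red", parent=body)
blue = Node("#blue", parent=red)
green = Node("#green", parent=blue)

# The listener on #blue stops the event, so a click on #green
# never reaches #red or body
blue.listeners.append(lambda e: e.stop_propagation())

fired = dispatch(green, Event())
print(fired)  # → ['#green', '#blue']
```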
@@ -497,9 +492,7 @@ greenEl.addEventListener('click', () => {
As demonstrated by the code above, `stopPropagation` works as you might expect it to in capture mode as well!
![stopPropagation works similarly to how it does in bubble mode, just that it stops events from moving down the tree](./capture_stop_propagation.svg)
This means that when the user clicks on the red square, you'll see the following in your console:
View File
@@ -18,7 +18,7 @@ While React Native ships with Cocoapods support out-of-the-box, it's not immedia
# Install
Let's start with a caveat to adding Carthage to your projects: You cannot migrate your entire app's dependencies to use it. This is because React Native's dependencies [have not been packaged for usage in Carthage](https://github.com/facebook/react-native/issues/13835).
As such, you will always have an additional package manager to consider with your native dependencies.
@@ -34,7 +34,7 @@ Now that you have Carthage installed, you can start using it in your projects.
# Usage
Just as npm has the `package.json` file, Carthage has the `Cartfile`. You'll want to create a file named `Cartfile` next to your `.xcworkspace` file. If your project was configured with the [React Native CLI](https://github.com/react-native-community/cli), this folder would be your `:projectRoot/ios` directory.
Once this file is created, you want to store your dependencies in the `ios/Cartfile` file. For [my React Native Git Client](https://gitshark.dev), we wanted to add a dependency called [Objective-Git](https://github.com/libgit2/objective-git) to our project. As such, our `Cartfile` looks like the following:
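For reference, a `Cartfile` entry is just an origin plus an optional version requirement; an entry pulling Objective-Git from GitHub would look something like the following sketch (with a version pin added as needed for your project):

```
github "libgit2/objective-git"
```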
@@ -103,4 +103,3 @@ Please keep in mind that there may be additional steps that some dependencies wa
As with any decision made in engineering, the choice to add Carthage as a secondary native dependency manager for your React Native projects is a highly contextual one. However, I hope that the information on utilizing it properly alleviates some of the stress of integrating it.
If you run into any problems integrating Carthage, you can always ask for help in the comments down below or [join our Discord](https://discord.gg/FMcvc6T) and ask for help there as well.
View File
@@ -14,7 +14,6 @@ Learning itself is such an interesting thing to think about.
I have always been driven to learn more about the world around me. I find the act of simply understanding a topic fascinating. One of the things I've come to love learning about most is Computer Science. There are so many people with exceptional knowledge that I've been lucky to be mentored by, be adjacent to, or even be friends with. I am where I am today thanks to them.
Likewise, I love being able to pass along the things others have taught me in a way I find expressive and accessible to others. As I've grown as a developer and as a person, I've realized there seems to be a lack of resources on a number of topics I've run into. As a result, I've spent countless hours going through confusing, poorly compiled, or inaccessible resources. Often, I've found myself unable to learn from the available resources and have had to rely on "playing" with the code itself or turning to others and relying on verbal confirmation of information to best learn some topics. Being able to take that experience, improve upon it, and share it is always an exciting idea to me.
Over time, I've found myself wanting to share that information more and more: joining bootcamps to become a TA, writing some small-scale posts, giving talks. It's been great! I love meeting new people, hearing about their experiences, and often learning not just by talking with them but by having to teach (which forces me to dig deeper into the things I want to teach and share).
@@ -22,6 +21,7 @@ Over time, I've found myself wanting to share that information more and
Today, I'm starting a new project to share even more. One of the goals of this project is to grow what I hope will be a fantastic community that can benefit from the things shared here and contribute to even greater community engagement. I want to create a blog. Well, that may be what it is now, but I want it to be more in the future, and leaving it at that undersells the idea. Let's talk about the project's end goals.
# End goals
I want this site to become a comprehensive resource hub. Looking toward the distant future, nothing would please me more than to have educational content that takes you from a rudimentary understanding of computers all the way to advanced concepts within computer science.
Part of this would include having a community around the content: being able to have others involved in a common space where information is shared, created, and discussed. I want this community to be a safe place where anyone, regardless of their level of knowledge, can learn and feel safe and comfortable asking questions they might otherwise be afraid or embarrassed to ask.
@@ -29,6 +29,7 @@ Part of this would include having a community around the content: being able to
Knowledge level aside, I also recognize that there are various learning styles. While some may quickly grasp verbal teaching, others may struggle to learn without text to read. Although the site currently focuses on article-style content, I would love to be able to expand this project into other avenues of computer science educational content in the future.
# Current goals
This project is going to be a long-term effort of trying to write as often as possible to realize this goal. However, I know it's a lofty goal, and I don't want to do it alone. In creating this blog, I've made sure that other authors face as low a barrier to contributing as possible. We have author pages built, filtering and search on our pages, and an [open GitHub repository](https://github.com/crutchcorn/unicorn-utterances). We love and welcome pull requests, new content, maintenance of the site's code, bug reports, and general discussion.
As for content, there's a very in-depth article being edited right now. In the coming weeks, keep an eye out for new posts on the site. If you use RSS to keep up with your favorite content, [we have that too](https://unicorn-utterances.com/rss.xml).
@@ -36,6 +37,7 @@ As for content, there's a very in-depth article being edited right
Finally, I want it to be immediately accessible to people of all physical abilities. Great care has been taken to ensure this site follows proper accessibility requirements. If anything on the site related to accessibility doesn't work, please let us know; it will be treated with the same diligence as any other bug that prevents users from accessing the site.
# Who's helped?
Although the site is young, we've already had some incredible people help us along the way in creating what we have now (and what we'll be doing in the immediate future 🤫)
Starting with the logo, I've been lucky enough to have the amazing [Vukasin](https://twitter.com/vukash_in) (creator of CandyCons, PixBit, etc.) create a fun and beautiful logo that you've surely already seen (if not, the home page has it in decent quality.)
Last but not least, the site has had the incredible help of [Evelyn Hathaway](https://twitter.com/evelynhathaway_) to get it up and running. She has been an amazing help both in terms of giving suggestions and feedback on every end of the site, as well as getting hosting working properly, handling SSL, redirects, and more. I honestly couldn't do it without her.
## What's Coming
We have some exciting things on the way. As mentioned before, there's going to be an in-depth post very soon. We also have many posts that have been started but need to be edited and finalized before being published. But that's not all. There's an absolutely gigantic list of other articles I'd like to work on, and it seems to grow every day.
You can keep up to date by following me on [Twitter](https://twitter.com/crutchcorn) or by using our [RSS feed](https://unicorn-utterances.com/rss.xml)
---
Learning itself is such an interesting thing to think about.
I have always been driven to learn more about the world around me. I find the act of simply understanding a topic fascinating. One of the things I've come to love learning about the most is Computer Science. There are so many people with exceptional knowledge that I've been blessed to be mentored by, be adjacent to, or even be friends with. Because of them, I am where I am today.
Likewise, I love being able to relay the things that others have taught me in a way that I feel to be expressive and accessible to others. As I've grown as a developer and person, I've found that there seems to be a lack of resources in a number of topics that I've come across. As a result, I've spent countless hours poring over confusing, loosely compiled, or otherwise inaccessible resources. Oftentimes, I would find myself unable to learn with resources and had to rely on "playing" with the code itself or turning to others and relying on verbal affirmation of information in order to learn some topics better. Being able to take that experience, improve upon it, and share it is always an exciting idea for me.
Over time, I've found myself wanting to share that information more and more: joining bootcamps to become a TA, writing some small-scale blog posts, giving talks. It's been a blast! I love meeting new people, hearing their experience, and often learning not only from talking to them, but by having to teach (which requires me to gain a deeper understanding in the things I want to teach and share).
Today, I'm starting on a new project to share even more. One of the goals of said project is to grow what I hope to be a fantastic community that is able to benefit from the things shared here and contribute to even further community engagement. I want to start a blog. Well, that might be what it is now, but I want it to be more in the future and leaving it like that is underselling the idea. Let's talk about the project's ultimate goals.
---
Virtual Memory uses what are called **page tables** that point to a memory map.
## What Virtual Memory looks like in C/C++ {#virtual-memory-cpp}
In C/C++ your virtual memory is broken up into ~4 basic "blocks" where different aspects of your code are stored. The four memory areas are Code, Static/Global contexts, Stack, and Heap. The code section, as you can probably guess, is where your local code is held; it's specifically for the syntax of the area of the code being read. The Static/Global contexts are also as expected: either your global variables or your static methods that are set. The last two are the more complex areas, and the two you will want to understand best if you are working with a language that doesn't have garbage collection.
- Heap
- Stack
Just so we understand what is going on here: I created a global vector pointer that I did not define. Therefore it is just on the stack, represented as a '0'. When `example1()` is called, it allocates memory for `vec` on the heap and instantiates a vector with all zeros. You can access the vector using the memory address on the stack. When I print out just `vec`, it will print out the memory address of the location on the heap where it is stored; when I call `*vec`, it then goes to that memory location on the heap. More on pointers in a later article.
The other method, `example2()`, just creates a new local vector and sets `vec` equal to it. You'll see why this is problematic later on. When the program is run in the order `example1()` -> `example2()`, everything will work fine. And here is the output:
---
If you look at Vue's SFC (single file component) compiler source, there is a function for this. What I came up with is a little script that will take a .vue file of the SFC and spit out how Vue interprets the TypeScript.
```js
import { readFile, writeFile } from "fs";
import parseArgs from "minimist";
```
---
{
title: "Web Components 101: Framework Comparison",
description: "While web components can be used standalone, they're paired best with a framework. With that in mind, which is the best and why?",
@@ -10,56 +10,55 @@
originalLink: 'https://coderpad.io/blog/web-components-101-framework-comparison/',
series: "Web Components 101",
order: 4
}
---
Alright alright, I know a lot of the last article seemed like a big ad for Lit. That said, I promise I'm not unable to see the advantages of other frameworks. Lit is a tool in a web developer's toolbox. Like any tool, it has its pros and cons: times when it's the right tool for the job, and other times when it's less so.
That said, I'd argue that using an existing framework is more often the better tool for the job than vanilla web components.
To showcase this, let's walk through some of these frameworks and compare and contrast them to home-grown web components.
# Pros and Cons of Vanilla Web Components
While web frameworks are the hot new jazz - it's not like we couldn't make web applications before them. With the advent of W3C standardized web components (without Lit), doing so today is better than it's ever been.
Here are some pros and cons of Vanilla JavaScript web components:
<table class="wp-block-table"> <tbody> <tr> <th> Pros </th> <th> Cons </th> </tr> <tr> <td> <ul> <li><span>No framework knowledge required</span></li> <li><span>Less reliance on framework</span></li> </ul> <ul> <li><span>Maintenance</span></li> <li><span>Bugs</span></li> <li><span>Security issues</span></li> </ul> <ul> <li><span>Smaller “hello world” size</span></li> <li><span>More control over render behavior</span></li> </ul> </td> <td> <ul> <li><span>Re-rendering un-needed elements is slow</span></li> <li><span>Handling event passing is tricky</span></li> <li><span>Creating elements can be overly verbose</span></li> <li><span>Binding to props requires element query</span></li> <li><span>You'll end up building Lit, anyway</span></li> </ul> </td> </tr> </tbody> </table>
To the vanilla way of doing things' credit, there's a bit of catharsis in knowing that you're relying on a smaller pool of upstream resources. There's also a lessened likelihood of some bad push to NPM from someone on the Lit team breaking your build.
Likewise - for smaller apps - you're likely to end up with a smaller output bundle. That's a huge win!
For smaller applications where performance is critical, or simply for the instances where you need to be as close to the DOM as possible, vanilla web components can be the way to go.
That said, it's not all roses. After all, this series has already demonstrated that things like event passing and prop binding are verbose compared to Lit. Plus, things may not be as good as they seem when it comes to performance.
## Incremental Rendering
On top of the aforementioned issues with avoiding a framework like Lit, something we haven't talked about much is incremental rendering. A great example of this would come into play if we had an array of items we wanted to render, and weren't using Lit.
Every time we needed to add a single item to that list, our `innerHTML` trick would end up constructing a new element for every single item in the list. What's worse is that every subelement would render as well!
This means that if you have an element like this:
```html
<li><a href="https://example.com"><div class="flex p-12 bg-yellow"><span>Go to this location</span></div></a></li>
```
And only needed to update the text for a single item in the list, you'd end up creating 4 more elements for the item you wanted to update… on top of recreating the 5 nodes (including the [Text Node](https://developer.mozilla.org/en-US/docs/Web/API/Text)) for every other item in the list.
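To put a number on that waste, here is a small DOM-free sketch. Note that `makeItem` and the counter are hypothetical stand-ins for element construction (not real DOM APIs); the point is only to contrast a full rebuild with a targeted update:

```javascript
// Hypothetical stand-in for element creation that counts constructions
let created = 0;
function makeItem(text) {
  created++;
  return { tag: "li", text };
}

// innerHTML-style rebuild: every item is re-created on any change
function renderAll(items) {
  return items.map(makeItem);
}

const items = ["a", "b", "c"];
let list = renderAll(items); // initial render: 3 nodes created
items[1] = "b (updated)";
list = renderAll(items); // 3 more nodes created just to change one label
console.log(created); // 6

// Targeted update: only mutate the one node that changed
created = 0;
list = renderAll(items); // initial render: 3 nodes
list[1].text = "b (updated again)"; // 0 new nodes
console.log(created); // 3
```

Frameworks avoid the first pattern by diffing against the nodes they already created and reusing them, which is exactly the bookkeeping you would otherwise write by hand.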
## Building Your Own Framework
As a result of the downsides mentioned, many that choose to utilize vanilla web components often end up bootstrapping their own home-grown version of Lit.
Here's the problem with that: You'll end up writing Lit yourself, sure, but with none of the upsides of an existing framework.
This is the problem with diving headlong into vanilla web components on their own. Even in our small examples in the article dedicated to vanilla web components, we emulated many of the patterns found within Lit. Take this code from the article:
```html
<script>
  class MyComponent extends HTMLElement {
    todos = [];

    connectedCallback() {
      this.render();
    }

    // This function can be accessed in element query to set internal data externally
    setTodos(todos) {
      this.todos = todos;
      this.clear();
      this.render();
    }

    clear() {
      for (const child of this.children) {
        child.remove();
      }
    }

    render() {
      this.clear();
      // Do logic
    }
  }

  customElements.define('my-component', MyComponent);
</script>
```
Here, we're writing our own `clear` logic, handling dynamic value updates, and more.
The obvious problem is that we'd then have to copy and paste most of this logic in many components in our app. But let's say that we were dedicated to this choice, and broke it out into a class that we could then extend.
Heck, let's even add in some getters and setters to make managing state easier:
```html
<script>
  // Base.js
  class OurBaseComponent extends HTMLElement {
    connectedCallback() {
      this.doRender();
    }

    createState(obj) {
      return Object.keys(obj).reduce((prev, key) => {
        // This introduces bugs
        prev["_" + key] = obj[key];
        prev[key] = {
          get: () => prev["_" + key],
          set: (val) => this.changeData(() => prev["_" + key] = val);
        }
      }, {})
    }

    changeData(callback) {
      callback();
      this.clear();
      this.doRender();
    }

    clear() {
      for (const child of this.children) {
        child.remove();
      }
    }

    doRender(callback) {
      this.clear();
      callback();
    }
  }
</script>
```
Now our usage should look fairly simple!
```html
<script>
  // MainFile.js
  class MyComponent extends OurBaseComponent {
    state = createState({todos: []});

    render() {
      this.doRender(() => {
        this.innerHTML = `<h1>You have ${this.state.todos.length} todos</h1>`
      })
    }
  }

  customElements.define('my-component', MyComponent);
</script>
```
That's only 13 lines to declare a UI component!
Only now you have a bug with namespace collision of state keys with their underscore-prefixed backing fields, your `doRender` doesn't handle async functions, and you still have many of the downsides listed below!
You could work on fixing these, but ultimately, you've created a basis of what Lit looks like today - except now you're starting at square one. No ecosystem on your side, no upstream maintainers to lean on.
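That underscore collision is easy to reproduce in isolation. The sketch below is a hypothetical simplification of the base class's `createState` (using `Object.defineProperty` so the accessors actually run); it shows a user key that already starts with an underscore clobbering the backing field of another key:

```javascript
// Simplified buggy createState: backing fields are the key prefixed with an
// underscore, so a user key that itself starts with "_" collides with them.
function createState(obj) {
  const state = {};
  for (const key of Object.keys(obj)) {
    state["_" + key] = obj[key]; // backing field
    Object.defineProperty(state, key, {
      configurable: true,
      get: () => state["_" + key],
      set: (val) => { state["_" + key] = val; },
    });
  }
  return state;
}

const state = createState({ count: 1, _count: 2 });
// Defining the user's "_count" accessor replaced the backing field that
// "count" reads from, so `count` now reports the wrong value:
console.log(state.count); // 2, not 1
```

Avoiding this class of bug (e.g. by keeping backing storage in a separate object or a `Symbol`-keyed slot) is exactly the kind of edge case an existing framework has already handled for you.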
# Pros and Cons of Lit Framework
With the downsides (and upsides) of vanilla web components in mind, let's compare the pros and cons of what building components using Lit looks like:
<table class="wp-block-table"> <tbody> <tr> <th> Pros </th> <th> Cons </th> </tr> <tr> <td> <ul> <li><span>Faster re-renders* that are automatically handled</span></li> <li><span>More consolidated UI/logic</span></li> <li><span>More advanced tools after mastery</span></li> <li><span>Smaller footprint than other frameworks</span></li> </ul> </td> <td> <ul> <li><span>Framework knowledge required</span></li> <li><span>Future breaking changes</span></li> <li><span>Not as widely known/used as other frameworks (Vue, React, Angular)</span></li> </ul> <p><span></span></p> </td> </tr> </tbody> </table>
While there is some overlap between this list of pros and cons and the one for avoiding Lit in favor of home-growing, there are a few other items here.
Namely, this table highlights the fact that Lit isn't the only framework for building web components. There are huge alternatives like React, Vue, and Angular. These ecosystems have wider adoption and knowledge than Lit, which may make training a team to use Lit more difficult.
However, Lit has a key advantage over them, ignoring being able to output to web components for a moment - we'll come back to that.
Even compared to other frameworks, Lit is uniquely lightweight.
Compare the bundle size of Vue - a lightweight framework in its own right - to that of Lit.
![Lit weighs in at 16.3 kilobytes while Vue weighs in at 91.9 kilobytes](./bundlephobia.png)
While tree shaking will drastically reduce the bundle size of Vue for smaller applications, Lit will still likely win out for a simple component system.
# Other Frameworks
The Lit framework isn't alone in being able to output to web components, however. In recent years, other frameworks have explored and implemented various methods of writing code for a framework that outputs to web components.
For example, the following frameworks have official support for creating web components without changing implementation code:
- [Vue](https://v3.vuejs.org/guide/web-components.html#definecustomelement)
- [Angular](https://angular.io/guide/elements)
- [Preact](https://github.com/preactjs/preact-custom-element)
Vue 3, in particular, has made massive strides in improving the web component development experience for their users.
What's more is that these tools tend to have significantly larger ecosystems. Take Vue for example.
Want the ability to change pages easily? [Vue Router](https://router.vuejs.org/)
Want a global store solution? [Vuex](https://vuex.vuejs.org/)
Prefer similar class-based components? [Vue Class Component Library](https://class-component.vuejs.org/)
Prebuilt UI components? [Ant Design](https://www.antdv.com/docs/vue/introduce/)
While some ecosystem tools might exist for Lit, they certainly don't have the same breadth.
That's not to say it's all good in the general web component ecosystem. Some frameworks, like React, [have issues with Web Component interop](https://custom-elements-everywhere.com/), which may impact your ability to merge those tools together.
# Why Web Components?
You may be asking - if you're going to use a framework like Vue or React anyway, why even bother with web components? Couldn't you instead write an app in one of those frameworks, without utilizing web components?
You absolutely can, and to be honest - this is how most apps that use these frameworks are built.
But web components play a special role in companies that have multiple different projects: Consolidation.
Let's say that you work for BigCorp - the biggest corporation in Corpville.
BigCorp has dozens and dozens of full-scale applications, and not all of them use the same frontend framework. This might sound irresponsible of BigCorp's system architects, but in reality, sometimes a framework is better geared towards specific applications. Additionally, maybe some of the apps were part of an acquisition or merger that brought them into the company.
After all, the user doesn't care about (or often, know) what framework a tool is built with. You know what a user does care about? The fact that the apps in a collection all have vastly different UIs and buttons.
![Two different apps, each with different text cutoff points in their button's text](./two_apps.png)
While this is clearly a bug, if both codebases implement the buttons on their own, you'll inevitably end up with these types of problems; this on top of the work-hours your teams have to spend redoing one another's work for their respective frameworks.
And that's all ignoring how difficult it can be to get designers to maintain consistency between different projects' design components - like buttons.
Web Components solve this problem.
If you build a shared component system that exports web components, you can then use the same codebase across multiple frameworks.
Once the code is written and exported into web components, it's trivial to utilize these new web components in your application. Like, it can be [single-line-of-code trivial](https://v3.vuejs.org/guide/web-components.html#tips-for-a-vue-custom-elements-library).
From this point, you're able to make sure the logic and styling of these components stay consistent between applications - even across different frameworks.
# Conclusion
While web components have had a long time in the oven, they came out swinging! And while Lit isn't the only one at the table, it's certainly found a strong foothold in capabilities.
Lit's light weight, paired with web components' ability to integrate with multiple frameworks, is an incredible one-two punch that makes it a strong candidate for any shared component system.
What's more, the ability to transfer knowledge from other frameworks makes it an easy tool to place in your toolbox for use either now or in the future.
Regardless of whether you're using Vue, React, Angular, Lit, vanilla web components, or anything else, we wish you happy engineering!
](https://vuex.vuejs.org/)Prefer similar class based components? [Vue Class Component Library](https://class-component.vuejs.org/)
Prebuilt UI components? [Ant Design](https://www.antdv.com/docs/vue/introduce/)
While some ecosystem tools might exist in Lit, they certainly dont have the same breadth.
Thats not to say its all good in the general web component ecosystem. Some frameworks, like React, [have issues with Web Component interop](https://custom-elements-everywhere.com/), that may impact your ability to merge those tools together.
# Why Web Components?
You may be asking - if youre going to use a framework like Vue or React anyway, why even bother with web components? Couldnt you instead write an app in one of those frameworks, without utilizing web components?
You absolutely can, and to be honest - this is how most apps that use these frameworks are built.
But web components play a special role in companies that have multiple different projects: Consolidation.
Lets say that you work for BigCorp - the biggest corporation in Corpville.
BigCorp has dozens and dozens of full-scale applications, and not all of them are using the same frontend framework. This might sound irresponsible of BigCorps system architects, but in reality, sometimes a framework is better geared towards specific applications. Additionally, maybe some of the apps were part of an acquisition or merger that brought them into the company.
After all, the user doesnt care (or often, know) about what framework a tool is built with. You know what a user does care about? The fact that each app in a collection all have vastly different UIs and buttons.
![Two different apps, each with different text cutoff points in their button's text](./two_apps.png)
While this is clearly a bug, if both codebases implement the buttons on their own, youll inevitably end up with these types of problems; this being on top of the work-hours your teams have to spend redoing one-anothers work for their respective frameworks.
And thats all ignoring how difficult it can be to get designers to have consistency between different projects design components - like buttons.
Web Components solve this problem.
If you build a shared component system that exports web components, you can then use the same codebase across multiple frameworks.
Once the code is written and exported into web components, its trivial to utilize these new web components in your application. Like, it can be a [single line of code trivial.](https://v3.vuejs.org/guide/web-components.html#tips-for-a-vue-custom-elements-library)
From this point, youre able to make sure the logic and styling of these components are made consistent between applications - even if different frameworks.
# Conclusion
While web components have had a long time in the oven, they came out swinging! And while Lit isn’t the only framework at the table, it’s certainly found a strong foothold.
Lit’s light weight, paired with web components’ ability to integrate with multiple frameworks, is an incredible one-two punch that makes it a strong candidate for any shared component system.
Whats more, the ability to transfer knowledge from other frameworks makes it an easy tool to place in your toolbox for usage either now or in the future.
Regardless: whether you’re using Vue, React, Angular, Lit, vanilla web components, or anything else, we wish you happy engineering!
---
{
title: "Web Components 101: History",
description: "Web components have had a long history to get where they are today. Let's look back to see where they came from & their immense growth!",
originalLink: 'https://coderpad.io/blog/web-components-101-history/',
series: "Web Components 101",
order: 1
}
---
Web components enjoy large-scale usage today. From YouTube to GitHub and many other major organizations, its safe to say theyve made their way into commonplace frontend development practices.
That wasnt always the case. After all, web components had to start somewhere. And web development can be particularly picky with what succeeds and what doesnt.
So then, how did web components succeed? What was their path to broad adoption? And what are the origins behind the APIs used for modern web components?
Lets walk through a short history of web components and the related ecosystem to answer these questions.
# 2010: The Early Days of MVC in JS
While the concept of [“Model View Controller”, also commonly called MVC](https://en.wikipedia.org/wiki/Modelviewcontroller), has been around for some time, in JavaScript itself it failed to take hold early on.
However, in 2010, there was an explosion around MVC and its related cousin: Model-View-ViewModel (MVVM). This explosion came courtesy of a slew of new frameworks that launched only a few months apart from one another.
[Knockout was one of the first to introduce strict MVC patterns inside of JavaScript in July 2010](https://github.com/knockout/knockout/releases/tag/v1.0.0). Knockout supported observable-based UI binding. Here, you could declare a Model, and bind data from said model directly to your HTML.
```html
<!-- Demo of KnockoutJS -->
<table class="mails" data-bind="with: chosenFolderData">
<thead><tr><th>Subject</th></tr></thead>
  <!-- ... -->
</table>
<script>
function WebmailViewModel() {
  // ...
};
ko.applyBindings(new WebmailViewModel());
</script>
```
![A list of emails based on their subjects](./knockout_demo.png)
While this works great for UI binding, it lacks the componentization aspect weve come to expect from modern frameworks.
---
This was improved in the ecosystem when [Backbone saw its first release in October 2010](https://github.com/jashkenas/backbone/releases/tag/0.1.0). It introduced a `[View](https://backbonejs.org/#View-extend)`, similar to what we might expect a component to be like today.
```javascript
var DocumentRow = Backbone.View.extend({
tagName: "li",
className: "document-row",
  // ...
render: function() {
...
}
});
```
Here, we can see that we can now bind events, classes, and more to a single tag. This aligns better with the types of components wed see in, say, React or Lit.
---
But thats not all we saw in October that year. We also saw the [initial release of Angular.js](https://github.com/angular/angular.js/releases/tag/v0.9.0) only 10 days after Backbones release.
Here, we can see that it introduced a concept of controllers into the document, similar to the `Model`s of Knockout. It allowed two-way bindings from UI to data and back.
```html
<div ng-controller="TodoListController as todoList">
<ul>
<li ng-repeat="todo in todoList.todos">{{todo.text}}</li>
  </ul>
  <!-- ... -->
</div>
<script>
angular.module('todoApp', [])
  .controller('TodoListController', function() {
    var todoList = this;
    // ...
    todoList.addTodo = function() {
      todoList.todos.push({ text: todoList.todoText, done: false });
todoList.todoText = "";
};
});
</script>
```
While Angular was the last of the three mentioned here, it had a huge impact. It was the first time Google released a JavaScript-based MVC library into the wild.
Not only did they build the library, [they used it to build Googles Feedback tool](https://www.youtube.com/watch?v=r1A1VR0ibIQ) - which powers almost all of Googles products today. This represented a shift from their prior Java-based “[Google Web Toolkit” (GWT)](http://www.gwtproject.org/) that was widely used before.
Later, with the [acquisition of DoubleClick](https://www.nytimes.com/2007/04/14/technology/14DoubleClick.html), the team that was working on the [migration of the DoubleClick platform for Google decided to use Angular.js as well](https://www.youtube.com/watch?v=r1A1VR0ibIQ).
# 2011: A Glimmer in W3C Standards Eye
With Angular.js continuing to grow within Google, its no surprise that they continued researching in-JavaScript HTML bindings.
On this topic, Alex Russell - then a Senior Staff Engineer at Google, working on the web platform team - [gave a talk at the Fronteers conference](https://fronteers.nl/congres/2011/sessions/web-components-and-model-driven-views-alex-russell).
In this talk, he introduced a host of libraries that allowed building custom elements with experimental new APIs.
```html
<script>
class Comment extends HTMLElement {
  // ...
}
var c = new Comment("Howdy, pardner!");
document.body.appendChild(c);
</script>
<x-comment>...</x-comment>
```
Here, he utilized the [TraceUR compiler](https://web.archive.org/web/20210311050620/https://github.com/google/traceur-compiler) (a precursor to Babel) to add classes (remember, [`class` wouldnt land in JavaScript stable until ES6 in 2015](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes)) to build a new “custom element”.
This combined with their [new MDV library](https://web.archive.org/web/20110509081454/http://code.google.com/p/mdv) in order to create a similar development environment to what we have in browser APIs today.
Its important to note that at this stage, nothing was formalized inside of a specification - it was all experimental libraries acting as playgrounds for APIs.
That would change soon after.
# 2013: Things Start Heating Up
In early 2013 the Google team created a [Working Draft of a specification for Custom Elements](https://web.archive.org/web/20130608123733/http://www.w3.org/TR/custom-elements/). Alongside similar working drafts for Shadow DOM APIs, they were colloquially called “[Custom Elements v0](https://www.html5rocks.com/en/tutorials/webcomponents/customelements/)”.
With [Google Chromes release in 2008](https://googleblog.blogspot.com/2008/09/fresh-take-on-browser.html), they had the ability to quickly implement these non-standard APIs into Chrome in order to allow application developers to utilize them before specification stabilization.
One such example of this was [Polymer, which was a component library based on v0 APIs to provide two-way UI binding using MVC.](https://web.archive.org/web/20130515211406/http://www.polymer-project.org/) Its initial alpha release was announced in early 2013, alongside the specifications.
At [Google Dev Summit 2013, they walked through its capabilities](https://www.youtube.com/watch?v=DH1vTVkqCDQ) and how it was able to run in other browsers by utilizing polyfills.
---
Facebook, not one to be outdone on the technical engineering front, [introduced React to the public in 2013](https://www.youtube.com/watch?v=GW0rj4sNH2w).
While Polymer went deeper into the MVC route, [React relied more heavily on unidirectionality](https://coderpad.io/blog/master-react-unidirectional-data-flow/) in order to avoid state mutations.
# 2016 & 2017: Formative Years
While Polymer 1.0 had been released only the year prior using the v0 custom elements spec, [2016 saw the release of the custom element v1 specification](https://web.archive.org/web/20161030051600/http://w3c.github.io/webcomponents/spec/custom/).
This new version of the specification was not backwards compatible, and as a result required libraries to shift to the new version in order to function properly. Polyfills continued to be used as a stopgap for browsers that didnt have a v0 implementation.
While [v1 was already implemented into Chrome in late 2016](https://web.archive.org/web/20161101052413/http://caniuse.com/#feat=custom-elementsv1), it wasnt until 2017 with the release of Polymer 2.0 that it would be adopted back into the library that helped draft the specification.
Because of this, while [YouTubes new Polymer rewrite](https://blog.youtube/news-and-events/a-sneak-peek-at-youtubes-new-look-and/) theoretically was a huge step towards the usage of web components, it posed a problem. Browsers like [Firefox without a v0 implementation were forced to continue to use Polyfills](https://web.archive.org/web/20180724154806/https://twitter.com/cpeterso/status/1021626510296285185), which are slower than native implementations.
# 2018 and Beyond: Maturity
2018 is where Web Components really found their foothold.
For a start, [Mozilla implemented the v1 specification APIs into their stable release of Firefox](https://www.mozilla.org/en-US/firefox/63.0/releasenotes/), complete with dedicated devtools. Finally, developers could use all of the web components APIs in their app, cross-browser, and without any concern for non-Chrome performance.
On top of that, Reacts unidirectionality seemed to have won over the Polymer team, who announced that they would [migrate away from bidirectional binding and towards a one-way bound `LitElement`](https://www.polymer-project.org/blog/2018-05-02-roadmap-update#libraries).
That `LitElement` would then turn into a dedicated framework called “[Lit](https://coderpad.io/blog/web-components-101-lit-framework/)”, developed to replace Polymer as its successor, that would hit [v1 in 2019](https://github.com/lit/lit/releases/tag/v1.0.0) and [v2 in 2021](https://github.com/lit/lit/releases/tag/lit%402.0.0).
# Timeline
Whew! Thats a lot to take in. Lets see it all from a thousand-foot view:
- 2010:
- [Knockout.js released](https://github.com/knockout/knockout/releases/tag/v1.0.0)
- [Backbone.js alpha released](https://github.com/jashkenas/backbone/releases/tag/0.1.0)
- [Angular.js made open-source](https://web.archive.org/web/20100413141437/http://getangular.com/)
- 2011:
- [MDV (Polymer predecessor) introduced at a conference](https://fronteers.nl/congres/2011/sessions/web-components-and-model-driven-views-alex-russell)
- 2013:
- [Working draft spec for Web Components (v0) released](https://web.archive.org/web/20130608123733/http://www.w3.org/TR/custom-elements/)
- [Polymer (Googles web component framework) announced](https://www.youtube.com/watch?v=DH1vTVkqCDQ)
- [React open-sourced](https://www.youtube.com/watch?v=GW0rj4sNH2w)
- 2015:
- [Polymer 1.0 released](https://web.archive.org/web/20150814004009/https://www.polymer-project.org/1.0/)
- 2016:
- [Custom elements v1 spec released](https://web.archive.org/web/20161030051600/http://w3c.github.io/webcomponents/spec/custom/)
- [YouTube rewritten in Polymer](https://blog.youtube/news-and-events/a-sneak-peek-at-youtubes-new-look-and/)
- 2017:
- [Polymer 2.0 released](https://github.com/Polymer/polymer/releases/tag/v2.0.0)
- 2018:
- [Polymer announces start of migration to “LitElement”](https://www.polymer-project.org/blog/2018-05-02-roadmap-update#libraries)
- [Firefox enables web components (Polyfills no longer needed)](https://www.mozilla.org/en-US/firefox/63.0/releasenotes/)
- 2019:
- [Lit framework 1.0 released](https://github.com/lit/lit/releases/tag/v1.0.0)
- 2021:
- [Lit 2.0 released](https://github.com/lit/lit/releases/tag/lit%402.0.0)
# Conclusion
In the past 10 years weve seen massive changes to the web development ecosystem. Nowhere is this more apparent than in the development and continued growth of web components.
Hopefully this should put any future learnings about web components and [framework comparisons](https://coderpad.io/blog/web-components-101-framework-comparison/) into perspective.
Weve waited a long time to see many of these ideas fully standardized into the web platform, and, now that theyre here, theyre helping accelerate growth of many platforms.
Want to learn how to build them yourself?
We have articles about how to build web components [without a framework](https://coderpad.io/blog/intro-to-web-components-vanilla-js/) as well as using [Googles Lit framework](https://coderpad.io/blog/web-components-101-lit-framework/).
---
{
title: "Web Components 101: Lit Framework",
description: "Google pushed for web components, sure, but they didn't stop there. They also went on to make an amazing framework to help build them: Lit!",
originalLink: 'https://coderpad.io/blog/web-components-101-lit-framework/',
series: "Web Components 101",
order: 3
}
---
Recently we talked about [what web components are and how you can build a web app utilizing them with only vanilla JavaScript](https://coderpad.io/blog/intro-to-web-components-vanilla-js/).
While web components are absolutely usable with only vanilla JavaScript, more complex usage, especially pertaining to value binding, can easily become unwieldy.
One potential solution might be to use a component framework such as Vue or React. However, web-standard components can still be a massive boon to development.
As such, theres a framework called [“Lit”](https://lit.dev/) that is developed specifically to leverage web components. With [Lit 2.0 recently launching as a stable release](https://lit.dev/blog/2021-09-21-announcing-lit-2/), we thought wed take a look at how we can simplify web component development.
# HTML
One of the greatest strengths of custom elements is the ability to contain multiple other elements. This makes it so that you can have custom elements for every scale: from a button to an entire page.
To do this in a vanilla JavaScript custom element, you can use `innerHTML` to create new child elements.
```html
<script>
class MyComponent extends HTMLElement {
connectedCallback() {
    // ...
  }
}
customElements.define('hello-component', MyComponent);
</script>
<hello-component></hello-component>
```
This initial example looks fairly similar to what the Lit counterpart of that code looks like:
```html
<script type="module">
import { html, LitElement } from "https://cdn.skypack.dev/lit";
export class HelloElement extends LitElement {
  render() {
    return html`
      <p>Hello!</p>
    `;
  }
}
window.customElements.define('hello-component', HelloElement);
</script>
<hello-component></hello-component>
```
<iframe src="https://app.coderpad.io/sandbox?question_id=194516" loading="lazy"></iframe>
There are two primary differences from the vanilla JavaScript example. First, we no longer need to use the `connectedCallback` to call `render`. LitElements `render` function is called by Lit itself whenever needed - such as when data changes or for an initial render - avoiding the need to manually re-call the render method.
That said, Lit components fully support the same lifecycle methods as vanilla custom elements.
The second, easier-to-miss change from the vanilla JavaScript component to the Lit implementation is that when we set our HTML, we dont simply use a basic template literal (`<p>test</p>`): we tag the template literal with the `html` function (`html\`<p>test</p>\``).
This leverages [a somewhat infrequently used feature of template literals called tagged templates](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals#tagged_templates). Tagged templates allow a template literal to be passed to a function. This function can then transform the output based on the string input and expected interpolated placeholders.
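To see the mechanism on its own, here is a small standalone tag function (the `tag` name and its uppercasing behavior are purely illustrative, not part of Lit):

```javascript
// A tag function receives the literal's static string parts and the
// interpolated values as separate arguments, and may return anything.
function tag(strings, ...values) {
  // Re-join the pieces, uppercasing each interpolated value.
  return strings[0] + values
    .map((value, i) => String(value).toUpperCase() + strings[i + 1])
    .join("");
}

const name = "lit";
console.log(tag`Hello ${name}!`); // "Hello LIT!"
```

Lit’s `html` tag uses the same language feature, but instead of building a string it returns a structure describing the DOM to render.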
Because tagged templates return a value like any other function, you can assign the return value of `html` to a variable.
```javascript
render() {
const el = html`
<p>Hello!</p>
`;
return el;
}
```
If you were to `console.log` this value, youd notice that its not an [HTMLElement](https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement). Instead, its a custom value that Lit utilizes to render to proper DOM nodes.
# Event Binding
“If the syntax is so similar, why would I add a framework to build custom elements?”
Well, while the vanilla JavaScript and Lit custom element code look similar for a small demo, the story changes dramatically when you look to scale up.
For example, if you wanted to render a button and add a click event to the button with vanilla JavaScript, youd have to abandon the `innerHTML` element assignment method.
First, well create an element using `document.createElement`, then add events, and finally utilize [an element method like `append`](https://developer.mozilla.org/en-US/docs/Web/API/Element/append) to add the node to the DOM.
```html
<script>
class MyComponent extends HTMLElement {
connectedCallback() {
    // ...
  }
}
window.customElements.define('hello-component', MyComponent);
</script>
<hello-component></hello-component>
```
While this works for the initial render, it doesn’t handle any of the edge cases that, at scale, can cause long-term damage to your app’s maintainability and performance.
For example, future re-renders of the element will duplicate the button. To solve this, you must iterate through all of the elements [`children`](https://developer.mozilla.org/en-US/docs/Web/API/Element/children) and [`remove`](https://developer.mozilla.org/en-US/docs/Web/API/Element/remove) them one-by-one.
Further, once the element is removed from the DOM, the click listener is not implicitly removed in the background. Because of this, its never released from memory and is considered a memory leak. If this issue continued to occur during long-term usage of your app, it would likely bloat memory usage and eventually crash or hang.
To solve this, youd need to assign a variable for every `addEventListener` you had present. This may be simple for one or two events, but add too many and it can be difficult to keep track.
And all of this ignores the maintenance standpoint: What does that code do at a glance?
It doesn't look anything like HTML and, as a result, requires you to constantly context-shift between writing standard HTML in a string and using the DOM APIs to construct elements.
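To make the cleanup burden concrete, here is a minimal sketch of manual listener management. A bare `EventTarget` stands in for a DOM element (an assumption made so the snippet runs anywhere); the bookkeeping is the same for any node.

```javascript
// Minimal sketch of manual listener cleanup. A bare EventTarget stands in
// for a DOM element; the same pattern applies to a real button node.
const button = new EventTarget();
let clicks = 0;

// The handler must be kept in a named variable: an anonymous function
// passed straight to addEventListener can never be removed again.
const onClick = () => { clicks += 1; };

button.addEventListener("click", onClick);
button.dispatchEvent(new Event("click"));

// On teardown, remove it by the same reference.
button.removeEventListener("click", onClick);
button.dispatchEvent(new Event("click")); // no longer counted

console.log(clicks); // 1
```

Multiply this by every event on every element and it becomes clear why a named variable per `addEventListener` quickly gets hard to track.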
Luckily, Lit doesnt have these issues. Heres the same button construction and rendering to a custom element using Lit instead of vanilla JavaScript:
```html
<script type="module">
import { html, LitElement } from "https://cdn.skypack.dev/lit";

export class HelloElement extends LitElement {
  // ... (render() with an @click event binding elided) ...
}

window.customElements.define('hello-component', HelloElement);
</script>
<hello-component></hello-component>
```
<iframe src="https://app.coderpad.io/sandbox?question_id=194518" loading="lazy"></iframe>
Yup, that’s all. Lit allows you to bind events by using the `@` sign and passing the function as a placeholder to the `html` tagged template. Not only does this look much more HTML-like, it also handles event cleanup, re-rendering, and more.
# Attributes & Properties
As we learned before, there are two ways to pass values between and into components: attributes and properties.
Previously, when we were using vanilla JavaScript, we had to define these separately. Moreover, we had to declare which attributes to observe for value changes.
```javascript
class MyComponent extends HTMLElement {
  connectedCallback() {
    this.render();
  }

  // ... (observedAttributes and attributeChangedCallback elided) ...

  render() {
    const message = this.attributes.message.value || 'Hello world';
    this.innerHTML = `<h1>${message}</h1>`;
  }
}
```
In Lit, we declare attributes and properties using a static getter and treat them as normal values in any of our functions.
```javascript
import { html, LitElement } from "https://cdn.skypack.dev/lit";
export class HelloElement extends LitElement {
  // ... (a static `properties` getter declaring `message`, and render(), elided) ...
}
window.customElements.define('hello-component', HelloElement);
```
For starters, we no longer have to manually call “render” when a property’s value changes; Lit re-renders on its own when values change.
That’s not all, though: keen-eyed readers will notice that we’re declaring a type associated with the `message` property.
Unlike the [React ecosystem’s PropTypes](https://github.com/facebook/prop-types), the `type` subproperty doesn’t do runtime type validation. Instead, it acts as an automatic type converter.
This can be a great help, since it’s easy to forget while debugging that attributes can only ever be strings.
For example, we can tell Lit to convert an attribute to a Number, and it will turn a numeric-looking string into an actual JavaScript number.
```html
<script type="module">
import { html, LitElement } from "https://cdn.skypack.dev/lit";

// ... (HelloElement with a `val: { type: Number }` property elided) ...

window.customElements.define('hello-component', HelloElement);
</script>

<!-- This will show "123 is typeof number" -->
<hello-component val="123"></hello-component>
<!-- This will show "NaN is typeof number" -->
<hello-component val="Test"></hello-component>
```
<iframe src="https://app.coderpad.io/sandbox?question_id=194519" loading="lazy"></iframe>
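For reference, the conversion Lit performs for `type: Number` matches JavaScript's own `Number()` coercion, which is why a non-numeric attribute comes through as `NaN` while still being of type `number`:

```javascript
// Attributes always arrive as strings; Number() mirrors the coercion
// applied when a property is declared with `type: Number`.
const fromNumeric = Number("123");
const fromText = Number("Test");

console.log(typeof fromNumeric, fromNumeric); // number 123
console.log(typeof fromText, fromText);       // number NaN
```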
## Attribute Reactivity
One of the biggest benefits of not having to call `render` manually is that Lit can re-render contents exactly when they need to update.
For example, in the code below, the contents properly re-render to reflect new values.
```javascript
import { html, LitElement } from "lit";
export class ChangeMessageElement extends LitElement {
  // ... (properties and event handlers elided) ...

  render() {
    return html`
<hello-component message=${this.message}></hello-component>
`;
}
}
```
<iframe src="https://app.coderpad.io/sandbox?question_id=181069" loading="lazy"></iframe>
# Reactive Data Binding
This reactivity comes with its own set of limitations. While numbers and strings can be set trivially, objects (and, by extension, arrays) are a different story.
This is because, in order for Lit to know which properties to update on render, an object must have a different reference from one render to the next. [This is how React and other frameworks detect changes in state as well.](https://www.coletiv.com/blog/dangers-of-using-objects-in-useState-and-useEffect-ReactJS-hooks/)
```javascript
export class FormElement extends LitElement {
constructor() { /* ... */ }
static get properties() {
    return {
      // ... (todoList and other properties elided) ...
    };
  }

  // ... (input handlers elided) ...

  render() {
    return html`
<todo-component todos=${this.todoList}></todo-component>
`;
}
}
```
<iframe src="https://app.coderpad.io/sandbox?question_id=181090" loading="lazy"></iframe>
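The reference rule is easy to see in plain JavaScript: mutating an array keeps the same reference, so a strict-equality check reports "unchanged", while spreading into a new array produces a fresh reference that a `===`-based change check will notice:

```javascript
const todoList = [{ name: "Milk", completed: false }];
const before = todoList;

// Mutating in place: same reference, so an equality check sees no change.
todoList.push({ name: "Eggs", completed: false });
console.log(todoList === before); // true, so it looks "unchanged"

// Replacing the array: new reference, which registers as a change.
const next = [...todoList, { name: "Bread", completed: false }];
console.log(next === todoList); // false, so it registers as changed
```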
You may also notice that were binding both the users input and output to set and reflect the state. [This is exactly how other frameworks like React also expect you to manage user state](https://coderpad.io/blog/master-react-unidirectional-data-flow/).
# Prop Passing with Lit’s Dot Syntax
HTML attributes are not the only way to pass data to a web component. Properties on the element class are a way to pass more than just a string to an element.
While the `type` field can help solve this problem as well, you’re still limited by serializability, meaning that things like functions can’t be passed through attributes.
While properties are a more robust method of passing data to web components, they’re seldom used in vanilla JavaScript because they’re cumbersome to wire up.
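The serializability limit is the same one JSON runs into: a function has no string form that survives a round trip, so it can never travel through a string-valued attribute. A quick illustration (the `payload` shape here is a made-up example):

```javascript
// Attributes can only hold strings, so anything passed through one must
// survive serialization. Functions don't: JSON.stringify silently drops them.
const payload = { label: "Delete", onClick: () => console.log("clicked") };

const serialized = JSON.stringify(payload);
console.log(serialized); // {"label":"Delete"}  (the function is gone)
```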
For example, this is a simple demonstration of passing an array.
```html
<html>
<head>
<!-- Render object array as "ul", passing fn to checkbox change event -->
    <!-- ... (component definition and script elided) ... -->
  </head>
  <body>
<my-component id="mycomp"></my-component>
<button onclick="changeElement()">Change to 3</button>
</body>
</html>
```
First, you have to get a reference to the element using an API like `querySelector`. This means you need to introduce a new reference to the component and make sure the IDs match in both parts of code.
Then, just as is the case with updating attribute values, we need to manually call the “render” function in order to update the UI.
Those complaints aside, there’s still one more: it places your data and your component tags in two different areas. Because of this, it can be more difficult to debug or figure out what data is being passed to which component.
Lit takes a different approach. Within a Lit `html` tagged template, add a period before an attribute binding and suddenly it will pass as a property instead.
```html
<script type="module">
import { html, LitElement } from "https://cdn.skypack.dev/lit";

class ChangeMessageElement extends LitElement {
  // ... (render() using a `.`-prefixed property binding elided) ...
}

window.customElements.define('change-message-component', ChangeMessageElement);
</script>
<change-message-component></change-message-component>
```
<iframe src="https://app.coderpad.io/sandbox?question_id=194520" loading="lazy"></iframe>
This works because properties and attributes are both created at the same time with Lit.
However, because the period binding isn’t standard HTML, it comes with the side effect of only working inside a Lit template. This tends not to be a problem in practice, since most applications compose their components within Lit templates anyway.
# Array Rendering
In our article about vanilla JavaScript web components, we built a simple todo list. Lets take another look at that example, but this time using Lit for our component code. Well get started with a parent `FormElement`, which will manage the data and user input.
```javascript
class FormElement extends LitElement {
static get properties() {
return {
      todoList: { type: Array },
    };
  }

  // ... (constructor and input handlers elided) ...

  render() {
    return html`
<todo-component .todos=${this.todoList}></todo-component>
`;
}
}
```
Now that we have a form that contains an array, an important question arises: how do we iterate through an array in order to create individual elements for a list?
Well, while [React has `Array.map`](https://reactjs.org/docs/lists-and-keys.html) and [Vue has `v-for`](https://v3.vuejs.org/guide/list.html#mapping-an-array-to-elements-with-v-for), Lit uses a `repeat` function. Here’s an example:
```javascript
class TodoElement extends LitElement {
// ...
  render() {
    return html`
      <ul>
        <!-- ... (a repeat() call rendering one <li> per todo elided) ... -->
</ul>
`;
}
}
```
<iframe src="https://app.coderpad.io/sandbox?question_id=181092" loading="lazy"></iframe>
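The reason `repeat` asks for a key function (assumed here to be something like `(todo) => todo.id`) is the same reason React asks for a `key` prop: with stable keys, a reordered array can be matched item-by-item against the previous render, so existing DOM nodes are moved rather than destroyed and recreated. The matching idea, sketched in plain JavaScript:

```javascript
// Keyed diffing in miniature: match items across renders by a stable id
// instead of by position in the array.
const previous = [{ id: 1, name: "Milk" }, { id: 2, name: "Eggs" }];
const nextRender = [{ id: 2, name: "Eggs" }, { id: 1, name: "Milk" }];

const keyOf = (todo) => todo.id;
const previousKeys = new Set(previous.map(keyOf));

// Every reordered item still matches an existing key, so a keyed renderer
// can reuse both DOM nodes instead of rebuilding them.
const reusable = nextRender.filter((todo) => previousKeys.has(keyOf(todo)));
console.log(reusable.length); // 2
```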
# Passing Functions
Before we step away from code to talk about the pros and cons of Lit itself (shh, spoilers!), let’s take a look at a code sample that demonstrates many of the benefits over vanilla JavaScript web components we’ve talked about today.
Readers of the previous blog post will remember that when passing an array of objects to a web component, things looked pretty decent.
It wasn’t until we tried binding event listeners to an array of objects that things got complex (and messy). Between needing to manually create elements using `document`, dealing with `querySelector` to pass properties, manually calling “render”, and needing to implement a custom “clear” method, it was a messy experience.
Lets see how Lit handles the job.
```javascript
class TodoElement extends LitElement {
// ...
  render() {
    return html`
      <ul>
        <!-- ... (repeat() with a filter and event bindings elided) ... -->
</ul>
`;
}
}
```
<iframe src="https://app.coderpad.io/sandbox?question_id=181093" loading="lazy"></iframe>
You will notice that we’re using a `filter` within our `render` method. Because this logic lives in the `render` method, it runs on every UI update; keep this in mind if you have expensive operations, which you should avoid running within the render method.
Outside of this, however, that’s all there is! It reads just like HTML would (with the added benefits of cleanup and prop passing), handles dynamic data, and more!
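As an aside, if a computation inside `render` ever does become expensive, one mitigation (a hand-rolled helper sketched below, not a Lit API) is to cache the derived data and rebuild it only when the source array's reference changes, which leans on the same reference-equality rule discussed earlier:

```javascript
// Cache the derived list; recompute only when the source reference changes.
let lastTodos = null;
let cachedIncomplete = null;

function incompleteTodos(todos) {
  if (todos !== lastTodos) { // cheap reference check
    lastTodos = todos;
    cachedIncomplete = todos.filter((todo) => !todo.completed);
  }
  return cachedIncomplete;
}

const todos = [{ completed: false }, { completed: true }];
const first = incompleteTodos(todos);
const second = incompleteTodos(todos); // same reference: cache hit

console.log(first === second); // true
console.log(first.length);     // 1
```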
# Conclusion
Leveraging Lit in an application makes maintaining and improving a project easier than rolling web components yourself.
Lit demonstrates significant growth in web components from the early days of [Polymer](http://polymer-project.org/). That growth is in no small part thanks to the Lit team itself!
Before it was a fully-fledged framework, the project started as the `lit-html` package, an offshoot of Polymer. The Polymer team was instrumental in standardizing the modern variant of web components.
The ability to use Lit can strongly enhance web component development, but there are other options out there. Next time, well talk about what the competitors are doing, what the pros and cons of each are, and how you can make the best choice for your applications.
---
{
title: "What do file extensions do?",
description: "A file extension isn't the only way a file is identified, so what does it do?",
tags: ["computer science"],
attached: [],
license: "cc-by-nc-sa-4"
}
---
> A filename extension or file type is an identifier specified as a suffix to the name of a computer file. - [Wikipedia](https://en.wikipedia.org/wiki/Filename_extension)
A long and terse explanation of file extensions exists, but to boil it down into simpler terms: the file extension is what a computer checks against a registry of programs to see whether one is registered on the system that can open the file. While there are some differences in how each operating system treats them, in most cases they act as a simple check that lets the system see which program can open the file.
# Viewing the File Extension
When most people look at a file, they probably don't see a file extension after the name; often, by default, it's hidden from the user. You can easily find articles that outline how to [turn file extensions on for macOS](https://support.apple.com/guide/mac-help/show-or-hide-filename-extensions-on-mac-mchlp2304/mac) or [how to turn them on for Windows](https://www.howtogeek.com/205086/beginner-how-to-make-windows-show-file-extensions/). With that enabled, each file's extension is visible right after its name.
![A preview of what it's like to have file extensions on and off in Windows 10](./file_extensions.png)
As an example, most pictures you have will end in `.jpg`, `.png`, `.gif`, `.webp`, or `.avif`. A program will have a `.exe`, `.bash`, or `.bat`. A music file might have a `.mp3`, `.mp4`, `.flac`, or `.ogg`. A text file can be `.docx`, `.txt`, or `.odf`. Then there are spreadsheets, videos, hardware drivers, databases, and many other types of file extensions with more being made every day.
Keep in mind that these file extensions are just part of the file name; they aren't part of your file's contents. If you were to remove or change the file extension and then add it back, nothing bad would happen to your file, and it would act the same as it did before.
# Opening the File
Whenever you open a file, whether through a double-click or an "Open" menu, the computer checks whether a program is registered to open that type of file extension. If it finds one, it hands the file to that program. If it can't find one, it asks you to choose one of the programs on your computer to associate with that file type from now on. You can also change the associated program later, either by choosing "open with" and picking a different program or by editing the file association table manually. The program then starts up, opens the file, and displays it the way it's supposed to.
# What happens if you open a file with another program
> This is a serious warning about the following information: You can make your files unrecoverable if you change the file extension or open a file with another program it is not designed to be opened with. This can lead to a permanent loss of data. The following is only an example and should not be taken as an endorsement to try this on your system. DO NOT TRY THIS AT HOME WITHOUT TAKING PROPER PRECAUTIONS!
A file on a computer is stored the same way everything on a computer is: as a binary representation. Any program works with binary, so you could forcibly open a file with another program, either by manually telling the program to open the file or by changing the file's extension. See the warning above: depending on the program used, this can ruin the file, and it should only be done if you know what you're doing and must do it.

Sometimes, though, a file will open just fine. This is commonly seen with text files and image files: change the extension, open the file in an appropriate program, and it often works. When it doesn't, that is down to the program, not the file. The file opens fine because the very beginning of its binary representation is a set of [Magic Bytes](https://en.wikipedia.org/wiki/File_format#Magic_number) that identify the type of file beyond just the file extension. Programs use this identifying information to match the right instruction set for that type of file.

Not all programs operate this way, though. Some rely only on being handed known-good files and will start operating on whatever they're given right away. This is often how a program destroys a file: it was designed to work on specific types of files, was never built to safely handle other file types, and relies purely on the operating system's file associations to call it correctly. Although rare, these programs do exist. More often than not, the two systems are used together, so as long as you don't change the extension or open a file with an unexpected program, damage is a rare occurrence.
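As a concrete example of magic bytes, every PNG file begins with the same fixed eight-byte signature. A short sketch of checking it, using an in-memory buffer in place of a real file (in practice you would read the first bytes from disk):

```javascript
// The fixed eight-byte signature every PNG file starts with.
const PNG_MAGIC = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);

// An in-memory stand-in for a file's contents.
const fileBytes = Buffer.concat([PNG_MAGIC, Buffer.from("...image data...")]);

// Compare only the leading bytes: the extension never enters into it.
const looksLikePng = fileBytes.subarray(0, 8).equals(PNG_MAGIC);
console.log(looksLikePng); // true
```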
# A basic understanding
Now, armed with knowledge of what a file extension is, how it helps the computer, and a little about the backup mechanisms that support this system, it probably makes more sense why different programs exist for different types of files.
Primitive obsession is an extremely common code smell, and when identified and fixed, it greatly helps reduce the number of bugs you may find in your code. It's also a code smell that most developers can't intuitively identify.
# What are primitive types?
In order to know what primitive obsession is about, it's useful to first define primitive types. Primitive types are essentially the **basic building blocks** of a language: integers, strings, chars, floating-point numbers, etc.
# What is Primitive obsession?
Primitive obsession is when your codebase relies on primitive types more than it should, and this results in them being able to control the logic of your application to some extent.
For example, you may have the following in C#:
```cs
class User {
    public int Id { get; set; }
    public string Name { get; set; }
}
```
This may look like a perfectly good type. However, it is flawed in various ways. For example, we're not able to easily enforce any sort of constraints.
Now we know for absolute certain that the `Password` of a `User` is always going to be valid.
You could even go a step further and make it immutable, but I'll leave that for another time!
# Conclusion
Primitive Obsession is one of the least identified code smells, and for some reason isn't as popular as others.
A good way I've found to identify primitive obsession is to see if you often find yourself checking whether a variable satisfies a set of rules. If that's the case, then you're better off making a custom type for it, with a constructor that is able to validate the value you're trying to assign to it.
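As a sketch of that idea (in JavaScript rather than the article's C#, and with purely illustrative validation rules), the constructor becomes the single gatekeeper for the value:

```javascript
// Wrap a primitive in a small type whose constructor validates it, so an
// invalid "password" can never exist in the first place.
class Password {
  constructor(value) {
    if (typeof value !== "string" || value.length < 8) {
      throw new Error("Password must be a string of at least 8 characters");
    }
    this.value = value;
  }
}
```

Any code that receives a `Password` can now trust it without re-checking the rules.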


We'll walk through all of these questions and provide answers for each.
While many sites today are built using a component-based framework like Angular, React, or Vue, there's nothing wrong with good ole' HTML. For sites like this, you typically provide an HTML file for each of the routes of your site. When the user requests one of the routes, your server will return the HTML for it. From there, [your browser parses that code and provides the content directly to the user](/posts/understanding-the-dom/). All in all, the process looks something like this:
1. You build HTML, CSS, JS
2. You put it on a server
3. The client downloads the HTML, CSS, JS from server
4. The client immediately sees content on screen
![A diagram explaining how the steps above would flow](./normal.svg)
This is a reasonably straightforward flow once you get the hang of it.
While you may not be familiar with this term, you're more than likely familiar with how you'd implement one of these; after all, this is the default when building an Angular, React, or Vue site. Let's use a React site as an example. When you build a typical React SPA without utilizing a framework like NextJS or Gatsby, you'd:
1. You build the React code
2. You put it on a server
3. The client downloads the React code from the server
4. The React code runs and generates the HTML/CSS on the client's computer
5. The user **then** sees the content on screen after React runs
![A diagram explaining how the steps above would flow](./csr.svg)
This is because React's code has to initialize to render the components on screen.
Because React has to initialize _somewhere_, what if we were to move the initial rendering off to the server? Imagine - for each request the user sends your way, you spin up an instance of React. Then, you're able to serve up the initial render (also called "fully hydrated") HTML and CSS to the user, ready to roll. That's just what server-side rendering is!
1. You build the React code
2. You put it on a server
3. The client requests data
4. The server runs the React code on the server to generate the HTML/CSS
5. The server then sends the generated HTML/CSS to the client
6. The user then sees the content on screen. React doesn't have to run on their computer
![A diagram explaining how the steps above would flow](./ssr.svg)
While the industry widely recognizes the term "Static Site Generation," I prefer the terms "compile-side rendering" or "compile-time server-side rendering," because I feel they better outline the flow of displaying content to the user. On an SSG site, you'd:
1. You build the React code
2. You generate the HTML and CSS on your development machine before deploying to a server (run build)
3. You put the generated built code on a server
4. The client downloads the HTML, CSS, JS from the built code on the server
5. The client immediately sees content on screen
![A diagram explaining how the aforementioned steps would flow](./ssg.svg)
This simply extends the existing build process that many front-end frameworks have. After [Babel's done with its transpilation](https://babeljs.io/), it merely executes code to compile your initial screen into static HTML and CSS. This isn't entirely dissimilar from how SSR hydrates your initial screen, but it's done at compile-time, not at request time.
Since you're only hosting HTML and CSS again, you're able to host your site as you would a client-side rendered app: using a CDN. This means you can geo-sparse your hosting much more trivially, but it comes with the caveat that you're no longer able to do rapid network queries to generate the UI as you could with SSR.
It may be tempting to look through these options, find one that you think is the best, and [overfit](https://en.wiktionary.org/wiki/overfit) yourself into a conclusion that one is superior to all the others. That said, each of these methods has its strengths and weaknesses.
| Tool | Pros | Cons |
| ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------- |
| Vanilla HTML                 | <ul aria-label="HTML Pros"><li>Fast</li></ul>                                                                                                               | <ul aria-label="HTML Cons"><li>Hard to scale</li></ul>                                                                      |
| Client Side Rendering (CSR) | <ul aria-label="CSR Pros"><li>Easy to scale</li><li>Ease of engineering</li></ul> | <ul aria-label="CSR Cons"><li>Slow JS initialization</li><li>SEO concerns</li></ul> |
| Server-Side Render (SSR)     | <ul aria-label="SSR Pros"><li>Query based optimization</li><li>Better SEO handling</li><li>Usable without client JS enabled</li></ul>                       | <ul aria-label="SSR Cons"><li>Heavier server load</li><li>Needs specific server</li><li>More dev effort than CSR</li></ul>  |
| Compile Time Rendering (SSG) | <ul aria-label="SSG Pros"><li>Layout based optimization</li><li>Better SEO handling</li><li>Usable without client JS enabled</li><li>CDN hostable</li></ul> | <ul aria-label="SSG Cons"><li>No access to query data</li><li>More dev effort than CSR</li></ul> |
Consider each of these utilities a tool in your toolbox. You may be working on a landing page for a client, where SSG would fit best. Working on an internal SPA that only has a limited budget allocated to it? Client-side rendering might be your best bet there! Are you working on a public-facing app that highly depends on real-time data? SSR's for you! Each of these has its utility in its problem-space. It's good to keep that in mind when selecting one for your next project.
As mentioned previously, having SSR and SSG in your toolbox is incredibly useful. While not appropriate for every application, the apps that fit tend to see great advantages from these concepts. Hopefully we've been able to provide a bit of insight that'll spark further learning and research into them.
Now that you have familiarity with what SSR and SSG are, maybe you want to take a stab at implementing them? [We took a look recently at creating a blog using an Angular SSG solution called Scully](/posts/making-an-angular-blog-with-scully/).
As always, let us know what you think down in the comments below or [in our community Discord](https://discord.gg/FMcvc6T).


license: 'cc-by-nc-nd-4'
}
---
Many programmers use a **loop** or **filter** where a HashMap data structure could be considered.
## Finding user by id using Loops
```js
let userIdToBeSearched = 103;
// Sample data; the original array contents were elided in the diff
const users = [
  { id: 101, name: "Alice" },
  { id: 102, name: "Bob" },
  { id: 103, name: "Carol" },
];
// Linear search: scan the array until a match is found
let user = null;
for (const u of users) {
  if (u.id === userIdToBeSearched) { user = u; break; }
}
if (user) {
  console.log("user found: ", user.name);
} else {
  console.log("user does not exist with id: ", userIdToBeSearched);
}
```
The above solution has a time complexity of **O(n)**, where n represents the number of users. If there are a thousand users, in the worst case we will check every user to find a match.

> Considering that the user id will be unique for each user, this is a good indication to use a HashMap instead of a loop, since all keys in a Map are unique.
## Finding user by id using Map
```js
let userIdToBeSearched = 103;
// Sample data mirroring the array example above
const users = new Map();
users.set(101, { id: 101, name: "Alice" });
users.set(103, { id: 103, name: "Carol" });
// Hash lookup: constant time on average
if (users.has(userIdToBeSearched)) {
  console.log("user found: ", users.get(userIdToBeSearched).name);
} else {
  console.log("user does not exist with id: ", userIdToBeSearched);
}
```
When using a **Map**, it takes constant time, **O(1)**, to find the user! All great, but note that constructing the HashMap from the array still requires **O(n)** time.
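That one-time O(n) build step might look like this (the user objects are sample data):

```javascript
// One-time O(n) conversion: key the Map by the unique id so every later
// lookup is O(1) on average.
const usersArray = [
  { id: 101, name: "Alice" },
  { id: 103, name: "Carol" },
];
const usersById = new Map(usersArray.map((u) => [u.id, u]));
```

Pay the O(n) cost once, then amortize it over many fast lookups.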
In conclusion, use a Map when you frequently search on a **unique** field such as **id**. Please note that a Map cannot be used when searching on a non-unique field such as **name**.


---
{
title: "Why React 18 Broke Your App",
description: "React 18's internal changes improved a lot, but may have broken your app in the process. Here's why and how you can fix it",
attached: [],
license: 'coderpad',
originalLink: 'https://coderpad.io/blog/development/why-react-18-broke-your-app/'
}
---
You've just gotten done with [your React 18 upgrade](https://coderpad.io/blog/how-to-upgrade-to-react-18/), and, after some light QA testing, don't find anything. "An easy upgrade," you think.
Unfortunately, down the road, you receive some internal bug reports from other developers that make it sound like your debounce hook isn't working quite right. You decide to make a minimal reproduction and create a demo of said hook.
You expect it to throw an “alert” dialog after a second of waiting, but weirdly, the dialog never runs at all.
<iframe src="https://app.coderpad.io/sandbox?question_id=200065" loading="lazy"></iframe>
This is strange because it was working just last week on your machine! Why did this happen? What changed?
**The reason your app broke in React 18 is that you're using `StrictMode`.**
Simply go into your `index.js` (or `index.ts`) file, and change this bit of code:
```jsx
render(
  <StrictMode>
    <App />
  </StrictMode>
);
```
To read like this:
```jsx
render(
  <App />
);
```
All of the bugs that were seemingly introduced within your app in React 18 are suddenly gone.
Only one problem: These bugs are real and existed in your codebase before React 18 - you just didn't realize it.
## Proof of broken component
Looking at our example from before, we're using [React 18's `createRoot` API](https://coderpad.io/blog/how-to-upgrade-to-react-18/) to render our `App` inside of a `StrictMode` wrapper in lines 56 - 60.
<iframe src="https://app.coderpad.io/sandbox?question_id=200065" loading="lazy"></iframe>
Currently, when you press the button, it doesn't do anything. However, if you remove the
`StrictMode` and reload the page, you can see an `Alert` after a second of being debounced.
Looking through the code, let's add some `console.log`s into our `useDebounce`, since that's where our function is supposed to be called.
```jsx
function useDebounce(cb, delay) {
  const inputsRef = React.useRef({ cb, delay });
  const isMounted = useIsMounted();
  // Keep the latest cb/delay without resetting the debounce timer
  React.useEffect(() => {
    inputsRef.current = { cb, delay };
  });
  // `debounce` is a lodash-style helper; the middle of this block was
  // elided in the diff, so the body here is a reconstruction
  return React.useMemo(
    () =>
      debounce((...args) => {
        console.log("Before function is called", { inputsRef: inputsRef.current, delay, isMounted: isMounted() });
        if (isMounted()) {
          inputsRef.current.cb(...args);
        }
      }, delay),
    [delay]
  );
}
```
> ```
> Before function is called Object { inputsRef: {…}, delay: 1000, isMounted: false }
> ```
Oh! It seems like `isMounted` is never being set to true, and therefore the `inputsRef.current` callback is not being called: that's our function we wanted to be debounced.
Let's take a look at the `useIsMounted()` codebase:
```jsx
function useIsMounted() {
  const isMountedRef = React.useRef(true);
  React.useEffect(() => {
    // Cleanup only — nothing re-sets the ref to true on a re-mount
    return () => {
      isMountedRef.current = false;
    };
  }, []);
  return () => isMountedRef.current;
}
```
This code, at first glance, makes sense. After all, while we're doing a cleanup in the return function of `useEffect` to remove it at first render, `useRef`'s initial setter runs at the start of each render, right?
Well, not quite.
## What changed in React 18?
In older versions of React, you would mount a component once and that would be it. As a result, the initial value of `useRef` and `useState` could almost be treated as if they were set once and then forgotten about.
In React 18, the React developer team decided to change this behavior and [re-mount each component more than once in strict mode](https://github.com/reactwg/react-18/discussions/19). This is in strong part due to the fact that a potential future React feature will have exactly that behavior.
See, one of the features that the React team is hoping to add in a future release utilizes a concept of "[reusable state](https://reactjs.org/docs/strict-mode.html#ensuring-reusable-state)". The general idea behind reusable state is such that if you have a tab that's un-mounted (say, when the user tabs away), then re-mounted (when the user tabs back), React will recover the data that was assigned to said tab component. This data being immediately available allows you to render the respective component immediately without hesitation.
Because of this, while data inside of, say, `useState` may be persisted, it's imperative that effects are properly cleaned up and handled. [To quote the React docs](https://reactjs.org/docs/strict-mode.html#ensuring-reusable-state):
> This feature will give React better performance out-of-the-box but requires components to be resilient to effects being mounted and destroyed multiple times.
However, this behavior shift in Strict Mode within React 18 isn't just protective future-proofing from the React team: it's also a reminder to follow React's rules properly and to clean up your actions as expected.
After all, the [React team themselves have been warning for ages now that an empty dependency array](https://reactjs.org/docs/hooks-reference.html#usememo) (`[]` as the second argument) should not guarantee that the code only runs once.
In fact, this article may be a bit of a misnomer - [the React team says they've upgraded thousands of components in Facebook's core codebase without significant issues](https://github.com/reactwg/react-18/discussions/19#discussioncomment-796197=). More than likely, a majority of applications out there will be able to upgrade to the newest version of React without any problems.
All that said, these React missteps crawl their way into our applications regardless. While the React team may not anticipate many breaking apps, these errors seem relatively common enough to warrant an explanation.
## How to fix the remounting bug
The code I linked before was written by me in a production application and it's wrong. Instead of relying on `useRef` to initialize the value once, we need to ensure the initialization runs on every instance of `useEffect`.
```jsx
function useIsMounted() {
  const isMountedRef = React.useRef(true);
  React.useEffect(() => {
    // Setup runs on every mount, mirroring the cleanup below
    isMountedRef.current = true;
    return () => {
      isMountedRef.current = false;
    };
  }, []);
  return () => isMountedRef.current;
}
```
This is true for the inverse as well! We need to make sure to run cleanup on any components that we may have forgotten about before.
Many ignore this rule for `App` and other root elements that they don't intend to re-mount, but with new strict mode behaviors, that guarantee is no longer a safe bet.
To solve this problem across your app, look for the following signs:
- Side effects with cleanup but no setup (like our example)
- A side effect without proper cleanup
- Utilizing `[]` in `useMemo` and `useEffect` to assume that said code will only run once
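The core invariant behind these signs — setup and cleanup must be symmetric, so a StrictMode-style mount → unmount → mount cycle lands in the right state — can be sketched without React at all (the names here are illustrative):

```javascript
// Framework-free sketch of symmetric setup/cleanup. Because setup runs on
// every mount (not just via a one-time ref initializer), a remount leaves
// the flag correct.
function createIsMounted() {
  let mounted = false;
  return {
    mount() { mounted = true; },     // setup: runs on every mount
    unmount() { mounted = false; },  // cleanup mirrors the setup exactly
    isMounted: () => mounted,
  };
}

const tracker = createIsMounted();
tracker.mount();
tracker.unmount();
tracker.mount(); // StrictMode-style remount
```

If the `mounted = true` line lived only in the initializer, the final remount would leave the flag stuck at `false` — exactly the bug in the broken `useIsMounted`.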
Once this code is eliminated, you should be back to a fully functioning application and can re-enable StrictMode in your application!
## Conclusion
React 18 brings many amazing features to the table, such as [new suspense features](https://reactjs.org/docs/concurrent-mode-suspense.html), [the new useId hook](https://github.com/reactwg/react-18/discussions/111), [automatic batching](https://github.com/reactwg/react-18/discussions/21), and more. While refactor work to support these features may be frustrating at times, it's important to remember that they serve a real-world benefit to the user.
For example, React 18 also introduces some functionality to debounce renders in order to create a much nicer experience when rapid user input needs to be processed.
For more on the React 18 upgrade process, take a look at [our instruction guide on how to upgrade to React 18](https://coderpad.io/blog/how-to-upgrade-to-react-18/).


Windows Subsystem for Linux (WSL) lets you run software designed for Linux. This gives Windows users access to Linux tools, and gives web developers environments that more closely resemble those of their peers or of the web servers hosting their code.
## Getting Started
First, make sure Windows is updated; WSL required additional setup steps prior to Windows version 2004. Then open PowerShell (as Admin) and run `wsl --list --online`. This will list all the OSes available for WSL.
```
Ubuntu-18.04    Ubuntu 18.04 LTS
Ubuntu-20.04    Ubuntu 20.04 LTS
```
## Installing
Pick your favorite flavor; mine is Ubuntu, or Debian if I think I might need any older tools. Then run `wsl --install -d <Distro>`.
```
Installation successful!
user@MACHINE_NAME:~$
```
## Setup
Run `sudo apt update` to refresh all your apt-get repos.


---
{
title: "Writing better tests for Angular with Angular Testing Library",
description: "A simple explanation of writing better tests for Angular applications and setting up Angular Testing Library",
tags: ["testing", "angular"],
attached: [],
license: "cc-by-nc-sa-4"
}
---
Some evangelicals say that before code ever exists, there always needs to be a test to know how the code should be written. That frankly isn't true. A test isn't _strictly_ needed to determine how to code. What **is** needed are tests that give confidence that, as code is written, a change to already-existing functionality doesn't happen and that new functionality will behave properly as time goes on. To this end, a lot of testing libraries and frameworks exist. Oftentimes, tests are written in regards to the library or framework used and not to the end product's specifications. For Angular, this is especially true when the default testing implementation is for testing Angular, and not for testing what a developer would use Angular to build. **Tests should be written in the same way a user would use them.** We don't need to test Angular; we need to test what we make with Angular.
# Writing tests for an Angular application does not mean testing Angular {#test-the-web-not-angular}
In regards to Angular and writing tests, we must first understand what the tests are for. For a great many projects, that means testing a webpage. In proper testing for a webpage, the underlying library should be able to be changed at any time for maintainability purposes, and the tests should still work. To that end, we must write tests for the web and not for Angular. The Angular CLI sets up some tests for you, but when looking closely at them, it becomes apparent that they test Angular and not the output:
```js
it('should create the app', () => {
  const fixture = TestBed.createComponent(AppComponent);
  const app = fixture.componentInstance;
  expect(app).toBeTruthy();
});
```
This test isn't a very good test. It doesn't say anything about the actual output of the application component itself. When the output is a full, rich webpage and the tests are testing Angular, then the tests won't do much when the content of the webpage is changed.
While the default testing setup does allow for writing tests that would test the outputted HTML, they are still specific to Angular:
```js
it('should render title', () => {
  const fixture = TestBed.createComponent(AppComponent);
  fixture.detectChanges();
  const compiled = fixture.nativeElement;
  expect(compiled.querySelector('.content span').textContent).toContain('The app is running!');
});
```
That test looks a little better, but it's still very tied to Angular. The test requires in-depth knowledge of how Angular actually routes and moves all the bits around, and as a result, the tests are completely tied into Angular and the current API footprint. If — over the years — Angular is retired, these tests will no longer be valid.
If the tests were just tailored to the outputted DOM or containers, it would be a much easier and more adaptable test:
```js
// `render` here is provided by Angular Testing Library, covered later in this article
test('should render counter', async () => {
await render(AppComponent);
expect(document.querySelector('.content span').innerText).toBe('The app is running!');
});
```
This test no longer even needs Angular to be the library chosen. It just requires a render method that, when given the component, will render it to the DOM present in the testing environment. This can run against any framework's test setup, and can even be tested in a real-world browser. It is a good test in that it asserts that the first `span` inside of `.content` has the expected `innerText` value. These are all JavaScript and DOM APIs and thus can be trusted in any environment that adheres to them.
Writing tests that don't rely on testing Angular, but instead rely on the DOM, allows the application to be tested the way a user would use it, rather than the way Angular internally works.
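To make that point concrete, here is a small framework-free sketch. Every name in it (`render`, `AppComponent`, the fake element object) is a hypothetical stand-in rather than code from the project above; it only exists to show why a test written against rendered output never has to know which framework produced it:

```javascript
// A toy `render` that could delegate to Angular, React, Vue, or anything else.
function render(component) {
  return component();
}

// A stand-in "component" exposing the same DOM surface the tests rely on.
const AppComponent = () => ({
  querySelector: (selector) =>
    selector === '.content span'
      ? { innerText: 'The app is running!' }
      : null,
});

// The assertion touches only standard DOM-shaped APIs, never the framework.
const page = render(AppComponent);
console.log(page.querySelector('.content span').innerText);
// → The app is running!
```

Swap the body of `render` for any real framework's mounting logic and the assertion at the bottom stays untouched.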
# Fixing that shortcoming using Testing Library {#testing-library}
Thankfully, writing tests like these has been made simple by a testing library aptly called "[Testing Library](https://testing-library.com)." Testing Library is a collection of libraries for various frameworks and applications. One of the supported frameworks is Angular, through the [Angular Testing Library](https://testing-library.com/docs/angular-testing-library/intro). It can be used to test Angular apps in a simple, DOM-focused manner, with some nice helpers to make it even easier to work with. It pairs with [Jest](https://jestjs.io/) as a replacement for the Karma and Jasmine testing setup, making tests easier to write and more focused on end results. With that tooling, a project can have tests much less focused on Angular and much more focused on what is being built.
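To get a feel for what those helpers do, here is a rough, greatly simplified toy version of a `getByText`-style query. This is an illustration only, not Testing Library's actual implementation, and the node objects are hand-built stand-ins for real DOM nodes:

```javascript
// Toy sketch of the idea behind Testing Library's getByText: walk the
// rendered tree and find a node by its visible text, as a user would.
function getByText(node, text) {
  if (node.textContent === text) return node;
  for (const child of node.children ?? []) {
    const found = getByText(child, text);
    if (found !== null) return found;
  }
  return null;
}

// A hand-built stand-in for a rendered DOM tree.
const rendered = {
  tag: 'div',
  textContent: '',
  children: [{ tag: 'span', textContent: 'The app is running!', children: [] }],
};

console.log(getByText(rendered, 'The app is running!').tag);
// → span
```

The real helpers add text normalization, helpful error messages, and accessibility-first queries like `getByRole`, but the user-centric querying idea is the same.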
## Transitioning to Jest and Angular Testing Library {#transitioning-to-jest}
### Get rid of Karma {#remove-karma}
Angular ships with Karma alongside Jasmine for running tests and collecting coverage. With Jest, an Angular project no longer needs Karma or the other packages that would be installed by the Angular CLI.
#### Uninstall Karma
```bash
npm uninstall karma karma-chrome-launcher karma-coverage-istanbul-reporter karma-jasmine karma-jasmine-html-reporter
```
#### Remove the leftover configurations {#remove-karma-config}
Deleting the following will remove the leftover configuration files from the project:
```bash
karma.conf.js
src/test.ts
```
Once those two files are deleted, any references to `src/test.ts` will need to be removed. Removing the paths from the following file that reference them cleans it up easily enough:
```json
tsconfig.spec.json
{
  ...,
  "files": [
    "src/test.ts", <- delete
    ...
  ]
}
```
The project also no longer needs the `test` key inside of `angular.json` as it stands, and thus its contents can be removed. Don't worry; we'll be making `ng test` work again later.
```json
angular.json
{
...,
"test": {} <- delete contents, but leave the key
...
}
```
Finally, the project no longer needs the Jasmine types in the spec configuration:
```json
tsconfig.spec.json
{
  ...,
  "compilerOptions": {
    "types": [
      "jasmine", <- delete
      ...
    ]
  }
}
```
Now the project is ready for installing any other test runner.
### Setting up Jest {#setup-jest}
Now that the project has no Karma, it can be set up with Jest.
#### Install Jest
```bash
npm i -D @types/jest jest jest-preset-angular ts-jest @angular-builders/jest
```
This installs Jest, the types for Jest, a TypeScript preprocessor for Jest, a preset that makes setting up Jest much easier, and an Angular CLI builder for running Jest.
#### Configure Jest
The project now needs to know how to best utilize Jest. Creating and modifying the following files will allow Jest to load its own configuration.
```js
jest.config.js
module.exports = {
preset: 'jest-preset-angular',
setupFilesAfterEnv: [
'<rootDir>/jest.setup.ts'
]
};
```
```typescript
jest.setup.ts
import 'jest-preset-angular';
```
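As a side note, this setup file is also a common place to pull in extra matchers. For example, if the optional `@testing-library/jest-dom` package were also installed (it is not part of the steps in this article), the setup file could look like this sketch:

```typescript
jest.setup.ts
import 'jest-preset-angular';
// Optional extra DOM matchers; requires `npm i -D @testing-library/jest-dom`
import '@testing-library/jest-dom';
```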
```json
tsconfig.spec.json
{
  ...,
  "compilerOptions": {
    "types": [
      "jest", <- new
      ...
    ]
  }
}
```
```json
tsconfig.json
{
...,
...
},
...
}
```
```json
package.json
{
...,
...
},
...
}
```
```json
angular.json
{
  ...,
  "test": {
    "builder": "@angular-builders/jest:run" <- new
  }
  ...
}
```
Jest is now the test runner for the project, and it can be run with NPM, Yarn, or the Angular CLI. It can now be used in combination with Testing Library.
### Install Angular Testing Library
Now the project is ready to have better tests written for it, and by using [Angular Testing Library](https://testing-library.com/docs/angular-testing-library/intro), the tests can be simplified with some great helpers.
```bash
npm install --save-dev @testing-library/angular
```
# Ready, Steady, Test! {#conclusion}
Now that the project has a better testing library with some great helpers, better tests can be written. There are plenty of [great examples](https://testing-library.com/docs/angular-testing-library/examples) for learning, and [Tim Deschryver](https://timdeschryver.dev/blog/good-testing-practices-with-angular-testing-library) has even more to help in that endeavor. The Angular Testing Library will make tests much simpler to write and maintain. With Angular, good tests, and plenty of confidence, anyone would be happy to ship a project with this setup.
@@ -1,26 +1,25 @@
<div style="display: flex; flex-wrap: nowrap;">
<div>
## Why learn React, Angular, **and** Vue?!
By learning React, Angular, and Vue all at once you gain:
- Deeper understanding of core concepts than you'd have by only learning one framework
- Insight into different programming methodologies
- Appreciation for the "why" behind framework tradeoffs
- A superpower to learn similar UI frameworks much faster
Don't want to learn all three? **That's okay.** You can easily select a single framework and use this book to learn it front-to-back.
</div>
<div class="hide-for-mobile hide-on-dark">
<img src="./hiker_with_bag.png" height="100%" alt="" data-nozoom="true" />
</div>
<div class="hide-for-mobile show-on-dark">
<img src="./hiker_with_bag_dark.png" height="100%" alt="" data-nozoom="true" />
</div>
</div>
@@ -69,6 +69,7 @@ For instances where the frameworks diverge, you'll see tabs to see the relevant
For example, here's a "Hello world" component in each framework:
<!-- tabs:start -->
# React
```jsx
@@ -108,9 +109,9 @@ In the book print, these tabs will be turned into sub-headings.
This book is primarily for three sets of people:
1. Newcomers, who are looking to learn these frameworks for the first time.
2. Engineers who've learned one framework and are looking for an easy way to learn one of the others.
3. Those looking to 1-up their knowledge of these frameworks' internals
This book will be starting with the very basics of what a component is, all the way into re-creating the core elements of
these frameworks from scratch. Don't believe me? [**Here's a sneak peek of the "React Internals" chapter I wrote via a Twitter thread where I build `useState` in Vanilla JS**](https://twitter.com/crutchcorn/status/1527059744392814592).
@@ -73,7 +73,7 @@ Como lo mencionamos antes, también tenemos un servidor de Discord donde podemos
</ul>
Si quieres aprender más sobre los patrocinios y su impacto en nuestro sitio, puedes leer [los detalles que publicamos en GitHub](https://github.com/unicorn-utterances/unicorn-utterances/issues?q=is%3Aissue+label%3Adisclosure+is%3Aclosed).
En pocas palabras: ningún patrocinador toma decisiones sobre el contenido publicado en el sitio.
# Declaración de Ética {#ethics}
@@ -73,7 +73,7 @@ Et comme nous lavons déjà cité, on a un serveur Discord ou on parle tech,
</ul>
Pour plus dinformations concernant nos sponsors et leurs impact sur notre site, vous pouvez consulter [les divulgations que nous avons publiées sur GitHub](https://github.com/unicorn-utterances/unicorn-utterances/issues?q=is%3Aissue+label%3Adisclosure+is%3Aclosed).
(Bref: Les sponsors n'ont pas un effet direct sur le contenu du site)
# Code déthique {#ethics}
@@ -73,7 +73,7 @@ As mentioned previously, we also have a Discord where we chat tech, help out wit
</ul>
If you want to learn more about our sponsorships and how they impact our site, you can read through [our disclosures that we've posted on GitHub](https://github.com/unicorn-utterances/unicorn-utterances/issues?q=is%3Aissue+label%3Adisclosure+is%3Aclosed).
TLDR: No sponsor has any say about the content hosted on the site
# Statement of Ethics {#ethics}
@@ -1,3 +1,3 @@
These assets belong entirely to the sponsors themselves. We claim no rights
over these files in any way. As a result, please consult the associated
group for inquiries regarding rights of these assets.
@@ -1,6 +1,5 @@
A rehype plugin for rendering tabbed content from HTML comments and headings.
![preview](https://user-images.githubusercontent.com/9100169/148681602-03f6f446-7dea-4efb-ad82-132f6a8debdd.gif)
This is particularly useful when paired with a `remark` parsing step.
@@ -27,7 +26,6 @@ Ciao!
<!-- tabs:end -->
```
### Markdown Example
```markdown
@@ -99,4 +97,4 @@ It would render "One" and "Three" as tab headings, but "Two" would be listed as
## Special Thanks
This syntax is inspired by: <https://jhildenbiddle.github.io/docsify-tabs/#/>