all: Add content, update for theme updates.

William Floyd 2024-12-01 23:55:57 -06:00
parent ef6f9c719f
commit 1b1fd3e467
Signed by untrusted user who does not match committer: william
GPG key ID: B3EEEDD81893CAF9
10 changed files with 361 additions and 15 deletions

.vscode/settings.json vendored Normal file

@@ -0,0 +1,5 @@
{
    "cSpell.words": [
        "Gluster"
    ]
}


@@ -1,5 +1,7 @@
-FROM floryn90/hugo:ext-alpine-onbuild AS hugo
+FROM hugomods/hugo:exts as hugo
+COPY . /src
+RUN hugo --minify
 FROM nginx:alpine-slim
-COPY --from=hugo /target /usr/share/nginx/html
+COPY --from=hugo /src/public /usr/share/nginx/html
 COPY default.conf /etc/nginx/conf.d/default.conf


@@ -82,7 +82,7 @@ url = "about/"
 [[languages.en.menu.main]]
   name = "Resume"
   weight = 2
-  url = "https://github.com/W-Floyd/misc-job/releases/download/release/William.Floyd.pdf"
+  url = "https://github.com/W-Floyd/misc-job/releases/download/release/William_Floyd.pdf"
 [[languages.en.menu.main]]
   name = "Posts"


@@ -0,0 +1,102 @@
---
title: "Ghetto NAS Part 1"
date: "2023-08-29"
author: "William Floyd"
#featured_image: "media/IMG_20220126_225541.webp"
categories: [
    "Sys Admin",
    "Hardware",
    "Software"
]
tags: [
    "NAS",
    "3D Printing",
    "Gluster",
    "Homelab"
]
series: ["Ghetto NAS"]
list: never
draft: true
---
This is an ongoing project to build a custom NAS on the most minimal budget possible.
# Use Case
Storing a large (30TB+) amount of infrequently accessed data that must still be immediately accessible (primarily Jellyfin, Nextcloud), with some level of safety.
Some details about my use case:
* There will be no external network access except via a single local client mounting the drive and sharing via ZeroTier
* There will be very few clients total
* Most data is replaceable, though inconveniently so (media may be reacquired / restored from backups)
* Neither latency nor throughput are very important
# Bill of Materials
| Quantity | Item | Per Unit Cost | Notes |
|----------|------------------------------|---------------|-----------------------------------------------------------------------------------------------|
| 3 | Dell Wyse 3030LT Thin Client | $11 | eBay - Fairly common, though may run out eventually - other thin clients will no doubt appear |
| 3 | HGST 10TB He10 510 | $80 | Amazon / eBay - Very common, can pick these up any day |
| 3 | ORICO 3.5in to USB enclosure | $25 | Amazon - Could use another, this is what I chose, does the job for me |
| 5 | Ethernet Cables | $2.50 | Amazon - $12.50 / 5 pack - Or whatever you have lying around |
| 1 | 8 Port Ethernet Switch | $13 | Amazon - Or whatever you have lying around |
| 0.5kg | PLA | $20 | For the NAS enclosure |
# Rationale
In order of importance for my use case: Price > Redundancy > Performance
## Hardware
### Thin Client
You simply cannot beat a whole working Linux box for $11.
With 2GB RAM, 4GB eMMC, 1 GbE, 1 USB 3 port, and a bundled power adapter, it does the bare minimum I need.
### HDD
Similarly, **used** enterprise drives deliver an amazing value.
For less than $9/TB or just over $10/TB with the enclosure, these drives are the cheapest possible way to get storage right now.
By using external enclosures we can also upgrade to larger drives in future, with minimal effort.
No shucking required!
I buy ones that have a 5 year warranty (spoiler - it's worth having!).
### Networking
1GbE is plenty for me, but if in future I need more speed, I can find a network switch with a 10GbE uplink and scale horizontally a fair bit.
For now, a cheap unmanaged GbE switch will do just fine.
### UPS
Not 100% required, but the peace of mind in having the whole system on a UPS is worth it.
## Software
### Gluster
I am using Gluster to run my NAS cluster.
This is in large part due to its very modest hardware requirements, especially memory.
I can run my nodes with less than 50% memory utilization, and not fill my limited eMMC storage either.
It is very easy to work with, and offers flexible redundancy configurations.
#### Configuration
I am using Gluster with a dispersed volume, using the native client on my main server to mount the volume.
Dispersed lets me add clusters of bricks fairly easily, which suits my needs well.
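To make the setup concrete, here is a rough sketch of creating and mounting a small `(2 + 1)` dispersed volume; the hostnames (`nas1`-`nas3`), volume name, and brick paths are hypothetical:

```bash
# From nas1, add the other nodes to the trusted pool
gluster peer probe nas2
gluster peer probe nas3

# Create a dispersed volume: 2 data bricks + 1 redundancy brick per set
gluster volume create tank disperse-data 2 redundancy 1 \
    nas1:/bricks/brick1 nas2:/bricks/brick1 nas3:/bricks/brick1
gluster volume start tank

# On the client (my main server), mount with the native FUSE client
mkdir -p /mnt/tank
mount -t glusterfs nas1:/tank /mnt/tank
```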
### Netdata
Netdata lets me know if/when drives are getting full, reports drive temperatures from SMART data, and will email me if any hosts go offline.
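Getting a node reporting is quick; a minimal sketch, using the official kickstart installer (the recipient address is a placeholder, and email delivery assumes a working sendmail-compatible mailer on the node):

```bash
# Install the Netdata agent via the official kickstart script
wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh
sh /tmp/netdata-kickstart.sh

# Enable email alerts - set DEFAULT_RECIPIENT_EMAIL="you@example.com"
cd /etc/netdata
sudo ./edit-config health_alarm_notify.conf
```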
# Experiences so far
I've been too busy to document the whole process, but I currently have a 2 x (2 + 1) array running (if I'd known I'd need 6 drives, I'd have done 1 x (4 + 2), but I didn't know at first).
Capacity is 60TB raw, 40TB usable.
## HDD Failures
That 5 year warranty I mentioned?
I've needed it twice so far - one drive died about 1 month in, and a second died 2 months in.
To their credit, the vendor sent me a return shipping label within one business day each time, and refunded me as soon as the return package arrived.
For now, I continue to use these drives because the $/TB is so good, but in future I may upgrade to some larger drives in the same way to keep power costs down.
## Power Draw
The whole system (6 x HDDs + 6 x thin clients + network switch + 12V power supply) draws about 40W at the wall under regular load (serving files).


@@ -0,0 +1,150 @@
---
title: "Ghetto NAS Part 2"
date: "2024-02-16"
author: "William Floyd"
#featured_image: "media/IMG_20220126_225541.webp"
categories: [
    "Sys Admin",
    "Hardware",
    "Software"
]
tags: [
    "NAS",
    "3D Printing",
    "Gluster",
    "Homelab"
]
series: ["Ghetto NAS"]
list: never
draft: true
---
I've been running the Gluster array from [part one](../ghetto-nas-part-01/) of this series for some months now, and am looking to improve my setup as I move to a new location and have new requirements.
# Existing Hardware
As a reminder/update, here is my existing hardware setup:
* Used HP Z440
  * CPU
    * Intel Xeon E5-1650 v4 (6 core, 12 thread, 3.6/4.0GHz)
  * Memory
    * 128GB DDR4 LRDIMM @ 2133MT/s
  * Storage
    * 1TB NVMe boot drive via PCIe adapter
    * 8TB shucked WD Easystore (bought new)
    * 14TB shucked WD Easystore (bought new)
  * GPU
    * Dell GTX 1080 (for gaming)
    * Intel Arc A380 (for transcoding)
* 6 x Gluster Nodes
  * Dell Wyse 3030 LT Thin Client
    * CPU
      * Intel Celeron N2807 (2 core, 0.5/2.167GHz)
    * Memory
      * 2GB
    * Storage
      * 4GB eMMC boot drive
  * ORICO 3.5" SATA to USB 3.0 desktop adapter
    * 10TB HGST He10 (refurbished, 5 year warranty)
* Generic 360W 12V power supply for Thin Clients and HDDs
* Generic Gigabit Ethernet switch for all thin clients and workstation
# Requirements
Given my experiences with my existing solution, my new setup must (continue to) be:
* Able to support my existing 40TB usable space, scalable up to ~100TB
* Easily maintainable
* Performant
* Mostly quiet
* Cost effective
  * Initial cost
  * Cost over time (aiming for 5 year lifecycle)
* Power efficient
  * Fewer Gluster nodes
  * Large disks > many disks
* Reliable
  * ECC Memory
  * Redundant storage
This leaves me with the following requirements:
* Must support an `n x (4 + 2)` disk arrangement (~67% usable space with 2 disks of redundancy, especially as I plan to use used drives) - see the sketch after this list
* Disks must be 10TB or larger
* Disks must be cheap
* Disks should have reasonable warranty
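If I stay on Gluster, growing a dispersed volume in `(4 + 2)` increments is a single add-brick per expansion. A rough sketch, again with hypothetical hostnames and brick paths (disperse volumes must be expanded in multiples of the disperse count, six here):

```bash
# Expand an existing disperse-data 4 / redundancy 2 volume by one (4 + 2) set
gluster volume add-brick tank \
    nas7:/bricks/brick1 nas8:/bricks/brick1 nas9:/bricks/brick1 \
    nas10:/bricks/brick1 nas11:/bricks/brick1 nas12:/bricks/brick1

# Spread existing data across the new bricks
gluster volume rebalance tank start
```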
Additional observations/experience:
* The 4GB storage on the Dell Wyse 3030 LT nodes is difficult to work within. If the storage fills, a node can fail to come back online after a restart
* Network latency results in slow directory operations via Gluster
* The workstation is already well capable of handling this many drives, and since it is their only client, it makes more sense to connect the drives directly to it
With this in mind, I want to move away from multiple storage nodes and consolidate into a more unified storage system.
# Options
## NAS
### Prebuilt
Easiest option, but not ideal for me, as I want to learn and to know my system wholly.
The hardware is too expensive and there's no expandability, so I'm not going to do it.
Good for many people's use cases, though.
### Custom built
Solid option, but too expensive - I already have a workstation, and I don't want another desktop that holds all the drives while doing nothing else useful. This is more a sunk-cost issue than a failure of the option itself; I just can't justify redundant hardware like this. Also, power draw would increase, as I'd be adding a system, not replacing one.
If I were to do this, these are some of the options I've looked at:
* Mini ITX motherboard
  * [All in one](https://www.aliexpress.us/item/3256806141617147.html) ([alternative](https://www.aliexpress.us/item/3256806353828287.html)) - $125-$160 depending on spec
    * 6 SATA ports, PCIe, 4x2.5GbE, NVMe
    * Power efficient (<10W TDP)
    * No ECC, memory not included
    * No brand support
  * [Xeon Kit](https://www.aliexpress.us/item/3256805579918121.html) - ~$135
    * 6(?) SATA ports, PCIe, 2x2.5GbE, NVMe(?)
    * Powerful, not power efficient (90W TDP)
    * ECC memory included
    * No brand support
    * Cooler not included
    * More of a replacement for my workstation
  * [3D printed case](https://modcase.com.au/products/nas)
* NAS Case
  * [SilverStone DS380B](https://www.silverstonetek.com/en/product/info/server-nas/DS380/)
    * Too expensive ($200+)
  * [Generic 8 bay ITX enclosure](https://www.amazon.com/KCMconmey-Internal-Compatible-Backplane-Enclosure/dp/B0BXKSS8YY/)
    * Too expensive ($150)
    * No brand support
    * Leaves empty bays if expanding in 6-drive increments
Overall, something I've strongly considered, mostly for the space savings, but cost is keeping me away, as it's basically a whole new PC for each new node (unless I'm expanding some other way, which I could do via the workstation anyway).
## JBOD
Requires an external HBA/SATA expander from the workstation.
### Prebuilt (ex-Enterprise)
Strong option, moderately easy to set up.
Concerns are:
* Power draw
* Noise
* Need for rack mounting
* More bays than I need
If I were to do this (and I may do some day), I would probably get an EMC KTN-STL3, a 15 bay chassis.
### Custom built (from scratch)
Too much work - I don't want to *need* to design my own PCB for this.
### Custom built (using ex-Enterprise parts)
A few options exist, such as ex-server drive backplanes like the Supermicro BPN-SAS3-815TQ: https://www.supermicro.com/manuals/other/BPN-SAS3-815TQ.pdf
# Physical layout
I had begun modelling, and came close to 3D printing, an all-in-one cluster enclosure for 3 clients and 3 drives that would include a power distribution board, a fan controller with a temperature sensor, and panel-mounted Ethernet ports.
This was never finished, and as I look to


@@ -0,0 +1,77 @@
---
title: "Test page"
date: "2024-01-29"
author: "William Floyd"
categories: [
    "Test",
    "Categories"
]
tags: [
    "Test",
    "Tag"
]
render: always
list: never
draft: true
---
foobar
# Icons
{{< icons/icon mdi linkedin >}}
{{< icons/icon vendor=mdi name=book color=red >}}
# Notices
{{< notice note >}}
One note here.
{{< /notice >}}
{{< notice tip >}}
I'm giving a tip about something.
{{< /notice >}}
{{< notice example >}}
This is an example.
{{< /notice >}}
{{< notice question >}}
Is this a question?
{{< /notice >}}
{{< notice info >}}
Notice that this box contains information.
{{< /notice >}}
{{< notice warning >}}
This is the last warning!
{{< /notice >}}
{{< notice error >}}
There is an error in your code.
{{< /notice >}}
# Mermaid
{{<mermaid>}}
sequenceDiagram
    participant Alice
    participant Bob
    Alice->>John: Hello John, how are you?
    loop Healthcheck
        John->>John: Fight against hypochondria
    end
    Note right of John: Rational thoughts <br/>prevail!
    John-->>Alice: Great!
    John->>Bob: How about you?
    Bob-->>John: Jolly good!
{{</mermaid>}}
# Math
$$
y=mx+b
$$
{{< mathjax >}}

go.mod

@@ -3,6 +3,8 @@ module github.com/W-Floyd/blog
 go 1.21.6
 require (
-	github.com/W-Floyd/hugo-coder-iconify v0.0.0-20240129201341-4f3330156529 // indirect
-	github.com/hugomods/icons/vendors/mdi v0.3.2 // indirect
+	github.com/Templarian/MaterialDesign-SVG v7.4.47+incompatible // indirect
+	github.com/W-Floyd/hugo-coder-iconify v0.0.0-20241202054008-a454e55210d9 // indirect
+	github.com/hugomods/icons v0.6.6 // indirect
+	github.com/hugomods/icons/vendors/mdi v0.3.8 // indirect
 )

go.sum

@@ -1,6 +1,10 @@
-github.com/W-Floyd/hugo-coder-iconify v0.0.0-20240129201341-4f3330156529 h1:PJEi8xBVqWrqny2HdiZ2str5lsUtnd8uEbtIgKm2meQ=
-github.com/W-Floyd/hugo-coder-iconify v0.0.0-20240129201341-4f3330156529/go.mod h1:2QYy4+nngkg5dum3LHzrLqLpdGjpdrHin/5BuaJu2Jk=
-github.com/hugomods/icons v0.6.0 h1:G6RU93okhPPRDh/jqcew9gwkcYpSpg0rCBv4S6yUAFw=
-github.com/hugomods/icons v0.6.0/go.mod h1:cIkSvK6W0q6N4U6n9KGz+QfRWQXAW0INd+1P31gPNGg=
-github.com/hugomods/icons/vendors/mdi v0.3.2 h1:59KlTgBNiKGlPXzaQ6zn+VLYstFb4zABKwlHfzL8ADY=
-github.com/hugomods/icons/vendors/mdi v0.3.2/go.mod h1:yHIDYxNoBV8RCAc4Uordp6rr4GObPrtBAimShBBFdmc=
+github.com/Templarian/MaterialDesign-SVG v7.4.47+incompatible h1:w99yYrLvkTj5A9Dxd8mzNoFQpOBx9aRS03RrrqjzLuw=
+github.com/Templarian/MaterialDesign-SVG v7.4.47+incompatible/go.mod h1:SRSiaLOZazGp4UpKPQRm37h4A3cKLHXaybAqaJ7Lfx8=
+github.com/W-Floyd/hugo-coder-iconify v0.0.0-20240430165218-588e06a82746 h1:409+UvlNKXrYIh11P4t3HDOHPn4iHz80zEGxJKmJ+PY=
+github.com/W-Floyd/hugo-coder-iconify v0.0.0-20240430165218-588e06a82746/go.mod h1:2QYy4+nngkg5dum3LHzrLqLpdGjpdrHin/5BuaJu2Jk=
+github.com/W-Floyd/hugo-coder-iconify v0.0.0-20241202054008-a454e55210d9 h1:l285Jqk2m1SRl0GfYNcaOj0Z67NXWMikB6lmu6iAb1E=
+github.com/W-Floyd/hugo-coder-iconify v0.0.0-20241202054008-a454e55210d9/go.mod h1:2QYy4+nngkg5dum3LHzrLqLpdGjpdrHin/5BuaJu2Jk=
+github.com/hugomods/icons v0.6.6 h1:gGlafcBDRP7sSID+tgLcWdog+s/QBj8DIfU+h9tZj1U=
+github.com/hugomods/icons v0.6.6/go.mod h1:cIkSvK6W0q6N4U6n9KGz+QfRWQXAW0INd+1P31gPNGg=
+github.com/hugomods/icons/vendors/mdi v0.3.8 h1:Tw4DooGpAHK2Lk+r6UnuWC/NesGPdR0x0U1Ullu2Og8=
+github.com/hugomods/icons/vendors/mdi v0.3.8/go.mod h1:UzARXfBJorEKYC9o+Ap6rsKSiI+UFkNWJjKn+DR2tpA=

push.sh

@@ -1,11 +1,15 @@
 #!/bin/bash
-sudo docker build -t w-floyd/blog . || {
+docker build -t w-floyd/blog . || {
     echo 'Failure to build'
     exit
 }
-sudo docker save w-floyd/blog | bzip2 | pv | ssh "${1}" docker load
-ssh "${1}" docker-compose -f /root/server-config/docker-compose.yml --project-directory /root/server-config up --remove-orphans -d
+docker save w-floyd/blog | bzip2 | pv | ssh "${1}" docker load
+ssh "${1}" \
+    docker-compose \
+        -f /root/server-config/docker-compose.yml \
+        --project-directory /root/server-config \
+        up --remove-orphans -d
 exit

@@ -1 +1 @@
-Subproject commit 943d8597b5aa37f3ee23905c5c85e2ca4f0ed455
+Subproject commit a454e55210d9b71f4aeadfe54f2e64f7e35c47ab