xtremeownage 3 months ago • 100%
46 watts... but, yea, I expected lower.
But, I suppose when it's spinning 4x Seagate Exos drives, they like their juice.
It apparently doesn't allow HDD hibernation while containers are running, and doesn't appear to like to use any sleep states.
xtremeownage 3 months ago • 100%
Key word is idle.
Synology... and HDD hibernation don't really go together very well. If you have containers running, it won't let the HDDs hibernate at all. And- I have a minio instance running.
xtremeownage 4 months ago • 100%
Eh... Mutually assured destruction.
It's a very scary phrase.
xtremeownage 5 months ago • 100%
That is a pretty good deal. Better start picking up some MD1200s!
xtremeownage 5 months ago • 100%
It's a-going. One day at a time.
xtremeownage 7 months ago • 100%
Nope, not at all.
Behind every success story is a lot of failures. (Or really rich parents.)
xtremeownage 7 months ago • 100%
I agree, I'd be picking up a bunch of those, if that were the case.
xtremeownage 7 months ago • 100%
The esp32-c6 (supports zigbee) is pretty cheap.
xtremeownage 7 months ago • 83%
No.
I wouldn't vote for Hillary, period, for many reasons. Her sex is not one of them.
A random fact: I actually did vote for a woman to be president. But it damn sure was not Hillary. There is too much stink associated with her. Too much shit swept under the rug.
xtremeownage 10 months ago • 100%
The other admin now "owns" this instance, and hosts it in the EU.
I am just a glorified moderator now.
xtremeownage 10 months ago • 100%
I'd say you have a small instance.
I used to host lemmyonline.com, which had somewhere around 50-100 users.
It was upwards of 50-80G of disk space, and it used a pretty good chunk of bandwidth. CPU/memory requirements were not very high, though.
xtremeownage 10 months ago • 80%
I'd gladly donate a few TB, but I'm not about to fill my entire array with books I'll never read...
xtremeownage 10 months ago • 100%
Yes.
xtremeownage 10 months ago • 100%
Nope.
Still just feel like a kid with extra responsibilities, while raising my own kids. Guess sometime around 50 or so I'll start feeling like an "adult".
Although, at least I call myself a dumbass after doing something stupid, or wasting money on crap.
xtremeownage 10 months ago • 100%
I'm gonna wait a few years, until prices go waaay down.... and I plan on doubling/tripling the PV capacity, which will make everything much more effective, as well.
xtremeownage 10 months ago • 100%
> How did you get that rate? We pay 33 cents, and it was 24 cents just a few months ago… wouldn't be surprised if it goes up again next year and the year after since even 33 cents is government subsidised (so - there's no cheaper option available).
All about location. There are supposedly many in my area, on a different co-op utility, who are only paying $0.03/kWh.
> Ooof. Why'd you do that? We simply put (a bit over) 5kW of panels on the roof, and a good 5kW inverter. One day of sun generates about as much power as we use in a week, and even if it's overcast we still come out ahead.
I had a few other goals I wanted to accomplish:

- Reliability. The grid here isn't the most stable, and blinks a few times per week. And, a time or two per year, we have an outage. This solution has handled this fantastically well - so well that I don't even notice when the grid has dropped unless I specifically go looking.
- Part of this was bringing some of my wiring/electrical up to code. This accounted for 10k of the price tag... I relocated/replaced the mains panel across the house, to a location more suitable than my daughter's closet. Also, the panel itself was pretty old and needed to be modernized.
One more issue: my PV is undersized a bit. Adding another 3kW would yield much better returns for me.
It's undersized because if I oversized it, and sent more energy back than I consumed, my lovely utility slaps on a $42 fee... which is no-bueno.
xtremeownage 10 months ago • 100%
If the ROI is >= 25 years, then it's not worth it, because the hardware and equipment is considered fully depreciated at that point.
If it lasts 30 years, sure, it's making good use of itself. But everything is rated for between 15-25 years. As such, after that period, it's considered end of life, and no longer supported.
Now, I will note, it is not worth it for the rate I currently pay, which is $0.08/kWh. If my electricity rates tripled next year, it would vastly reduce the amount of time until this solution reached ROI. And I am betting that electricity does not get cheaper in the future; otherwise, I would not have pulled the trigger on a $50,000 project where the math told me it wasn't the best idea.
Also, if you really want to see everything quantified: I plan on publishing all of the math and numbers at the one-year mark, which will be around March. A rough sketch of the payback math is below. -> https://static.xtremeownage.com/pages/Projects/Solar-Project/
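If you want to sanity-check the "rates tripling" effect yourself, here is a minimal back-of-the-napkin sketch. The production figure is a placeholder assumption for illustration, NOT my actual number (those come in the March write-up):

```python
# Rough payback-period sketch. Inputs are placeholder assumptions --
# plug in your own cost, production, and rate.

def payback_years(system_cost: float, annual_kwh: float, rate_per_kwh: float) -> float:
    """Years until cumulative energy savings equal the install cost."""
    return system_cost / (annual_kwh * rate_per_kwh)

cost = 50_000        # total project cost, USD (includes the ~10k of electrical work)
production = 18_000  # hypothetical annual production, kWh -- not my real figure

for rate in (0.08, 0.16, 0.24):  # current rate, doubled, tripled
    print(f"${rate:.2f}/kWh -> {payback_years(cost, production, rate):.1f} years")
```

With those placeholder numbers, $0.08/kWh lands well past the 25-year equipment lifespan, while a tripled rate brings it down near a decade. That's the whole bet.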
xtremeownage 10 months ago • 50%
This platform* is getting overrun by trolls and tankies. I’m going back to Reddit
xtremeownage 10 months ago • 100%
Coming from someone who owns them-
Nah, it's not worth it.... at least, if you strictly look at "saving money" overall.
ROI is on average 10-25 years, depending on your current cost of energy. The components/inverters/etc, are usually rated for 20-25 years.
At least, this applies if you have a properly licensed contractor install everything. If you do everything yourself, it's extremely worth it, and would achieve ROI in a decade or less.
xtremeownage 10 months ago • 100%
What will realistically happen?
Nothing. Companies will just spend a hair more money finding ways to circumvent the new taxes. And, if the new taxes were not easily circumvented- they would just relocate the company to another country with lower taxes.
In the end, the consumer is paying the taxes, and not the company itself, either way.
xtremeownage 11 months ago • 100%
Anti-DDOS, eh?
You lost me there. There is no self-hosted anti-DDOS solution that is going to be effective... because any decent DDOS attack can easily overwhelm your WAN connection completely (and potentially even your ISP's upstreams).
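To put some very rough numbers on it (purely illustrative):

```python
# Why self-hosted "anti-DDoS" can't work: any filtering you run happens
# AFTER the traffic has already crossed your WAN link. Illustrative numbers only.

wan_gbps = 1.0       # a generous residential/SMB uplink
attack_gbps = 50.0   # a modest volumetric attack by today's standards

print(f"Attack is {attack_gbps / wan_gbps:.0f}x the size of your pipe.")
print("Your firewall never even sees most of it -- the link is saturated upstream.")
```

The mitigation has to happen upstream of you (ISP scrubbing, or a fronting service like a CDN), which by definition isn't self-hosted.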
xtremeownage 11 months ago • 100%
It could end up being a shart.
xtremeownage 11 months ago • 100%
But, it has no network connectivity! That is against God's will.
(Thus, no telemetry either)
Very interesting OS though. Lots of very cool concepts.
xtremeownage 11 months ago • 66%
Not a clue.
Maybe they like the pretty dashboard pihole has.
xtremeownage 11 months ago • 100%
I use vlans to work with it.
xtremeownage 11 months ago • 66%
> unbound as a DNS filter and resolver

It's... worked as a recursive resolver, with filtering/blacklist features, for years now?
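For the filtering side, the gist is: you feed unbound local-zone entries that return NXDOMAIN for blocked names. A minimal sketch for converting a hosts-format blocklist (file names and paths here are my assumptions; adjust for your setup):

```python
# Sketch: convert a hosts-format blocklist ("0.0.0.0 ads.example.com") into
# unbound local-zone entries. Include the output file from unbound.conf, e.g.:
#   include: "/etc/unbound/blocklist.conf"
# then reload unbound. Paths/filenames here are assumptions.

def hosts_to_local_zones(hosts_path: str, out_path: str) -> None:
    with open(hosts_path) as src, open(out_path, "w") as dst:
        for line in src:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            parts = line.split()
            if len(parts) >= 2:
                dst.write(f'local-zone: "{parts[1]}" always_nxdomain\n')

hosts_to_local_zones("blocklist_hosts.txt", "blocklist.conf")
```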
xtremeownage 11 months ago • 50%
Hmm... I need to go get some lemonade from panera...
Sounds good
xtremeownage 11 months ago • 100%
I saw it through one of the apps which scrapes reddit comments for archival.
Reddit quit making those stats public a while back, sadly
xtremeownage 11 months ago • 100%
Don't make the same mistake reddit did, by assuming active users = engagement.
Look at reddit's stats, active users didn't drop very drastically when everyone left. However, engagement/comments dropped drastically.
xtremeownage 11 months ago • 96%
Sorry... watching a sponsored video for World of Tanks for the 10th time, or SimpliSafe, or whatever other garbage is there, isn't going to make me want to purchase it.
I value my time... even if I didn't use SponsorBlock, I'm still going to skip right past it... this just does it for me.
xtremeownage 12 months ago • 100%
Well, I use plex, because I have used plex for a decade, and it just works.
That being said, if I were to use an alternative, Jellyfin is quite fantastic. I actually have a pod running it, just in the event that plex pulls a stupid move, causing me to lose faith in its platform.
But, that being said, I like the plex interface more than Jellyfin's, and have grown accustomed to it.
Also, Kodi, while powerful and extensible... just feels like a bear compared to Jellyfin.
xtremeownage 12 months ago • 100%
Lots of our panels are produced in North America.
My panels, for example, are from Canadian Solar.
xtremeownage 12 months ago • 100%
...
xtremeownage 12 months ago • 100%
You don't see porn on the front page of lemmy either, if you use the "Subscribed" view instead of treating "All" as if it doesn't contain everything.
xtremeownage 12 months ago • 85%
For certain projects I monetize, there are reasons I don't share the code.
Patents don't magically find people infringing your intellectual property. The onus is on you.
That being said, I have bills to pay, and mouths to feed. Giving my solutions away for free, doesn't help those issues.
xtremeownage 12 months ago • 100%
Home Assistant vs. HomeSeer:
HomeSeer will cost you $300-500.
Its add-ons and extensions are all paid.
Home Assistant is literally better in every way possible.
xtremeownage 12 months ago • 100%
Bitwarden / Vaultwarden also does TOTP.
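And TOTP itself is nothing vendor-specific; it's just HMAC plus a clock (RFC 6238). A minimal sketch of what Bitwarden/Vaultwarden compute for you:

```python
# Minimal RFC 6238 TOTP sketch -- standard library only.
import base64, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = struct.pack(">Q", int(time.time()) // period)  # time-step counter
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints the current 6-digit code
```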
xtremeownage 12 months ago • 100%
High uptime doesn't mean anything.
SSDs are rated by how much data can be written to them, as flash has finite write endurance.
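The endurance math is straightforward. Placeholder numbers below; check your drive's actual TBW rating:

```python
# SSD wear is about total bytes written (the TBW rating), not hours powered on.
tbw_rating_tb = 600    # e.g. a typical 1TB consumer drive's rating -- assumption
daily_writes_gb = 50   # hypothetical workload

years = (tbw_rating_tb * 1_000) / daily_writes_gb / 365
print(f"~{years:.0f} years of writes before the rated endurance is consumed")
```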
xtremeownage 12 months ago • 33%
Eh... no, it's not.
- coming from an actual developer.
xtremeownage 1 year ago • 100%
Hence my reaction to these issues. https://lemmyonline.com/post/459013
But.... under new management now, in Germany. https://lemmyonline.com/post/587565
As promised: if I ever brought the instance offline, I would give you a heads up in advance. Here are the reasons for coming to this decision.

## Moderation / Administration

Lemmy has absolutely ZERO administration tools, other than the ability to create a report. This makes it extremely difficult to properly administer anything. As well, other than running reports and queries against the local database manually, I literally do not have insight into anything. I can't even see a list of which users are registered on this instance without running a query on the database (an example of that kind of query is at the bottom of this post).

## Personal Liability

I host lemmyonline.com on some of my personal infrastructure. It shares servers, storage, etc. It is powered via my home solar setup, and actually doesn't cost much to keep online. However, for a project which compensates me exactly $0.00 USD (no, I still don't take donations), it is NOT worth the additional liability I am taking on. That liability being: trolls/attackers are currently, literally, uploading child porn to lemmy. Thumbnails and content get synced to this instance, and at that point, I am on the hook for this content. This also goes back to the problem of having basically no moderation capabilities. Once something is posted, it is sent everywhere. Here in the US, they like to send no-knock raids out. That is no-bueno.

## Project Inefficiencies

One issue I have noticed: every single image/thumbnail appears to get cached by pictrs. This data is never cleaned up, never purged... so it will just keep growing, and growing. The growth isn't drastic, around 10-30G of new data per week; however, it isn't going to be sustainable, especially since, again, this project compensates me nothing. While hosting 100G of content isn't going to be a problem, when we start looking at 1T, 10T, etc... that costs money. It's not as simple as tossing another disk into my cluster. The storage needs redundancy, so you need multiple disks there. Then you need backups: a few more disks here. Then we need offsite backups, which cost $/TB stored. I don't mind putting some resources up front to host something that takes a nominal amount of resources. However, based on my stats, it's going to continue to grow forever, as there is no purge/timeout/lifespan attached to these objects.

## I don't enjoy lemmy enough to want to put up with the above headaches.

Let's face it: you have already seen me complain about the general negativity around lemmy. The quality of content here just isn't the same. I have posted lots of interesting content to try and get collaboration going, but it just doesn't happen. I just don't see nearly as much interesting content as I want to interact with.

### Summary

I get no benefit from hosting lemmy online. It was a fun side project for a while, and I refuse to attempt to monetize it. Since I don't enjoy it, and the process of keeping on top of the latest attacks for the week is time-consuming and tiresome, the plan is simple: the servers will go offline 2023-09-04.

### If you wish to migrate your account to another instance

Here is a tool recently released: https://github.com/gusVLZ/lemmy_handshake
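Re: the administration section above, here is the kind of manual query I mean: listing this instance's registered users. Sketch only; the table/column names (`person`, `local_user`) are from my reading of the lemmy schema, so verify against your version:

```python
# List local (registered) users by querying lemmy's postgres DB directly.
# Table/column names are assumptions based on the lemmy schema -- verify first.
import psycopg2

conn = psycopg2.connect(dbname="lemmy", user="lemmy", host="localhost")  # adjust creds
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT p.name, p.published
        FROM person p
        JOIN local_user lu ON lu.person_id = p.id
        ORDER BY p.published;
    """)
    for name, published in cur.fetchall():
        print(name, published)
```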
A heads up... since attackers are now uploading CSAM (child porn...) to lemmy, which gets federated to other instances, and because I really don't want any reason for the feds to come knocking on my door, pictrs is now disabled as of this time.

This means if you try to post an image, it will fail. You may notice other issues as well. Driver for this: https://lemmyonline.com/post/454050

This is a hobby for me. Given the complete and utter lack of moderation tools to help me properly filter content, the nuclear approach is the only approach here.
> Both CloudNordic and Azero said that they were working to rebuild customers' web and email systems from scratch, albeit without their data.

Yea... don't bother. But, do expect to hear from my lawyers...

> CloudNordic said that it "had no knowledge that there was an infection."

> CloudNordic and Azero are owned by Denmark-registered Certiqa Holding, which also owns Netquest, a provider of threat intelligence for telcos and governments.

Edit: https://www.cloudnordic.com/
I am just wondering... is it me, or is there a LOT of just general negativity here? Every other post I see is:

1. America is bad.
2. Capitalism is bad. Socialism/Communism is good.
3. If you don't like communism, you are a fascist nazi.

Honestly, it's kind of killing my mood with Lemmy. There are a few decent communities/subs here, but the quality of content appears to be falling. I mean, FFS, it can't just be me that is noticing this. It honestly feels like I am supporting a communist platform here.

![](https://lemmyonline.com/pictrs/image/8269230a-bfe4-48b3-bc12-df81ad2622ed.png)

I am on social media to post and read about things related to technology, automation, race cars, etc. Every other technology post is somebody bashing on Elon Musk (actually, that is deserved), or talking about Reddit (let it go, seriously; we are here, it is there). As for my hobby of liking racecars: I guess half of the people on lemmy feel it is OK to vandalize a car for being too big... and car-hate culture is pretty big. All of this is really souring my mood on lemmy.
![](https://lemmyonline.com/pictrs/image/6dfbbd60-8363-4648-afdc-2ec7373173b9.png)

Knock on wood, I have not used them in quite a while.
My adventures in building out a ceph cluster for proxmox storage. As a random note, my particular instance (lemmyonline.com) is hosted on that particular ceph cluster.
Well... That didn't last long...
I can't say for sure, but there is a good chance I might have a problem.

The main picture attached to this post is a pair of dual bifurcation cards, each with a pair of Samsung PM963 1T enterprise NVMes. It is going into my r730XD, which... is getting pretty full. This will fill up the last empty PCIe slots.

![](https://lemmyonline.com/pictrs/image/05627bcb-b03e-4223-b878-a35ed074a008.webp)

But, knock on wood, my r730XD supports bifurcation! LOTS of bifurcation.

![](https://lemmyonline.com/pictrs/image/3f6d2e08-aa3c-48e8-b878-3df08df554e4.webp)

As a result, it now has more HDDs and NVMes than I can count.

![](https://lemmyonline.com/pictrs/image/4f402cc2-47cf-43f3-8377-9ae4b1513f70.webp)

What's the problem, you ask? Well, that is just one of the many servers I have laying around here, all completely filled with NVMe and SATA SSDs.

Figured I would share. Seeing a bunch of SSDs is always a pretty sight. And, as of two hours ago, my particular lemmy instance was migrated to these new NVMes, completely transparently too.
Sorry for the ~30 seconds of downtime earlier; however, we are now updated to version 0.18.4.

Base Lemmy Changes: https://github.com/LemmyNet/lemmy/compare/0.18.3...0.18.4

Lemmy UI Changes: https://github.com/LemmyNet/lemmy-ui/compare/0.18.3...0.18.4

Official patch notes: https://join-lemmy.org/news/2023-08-08_-_Lemmy_Release_v0.18.4

#### Lemmy

* Fix fetch instance software version from nodeinfo (#3772)
* Correct logic to meet join-lemmy requirement, don't have closed signups. Allows Open and Applications. (#3761)
* Fix ordering when doing a comment_parent type list_comments (#3823)

#### Lemmy-UI

* Mark post as read when clicking "Expand here" on the preview image on the post listing page (#1600) (#1978)
* Update translation submodule (#2023)
* Fix comment insertion from context views. Fixes #2030 (#2031)
* Fix password autocomplete (#2033)
* Fix suggested title " " spaces (#2037)
* Expanded the RegEx to check if the title contains new line caracters. Should fix issue #1962 (#1965)
* ES-Lint tweak (#2001)
* Upgrading deps, running prettier. (#1987)
* Fix document title of admin settings being overwritten by tagline and emoji forms (#2003)
* Use proper modifier key in markdown text input on macOS (#1995)
So, last month, my kubernetes cluster decided to literally eat shit while I was out on a work conference. When I returned, I decided to try something a tad different by rolling out proxmox to all of my servers. Well, I am a huge fan of hyper-converged and clustered architectures for my home network / lab, so I decided to give ceph another try. I have previously used it with relative success with Kubernetes (via rook/ceph), and currently leverage longhorn.

## Cluster Details

1. Kube01 - Optiplex SFF
   - i7-8700 / 32G DDR4
   - 1T Samsung 980 NVMe
   - 128G KIOXIA NVMe (boot disk)
   - 512G SATA SSD
   - 10G via ConnectX-3
2. Kube02 - R730XD
   - 2x E5-2697a v4 (32c / 64t)
   - 256G DDR4
   - 128T of spinning disk
   - 2x 1T 970 Evo
   - 2x 1T 970 Evo Plus
   - A few more NVMes, and SATA
   - Nvidia Tesla P4 GPU
   - 2x Google Coral TPU
   - 10G Intel networking
3. Kube05 - HP z240
   - i5-6500 / 28G RAM
   - 2T Samsung 970 Evo Plus NVMe
   - 512G Samsung boot NVMe
   - 10G via ConnectX-3
4. Kube06 - Optiplex Micro
   - i7-6700 / 16G DDR4
   - Liteon 256G SATA SSD (boot)
   - 1T Samsung 980

## Attempt number one

I installed and configured ceph using Kube01 and Kube05. I used a mixture of 5x 970 Evo / 970 Evo Plus / 980 NVMe drives, and expected it to work pretty decently.

It didn't. The IO was so bad, it was causing my servers to crash. I ended up removing ceph, and using LVM / ZFS for the time being. Here are some benchmarks I found online:

https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_FOPxYZjc/edit#gid=0

https://www.proxmox.com/images/download/pve/docs/Proxmox-VE_Ceph-Benchmark-202009-rev2.pdf

The TLDR, after lots of research: don't use consumer SSDs. Only use enterprise SSDs.

## Attempt / Experiment Number 2

I ended up ordering 5x 1T Samsung PM863a enterprise SATA drives. After reinstalling ceph, I put three of the drives into Kube05, and one more into Kube01 (no ports / power for adding more than a single SATA disk...), and put the cluster together.

At first, performance wasn't great... (but was still 10x the performance of the first attempt!). But, after updating the crush map to set the failure domain to OSD rather than host, performance picked up quite dramatically (see the sketch at the end of this post for why). This is due to the current imbalance of storage per host: Kube05 has 3T of drives, Kube01 has 1T, and there is no storage elsewhere.

BUT... since this was a very successful test, and it was able to deliver enough IOPS to run my I/O-heavy kubernetes workloads, I decided to take it up another step.

### A few notes

Can you guess which drive is the Samsung 980, and which drives are enterprise SATA SSDs? (Look at the latency column.)

![](https://lemmyonline.com/pictrs/image/105a22b7-ec5c-4e4c-bdda-c2a3c7534f8f.png)

## Future - Attempt #3

The next goal is to properly distribute OSDs. Since I am maxed out on the number of 2.5" SATA drives I can deploy, I picked up some NVMe: 5x 1T Samsung PM963 M.2. I picked up a pair of dual-slot half-height bifurcation cards for Kube02. This will allow me to place 4 of these into it, with dedicated bandwidth to the CPU. The remaining one will be placed inside of Kube01, to replace the 1T Samsung 980 NVMe.

This should give me a pretty decent distribution of data, and with all enterprise drives, it should deliver pretty acceptable performance. More to come...
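For anyone wondering why the failure-domain change mattered so much: a replicated pool wants each replica in a distinct failure domain, and with only two hosts holding OSDs, a size-3 pool simply can't satisfy that when the domain is host. A toy sketch of the constraint (assuming the default size=3 replicated pool):

```python
# Toy model of the CRUSH placement constraint: one replica per failure domain.
# OSD counts mirror my cluster at the time of attempt #2.

def placeable(domains: dict[str, int], replicas: int = 3) -> bool:
    """True if there are enough non-empty failure domains for all replicas."""
    return sum(1 for count in domains.values() if count > 0) >= replicas

by_host = {"kube01": 1, "kube05": 3}                 # OSDs per host
by_osd = {f"osd.{i}": 1 for i in range(4)}           # each OSD is its own domain

print("failure domain = host:", placeable(by_host))  # False -- only 2 hosts
print("failure domain = osd: ", placeable(by_osd))   # True  -- 4 OSDs
```

The tradeoff, of course, is that with failure domain = osd, losing one host can take out every copy of some data, which is why attempt #3 is about spreading OSDs across more hosts.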
Nothing fancy or dramatic. Just- tuning the idle.
Very interesting youtube channel. The fellow takes a car, puts a weird engine in it, and then tweaks and tweaks to maximize hp and fuel economy. Currently working on a Renault with a lawnmower engine. Previously, he had a Saturn with a 3-cylinder Kubota diesel that got 80+ mpg.
Since my doctor recommended that I put more fiber in my diet, I decided to comply. So... in a few hours, I will be running a few OS2 runs across my house, with 10G LR SFP+ modules.

Both runs will be from my rack to the office. One run will be dedicated to the incoming WAN connection (coupled with the existing fiber that... I don't want to re-terminate). The other will replace the 10G copper run already in place, to save 10 or 20W of energy.

This was sparked by a 10GBase-T module overheating and becoming very intermittent earlier this week, causing a bunch of issues. After replacing the module, links came back up and started working normally... but, yea, I need to replace the 10G copper links.

With only twinax and fiber 10G links plugged into my 8-port aggregation switch, it is only pulling around 5 watts, which is outstanding, given a single 10GBase-T module uses more than that.

Edit: Also, I ordered the wrong modules. BUT... the hard part of running the fiber is done!
Surprisingly, I guess this didn't exist. Well, if you like talking automotive technology, it does now. [Turbocharged](/c/turbocharged@lemmyonline.com)
Here is one of my projects from a few years back: shoving a 1,000hp engine into a long-bed pickup truck. It was a fun project, especially since nobody would ever look at it and think, hey, that might be fast. Nope, it was just an ugly, long-bed, single-cab pickup truck, completely gutted on the interior with nothing but a seat, steering wheel, shifter, and gauges. Sadly, I have retired this project due to it not being very practical for anything. Coming soon, whenever I get off of my ass, I will be putting this powerplant into a 1987 4x4 Suburban... Don't hold your breath too much; since covid occurred I have not been driving much, and this project has not been high on my list of priorities.
Just sharing the latest experiment from Garage54 to get some posts flowing. If you have not seen these guys, they do some really interesting experiments and projects.
Koenigsegg has not yet failed to surprise me with its technology. The FreeValve engine, and now a 2,300hp hybrid sports car.
In addition to updating lemmy just now: the storage issues have been resolved, the hosting issues have been resolved... and things should return to stable and reliable now. [Lemmy 0.18.3 Notes](https://lemmyonline.com/post/150849)
Turns out... it's ceph storage. Despite having 7x OSDs on bare-metal NVMe, and despite having DEDICATED 10G network connectivity... it's having significant performance issues. Any spike in IO (large file transfers, backups, even copying files to a different server) would cause huge IO delays, causing things to break or drop offline. There are no errors shown. The configuration is pretty standard. I have no idea why it is having so many issues.

I have cleared off a new NVMe, and will move this server to it tomorrow, and hopefully end all of the issues from this week... assuming I have any users left here. (I wouldn't blame you for leaving; it has been a really bad week for LemmyOnline.)

If my assumptions are incorrect, then f-it, I will just run lemmy on a bare-metal server I have on standby.

## Update

Server migrated to local storage. It was nearly unnoticeable, unless you did something in the 3-minute window it took to clone/restore/etc.
Just finished migrating to a different server... hopefully this helps some.
As a continuation from the [FIRST POST](https://lemmyonline.com/post/103699): as you have likely noticed, there are still issues.

To summarize the first post: catastrophic software/hardware failure, which meant needing to restore from backups. I decided to take the opportunity to rebuild newer and better. As such, I decided to give proxmox a try, with a ceph storage backend. After getting a simple k8s environment back up and running on the cluster and restoring the backups, lemmy online was mostly back in business using the existing manifests.

Well, the problem is... when heavy backend IO occurs (during backups, big operations, installing large software...), the [longhorn.io](https://longhorn.io) storage used in the k8s environment kind of... "dies". And, as I have seen today, this is not an infrequent issue. I have had to bounce the VM multiple times today to restore operations.

I am currently working on building out a new VM specifically for LemmyOnline, to separate it from the temporary k8s environment. Once this is up and running, things should return to stable and normal.
![](https://lemmyonline.com/pictrs/image/f845d35a-ac0b-4f43-88ff-695e5a3a3ad1.png)

Yup, always gotta be that one single-threaded program. In this case, it appears to be frigate.
I don't know about y'all.... But, I am really looking forward to CS:II
My apologies for the past day or so of downtime.

I had a work conference all of last week. On the last morning, around 4am, before I headed back to my timezone, "something" inside of my kubernetes cluster took a dump. While I can remotely reboot nodes, and even access them, the scope of what went wrong was far above what I can accomplish remotely via my phone.

After returning home yesterday evening, I started plugging away a bit, and quickly realized... something was seriously wrong with the cluster. As such, from previous experience, I found it was quicker to just tear it down, rebuild it, and restore from backups. So, I started that process. However, since I had not seen my wife in a week, I felt spending some time with her was slightly more important at the time. But I was able to finish getting everything restored today.

Due to the issues before, I will be rebuilding some areas of my infrastructure to be slightly more redundant. Whereas before I had bare-metal machines running ubuntu, going forward I will be leveraging proxmox for compute clustering and HA, along with ceph for storage HA. That being said, sometime soon I will have ansible playbooks set up to get everything pushed out and running.

Again, my apologies for the downtime. It was completely unexpected, and came out of the blue. I honestly still have no idea what happened. The best suspicion I have is disk failure... and after rebooting the machine, it came back to life? Regardless, I will work to improve this moving forward. Also, I don't plan on being out of town soon... so that will help too.

There may be some slight downtime later on as I am working on and moving things around. If that is the case, it will be short. But for now, the goal is just restoring my other services and getting back up and running.

## Update 2023-07-23 CST

There are still a few kinks being worked out. I have noticed occasionally things are disconnecting still. Working on ironing out the issues; please bear with me. (This issue appears to be due to a single realtek NIC in the cluster... realtek = bad.)

### Update 9:30pm CST

Well, it has been a "fun" evening. I have been finding issues left and right:

1. A piece of bad fiber cable.
2. The aforementioned server with a realtek NIC, which was bringing down the entire cluster.
3. STP/RSTP issues, likely caused by the above two issues.

Still working and improving...

## Update 2023-07-24

### Update 9am CST

Working out a few minor kinks still. The finish line is in sight.

### Update 5pm CST

Happened to find a SFP+ module which was in the process of dying. Swapped it out with a new one, and... magically, many of the spotty network issues went away.

![](https://lemmyonline.com/pictrs/image/24a38987-42ee-432a-910c-c733b83dd731.png)

Have new fiber ordered; will install later this week.

### Update 9pm CST

1. Broken/intermittent SFP+ module replaced.
2. Server with crappy realtek NIC removed. Re-added server with 10G SFP+ connectivity.
3. Clustered servers moved to a dedicated switch.
4. New fiber stuff ordered to replace the longer-distance (50ft) 10G copper runs.

I am aware of the current performance issues. These will start going away as I expand out the cluster. Still focusing on rebuilding everything to a working state.
cross-posted from: https://lemmyonline.com/post/53654

> I dug through the prime deals and picked out the relevant devices which I have personal experience using, and would recommend to others.
>
> Every device linked will work with home assistant, more or less natively. The majority of them are flashable to esphome or tasmota. And ALL of them will work 100% locally.
I dug through the prime deals and picked out the relevant devices which I have personal experience using, and would recommend to others.

Every device linked will work with home assistant, more or less natively. The majority of them are flashable to esphome or tasmota. And ALL of them will work 100% locally.
https://old.lemmyonline.com/

Just an alternative front-end... if you prefer a different front-end.