Content Navigation
- 1. Graphics Card (GPU)
- 1.1 Basic Parameters for Evaluating Graphics Card Performance
- 1.2 Some Suggestions When Choosing a Graphics Card
- 1.3 GPU Tier Breakdown by Brand
- 1.4 Personal Take
- 1.5 GPU Benchmark Hierarchy
- 1.6 Nvidia Card Specs
- 1.7 NVIDIA Technology Comparison Across GPU Series
- 1.8 Graphics Card Buying Reference
- 2. Processor (CPU)
- 3. Memory (RAM)
- 4. Solid State Drives (SSD)
- 5. Cooling System
- 6. Operating System
- 7. Pre-built PC vs. DIY Assembly
- 8. Computer Hardware Priority
1. Graphics Card (GPU)
1.1 Basic Parameters for Evaluating Graphics Card Performance
What computer hardware is needed to play with AI? Graphics cards are the most critical hardware in the AI field, and the most important parameter to focus on within graphics cards is the video memory (VRAM) specification.
- VRAM Capacity
- Determines the scale of data the GPU can process at one time, directly impacting the execution capability of tasks like model training and high-resolution rendering.
- It’s like a water pool; if the pool is small, it can’t hold enough water, and the water will overflow, leading to a VRAM overflow (out of memory).
- Many AI applications have minimum VRAM requirements, similar to a computer’s RAM; if it’s too small, the application may fail to open or run very slowly.
- VRAM Bandwidth
- Bandwidth = Memory Clock Speed × Memory Bus Width / 8.
- High bandwidth improves data throughput efficiency, which is especially crucial for AI training and scientific computing.
- It’s like a pipe for transporting water. If the pipe is small, water transport is slow, and efficiency is low.
- Memory Interface Width
- Similar to the number of lanes on a highway; the more lanes (wider bus width), the greater the amount of data that can pass through per unit of time.
- Common bus widths range from 128-bit to 512-bit.
- For example, the RTX 5090 has a 512-bit bus width, the RTX 5080 has 256-bit, and the RTX 5060 has 128-bit.
- Clearly, higher values indicate better performance.
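The bandwidth formula above, plus a rough estimate of how much VRAM a model's weights occupy, can be sketched as a back-of-the-envelope calculator. A minimal example in Python; the RTX 4090 figures (21 Gbps effective GDDR6X on a 384-bit bus) are NVIDIA's published specs, while the 2-bytes-per-parameter fp16 estimate is only a rule of thumb that ignores activations, caches, and framework overhead:

```python
def vram_bandwidth_gbps(mem_clock_gbps_effective: float, bus_width_bits: int) -> float:
    """Bandwidth (GB/s) = effective memory clock (Gbps per pin) x bus width / 8."""
    return mem_clock_gbps_effective * bus_width_bits / 8

def fp16_model_vram_gb(num_params: float) -> float:
    """Rough VRAM needed just to hold the weights: 2 bytes per parameter at fp16."""
    return num_params * 2 / 1e9

# RTX 4090: ~21 Gbps effective GDDR6X on a 384-bit bus
print(vram_bandwidth_gbps(21, 384))   # 1008.0 GB/s
# A 7B-parameter model needs ~14 GB just for fp16 weights
print(fp16_model_vram_gb(7e9))        # 14.0
```

If the second number exceeds your card's VRAM capacity, you are headed for exactly the "water overflowing the pool" out-of-memory situation described above.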
1.2 Some Suggestions When Choosing a Graphics Card
- The 4060 Ti 16GB is the recommended minimum choice when purchasing a new graphics card.
- Of course, cards with slightly lower specifications can also be used; various 8GB versions of the 4000 and 3000 series are still acceptable, but there’s a difference between just getting by and performing well.
- If possible, buy the highest-end models like the 4090 or 5090.
- Try to choose NVIDIA graphics cards.
- The vast majority of AI applications are developed based on NVIDIA, and their documentation is more comprehensive. There are more NVIDIA users worldwide, making it easier to solve problems if they arise; with AMD, you might have to figure things out on your own.
- It’s not that AMD cards can’t be used; many AI projects theoretically support AMD, but their efficiency is lower than NVIDIA’s, and they come with more hassles. Not recommended for non-professional users.
- This is quite straightforward, namely what is commonly referred to as “software ecosystem issues”.
- NVIDIA RTX Series
- Regarding 30-series graphics cards:
- Their performance is still decent, but there is a significant risk of buying a card that was previously used for mining.
- For example, the 3060 Ti 12GB version offers good value for money and is worth buying if you can confirm the seller is trustworthy and the price is right.
- Regarding 40-series graphics cards:
- Except for the 4090, other models do not offer a significant enough improvement over the 30-series.
- Regarding 50-series graphics cards:
- They may not offer a significant enough improvement over the 40-series, but as of 2025 they are the latest and most powerful.
- What about AMD graphics cards?
- As the saying goes, “Tactics are like water, ever-changing and shapeless; situations are constantly evolving and interconverting.”
- On one hand, everyone knows Nvidia cards are good, so many people buy them, they are expensive, and sometimes out of stock.
- On the other hand, AMD may be aware that its graphics card lineup isn’t as dominant as Nvidia’s. Consequently, certain models can offer very good value, with slightly lower prices but stronger specifications.
- Therefore, I believe that in 2025, AMD graphics cards will also be a good choice.
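If you do go the NVIDIA route, one quick sanity check is whether the driver actually sees your card and its VRAM. Below is a hedged sketch built around `nvidia-smi`; the parsing helper assumes the standard `--format=csv,noheader,nounits` output and is purely illustrative, not an official API:

```python
import shutil
import subprocess

def parse_gpu_csv(csv_text: str) -> list[tuple[str, int]]:
    """Parse `nvidia-smi --query-gpu=name,memory.total --format=csv,noheader,nounits`
    output into (name, vram_mib) pairs."""
    gpus = []
    for line in csv_text.strip().splitlines():
        name, mem = line.rsplit(",", 1)
        gpus.append((name.strip(), int(mem.strip())))
    return gpus

def detect_nvidia_gpus() -> list[tuple[str, int]]:
    """Return detected NVIDIA GPUs, or an empty list if the driver tools are absent."""
    if shutil.which("nvidia-smi") is None:
        return []
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_csv(out)

# Example of the parsing step on a sample output line
print(parse_gpu_csv("NVIDIA GeForce RTX 4060 Ti, 16380\n"))
```

An empty result from `detect_nvidia_gpus()` on a machine with an NVIDIA card usually means the driver is not installed, which is the first thing most AI frameworks will complain about.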
1.3 GPU Tier Breakdown by Brand
1.3.1 ASUS
- Budget: DUAL
- Entry-level: TUF Gaming (formerly ATS)
- Mid-range: ProArt (designer-focused)
- High-end/Sub-flagship: TUF Gaming (gamer-focused)
- Flagship: ROG Strix
- Ultra-flagship: ROG Matrix
1.3.2 MSI
- Budget: Ventus
- Mid-range: Gaming (discontinued)
- High-end/Sub-flagship: Gaming X Trio
- Flagship: Suprim X
- Ultra-flagship: Lightning (discontinued)
1.3.3 GIGABYTE
(Refer to ASUS/MSI tiers for analogous product lines.)
1.3.4 Colorful
Colorful is a reputable second-tier brand with reliable quality and after-sales support. Its iGame series is the flagship lineup:
- Budget: Netral/Lei Feng
- Entry-level: BattleAx
- Mid-range: Advanced (triple-fan) / Ultra (dual-fan)
- Flagship: iGame Vulcan/Neptune
- Ultra-flagship: Kudan
1.3.5 ZOTAC
A Hong Kong-based manufacturer with strong R&D capabilities (its sister brand Sapphire dominates AMD’s GPU market).
- Budget: Twin Edge
- Mid-range: X-Gaming
- High-end: Trinity OC
- Flagship: AMP Extreme/PGF
1.3.6 GALAX
- Budget: Black/White General
- Mid-range: Metal Master
- Flagship: HOF (Hall of Fame)
1.4 Personal Take
- What we commonly refer to as a “graphics card” encompasses the core components: GPU chip, VRAM, circuit board, backplate, casing, cooling system, and fans.
- The GPU chip reigns supreme—exclusively fabricated by industry giants Nvidia and AMD.
- Brands like ASUS and MSI procure these chips from Nvidia/AMD, then layer on proprietary cooling solutions, aesthetic designs, and engineering tweaks to create consumer-ready products.
- Therefore, graphics cards with the same GPU chip but from different brands generally have similar performance. The differences mainly stem from each manufacturer’s business strategy, design, and focus.
- For instance, both first-tier and second-tier brands manufacture RTX 4090 GPUs, yet their actual performance gap is likely under 10%, or even as little as 5%, sometimes negligible, while price tags can differ by hundreds of dollars.
- Thus, in certain scenarios, strong second-tier or regular second-tier brands offer significantly better value.
- A Price-Performance Example
| Metric | ASUS ROG Strix 4090 | ZOTAC AMP Extreme 4090 |
| --- | --- | --- |
| Boost Clock | 2640 MHz | 2580 MHz |
| VRM Phases | 24+4 | 20+3 |
| Price | $1,999 | $1,599 |

Performance gap: <5%. Price gap: 25%.
Although ZOTAC is a second-tier brand, and the ASUS ROG Strix 4090 is unquestionably a fine product, for these two specific cards the ZOTAC might actually be the better choice, especially for budget-conscious gamers.
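The comparison above boils down to performance per dollar. Here is a toy calculation using the illustrative numbers from the table; the ~4% performance gap and both prices are examples, not current quotes:

```python
def value_score(relative_perf: float, price: float) -> float:
    """Performance per dollar: bigger is better."""
    return relative_perf / price

# Illustrative numbers from the comparison above
strix = value_score(1.00, 1999)   # baseline card
zotac = value_score(0.96, 1599)   # assuming a ~4% performance deficit

print(f"ASUS:  {strix:.6f} perf/$")
print(f"ZOTAC: {zotac:.6f} perf/$")
print(f"ZOTAC delivers {zotac / strix:.0%} of the ASUS value per dollar")
```

Even with the performance deficit priced in, the cheaper card comes out roughly 20% ahead on value per dollar, which is the whole argument for strong second-tier brands.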
1.5 GPU Benchmark Hierarchy
1.6 Nvidia Card Specs
1.6.1 Compare 50 Series Specs
1.6.2 Compare 40 Series Specs
GPU Model | RTX 4090 | RTX 4080 SUPER | RTX 4080 | RTX 4070 Ti SUPER | RTX 4070 Ti | RTX 4070 SUPER | RTX 4070 | RTX 4060 Ti | RTX 4060 |
---|---|---|---|---|---|---|---|---|---|
GPU Engine Specs: | |||||||||
NVIDIA CUDA® Cores | 16384 | 10240 | 9728 | 8448 | 7680 | 7168 | 5888 | 4352 | 3072 |
Shader Cores | Ada Lovelace 83 TFLOPS | Ada Lovelace 52 TFLOPS | Ada Lovelace 49 TFLOPS | Ada Lovelace 44 TFLOPS | Ada Lovelace 40 TFLOPS | Ada Lovelace 36 TFLOPS | Ada Lovelace 29 TFLOPS | Ada Lovelace 22 TFLOPS | Ada Lovelace 15 TFLOPS |
Ray Tracing Cores | 3rd Generation 191 TFLOPS | 3rd Generation 121 TFLOPS | 3rd Generation 113 TFLOPS | 3rd Generation 102 TFLOPS | 3rd Generation 93 TFLOPS | 3rd Generation 82 TFLOPS | 3rd Generation 67 TFLOPS | 3rd Generation 51 TFLOPS | 3rd Generation 35 TFLOPS |
Tensor Cores (AI) | 4th Generation 1321 AI TOPS | 4th Generation 836 AI TOPS | 4th Generation 780 AI TOPS | 4th Generation 706 AI TOPS | 4th Generation 641 AI TOPS | 4th Generation 568 AI TOPS | 4th Generation 466 AI TOPS | 4th Generation 353 AI TOPS | 4th Generation 242 AI TOPS |
Boost Clock (GHz) | 2.52 | 2.55 | 2.51 | 2.61 | 2.61 | 2.48 | 2.48 | 2.54 | 2.46 |
Base Clock (GHz) | 2.23 | 2.29 | 2.21 | 2.34 | 2.31 | 1.98 | 1.92 | 2.31 | 1.83 |
Memory Specs: | |||||||||
Standard Memory Config | 24 GB GDDR6X | 16 GB GDDR6X | 16 GB GDDR6X | 16 GB GDDR6X | 12 GB GDDR6X | 12 GB GDDR6X | 12 GB GDDR6 / 12 GB GDDR6X | 16 GB GDDR6 or 8 GB GDDR6 | 8 GB GDDR6 |
Memory Interface Width | 384-bit | 256-bit | 256-bit | 256-bit | 192-bit | 192-bit | 192-bit | 128-bit | 128-bit |
Display Support: | |||||||||
Maximum Digital Resolution (1) | 4K at 240Hz or 8K at 60Hz with DSC, HDR | 4K at 240Hz or 8K at 60Hz with DSC, HDR | 4K at 240Hz or 8K at 60Hz with DSC, HDR | 4K at 240Hz or 8K at 60Hz with DSC, HDR | 4K at 240Hz or 8K at 60Hz with DSC, HDR | 4K at 240Hz or 8K at 60Hz with DSC, HDR | 4K at 240Hz or 8K at 60Hz with DSC, HDR | 4K at 240Hz or 8K at 60Hz with DSC, HDR | 4K at 240Hz or 8K at 60Hz with DSC, HDR |
Standard Display Connectors | HDMI(2), 3x DisplayPort(3) | HDMI(2), 3x DisplayPort(3) | HDMI(2), 3x DisplayPort(3) | HDMI(2), 3x DisplayPort(3) | HDMI(2), 3x DisplayPort(3) | HDMI(2), 3x DisplayPort(3) | HDMI(2), 3x DisplayPort(3) | HDMI(2), 3x DisplayPort(3) | HDMI(2), 3x DisplayPort(3) |
Multi Monitor | up to 4(4) | up to 4(4) | up to 4(4) | up to 4(4) | up to 4(4) | up to 4(4) | up to 4(4) | up to 4(4) | up to 4(4) |
HDCP | 2.3 | 2.3 | 2.3 | 2.3 | 2.3 | 2.3 | 2.3 | 2.3 | 2.3 |
Technology Support: | |||||||||
NVIDIA Architecture | Ada Lovelace | Ada Lovelace | Ada Lovelace | Ada Lovelace | Ada Lovelace | Ada Lovelace | Ada Lovelace | Ada Lovelace | Ada Lovelace |
Ray Tracing | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
NVIDIA DLSS | DLSS 3 | DLSS 3 | DLSS 3 | DLSS 3 | DLSS 3 | DLSS 3 | DLSS 3 | DLSS 3 | DLSS 3 |
NVIDIA Reflex | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
NVIDIA Broadcast | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
PCI Express Gen 4 | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Resizable BAR | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
NVIDIA FreeStyle | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
NVIDIA ShadowPlay | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
NVIDIA Highlights | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
NVIDIA GPU Boost™ | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
NVIDIA NVLink™ (SLI-Ready) | No | No | No | No | No | No | No | No | No |
Vulkan RT API, OpenGL 4.6 | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
NVIDIA Encoder (NVENC) | 2x 8th Generation | 2x 8th Generation | 2x 8th Generation | 2x 8th Generation | 2x 8th Generation | 1x 8th Generation | 1x 8th Generation | 1x 8th Generation | 1x 8th Generation |
NVIDIA Decoder (NVDEC) | 5th Generation | 5th Generation | 5th Generation | 5th Generation | 5th Generation | 5th Generation | 5th Generation | 5th Generation | 5th Generation |
AV1 Encode | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
AV1 Decode | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
CUDA Capability | 8.9 | 8.9 | 8.9 | 8.9 | 8.9 | 8.9 | 8.9 | 8.9 | 8.9 |
VR Ready | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Card Dimensions: | |||||||||
Length | 304 mm | 304 mm | 304 mm | Varies by manufacturer | Varies by manufacturer | 244 mm | 244 mm | 244 mm | Varies by manufacturer |
Width | 137 mm | 137 mm | 137 mm | Varies by manufacturer | Varies by manufacturer | 112 mm | 112 mm | 112 mm | Varies by manufacturer |
Slot | 3-Slot | 3-Slot | 3-Slot | Varies by manufacturer | Varies by manufacturer | 2-Slot | 2-Slot | 2-Slot | 2-Slot |
Thermal and Power Specs: | |||||||||
Maximum GPU Temperature (in C) | 90 | 90 | 90 | 90 | 90 | 90 | 90 | 90 | 90 |
Total Graphics Power (W) | 450 | 320 | 320 | 285 | 285 | 220 | 200 | 165 or 160 | 115 |
Required System Power (W) (5) | 850 | 750 | 750 | 700 | 700 | 650 | 650 | 550 | 550 |
Required Power Connectors | 3x PCIe 8-pin cables (adapter in box) OR 1x 450 W or greater PCIe Gen 5 cable | 3x PCIe 8-pin cables (adapter in box) OR 1x 450 W or greater PCIe Gen 5 cable | 3x PCIe 8-pin cables (adapter in box) OR 1x 450 W or greater PCIe Gen 5 cable | 2x PCIe 8-pin cables (adapter in box) OR 300 W or greater PCIe Gen 5 cable | 2x PCIe 8-pin cables (adapter in box) OR 300 W or greater PCIe Gen 5 cable | 2x PCIe 8-pin cables (adapter in box) OR 300 W or greater PCIe Gen 5 cable. Certain manufacturer models may use 1x PCIe 8-pin cable | 2x PCIe 8-pin cables (adapter in box) OR 300 W or greater PCIe Gen 5 cable Certain manufacturer models may use 1x PCIe 8-pin cable | 1x PCIe 8-pin cables (adapter in box) OR 300 W or greater PCIe Gen 5 cable. Certain manufacturer models may use 1x PCIe 8-pin cable |
1.6.3 Compare 30 Series Specs
1.6.4 Compare 20 Series Specs
1.7 NVIDIA Technology Comparison Across GPU Series
Generally speaking, the newest is always the best.
| | RTX 50 Series | RTX 40 Series | RTX 30 Series | RTX 20 Series | GTX 16 Series | GTX 10 Series |
---|---|---|---|---|---|---|---|
NVIDIA Architecture | Architecture Name | Blackwell | Ada Lovelace | Ampere | Turing | Turing | Pascal |
Streaming Multiprocessors | 2x FP32 | 2x FP32 | 2x FP32 | 1x FP32 | 1x FP32 | 1x FP32 | |
Ray Tracing Cores | Gen 4 | Gen 3 | Gen 2 | Gen 1 | – | – | |
Tensor Cores (AI) | Gen 5 | Gen 4 | Gen 3 | Gen 2 | – | – | |
Platform | NVIDIA DLSS | DLSS 4: Super Resolution, DLAA, Ray Reconstruction, Frame Generation, Multi Frame Generation | DLSS 3.5: Super Resolution, DLAA, Ray Reconstruction, Frame Generation | DLSS 2: Super Resolution, DLAA, Ray Reconstruction | DLSS 2: Super Resolution, DLAA, Ray Reconstruction | – | – |
| NVIDIA Reflex | Reflex 2: Low Latency Mode, Frame Warp (Coming Soon) | Reflex 2: Low Latency Mode, Frame Warp (Coming Soon) | Reflex 2: Low Latency Mode, Frame Warp (Coming Soon) | Reflex 2: Low Latency Mode, Frame Warp (Coming Soon) | Reflex: Low Latency Mode | Reflex: Low Latency Mode |
NVIDIA Broadcast | Yes | Yes | Yes | Yes | – | – | |
NVIDIA App | Yes | Yes | Yes | Yes | Yes | Yes | |
Game Ready Drivers | Yes | Yes | Yes | Yes | Yes | Yes | |
NVIDIA Studio Drivers | Yes | Yes | Yes | Yes | Yes | Yes | |
NVIDIA ShadowPlay | Yes | Yes | Yes | Yes | Yes | Yes | |
NVIDIA Highlights | Yes | Yes | Yes | Yes | Yes | Yes | |
NVIDIA Ansel | Yes | Yes | Yes | Yes | Yes | Yes | |
NVIDIA Freestyle | Yes | Yes | Yes | Yes | Yes | Yes | |
VR Ready | Yes | Yes | Yes | Yes | GTX 1650 Super or higher | GTX 1060 or higher | |
NVIDIA Omniverse | Yes | Yes | Yes | Yes | – | – | |
RTX Remix | Yes | Yes | RTX 3060 Ti or greater | – | – | – | |
Additional Features | PCIe | Gen 5 | Gen 4 | Gen 4 | Gen 3 | Gen 3 | Gen 3 |
NVIDIA Encoder (NVENC) | Gen 9 | Gen 8 | Gen 7 | Gen 7 | Gen 6 | Gen 6 | |
NVIDIA Decoder (NVDEC) | Gen 6 | Gen 5 | Gen 5 | Gen 4 | Gen 4 | Gen 3 | |
AV1 Encode | Yes | Yes | – | – | – | – | |
AV1 Decode | Yes | Yes | Yes | – | – | – | |
CUDA Capability | 12.0 | 8.9 | 8.6 | 7.5 | 7.5 | 6.1 | |
DX12 Ultimate | Yes | Yes | Yes | Yes | – | – |
1.8 Graphics Card Buying Reference
2. Processor (CPU)
- In the field of artificial intelligence, the CPU also matters, but it is not as crucial as the GPU, which is a make-or-break component.
- An i3 processor may suffice for basic use but is not recommended. Use at least an i5, preferably an i7 or i9.
- While the CPU is not the most critical component, some AI-related applications can utilize CPU computing power. Naturally, a more powerful CPU provides a better overall experience.
2.1 Intel CPU Performance Degradation Issues
- Intel 13th and 14th generation processors are known to suffer from performance degradation. This typically manifests as a significant drop in performance after extended use, potentially leading to system instability and even the Blue Screen of Death (BSOD).
- Performance loss can reach up to 30%, and this decline is irreversible.
- The issue does not always occur, but it is more likely under prolonged high-load conditions. For light-use scenarios like daily office work, the probability of encountering such issues is significantly lower.
- Generally speaking, the choice comes down to either Intel’s 12th-gen CPUs or AMD’s.
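If you want to check whether an existing machine falls into the affected generations, a simple string check on the CPU brand string can serve as a first pass. This is an illustrative heuristic only; brand strings vary by platform, and matching the generation is not a diagnosis of actual degradation:

```python
import re

def flag_degradation_risk(cpu_brand: str) -> bool:
    """Heuristic: flag Intel Core 13th/14th-gen parts (e.g. i9-13900K, i7-14700K),
    the generations discussed above. Purely a string check, not a diagnosis."""
    m = re.search(r"i[3579]-(\d{2})\d{3}", cpu_brand)
    return bool(m) and m.group(1) in {"13", "14"}

print(flag_degradation_risk("13th Gen Intel(R) Core(TM) i9-13900K"))  # True
print(flag_degradation_risk("12th Gen Intel(R) Core(TM) i5-12400F"))  # False
print(flag_degradation_risk("AMD Ryzen 9 7950X"))                     # False
```

On your own machine you could feed in `platform.processor()` or the output of a system utility; the regex only looks at the model number, so cosmetic differences in the brand string don't matter.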
2.2 Can AMD Processors Be Used for Artificial Intelligence?
- Due to its historically dominant position in both consumer and server markets, Intel has a higher market share. As a result, certain software and frameworks include optimizations specifically tailored for Intel CPUs, potentially offering a slight advantage.
- However, the performance degradation issues present in Intel’s 13th and 14th generation CPUs are a serious concern that cannot be ignored.
- Therefore, in 2025, AMD processors have become a very viable and competitive choice for AI-related tasks.
3. Memory (RAM)
- Start with at least 32GB of RAM. Dual-channel 32GB (16GB + 16GB) is recommended, 64GB (32GB + 32GB) is better, and 128GB or 256GB is better still; more capacity generally helps.
- Nowadays, many applications load data into RAM to boost efficiency, as RAM read/write speeds significantly exceed those of hard drives.
- You can even use software like RAMDisk to simulate excess RAM as a virtual hard disk. Such a virtual RAM drive is particularly well-suited for specific scenarios, such as serving as a cache area for download software.
- Memory modules are easy to purchase and well-protected on the motherboard (similar to CPUs), with minimal risk of damage. Focus on distinguishing between DDR4/DDR5 and verifying maximum capacity compatibility.
- There’s no need to buy so-called “geek-grade” memory modules, nor is it worth paying a premium for slightly tighter timings from hand-picked memory chips. A standard 64GB memory kit is far more practical than a 32GB top-tier one.
- DDR5 offers superior performance. While the actual improvement may not be groundbreaking, it’s still worth considering because in the semiconductor industry, the latest technology is generally the optimal choice. However, don’t forget to verify whether your motherboard supports DDR5.
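The sizing guidance above can be encoded as a toy helper that picks a matched dual-channel kit. The function name and the doubling logic are my own illustrative choices, not a compatibility checker; always verify what your motherboard actually supports:

```python
def suggest_ram_kit(target_gb: int) -> tuple[int, int]:
    """Suggest a dual-channel kit (module_gb, module_count) following the guidance
    above: at least 32 GB total, in two matched modules. Toy example only."""
    total = max(32, target_gb)       # 32 GB is the recommended floor
    module = 16
    # double the module size until two of them cover the target
    while module * 2 < total:
        module *= 2
    return (module, 2)

print(suggest_ram_kit(32))   # (16, 2)
print(suggest_ram_kit(64))   # (32, 2)
print(suggest_ram_kit(100))  # (64, 2)
```

The point of the helper is simply that two matched modules (dual-channel) at the next standard size up is almost always the practical answer, rather than odd mixes of capacities.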
3.1 Buying Reference
4. Solid State Drives (SSD)
4.1 Everything You Need to Know About SSDs
First, let’s clarify some common technical terms. I understand that this involves many specialized terms and acronyms like M.2, NVMe, SATA, PCIe, AHCI, which can be overwhelming for general consumers. I will dedicate a separate article to explain these in detail later.
For now, we’ll focus on the core SSD-related considerations:
- Larger capacity is better, as it allows storing more data.
- Larger SSDs provide more storage space for data, applications, and multimedia.
- High-capacity drives (e.g., 2TB vs. 1TB) also tend to have higher Total Bytes Written (TBW) ratings, which directly correlate with lifespan.
- Faster read/write speeds are better.
- Prioritize SSDs with PCIe 4.0/5.0 interfaces and NVMe protocols, which deliver sequential speeds exceeding 7,000 MB/s.
- Avoid SATA-based SSDs (max ~550 MB/s) for performance-critical tasks like gaming or video editing.
- Longer design lifespan is better, which is related to the following metrics:
- Total Bytes Written (TBW): higher TBW values indicate greater durability. For example, a 2TB SSD often has double the TBW of a 1TB model.
- Over-Provisioning (OP)
- Controller algorithms: while specific algorithms are rarely disclosed, mid-to-high-tier SSDs typically employ advanced wear-leveling and garbage collection mechanisms to optimize lifespan.
- NAND flash type (e.g., SLC, MLC, TLC, QLC)
- Hierarchy: SLC > MLC > TLC > QLC in terms of durability and performance.
- SLC/MLC: Primarily used in enterprise-grade SSDs due to high costs.
- TLC: Dominates consumer markets with balanced cost and reliability.
- QLC: Marketed as cost-effective but lacks long-term user validation.
- After-sales Policy
- Although it can be very frustrating when SSD damage or data loss occurs even under normal use, it is still a consolation if the manufacturer or seller can provide good after-sales service, such as free replacement.
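The TBW-to-lifespan relationship mentioned above is simple arithmetic. A sketch with assumed example ratings: 600 TBW for a 1TB drive and 1200 TBW for a 2TB drive are typical of consumer TLC models, and 100 GB/day is a deliberately heavy write load:

```python
def ssd_lifespan_years(tbw_terabytes: float, daily_writes_gb: float) -> float:
    """Years until the rated TBW is exhausted at a constant daily write volume."""
    return tbw_terabytes * 1000 / daily_writes_gb / 365

# 1 TB drive rated 600 TBW vs. 2 TB drive rated 1200 TBW, both at 100 GB/day
print(round(ssd_lifespan_years(600, 100), 1))    # 16.4
print(round(ssd_lifespan_years(1200, 100), 1))   # 32.9
```

Even under this heavy load, a typical rated TBW outlasts the warranty by a wide margin, which is why capacity (and with it TBW) usually matters more than chasing endurance specs.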
4.2 SSD Buying Reference
5. Cooling System
As graphics cards, processors, and PCIe 5.0 solid-state drives (SSDs) become increasingly powerful, the heat they generate has also risen sharply.
The cooling system is therefore becoming increasingly important: at best, inadequate cooling throttles the hardware’s performance; at worst, it can damage the hardware.
5.1 CPU Cooling
- Liquid Cooler (Water Cooling Radiator)
  - Advantages
    - Theoretically, liquid cooling systems are better, offering higher cooling efficiency.
    - More space-saving; some ITX builds can only use liquid cooling.
  - Disadvantages
    - Uncertain lifespan; product quality, the computer’s usual workload, and vibrations from moving or shipping can all affect its usable years. It could last two to three years, or even seven to eight.
    - In certain circumstances, there is a risk of liquid leakage, which can in turn damage other hardware components.
- Air Cooler (Air Cooling Radiator)
  - Advantages
    - Durable and reliable.
  - Disadvantages
    - High-end air coolers, designed for better performance, are often bulky, and some computer cases cannot accommodate them.
    - Cooling efficiency is not as good as liquid cooling.
- My Choice
  - I personally prefer air cooling. A top-tier air cooler is unlikely to break and can be used for many years.
5.2 Solid State Drive (SSD) Cooling
Some high-performance SSDs nowadays are very powerful but generate an exaggerated amount of heat. There are now products specifically designed for SSD cooling. If the case and motherboard space allow for installation, it is recommended to install one.
5.3 Graphics Card Cooling
6. Operating System
For general users, Windows 10 & Windows 11 are preferred.
Unlike graphics cards, the operating system is not a life-or-death issue. Most AI applications also support Linux and macOS. Furthermore, programs can be run within containers using virtualization technologies like virtual machines (VMs) or Docker, but this can be more troublesome for ordinary users.
7. Pre-built PC vs. DIY Assembly
It’s generally much better to buy individual components and assemble the PC yourself. Aspects like user experience and computer hardware upgradeability are far superior compared to pre-built systems.
As commercial entities, pre-built PC manufacturers always have profit as their primary consideration. Some manufacturers might sacrifice quality to squeeze out more profit.
8. Computer Hardware Priority
- Graphics Card > Memory >= SSD > CPU > Motherboard >= Power Supply
- Although I rank the power supply last, I am also very clear that if the power supply is insufficient or unstable, there is a risk of damaging other hardware.
- My point is: ensure you have enough to eat first, then consider eating well. (In other words: meet basic needs first, then pursue higher quality.)
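The power-supply caveat above can be made concrete with a rough headroom calculation. The 50% headroom figure and the example wattages below are common rules of thumb of my own choosing, not a vendor recommendation; always check the GPU maker's stated system power requirement as well:

```python
def recommended_psu_watts(component_watts: list[float], headroom: float = 0.5) -> int:
    """Sum component power draw, add headroom (50% by default),
    and round up to the next 50 W tier."""
    total = sum(component_watts) * (1 + headroom)
    return int(-(-total // 50) * 50)  # ceil to a 50 W step

# Example build: 450 W GPU (an RTX 4090-class card), 125 W CPU,
# ~75 W for motherboard, RAM, drives, and fans
print(recommended_psu_watts([450, 125, 75]))  # 1000
```

Generous headroom is what keeps a power supply from running near its limit under transient GPU spikes, which is precisely the "insufficient or unstable" scenario that risks damaging other hardware.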