Positioning
AI Infrastructure Intelligence for investors, founders, operators, analysts, and B2B sales teams. Map the companies building the AI data center stack.
Content system
Turn one research thesis into SEO pages, maps, profiles, comparison pages, newsletter issues, LinkedIn posts, X posts, and Chinese social drafts.
Audience
Investors, founders, infrastructure operators, analysts, supplier sales teams, recruiters, and researchers who need a practical view of the AI infrastructure ecosystem.
Distribution
Website pages create search surface area. LinkedIn and X distribute insights. The newsletter builds an owned audience. Chinese drafts repurpose the map for WeChat, Zhihu, and Xiaohongshu.
Cadence
Week 1 builds the SEO foundation; week 2 publishes maps and starts distribution; week 3 adds comparisons and Chinese repurposing; week 4 launches the newsletter and prepares the paid watchlist.
| Day | Channel | Title | Type | Keyword | Repurpose plan |
|---|---|---|---|---|---|
| 1 | Website | Launch AI Infrastructure Map homepage | Core page | AI infrastructure companies | Turn hero thesis into LinkedIn post and X thread. |
| 2 | Website | Publish Optical Interconnect & CPO category | Category page | AI optical interconnect companies | Create optical stack visual for LinkedIn. |
| 3 | Website | Publish AI Networking category | Category page | AI networking companies | Repurpose into short X post on data movement. |
| 4 | Website | Publish Power & Cooling category | Category page | AI data center power companies | Make four-bottleneck carousel for LinkedIn. |
| 5 | Website | Publish company directory and first profiles | Company profiles | AI data center infrastructure companies | Share neutral company map note. |
| 6 | LinkedIn | AI infrastructure is no longer only about GPUs | Social post | AI infrastructure | Use as newsletter opening theme. |
| 7 | X | The next AI bottlenecks | Short post | AI data center infrastructure | Collect replies for FAQ additions. |
| 8 | Website | Publish AI Infrastructure Stack Map | Industry map | AI infrastructure stack | Turn layers into five LinkedIn cards. |
| 9 | Website | Publish Optical Interconnect Company Map | Industry map | optical interconnect company map | Repurpose into Chinese WeChat explainer. |
| 10 | Website | Publish AI Networking Company Map | Industry map | AI networking company map | Create X thread on switching, ASICs, retimers. |
| 11 | Website | Publish Power & Cooling Map | Industry map | AI data center cooling companies | LinkedIn post on physical constraints. |
| 12 | LinkedIn | Optical interconnect company map | Social post | AI optical interconnect companies | Use comments to identify missing companies. |
| 13 | X | Retimers matter | Short post | retimers AI data center | Expand into comparison page FAQ. |
| 14 | LinkedIn | Power and cooling are now strategy | Social post | AI data center power companies | Use as newsletter section. |
| 15 | Website | Publish Broadcom vs Marvell | Comparison | Broadcom vs Marvell AI | Make side-by-side LinkedIn table. |
| 16 | Website | Publish Lumentum vs Coherent | Comparison | Lumentum vs Coherent | Create optical supplier comparison post. |
| 17 | Website | Publish CPO vs Pluggable Optics | Comparison | CPO vs pluggable optics | Repurpose into Xiaohongshu card set. |
| 18 | Website | Publish Liquid Cooling vs Air Cooling | Comparison | liquid cooling AI data center | Turn into Zhihu answer. |
| 19 | WeChat | AI 基建不只是 GPU | Chinese draft | AI 基建 GPU | Link back to English map. |
| 20 | Zhihu | 英伟达背后的 AI 算力产业链 | Chinese draft | AI 算力产业链 | Convert to long Q&A format. |
| 21 | Xiaohongshu | CPO 到底是什么 | Chinese draft | CPO 是什么 | Use 5-slide card format. |
| 22 | Newsletter | AI Infrastructure Weekly issue 001 | Newsletter | AI infrastructure weekly | Share archive link on LinkedIn and X. |
| 23 | Website | Publish Watchlist landing page | Landing page | AI infrastructure watchlist | Mention sample fields in LinkedIn post. |
| 24 | LinkedIn | The watchlist model | Social post | AI infrastructure watchlist | Collect beta subscriber emails. |
| 25 | Website | Publish 800G vs 1.6T explainer | SEO article | 800G vs 1.6T optical transceivers | Create X post with one key takeaway. |
| 26 | Website | Publish AI data center supply chain map article | SEO article | AI data center supply chain map | Use as newsletter issue 002 anchor. |
| 27 | Newsletter | AI Infrastructure Weekly issue 002 | Newsletter | optical interconnect AI data center | Repurpose reading list to LinkedIn. |
| 28 | LinkedIn | A weekly AI infrastructure workflow | Social post | AI infrastructure intelligence | Ask audience which map to expand next. |
| 29 | Website | Prepare first research pack outline | Monetization prep | AI infrastructure research pack | Turn outline into gated report plan. |
| 30 | Website | Review internal links and update sitemap | Operations | AI infrastructure map | Use analytics and search console data when available. |
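Day 30's internal-link and sitemap review can be partially automated. A minimal sketch, assuming the site's page URLs are kept in a plain Python list; the URLs below are hypothetical placeholders, not the real site structure:

```python
from datetime import date
from xml.etree import ElementTree as ET

# Hypothetical page URLs for the map site; replace with the real page inventory.
PAGES = [
    "https://example.com/",
    "https://example.com/maps/optical-interconnect",
    "https://example.com/compare/broadcom-vs-marvell",
]

def build_sitemap(urls):
    """Render a minimal sitemap.xml string from a list of page URLs."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
        # Records when the sitemap was regenerated, not when content changed.
        ET.SubElement(entry, "lastmod").text = date.today().isoformat()
    return ET.tostring(urlset, encoding="unicode")

sitemap_xml = build_sitemap(PAGES)
```

Regenerating the file weekly alongside the Sunday review keeps new category, map, and comparison pages discoverable without manual bookkeeping.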
LinkedIn · map
AI infrastructure is moving beyond a GPU-only story. The bottlenecks now include networking fabrics, optical interconnects, power distribution, thermal management, manufacturing, and deployment. The useful question is not only who makes accelerators. It is: which companies make the AI data center stack deployable?
X · insight
The next AI infrastructure bottlenecks are not only compute. Watch networking, optics, power, cooling, and rack-scale deployment. That is where AI capacity becomes physical infrastructure.
LinkedIn · map
Optical interconnect is a stack, not a single product category. Lasers, components, optical modules, DSPs, CPO, silicon photonics, manufacturing, and testing all sit in different layers. Mapping those roles helps separate direct data center exposure from enabling infrastructure exposure.
LinkedIn · comparison
Broadcom and Marvell are both important AI infrastructure semiconductor names, but they are not the same story. Broadcom is often mapped to switching silicon, custom ASICs, and a broader infrastructure platform. Marvell is often mapped to custom silicon, optical DSPs, and data infrastructure chips. The research lens is role, exposure, maturity, and customer concentration.
LinkedIn · comparison
Lumentum and Coherent both matter to optical infrastructure research, but through different company structures and product breadth. The better question is not which one is the better stock. It is which parts of the optical stack each company touches, how those exposures are reported, and where data center demand shows up in primary sources.
X · insight
CPO is a future architecture question; pluggable optics are today's operational reality. The important research split: power and bandwidth density versus serviceability and ecosystem readiness.
LinkedIn · map
AI data center power is not one line item. It includes grid access, substations, switchgear, UPS, power distribution, rack power, monitoring, and serviceability. A compute cluster can be delayed by constraints that start far outside the data hall.
LinkedIn · comparison
Liquid cooling is becoming a boardroom topic because AI racks raise the thermal density problem. The key research questions are facility readiness, service model, integration complexity, and which suppliers can support deployment at scale.
X · company-breakdown
Retimers and high-speed connectivity chips are easy to overlook because they are not the headline component. But AI infrastructure depends on clean, reliable data movement across boards, servers, and racks.
LinkedIn · weekly-update
The phrase "AI factory" only makes sense if the factory can be powered, cooled, networked, assembled, and operated. That is why the AI infrastructure map includes companies in optical interconnect, AI networking, power, cooling, manufacturing, testing, and deployment.
LinkedIn · comparison
One useful way to study AI interconnect is to ask where copper stops being practical and where optical becomes necessary. The answer depends on reach, bandwidth, power, cost, and architecture. That boundary is a company map, not a slogan.
X · insight
800G and 1.6T are not just bigger numbers. They represent pressure on optics, power, thermal design, switch architecture, and manufacturing quality.
LinkedIn · map
AI networking includes switching silicon, systems, Ethernet fabrics, custom ASICs, optical DSPs, retimers, active electrical cables, and operating software. A useful research workflow maps the role first, then the company.
LinkedIn · insight
AI deployment has made power and cooling strategic constraints. Vertiv, Eaton, Schneider Electric, GE Vernova, Modine, and nVent belong in AI infrastructure research because capacity is only useful when it can run reliably.
X · insight
The AI stack does not end at chip design. Manufacturing, testing, optical assembly, server integration, and rack deployment are all infrastructure.
LinkedIn · insight
When researching CPO, ask four questions: what problem does it solve, which systems need it first, what changes operationally, and which companies can manufacture it reliably? That keeps the topic grounded.
LinkedIn · map
It is tempting to label every AI infrastructure supplier as part of one ecosystem. A better approach: say companies are commonly discussed in relation to AI infrastructure ecosystems, then verify actual supplier or partner status from primary sources.
X · company-breakdown
Map the role before judging the company: platform, core supplier, high-beta supplier, or speculative technology. The category matters as much as the ticker.
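The role-first framing above can be made concrete as a tiny lookup sketch. The four role names come from the post; the example ticker-to-role assignments are purely hypothetical placeholders:

```python
from enum import Enum

class Role(Enum):
    PLATFORM = "platform"
    CORE_SUPPLIER = "core supplier"
    HIGH_BETA_SUPPLIER = "high-beta supplier"
    SPECULATIVE_TECHNOLOGY = "speculative technology"

# Hypothetical assignments for illustration only; real assignments should be
# argued from primary sources, not assumed.
watch_roles = {
    "EXAMPLE_PLATFORM": Role.PLATFORM,
    "EXAMPLE_OPTICS": Role.HIGH_BETA_SUPPLIER,
}

def describe(ticker):
    """Report a ticker's mapped role; the role comes before any judgment."""
    role = watch_roles.get(ticker)
    return f"{ticker}: {role.value if role else 'unmapped'}"
```

Keeping the mapping explicit forces the category question to be answered once per company instead of being re-argued in every post.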
LinkedIn · weekly-update
A simple weekly workflow: update company profiles, add one category page, publish one map, ship one comparison, repurpose into LinkedIn and Chinese social drafts, then collect questions for the next newsletter.
LinkedIn · company-breakdown
The AI Infrastructure Watchlist is not a trading list. It is a structured research database: company, category, AI relevance, risk level, key technologies, competitors, earnings keywords, and related suppliers.
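The watchlist fields listed above map naturally onto a record type. A minimal sketch, assuming a Python dataclass; the sample entry is an illustrative placeholder, not research output or a recommendation:

```python
from dataclasses import dataclass, field

@dataclass
class WatchlistEntry:
    """One row of the AI Infrastructure Watchlist; fields mirror the post."""
    company: str
    category: str            # e.g. "optical interconnect", "power & cooling"
    ai_relevance: str        # why the company appears on the map
    risk_level: str          # qualitative, e.g. "core" / "high-beta" / "speculative"
    key_technologies: list[str] = field(default_factory=list)
    competitors: list[str] = field(default_factory=list)
    earnings_keywords: list[str] = field(default_factory=list)
    related_suppliers: list[str] = field(default_factory=list)

# Illustrative entry; every field value here is a placeholder.
entry = WatchlistEntry(
    company="Example Optics Co",
    category="optical interconnect",
    ai_relevance="optical modules for AI clusters",
    risk_level="high-beta",
    key_technologies=["800G transceivers"],
    earnings_keywords=["data center revenue"],
)
```

A fixed schema like this also makes the later paid export (CSV, Notion, or Airtable) a formatting step rather than a rewrite.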
WeChat
Use an industry map to explain AI infrastructure clearly, instead of pitching stocks.
Many discussions of AI infrastructure look only at GPUs, but whether an AI data center can actually run also depends on networking, optical modules, power systems, liquid cooling, manufacturing, and delivery. The point of industry research is not shouting slogans but placing each company back into its layer of the infrastructure stack.
Zhihu
Do not equate an ecosystem with a supplier list; look at infrastructure roles first.
When researching the AI compute supply chain, a safer method is to draw the layers first: AI networking, optical interconnect, server integration, power, cooling, manufacturing. Whether a company is a formal supplier must be verified with announcements, financial reports, and product documentation, not market rumor alone.
Xiaohongshu
CPO is not a magic word; it addresses bandwidth, power, and integration density.
CPO can be understood as moving optical capability closer to the switch chip, in the hope of reducing the power and space pressure of high-speed transmission. But it also raises manufacturing, serviceability, and ecosystem maturity questions, so it is better tracked with a technology roadmap than settled with a quick verdict.
WeChat
The larger the AI cluster, the more data movement looks like an infrastructure problem.
Inside an AI data center, data moves between chips, between servers, and between racks. As distances grow and bandwidth rises, optical interconnect becomes more important. For research, companies can be grouped into layers: lasers, optical components, optical modules, DSPs, CPO, and manufacturing and test.
Zhihu
Whether compute can come online may first be blocked by power and distribution.
An AI data center does not go live just because the chips arrive. Grid access, substations, power distribution, UPS, rack power, monitoring, and maintenance can all become bottlenecks. Companies such as Vertiv, Eaton, Schneider Electric, and GE Vernova enter the research picture because they sit close to the physical deployment constraints.
Xiaohongshu
Cooling is not a supporting role; high-density AI racks change data center design.
Liquid cooling gets attention because high-density AI racks can exceed the comfort zone of traditional air cooling. It can improve thermal management, but it also brings retrofits, maintenance, leak-risk management, and changes to operating procedures. Research should focus on deployment conditions, not on a single buzzword.
WeChat
They are not GPU companies, but they matter for data movement and custom silicon.
Broadcom and Marvell are better viewed within the AI networking, custom silicon, optical DSP, and data infrastructure semiconductor layers. When comparing them, avoid stock-picking language; look at business role, customer concentration, technology layer, and AI exposure in primary sources.
Zhihu
Optical communications companies should be broken down into components, modules, materials, and customer mix.
Lumentum and Coherent are both tied to optical infrastructure, but they cannot simply share one label. Research should separate lasers, optical components, materials, modules, customer types, and data center demand, and then return to financial reports and product documentation for verification.
WeChat
From GPUs to power and cooling, AI infrastructure is a long chain.
The AI data center supply chain can be split into accelerators, AI networking, optical interconnect, power, cooling, manufacturing and test, servers, and deployment. The benefit of this map is that companies are not blurred into one AI theme; you can see what problem each company actually solves.
Xiaohongshu
The next phase of AI infrastructure research will look more and more like engineering and supply chain research.
The next phase of AI infrastructure is not only about which compute chip is stronger, but about how data is moved, power is supplied, heat is removed, and systems are manufactured and delivered. The value of an industry map is putting those questions on one page.