Provident Data Centers presents

INFERENT NDC-1

54MW of purpose-built AI inference capacity in the premium North Dallas Corridor — one of the fastest-growing AI data center markets in the United States. This AI inference data center in Dallas is sited for fiber density, proximity to population centers, and fast utility energization on the ERCOT grid.

Request Capacity
54 MW
Critical IT capacity
~1 zettaFLOP
Vera Rubin Inference
Q4 2027
Ready for service
AI inference data center rendering — Inferent™ NDC-1 facility in North Dallas

Inference-ready infrastructure, built to deploy now.

AI inference workloads are distributed, latency-sensitive, and scale continuously with user adoption. Inferent™ facilities are purpose-built AI inference data centers optimized for these dynamics — right-sized at 30–60MW, sited in high-connectivity markets like the North Dallas AI data center corridor, and designed for delivery in months. Available for AI data center wholesale lease and AI inference build to suit.

Distributed by design
Inference serves users across geographies where latency matters. Inferent™ builds capacity in the markets where your users are — close to population centers and fiber-dense corridors.
Scales with adoption
As AI products gain users, inference compute grows continuously. Inferent™ modular hall configurations let tenants scale from a single hall to full facility buildout on their timeline.
Right-sized for inference economics
30–60MW facilities purpose-built for the companies deploying real inference workloads at scale — with the density, cooling, and power delivery these platforms demand.
2027 delivery
Streamlined site selection, permitting, and construction. Inferent™ delivers commissioned capacity on timelines that match the pace of AI deployment.

North Dallas Corridor

54MW of purpose-built AI inference capacity in the DFW metroplex — one of the fastest-growing AI data center markets in the United States. This North Dallas AI data center is sited for fiber density, proximity to population centers, and fast utility energization on the ERCOT grid.

Approximate distance
DFW01 Plano
Flexential
5 mi
DFW6 Richardson
CyrusOne
5 mi
Datacenter Park Richardson
Digital Realty
5 mi
Infomart
Equinix
10 mi
2323 Bryan Street
Digital Realty
10 mi
Eyeballs within radius
5 mi
785,422
10 mi
2,595,330
20 mi
6,875,380
Near-site fiber carriers
Location North Dallas Corridor
Critical IT capacity 54 MW
Total land area 18.74 acres
Utility grid ERCOT (deregulated)
PUE (annual) 1.20
Cooling Variable-primary magnetic-bearing (mag-lev) air-cooled chillers with adiabatic trim
Power cost 6.5¢ / kWh
Distance to Infomart ~10.5 miles
North Dallas AI data center market map — IX points, tiering points, and DFW data centers within 6 and 12 mile radius of Inferent™ NDC-1
North Dallas Corridor
10.5 mi from Infomart · Multiple IX / tiering points within 6-mile radius

Development underway for 2027 delivery.

Site under control
18.74 acres secured in the North Dallas Corridor
Utility commitment with FEA
Facility extension agreement in place for 54MW energization
Design schematics complete
MEP and architectural partners
Long lead items secured
Switchgear, transformers, and critical electrical equipment on order
Civil engineering complete
Grading, drainage, and site infrastructure fully designed
Multiple fiber carriers confirmed
AT&T, FiberLight, LOGIX, Segra, and Zayo within reach of the site
Aerial site plan — 18.74 acre AI inference data center campus in North Dallas
18.74-acre campus · 275,000 SF data center space · On-site substation · Adjacent 138kV transmission

A zettaFLOP of inference at NDC-1.

NDC-1's basis of design supports 54MW of critical IT load across configurable data halls — optimized for the power density, liquid cooling, and rack-level power delivery that next-generation AI inference platforms demand. This AI inference data center in Dallas is designed to support wholesale lease deployments and AI inference build to suit configurations. Here's how the facility maps to NVIDIA's latest inference platform.

Available H2 2026
NVIDIA Vera Rubin NVL72
72 Rubin GPUs + 36 Vera CPUs per rack. Cable-free modular tray design, 3rd-gen MGX architecture. HBM4 memory, NVLink 6, 260 TB/s scale-up bandwidth.
~200 kW
Per rack
72 GPUs
Per rack
3.6 exaFLOPS
FP4 / rack
20.7 TB
HBM4 / rack
Full buildout — 54 MW
270
Racks
19,440
GPUs
~972
exaFLOPS FP4
5.6 PB
HBM4 total
~20M
Concurrent LLM sessions*

* Estimated for a frontier-class LLM (~200B+ params) at FP4, ~40% utilization, ~500 GFLOPS/token, ~40 tok/s per stream.
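The footnote's estimate can be reproduced with back-of-envelope arithmetic. This sketch uses only the assumptions stated above (FP4 throughput at full buildout, ~40% utilization, ~500 GFLOPS/token, ~40 tok/s per stream); it is an illustration of the sizing logic, not a performance guarantee.

```python
# Back-of-envelope check of the ~20M concurrent LLM session estimate.
# All inputs are the assumptions stated in the footnote above.
peak_flops = 972e18          # ~972 exaFLOPS FP4 at full buildout
utilization = 0.40           # assumed sustained utilization
flops_per_token = 500e9      # ~500 GFLOPS per generated token
tokens_per_second = 40       # ~40 tok/s per user stream

effective_flops = peak_flops * utilization               # usable FLOP/s
flops_per_stream = flops_per_token * tokens_per_second   # FLOP/s per session
concurrent_streams = effective_flops / flops_per_stream

print(f"~{concurrent_streams / 1e6:.1f}M concurrent sessions")
```

Running the numbers gives roughly 19.4M streams, which the page rounds to ~20M.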

1-Story · 27MW Hall
2-Story · 13.5MW Hall
27 MW Data Hall · Rows A–I
Vera Rubin NVL72 (~200kW ea.)
Each rectangle = 1 rack · 9 rows × 15 racks = 135 per hall
27 MW
Critical IT per hall
135
NVL72 racks per hall
Hall 1 — 13.5 MW · Hall 2 — 13.5 MW
Vera Rubin NVL72 (~200kW ea.)
Each rectangle = 1 rack · ~67–68 racks per hall × 4 halls = 270 total
13.5 MW
Critical IT per hall
4 halls
Total (54MW across bldg)

Facility is designed for extreme configuration flexibility. Hall sizes, power density, and cooling topology (air-cooled or direct liquid cooling) can be tailored to tenant requirements. The 7/6 distributed redundant electrical system uses 2.25MW power blocks and can be upgraded to block redundant with static transfer switches. Mechanical system uses adiabatic air-cooled magnetic bearing chillers (~50 GPM average water use); high-temp and low-temp DLC loops available at tenant option.
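The headline buildout figures follow directly from the per-rack numbers above. The sketch below derives them, along with the hall sizing implied by the 2.25MW power blocks; the 6-active/1-redundant reading of the 7/6 scheme is an interpretation of the block sizing stated in the text, not a design document.

```python
# How the full-buildout figures follow from the per-rack numbers.
rack_kw = 200            # ~200 kW per Vera Rubin NVL72 rack
gpus_per_rack = 72
ef_fp4_per_rack = 3.6    # exaFLOPS FP4 per rack
facility_mw = 54

racks = facility_mw * 1000 // rack_kw   # 270 racks at full buildout
gpus = racks * gpus_per_rack            # 19,440 GPUs
exaflops = racks * ef_fp4_per_rack      # 972 exaFLOPS FP4 (~1 zettaFLOP)

# 7/6 distributed redundant: six active 2.25 MW blocks carry a
# 13.5 MW hall, with the seventh providing redundancy (an assumed
# mapping consistent with the block size stated above).
active_blocks = 6
block_mw = 2.25
hall_mw = active_blocks * block_mw      # 13.5 MW per hall
```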

13.5MW AI inference data center hall floor plan with GPU rack layout
13.5MW hall — rack layout detail
Full DFW AI data center building layout — 275,000 SF across two stories
Full building — 275,000 SF across two stories

Built different, on purpose.

Purpose-built for inference
Every design decision — density, power delivery, cooling topology — is optimized for inference workloads. High-density racks, direct liquid cooling, and flexible hall configurations as standard.
Faster to market
18–24 month delivery from site selection to commissioned capacity. Streamlined permitting, efficient construction, and fast utility energization on proven grid infrastructure.
Network-first siting
Every DFW AI data center site is selected for fiber density, low latency to end users, and proximity to carrier hotels and major population centers. Connectivity is the starting point, not an afterthought.
Right-sized capacity
30–60MW facilities built for the companies deploying inference at scale today. AI data center wholesale lease structures, configurable halls, and the operational maturity of an experienced data center team.

Minimal water. Maximum cooling.

NDC-1's primary cooling system — YORK® YVAM magnetic bearing chillers — uses no water at all. Adiabatic trim cooling supplements the chillers on the hottest days, averaging about 50 GPM across the year — roughly the water use of a restaurant, hotel, or car wash.

Zero water for cooling 93% of the time
Annual water consumption comparison
NDC-1 (54 MW facility)
One apartment complex
~25M gal/yr
One 18-hole golf course
~100M gal/yr

The primary cooling system uses no water at all. Adiabatic pre-cooling activates only when ambient temperatures rise — roughly 60 days a year in the Dallas climate — using a fraction of what conventional cooling-tower designs require. Direct liquid cooling loops (high-temp and low-temp) available at tenant option for next-generation GPU platforms.

Based on ~50 GPM avg · ~600 GPM peak · adiabatic trim ~60 days/yr
Temperature data: NOAA Integrated Surface Database, DFW 2015–2024

Experienced team. Proven execution.

Provident Data Centers is a vertically integrated data center developer headquartered in Dallas, TX. The Provident team brings a combined 85 years of data center and energy infrastructure experience, has developed 5+ GW of powered land, and maintains a robust pipeline of 70+ buildings. Team members have delivered more than 540MW of high-performance data center capacity across Tier 1 and Tier 2 U.S. markets for hyperscale, enterprise, and colocation tenants.

5+ GW
Data center development
540+ MW
Team HPC delivery
$1.2B+
Team project value
6
Active Texas sites
Vertically integrated
Provident controls the full development stack — land, power, design, construction, and operations — reducing handoff risk and compressing timelines. No outsourced development layers.
Direct utility relationships
Active engagement with municipal utilities on transmission infrastructure, substation design, and energization scheduling. Over 5GW of utility capacity developed across the current portfolio.
Engineering-led
In-house design and engineering with MEP partners selected for data center expertise. Team members drawn from Google, LinkedIn, QTS, AT&T, Blackstone, Lincoln, and Oncor.
Team members' AI/HPC development history
Missouri · 2006–2009
26MW data center
$102M build for a top-4 U.S. bank. Full design-build delivery.
Texas · 2019–2020
40MW data center
$500M campus for a global hyperscaler. Site selection through commissioning.
Texas · 2021–2023
220MW DC + 500MW substation
$100M high-density compute facility with dedicated utility infrastructure.
Texas · 2023–2025
40MW across 160 acres
$500M multi-building campus for a leading U.S. data center operator.
Team client history
Meet the team

Reserve North Dallas
AI Inference capacity.

NDC-1 is in active development for delivery in 2027. Whether you need an AI data center wholesale lease or an AI inference build to suit, tell us your deployment size, timeline, and workload profile — we'll respond within one business day with a capacity proposal.

Additional markets in development. NDC-1 is the first Inferent™ facility — not the last. We are actively evaluating sites across multiple U.S. markets with strong connectivity and available power.

Pipeline Active

Common questions about AI inference data centers.

What is an AI inference data center?

An AI inference data center is a facility purpose-built to run trained AI models in production — serving real-time predictions, generating text, processing images, and powering AI-driven applications at scale. Unlike training facilities that focus on building models, inference data centers are optimized for low latency, high availability, and continuous throughput close to end users.

How is Inferent™ different from a traditional data center?

Inferent™ facilities are designed from the ground up for AI inference workloads. That means high-density rack power (200kW+), direct liquid cooling as a standard option, network-first site selection for fiber density and low latency, and flexible hall configurations that scale with tenant demand. Traditional data centers retrofit these capabilities; Inferent™ builds them in from day one.

Why Dallas for an AI inference data center?

The DFW metroplex is one of the fastest-growing AI data center markets in the United States. The North Dallas Corridor offers dense fiber connectivity (AT&T, FiberLight, LOGIX, Segra, Zayo), proximity to the Infomart carrier hotel, fast utility energization on the deregulated ERCOT grid, and competitive power costs. Dallas absorbed over 470MW of data center capacity in H2 2025 alone.

What capacity is available for wholesale lease or build to suit?

NDC-1 offers 54MW of critical IT capacity available for AI data center wholesale lease, with configurable data halls that can be tailored to tenant requirements. Build-to-suit options are also available for organizations that need custom power density, cooling topology, or hall configurations for their specific AI inference deployment.

What GPU platforms does the facility support?

NDC-1's basis of design supports next-generation GPU platforms including the NVIDIA Vera Rubin NVL72, with 200kW per rack, direct liquid cooling (high-temp and low-temp loops), and a 7/6 distributed redundant electrical system upgradeable to block redundant. At full buildout, the facility can house 270 NVL72 racks delivering approximately 1 zettaFLOP of FP4 inference compute.

A Provident Data Centers company