Real-time updates in Salesforce-to-Salesforce two-way integration: understanding the limitation and practical ways to plan synchronization

Salesforce-to-Salesforce does not provide real-time updates between connected orgs, a key limitation for teams that depend on fresh data. This note explains the impact on decisions and practical fixes—scheduled synchronization, batched data sharing, and careful data design to maintain consistency.

The real-time catch in Salesforce-to-Salesforce integration: what you need to know

If you’re wiring two Salesforce orgs together, you’re likely hoping updates move like a spark from device to device—instant, seamless, and perfectly in sync. Reality check: Salesforce-to-Salesforce (S2S) isn’t built for real-time updates. That limitation isn’t a buzzkill; it’s a design constraint that shapes every data flow, every decision, and every expectation you set with stakeholders. Let’s unpack what that means and how you can design smarter around it.

What Salesforce-to-Salesforce actually does

Think of S2S as a trusted bridge between two orgs. It lets you share records, keep a subset of data aligned, and automate some cross-org processes without custom code. The goal is smooth collaboration—think account ownership, contact sharing, or case handoffs—so teams in one org can see relevant changes in the other. It sounds perfect on paper, but there’s a caveat: the synchronization isn’t instantaneous.

The real limitation to design around

The key takeaway is simple: Salesforce-to-Salesforce does not support real-time updates of records. In practice, that means changes made in one org aren’t reflected instantly in the connected org. There’s a lag, and that lag can vary based on configuration, processing load, and the nature of the data being shared.

Why this matters in the real world

  • Decision latency: If a sales rep closes a deal in Org A, and the related opportunity or account needs to trigger actions in Org B, the other team might be acting on stale data for a while. That delay can ripple into forecasting, pipeline reviews, and service handoffs.

  • Process misalignment: Automated workflows that assume immediate visibility can push teams into out-of-sync states. A policy check, an approval step, or a rule-based action might fire too early or too late.

  • Data integrity questions: Without real-time refresh, reconciling ownership, status, or eligibility across systems becomes a guessing game of “who updated last,” plus a lot of human review.

A practical way to think about latency

The key is to set a reasonable expectation for how quickly data propagates. Some organizations see updates every few minutes; others run nightly or hourly batches. The exact timing depends on how you configure outbound communication, how many records you sync at a time, and how you handle failures and retries. It’s not a flaw so much as a design parameter you bake into your integration plan.

How to design around the lack of real-time updates

You’ll want a strategy that minimizes the impact of latency while keeping data consistent and usable. Here are practical approaches that teams actually use:

  • Define your cadence

    • Decide how often you’ll push data from one org to the other (e.g., every 5 minutes, hourly, or in a batch window after business hours). A clear cadence helps stakeholders plan around data refreshes rather than react to surprises.

    • Align the cadence with the business processes that rely on the data. If a process needs near-real-time visibility, you may need more frequent batches, paired with sensible retry logic.
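To make a cadence concrete, here is a minimal Python sketch of a watermark-based batch window. The helper names `fetch_changed_since` and `push_batch` are illustrative stand-ins for your actual transport, not Salesforce APIs:

```python
def run_sync_window(fetch_changed_since, push_batch, watermark, batch_size=200):
    """One cadence tick: push everything modified since `watermark` in
    fixed-size batches, then return the new watermark.

    Hypothetical helpers (stand-ins for your real transport):
      fetch_changed_since(ts) -> list of dicts with a 'last_modified' key
      push_batch(records)     -> sends one batch; raises on failure
    """
    changed = sorted(fetch_changed_since(watermark),
                     key=lambda r: r["last_modified"])
    for i in range(0, len(changed), batch_size):
        push_batch(changed[i:i + batch_size])
    # Advance the watermark only after every batch succeeds, so a failed
    # run is simply retried from the old watermark on the next tick.
    return changed[-1]["last_modified"] if changed else watermark
```

Because the watermark only advances on success, a failed window repeats the same records on the next tick, which is one more reason the flow itself must be idempotent.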

  • Embrace near-real-time with complementary patterns

    • Use platform events or Change Data Capture (CDC) where applicable to surface changes as events. While the primary S2S link won’t push changes instantly, you can publish events that trigger lightweight processing or notifications in the other system to indicate that something changed and needs a refresh.

    • Pair S2S with outbound messaging for critical changes. A quick outbound message can alert a receiving org to fetch the latest snapshot rather than waiting for a full sync.
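The “nudge, then fetch” pattern can be sketched as follows, using an in-process queue as a stand-in for Platform Events or an outbound message; none of these function names are Salesforce APIs:

```python
import json
import queue

event_bus = queue.Queue()  # stand-in for a real event bus between orgs

def publish_change_ping(object_type, record_id):
    """Publish a tiny 'this record changed' signal; no field data travels."""
    event_bus.put(json.dumps({"object": object_type, "id": record_id}))

def handle_pings(fetch_latest_snapshot):
    """Drain pending pings and refresh only the records that changed,
    instead of waiting for the next full sync window."""
    refreshed = []
    while not event_bus.empty():
        ping = json.loads(event_bus.get())
        refreshed.append(fetch_latest_snapshot(ping["object"], ping["id"]))
    return refreshed
```

The ping carries only identifiers, so the receiving side always fetches the freshest snapshot rather than trusting a possibly stale payload.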

  • Build idempotent, reliable data flows

    • Make your integrations idempotent so repeating a sync won’t corrupt data. Use unique external IDs, deterministic upsert keys, and careful handling of create/update/delete events.

    • Add robust retry and dead-letter handling. Network hiccups happen. Design for automatic retries, and have a plan to investigate stuck records without manual firefighting.
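A minimal sketch of both ideas together, assuming a plain dict as a stand-in for the target org and `external_id` as the deterministic upsert key:

```python
def upsert_by_external_id(store, record):
    """Idempotent upsert keyed on a stable external ID: replaying the same
    record any number of times leaves `store` in the same state. A record
    missing the key raises, modeling a rejected write."""
    key = record["external_id"]  # KeyError here models a failed sync
    store[key] = {**store.get(key, {}), **record}

def sync_with_retries(store, records, max_attempts=3):
    """Attempt each record up to `max_attempts` times; park persistent
    failures in a dead-letter list for investigation rather than losing
    them silently."""
    dead_letters = []
    for record in records:
        for attempt in range(1, max_attempts + 1):
            try:
                upsert_by_external_id(store, record)
                break
            except KeyError:
                if attempt == max_attempts:
                    dead_letters.append(record)
    return dead_letters
```

Because the upsert is idempotent, a retried or duplicated batch is harmless, and the dead-letter list gives you a queue to investigate instead of a silent discrepancy.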

  • Map ownership and sharing rules explicitly

    • Record ownership, visibility, and sharing settings can complicate synchronization. Document who “owns” which data in each org and how permissions translate across systems.

    • Ensure that deprovisioning, consent changes, or privacy rules propagate in a controlled way to avoid exposing data longer than needed.

  • Protect data quality with validation stages

    • Before writing to the partner org, validate critical fields, enforce data type constraints, and flag mismatches early.

    • Use a staging or quarantine area for flagged records. This keeps the main data flow running while problem records wait for resolution.
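A staging pass can be as simple as splitting the incoming batch in two; the field names and allowed status values below are purely illustrative:

```python
REQUIRED_FIELDS = {"external_id", "name", "status"}
ALLOWED_STATUSES = {"open", "pending", "closed"}  # illustrative constraint

def stage_records(incoming):
    """Split a batch into a clean list (safe to write to the partner org)
    and a quarantine list with a reason attached, so problem records wait
    for resolution without blocking the main flow."""
    clean, quarantined = [], []
    for rec in incoming:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            quarantined.append({"record": rec,
                                "reason": f"missing fields: {sorted(missing)}"})
        elif rec["status"] not in ALLOWED_STATUSES:
            quarantined.append({"record": rec,
                                "reason": f"bad status: {rec['status']!r}"})
        else:
            clean.append(rec)
    return clean, quarantined
```

The attached reason string is what turns quarantine from a black hole into a workable review queue.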

  • Plan for conflict resolution

    • Two-way sharing means there will be moments when the same record is edited in both orgs. Decide on a source of truth and a conflict-resolution policy (for example, last-modified wins, or a specific field-level precedence).

    • Communicate these rules to end users so they know what to expect when edits cross the org boundary.
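A last-modified-wins policy fits in a few lines; this sketch assumes each record carries a comparable `last_modified` timestamp:

```python
def resolve_conflict(local, remote):
    """Last-modified-wins merge. The newer record's fields take precedence;
    on a tie the local copy wins, so the outcome is deterministic and easy
    to explain to end users."""
    if remote["last_modified"] > local["last_modified"]:
        return {**local, **remote}   # remote fields override local
    return {**remote, **local}       # local fields override remote
```

Note the result is the same regardless of which side you call "local," which is exactly the property that makes the rule easy to communicate.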

Architectural patterns you’ll see in the wild

  • Master–subscriber model: One org acts as the master for a given object, with the other orgs subscribing to updates. Suitable when one system owns the canonical data, but still subject to update delays.

  • Bidirectional adapters with polling: Each side polls for changes at fixed intervals and pushes updates in both directions. Simple to implement, but you’ll feel the latency.

  • Event-forwarding with lightweight processing: A middleware layer captures changes, emits events, and triggers targeted refreshes in the partner org. This gives you a responsive feel without insisting on instantaneous propagation.
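The bidirectional-polling pattern above can be sketched with per-direction cursors. Here `orgs` is a stand-in for two record stores and `version` for a system modstamp; none of this maps directly onto Salesforce objects:

```python
def poll_cycle(orgs, cursors):
    """One tick of a two-way polling adapter. `orgs` maps an org name to
    its record dict; `cursors` remembers the highest version already copied
    per (source, destination, record), so unchanged records are skipped."""
    names = list(orgs)
    for src in names:
        for dst in names:
            if src == dst:
                continue
            for key, rec in list(orgs[src].items()):
                cursor = (src, dst, key)
                if rec["version"] > cursors.get(cursor, -1):
                    orgs[dst][key] = dict(rec)
                    cursors[cursor] = rec["version"]
```

Anything changed between ticks simply waits for the next cycle, which is the latency you feel with this pattern.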

Tools and techniques that often accompany S2S

  • Change Data Capture (CDC) and Platform Events: Use these to surface changes in near real-time without depending on the core S2S update path for every field.

  • Outbound messages and workflow rules: Lightweight signals that say, “Something changed here—please re-check there.”

  • Lightweight middleware: A thin layer (like MuleSoft, Dell Boomi, or Salesforce Integration Cloud) can orchestrate batches, retries, and error handling with clearer visibility and dashboards.

  • Monitoring and dashboards: Track latency, success rates, and exception counts. Visibility reduces firefighting and helps you communicate with stakeholders about timeliness.
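Even a tiny in-memory monitor makes staleness a measured number rather than a guess; this is an illustrative sketch, not a Salesforce feature:

```python
import statistics

class SyncMonitor:
    """Collect per-record propagation latency and failure counts so you can
    report median staleness to stakeholders instead of anecdotes."""
    def __init__(self):
        self.latencies = []
        self.failures = 0

    def record(self, changed_at, arrived_at, ok=True):
        """Log one record's journey; timestamps are seconds since epoch."""
        if ok:
            self.latencies.append(arrived_at - changed_at)
        else:
            self.failures += 1

    def summary(self):
        if not self.latencies:
            return {"count": 0, "failures": self.failures}
        return {"count": len(self.latencies),
                "failures": self.failures,
                "p50_seconds": statistics.median(self.latencies),
                "max_seconds": max(self.latencies)}
```

Feeding these numbers into a dashboard is what lets you say “data is typically one minute behind, worst case five” with a straight face.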

A quick scenario to ground this

Imagine two Salesforce orgs in a sales and service collaboration. An account update is made in the sales org and should reflect in the service org so the support team can prepare for a new, larger renewal. Since S2S isn’t real-time, that update might show up in the service org minutes, hours, or even the next batch cycle later. If the support team relies on the latest status to triage cases, they could be acting on outdated information. To counter this, the teams can:

  • Run a frequent batch at the top of the hour to push critical changes.

  • Use a Platform Event to ping the service org that something changed, prompting a quick refresh of the relevant data.

  • Design a simple conflict policy: if the same account is edited in both orgs, the system logs the change and queues a review before any automated transfer of ownership takes place.

That combination—a reliable batch window, event-driven nudges, and a clear conflict policy—creates a pragmatic flow that feels fast without promising something the architecture won’t deliver.

Common pitfalls to avoid

  • Overreliance on “instant updates” for decision-making. If dashboards depend on real-time data, you’ll see gaps.

  • Forgetting to map ownership and permissions across orgs. Without careful alignment, data can surface inappropriately or hide from the right people.

  • Neglecting retries and error handling. A failed sync should not become a silent data discrepancy.

  • Ignoring data quality upstream. Poorly defined fields or mismatched data types easily amplify latency issues and retries.

A few guiding takeaways

  • Real-time updates aren’t the default for Salesforce-to-Salesforce. Build your plan around a reliable cadence and smart event cues.

  • Treat data synchronization as a trust protocol between orgs: two-way sharing works best when you’re explicit about ownership, timing, and failure handling.

  • Combine the simplest reliable approach first, then layer in more sophistication if and when business needs demand it.

Closing thoughts

Data is the lifeblood of modern organizations, and two-way integration is a powerful bridge. The absence of real-time updates in S2S isn’t a flaw; it’s a design detail that invites thoughtful planning. When you design, you’re not just mapping fields—you’re crafting a dependable rhythm between teams. A well-timed batch, a clear event signal, and a solid retry strategy can deliver near-seamless collaboration, even when the clock isn’t beating in perfect real time.

If you’re building or evaluating a Salesforce-to-Salesforce integration, start with the cadence that fits your business rhythm. Add event-driven nudges for urgent changes. Define ownership and conflict rules upfront. And always keep monitoring in the loop so you can tune latency, catch glitches early, and keep users confidently marching forward.

Want to wire up smarter data flows? Start by listing the most critical cross-org scenarios, sketching the minimum viable cadence, and drafting a simple failure-recovery plan. From there, you’ll have a practical, resilient path that respects the real-world tempo of your business—and your users will thank you for it.
