SPCI is now called PSA FF-A.
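
All SPCI-prefixed identifiers and headers are renamed to use the FFA
prefix: for example, struct spci_value becomes struct ffa_value,
spci_msg_send() becomes ffa_msg_send(), inc/hf/spci_internal.h becomes
inc/hf/ffa_internal.h, and the manifest property spci_tee becomes
ffa_tee. Documentation and the Linux driver submodule are updated to
match.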

Change-Id: Iaa10e0449edf5f6493ab21e648219392b17cc5ec
diff --git a/.vscode/settings.json b/.vscode/settings.json
index 9be2e0e..b91a27a 100644
--- a/.vscode/settings.json
+++ b/.vscode/settings.json
@@ -15,8 +15,8 @@
         "spinlock.h": "c",
         "offsets.h": "c",
         "barriers.h": "c",
-        "spci.h": "c",
-        "spci_internal.h": "c",
+        "ffa.h": "c",
+        "ffa_internal.h": "c",
         "interrupts_gicv3.h": "c",
         "interrupts.h": "c",
         "abi.h": "c"
diff --git a/docs/Architecture.md b/docs/Architecture.md
index e0261d8..9c086eb 100644
--- a/docs/Architecture.md
+++ b/docs/Architecture.md
@@ -17,7 +17,7 @@
 
 *   Means for VMs to communicate with each other through message passing and
     memory sharing, according to the Arm
-    [Secure Partition Communication Interface (SPCI)](https://developer.arm.com/docs/den0077/a).
+    [Platform Security Architecture Firmware Framework for Arm v8-A (PSA FF-A)](https://developer.arm.com/docs/den0077/a).
 *   Emulation of some basic hardware features such as timers.
 *   A simple paravirtualised interrupt controller for secondary VMs, as they
     don't have access to hardware interrupts.
@@ -156,7 +156,7 @@
 Hafnium maintains state of which VM **owns** each page, and which VMs have
 **access** to it. It does this using the stage 2 page tables of the VMs, with
 some extra application-defined bits in the page table entries. A VM may share,
-lend or donate memory pages to another VM using the appropriate SPCI requests. A
+lend or donate memory pages to another VM using the appropriate FF-A requests. A
 given page of memory may never be shared with more than two VMs, either in terms
 of ownership or access. Thus, the following states are possible for each page,
 for some values of X and Y:
diff --git a/docs/Manifest.md b/docs/Manifest.md
index 458b130..0912a6d 100644
--- a/docs/Manifest.md
+++ b/docs/Manifest.md
@@ -13,7 +13,7 @@
 	hypervisor {
 		compatible = "hafnium,hafnium";
 
-		spci_tee;
+		ffa_tee;
 
 		vm1 {
 			debug_name = "name";
@@ -40,7 +40,7 @@
 of memory, 4 CPUs and, by omitting the `kernel_filename` property, a kernel
 preloaded into memory. The primary VM is given all remaining memory, the same
 number of CPUs as the hardware, a kernel image called `vmlinuz` and a ramdisk
-`initrd.img`. Secondaries cannot have a ramdisk. SPCI memory sharing with the
+`initrd.img`. Secondaries cannot have a ramdisk. FF-A memory sharing with the
 TEE is enabled.
 
 ```
@@ -50,7 +50,7 @@
 	hypervisor {
 		compatible = "hafnium,hafnium";
 
-		spci_tee;
+		ffa_tee;
 
 		vm1 {
 			debug_name = "primary VM";
diff --git a/docs/SchedulerExpectations.md b/docs/SchedulerExpectations.md
index 7dfe193..f642c94 100644
--- a/docs/SchedulerExpectations.md
+++ b/docs/SchedulerExpectations.md
@@ -10,85 +10,85 @@
 
 The scheduler VM is responsible for scheduling the vCPUs of all the other VMs.
 It should request information about the VMs in the system using the
-`SPCI_PARTITION_INFO_GET` function, and then schedule their vCPUs as it wishes.
+`FFA_PARTITION_INFO_GET` function, and then schedule their vCPUs as it wishes.
 The recommended way of doing this is to create a kernel thread for each vCPU,
-which will repeatedly run that vCPU by calling `SPCI_RUN`.
+which will repeatedly run that vCPU by calling `FFA_RUN`.
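+
+A minimal sketch of such a thread (illustrative only; `handle_run_return` is a
+hypothetical dispatcher for the cases described in the sections below):
+
+```c
+/* One kernel thread per vCPU: run it repeatedly and act on each return. */
+static void vcpu_thread(ffa_vm_id_t vm_id, ffa_vcpu_index_t vcpu_idx)
+{
+	for (;;) {
+		struct ffa_value ret = ffa_run(vm_id, vcpu_idx);
+
+		/* Dispatch on ret.func, e.g. FFA_INTERRUPT or FFA_YIELD. */
+		handle_run_return(vm_id, vcpu_idx, ret);
+	}
+}
+```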
 
-`SPCI_RUN` will return one of several possible functions, which must be handled
+`FFA_RUN` will return one of several possible functions, which must be handled
 as follows:
 
-### `SPCI_INTERRUPT`
+### `FFA_INTERRUPT`
 
 The vCPU has been preempted but still has work to do. If the scheduling quantum
 has not expired, the scheduler MUST call `hf_vcpu_run` on the vCPU to allow it
 to continue.
 
-### `SPCI_YIELD`
+### `FFA_YIELD`
 
 The vCPU has voluntarily yielded the CPU. The scheduler SHOULD take a scheduling
 decision to give cycles to those that need them but MUST call `hf_vcpu_run` on
 the vCPU at a later point.
 
-### `SPCI_MSG_WAIT`
+### `FFA_MSG_WAIT`
 
 The vCPU is blocked waiting for a message. The scheduler MUST take it off the
-run queue and not call `SPCI_RUN` on the vCPU until it has either:
+run queue and not call `FFA_RUN` on the vCPU until it has:
 
 *   injected an interrupt
 *   sent it a message
-*   received `HF_SPCI_RUN_WAKE_UP` for it from another vCPU
-*   the timeout provided in `w2` is not `SPCI_SLEEP_INDEFINITE` and the
+*   received `HF_FFA_RUN_WAKE_UP` for it from another vCPU
+*   seen that the timeout provided in `w2` is not `FFA_SLEEP_INDEFINITE` and the
     specified duration has expired.
 
-### `SPCI_MSG_SEND`
+### `FFA_MSG_SEND`
 
 A message has been sent by the vCPU. If the recipient is the scheduler VM itself
 then it can handle it as it pleases. Otherwise the scheduler MUST run a vCPU
 from the recipient VM and priority SHOULD be given to those vCPUs that are
-waiting for a message. The scheduler should call SPCI_RUN again on the sending
+waiting for a message. The scheduler should call FFA_RUN again on the sending
 VM as usual.
 
-### `SPCI_RX_RELEASE`
+### `FFA_RX_RELEASE`
 
 The vCPU has made the mailbox writable and there are pending waiters. The
 scheduler MUST call `hf_mailbox_waiter_get()` repeatedly and notify all waiters
 by injecting an `HF_MAILBOX_WRITABLE_INTID` interrupt. The scheduler should call
-SPCI_RUN again on the sending VM as usual.
+FFA_RUN again on the sending VM as usual.
 
-### `HF_SPCI_RUN_WAIT_FOR_INTERRUPT`
+### `HF_FFA_RUN_WAIT_FOR_INTERRUPT`
 
-_This is a Hafnium-specific function not part of the SPCI standard._
+_This is a Hafnium-specific function not part of the FF-A standard._
 
 The vCPU is blocked waiting for an interrupt. The scheduler MUST take it off the
-run queue and not call `SPCI_RUN` on the vCPU until it has either:
+run queue and not call `FFA_RUN` on the vCPU until it has:
 
 *   injected an interrupt
-*   received `HF_SPCI_RUN_WAKE_UP` for it from another vCPU
-*   the timeout provided in `w2` is not `SPCI_SLEEP_INDEFINITE` and the
+*   received `HF_FFA_RUN_WAKE_UP` for it from another vCPU
+*   seen that the timeout provided in `w2` is not `FFA_SLEEP_INDEFINITE` and the
     specified duration has expired.
 
-### `HF_SPCI_RUN_WAKE_UP`
+### `HF_FFA_RUN_WAKE_UP`
 
-_This is a Hafnium-specific function not part of the SPCI standard._
+_This is a Hafnium-specific function not part of the FF-A standard._
 
 Hafnium would like `hf_vcpu_run` to be called on another vCPU, specified by
 `hf_vcpu_run_return.wake_up`. The scheduler MUST either wake the vCPU in
 question up if it is blocked, or preempt and re-run it if it is already running
 somewhere. This gives Hafnium a chance to update any CPU state which might have
-changed. The scheduler should call SPCI_RUN again on the sending VM as usual.
+changed. The scheduler should call FFA_RUN again on the sending VM as usual.
 
-### `SPCI_ERROR`
+### `FFA_ERROR`
 
-#### `SPCI_ABORTED`
+#### `FFA_ABORTED`
 
 The vCPU has aborted, triggering the whole VM to abort. The scheduler MUST treat
-this the same as `HF_SPCI_RUN_WAKE_UP` for all the other vCPUs of the VM. For
-this vCPU the scheduler SHOULD either never call SPCI_RUN on the vCPU again, or
-treat it the same as `HF_SPCI_RUN_WAIT_FOR_INTERRUPT`.
+this the same as `HF_FFA_RUN_WAKE_UP` for all the other vCPUs of the VM. For
+this vCPU the scheduler SHOULD either never call FFA_RUN on the vCPU again, or
+treat it the same as `HF_FFA_RUN_WAIT_FOR_INTERRUPT`.
 
 #### Any other error code
 
-This should not happen if the scheduler VM has called `SPCI_RUN` correctly, but
+This should not happen if the scheduler VM has called `FFA_RUN` correctly, but
 in case there is some other error it should be logged. The scheduler SHOULD
 either try again or suspend the vCPU indefinitely.
 
@@ -102,6 +102,6 @@
     timer (PPI 10, IRQ 26).
 *   Forward interrupts intended for secondary VMs to an appropriate vCPU of the
     VM by calling `hf_interrupt_inject` and then running the vCPU as usual with
-    `SPCI_RUN`. (If the vCPU is already running at the time that
+    `FFA_RUN`. (If the vCPU is already running at the time that
     `hf_interrupt_inject` is called then it must be preempted and run again so
     that Hafnium can inject the interrupt.)
diff --git a/docs/StyleGuide.md b/docs/StyleGuide.md
index 770d978..1619971 100644
--- a/docs/StyleGuide.md
+++ b/docs/StyleGuide.md
@@ -38,7 +38,7 @@
 These rules apply to comments and other natural language text.
 
 *   Capitalize acronyms.
-    *   CPU, vCPU, VM, EL2, SPCI, QEMU
+    *   CPU, vCPU, VM, EL2, FF-A, QEMU
 *   Spell out Hafnium in full, not Hf.
 *   Use single spaces.
 *   Sentences end with full stops.
diff --git a/docs/VmInterface.md b/docs/VmInterface.md
index 47de251..f061ebc 100644
--- a/docs/VmInterface.md
+++ b/docs/VmInterface.md
@@ -100,7 +100,7 @@
 ## Asynchronous message passing
 
 VMs will be able to send messages of up to 4 KiB to each other asynchronously,
-with no queueing, as specified by SPCI.
+with no queueing, as specified by FF-A.
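+
+As an illustrative sketch using the `ffa_msg_send` wrapper (here `own_id`,
+`receiver_id` and `size` are assumed to be set up by the caller, with the
+payload already copied into the sender's TX buffer):
+
+```c
+struct ffa_value ret = ffa_msg_send(own_id, receiver_id, size, 0);
+
+if (ret.func == FFA_ERROR_32) {
+	/* e.g. FFA_BUSY: the receiver's mailbox is full; retry later. */
+}
+```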
 
 ## Memory
 
@@ -164,7 +164,7 @@
 ## TrustZone communication
 
 The primary VM will be able to communicate with a TEE running in TrustZone
-either through SPCI messages or through whitelisted SMC calls, and through
+either through FF-A messages or through whitelisted SMC calls, and through
 shared memory.
 
 ## Other SMC calls
diff --git a/driver/linux b/driver/linux
index 81b9069..196ed0e 160000
--- a/driver/linux
+++ b/driver/linux
@@ -1 +1 @@
-Subproject commit 81b90698b40fdcdd886e733a597ced435bd108b9
+Subproject commit 196ed0e80799f7ac99324981de7873b4d3d65f78
diff --git a/inc/hf/api.h b/inc/hf/api.h
index 07b9061..e56a2d2 100644
--- a/inc/hf/api.h
+++ b/inc/hf/api.h
@@ -21,15 +21,15 @@
 #include "hf/vm.h"
 
 #include "vmapi/hf/call.h"
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 void api_init(struct mpool *ppool);
-spci_vm_count_t api_vm_get_count(void);
-spci_vcpu_count_t api_vcpu_get_count(spci_vm_id_t vm_id,
-				     const struct vcpu *current);
+ffa_vm_count_t api_vm_get_count(void);
+ffa_vcpu_count_t api_vcpu_get_count(ffa_vm_id_t vm_id,
+				    const struct vcpu *current);
 void api_regs_state_saved(struct vcpu *vcpu);
 int64_t api_mailbox_writable_get(const struct vcpu *current);
-int64_t api_mailbox_waiter_get(spci_vm_id_t vm_id, const struct vcpu *current);
+int64_t api_mailbox_waiter_get(ffa_vm_id_t vm_id, const struct vcpu *current);
 int64_t api_debug_log(char c, struct vcpu *current);
 
 struct vcpu *api_preempt(struct vcpu *current);
@@ -40,36 +40,35 @@
 
 int64_t api_interrupt_enable(uint32_t intid, bool enable, struct vcpu *current);
 uint32_t api_interrupt_get(struct vcpu *current);
-int64_t api_interrupt_inject(spci_vm_id_t target_vm_id,
-			     spci_vcpu_index_t target_vcpu_idx, uint32_t intid,
+int64_t api_interrupt_inject(ffa_vm_id_t target_vm_id,
+			     ffa_vcpu_index_t target_vcpu_idx, uint32_t intid,
 			     struct vcpu *current, struct vcpu **next);
 
-struct spci_value api_spci_msg_send(spci_vm_id_t sender_vm_id,
-				    spci_vm_id_t receiver_vm_id, uint32_t size,
-				    uint32_t attributes, struct vcpu *current,
-				    struct vcpu **next);
-struct spci_value api_spci_msg_recv(bool block, struct vcpu *current,
-				    struct vcpu **next);
-struct spci_value api_spci_rx_release(struct vcpu *current, struct vcpu **next);
-struct spci_value api_spci_rxtx_map(ipaddr_t send, ipaddr_t recv,
-				    uint32_t page_count, struct vcpu *current,
-				    struct vcpu **next);
+struct ffa_value api_ffa_msg_send(ffa_vm_id_t sender_vm_id,
+				  ffa_vm_id_t receiver_vm_id, uint32_t size,
+				  uint32_t attributes, struct vcpu *current,
+				  struct vcpu **next);
+struct ffa_value api_ffa_msg_recv(bool block, struct vcpu *current,
+				  struct vcpu **next);
+struct ffa_value api_ffa_rx_release(struct vcpu *current, struct vcpu **next);
+struct ffa_value api_ffa_rxtx_map(ipaddr_t send, ipaddr_t recv,
+				  uint32_t page_count, struct vcpu *current,
+				  struct vcpu **next);
 void api_yield(struct vcpu *current, struct vcpu **next);
-struct spci_value api_spci_version(uint32_t requested_version);
-struct spci_value api_spci_id_get(const struct vcpu *current);
-struct spci_value api_spci_features(uint32_t function_id);
-struct spci_value api_spci_run(spci_vm_id_t vm_id, spci_vcpu_index_t vcpu_idx,
-			       const struct vcpu *current, struct vcpu **next);
-struct spci_value api_spci_mem_send(uint32_t share_func, uint32_t length,
-				    uint32_t fragment_length, ipaddr_t address,
-				    uint32_t page_count, struct vcpu *current,
-				    struct vcpu **next);
-struct spci_value api_spci_mem_retrieve_req(uint32_t length,
-					    uint32_t fragment_length,
-					    ipaddr_t address,
-					    uint32_t page_count,
-					    struct vcpu *current);
-struct spci_value api_spci_mem_relinquish(struct vcpu *current);
-struct spci_value api_spci_mem_reclaim(spci_memory_handle_t handle,
-				       spci_memory_region_flags_t flags,
-				       struct vcpu *current);
+struct ffa_value api_ffa_version(uint32_t requested_version);
+struct ffa_value api_ffa_id_get(const struct vcpu *current);
+struct ffa_value api_ffa_features(uint32_t function_id);
+struct ffa_value api_ffa_run(ffa_vm_id_t vm_id, ffa_vcpu_index_t vcpu_idx,
+			     const struct vcpu *current, struct vcpu **next);
+struct ffa_value api_ffa_mem_send(uint32_t share_func, uint32_t length,
+				  uint32_t fragment_length, ipaddr_t address,
+				  uint32_t page_count, struct vcpu *current,
+				  struct vcpu **next);
+struct ffa_value api_ffa_mem_retrieve_req(uint32_t length,
+					  uint32_t fragment_length,
+					  ipaddr_t address, uint32_t page_count,
+					  struct vcpu *current);
+struct ffa_value api_ffa_mem_relinquish(struct vcpu *current);
+struct ffa_value api_ffa_mem_reclaim(ffa_memory_handle_t handle,
+				     ffa_memory_region_flags_t flags,
+				     struct vcpu *current);
diff --git a/inc/hf/arch/cpu.h b/inc/hf/arch/cpu.h
index a7fe5a8..171abaa 100644
--- a/inc/hf/arch/cpu.h
+++ b/inc/hf/arch/cpu.h
@@ -25,7 +25,7 @@
 #include "hf/addr.h"
 #include "hf/vcpu.h"
 
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 /**
  * Reset the register values other than the PC and argument which are set with
@@ -48,7 +48,7 @@
  * This function must only be called on an arch_regs that is known not to be
  * in use by any other physical CPU.
  */
-void arch_regs_set_retval(struct arch_regs *r, struct spci_value v);
+void arch_regs_set_retval(struct arch_regs *r, struct ffa_value v);
 
 /**
  * Initialize and reset CPU-wide register values.
diff --git a/inc/hf/arch/plat/smc.h b/inc/hf/arch/plat/smc.h
index 29493fc..4d5b9a0 100644
--- a/inc/hf/arch/plat/smc.h
+++ b/inc/hf/arch/plat/smc.h
@@ -16,11 +16,11 @@
 
 #pragma once
 
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 /**
  * Called after an SMC has been forwarded. `args` contains the arguments passed
  * to the SMC and `ret` contains the return values that will be set in the vCPU
  * registers after this call returns.
  */
-void plat_smc_post_forward(struct spci_value args, struct spci_value *ret);
+void plat_smc_post_forward(struct ffa_value args, struct ffa_value *ret);
diff --git a/inc/hf/arch/tee.h b/inc/hf/arch/tee.h
index f37fe0b..5cf4f81 100644
--- a/inc/hf/arch/tee.h
+++ b/inc/hf/arch/tee.h
@@ -16,7 +16,7 @@
 
 #pragma once
 
-#include "hf/spci.h"
+#include "hf/ffa.h"
 
 void arch_tee_init(void);
-struct spci_value arch_tee_call(struct spci_value args);
+struct ffa_value arch_tee_call(struct ffa_value args);
diff --git a/inc/hf/dlog.h b/inc/hf/dlog.h
index 5451f85..65b3ac2 100644
--- a/inc/hf/dlog.h
+++ b/inc/hf/dlog.h
@@ -19,7 +19,7 @@
 #include <stdarg.h>
 #include <stddef.h>
 
-#include "hf/spci.h"
+#include "hf/ffa.h"
 
 #define DLOG_BUFFER_SIZE 8192
 
@@ -67,4 +67,4 @@
 #define dlog_verbose(...)
 #endif
 
-void dlog_flush_vm_buffer(spci_vm_id_t id, char buffer[], size_t length);
+void dlog_flush_vm_buffer(ffa_vm_id_t id, char buffer[], size_t length);
diff --git a/inc/hf/spci_internal.h b/inc/hf/ffa_internal.h
similarity index 67%
rename from inc/hf/spci_internal.h
rename to inc/hf/ffa_internal.h
index 1c2ccff..201f04f 100644
--- a/inc/hf/spci_internal.h
+++ b/inc/hf/ffa_internal.h
@@ -18,15 +18,15 @@
 
 #include <stdint.h>
 
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
-#define SPCI_VERSION_MAJOR 0x1
-#define SPCI_VERSION_MINOR 0x0
+#define FFA_VERSION_MAJOR 0x1
+#define FFA_VERSION_MINOR 0x0
 
-#define SPCI_VERSION_MAJOR_OFFSET 16
-#define SPCI_VERSION_RESERVED_BIT UINT32_C(1U << 31)
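+/*
+ * A version number is encoded as (major << FFA_VERSION_MAJOR_OFFSET) | minor,
+ * with the most significant bit reserved.
+ */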
+#define FFA_VERSION_MAJOR_OFFSET 16
+#define FFA_VERSION_RESERVED_BIT UINT32_C(1U << 31)
 
-static inline struct spci_value spci_error(uint64_t error_code)
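+/** Constructs an FFA_ERROR value carrying the given error code. */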
+static inline struct ffa_value ffa_error(uint64_t error_code)
 {
-	return (struct spci_value){.func = SPCI_ERROR_32, .arg2 = error_code};
+	return (struct ffa_value){.func = FFA_ERROR_32, .arg2 = error_code};
 }
diff --git a/inc/hf/ffa_memory.h b/inc/hf/ffa_memory.h
new file mode 100644
index 0000000..c133f76
--- /dev/null
+++ b/inc/hf/ffa_memory.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright 2019 The Hafnium Authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include "hf/mpool.h"
+#include "hf/vm.h"
+
+#include "vmapi/hf/ffa.h"
+
+struct ffa_value ffa_memory_send(struct vm *to, struct vm_locked from_locked,
+				 struct ffa_memory_region *memory_region,
+				 uint32_t memory_share_size,
+				 uint32_t share_func, struct mpool *page_pool);
+struct ffa_value ffa_memory_retrieve(struct vm_locked to_locked,
+				     struct ffa_memory_region *retrieve_request,
+				     uint32_t retrieve_request_size,
+				     struct mpool *page_pool);
+struct ffa_value ffa_memory_relinquish(
+	struct vm_locked from_locked,
+	struct ffa_mem_relinquish *relinquish_request, struct mpool *page_pool);
+struct ffa_value ffa_memory_reclaim(struct vm_locked to_locked,
+				    ffa_memory_handle_t handle, bool clear,
+				    struct mpool *page_pool);
+struct ffa_value ffa_memory_tee_reclaim(struct vm_locked to_locked,
+					ffa_memory_handle_t handle,
+					struct ffa_memory_region *memory_region,
+					bool clear, struct mpool *page_pool);
diff --git a/inc/hf/manifest.h b/inc/hf/manifest.h
index 4c3c5dc..ab3a92f 100644
--- a/inc/hf/manifest.h
+++ b/inc/hf/manifest.h
@@ -16,8 +16,8 @@
 
 #pragma once
 
+#include "hf/ffa.h"
 #include "hf/memiter.h"
-#include "hf/spci.h"
 #include "hf/string.h"
 #include "hf/vm.h"
 
@@ -41,7 +41,7 @@
 		/* Properties specific to secondary VMs. */
 		struct {
 			uint64_t mem_size;
-			spci_vcpu_count_t vcpu_count;
+			ffa_vcpu_count_t vcpu_count;
 		} secondary;
 	};
 };
@@ -50,8 +50,8 @@
  * Hafnium manifest parsed from FDT.
  */
 struct manifest {
-	bool spci_tee_enabled;
-	spci_vm_count_t vm_count;
+	bool ffa_tee_enabled;
+	ffa_vm_count_t vm_count;
 	struct manifest_vm vm[MAX_VMS];
 };
 
diff --git a/inc/hf/spci_memory.h b/inc/hf/spci_memory.h
deleted file mode 100644
index 7b6f086..0000000
--- a/inc/hf/spci_memory.h
+++ /dev/null
@@ -1,42 +0,0 @@
-/*
- * Copyright 2019 The Hafnium Authors.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     https://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include "hf/mpool.h"
-#include "hf/vm.h"
-
-#include "vmapi/hf/spci.h"
-
-struct spci_value spci_memory_send(struct vm *to, struct vm_locked from_locked,
-				   struct spci_memory_region *memory_region,
-				   uint32_t memory_share_size,
-				   uint32_t share_func,
-				   struct mpool *page_pool);
-struct spci_value spci_memory_retrieve(
-	struct vm_locked to_locked, struct spci_memory_region *retrieve_request,
-	uint32_t retrieve_request_size, struct mpool *page_pool);
-struct spci_value spci_memory_relinquish(
-	struct vm_locked from_locked,
-	struct spci_mem_relinquish *relinquish_request,
-	struct mpool *page_pool);
-struct spci_value spci_memory_reclaim(struct vm_locked to_locked,
-				      spci_memory_handle_t handle, bool clear,
-				      struct mpool *page_pool);
-struct spci_value spci_memory_tee_reclaim(
-	struct vm_locked to_locked, spci_memory_handle_t handle,
-	struct spci_memory_region *memory_region, bool clear,
-	struct mpool *page_pool);
diff --git a/inc/hf/vcpu.h b/inc/hf/vcpu.h
index 87d5e8c..397ecff 100644
--- a/inc/hf/vcpu.h
+++ b/inc/hf/vcpu.h
@@ -19,7 +19,7 @@
 #include "hf/addr.h"
 #include "hf/spinlock.h"
 
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 /** The number of bits in each element of the interrupt bitfields. */
 #define INTERRUPT_REGISTER_BITS 32
@@ -96,7 +96,7 @@
 void vcpu_unlock(struct vcpu_locked *locked);
 void vcpu_init(struct vcpu *vcpu, struct vm *vm);
 void vcpu_on(struct vcpu_locked vcpu, ipaddr_t entry, uintreg_t arg);
-spci_vcpu_index_t vcpu_index(const struct vcpu *vcpu);
+ffa_vcpu_index_t vcpu_index(const struct vcpu *vcpu);
 bool vcpu_is_off(struct vcpu_locked vcpu);
 bool vcpu_secondary_reset_and_start(struct vcpu *vcpu, ipaddr_t entry,
 				    uintreg_t arg);
diff --git a/inc/hf/vm.h b/inc/hf/vm.h
index 0d7735e..7803d3f 100644
--- a/inc/hf/vm.h
+++ b/inc/hf/vm.h
@@ -25,7 +25,7 @@
 #include "hf/mm.h"
 #include "hf/mpool.h"
 
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 #define MAX_SMCS 32
 #define LOG_BUFFER_SIZE 256
@@ -35,10 +35,10 @@
  *
  * EMPTY is the initial state. The following state transitions are possible:
  * * EMPTY → RECEIVED: message sent to the VM.
- * * RECEIVED → READ: secondary VM returns from SPCI_MSG_WAIT or
- *   SPCI_MSG_POLL, or primary VM returns from SPCI_RUN with an SPCI_MSG_SEND
+ * * RECEIVED → READ: secondary VM returns from FFA_MSG_WAIT or
+ *   FFA_MSG_POLL, or primary VM returns from FFA_RUN with an FFA_MSG_SEND
  *   where the receiver is itself.
- * * READ → EMPTY: VM called SPCI_RX_RELEASE.
+ * * READ → EMPTY: VM called FFA_RX_RELEASE.
  */
 enum mailbox_state {
 	/** There is no message in the mailbox. */
@@ -74,13 +74,13 @@
 	const void *send;
 
 	/** The ID of the VM which sent the message currently in `recv`. */
-	spci_vm_id_t recv_sender;
+	ffa_vm_id_t recv_sender;
 
 	/** The size of the message currently in `recv`. */
 	uint32_t recv_size;
 
 	/**
-	 * The SPCI function ID to use to deliver the message currently in
+	 * The FF-A function ID to use to deliver the message currently in
 	 * `recv`.
 	 */
 	uint32_t recv_func;
@@ -107,12 +107,12 @@
 };
 
 struct vm {
-	spci_vm_id_t id;
+	ffa_vm_id_t id;
 	struct smc_whitelist smc_whitelist;
 
 	/** See api.c for the partial ordering on locks. */
 	struct spinlock lock;
-	spci_vcpu_count_t vcpu_count;
+	ffa_vcpu_count_t vcpu_count;
 	struct vcpu vcpus[MAX_CPUS];
 	struct mm_ptable ptable;
 	struct mailbox mailbox;
@@ -142,18 +142,18 @@
 	struct vm_locked vm2;
 };
 
-struct vm *vm_init(spci_vm_id_t id, spci_vcpu_count_t vcpu_count,
+struct vm *vm_init(ffa_vm_id_t id, ffa_vcpu_count_t vcpu_count,
 		   struct mpool *ppool);
-bool vm_init_next(spci_vcpu_count_t vcpu_count, struct mpool *ppool,
+bool vm_init_next(ffa_vcpu_count_t vcpu_count, struct mpool *ppool,
 		  struct vm **new_vm);
-spci_vm_count_t vm_get_count(void);
-struct vm *vm_find(spci_vm_id_t id);
+ffa_vm_count_t vm_get_count(void);
+struct vm *vm_find(ffa_vm_id_t id);
 struct vm_locked vm_lock(struct vm *vm);
 struct two_vm_locked vm_lock_both(struct vm *vm1, struct vm *vm2);
 void vm_unlock(struct vm_locked *locked);
-struct vcpu *vm_get_vcpu(struct vm *vm, spci_vcpu_index_t vcpu_index);
-struct wait_entry *vm_get_wait_entry(struct vm *vm, spci_vm_id_t for_vm);
-spci_vm_id_t vm_id_for_wait_entry(struct vm *vm, struct wait_entry *entry);
+struct vcpu *vm_get_vcpu(struct vm *vm, ffa_vcpu_index_t vcpu_index);
+struct wait_entry *vm_get_wait_entry(struct vm *vm, ffa_vm_id_t for_vm);
+ffa_vm_id_t vm_id_for_wait_entry(struct vm *vm, struct wait_entry *entry);
 
 bool vm_identity_map(struct vm_locked vm_locked, paddr_t begin, paddr_t end,
 		     uint32_t mode, struct mpool *ppool, ipaddr_t *ipa);
diff --git a/inc/vmapi/hf/abi.h b/inc/vmapi/hf/abi.h
index ed004b7..cc6d14c 100644
--- a/inc/vmapi/hf/abi.h
+++ b/inc/vmapi/hf/abi.h
@@ -16,7 +16,7 @@
 
 #pragma once
 
-#include "hf/spci.h"
+#include "hf/ffa.h"
 #include "hf/types.h"
 
 /* Keep macro alignment */
@@ -31,9 +31,9 @@
 #define HF_INTERRUPT_GET               0xff06
 #define HF_INTERRUPT_INJECT            0xff07
 
-/* Custom SPCI-like calls returned from SPCI_RUN. */
-#define HF_SPCI_RUN_WAIT_FOR_INTERRUPT 0xff09
-#define HF_SPCI_RUN_WAKE_UP            0xff0a
+/* Custom FF-A-like calls returned from FFA_RUN. */
+#define HF_FFA_RUN_WAIT_FOR_INTERRUPT 0xff09
+#define HF_FFA_RUN_WAKE_UP            0xff0a
 
 /* This matches what Trusty and its ATF module currently use. */
 #define HF_DEBUG_LOG            0xbd000000
diff --git a/inc/vmapi/hf/call.h b/inc/vmapi/hf/call.h
index 1fa31cf..1416f77 100644
--- a/inc/vmapi/hf/call.h
+++ b/inc/vmapi/hf/call.h
@@ -17,7 +17,7 @@
 #pragma once
 
 #include "hf/abi.h"
-#include "hf/spci.h"
+#include "hf/ffa.h"
 #include "hf/types.h"
 
 /**
@@ -25,28 +25,28 @@
  * mechanism to call to the hypervisor.
  */
 int64_t hf_call(uint64_t arg0, uint64_t arg1, uint64_t arg2, uint64_t arg3);
-struct spci_value spci_call(struct spci_value args);
+struct ffa_value ffa_call(struct ffa_value args);
 
 /**
  * Returns the VM's own ID.
  */
-static inline struct spci_value spci_id_get(void)
+static inline struct ffa_value ffa_id_get(void)
 {
-	return spci_call((struct spci_value){.func = SPCI_ID_GET_32});
+	return ffa_call((struct ffa_value){.func = FFA_ID_GET_32});
 }
 
 /**
  * Returns the VM's own ID.
  */
-static inline spci_vm_id_t hf_vm_get_id(void)
+static inline ffa_vm_id_t hf_vm_get_id(void)
 {
-	return spci_id_get().arg2;
+	return ffa_id_get().arg2;
 }
 
 /**
  * Returns the number of secondary VMs.
  */
-static inline spci_vm_count_t hf_vm_get_count(void)
+static inline ffa_vm_count_t hf_vm_get_count(void)
 {
 	return hf_call(HF_VM_GET_COUNT, 0, 0, 0);
 }
@@ -54,7 +54,7 @@
 /**
  * Returns the number of vCPUs configured in the given secondary VM.
  */
-static inline spci_vcpu_count_t hf_vcpu_get_count(spci_vm_id_t vm_id)
+static inline ffa_vcpu_count_t hf_vcpu_get_count(ffa_vm_id_t vm_id)
 {
 	return hf_call(HF_VCPU_GET_COUNT, vm_id, 0, 0);
 }
@@ -62,20 +62,20 @@
 /**
  * Runs the given vCPU of the given VM.
  */
-static inline struct spci_value spci_run(spci_vm_id_t vm_id,
-					 spci_vcpu_index_t vcpu_idx)
+static inline struct ffa_value ffa_run(ffa_vm_id_t vm_id,
+				       ffa_vcpu_index_t vcpu_idx)
 {
-	return spci_call((struct spci_value){.func = SPCI_RUN_32,
-					     spci_vm_vcpu(vm_id, vcpu_idx)});
+	return ffa_call((struct ffa_value){.func = FFA_RUN_32,
+					   ffa_vm_vcpu(vm_id, vcpu_idx)});
 }
 
 /**
  * Hints that the vCPU is willing to yield its current use of the physical CPU.
- * This call always returns SPCI_SUCCESS.
+ * This call always returns FFA_SUCCESS.
  */
-static inline struct spci_value spci_yield(void)
+static inline struct ffa_value ffa_yield(void)
 {
-	return spci_call((struct spci_value){.func = SPCI_YIELD_32});
+	return ffa_call((struct ffa_value){.func = FFA_YIELD_32});
 }
 
 /**
@@ -83,24 +83,23 @@
  * shared.
  *
  * Returns:
- *  - SPCI_ERROR SPCI_INVALID_PARAMETERS if the given addresses are not properly
+ *  - FFA_ERROR FFA_INVALID_PARAMETERS if the given addresses are not properly
  *    aligned or are the same.
- *  - SPCI_ERROR SPCI_NO_MEMORY if the hypervisor was unable to map the buffers
+ *  - FFA_ERROR FFA_NO_MEMORY if the hypervisor was unable to map the buffers
  *    due to insufficient page table memory.
- *  - SPCI_ERROR SPCI_DENIED if the pages are already mapped or are not owned by
+ *  - FFA_ERROR FFA_DENIED if the pages are already mapped or are not owned by
  *    the caller.
- *  - SPCI_SUCCESS on success if no further action is needed.
- *  - SPCI_RX_RELEASE if it was called by the primary VM and the primary VM now
+ *  - FFA_SUCCESS on success if no further action is needed.
+ *  - FFA_RX_RELEASE if it was called by the primary VM and the primary VM now
  *    needs to wake up or kick waiters.
  */
-static inline struct spci_value spci_rxtx_map(hf_ipaddr_t send,
-					      hf_ipaddr_t recv)
+static inline struct ffa_value ffa_rxtx_map(hf_ipaddr_t send, hf_ipaddr_t recv)
 {
-	return spci_call(
-		(struct spci_value){.func = SPCI_RXTX_MAP_64,
-				    .arg1 = send,
-				    .arg2 = recv,
-				    .arg3 = HF_MAILBOX_SIZE / SPCI_PAGE_SIZE});
+	return ffa_call(
+		(struct ffa_value){.func = FFA_RXTX_MAP_64,
+				   .arg1 = send,
+				   .arg2 = recv,
+				   .arg3 = HF_MAILBOX_SIZE / FFA_PAGE_SIZE});
 }
 
 /**
@@ -110,71 +109,70 @@
  * caller to be notified when the recipient's receive buffer becomes available.
  *
  * Attributes may include:
- *  - SPCI_MSG_SEND_NOTIFY, to notify the caller when it should try again.
- *  - SPCI_MSG_SEND_LEGACY_MEMORY_*, to send a legacy architected memory sharing
+ *  - FFA_MSG_SEND_NOTIFY, to notify the caller when it should try again.
+ *  - FFA_MSG_SEND_LEGACY_MEMORY_*, to send a legacy architected memory sharing
  *    message.
  *
- * Returns SPCI_SUCCESS if the message is sent, or an error code otherwise:
+ * Returns FFA_SUCCESS if the message is sent, or an error code otherwise:
  *  - INVALID_PARAMETERS: one or more of the parameters do not conform.
  *  - BUSY: the message could not be delivered either because the mailbox
  *    was full or the target VM is not yet set up.
  */
-static inline struct spci_value spci_msg_send(spci_vm_id_t sender_vm_id,
-					      spci_vm_id_t target_vm_id,
-					      uint32_t size,
-					      uint32_t attributes)
+static inline struct ffa_value ffa_msg_send(ffa_vm_id_t sender_vm_id,
+					    ffa_vm_id_t target_vm_id,
+					    uint32_t size, uint32_t attributes)
 {
-	return spci_call((struct spci_value){
-		.func = SPCI_MSG_SEND_32,
+	return ffa_call((struct ffa_value){
+		.func = FFA_MSG_SEND_32,
 		.arg1 = ((uint64_t)sender_vm_id << 16) | target_vm_id,
 		.arg3 = size,
 		.arg4 = attributes});
 }
 
-static inline struct spci_value spci_mem_donate(uint32_t length,
-						uint32_t fragment_length)
-{
-	return spci_call((struct spci_value){.func = SPCI_MEM_DONATE_32,
-					     .arg1 = length,
-					     .arg2 = fragment_length});
-}
-
-static inline struct spci_value spci_mem_lend(uint32_t length,
+static inline struct ffa_value ffa_mem_donate(uint32_t length,
 					      uint32_t fragment_length)
 {
-	return spci_call((struct spci_value){.func = SPCI_MEM_LEND_32,
-					     .arg1 = length,
-					     .arg2 = fragment_length});
+	return ffa_call((struct ffa_value){.func = FFA_MEM_DONATE_32,
+					   .arg1 = length,
+					   .arg2 = fragment_length});
 }
 
-static inline struct spci_value spci_mem_share(uint32_t length,
-					       uint32_t fragment_length)
+static inline struct ffa_value ffa_mem_lend(uint32_t length,
+					    uint32_t fragment_length)
 {
-	return spci_call((struct spci_value){.func = SPCI_MEM_SHARE_32,
-					     .arg1 = length,
-					     .arg2 = fragment_length});
+	return ffa_call((struct ffa_value){.func = FFA_MEM_LEND_32,
+					   .arg1 = length,
+					   .arg2 = fragment_length});
 }
 
-static inline struct spci_value spci_mem_retrieve_req(uint32_t length,
-						      uint32_t fragment_length)
+static inline struct ffa_value ffa_mem_share(uint32_t length,
+					     uint32_t fragment_length)
 {
-	return spci_call((struct spci_value){.func = SPCI_MEM_RETRIEVE_REQ_32,
-					     .arg1 = length,
-					     .arg2 = fragment_length});
+	return ffa_call((struct ffa_value){.func = FFA_MEM_SHARE_32,
+					   .arg1 = length,
+					   .arg2 = fragment_length});
 }
 
-static inline struct spci_value spci_mem_relinquish(void)
+static inline struct ffa_value ffa_mem_retrieve_req(uint32_t length,
+						    uint32_t fragment_length)
 {
-	return spci_call((struct spci_value){.func = SPCI_MEM_RELINQUISH_32});
+	return ffa_call((struct ffa_value){.func = FFA_MEM_RETRIEVE_REQ_32,
+					   .arg1 = length,
+					   .arg2 = fragment_length});
 }
 
-static inline struct spci_value spci_mem_reclaim(
-	spci_memory_handle_t handle, spci_memory_region_flags_t flags)
+static inline struct ffa_value ffa_mem_relinquish(void)
 {
-	return spci_call((struct spci_value){.func = SPCI_MEM_RECLAIM_32,
-					     .arg1 = (uint32_t)handle,
-					     .arg2 = (uint32_t)(handle >> 32),
-					     .arg3 = flags});
+	return ffa_call((struct ffa_value){.func = FFA_MEM_RELINQUISH_32});
+}
+
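+/** Reclaims a memory region, passing the 64-bit handle as two 32-bit halves. */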
+static inline struct ffa_value ffa_mem_reclaim(ffa_memory_handle_t handle,
+					       ffa_memory_region_flags_t flags)
+{
+	return ffa_call((struct ffa_value){.func = FFA_MEM_RECLAIM_32,
+					   .arg1 = (uint32_t)handle,
+					   .arg2 = (uint32_t)(handle >> 32),
+					   .arg3 = flags});
 }
 
 /**
@@ -190,13 +188,13 @@
  * that a message becoming available is also treated like a wake-up event.
  *
  * Returns:
- *  - SPCI_MSG_SEND if a message is successfully received.
- *  - SPCI_ERROR SPCI_NOT_SUPPORTED if called from the primary VM.
- *  - SPCI_ERROR SPCI_INTERRUPTED if an interrupt happened during the call.
+ *  - FFA_MSG_SEND if a message is successfully received.
+ *  - FFA_ERROR FFA_NOT_SUPPORTED if called from the primary VM.
+ *  - FFA_ERROR FFA_INTERRUPTED if an interrupt happened during the call.
  */
-static inline struct spci_value spci_msg_wait(void)
+static inline struct ffa_value ffa_msg_wait(void)
 {
-	return spci_call((struct spci_value){.func = SPCI_MSG_WAIT_32});
+	return ffa_call((struct ffa_value){.func = FFA_MSG_WAIT_32});
 }
 
 /**
@@ -206,14 +204,14 @@
  * The mailbox must be cleared before a new message can be received.
  *
  * Returns:
- *  - SPCI_MSG_SEND if a message is successfully received.
- *  - SPCI_ERROR SPCI_NOT_SUPPORTED if called from the primary VM.
- *  - SPCI_ERROR SPCI_INTERRUPTED if an interrupt happened during the call.
- *  - SPCI_ERROR SPCI_RETRY if there was no pending message.
+ *  - FFA_MSG_SEND if a message is successfully received.
+ *  - FFA_ERROR FFA_NOT_SUPPORTED if called from the primary VM.
+ *  - FFA_ERROR FFA_INTERRUPTED if an interrupt happened during the call.
+ *  - FFA_ERROR FFA_RETRY if there was no pending message.
  */
-static inline struct spci_value spci_msg_poll(void)
+static inline struct ffa_value ffa_msg_poll(void)
 {
-	return spci_call((struct spci_value){.func = SPCI_MSG_POLL_32});
+	return ffa_call((struct ffa_value){.func = FFA_MSG_POLL_32});
 }
 
 /**
@@ -222,15 +220,15 @@
  * will overwrite the old and will arrive asynchronously.
  *
  * Returns:
- *  - SPCI_ERROR SPCI_DENIED on failure, if the mailbox hasn't been read.
- *  - SPCI_SUCCESS on success if no further action is needed.
- *  - SPCI_RX_RELEASE if it was called by the primary VM and the primary VM now
+ *  - FFA_ERROR FFA_DENIED on failure, if the mailbox hasn't been read.
+ *  - FFA_SUCCESS on success if no further action is needed.
+ *  - FFA_RX_RELEASE if it was called by the primary VM and the primary VM now
  *    needs to wake up or kick waiters. Waiters should be retrieved by calling
  *    hf_mailbox_waiter_get.
  */
-static inline struct spci_value spci_rx_release(void)
+static inline struct ffa_value ffa_rx_release(void)
 {
-	return spci_call((struct spci_value){.func = SPCI_RX_RELEASE_32});
+	return ffa_call((struct ffa_value){.func = FFA_RX_RELEASE_32});
 }
 
 /**
@@ -256,7 +254,7 @@
  * Returns -1 on failure or if there are no waiters; the VM ID of the next
  * waiter otherwise.
  */
-static inline int64_t hf_mailbox_waiter_get(spci_vm_id_t vm_id)
+static inline int64_t hf_mailbox_waiter_get(ffa_vm_id_t vm_id)
 {
 	return hf_call(HF_MAILBOX_WAITER_GET, vm_id, 0, 0);
 }
@@ -294,8 +292,8 @@
  *  - 1 if it was called by the primary VM and the primary VM now needs to wake
  *    up or kick the target vCPU.
  */
-static inline int64_t hf_interrupt_inject(spci_vm_id_t target_vm_id,
-					  spci_vcpu_index_t target_vcpu_idx,
+static inline int64_t hf_interrupt_inject(ffa_vm_id_t target_vm_id,
+					  ffa_vcpu_index_t target_vcpu_idx,
 					  uint32_t intid)
 {
 	return hf_call(HF_INTERRUPT_INJECT, target_vm_id, target_vcpu_idx,
@@ -312,26 +310,26 @@
 	return hf_call(HF_DEBUG_LOG, c, 0, 0);
 }
 
-/** Obtains the Hafnium's version of the implemented SPCI specification. */
-static inline int32_t spci_version(uint32_t requested_version)
+/** Obtains Hafnium's version of the implemented FF-A specification. */
+static inline int32_t ffa_version(uint32_t requested_version)
 {
-	return spci_call((struct spci_value){.func = SPCI_VERSION_32,
-					     .arg1 = requested_version})
+	return ffa_call((struct ffa_value){.func = FFA_VERSION_32,
+					   .arg1 = requested_version})
 		.func;
 }
 
 /**
  * Discovery function returning information about the implementation of optional
- * SPCI interfaces.
+ * FF-A interfaces.
  *
  * Returns:
- *  - SPCI_SUCCESS in .func if the optional interface with function_id is
+ *  - FFA_SUCCESS in .func if the optional interface with function_id is
  * implemented.
- *  - SPCI_ERROR in .func if the optional interface with function_id is not
+ *  - FFA_ERROR in .func if the optional interface with function_id is not
  * implemented.
  */
-static inline struct spci_value spci_features(uint32_t function_id)
+static inline struct ffa_value ffa_features(uint32_t function_id)
 {
-	return spci_call((struct spci_value){.func = SPCI_FEATURES_32,
-					     .arg1 = function_id});
+	return ffa_call((struct ffa_value){.func = FFA_FEATURES_32,
+					   .arg1 = function_id});
 }
diff --git a/inc/vmapi/hf/ffa.h b/inc/vmapi/hf/ffa.h
new file mode 100644
index 0000000..bd93f0c
--- /dev/null
+++ b/inc/vmapi/hf/ffa.h
@@ -0,0 +1,479 @@
+/*
+ * Copyright 2019 The Hafnium Authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include "hf/types.h"
+
+/* clang-format off */
+
+#define FFA_LOW_32_ID  0x84000060
+#define FFA_HIGH_32_ID 0x8400007F
+#define FFA_LOW_64_ID  0xC4000060
+#define FFA_HIGH_64_ID 0xC400007F
+
+/* FF-A function identifiers. */
+#define FFA_ERROR_32                 0x84000060
+#define FFA_SUCCESS_32               0x84000061
+#define FFA_INTERRUPT_32             0x84000062
+#define FFA_VERSION_32               0x84000063
+#define FFA_FEATURES_32              0x84000064
+#define FFA_RX_RELEASE_32            0x84000065
+#define FFA_RXTX_MAP_32              0x84000066
+#define FFA_RXTX_MAP_64              0xC4000066
+#define FFA_RXTX_UNMAP_32            0x84000067
+#define FFA_PARTITION_INFO_GET_32    0x84000068
+#define FFA_ID_GET_32                0x84000069
+#define FFA_MSG_POLL_32              0x8400006A
+#define FFA_MSG_WAIT_32              0x8400006B
+#define FFA_YIELD_32                 0x8400006C
+#define FFA_RUN_32                   0x8400006D
+#define FFA_MSG_SEND_32              0x8400006E
+#define FFA_MSG_SEND_DIRECT_REQ_32   0x8400006F
+#define FFA_MSG_SEND_DIRECT_RESP_32  0x84000070
+#define FFA_MEM_DONATE_32            0x84000071
+#define FFA_MEM_LEND_32              0x84000072
+#define FFA_MEM_SHARE_32             0x84000073
+#define FFA_MEM_RETRIEVE_REQ_32      0x84000074
+#define FFA_MEM_RETRIEVE_RESP_32     0x84000075
+#define FFA_MEM_RELINQUISH_32        0x84000076
+#define FFA_MEM_RECLAIM_32           0x84000077
+
+/* FF-A error codes. */
+#define FFA_NOT_SUPPORTED      INT32_C(-1)
+#define FFA_INVALID_PARAMETERS INT32_C(-2)
+#define FFA_NO_MEMORY          INT32_C(-3)
+#define FFA_BUSY               INT32_C(-4)
+#define FFA_INTERRUPTED        INT32_C(-5)
+#define FFA_DENIED             INT32_C(-6)
+#define FFA_RETRY              INT32_C(-7)
+#define FFA_ABORTED            INT32_C(-8)
+
+/* clang-format on */
+
+/* FF-A function specific constants. */
+#define FFA_MSG_RECV_BLOCK 0x1
+#define FFA_MSG_RECV_BLOCK_MASK 0x1
+
+#define FFA_MSG_SEND_NOTIFY 0x1
+#define FFA_MSG_SEND_NOTIFY_MASK 0x1
+
+#define FFA_MEM_RECLAIM_CLEAR 0x1
+
+#define FFA_SLEEP_INDEFINITE 0
+
+/**
+ * For use where the FF-A specification refers explicitly to '4K pages'. Not to
+ * be confused with PAGE_SIZE, which is the translation granule Hafnium is
+ * configured to use.
+ */
+#define FFA_PAGE_SIZE 4096
+
+/* The maximum length possible for a single message. */
+#define FFA_MSG_PAYLOAD_MAX HF_MAILBOX_SIZE
+
+enum ffa_data_access {
+	FFA_DATA_ACCESS_NOT_SPECIFIED,
+	FFA_DATA_ACCESS_RO,
+	FFA_DATA_ACCESS_RW,
+	FFA_DATA_ACCESS_RESERVED,
+};
+
+enum ffa_instruction_access {
+	FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+	FFA_INSTRUCTION_ACCESS_NX,
+	FFA_INSTRUCTION_ACCESS_X,
+	FFA_INSTRUCTION_ACCESS_RESERVED,
+};
+
+enum ffa_memory_type {
+	FFA_MEMORY_NOT_SPECIFIED_MEM,
+	FFA_MEMORY_DEVICE_MEM,
+	FFA_MEMORY_NORMAL_MEM,
+};
+
+enum ffa_memory_cacheability {
+	FFA_MEMORY_CACHE_RESERVED = 0x0,
+	FFA_MEMORY_CACHE_NON_CACHEABLE = 0x1,
+	FFA_MEMORY_CACHE_RESERVED_1 = 0x2,
+	FFA_MEMORY_CACHE_WRITE_BACK = 0x3,
+	FFA_MEMORY_DEV_NGNRNE = 0x0,
+	FFA_MEMORY_DEV_NGNRE = 0x1,
+	FFA_MEMORY_DEV_NGRE = 0x2,
+	FFA_MEMORY_DEV_GRE = 0x3,
+};
+
+enum ffa_memory_shareability {
+	FFA_MEMORY_SHARE_NON_SHAREABLE,
+	FFA_MEMORY_SHARE_RESERVED,
+	FFA_MEMORY_OUTER_SHAREABLE,
+	FFA_MEMORY_INNER_SHAREABLE,
+};
+
+typedef uint8_t ffa_memory_access_permissions_t;
+
+/**
+ * This corresponds to table 44 of the FF-A 1.0 EAC specification, "Memory
+ * region attributes descriptor".
+ */
+typedef uint8_t ffa_memory_attributes_t;
+
+#define FFA_DATA_ACCESS_OFFSET (0x0U)
+#define FFA_DATA_ACCESS_MASK ((0x3U) << FFA_DATA_ACCESS_OFFSET)
+
+#define FFA_INSTRUCTION_ACCESS_OFFSET (0x2U)
+#define FFA_INSTRUCTION_ACCESS_MASK ((0x3U) << FFA_INSTRUCTION_ACCESS_OFFSET)
+
+#define FFA_MEMORY_TYPE_OFFSET (0x4U)
+#define FFA_MEMORY_TYPE_MASK ((0x3U) << FFA_MEMORY_TYPE_OFFSET)
+
+#define FFA_MEMORY_CACHEABILITY_OFFSET (0x2U)
+#define FFA_MEMORY_CACHEABILITY_MASK ((0x3U) << FFA_MEMORY_CACHEABILITY_OFFSET)
+
+#define FFA_MEMORY_SHAREABILITY_OFFSET (0x0U)
+#define FFA_MEMORY_SHAREABILITY_MASK ((0x3U) << FFA_MEMORY_SHAREABILITY_OFFSET)
+
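+/*
+ * Generate inline setter/getter pairs, e.g. ffa_set_data_access_attr() and
+ * ffa_get_data_access_attr(), for the attribute bitfields defined above.
+ */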
+#define ATTR_FUNCTION_SET(name, container_type, offset, mask)                \
+	static inline void ffa_set_##name##_attr(container_type *attr,       \
+						 const enum ffa_##name perm) \
+	{                                                                    \
+		*attr = (*attr & ~(mask)) | ((perm << offset) & mask);       \
+	}
+
+#define ATTR_FUNCTION_GET(name, container_type, offset, mask)      \
+	static inline enum ffa_##name ffa_get_##name##_attr(       \
+		container_type attr)                               \
+	{                                                          \
+		return (enum ffa_##name)((attr & mask) >> offset); \
+	}
+
+ATTR_FUNCTION_SET(data_access, ffa_memory_access_permissions_t,
+		  FFA_DATA_ACCESS_OFFSET, FFA_DATA_ACCESS_MASK)
+ATTR_FUNCTION_GET(data_access, ffa_memory_access_permissions_t,
+		  FFA_DATA_ACCESS_OFFSET, FFA_DATA_ACCESS_MASK)
+
+ATTR_FUNCTION_SET(instruction_access, ffa_memory_access_permissions_t,
+		  FFA_INSTRUCTION_ACCESS_OFFSET, FFA_INSTRUCTION_ACCESS_MASK)
+ATTR_FUNCTION_GET(instruction_access, ffa_memory_access_permissions_t,
+		  FFA_INSTRUCTION_ACCESS_OFFSET, FFA_INSTRUCTION_ACCESS_MASK)
+
+ATTR_FUNCTION_SET(memory_type, ffa_memory_attributes_t, FFA_MEMORY_TYPE_OFFSET,
+		  FFA_MEMORY_TYPE_MASK)
+ATTR_FUNCTION_GET(memory_type, ffa_memory_attributes_t, FFA_MEMORY_TYPE_OFFSET,
+		  FFA_MEMORY_TYPE_MASK)
+
+ATTR_FUNCTION_SET(memory_cacheability, ffa_memory_attributes_t,
+		  FFA_MEMORY_CACHEABILITY_OFFSET, FFA_MEMORY_CACHEABILITY_MASK)
+ATTR_FUNCTION_GET(memory_cacheability, ffa_memory_attributes_t,
+		  FFA_MEMORY_CACHEABILITY_OFFSET, FFA_MEMORY_CACHEABILITY_MASK)
+
+ATTR_FUNCTION_SET(memory_shareability, ffa_memory_attributes_t,
+		  FFA_MEMORY_SHAREABILITY_OFFSET, FFA_MEMORY_SHAREABILITY_MASK)
+ATTR_FUNCTION_GET(memory_shareability, ffa_memory_attributes_t,
+		  FFA_MEMORY_SHAREABILITY_OFFSET, FFA_MEMORY_SHAREABILITY_MASK)
+
+#define FFA_MEMORY_HANDLE_ALLOCATOR_MASK \
+	((ffa_memory_handle_t)(UINT64_C(1) << 63))
+#define FFA_MEMORY_HANDLE_ALLOCATOR_HYPERVISOR \
+	((ffa_memory_handle_t)(UINT64_C(1) << 63))
+
+/** The ID of a VM. These are assigned sequentially starting with an offset. */
+typedef uint16_t ffa_vm_id_t;
+
+/**
+ * A globally-unique ID assigned by the hypervisor for a region of memory being
+ * sent between VMs.
+ */
+typedef uint64_t ffa_memory_handle_t;
+
+/**
+ * A count of VMs. This has the same range as the VM IDs but we give it a
+ * different name to make the different semantics clear.
+ */
+typedef ffa_vm_id_t ffa_vm_count_t;
+
+/** The index of a vCPU within a particular VM. */
+typedef uint16_t ffa_vcpu_index_t;
+
+/**
+ * A count of vCPUs. This has the same range as the vCPU indices but we give it
+ * a different name to make the different semantics clear.
+ */
+typedef ffa_vcpu_index_t ffa_vcpu_count_t;
+
+/** Parameter and return type of FF-A functions. */
+struct ffa_value {
+	uint64_t func;
+	uint64_t arg1;
+	uint64_t arg2;
+	uint64_t arg3;
+	uint64_t arg4;
+	uint64_t arg5;
+	uint64_t arg6;
+	uint64_t arg7;
+};
+
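+/** Extracts the sender VM ID from the arguments of an FFA_MSG_SEND. */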
+static inline ffa_vm_id_t ffa_msg_send_sender(struct ffa_value args)
+{
+	return (args.arg1 >> 16) & 0xffff;
+}
+
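+/** Extracts the receiver VM ID from the arguments of an FFA_MSG_SEND. */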
+static inline ffa_vm_id_t ffa_msg_send_receiver(struct ffa_value args)
+{
+	return args.arg1 & 0xffff;
+}
+
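+/** Extracts the message size from the arguments of an FFA_MSG_SEND. */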
+static inline uint32_t ffa_msg_send_size(struct ffa_value args)
+{
+	return args.arg3;
+}
+
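+/** Extracts the message attributes from the arguments of an FFA_MSG_SEND. */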
+static inline uint32_t ffa_msg_send_attributes(struct ffa_value args)
+{
+	return args.arg4;
+}
+
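+/** Extracts the memory handle from a successful memory operation's result. */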
+static inline ffa_memory_handle_t ffa_mem_success_handle(struct ffa_value args)
+{
+	return args.arg2;
+}
+
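+/** Extracts the VM ID from the first argument of an FFA_RUN call. */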
+static inline ffa_vm_id_t ffa_vm_id(struct ffa_value args)
+{
+	return (args.arg1 >> 16) & 0xffff;
+}
+
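+/** Extracts the vCPU index from the first argument of an FFA_RUN call. */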
+static inline ffa_vcpu_index_t ffa_vcpu_index(struct ffa_value args)
+{
+	return args.arg1 & 0xffff;
+}
+
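+/** Packs a VM ID and vCPU index into one argument, as expected by FFA_RUN. */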
+static inline uint64_t ffa_vm_vcpu(ffa_vm_id_t vm_id,
+				   ffa_vcpu_index_t vcpu_index)
+{
+	return ((uint32_t)vm_id << 16) | vcpu_index;
+}
+
+/**
+ * A set of contiguous pages which is part of a memory region. This corresponds
+ * to table 40 of the FF-A 1.0 EAC specification, "Constituent memory region
+ * descriptor".
+ */
+struct ffa_memory_region_constituent {
+	/**
+	 * The base IPA of the constituent memory region, aligned to 4 KiB page
+	 * size granularity.
+	 */
+	uint64_t address;
+	/** The number of 4 KiB pages in the constituent memory region. */
+	uint32_t page_count;
+	/** Reserved field, must be 0. */
+	uint32_t reserved;
+};
+
+/**
+ * A set of pages comprising a memory region. This corresponds to table 39 of
+ * the FF-A 1.0 EAC specification, "Composite memory region descriptor".
+ */
+struct ffa_composite_memory_region {
+	/**
+	 * The total number of 4 KiB pages included in this memory region. This
+	 * must be equal to the sum of page counts specified in each
+	 * `ffa_memory_region_constituent`.
+	 */
+	uint32_t page_count;
+	/**
+	 * The number of constituents (`ffa_memory_region_constituent`)
+	 * included in this memory region range.
+	 */
+	uint32_t constituent_count;
+	/** Reserved field, must be 0. */
+	uint64_t reserved_0;
+	/** An array of `constituent_count` memory region constituents. */
+	struct ffa_memory_region_constituent constituents[];
+};
+
+/** Flags to indicate properties of receivers during memory region retrieval. */
+typedef uint8_t ffa_memory_receiver_flags_t;
+
+/**
+ * This corresponds to table 41 of the FF-A 1.0 EAC specification, "Memory
+ * access permissions descriptor".
+ */
+struct ffa_memory_region_attributes {
+	/** The ID of the VM to which the memory is being given or shared. */
+	ffa_vm_id_t receiver;
+	/**
+	 * The permissions with which the memory region should be mapped in the
+	 * receiver's page table.
+	 */
+	ffa_memory_access_permissions_t permissions;
+	/**
+	 * Flags used during FFA_MEM_RETRIEVE_REQ and FFA_MEM_RETRIEVE_RESP
+	 * for memory regions with multiple borrowers.
+	 */
+	ffa_memory_receiver_flags_t flags;
+};
+
+/** Flags to control the behaviour of a memory sharing transaction. */
+typedef uint32_t ffa_memory_region_flags_t;
+
+/**
+ * Clear memory region contents after unmapping it from the sender and before
+ * mapping it for any receiver.
+ */
+#define FFA_MEMORY_REGION_FLAG_CLEAR 0x1
+
+/**
+ * Whether the hypervisor may time slice the memory sharing or retrieval
+ * operation.
+ */
+#define FFA_MEMORY_REGION_FLAG_TIME_SLICE 0x2
+
+/**
+ * Whether the hypervisor should clear the memory region after the receiver
+ * relinquishes it or is aborted.
+ */
+#define FFA_MEMORY_REGION_FLAG_CLEAR_RELINQUISH 0x4
+
+#define FFA_MEMORY_REGION_TRANSACTION_TYPE_MASK ((0x3U) << 3)
+#define FFA_MEMORY_REGION_TRANSACTION_TYPE_UNSPECIFIED ((0x0U) << 3)
+#define FFA_MEMORY_REGION_TRANSACTION_TYPE_SHARE ((0x1U) << 3)
+#define FFA_MEMORY_REGION_TRANSACTION_TYPE_LEND ((0x2U) << 3)
+#define FFA_MEMORY_REGION_TRANSACTION_TYPE_DONATE ((0x3U) << 3)
+
+/**
+ * This corresponds to table 42 of the FF-A 1.0 EAC specification, "Endpoint
+ * memory access descriptor".
+ */
+struct ffa_memory_access {
+	struct ffa_memory_region_attributes receiver_permissions;
+	/**
+	 * Offset in bytes from the start of the outer `ffa_memory_region` to
+	 * an `ffa_composite_memory_region` struct.
+	 */
+	uint32_t composite_memory_region_offset;
+	uint64_t reserved_0;
+};
+
+/**
+ * Information about a set of pages which are being shared. This corresponds to
+ * table 45 of the FF-A 1.0 EAC specification, "Lend, donate or share memory
+ * transaction descriptor". Note that it is also used for retrieve requests and
+ * responses.
+ */
+struct ffa_memory_region {
+	/**
+	 * The ID of the VM which originally sent the memory region, i.e. the
+	 * owner.
+	 */
+	ffa_vm_id_t sender;
+	ffa_memory_attributes_t attributes;
+	/** Reserved field, must be 0. */
+	uint8_t reserved_0;
+	/** Flags to control behaviour of the transaction. */
+	ffa_memory_region_flags_t flags;
+	ffa_memory_handle_t handle;
+	/**
+	 * An implementation defined value associated with the receiver and the
+	 * memory region.
+	 */
+	uint64_t tag;
+	/** Reserved field, must be 0. */
+	uint32_t reserved_1;
+	/**
+	 * The number of `ffa_memory_access` entries included in this
+	 * transaction.
+	 */
+	uint32_t receiver_count;
+	/**
+	 * An array of `receiver_count` endpoint memory access descriptors.
+	 * Each one specifies a memory region offset, an endpoint and the
+	 * attributes with which this memory region should be mapped in that
+	 * endpoint's page table.
+	 */
+	struct ffa_memory_access receivers[];
+};
+
+/**
+ * Descriptor used for FFA_MEM_RELINQUISH requests. This corresponds to table
+ * 150 of the FF-A 1.0 EAC specification, "Descriptor to relinquish a memory
+ * region".
+ */
+struct ffa_mem_relinquish {
+	ffa_memory_handle_t handle;
+	ffa_memory_region_flags_t flags;
+	uint32_t endpoint_count;
+	ffa_vm_id_t endpoints[];
+};
+
+/**
+ * Gets the `ffa_composite_memory_region` for the given receiver from an
+ * `ffa_memory_region`, or NULL if it is not valid.
+ */
+static inline struct ffa_composite_memory_region *
+ffa_memory_region_get_composite(struct ffa_memory_region *memory_region,
+				uint32_t receiver_index)
+{
+	uint32_t offset = memory_region->receivers[receiver_index]
+				  .composite_memory_region_offset;
+
+	if (offset == 0) {
+		return NULL;
+	}
+
+	return (struct ffa_composite_memory_region *)((uint8_t *)memory_region +
+						      offset);
+}
+
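+/**
+ * Initialises the given relinquish request for a single endpoint and returns
+ * its size in bytes.
+ */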
+static inline uint32_t ffa_mem_relinquish_init(
+	struct ffa_mem_relinquish *relinquish_request,
+	ffa_memory_handle_t handle, ffa_memory_region_flags_t flags,
+	ffa_vm_id_t sender)
+{
+	relinquish_request->handle = handle;
+	relinquish_request->flags = flags;
+	relinquish_request->endpoint_count = 1;
+	relinquish_request->endpoints[0] = sender;
+	return sizeof(struct ffa_mem_relinquish) + sizeof(ffa_vm_id_t);
+}
+
+uint32_t ffa_memory_region_init(
+	struct ffa_memory_region *memory_region, ffa_vm_id_t sender,
+	ffa_vm_id_t receiver,
+	const struct ffa_memory_region_constituent constituents[],
+	uint32_t constituent_count, uint32_t tag,
+	ffa_memory_region_flags_t flags, enum ffa_data_access data_access,
+	enum ffa_instruction_access instruction_access,
+	enum ffa_memory_type type, enum ffa_memory_cacheability cacheability,
+	enum ffa_memory_shareability shareability);
+uint32_t ffa_memory_retrieve_request_init(
+	struct ffa_memory_region *memory_region, ffa_memory_handle_t handle,
+	ffa_vm_id_t sender, ffa_vm_id_t receiver, uint32_t tag,
+	ffa_memory_region_flags_t flags, enum ffa_data_access data_access,
+	enum ffa_instruction_access instruction_access,
+	enum ffa_memory_type type, enum ffa_memory_cacheability cacheability,
+	enum ffa_memory_shareability shareability);
+uint32_t ffa_memory_lender_retrieve_request_init(
+	struct ffa_memory_region *memory_region, ffa_memory_handle_t handle,
+	ffa_vm_id_t sender);
+uint32_t ffa_retrieved_memory_region_init(
+	struct ffa_memory_region *response, size_t response_max_size,
+	ffa_vm_id_t sender, ffa_memory_attributes_t attributes,
+	ffa_memory_region_flags_t flags, ffa_memory_handle_t handle,
+	ffa_vm_id_t receiver, ffa_memory_access_permissions_t permissions,
+	const struct ffa_memory_region_constituent constituents[],
+	uint32_t constituent_count);
diff --git a/inc/vmapi/hf/spci.h b/inc/vmapi/hf/spci.h
deleted file mode 100644
index 1587b54..0000000
--- a/inc/vmapi/hf/spci.h
+++ /dev/null
@@ -1,487 +0,0 @@
-/*
- * Copyright 2019 The Hafnium Authors.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     https://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include "hf/types.h"
-
-/* clang-format off */
-
-#define SPCI_LOW_32_ID  0x84000060
-#define SPCI_HIGH_32_ID 0x8400007F
-#define SPCI_LOW_64_ID  0xC4000060
-#define SPCI_HIGH_32_ID 0x8400007F
-
-/* SPCI function identifiers. */
-#define SPCI_ERROR_32                 0x84000060
-#define SPCI_SUCCESS_32               0x84000061
-#define SPCI_INTERRUPT_32             0x84000062
-#define SPCI_VERSION_32               0x84000063
-#define SPCI_FEATURES_32              0x84000064
-#define SPCI_RX_RELEASE_32            0x84000065
-#define SPCI_RXTX_MAP_32              0x84000066
-#define SPCI_RXTX_MAP_64              0xC4000066
-#define SPCI_RXTX_UNMAP_32            0x84000067
-#define SPCI_PARTITION_INFO_GET_32    0x84000068
-#define SPCI_ID_GET_32                0x84000069
-#define SPCI_MSG_POLL_32              0x8400006A
-#define SPCI_MSG_WAIT_32              0x8400006B
-#define SPCI_YIELD_32                 0x8400006C
-#define SPCI_RUN_32                   0x8400006D
-#define SPCI_MSG_SEND_32              0x8400006E
-#define SPCI_MSG_SEND_DIRECT_REQ_32   0x8400006F
-#define SPCI_MSG_SEND_DIRECT_RESP_32  0x84000070
-#define SPCI_MEM_DONATE_32            0x84000071
-#define SPCI_MEM_LEND_32              0x84000072
-#define SPCI_MEM_SHARE_32             0x84000073
-#define SPCI_MEM_RETRIEVE_REQ_32      0x84000074
-#define SPCI_MEM_RETRIEVE_RESP_32     0x84000075
-#define SPCI_MEM_RELINQUISH_32        0x84000076
-#define SPCI_MEM_RECLAIM_32           0x84000077
-
-/* SPCI error codes. */
-#define SPCI_NOT_SUPPORTED      INT32_C(-1)
-#define SPCI_INVALID_PARAMETERS INT32_C(-2)
-#define SPCI_NO_MEMORY          INT32_C(-3)
-#define SPCI_BUSY               INT32_C(-4)
-#define SPCI_INTERRUPTED        INT32_C(-5)
-#define SPCI_DENIED             INT32_C(-6)
-#define SPCI_RETRY              INT32_C(-7)
-#define SPCI_ABORTED            INT32_C(-8)
-
-/* clang-format on */
-
-/* SPCI function specific constants. */
-#define SPCI_MSG_RECV_BLOCK 0x1
-#define SPCI_MSG_RECV_BLOCK_MASK 0x1
-
-#define SPCI_MSG_SEND_NOTIFY 0x1
-#define SPCI_MSG_SEND_NOTIFY_MASK 0x1
-
-#define SPCI_MEM_RECLAIM_CLEAR 0x1
-
-#define SPCI_SLEEP_INDEFINITE 0
-
-/**
- * For use where the SPCI specification refers explicitly to '4K pages'. Not to
- * be confused with PAGE_SIZE, which is the translation granule Hafnium is
- * configured to use.
- */
-#define SPCI_PAGE_SIZE 4096
-
-/* The maximum length possible for a single message. */
-#define SPCI_MSG_PAYLOAD_MAX HF_MAILBOX_SIZE
-
-enum spci_data_access {
-	SPCI_DATA_ACCESS_NOT_SPECIFIED,
-	SPCI_DATA_ACCESS_RO,
-	SPCI_DATA_ACCESS_RW,
-	SPCI_DATA_ACCESS_RESERVED,
-};
-
-enum spci_instruction_access {
-	SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-	SPCI_INSTRUCTION_ACCESS_NX,
-	SPCI_INSTRUCTION_ACCESS_X,
-	SPCI_INSTRUCTION_ACCESS_RESERVED,
-};
-
-enum spci_memory_type {
-	SPCI_MEMORY_NOT_SPECIFIED_MEM,
-	SPCI_MEMORY_DEVICE_MEM,
-	SPCI_MEMORY_NORMAL_MEM,
-};
-
-enum spci_memory_cacheability {
-	SPCI_MEMORY_CACHE_RESERVED = 0x0,
-	SPCI_MEMORY_CACHE_NON_CACHEABLE = 0x1,
-	SPCI_MEMORY_CACHE_RESERVED_1 = 0x2,
-	SPCI_MEMORY_CACHE_WRITE_BACK = 0x3,
-	SPCI_MEMORY_DEV_NGNRNE = 0x0,
-	SPCI_MEMORY_DEV_NGNRE = 0x1,
-	SPCI_MEMORY_DEV_NGRE = 0x2,
-	SPCI_MEMORY_DEV_GRE = 0x3,
-};
-
-enum spci_memory_shareability {
-	SPCI_MEMORY_SHARE_NON_SHAREABLE,
-	SPCI_MEMORY_SHARE_RESERVED,
-	SPCI_MEMORY_OUTER_SHAREABLE,
-	SPCI_MEMORY_INNER_SHAREABLE,
-};
-
-typedef uint8_t spci_memory_access_permissions_t;
-
-/**
- * This corresponds to table 44 of the FF-A 1.0 EAC specification, "Memory
- * region attributes descriptor".
- */
-typedef uint8_t spci_memory_attributes_t;
-
-#define SPCI_DATA_ACCESS_OFFSET (0x0U)
-#define SPCI_DATA_ACCESS_MASK ((0x3U) << SPCI_DATA_ACCESS_OFFSET)
-
-#define SPCI_INSTRUCTION_ACCESS_OFFSET (0x2U)
-#define SPCI_INSTRUCTION_ACCESS_MASK ((0x3U) << SPCI_INSTRUCTION_ACCESS_OFFSET)
-
-#define SPCI_MEMORY_TYPE_OFFSET (0x4U)
-#define SPCI_MEMORY_TYPE_MASK ((0x3U) << SPCI_MEMORY_TYPE_OFFSET)
-
-#define SPCI_MEMORY_CACHEABILITY_OFFSET (0x2U)
-#define SPCI_MEMORY_CACHEABILITY_MASK \
-	((0x3U) << SPCI_MEMORY_CACHEABILITY_OFFSET)
-
-#define SPCI_MEMORY_SHAREABILITY_OFFSET (0x0U)
-#define SPCI_MEMORY_SHAREABILITY_MASK \
-	((0x3U) << SPCI_MEMORY_SHAREABILITY_OFFSET)
-
-#define ATTR_FUNCTION_SET(name, container_type, offset, mask)                  \
-	static inline void spci_set_##name##_attr(container_type *attr,        \
-						  const enum spci_##name perm) \
-	{                                                                      \
-		*attr = (*attr & ~(mask)) | ((perm << offset) & mask);         \
-	}
-
-#define ATTR_FUNCTION_GET(name, container_type, offset, mask)       \
-	static inline enum spci_##name spci_get_##name##_attr(      \
-		container_type attr)                                \
-	{                                                           \
-		return (enum spci_##name)((attr & mask) >> offset); \
-	}
-
-ATTR_FUNCTION_SET(data_access, spci_memory_access_permissions_t,
-		  SPCI_DATA_ACCESS_OFFSET, SPCI_DATA_ACCESS_MASK)
-ATTR_FUNCTION_GET(data_access, spci_memory_access_permissions_t,
-		  SPCI_DATA_ACCESS_OFFSET, SPCI_DATA_ACCESS_MASK)
-
-ATTR_FUNCTION_SET(instruction_access, spci_memory_access_permissions_t,
-		  SPCI_INSTRUCTION_ACCESS_OFFSET, SPCI_INSTRUCTION_ACCESS_MASK)
-ATTR_FUNCTION_GET(instruction_access, spci_memory_access_permissions_t,
-		  SPCI_INSTRUCTION_ACCESS_OFFSET, SPCI_INSTRUCTION_ACCESS_MASK)
-
-ATTR_FUNCTION_SET(memory_type, spci_memory_attributes_t,
-		  SPCI_MEMORY_TYPE_OFFSET, SPCI_MEMORY_TYPE_MASK)
-ATTR_FUNCTION_GET(memory_type, spci_memory_attributes_t,
-		  SPCI_MEMORY_TYPE_OFFSET, SPCI_MEMORY_TYPE_MASK)
-
-ATTR_FUNCTION_SET(memory_cacheability, spci_memory_attributes_t,
-		  SPCI_MEMORY_CACHEABILITY_OFFSET,
-		  SPCI_MEMORY_CACHEABILITY_MASK)
-ATTR_FUNCTION_GET(memory_cacheability, spci_memory_attributes_t,
-		  SPCI_MEMORY_CACHEABILITY_OFFSET,
-		  SPCI_MEMORY_CACHEABILITY_MASK)
-
-ATTR_FUNCTION_SET(memory_shareability, spci_memory_attributes_t,
-		  SPCI_MEMORY_SHAREABILITY_OFFSET,
-		  SPCI_MEMORY_SHAREABILITY_MASK)
-ATTR_FUNCTION_GET(memory_shareability, spci_memory_attributes_t,
-		  SPCI_MEMORY_SHAREABILITY_OFFSET,
-		  SPCI_MEMORY_SHAREABILITY_MASK)
-
-#define SPCI_MEMORY_HANDLE_ALLOCATOR_MASK \
-	((spci_memory_handle_t)(UINT64_C(1) << 63))
-#define SPCI_MEMORY_HANDLE_ALLOCATOR_HYPERVISOR \
-	((spci_memory_handle_t)(UINT64_C(1) << 63))
-
-/** The ID of a VM. These are assigned sequentially starting with an offset. */
-typedef uint16_t spci_vm_id_t;
-
-/**
- * A globally-unique ID assigned by the hypervisor for a region of memory being
- * sent between VMs.
- */
-typedef uint64_t spci_memory_handle_t;
-
-/**
- * A count of VMs. This has the same range as the VM IDs but we give it a
- * different name to make the different semantics clear.
- */
-typedef spci_vm_id_t spci_vm_count_t;
-
-/** The index of a vCPU within a particular VM. */
-typedef uint16_t spci_vcpu_index_t;
-
-/**
- * A count of vCPUs. This has the same range as the vCPU indices but we give it
- * a different name to make the different semantics clear.
- */
-typedef spci_vcpu_index_t spci_vcpu_count_t;
-
-/** Parameter and return type of SPCI functions. */
-struct spci_value {
-	uint64_t func;
-	uint64_t arg1;
-	uint64_t arg2;
-	uint64_t arg3;
-	uint64_t arg4;
-	uint64_t arg5;
-	uint64_t arg6;
-	uint64_t arg7;
-};
-
-static inline spci_vm_id_t spci_msg_send_sender(struct spci_value args)
-{
-	return (args.arg1 >> 16) & 0xffff;
-}
-
-static inline spci_vm_id_t spci_msg_send_receiver(struct spci_value args)
-{
-	return args.arg1 & 0xffff;
-}
-
-static inline uint32_t spci_msg_send_size(struct spci_value args)
-{
-	return args.arg3;
-}
-
-static inline uint32_t spci_msg_send_attributes(struct spci_value args)
-{
-	return args.arg4;
-}
-
-static inline spci_memory_handle_t spci_mem_success_handle(
-	struct spci_value args)
-{
-	return args.arg2;
-}
-
-static inline spci_vm_id_t spci_vm_id(struct spci_value args)
-{
-	return (args.arg1 >> 16) & 0xffff;
-}
-
-static inline spci_vcpu_index_t spci_vcpu_index(struct spci_value args)
-{
-	return args.arg1 & 0xffff;
-}
-
-static inline uint64_t spci_vm_vcpu(spci_vm_id_t vm_id,
-				    spci_vcpu_index_t vcpu_index)
-{
-	return ((uint32_t)vm_id << 16) | vcpu_index;
-}
-
-/**
- * A set of contiguous pages which is part of a memory region. This corresponds
- * to table 40 of the FF-A 1.0 EAC specification, "Constituent memory region
- * descriptor".
- */
-struct spci_memory_region_constituent {
-	/**
-	 * The base IPA of the constituent memory region, aligned to 4 kiB page
-	 * size granularity.
-	 */
-	uint64_t address;
-	/** The number of 4 kiB pages in the constituent memory region. */
-	uint32_t page_count;
-	/** Reserved field, must be 0. */
-	uint32_t reserved;
-};
-
-/**
- * A set of pages comprising a memory region. This corresponds to table 39 of
- * the FF-A 1.0 EAC specification, "Composite memory region descriptor".
- */
-struct spci_composite_memory_region {
-	/**
-	 * The total number of 4 kiB pages included in this memory region. This
-	 * must be equal to the sum of page counts specified in each
-	 * `spci_memory_region_constituent`.
-	 */
-	uint32_t page_count;
-	/**
-	 * The number of constituents (`spci_memory_region_constituent`)
-	 * included in this memory region range.
-	 */
-	uint32_t constituent_count;
-	/** Reserved field, must be 0. */
-	uint64_t reserved_0;
-	/** An array of `constituent_count` memory region constituents. */
-	struct spci_memory_region_constituent constituents[];
-};
-
-/** Flags to indicate properties of receivers during memory region retrieval. */
-typedef uint8_t spci_memory_receiver_flags_t;
-
-/**
- * This corresponds to table 41 of the FF-A 1.0 EAC specification, "Memory
- * access permissions descriptor".
- */
-struct spci_memory_region_attributes {
-	/** The ID of the VM to which the memory is being given or shared. */
-	spci_vm_id_t receiver;
-	/**
-	 * The permissions with which the memory region should be mapped in the
-	 * receiver's page table.
-	 */
-	spci_memory_access_permissions_t permissions;
-	/**
-	 * Flags used during SPCI_MEM_RETRIEVE_REQ and SPCI_MEM_RETRIEVE_RESP
-	 * for memory regions with multiple borrowers.
-	 */
-	spci_memory_receiver_flags_t flags;
-};
-
-/** Flags to control the behaviour of a memory sharing transaction. */
-typedef uint32_t spci_memory_region_flags_t;
-
-/**
- * Clear memory region contents after unmapping it from the sender and before
- * mapping it for any receiver.
- */
-#define SPCI_MEMORY_REGION_FLAG_CLEAR 0x1
-
-/**
- * Whether the hypervisor may time slice the memory sharing or retrieval
- * operation.
- */
-#define SPCI_MEMORY_REGION_FLAG_TIME_SLICE 0x2
-
-/**
- * Whether the hypervisor should clear the memory region after the receiver
- * relinquishes it or is aborted.
- */
-#define SPCI_MEMORY_REGION_FLAG_CLEAR_RELINQUISH 0x4
-
-#define SPCI_MEMORY_REGION_TRANSACTION_TYPE_MASK ((0x3U) << 3)
-#define SPCI_MEMORY_REGION_TRANSACTION_TYPE_UNSPECIFIED ((0x0U) << 3)
-#define SPCI_MEMORY_REGION_TRANSACTION_TYPE_SHARE ((0x1U) << 3)
-#define SPCI_MEMORY_REGION_TRANSACTION_TYPE_LEND ((0x2U) << 3)
-#define SPCI_MEMORY_REGION_TRANSACTION_TYPE_DONATE ((0x3U) << 3)
-
-/**
- * This corresponds to table 42 of the FF-A 1.0 EAC specification, "Endpoint
- * memory access descriptor".
- */
-struct spci_memory_access {
-	struct spci_memory_region_attributes receiver_permissions;
-	/**
-	 * Offset in bytes from the start of the outer `spci_memory_region` to
-	 * an `spci_composite_memory_region` struct.
-	 */
-	uint32_t composite_memory_region_offset;
-	uint64_t reserved_0;
-};
-
-/**
- * Information about a set of pages which are being shared. This corresponds to
- * table 45 of the FF-A 1.0 EAC specification, "Lend, donate or share memory
- * transaction descriptor". Note that it is also used for retrieve requests and
- * responses.
- */
-struct spci_memory_region {
-	/**
-	 * The ID of the VM which originally sent the memory region, i.e. the
-	 * owner.
-	 */
-	spci_vm_id_t sender;
-	spci_memory_attributes_t attributes;
-	/** Reserved field, must be 0. */
-	uint8_t reserved_0;
-	/** Flags to control behaviour of the transaction. */
-	spci_memory_region_flags_t flags;
-	spci_memory_handle_t handle;
-	/**
-	 * An implementation defined value associated with the receiver and the
-	 * memory region.
-	 */
-	uint64_t tag;
-	/** Reserved field, must be 0. */
-	uint32_t reserved_1;
-	/**
-	 * The number of `spci_memory_access` entries included in this
-	 * transaction.
-	 */
-	uint32_t receiver_count;
-	/**
-	 * An array of `receiver_count` endpoint memory access descriptors.
-	 * Each one specifies a memory region offset, an endpoint and the
-	 * attributes with which this memory region should be mapped in that
-	 * endpoint's page table.
-	 */
-	struct spci_memory_access receivers[];
-};
-
-/**
- * Descriptor used for SPCI_MEM_RELINQUISH requests. This corresponds to table
- * 150 of the FF-A 1.0 EAC specification, "Descriptor to relinquish a memory
- * region".
- */
-struct spci_mem_relinquish {
-	spci_memory_handle_t handle;
-	spci_memory_region_flags_t flags;
-	uint32_t endpoint_count;
-	spci_vm_id_t endpoints[];
-};
-
-/**
- * Gets the `spci_composite_memory_region` for the given receiver from an
- * `spci_memory_region`, or NULL if it is not valid.
- */
-static inline struct spci_composite_memory_region *
-spci_memory_region_get_composite(struct spci_memory_region *memory_region,
-				 uint32_t receiver_index)
-{
-	uint32_t offset = memory_region->receivers[receiver_index]
-				  .composite_memory_region_offset;
-
-	if (offset == 0) {
-		return NULL;
-	}
-
-	return (struct spci_composite_memory_region *)((uint8_t *)
-							       memory_region +
-						       offset);
-}
-
-static inline uint32_t spci_mem_relinquish_init(
-	struct spci_mem_relinquish *relinquish_request,
-	spci_memory_handle_t handle, spci_memory_region_flags_t flags,
-	spci_vm_id_t sender)
-{
-	relinquish_request->handle = handle;
-	relinquish_request->flags = flags;
-	relinquish_request->endpoint_count = 1;
-	relinquish_request->endpoints[0] = sender;
-	return sizeof(struct spci_mem_relinquish) + sizeof(spci_vm_id_t);
-}
-
-uint32_t spci_memory_region_init(
-	struct spci_memory_region *memory_region, spci_vm_id_t sender,
-	spci_vm_id_t receiver,
-	const struct spci_memory_region_constituent constituents[],
-	uint32_t constituent_count, uint32_t tag,
-	spci_memory_region_flags_t flags, enum spci_data_access data_access,
-	enum spci_instruction_access instruction_access,
-	enum spci_memory_type type, enum spci_memory_cacheability cacheability,
-	enum spci_memory_shareability shareability);
-uint32_t spci_memory_retrieve_request_init(
-	struct spci_memory_region *memory_region, spci_memory_handle_t handle,
-	spci_vm_id_t sender, spci_vm_id_t receiver, uint32_t tag,
-	spci_memory_region_flags_t flags, enum spci_data_access data_access,
-	enum spci_instruction_access instruction_access,
-	enum spci_memory_type type, enum spci_memory_cacheability cacheability,
-	enum spci_memory_shareability shareability);
-uint32_t spci_memory_lender_retrieve_request_init(
-	struct spci_memory_region *memory_region, spci_memory_handle_t handle,
-	spci_vm_id_t sender);
-uint32_t spci_retrieved_memory_region_init(
-	struct spci_memory_region *response, size_t response_max_size,
-	spci_vm_id_t sender, spci_memory_attributes_t attributes,
-	spci_memory_region_flags_t flags, spci_memory_handle_t handle,
-	spci_vm_id_t receiver, spci_memory_access_permissions_t permissions,
-	const struct spci_memory_region_constituent constituents[],
-	uint32_t constituent_count);
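
The ATTR_FUNCTION_SET/GET generators in the header above survive the rename with `ffa_` prefixes. As a sketch of what one of them expands to, and how the packed permissions byte is assembled (enum values taken from the definitions above):

```
/* What ATTR_FUNCTION_SET(data_access, ...) expands to after the rename: */
static inline void ffa_set_data_access_attr(
	ffa_memory_access_permissions_t *attr, const enum ffa_data_access perm)
{
	*attr = (*attr & ~FFA_DATA_ACCESS_MASK) |
		((perm << FFA_DATA_ACCESS_OFFSET) & FFA_DATA_ACCESS_MASK);
}

/*
 * Data access occupies bits [1:0] and instruction access bits [3:2] of
 * the permissions byte, so RW (2) combined with X (2) packs to 0xa.
 */
void permissions_example(void)
{
	ffa_memory_access_permissions_t permissions = 0;

	ffa_set_data_access_attr(&permissions, FFA_DATA_ACCESS_RW);
	ffa_set_instruction_access_attr(&permissions,
					FFA_INSTRUCTION_ACCESS_X);
	/* permissions == (2 << 0) | (2 << 2) == 0xa */
}
```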
diff --git a/src/BUILD.gn b/src/BUILD.gn
index 66c9211..8ec91a0 100644
--- a/src/BUILD.gn
+++ b/src/BUILD.gn
@@ -55,8 +55,8 @@
   sources = [
     "api.c",
     "cpu.c",
+    "ffa_memory.c",
     "manifest.c",
-    "spci_memory.c",
     "vcpu.c",
   ]
 
diff --git a/src/api.c b/src/api.c
index be47717..11d0fc2 100644
--- a/src/api.c
+++ b/src/api.c
@@ -22,17 +22,17 @@
 
 #include "hf/check.h"
 #include "hf/dlog.h"
+#include "hf/ffa_internal.h"
+#include "hf/ffa_memory.h"
 #include "hf/mm.h"
 #include "hf/plat/console.h"
-#include "hf/spci_internal.h"
-#include "hf/spci_memory.h"
 #include "hf/spinlock.h"
 #include "hf/static_assert.h"
 #include "hf/std.h"
 #include "hf/vm.h"
 
 #include "vmapi/hf/call.h"
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 /*
  * To eliminate the risk of deadlocks, we define a partial order for the
@@ -69,10 +69,10 @@
  * Switches the physical CPU back to the corresponding vCPU of the primary VM.
  *
  * This triggers the scheduling logic to run. Run in the context of secondary VM
- * to cause SPCI_RUN to return and the primary VM to regain control of the CPU.
+ * to cause FFA_RUN to return and the primary VM to regain control of the CPU.
  */
 static struct vcpu *api_switch_to_primary(struct vcpu *current,
-					  struct spci_value primary_ret,
+					  struct ffa_value primary_ret,
 					  enum vcpu_state secondary_state)
 {
 	struct vm *primary = vm_find(HF_PRIMARY_VM_ID);
@@ -83,8 +83,8 @@
 	 * timer fires rather than indefinitely.
 	 */
 	switch (primary_ret.func) {
-	case HF_SPCI_RUN_WAIT_FOR_INTERRUPT:
-	case SPCI_MSG_WAIT_32: {
+	case HF_FFA_RUN_WAIT_FOR_INTERRUPT:
+	case FFA_MSG_WAIT_32: {
 		if (arch_timer_enabled_current()) {
 			uint64_t remaining_ns =
 				arch_timer_remaining_ns_current();
@@ -94,7 +94,7 @@
 				 * Timer is pending, so the current vCPU should
 				 * be run again right away.
 				 */
-				primary_ret.func = SPCI_INTERRUPT_32;
+				primary_ret.func = FFA_INTERRUPT_32;
 				/*
 				 * primary_ret.arg1 should already be set to the
 				 * current VM ID and vCPU ID.
@@ -104,7 +104,7 @@
 				primary_ret.arg2 = remaining_ns;
 			}
 		} else {
-			primary_ret.arg2 = SPCI_SLEEP_INDEFINITE;
+			primary_ret.arg2 = FFA_SLEEP_INDEFINITE;
 		}
 		break;
 	}
@@ -130,9 +130,9 @@
  */
 struct vcpu *api_preempt(struct vcpu *current)
 {
-	struct spci_value ret = {
-		.func = SPCI_INTERRUPT_32,
-		.arg1 = spci_vm_vcpu(current->vm->id, vcpu_index(current)),
+	struct ffa_value ret = {
+		.func = FFA_INTERRUPT_32,
+		.arg1 = ffa_vm_vcpu(current->vm->id, vcpu_index(current)),
 	};
 
 	return api_switch_to_primary(current, ret, VCPU_STATE_READY);
@@ -144,9 +144,9 @@
  */
 struct vcpu *api_wait_for_interrupt(struct vcpu *current)
 {
-	struct spci_value ret = {
-		.func = HF_SPCI_RUN_WAIT_FOR_INTERRUPT,
-		.arg1 = spci_vm_vcpu(current->vm->id, vcpu_index(current)),
+	struct ffa_value ret = {
+		.func = HF_FFA_RUN_WAIT_FOR_INTERRUPT,
+		.arg1 = ffa_vm_vcpu(current->vm->id, vcpu_index(current)),
 	};
 
 	return api_switch_to_primary(current, ret,
@@ -158,9 +158,9 @@
  */
 struct vcpu *api_vcpu_off(struct vcpu *current)
 {
-	struct spci_value ret = {
-		.func = HF_SPCI_RUN_WAIT_FOR_INTERRUPT,
-		.arg1 = spci_vm_vcpu(current->vm->id, vcpu_index(current)),
+	struct ffa_value ret = {
+		.func = HF_FFA_RUN_WAIT_FOR_INTERRUPT,
+		.arg1 = ffa_vm_vcpu(current->vm->id, vcpu_index(current)),
 	};
 
 	/*
@@ -179,9 +179,9 @@
  */
 void api_yield(struct vcpu *current, struct vcpu **next)
 {
-	struct spci_value primary_ret = {
-		.func = SPCI_YIELD_32,
-		.arg1 = spci_vm_vcpu(current->vm->id, vcpu_index(current)),
+	struct ffa_value primary_ret = {
+		.func = FFA_YIELD_32,
+		.arg1 = ffa_vm_vcpu(current->vm->id, vcpu_index(current)),
 	};
 
 	if (current->vm->id == HF_PRIMARY_VM_ID) {
@@ -198,10 +198,10 @@
  */
 struct vcpu *api_wake_up(struct vcpu *current, struct vcpu *target_vcpu)
 {
-	struct spci_value ret = {
-		.func = HF_SPCI_RUN_WAKE_UP,
-		.arg1 = spci_vm_vcpu(target_vcpu->vm->id,
-				     vcpu_index(target_vcpu)),
+	struct ffa_value ret = {
+		.func = HF_FFA_RUN_WAKE_UP,
+		.arg1 = ffa_vm_vcpu(target_vcpu->vm->id,
+				    vcpu_index(target_vcpu)),
 	};
 	return api_switch_to_primary(current, ret, VCPU_STATE_READY);
 }
@@ -211,7 +211,7 @@
  */
 struct vcpu *api_abort(struct vcpu *current)
 {
-	struct spci_value ret = spci_error(SPCI_ABORTED);
+	struct ffa_value ret = ffa_error(FFA_ABORTED);
 
 	dlog_notice("Aborting VM %u vCPU %u\n", current->vm->id,
 		    vcpu_index(current));
@@ -234,16 +234,16 @@
 /**
  * Returns the ID of the VM.
  */
-struct spci_value api_spci_id_get(const struct vcpu *current)
+struct ffa_value api_ffa_id_get(const struct vcpu *current)
 {
-	return (struct spci_value){.func = SPCI_SUCCESS_32,
-				   .arg2 = current->vm->id};
+	return (struct ffa_value){.func = FFA_SUCCESS_32,
+				  .arg2 = current->vm->id};
 }
 
 /**
  * Returns the number of VMs configured to run.
  */
-spci_vm_count_t api_vm_get_count(void)
+ffa_vm_count_t api_vm_get_count(void)
 {
 	return vm_get_count();
 }
@@ -252,8 +252,8 @@
  * Returns the number of vCPUs configured in the given VM, or 0 if there is no
  * such VM or the caller is not the primary VM.
  */
-spci_vcpu_count_t api_vcpu_get_count(spci_vm_id_t vm_id,
-				     const struct vcpu *current)
+ffa_vcpu_count_t api_vcpu_get_count(ffa_vm_id_t vm_id,
+				    const struct vcpu *current)
 {
 	struct vm *vm;
 
@@ -370,29 +370,29 @@
 }
 
 /**
- * Constructs an SPCI_MSG_SEND value to return from a successful SPCI_MSG_POLL
- * or SPCI_MSG_WAIT call.
+ * Constructs an FFA_MSG_SEND value to return from a successful FFA_MSG_POLL
+ * or FFA_MSG_WAIT call.
  */
-static struct spci_value spci_msg_recv_return(const struct vm *receiver)
+static struct ffa_value ffa_msg_recv_return(const struct vm *receiver)
 {
 	switch (receiver->mailbox.recv_func) {
-	case SPCI_MSG_SEND_32:
-		return (struct spci_value){
-			.func = SPCI_MSG_SEND_32,
+	case FFA_MSG_SEND_32:
+		return (struct ffa_value){
+			.func = FFA_MSG_SEND_32,
 			.arg1 = (receiver->mailbox.recv_sender << 16) |
 				receiver->id,
 			.arg3 = receiver->mailbox.recv_size};
-	case SPCI_MEM_DONATE_32:
-	case SPCI_MEM_LEND_32:
-	case SPCI_MEM_SHARE_32:
-		return (struct spci_value){.func = receiver->mailbox.recv_func,
-					   .arg1 = receiver->mailbox.recv_size,
-					   .arg2 = receiver->mailbox.recv_size};
+	case FFA_MEM_DONATE_32:
+	case FFA_MEM_LEND_32:
+	case FFA_MEM_SHARE_32:
+		return (struct ffa_value){.func = receiver->mailbox.recv_func,
+					  .arg1 = receiver->mailbox.recv_size,
+					  .arg2 = receiver->mailbox.recv_size};
 	default:
 		/* This should never be reached, but return an error in case. */
 		dlog_error("Tried to return an invalid message function %#x\n",
 			   receiver->mailbox.recv_func);
-		return spci_error(SPCI_DENIED);
+		return ffa_error(FFA_DENIED);
 	}
 }
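
As the FFA_MSG_SEND case above shows, w1 packs the sender ID in its top half and the receiver ID in its bottom half, with the payload size in w3. A round-trip sketch through the renamed accessors (the IDs and size are illustrative):

```
void msg_send_decode_example(void)
{
	struct ffa_value args = {
		.func = FFA_MSG_SEND_32,
		.arg1 = ((uint32_t)1 << 16) | 2, /* sender VM 1 -> VM 2 */
		.arg3 = 64,			 /* payload size in bytes */
	};

	ffa_vm_id_t sender = ffa_msg_send_sender(args);	    /* == 1 */
	ffa_vm_id_t receiver = ffa_msg_send_receiver(args); /* == 2 */
	uint32_t size = ffa_msg_send_size(args);	    /* == 64 */

	(void)sender;
	(void)receiver;
	(void)size;
}
```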
 
@@ -401,7 +401,7 @@
  * value needs to be forced onto the vCPU.
  */
 static bool api_vcpu_prepare_run(const struct vcpu *current, struct vcpu *vcpu,
-				 struct spci_value *run_ret)
+				 struct ffa_value *run_ret)
 {
 	bool need_vm_lock;
 	bool ret;
@@ -443,7 +443,7 @@
 		 * other physical CPU that is currently running this vCPU will
 		 * return the sleep duration if needed.
 		 */
-		*run_ret = spci_error(SPCI_BUSY);
+		*run_ret = ffa_error(FFA_BUSY);
 		ret = false;
 		goto out;
 	}
@@ -472,7 +472,7 @@
 		 */
 		if (vcpu->vm->mailbox.state == MAILBOX_STATE_RECEIVED) {
 			arch_regs_set_retval(&vcpu->regs,
-					     spci_msg_recv_return(vcpu->vm));
+					     ffa_msg_recv_return(vcpu->vm));
 			vcpu->vm->mailbox.state = MAILBOX_STATE_READ;
 			break;
 		}
@@ -501,10 +501,10 @@
 			 */
 			run_ret->func =
 				vcpu->state == VCPU_STATE_BLOCKED_MAILBOX
-					? SPCI_MSG_WAIT_32
-					: HF_SPCI_RUN_WAIT_FOR_INTERRUPT;
+					? FFA_MSG_WAIT_32
+					: HF_FFA_RUN_WAIT_FOR_INTERRUPT;
 			run_ret->arg1 =
-				spci_vm_vcpu(vcpu->vm->id, vcpu_index(vcpu));
+				ffa_vm_vcpu(vcpu->vm->id, vcpu_index(vcpu));
 			run_ret->arg2 = timer_remaining_ns;
 		}
 
@@ -537,16 +537,16 @@
 	return ret;
 }
 
-struct spci_value api_spci_run(spci_vm_id_t vm_id, spci_vcpu_index_t vcpu_idx,
-			       const struct vcpu *current, struct vcpu **next)
+struct ffa_value api_ffa_run(ffa_vm_id_t vm_id, ffa_vcpu_index_t vcpu_idx,
+			     const struct vcpu *current, struct vcpu **next)
 {
 	struct vm *vm;
 	struct vcpu *vcpu;
-	struct spci_value ret = spci_error(SPCI_INVALID_PARAMETERS);
+	struct ffa_value ret = ffa_error(FFA_INVALID_PARAMETERS);
 
 	/* Only the primary VM can switch vCPUs. */
 	if (current->vm->id != HF_PRIMARY_VM_ID) {
-		ret.arg2 = SPCI_DENIED;
+		ret.arg2 = FFA_DENIED;
 		goto out;
 	}
 
@@ -600,8 +600,8 @@
 	 * Set a placeholder return code to the scheduler. This will be
 	 * overwritten when the switch back to the primary occurs.
 	 */
-	ret.func = SPCI_INTERRUPT_32;
-	ret.arg1 = spci_vm_vcpu(vm_id, vcpu_idx);
+	ret.func = FFA_INTERRUPT_32;
+	ret.arg1 = ffa_vm_vcpu(vm_id, vcpu_idx);
 	ret.arg2 = 0;
 
 out:
@@ -618,24 +618,24 @@
 }
 
 /**
- * Determines the value to be returned by api_vm_configure and spci_rx_release
+ * Determines the value to be returned by api_vm_configure and ffa_rx_release
  * after they've succeeded. If a secondary VM is running and there are waiters,
  * it also switches back to the primary VM for it to wake waiters up.
  */
-static struct spci_value api_waiter_result(struct vm_locked locked_vm,
-					   struct vcpu *current,
-					   struct vcpu **next)
+static struct ffa_value api_waiter_result(struct vm_locked locked_vm,
+					  struct vcpu *current,
+					  struct vcpu **next)
 {
 	struct vm *vm = locked_vm.vm;
 
 	if (list_empty(&vm->mailbox.waiter_list)) {
 		/* No waiters, nothing else to do. */
-		return (struct spci_value){.func = SPCI_SUCCESS_32};
+		return (struct ffa_value){.func = FFA_SUCCESS_32};
 	}
 
 	if (vm->id == HF_PRIMARY_VM_ID) {
 		/* The caller is the primary VM. Tell it to wake up waiters. */
-		return (struct spci_value){.func = SPCI_RX_RELEASE_32};
+		return (struct ffa_value){.func = FFA_RX_RELEASE_32};
 	}
 
 	/*
@@ -643,10 +643,10 @@
 	 * that need to be notified.
 	 */
 	*next = api_switch_to_primary(
-		current, (struct spci_value){.func = SPCI_RX_RELEASE_32},
+		current, (struct ffa_value){.func = FFA_RX_RELEASE_32},
 		VCPU_STATE_READY);
 
-	return (struct spci_value){.func = SPCI_SUCCESS_32};
+	return (struct ffa_value){.func = FFA_SUCCESS_32};
 }
 
 /**
@@ -783,19 +783,19 @@
  * must not be shared.
  *
  * Returns:
- *  - SPCI_ERROR SPCI_INVALID_PARAMETERS if the given addresses are not properly
+ *  - FFA_ERROR FFA_INVALID_PARAMETERS if the given addresses are not properly
  *    aligned or are the same.
- *  - SPCI_ERROR SPCI_NO_MEMORY if the hypervisor was unable to map the buffers
+ *  - FFA_ERROR FFA_NO_MEMORY if the hypervisor was unable to map the buffers
 *    due to insufficient page table memory.
- *  - SPCI_ERROR SPCI_DENIED if the pages are already mapped or are not owned by
+ *  - FFA_ERROR FFA_DENIED if the pages are already mapped or are not owned by
  *    the caller.
- *  - SPCI_SUCCESS on success if no further action is needed.
- *  - SPCI_RX_RELEASE if it was called by the primary VM and the primary VM now
+ *  - FFA_SUCCESS on success if no further action is needed.
+ *  - FFA_RX_RELEASE if it was called by the primary VM and the primary VM now
  *    needs to wake up or kick waiters.
  */
-struct spci_value api_spci_rxtx_map(ipaddr_t send, ipaddr_t recv,
-				    uint32_t page_count, struct vcpu *current,
-				    struct vcpu **next)
+struct ffa_value api_ffa_rxtx_map(ipaddr_t send, ipaddr_t recv,
+				  uint32_t page_count, struct vcpu *current,
+				  struct vcpu **next)
 {
 	struct vm *vm = current->vm;
 	struct vm_locked vm_locked;
@@ -805,17 +805,17 @@
 	paddr_t pa_recv_end;
 	uint32_t orig_send_mode;
 	uint32_t orig_recv_mode;
-	struct spci_value ret;
+	struct ffa_value ret;
 
 	/* Hafnium only supports a fixed size of RX/TX buffers. */
-	if (page_count != HF_MAILBOX_SIZE / SPCI_PAGE_SIZE) {
-		return spci_error(SPCI_INVALID_PARAMETERS);
+	if (page_count != HF_MAILBOX_SIZE / FFA_PAGE_SIZE) {
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/* Fail if addresses are not page-aligned. */
 	if (!is_aligned(ipa_addr(send), PAGE_SIZE) ||
 	    !is_aligned(ipa_addr(recv), PAGE_SIZE)) {
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/* Convert to physical addresses. */
@@ -827,7 +827,7 @@
 
 	/* Fail if the same page is used for the send and receive pages. */
 	if (pa_addr(pa_send_begin) == pa_addr(pa_recv_begin)) {
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/*
@@ -842,7 +842,7 @@
 
 	/* We only allow these to be set up once. */
 	if (vm->mailbox.send || vm->mailbox.recv) {
-		ret = spci_error(SPCI_DENIED);
+		ret = ffa_error(FFA_DENIED);
 		goto exit;
 	}
 
@@ -855,7 +855,7 @@
 	    !api_mode_valid_owned_and_exclusive(orig_send_mode) ||
 	    (orig_send_mode & MM_MODE_R) == 0 ||
 	    (orig_send_mode & MM_MODE_W) == 0) {
-		ret = spci_error(SPCI_DENIED);
+		ret = ffa_error(FFA_DENIED);
 		goto exit;
 	}
 
@@ -863,14 +863,14 @@
 			    &orig_recv_mode) ||
 	    !api_mode_valid_owned_and_exclusive(orig_recv_mode) ||
 	    (orig_recv_mode & MM_MODE_R) == 0) {
-		ret = spci_error(SPCI_DENIED);
+		ret = ffa_error(FFA_DENIED);
 		goto exit;
 	}
 
 	if (!api_vm_configure_pages(vm_locked, pa_send_begin, pa_send_end,
 				    orig_send_mode, pa_recv_begin, pa_recv_end,
 				    orig_recv_mode)) {
-		ret = spci_error(SPCI_NO_MEMORY);
+		ret = ffa_error(FFA_NO_MEMORY);
 		goto exit;
 	}
 
@@ -916,12 +916,12 @@
  * Notifies the `to` VM about the message currently in its mailbox, possibly
  * with the help of the primary VM.
  */
-static struct spci_value deliver_msg(struct vm_locked to, spci_vm_id_t from_id,
-				     struct vcpu *current, struct vcpu **next)
+static struct ffa_value deliver_msg(struct vm_locked to, ffa_vm_id_t from_id,
+				    struct vcpu *current, struct vcpu **next)
 {
-	struct spci_value ret = (struct spci_value){.func = SPCI_SUCCESS_32};
-	struct spci_value primary_ret = {
-		.func = SPCI_MSG_SEND_32,
+	struct ffa_value ret = (struct ffa_value){.func = FFA_SUCCESS_32};
+	struct ffa_value primary_ret = {
+		.func = FFA_MSG_SEND_32,
 		.arg1 = ((uint32_t)from_id << 16) | to.vm->id,
 	};
 
@@ -932,7 +932,7 @@
 		 * message is for it, to avoid leaking data about messages for
 		 * other VMs.
 		 */
-		primary_ret = spci_msg_recv_return(to.vm);
+		primary_ret = ffa_msg_recv_return(to.vm);
 
 		to.vm->mailbox.state = MAILBOX_STATE_READ;
 		*next = api_switch_to_primary(current, primary_ret,
@@ -944,7 +944,7 @@
 
 	/* Messages for the TEE are sent on via the dispatcher. */
 	if (to.vm->id == HF_TEE_VM_ID) {
-		struct spci_value call = spci_msg_recv_return(to.vm);
+		struct ffa_value call = ffa_msg_recv_return(to.vm);
 
 		ret = arch_tee_call(call);
 		/*
 		 * After the call to the TEE completes it must have finished
 		 * reading its RX buffer, so it is ready for another message.
 		 */
 		to.vm->mailbox.state = MAILBOX_STATE_EMPTY;
 		/*
 		 * Don't return to the primary VM in this case, as the TEE is
-		 * not (yet) scheduled via SPCI.
+		 * not (yet) scheduled via FF-A.
 		 */
 		return ret;
 	}
@@ -975,38 +975,38 @@
  * If the recipient's receive buffer is busy, it can optionally register the
  * caller to be notified when the recipient's receive buffer becomes available.
  */
-struct spci_value api_spci_msg_send(spci_vm_id_t sender_vm_id,
-				    spci_vm_id_t receiver_vm_id, uint32_t size,
-				    uint32_t attributes, struct vcpu *current,
-				    struct vcpu **next)
+struct ffa_value api_ffa_msg_send(ffa_vm_id_t sender_vm_id,
+				  ffa_vm_id_t receiver_vm_id, uint32_t size,
+				  uint32_t attributes, struct vcpu *current,
+				  struct vcpu **next)
 {
 	struct vm *from = current->vm;
 	struct vm *to;
 	struct vm_locked to_locked;
 	const void *from_msg;
-	struct spci_value ret;
-	bool notify = (attributes & SPCI_MSG_SEND_NOTIFY_MASK) ==
-		      SPCI_MSG_SEND_NOTIFY;
+	struct ffa_value ret;
+	bool notify =
+		(attributes & FFA_MSG_SEND_NOTIFY_MASK) == FFA_MSG_SEND_NOTIFY;
 
 	/* Ensure sender VM ID corresponds to the current VM. */
 	if (sender_vm_id != from->id) {
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/* Disallow reflexive requests as this suggests an error in the VM. */
 	if (receiver_vm_id == from->id) {
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/* Limit the size of transfer. */
-	if (size > SPCI_MSG_PAYLOAD_MAX) {
-		return spci_error(SPCI_INVALID_PARAMETERS);
+	if (size > FFA_MSG_PAYLOAD_MAX) {
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/* Ensure the receiver VM exists. */
 	to = vm_find(receiver_vm_id);
 	if (to == NULL) {
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/*
@@ -1020,21 +1020,21 @@
 	sl_unlock(&from->lock);
 
 	if (from_msg == NULL) {
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	to_locked = vm_lock(to);
 
 	if (msg_receiver_busy(to_locked, from, notify)) {
-		ret = spci_error(SPCI_BUSY);
+		ret = ffa_error(FFA_BUSY);
 		goto out;
 	}
 
 	/* Copy data. */
-	memcpy_s(to->mailbox.recv, SPCI_MSG_PAYLOAD_MAX, from_msg, size);
+	memcpy_s(to->mailbox.recv, FFA_MSG_PAYLOAD_MAX, from_msg, size);
 	to->mailbox.recv_size = size;
 	to->mailbox.recv_sender = sender_vm_id;
-	to->mailbox.recv_func = SPCI_MSG_SEND_32;
+	to->mailbox.recv_func = FFA_MSG_SEND_32;
 	ret = deliver_msg(to_locked, sender_vm_id, current, next);
 
 out:
@@ -1047,7 +1047,7 @@
  * Checks whether the vCPU's attempt to block for a message has already been
  * interrupted or whether it is allowed to block.
  */
-bool api_spci_msg_recv_block_interrupted(struct vcpu *current)
+bool api_ffa_msg_recv_block_interrupted(struct vcpu *current)
 {
 	bool interrupted;
 
@@ -1070,18 +1070,18 @@
  *
  * No new messages can be received until the mailbox has been cleared.
  */
-struct spci_value api_spci_msg_recv(bool block, struct vcpu *current,
-				    struct vcpu **next)
+struct ffa_value api_ffa_msg_recv(bool block, struct vcpu *current,
+				  struct vcpu **next)
 {
 	struct vm *vm = current->vm;
-	struct spci_value return_code;
+	struct ffa_value return_code;
 
 	/*
 	 * The primary VM will receive messages as a status code from running
 	 * vCPUs and must not call this function.
 	 */
 	if (vm->id == HF_PRIMARY_VM_ID) {
-		return spci_error(SPCI_NOT_SUPPORTED);
+		return ffa_error(FFA_NOT_SUPPORTED);
 	}
 
 	sl_lock(&vm->lock);
@@ -1089,31 +1089,31 @@
 	/* Return pending messages without blocking. */
 	if (vm->mailbox.state == MAILBOX_STATE_RECEIVED) {
 		vm->mailbox.state = MAILBOX_STATE_READ;
-		return_code = spci_msg_recv_return(vm);
+		return_code = ffa_msg_recv_return(vm);
 		goto out;
 	}
 
 	/* No pending message so fail if not allowed to block. */
 	if (!block) {
-		return_code = spci_error(SPCI_RETRY);
+		return_code = ffa_error(FFA_RETRY);
 		goto out;
 	}
 
 	/*
 	 * From this point onward this call can only be interrupted or a message
 	 * received. If a message is received the return value will be set at
-	 * that time to SPCI_SUCCESS.
+	 * that time to FFA_SUCCESS.
 	 */
-	return_code = spci_error(SPCI_INTERRUPTED);
-	if (api_spci_msg_recv_block_interrupted(current)) {
+	return_code = ffa_error(FFA_INTERRUPTED);
+	if (api_ffa_msg_recv_block_interrupted(current)) {
 		goto out;
 	}
 
 	/* Switch back to primary VM to block. */
 	{
-		struct spci_value run_return = {
-			.func = SPCI_MSG_WAIT_32,
-			.arg1 = spci_vm_vcpu(vm->id, vcpu_index(current)),
+		struct ffa_value run_return = {
+			.func = FFA_MSG_WAIT_32,
+			.arg1 = ffa_vm_vcpu(vm->id, vcpu_index(current)),
 		};
 
 		*next = api_switch_to_primary(current, run_return,
@@ -1165,7 +1165,7 @@
  * Returns -1 on failure or if there are no waiters; the VM ID of the next
  * waiter otherwise.
  */
-int64_t api_mailbox_waiter_get(spci_vm_id_t vm_id, const struct vcpu *current)
+int64_t api_mailbox_waiter_get(ffa_vm_id_t vm_id, const struct vcpu *current)
 {
 	struct vm *vm;
 	struct vm_locked locked;
@@ -1210,23 +1210,23 @@
  * will overwrite the old and will arrive asynchronously.
  *
  * Returns:
- *  - SPCI_ERROR SPCI_DENIED on failure, if the mailbox hasn't been read.
- *  - SPCI_SUCCESS on success if no further action is needed.
- *  - SPCI_RX_RELEASE if it was called by the primary VM and the primary VM now
+ *  - FFA_ERROR FFA_DENIED on failure, if the mailbox hasn't been read.
+ *  - FFA_SUCCESS on success if no further action is needed.
+ *  - FFA_RX_RELEASE if it was called by the primary VM and the primary VM now
  *    needs to wake up or kick waiters. Waiters should be retrieved by calling
  *    hf_mailbox_waiter_get.
  */
-struct spci_value api_spci_rx_release(struct vcpu *current, struct vcpu **next)
+struct ffa_value api_ffa_rx_release(struct vcpu *current, struct vcpu **next)
 {
 	struct vm *vm = current->vm;
 	struct vm_locked locked;
-	struct spci_value ret;
+	struct ffa_value ret;
 
 	locked = vm_lock(vm);
 	switch (vm->mailbox.state) {
 	case MAILBOX_STATE_EMPTY:
 	case MAILBOX_STATE_RECEIVED:
-		ret = spci_error(SPCI_DENIED);
+		ret = ffa_error(FFA_DENIED);
 		break;
 
 	case MAILBOX_STATE_READ:
@@ -1351,8 +1351,8 @@
  *  - 1 if it was called by the primary VM and the primary VM now needs to wake
  *    up or kick the target vCPU.
  */
-int64_t api_interrupt_inject(spci_vm_id_t target_vm_id,
-			     spci_vcpu_index_t target_vcpu_idx, uint32_t intid,
+int64_t api_interrupt_inject(ffa_vm_id_t target_vm_id,
+			     ffa_vcpu_index_t target_vcpu_idx, uint32_t intid,
 			     struct vcpu *current, struct vcpu **next)
 {
 	struct vcpu *target_vcpu;
@@ -1383,25 +1383,25 @@
 	return internal_interrupt_inject(target_vcpu, intid, current, next);
 }
 
-/** Returns the version of the implemented SPCI specification. */
-struct spci_value api_spci_version(uint32_t requested_version)
+/** Returns the version of the implemented FF-A specification. */
+struct ffa_value api_ffa_version(uint32_t requested_version)
 {
 	/*
 	 * Ensure that the major revision representation fits in at most 15
 	 * bits and the minor in at most 16 bits, as the asserts below require.
 	 */
-	static_assert(0x8000 > SPCI_VERSION_MAJOR,
+	static_assert(0x8000 > FFA_VERSION_MAJOR,
 		      "Major revision representation takes more than 15 bits.");
-	static_assert(0x10000 > SPCI_VERSION_MINOR,
+	static_assert(0x10000 > FFA_VERSION_MINOR,
 		      "Minor revision representation takes more than 16 bits.");
-	if (requested_version & SPCI_VERSION_RESERVED_BIT) {
+	if (requested_version & FFA_VERSION_RESERVED_BIT) {
 		/* Invalid encoding, return an error. */
-		return (struct spci_value){.func = SPCI_NOT_SUPPORTED};
+		return (struct ffa_value){.func = FFA_NOT_SUPPORTED};
 	}
 
-	return (struct spci_value){
-		.func = (SPCI_VERSION_MAJOR << SPCI_VERSION_MAJOR_OFFSET) |
-			SPCI_VERSION_MINOR};
+	return (struct ffa_value){
+		.func = (FFA_VERSION_MAJOR << FFA_VERSION_MAJOR_OFFSET) |
+			FFA_VERSION_MINOR};
 }
 
 int64_t api_debug_log(char c, struct vcpu *current)
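
A sketch of the version word this returns; the offset and reserved-bit values are assumptions (FFA_VERSION_MAJOR_OFFSET == 16, FFA_VERSION_RESERVED_BIT == bit 31), since the diff does not show their definitions:

```
/* Assumed layout: bit 31 reserved, major in bits [30:16], minor in [15:0]. */
uint32_t version =
	(FFA_VERSION_MAJOR << FFA_VERSION_MAJOR_OFFSET) | FFA_VERSION_MINOR;
uint16_t major = (version >> FFA_VERSION_MAJOR_OFFSET) & 0x7fff;
uint16_t minor = version & 0xffff;
/* For FF-A 1.0 this would make version == 0x10000. */
```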
@@ -1430,59 +1430,59 @@
 
 /**
  * Discovery function returning information about the implementation of optional
- * SPCI interfaces.
+ * FF-A interfaces.
  */
-struct spci_value api_spci_features(uint32_t function_id)
+struct ffa_value api_ffa_features(uint32_t function_id)
 {
 	switch (function_id) {
-	case SPCI_ERROR_32:
-	case SPCI_SUCCESS_32:
-	case SPCI_INTERRUPT_32:
-	case SPCI_VERSION_32:
-	case SPCI_FEATURES_32:
-	case SPCI_RX_RELEASE_32:
-	case SPCI_RXTX_MAP_64:
-	case SPCI_ID_GET_32:
-	case SPCI_MSG_POLL_32:
-	case SPCI_MSG_WAIT_32:
-	case SPCI_YIELD_32:
-	case SPCI_RUN_32:
-	case SPCI_MSG_SEND_32:
-	case SPCI_MEM_DONATE_32:
-	case SPCI_MEM_LEND_32:
-	case SPCI_MEM_SHARE_32:
-	case SPCI_MEM_RETRIEVE_REQ_32:
-	case SPCI_MEM_RETRIEVE_RESP_32:
-	case SPCI_MEM_RELINQUISH_32:
-	case SPCI_MEM_RECLAIM_32:
-		return (struct spci_value){.func = SPCI_SUCCESS_32};
+	case FFA_ERROR_32:
+	case FFA_SUCCESS_32:
+	case FFA_INTERRUPT_32:
+	case FFA_VERSION_32:
+	case FFA_FEATURES_32:
+	case FFA_RX_RELEASE_32:
+	case FFA_RXTX_MAP_64:
+	case FFA_ID_GET_32:
+	case FFA_MSG_POLL_32:
+	case FFA_MSG_WAIT_32:
+	case FFA_YIELD_32:
+	case FFA_RUN_32:
+	case FFA_MSG_SEND_32:
+	case FFA_MEM_DONATE_32:
+	case FFA_MEM_LEND_32:
+	case FFA_MEM_SHARE_32:
+	case FFA_MEM_RETRIEVE_REQ_32:
+	case FFA_MEM_RETRIEVE_RESP_32:
+	case FFA_MEM_RELINQUISH_32:
+	case FFA_MEM_RECLAIM_32:
+		return (struct ffa_value){.func = FFA_SUCCESS_32};
 	default:
-		return spci_error(SPCI_NOT_SUPPORTED);
+		return ffa_error(FFA_NOT_SUPPORTED);
 	}
 }
 
-struct spci_value api_spci_mem_send(uint32_t share_func, uint32_t length,
-				    uint32_t fragment_length, ipaddr_t address,
-				    uint32_t page_count, struct vcpu *current,
-				    struct vcpu **next)
+struct ffa_value api_ffa_mem_send(uint32_t share_func, uint32_t length,
+				  uint32_t fragment_length, ipaddr_t address,
+				  uint32_t page_count, struct vcpu *current,
+				  struct vcpu **next)
 {
 	struct vm *from = current->vm;
 	struct vm *to;
 	const void *from_msg;
-	struct spci_memory_region *memory_region;
-	struct spci_value ret;
+	struct ffa_memory_region *memory_region;
+	struct ffa_value ret;
 
 	if (ipa_addr(address) != 0 || page_count != 0) {
 		/*
 		 * Hafnium only supports passing the descriptor in the TX
 		 * mailbox.
 		 */
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	if (fragment_length != length) {
 		dlog_verbose("Fragmentation not yet supported.\n");
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/*
@@ -1496,7 +1496,7 @@
 	sl_unlock(&from->lock);
 
 	if (from_msg == NULL) {
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/*
@@ -1505,20 +1505,19 @@
 	 * also lets us keep it around in the share state table if needed.
 	 */
 	if (length > HF_MAILBOX_SIZE || length > MM_PPOOL_ENTRY_SIZE) {
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
-	memory_region =
-		(struct spci_memory_region *)mpool_alloc(&api_page_pool);
+	memory_region = (struct ffa_memory_region *)mpool_alloc(&api_page_pool);
 	if (memory_region == NULL) {
 		dlog_verbose("Failed to allocate memory region copy.\n");
-		return spci_error(SPCI_NO_MEMORY);
+		return ffa_error(FFA_NO_MEMORY);
 	}
 	memcpy_s(memory_region, MM_PPOOL_ENTRY_SIZE, from_msg, length);
 
 	/* The sender must match the caller. */
 	if (memory_region->sender != from->id) {
 		dlog_verbose("Memory region sender doesn't match caller.\n");
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
@@ -1528,7 +1527,7 @@
 			"Multi-way memory sharing not supported (got %d "
 			"endpoint memory access descriptors, expected 1).\n",
 			memory_region->receiver_count);
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
@@ -1538,7 +1537,7 @@
 	to = vm_find(memory_region->receivers[0].receiver_permissions.receiver);
 	if (to == NULL || to == from) {
 		dlog_verbose("Invalid receiver.\n");
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
@@ -1550,15 +1549,15 @@
 		struct two_vm_locked vm_to_from_lock = vm_lock_both(to, from);
 
 		if (msg_receiver_busy(vm_to_from_lock.vm1, from, false)) {
-			ret = spci_error(SPCI_BUSY);
+			ret = ffa_error(FFA_BUSY);
 			goto out_unlock;
 		}
 
-		ret = spci_memory_send(to, vm_to_from_lock.vm2, memory_region,
-				       length, share_func, &api_page_pool);
-		if (ret.func == SPCI_SUCCESS_32) {
+		ret = ffa_memory_send(to, vm_to_from_lock.vm2, memory_region,
+				      length, share_func, &api_page_pool);
+		if (ret.func == FFA_SUCCESS_32) {
 			/* Forward memory send message on to TEE. */
-			memcpy_s(to->mailbox.recv, SPCI_MSG_PAYLOAD_MAX,
+			memcpy_s(to->mailbox.recv, FFA_MSG_PAYLOAD_MAX,
 				 memory_region, length);
 			to->mailbox.recv_size = length;
 			to->mailbox.recv_sender = from->id;
@@ -1573,10 +1572,10 @@
 	} else {
 		struct vm_locked from_locked = vm_lock(from);
 
-		ret = spci_memory_send(to, from_locked, memory_region, length,
-				       share_func, &api_page_pool);
+		ret = ffa_memory_send(to, from_locked, memory_region, length,
+				      share_func, &api_page_pool);
 		/*
-		 * spci_memory_send takes ownership of the memory_region, so
+		 * ffa_memory_send takes ownership of the memory_region, so
 		 * make sure we don't free it.
 		 */
 		memory_region = NULL;
@@ -1592,38 +1591,37 @@
 	return ret;
 }
 
-struct spci_value api_spci_mem_retrieve_req(uint32_t length,
-					    uint32_t fragment_length,
-					    ipaddr_t address,
-					    uint32_t page_count,
-					    struct vcpu *current)
+struct ffa_value api_ffa_mem_retrieve_req(uint32_t length,
+					  uint32_t fragment_length,
+					  ipaddr_t address, uint32_t page_count,
+					  struct vcpu *current)
 {
 	struct vm *to = current->vm;
 	struct vm_locked to_locked;
 	const void *to_msg;
-	struct spci_memory_region *retrieve_request;
+	struct ffa_memory_region *retrieve_request;
 	uint32_t message_buffer_size;
-	struct spci_value ret;
+	struct ffa_value ret;
 
 	if (ipa_addr(address) != 0 || page_count != 0) {
 		/*
 		 * Hafnium only supports passing the descriptor in the TX
 		 * mailbox.
 		 */
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	if (fragment_length != length) {
 		dlog_verbose("Fragmentation not yet supported.\n");
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	retrieve_request =
-		(struct spci_memory_region *)cpu_get_buffer(current->cpu);
+		(struct ffa_memory_region *)cpu_get_buffer(current->cpu);
 	message_buffer_size = cpu_get_buffer_size(current->cpu);
 	if (length > HF_MAILBOX_SIZE || length > message_buffer_size) {
 		dlog_verbose("Retrieve request too long.\n");
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	to_locked = vm_lock(to);
@@ -1631,7 +1629,7 @@
 
 	if (to_msg == NULL) {
 		dlog_verbose("TX buffer not setup.\n");
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
@@ -1647,26 +1645,26 @@
 		 * available.
 		 */
 		dlog_verbose("RX buffer not ready.\n");
-		ret = spci_error(SPCI_BUSY);
+		ret = ffa_error(FFA_BUSY);
 		goto out;
 	}
 
-	ret = spci_memory_retrieve(to_locked, retrieve_request, length,
-				   &api_page_pool);
+	ret = ffa_memory_retrieve(to_locked, retrieve_request, length,
+				  &api_page_pool);
 
 out:
 	vm_unlock(&to_locked);
 	return ret;
 }
 
-struct spci_value api_spci_mem_relinquish(struct vcpu *current)
+struct ffa_value api_ffa_mem_relinquish(struct vcpu *current)
 {
 	struct vm *from = current->vm;
 	struct vm_locked from_locked;
 	const void *from_msg;
-	struct spci_mem_relinquish *relinquish_request;
+	struct ffa_mem_relinquish *relinquish_request;
 	uint32_t message_buffer_size;
-	struct spci_value ret;
+	struct ffa_value ret;
 	uint32_t length;
 
 	from_locked = vm_lock(from);
@@ -1674,7 +1672,7 @@
 
 	if (from_msg == NULL) {
 		dlog_verbose("TX buffer not setup.\n");
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
@@ -1682,72 +1680,72 @@
 	 * Calculate length from relinquish descriptor before copying. We will
 	 * check again later to make sure it hasn't changed.
 	 */
-	length = sizeof(struct spci_mem_relinquish) +
-		 ((struct spci_mem_relinquish *)from_msg)->endpoint_count *
-			 sizeof(spci_vm_id_t);
+	length = sizeof(struct ffa_mem_relinquish) +
+		 ((struct ffa_mem_relinquish *)from_msg)->endpoint_count *
+			 sizeof(ffa_vm_id_t);
 	/*
 	 * Copy the relinquish descriptor to an internal buffer, so that the
 	 * caller can't change it underneath us.
 	 */
 	relinquish_request =
-		(struct spci_mem_relinquish *)cpu_get_buffer(current->cpu);
+		(struct ffa_mem_relinquish *)cpu_get_buffer(current->cpu);
 	message_buffer_size = cpu_get_buffer_size(current->cpu);
 	if (length > HF_MAILBOX_SIZE || length > message_buffer_size) {
 		dlog_verbose("Relinquish message too long.\n");
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 	memcpy_s(relinquish_request, message_buffer_size, from_msg, length);
 
-	if (sizeof(struct spci_mem_relinquish) +
-		    relinquish_request->endpoint_count * sizeof(spci_vm_id_t) !=
+	if (sizeof(struct ffa_mem_relinquish) +
+		    relinquish_request->endpoint_count * sizeof(ffa_vm_id_t) !=
 	    length) {
 		dlog_verbose(
 			"Endpoint count changed while copying to internal "
 			"buffer.\n");
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
-	ret = spci_memory_relinquish(from_locked, relinquish_request,
-				     &api_page_pool);
+	ret = ffa_memory_relinquish(from_locked, relinquish_request,
+				    &api_page_pool);
 
 out:
 	vm_unlock(&from_locked);
 	return ret;
 }
 
-static struct spci_value spci_mem_reclaim_tee(struct vm_locked to_locked,
-					      struct vm_locked from_locked,
-					      spci_memory_handle_t handle,
-					      spci_memory_region_flags_t flags,
-					      struct cpu *cpu)
+static struct ffa_value ffa_mem_reclaim_tee(struct vm_locked to_locked,
+					    struct vm_locked from_locked,
+					    ffa_memory_handle_t handle,
+					    ffa_memory_region_flags_t flags,
+					    struct cpu *cpu)
 {
 	uint32_t fragment_length;
 	uint32_t length;
 	uint32_t request_length;
-	struct spci_memory_region *memory_region =
-		(struct spci_memory_region *)cpu_get_buffer(cpu);
+	struct ffa_memory_region *memory_region =
+		(struct ffa_memory_region *)cpu_get_buffer(cpu);
 	uint32_t message_buffer_size = cpu_get_buffer_size(cpu);
-	struct spci_value tee_ret;
+	struct ffa_value tee_ret;
 
-	request_length = spci_memory_lender_retrieve_request_init(
+	request_length = ffa_memory_lender_retrieve_request_init(
 		from_locked.vm->mailbox.recv, handle, to_locked.vm->id);
 
 	/* Retrieve memory region information from the TEE. */
 	tee_ret = arch_tee_call(
-		(struct spci_value){.func = SPCI_MEM_RETRIEVE_REQ_32,
-				    .arg1 = request_length,
-				    .arg2 = request_length});
-	if (tee_ret.func == SPCI_ERROR_32) {
+		(struct ffa_value){.func = FFA_MEM_RETRIEVE_REQ_32,
+				   .arg1 = request_length,
+				   .arg2 = request_length});
+	if (tee_ret.func == FFA_ERROR_32) {
 		dlog_verbose("Got error %d from EL3.\n", tee_ret.arg2);
 		return tee_ret;
 	}
-	if (tee_ret.func != SPCI_MEM_RETRIEVE_RESP_32) {
+	if (tee_ret.func != FFA_MEM_RETRIEVE_RESP_32) {
 		dlog_verbose(
-			"Got %#x from EL3, expected SPCI_MEM_RETRIEVE_RESP.\n",
+			"Got %#x from EL3, expected FFA_MEM_RETRIEVE_RESP.\n",
 			tee_ret.func);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	length = tee_ret.arg1;
@@ -1757,7 +1755,7 @@
 	    fragment_length > message_buffer_size) {
 		dlog_verbose("Invalid fragment length %d (max %d).\n", length,
 			     HF_MAILBOX_SIZE);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/* TODO: Support fragmentation. */
@@ -1766,7 +1764,7 @@
 			"Message fragmentation not yet supported (fragment "
 			"length %d but length %d).\n",
 			fragment_length, length);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/*
@@ -1780,34 +1778,34 @@
 	 * Validate that transition is allowed (e.g. that caller is owner),
 	 * forward the reclaim request to the TEE, and update page tables.
 	 */
-	return spci_memory_tee_reclaim(to_locked, handle, memory_region,
-				       flags & SPCI_MEM_RECLAIM_CLEAR,
-				       &api_page_pool);
+	return ffa_memory_tee_reclaim(to_locked, handle, memory_region,
+				      flags & FFA_MEM_RECLAIM_CLEAR,
+				      &api_page_pool);
 }
 
-struct spci_value api_spci_mem_reclaim(spci_memory_handle_t handle,
-				       spci_memory_region_flags_t flags,
-				       struct vcpu *current)
+struct ffa_value api_ffa_mem_reclaim(ffa_memory_handle_t handle,
+				     ffa_memory_region_flags_t flags,
+				     struct vcpu *current)
 {
 	struct vm *to = current->vm;
-	struct spci_value ret;
+	struct ffa_value ret;
 
-	if ((handle & SPCI_MEMORY_HANDLE_ALLOCATOR_MASK) ==
-	    SPCI_MEMORY_HANDLE_ALLOCATOR_HYPERVISOR) {
+	if ((handle & FFA_MEMORY_HANDLE_ALLOCATOR_MASK) ==
+	    FFA_MEMORY_HANDLE_ALLOCATOR_HYPERVISOR) {
 		struct vm_locked to_locked = vm_lock(to);
 
-		ret = spci_memory_reclaim(to_locked, handle,
-					  flags & SPCI_MEM_RECLAIM_CLEAR,
-					  &api_page_pool);
+		ret = ffa_memory_reclaim(to_locked, handle,
+					 flags & FFA_MEM_RECLAIM_CLEAR,
+					 &api_page_pool);
 
 		vm_unlock(&to_locked);
 	} else {
 		struct vm *from = vm_find(HF_TEE_VM_ID);
 		struct two_vm_locked vm_to_from_lock = vm_lock_both(to, from);
 
-		ret = spci_mem_reclaim_tee(vm_to_from_lock.vm1,
-					   vm_to_from_lock.vm2, handle, flags,
-					   current->cpu);
+		ret = ffa_mem_reclaim_tee(vm_to_from_lock.vm1,
+					  vm_to_from_lock.vm2, handle, flags,
+					  current->cpu);
 
 		vm_unlock(&vm_to_from_lock.vm1);
 		vm_unlock(&vm_to_from_lock.vm2);
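
The reclaim dispatch above keys off bit 63 of the handle: the hypervisor sets it on handles it allocates, while TEE-allocated handles leave it clear. A sketch of a helper (the name is hypothetical) that reassembles the 64-bit handle from the two SMC words, matching the `(args->arg1 & 0xffffffff) | args->arg2 << 32` reconstruction in the handler below:

```
/*
 * Hypothetical helper: w1 carries the low half of the handle, w2 the
 * high half; bit 63 then selects which allocator owns the handle.
 */
static bool ffa_handle_is_hypervisor_allocated(uint64_t arg1, uint64_t arg2)
{
	ffa_memory_handle_t handle = (arg1 & 0xffffffff) | (arg2 << 32);

	return (handle & FFA_MEMORY_HANDLE_ALLOCATOR_MASK) ==
	       FFA_MEMORY_HANDLE_ALLOCATOR_HYPERVISOR;
}
```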
diff --git a/src/arch/aarch64/hftest/power_mgmt.c b/src/arch/aarch64/hftest/power_mgmt.c
index 5412fd1..d53d609 100644
--- a/src/arch/aarch64/hftest/power_mgmt.c
+++ b/src/arch/aarch64/hftest/power_mgmt.c
@@ -36,7 +36,7 @@
 bool arch_cpu_start(uintptr_t id, struct arch_cpu_start_state *state)
 {
 	void vm_cpu_entry(uintptr_t arg);
-	struct spci_value smc_res;
+	struct ffa_value smc_res;
 
 	/* Try to start the CPU. */
 	smc_res = smc64(PSCI_CPU_ON, id, (uintptr_t)&vm_cpu_entry,
@@ -69,7 +69,7 @@
 enum power_status arch_cpu_status(cpu_id_t cpu_id)
 {
 	uint32_t lowest_affinity_level = 0;
-	struct spci_value smc_res;
+	struct ffa_value smc_res;
 
 	/*
 	 * This works because the power_status enum values happen to be the same
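
With the rename, SMC results land in a struct ffa_value, so PSCI status codes are read from `.func` (x0). A sketch, assuming PSCI_RETURN_SUCCESS (0 in the PSCI specification) is provided by the PSCI headers and the MPIDR value is illustrative:

```
void vm_cpu_entry(uintptr_t arg);

/* Sketch: start a secondary core and check the PSCI status in .func. */
bool start_cpu_example(uintptr_t cpu_mpidr, uintptr_t entry_arg)
{
	struct ffa_value smc_res = smc64(PSCI_CPU_ON, cpu_mpidr,
					 (uintptr_t)&vm_cpu_entry, entry_arg);

	return smc_res.func == PSCI_RETURN_SUCCESS;
}
```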
diff --git a/src/arch/aarch64/hypervisor/cpu.c b/src/arch/aarch64/hypervisor/cpu.c
index 97457c4..1d32587 100644
--- a/src/arch/aarch64/hypervisor/cpu.c
+++ b/src/arch/aarch64/hypervisor/cpu.c
@@ -23,7 +23,7 @@
 #include "hf/arch/plat/psci.h"
 
 #include "hf/addr.h"
-#include "hf/spci.h"
+#include "hf/ffa.h"
 #include "hf/std.h"
 #include "hf/vm.h"
 
@@ -69,7 +69,7 @@
 
 void arch_regs_reset(struct vcpu *vcpu)
 {
-	spci_vm_id_t vm_id = vcpu->vm->id;
+	ffa_vm_id_t vm_id = vcpu->vm->id;
 	bool is_primary = vm_id == HF_PRIMARY_VM_ID;
 	cpu_id_t vcpu_id = is_primary ? vcpu->cpu->id : vcpu_index(vcpu);
 	paddr_t table = vcpu->vm->ptable.root;
@@ -125,7 +125,7 @@
 	r->r[0] = arg;
 }
 
-void arch_regs_set_retval(struct arch_regs *r, struct spci_value v)
+void arch_regs_set_retval(struct arch_regs *r, struct ffa_value v)
 {
 	r->r[0] = v.func;
 	r->r[1] = v.arg1;
diff --git a/src/arch/aarch64/hypervisor/debug_el1.c b/src/arch/aarch64/hypervisor/debug_el1.c
index f46cbe6..e346085 100644
--- a/src/arch/aarch64/hypervisor/debug_el1.c
+++ b/src/arch/aarch64/hypervisor/debug_el1.c
@@ -141,7 +141,7 @@
  * Processes an access (msr, mrs) to an EL1 debug register.
  * Returns true if the access was allowed and performed, false otherwise.
  */
-bool debug_el1_process_access(struct vcpu *vcpu, spci_vm_id_t vm_id,
+bool debug_el1_process_access(struct vcpu *vcpu, ffa_vm_id_t vm_id,
 			      uintreg_t esr)
 {
 	/*
diff --git a/src/arch/aarch64/hypervisor/debug_el1.h b/src/arch/aarch64/hypervisor/debug_el1.h
index 9dc1ef6..86b7d60 100644
--- a/src/arch/aarch64/hypervisor/debug_el1.h
+++ b/src/arch/aarch64/hypervisor/debug_el1.h
@@ -20,9 +20,9 @@
 
 #include "hf/cpu.h"
 
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 bool debug_el1_is_register_access(uintreg_t esr_el2);
 
-bool debug_el1_process_access(struct vcpu *vcpu, spci_vm_id_t vm_id,
+bool debug_el1_process_access(struct vcpu *vcpu, ffa_vm_id_t vm_id,
 			      uintreg_t esr_el2);
diff --git a/src/arch/aarch64/hypervisor/feature_id.h b/src/arch/aarch64/hypervisor/feature_id.h
index 86c7c01..15e8265 100644
--- a/src/arch/aarch64/hypervisor/feature_id.h
+++ b/src/arch/aarch64/hypervisor/feature_id.h
@@ -20,7 +20,7 @@
 
 #include "hf/cpu.h"
 
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 #define HF_FEATURE_NONE UINT64_C(0)
 
diff --git a/src/arch/aarch64/hypervisor/handler.c b/src/arch/aarch64/hypervisor/handler.c
index d3a4a75..6ad8616 100644
--- a/src/arch/aarch64/hypervisor/handler.c
+++ b/src/arch/aarch64/hypervisor/handler.c
@@ -25,8 +25,8 @@
 #include "hf/check.h"
 #include "hf/cpu.h"
 #include "hf/dlog.h"
+#include "hf/ffa.h"
 #include "hf/panic.h"
-#include "hf/spci.h"
 #include "hf/vm.h"
 
 #include "vmapi/hf/call.h"
@@ -154,7 +154,7 @@
 void maybe_invalidate_tlb(struct vcpu *vcpu)
 {
 	size_t current_cpu_index = cpu_index(vcpu->cpu);
-	spci_vcpu_index_t new_vcpu_index = vcpu_index(vcpu);
+	ffa_vcpu_index_t new_vcpu_index = vcpu_index(vcpu);
 
 	if (vcpu->vm->arch.last_vcpu_on_cpu[current_cpu_index] !=
 	    new_vcpu_index) {
@@ -280,9 +280,9 @@
  * Applies SMC access control according to manifest and forwards the call if
  * access is granted.
  */
-static void smc_forwarder(const struct vm *vm, struct spci_value *args)
+static void smc_forwarder(const struct vm *vm, struct ffa_value *args)
 {
-	struct spci_value ret;
+	struct ffa_value ret;
 	uint32_t client_id = vm->id;
 	uintreg_t arg7 = args->arg7;
 
@@ -313,73 +313,72 @@
 	*args = ret;
 }
 
-static bool spci_handler(struct spci_value *args, struct vcpu **next)
+static bool ffa_handler(struct ffa_value *args, struct vcpu **next)
 {
 	uint32_t func = args->func & ~SMCCC_CONVENTION_MASK;
 
 	/*
 	 * NOTE: When adding new methods to this handler update
-	 * api_spci_features accordingly.
+	 * api_ffa_features accordingly.
 	 */
 	switch (func) {
-	case SPCI_VERSION_32:
-		*args = api_spci_version(args->arg1);
+	case FFA_VERSION_32:
+		*args = api_ffa_version(args->arg1);
 		return true;
-	case SPCI_ID_GET_32:
-		*args = api_spci_id_get(current());
+	case FFA_ID_GET_32:
+		*args = api_ffa_id_get(current());
 		return true;
-	case SPCI_FEATURES_32:
-		*args = api_spci_features(args->arg1);
+	case FFA_FEATURES_32:
+		*args = api_ffa_features(args->arg1);
 		return true;
-	case SPCI_RX_RELEASE_32:
-		*args = api_spci_rx_release(current(), next);
+	case FFA_RX_RELEASE_32:
+		*args = api_ffa_rx_release(current(), next);
 		return true;
-	case SPCI_RXTX_MAP_32:
-		*args = api_spci_rxtx_map(ipa_init(args->arg1),
-					  ipa_init(args->arg2), args->arg3,
-					  current(), next);
+	case FFA_RXTX_MAP_32:
+		*args = api_ffa_rxtx_map(ipa_init(args->arg1),
+					 ipa_init(args->arg2), args->arg3,
+					 current(), next);
 		return true;
-	case SPCI_YIELD_32:
+	case FFA_YIELD_32:
 		api_yield(current(), next);
 
-		/* SPCI_YIELD always returns SPCI_SUCCESS. */
-		*args = (struct spci_value){.func = SPCI_SUCCESS_32};
+		/* FFA_YIELD always returns FFA_SUCCESS. */
+		*args = (struct ffa_value){.func = FFA_SUCCESS_32};
 
 		return true;
-	case SPCI_MSG_SEND_32:
-		*args = api_spci_msg_send(spci_msg_send_sender(*args),
-					  spci_msg_send_receiver(*args),
-					  spci_msg_send_size(*args),
-					  spci_msg_send_attributes(*args),
-					  current(), next);
+	case FFA_MSG_SEND_32:
+		*args = api_ffa_msg_send(
+			ffa_msg_send_sender(*args),
+			ffa_msg_send_receiver(*args), ffa_msg_send_size(*args),
+			ffa_msg_send_attributes(*args), current(), next);
 		return true;
-	case SPCI_MSG_WAIT_32:
-		*args = api_spci_msg_recv(true, current(), next);
+	case FFA_MSG_WAIT_32:
+		*args = api_ffa_msg_recv(true, current(), next);
 		return true;
-	case SPCI_MSG_POLL_32:
-		*args = api_spci_msg_recv(false, current(), next);
+	case FFA_MSG_POLL_32:
+		*args = api_ffa_msg_recv(false, current(), next);
 		return true;
-	case SPCI_RUN_32:
-		*args = api_spci_run(spci_vm_id(*args), spci_vcpu_index(*args),
-				     current(), next);
+	case FFA_RUN_32:
+		*args = api_ffa_run(ffa_vm_id(*args), ffa_vcpu_index(*args),
+				    current(), next);
 		return true;
-	case SPCI_MEM_DONATE_32:
-	case SPCI_MEM_LEND_32:
-	case SPCI_MEM_SHARE_32:
-		*args = api_spci_mem_send(func, args->arg1, args->arg2,
-					  ipa_init(args->arg3), args->arg4,
-					  current(), next);
+	case FFA_MEM_DONATE_32:
+	case FFA_MEM_LEND_32:
+	case FFA_MEM_SHARE_32:
+		*args = api_ffa_mem_send(func, args->arg1, args->arg2,
+					 ipa_init(args->arg3), args->arg4,
+					 current(), next);
 		return true;
-	case SPCI_MEM_RETRIEVE_REQ_32:
-		*args = api_spci_mem_retrieve_req(args->arg1, args->arg2,
-						  ipa_init(args->arg3),
-						  args->arg4, current());
+	case FFA_MEM_RETRIEVE_REQ_32:
+		*args = api_ffa_mem_retrieve_req(args->arg1, args->arg2,
+						 ipa_init(args->arg3),
+						 args->arg4, current());
 		return true;
-	case SPCI_MEM_RELINQUISH_32:
-		*args = api_spci_mem_relinquish(current());
+	case FFA_MEM_RELINQUISH_32:
+		*args = api_ffa_mem_relinquish(current());
 		return true;
-	case SPCI_MEM_RECLAIM_32:
-		*args = api_spci_mem_reclaim(
+	case FFA_MEM_RECLAIM_32:
+		*args = api_ffa_mem_reclaim(
 			(args->arg1 & 0xffffffff) | args->arg2 << 32,
 			args->arg3, current());
 		return true;
@@ -422,7 +421,7 @@
  */
 static struct vcpu *smc_handler(struct vcpu *vcpu)
 {
-	struct spci_value args = {
+	struct ffa_value args = {
 		.func = vcpu->regs.r[0],
 		.arg1 = vcpu->regs.r[1],
 		.arg2 = vcpu->regs.r[2],
@@ -439,7 +438,7 @@
 		return next;
 	}
 
-	if (spci_handler(&args, &next)) {
+	if (ffa_handler(&args, &next)) {
 		arch_regs_set_retval(&vcpu->regs, args);
 		update_vi(next);
 		return next;
@@ -595,7 +594,7 @@
 
 struct vcpu *hvc_handler(struct vcpu *vcpu)
 {
-	struct spci_value args = {
+	struct ffa_value args = {
 		.func = vcpu->regs.r[0],
 		.arg1 = vcpu->regs.r[1],
 		.arg2 = vcpu->regs.r[2],
@@ -612,7 +611,7 @@
 		return next;
 	}
 
-	if (spci_handler(&args, &next)) {
+	if (ffa_handler(&args, &next)) {
 		arch_regs_set_retval(&vcpu->regs, args);
 		update_vi(next);
 		return next;
@@ -816,7 +815,7 @@
 void handle_system_register_access(uintreg_t esr_el2)
 {
 	struct vcpu *vcpu = current();
-	spci_vm_id_t vm_id = vcpu->vm->id;
+	ffa_vm_id_t vm_id = vcpu->vm->id;
 	uintreg_t ec = GET_ESR_EC(esr_el2);
 
 	CHECK(ec == EC_MSR);
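
Note on the handler above: the `ffa_handler` switch maps each SMCCC function ID onto its `api_ffa_*` implementation, with arguments and results carried in x0-x7. A minimal guest-side sketch of one such call, assuming FFA_VERSION's v1.0 function ID of 0x84000063 and open-coding what the helpers in `vmapi/hf/call.h` normally wrap (x2-x7 omitted for brevity; the real wrappers treat x0-x7 as clobbered):

```
#include <stdint.h>

#define FFA_VERSION_32_ID UINT32_C(0x84000063) /* assumed FF-A v1.0 ID */

static uint32_t ffa_version_sketch(uint32_t requested_version)
{
	register uint64_t x0 __asm__("x0") = FFA_VERSION_32_ID;
	register uint64_t x1 __asm__("x1") = requested_version;

	/*
	 * Traps to EL2; hvc_handler packs x0-x7 into a struct ffa_value
	 * and hands it to ffa_handler above.
	 */
	__asm__ volatile("hvc #0" : "+r"(x0), "+r"(x1));

	/* api_ffa_version returns the negotiated version in x0. */
	return (uint32_t)x0;
}
```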
diff --git a/src/arch/aarch64/hypervisor/perfmon.c b/src/arch/aarch64/hypervisor/perfmon.c
index 4532e31..81644a0 100644
--- a/src/arch/aarch64/hypervisor/perfmon.c
+++ b/src/arch/aarch64/hypervisor/perfmon.c
@@ -157,8 +157,7 @@
  * Processes an access (msr, mrs) to a performance monitor register.
  * Returns true if the access was allowed and performed, false otherwise.
  */
-bool perfmon_process_access(struct vcpu *vcpu, spci_vm_id_t vm_id,
-			    uintreg_t esr)
+bool perfmon_process_access(struct vcpu *vcpu, ffa_vm_id_t vm_id, uintreg_t esr)
 {
 	/*
 	 * For now, performance monitor registers are not supported by secondary
@@ -232,7 +231,7 @@
 /**
  * Returns the value register PMCCFILTR_EL0 should have at initialization.
  */
-uintreg_t perfmon_get_pmccfiltr_el0_init_value(spci_vm_id_t vm_id)
+uintreg_t perfmon_get_pmccfiltr_el0_init_value(ffa_vm_id_t vm_id)
 {
 	if (vm_id != HF_PRIMARY_VM_ID) {
 		/* Disable cycle counting for secondary VMs. */
diff --git a/src/arch/aarch64/hypervisor/perfmon.h b/src/arch/aarch64/hypervisor/perfmon.h
index afeabd9..a9d68f7 100644
--- a/src/arch/aarch64/hypervisor/perfmon.h
+++ b/src/arch/aarch64/hypervisor/perfmon.h
@@ -20,7 +20,7 @@
 
 #include "hf/cpu.h"
 
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 /**
  * Set to disable cycle counting when event counting is prohibited.
@@ -74,7 +74,7 @@
 
 bool perfmon_is_register_access(uintreg_t esr_el2);
 
-bool perfmon_process_access(struct vcpu *vcpu, spci_vm_id_t vm_id,
+bool perfmon_process_access(struct vcpu *vcpu, ffa_vm_id_t vm_id,
 			    uintreg_t esr_el2);
 
-uintreg_t perfmon_get_pmccfiltr_el0_init_value(spci_vm_id_t vm_id);
+uintreg_t perfmon_get_pmccfiltr_el0_init_value(ffa_vm_id_t vm_id);
diff --git a/src/arch/aarch64/hypervisor/psci_handler.c b/src/arch/aarch64/hypervisor/psci_handler.c
index aabe979..2f1b7b5 100644
--- a/src/arch/aarch64/hypervisor/psci_handler.c
+++ b/src/arch/aarch64/hypervisor/psci_handler.c
@@ -24,8 +24,8 @@
 #include "hf/api.h"
 #include "hf/cpu.h"
 #include "hf/dlog.h"
+#include "hf/ffa.h"
 #include "hf/panic.h"
-#include "hf/spci.h"
 #include "hf/vm.h"
 
 #include "psci.h"
@@ -38,7 +38,7 @@
 /* Performs arch specific boot time initialisation. */
 void arch_one_time_init(void)
 {
-	struct spci_value smc_res =
+	struct ffa_value smc_res =
 		smc32(PSCI_VERSION, 0, 0, 0, 0, 0, 0, SMCCC_CALLER_HYPERVISOR);
 
 	el3_psci_version = smc_res.func;
@@ -73,7 +73,7 @@
 			     uintreg_t arg1, uintreg_t arg2, uintreg_t *ret)
 {
 	struct cpu *c;
-	struct spci_value smc_res;
+	struct ffa_value smc_res;
 
 	/*
 	 * If there's a problem with the EL3 PSCI, block standard secure service
@@ -242,7 +242,7 @@
  * Convert a PSCI CPU / affinity ID for a secondary VM to the corresponding vCPU
  * index.
  */
-spci_vcpu_index_t vcpu_id_to_index(cpu_id_t vcpu_id)
+ffa_vcpu_index_t vcpu_id_to_index(cpu_id_t vcpu_id)
 {
 	/* For now we use indices as IDs for the purposes of PSCI. */
 	return vcpu_id;
@@ -297,7 +297,7 @@
 		uint32_t lowest_affinity_level = arg1;
 		struct vm *vm = vcpu->vm;
 		struct vcpu_locked target_vcpu;
-		spci_vcpu_index_t target_vcpu_index =
+		ffa_vcpu_index_t target_vcpu_index =
 			vcpu_id_to_index(target_affinity);
 
 		if (lowest_affinity_level != 0) {
@@ -343,7 +343,7 @@
 		cpu_id_t target_cpu = arg0;
 		ipaddr_t entry_point_address = ipa_init(arg1);
 		uint64_t context_id = arg2;
-		spci_vcpu_index_t target_vcpu_index =
+		ffa_vcpu_index_t target_vcpu_index =
 			vcpu_id_to_index(target_cpu);
 		struct vm *vm = vcpu->vm;
 		struct vcpu *target_vcpu;
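
For context on the CPU_ON block above: PSCI_CPU_ON carries the target CPU/affinity ID in arg0, the entry point in arg1 and an opaque context ID in arg2. A guest-side sketch, assuming the standard SMC64 function ID 0xc4000003 (illustrative only; real guests would go through their OS's PSCI driver, and the real wrappers treat x0-x7 as clobbered):

```
#include <stdint.h>

#define PSCI_CPU_ON_64 UINT64_C(0xc4000003) /* standard SMC64 ID */

static int32_t psci_cpu_on_sketch(uint64_t target_cpu, uint64_t entry,
				  uint64_t context_id)
{
	register uint64_t x0 __asm__("x0") = PSCI_CPU_ON_64;
	register uint64_t x1 __asm__("x1") = target_cpu;
	register uint64_t x2 __asm__("x2") = entry;
	register uint64_t x3 __asm__("x3") = context_id;

	/* Trapped by Hafnium and routed to the PSCI handler above. */
	__asm__ volatile("hvc #0" : "+r"(x0), "+r"(x1), "+r"(x2), "+r"(x3));

	/* 0 on success, otherwise a negative PSCI error code. */
	return (int32_t)x0;
}
```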
diff --git a/src/arch/aarch64/hypervisor/tee.c b/src/arch/aarch64/hypervisor/tee.c
index c3885ac..90d9e32 100644
--- a/src/arch/aarch64/hypervisor/tee.c
+++ b/src/arch/aarch64/hypervisor/tee.c
@@ -18,8 +18,8 @@
 
 #include "hf/check.h"
 #include "hf/dlog.h"
+#include "hf/ffa.h"
 #include "hf/panic.h"
-#include "hf/spci.h"
 #include "hf/vm.h"
 
 #include "smc.h"
@@ -27,7 +27,7 @@
 void arch_tee_init(void)
 {
 	struct vm *tee_vm = vm_find(HF_TEE_VM_ID);
-	struct spci_value ret;
+	struct ffa_value ret;
 	uint32_t func;
 
 	CHECK(tee_vm != NULL);
@@ -37,11 +37,11 @@
 	 * perspective and vice-versa.
 	 */
 	dlog_verbose("Setting up buffers for TEE.\n");
-	ret = arch_tee_call((struct spci_value){
-		.func = SPCI_RXTX_MAP_64,
+	ret = arch_tee_call((struct ffa_value){
+		.func = FFA_RXTX_MAP_64,
 		.arg1 = pa_addr(pa_from_va(va_from_ptr(tee_vm->mailbox.recv))),
 		.arg2 = pa_addr(pa_from_va(va_from_ptr(tee_vm->mailbox.send))),
-		.arg3 = HF_MAILBOX_SIZE / SPCI_PAGE_SIZE});
+		.arg3 = HF_MAILBOX_SIZE / FFA_PAGE_SIZE});
 	func = ret.func & ~SMCCC_CONVENTION_MASK;
 	if (ret.func == SMCCC_ERROR_UNKNOWN) {
 		dlog_error(
@@ -49,9 +49,9 @@
 			"Memory sharing with TEE will not work.\n");
 		return;
 	}
-	if (func == SPCI_ERROR_32) {
+	if (func == FFA_ERROR_32) {
 		panic("Error %d setting up TEE message buffers.", ret.arg2);
-	} else if (func != SPCI_SUCCESS_32) {
+	} else if (func != FFA_SUCCESS_32) {
 		panic("Unexpected function %#x returned setting up TEE message "
 		      "buffers.",
 		      ret.func);
@@ -59,7 +59,7 @@
 	dlog_verbose("TEE finished setting up buffers.\n");
 }
 
-struct spci_value arch_tee_call(struct spci_value args)
+struct ffa_value arch_tee_call(struct ffa_value args)
 {
 	return smc_forward(args.func, args.arg1, args.arg2, args.arg3,
 			   args.arg4, args.arg5, args.arg6, args.arg7);
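
One detail worth calling out in the exchange above: per the FF-A RXTX_MAP ABI, arg1 names the caller's TX buffer and arg2 its RX buffer, so registering the buffers on the TEE's behalf means crossing them over relative to Hafnium's view. A sketch of the same call with the swap spelled out (illustrative only; the real call sits inline in `arch_tee_init` above):

```
static struct ffa_value tee_rxtx_map_sketch(struct vm *tee_vm)
{
	return arch_tee_call((struct ffa_value){
		.func = FFA_RXTX_MAP_64,
		/* TEE TX buffer == the page Hafnium reads (its recv). */
		.arg1 = pa_addr(pa_from_va(va_from_ptr(tee_vm->mailbox.recv))),
		/* TEE RX buffer == the page Hafnium writes (its send). */
		.arg2 = pa_addr(pa_from_va(va_from_ptr(tee_vm->mailbox.send))),
		.arg3 = HF_MAILBOX_SIZE / FFA_PAGE_SIZE});
}
```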
diff --git a/src/arch/aarch64/inc/hf/arch/types.h b/src/arch/aarch64/inc/hf/arch/types.h
index 2ecd722..2f530b6 100644
--- a/src/arch/aarch64/inc/hf/arch/types.h
+++ b/src/arch/aarch64/inc/hf/arch/types.h
@@ -19,7 +19,7 @@
 #include <stdalign.h>
 #include <stdint.h>
 
-#include "hf/spci.h"
+#include "hf/ffa.h"
 #include "hf/static_assert.h"
 
 #define PAGE_BITS 12
@@ -67,7 +67,7 @@
 	 * on that CPU, which avoids contention and so no lock is needed to
 	 * access this field.
 	 */
-	spci_vcpu_index_t last_vcpu_on_cpu[MAX_CPUS];
+	ffa_vcpu_index_t last_vcpu_on_cpu[MAX_CPUS];
 	arch_features_t trapped_features;
 
 	/*
diff --git a/src/arch/aarch64/plat/smc/absent.c b/src/arch/aarch64/plat/smc/absent.c
index 6ea9b8d..4545693 100644
--- a/src/arch/aarch64/plat/smc/absent.c
+++ b/src/arch/aarch64/plat/smc/absent.c
@@ -16,7 +16,7 @@
 
 #include "hf/arch/plat/smc.h"
 
-void plat_smc_post_forward(struct spci_value args, struct spci_value *ret)
+void plat_smc_post_forward(struct ffa_value args, struct ffa_value *ret)
 {
 	(void)args;
 	(void)ret;
diff --git a/src/arch/aarch64/smc.c b/src/arch/aarch64/smc.c
index e5de0bd..633c6de 100644
--- a/src/arch/aarch64/smc.c
+++ b/src/arch/aarch64/smc.c
@@ -18,12 +18,12 @@
 
 #include <stdint.h>
 
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
-static struct spci_value smc_internal(uint32_t func, uint64_t arg0,
-				      uint64_t arg1, uint64_t arg2,
-				      uint64_t arg3, uint64_t arg4,
-				      uint64_t arg5, uint32_t caller_id)
+static struct ffa_value smc_internal(uint32_t func, uint64_t arg0,
+				     uint64_t arg1, uint64_t arg2,
+				     uint64_t arg3, uint64_t arg4,
+				     uint64_t arg5, uint32_t caller_id)
 {
 	register uint64_t r0 __asm__("x0") = func;
 	register uint64_t r1 __asm__("x1") = arg0;
@@ -40,35 +40,35 @@
 		"+r"(r0), "+r"(r1), "+r"(r2), "+r"(r3), "+r"(r4), "+r"(r5),
 		"+r"(r6), "+r"(r7));
 
-	return (struct spci_value){.func = r0,
-				   .arg1 = r1,
-				   .arg2 = r2,
-				   .arg3 = r3,
-				   .arg4 = r4,
-				   .arg5 = r5,
-				   .arg6 = r6,
-				   .arg7 = r7};
+	return (struct ffa_value){.func = r0,
+				  .arg1 = r1,
+				  .arg2 = r2,
+				  .arg3 = r3,
+				  .arg4 = r4,
+				  .arg5 = r5,
+				  .arg6 = r6,
+				  .arg7 = r7};
 }
 
-struct spci_value smc32(uint32_t func, uint32_t arg0, uint32_t arg1,
-			uint32_t arg2, uint32_t arg3, uint32_t arg4,
-			uint32_t arg5, uint32_t caller_id)
+struct ffa_value smc32(uint32_t func, uint32_t arg0, uint32_t arg1,
+		       uint32_t arg2, uint32_t arg3, uint32_t arg4,
+		       uint32_t arg5, uint32_t caller_id)
 {
 	return smc_internal(func | SMCCC_32_BIT, arg0, arg1, arg2, arg3, arg4,
 			    arg5, caller_id);
 }
 
-struct spci_value smc64(uint32_t func, uint64_t arg0, uint64_t arg1,
-			uint64_t arg2, uint64_t arg3, uint64_t arg4,
-			uint64_t arg5, uint32_t caller_id)
+struct ffa_value smc64(uint32_t func, uint64_t arg0, uint64_t arg1,
+		       uint64_t arg2, uint64_t arg3, uint64_t arg4,
+		       uint64_t arg5, uint32_t caller_id)
 {
 	return smc_internal(func | SMCCC_64_BIT, arg0, arg1, arg2, arg3, arg4,
 			    arg5, caller_id);
 }
 
-struct spci_value smc_forward(uint32_t func, uint64_t arg0, uint64_t arg1,
-			      uint64_t arg2, uint64_t arg3, uint64_t arg4,
-			      uint64_t arg5, uint32_t caller_id)
+struct ffa_value smc_forward(uint32_t func, uint64_t arg0, uint64_t arg1,
+			     uint64_t arg2, uint64_t arg3, uint64_t arg4,
+			     uint64_t arg5, uint32_t caller_id)
 {
 	return smc_internal(func, arg0, arg1, arg2, arg3, arg4, arg5,
 			    caller_id);
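
The three wrappers above differ only in whether SMCCC_32_BIT or SMCCC_64_BIT is OR'd into the function ID before trapping to EL3 (`smc_forward` passes the ID through untouched). A usage sketch, mirroring the PSCI version probe in psci_handler.c:

```
static void smc_usage_sketch(void)
{
	struct ffa_value ret = smc32(PSCI_VERSION, 0, 0, 0, 0, 0, 0,
				     SMCCC_CALLER_HYPERVISOR);

	/* PSCI packs major.minor into bits [30:16] and [15:0] of x0. */
	dlog("EL3 PSCI version: %d.%d\n", (int)((ret.func >> 16) & 0x7fff),
	     (int)(ret.func & 0xffff));
}
```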
diff --git a/src/arch/aarch64/smc.h b/src/arch/aarch64/smc.h
index ad8ce5b..872929a 100644
--- a/src/arch/aarch64/smc.h
+++ b/src/arch/aarch64/smc.h
@@ -18,7 +18,7 @@
 
 #include <stdint.h>
 
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 /* clang-format off */
 
@@ -49,14 +49,14 @@
 
 /* clang-format on */
 
-struct spci_value smc32(uint32_t func, uint32_t arg0, uint32_t arg1,
-			uint32_t arg2, uint32_t arg3, uint32_t arg4,
-			uint32_t arg5, uint32_t caller_id);
+struct ffa_value smc32(uint32_t func, uint32_t arg0, uint32_t arg1,
+		       uint32_t arg2, uint32_t arg3, uint32_t arg4,
+		       uint32_t arg5, uint32_t caller_id);
 
-struct spci_value smc64(uint32_t func, uint64_t arg0, uint64_t arg1,
-			uint64_t arg2, uint64_t arg3, uint64_t arg4,
-			uint64_t arg5, uint32_t caller_id);
+struct ffa_value smc64(uint32_t func, uint64_t arg0, uint64_t arg1,
+		       uint64_t arg2, uint64_t arg3, uint64_t arg4,
+		       uint64_t arg5, uint32_t caller_id);
 
-struct spci_value smc_forward(uint32_t func, uint64_t arg0, uint64_t arg1,
-			      uint64_t arg2, uint64_t arg3, uint64_t arg4,
-			      uint64_t arg5, uint32_t caller_id);
+struct ffa_value smc_forward(uint32_t func, uint64_t arg0, uint64_t arg1,
+			     uint64_t arg2, uint64_t arg3, uint64_t arg4,
+			     uint64_t arg5, uint32_t caller_id);
diff --git a/src/arch/aarch64/sysregs.c b/src/arch/aarch64/sysregs.c
index 8f807a0..9e56265 100644
--- a/src/arch/aarch64/sysregs.c
+++ b/src/arch/aarch64/sysregs.c
@@ -35,7 +35,7 @@
  * Returns the value for HCR_EL2 for the particular VM.
  * For now, the primary VM has one value and all secondary VMs share a value.
  */
-uintreg_t get_hcr_el2_value(spci_vm_id_t vm_id)
+uintreg_t get_hcr_el2_value(ffa_vm_id_t vm_id)
 {
 	uintreg_t hcr_el2_value = 0;
 
diff --git a/src/arch/aarch64/sysregs.h b/src/arch/aarch64/sysregs.h
index 7198576..9dbd165 100644
--- a/src/arch/aarch64/sysregs.h
+++ b/src/arch/aarch64/sysregs.h
@@ -20,7 +20,7 @@
 
 #include "hf/cpu.h"
 
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 /**
  * RT value that indicates an access to register XZR (always 0).
@@ -588,7 +588,7 @@
  */
 #define SCTLR_EL2_M (UINT64_C(0x1) << 0)
 
-uintreg_t get_hcr_el2_value(spci_vm_id_t vm_id);
+uintreg_t get_hcr_el2_value(ffa_vm_id_t vm_id);
 
 uintreg_t get_mdcr_el2_value(void);
 
diff --git a/src/arch/fake/hypervisor/cpu.c b/src/arch/fake/hypervisor/cpu.c
index 3fc09f9..4a0bcfe 100644
--- a/src/arch/fake/hypervisor/cpu.c
+++ b/src/arch/fake/hypervisor/cpu.c
@@ -17,7 +17,7 @@
 #include "hf/arch/cpu.h"
 
 #include "hf/cpu.h"
-#include "hf/spci.h"
+#include "hf/ffa.h"
 
 void arch_irq_disable(void)
 {
@@ -41,7 +41,7 @@
 	r->arg[0] = arg;
 }
 
-void arch_regs_set_retval(struct arch_regs *r, struct spci_value v)
+void arch_regs_set_retval(struct arch_regs *r, struct ffa_value v)
 {
 	r->arg[0] = v.func;
 	r->arg[1] = v.arg1;
diff --git a/src/arch/fake/hypervisor/tee.c b/src/arch/fake/hypervisor/tee.c
index 6f1eaca..c78c82f 100644
--- a/src/arch/fake/hypervisor/tee.c
+++ b/src/arch/fake/hypervisor/tee.c
@@ -17,11 +17,11 @@
 #include "hf/arch/tee.h"
 
 #include "hf/dlog.h"
-#include "hf/spci.h"
-#include "hf/spci_internal.h"
+#include "hf/ffa.h"
+#include "hf/ffa_internal.h"
 
-struct spci_value arch_tee_call(struct spci_value args)
+struct ffa_value arch_tee_call(struct ffa_value args)
 {
 	dlog_error("Attempted to call TEE function %#x\n", args.func);
-	return spci_error(SPCI_NOT_SUPPORTED);
+	return ffa_error(FFA_NOT_SUPPORTED);
 }
diff --git a/src/cpu.c b/src/cpu.c
index e52fe2d..c31ad0f 100644
--- a/src/cpu.c
+++ b/src/cpu.c
@@ -42,13 +42,13 @@
 	      "Page alignment is too weak for the stack.");
 
 /**
- * Internal buffer used to store SPCI messages from a VM Tx. Its usage prevents
+ * Internal buffer used to store FF-A messages from a VM Tx. Its usage prevents
  * TOCTOU issues while Hafnium performs actions on information that would
  * otherwise be re-writable by the VM.
  *
  * Each buffer is owned by a single CPU. The buffer can only be used for
- * spci_msg_send. The information stored in the buffer is only valid during the
- * spci_msg_send request is performed.
+ * ffa_msg_send. The information stored in the buffer is only valid while the
+ * ffa_msg_send request is being performed.
  */
 alignas(PAGE_SIZE) static uint8_t cpu_message_buffer[MAX_CPUS][PAGE_SIZE];
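
A minimal sketch of the copy-then-validate pattern this buffer enables, as it might look inside src/cpu.c; `validate_message` is a hypothetical helper, and the real code uses the checked copy helpers from `hf/std.h` rather than bare memcpy:

```
#include <string.h>

static bool msg_copy_then_validate_sketch(struct cpu *c,
					  const void *guest_tx, size_t size)
{
	uint8_t *priv = cpu_message_buffer[cpu_index(c)];

	if (size > PAGE_SIZE) {
		return false;
	}

	/* Snapshot the guest-writable TX page first... */
	memcpy(priv, guest_tx, size);

	/* ...so every later check reads data the VM can no longer touch. */
	return validate_message(priv, size); /* hypothetical helper */
}
```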
 
diff --git a/src/dlog.c b/src/dlog.c
index 1a4bf67..32600ce 100644
--- a/src/dlog.c
+++ b/src/dlog.c
@@ -19,7 +19,7 @@
 #include <stdbool.h>
 #include <stddef.h>
 
-#include "hf/spci.h"
+#include "hf/ffa.h"
 #include "hf/spinlock.h"
 #include "hf/std.h"
 #include "hf/stdout.h"
@@ -229,7 +229,7 @@
  * Send the contents of the given VM's log buffer to the log, preceded by the VM
  * ID and followed by a newline.
  */
-void dlog_flush_vm_buffer(spci_vm_id_t id, char buffer[], size_t length)
+void dlog_flush_vm_buffer(ffa_vm_id_t id, char buffer[], size_t length)
 {
 	lock();
 
diff --git a/src/spci_memory.c b/src/ffa_memory.c
similarity index 65%
rename from src/spci_memory.c
rename to src/ffa_memory.c
index 95cc3ae..8def572 100644
--- a/src/spci_memory.c
+++ b/src/ffa_memory.c
@@ -14,15 +14,15 @@
  * limitations under the License.
  */
 
-#include "hf/spci_memory.h"
+#include "hf/ffa_memory.h"
 
 #include "hf/arch/tee.h"
 
 #include "hf/api.h"
 #include "hf/check.h"
 #include "hf/dlog.h"
+#include "hf/ffa_internal.h"
 #include "hf/mpool.h"
-#include "hf/spci_internal.h"
 #include "hf/std.h"
 #include "hf/vm.h"
 
@@ -36,32 +36,32 @@
  */
 #define MAX_MEM_SHARES 100
 
-static_assert(sizeof(struct spci_memory_region_constituent) % 16 == 0,
-	      "struct spci_memory_region_constituent must be a multiple of 16 "
+static_assert(sizeof(struct ffa_memory_region_constituent) % 16 == 0,
+	      "struct ffa_memory_region_constituent must be a multiple of 16 "
 	      "bytes long.");
-static_assert(sizeof(struct spci_composite_memory_region) % 16 == 0,
-	      "struct spci_composite_memory_region must be a multiple of 16 "
+static_assert(sizeof(struct ffa_composite_memory_region) % 16 == 0,
+	      "struct ffa_composite_memory_region must be a multiple of 16 "
 	      "bytes long.");
-static_assert(sizeof(struct spci_memory_region_attributes) == 4,
-	      "struct spci_memory_region_attributes must be 4bytes long.");
-static_assert(sizeof(struct spci_memory_access) % 16 == 0,
-	      "struct spci_memory_access must be a multiple of 16 bytes long.");
-static_assert(sizeof(struct spci_memory_region) % 16 == 0,
-	      "struct spci_memory_region must be a multiple of 16 bytes long.");
-static_assert(sizeof(struct spci_mem_relinquish) % 16 == 0,
-	      "struct spci_mem_relinquish must be a multiple of 16 "
+static_assert(sizeof(struct ffa_memory_region_attributes) == 4,
+	      "struct ffa_memory_region_attributes must be 4bytes long.");
+static_assert(sizeof(struct ffa_memory_access) % 16 == 0,
+	      "struct ffa_memory_access must be a multiple of 16 bytes long.");
+static_assert(sizeof(struct ffa_memory_region) % 16 == 0,
+	      "struct ffa_memory_region must be a multiple of 16 bytes long.");
+static_assert(sizeof(struct ffa_mem_relinquish) % 16 == 0,
+	      "struct ffa_mem_relinquish must be a multiple of 16 "
 	      "bytes long.");
 
-struct spci_memory_share_state {
+struct ffa_memory_share_state {
 	/**
 	 * The memory region being shared, or NULL if this share state is
 	 * unallocated.
 	 */
-	struct spci_memory_region *memory_region;
+	struct ffa_memory_region *memory_region;
 
 	/**
-	 * The SPCI function used for sharing the memory. Must be one of
-	 * SPCI_MEM_DONATE_32, SPCI_MEM_LEND_32 or SPCI_MEM_SHARE_32 if the
+	 * The FF-A function used for sharing the memory. Must be one of
+	 * FFA_MEM_DONATE_32, FFA_MEM_LEND_32 or FFA_MEM_SHARE_32 if the
 	 * share state is allocated, or 0.
 	 */
 	uint32_t share_func;
@@ -79,24 +79,24 @@
  * Encapsulates the set of share states while the `share_states_lock` is held.
  */
 struct share_states_locked {
-	struct spci_memory_share_state *share_states;
+	struct ffa_memory_share_state *share_states;
 };
 
 /**
- * All access to members of a `struct spci_memory_share_state` must be guarded
+ * All access to members of a `struct ffa_memory_share_state` must be guarded
  * by this lock.
  */
 static struct spinlock share_states_lock_instance = SPINLOCK_INIT;
-static struct spci_memory_share_state share_states[MAX_MEM_SHARES];
+static struct ffa_memory_share_state share_states[MAX_MEM_SHARES];
 
 /**
- * Initialises the next available `struct spci_memory_share_state` and sets
+ * Initialises the next available `struct ffa_memory_share_state` and sets
  * `handle` to its handle. Returns true on success or false if none are
  * available.
  */
 static bool allocate_share_state(uint32_t share_func,
-				 struct spci_memory_region *memory_region,
-				 spci_memory_handle_t *handle)
+				 struct ffa_memory_region *memory_region,
+				 ffa_memory_handle_t *handle)
 {
 	uint64_t i;
 
@@ -106,14 +106,14 @@
 	for (i = 0; i < MAX_MEM_SHARES; ++i) {
 		if (share_states[i].share_func == 0) {
 			uint32_t j;
-			struct spci_memory_share_state *allocated_state =
+			struct ffa_memory_share_state *allocated_state =
 				&share_states[i];
 			allocated_state->share_func = share_func;
 			allocated_state->memory_region = memory_region;
 			for (j = 0; j < MAX_MEM_SHARE_RECIPIENTS; ++j) {
 				allocated_state->retrieved[j] = false;
 			}
-			*handle = i | SPCI_MEMORY_HANDLE_ALLOCATOR_HYPERVISOR;
+			*handle = i | FFA_MEMORY_HANDLE_ALLOCATOR_HYPERVISOR;
 			sl_unlock(&share_states_lock_instance);
 			return true;
 		}
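
The handle scheme above is deliberately simple: a hypervisor-allocated handle is just the share-state array index with the allocator bit set, and `get_share_state` below recovers the index by masking the bit back off. A sketch of the round trip (helper names are illustrative; the constants come from ffa.h):

```
static ffa_memory_handle_t handle_encode_sketch(uint32_t index)
{
	return index | FFA_MEMORY_HANDLE_ALLOCATOR_HYPERVISOR;
}

static uint32_t handle_decode_sketch(ffa_memory_handle_t handle)
{
	return handle & ~FFA_MEMORY_HANDLE_ALLOCATOR_MASK;
}
```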
@@ -145,11 +145,11 @@
  * returns true. Otherwise returns false and doesn't take the lock.
  */
 static bool get_share_state(struct share_states_locked share_states,
-			    spci_memory_handle_t handle,
-			    struct spci_memory_share_state **share_state_ret)
+			    ffa_memory_handle_t handle,
+			    struct ffa_memory_share_state **share_state_ret)
 {
-	struct spci_memory_share_state *share_state;
-	uint32_t index = handle & ~SPCI_MEMORY_HANDLE_ALLOCATOR_MASK;
+	struct ffa_memory_share_state *share_state;
+	uint32_t index = handle & ~FFA_MEMORY_HANDLE_ALLOCATOR_MASK;
 
 	if (index >= MAX_MEM_SHARES) {
 		return false;
@@ -167,7 +167,7 @@
 
 /** Marks a share state as unallocated. */
 static void share_state_free(struct share_states_locked share_states,
-			     struct spci_memory_share_state *share_state,
+			     struct ffa_memory_share_state *share_state,
 			     struct mpool *page_pool)
 {
 	CHECK(share_states.share_states != NULL);
@@ -180,11 +180,11 @@
  * Marks the share state with the given handle as unallocated, or returns false
  * if the handle was invalid.
  */
-static bool share_state_free_handle(spci_memory_handle_t handle,
+static bool share_state_free_handle(ffa_memory_handle_t handle,
 				    struct mpool *page_pool)
 {
 	struct share_states_locked share_states = share_states_lock();
-	struct spci_memory_share_state *share_state;
+	struct ffa_memory_share_state *share_state;
 
 	if (!get_share_state(share_states, handle, &share_state)) {
 		share_states_unlock(&share_states);
@@ -197,7 +197,7 @@
 	return true;
 }
 
-static void dump_memory_region(struct spci_memory_region *memory_region)
+static void dump_memory_region(struct ffa_memory_region *memory_region)
 {
 	uint32_t i;
 
@@ -238,13 +238,13 @@
 		if (share_states[i].share_func != 0) {
 			dlog("%d: ", i);
 			switch (share_states[i].share_func) {
-			case SPCI_MEM_SHARE_32:
+			case FFA_MEM_SHARE_32:
 				dlog("SHARE");
 				break;
-			case SPCI_MEM_LEND_32:
+			case FFA_MEM_LEND_32:
 				dlog("LEND");
 				break;
-			case SPCI_MEM_DONATE_32:
+			case FFA_MEM_DONATE_32:
 				dlog("DONATE");
 				break;
 			default:
@@ -265,32 +265,32 @@
 }
 
 /* TODO: Add device attributes: GRE, cacheability, shareability. */
-static inline uint32_t spci_memory_permissions_to_mode(
-	spci_memory_access_permissions_t permissions)
+static inline uint32_t ffa_memory_permissions_to_mode(
+	ffa_memory_access_permissions_t permissions)
 {
 	uint32_t mode = 0;
 
-	switch (spci_get_data_access_attr(permissions)) {
-	case SPCI_DATA_ACCESS_RO:
+	switch (ffa_get_data_access_attr(permissions)) {
+	case FFA_DATA_ACCESS_RO:
 		mode = MM_MODE_R;
 		break;
-	case SPCI_DATA_ACCESS_RW:
-	case SPCI_DATA_ACCESS_NOT_SPECIFIED:
+	case FFA_DATA_ACCESS_RW:
+	case FFA_DATA_ACCESS_NOT_SPECIFIED:
 		mode = MM_MODE_R | MM_MODE_W;
 		break;
-	case SPCI_DATA_ACCESS_RESERVED:
-		panic("Tried to convert SPCI_DATA_ACCESS_RESERVED.");
+	case FFA_DATA_ACCESS_RESERVED:
+		panic("Tried to convert FFA_DATA_ACCESS_RESERVED.");
 	}
 
-	switch (spci_get_instruction_access_attr(permissions)) {
-	case SPCI_INSTRUCTION_ACCESS_NX:
+	switch (ffa_get_instruction_access_attr(permissions)) {
+	case FFA_INSTRUCTION_ACCESS_NX:
 		break;
-	case SPCI_INSTRUCTION_ACCESS_X:
-	case SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED:
+	case FFA_INSTRUCTION_ACCESS_X:
+	case FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED:
 		mode |= MM_MODE_X;
 		break;
-	case SPCI_INSTRUCTION_ACCESS_RESERVED:
-		panic("Tried to convert SPCI_INSTRUCTION_ACCESS_RESVERVED.");
+	case FFA_INSTRUCTION_ACCESS_RESERVED:
+		panic("Tried to convert FFA_INSTRUCTION_ACCESS_RESVERVED.");
 	}
 
 	return mode;
@@ -299,11 +299,11 @@
 /**
  * Get the current mode in the stage-2 page table of the given vm of all the
  * pages in the given constituents, if they all have the same mode, or return
- * an appropriate SPCI error if not.
+ * an appropriate FF-A error if not.
  */
-static struct spci_value constituents_get_mode(
+static struct ffa_value constituents_get_mode(
 	struct vm_locked vm, uint32_t *orig_mode,
-	struct spci_memory_region_constituent *constituents,
+	struct ffa_memory_region_constituent *constituents,
 	uint32_t constituent_count)
 {
 	uint32_t i;
@@ -313,7 +313,7 @@
 		 * Fail if there are no constituents. Otherwise we would get an
 		 * uninitialised *orig_mode.
 		 */
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	for (i = 0; i < constituent_count; ++i) {
@@ -325,7 +325,7 @@
 		/* Fail if addresses are not page-aligned. */
 		if (!is_aligned(ipa_addr(begin), PAGE_SIZE) ||
 		    !is_aligned(ipa_addr(end), PAGE_SIZE)) {
-			return spci_error(SPCI_INVALID_PARAMETERS);
+			return ffa_error(FFA_INVALID_PARAMETERS);
 		}
 
 		/*
@@ -334,7 +334,7 @@
 		 */
 		if (!mm_vm_get_mode(&vm.vm->ptable, begin, end,
 				    &current_mode)) {
-			return spci_error(SPCI_DENIED);
+			return ffa_error(FFA_DENIED);
 		}
 
 		/*
@@ -343,11 +343,11 @@
 		if (i == 0) {
 			*orig_mode = current_mode;
 		} else if (current_mode != *orig_mode) {
-			return spci_error(SPCI_DENIED);
+			return ffa_error(FFA_DENIED);
 		}
 	}
 
-	return (struct spci_value){.func = SPCI_SUCCESS_32};
+	return (struct ffa_value){.func = FFA_SUCCESS_32};
 }
 
 /**
@@ -356,29 +356,29 @@
  * to the sending VM.
  *
  * Returns:
- *   1) SPCI_DENIED if a state transition was not found;
- *   2) SPCI_DENIED if the pages being shared do not have the same mode within
+ *   1) FFA_DENIED if a state transition was not found;
+ *   2) FFA_DENIED if the pages being shared do not have the same mode within
  *     the <from> VM;
- *   3) SPCI_INVALID_PARAMETERS if the beginning and end IPAs are not page
+ *   3) FFA_INVALID_PARAMETERS if the beginning and end IPAs are not page
  *     aligned;
- *   4) SPCI_INVALID_PARAMETERS if the requested share type was not handled.
- *  Or SPCI_SUCCESS on success.
+ *   4) FFA_INVALID_PARAMETERS if the requested share type was not handled.
+ *  Or FFA_SUCCESS on success.
  */
-static struct spci_value spci_send_check_transition(
+static struct ffa_value ffa_send_check_transition(
 	struct vm_locked from, uint32_t share_func,
-	spci_memory_access_permissions_t permissions, uint32_t *orig_from_mode,
-	struct spci_memory_region_constituent *constituents,
+	ffa_memory_access_permissions_t permissions, uint32_t *orig_from_mode,
+	struct ffa_memory_region_constituent *constituents,
 	uint32_t constituent_count, uint32_t *from_mode)
 {
 	const uint32_t state_mask =
 		MM_MODE_INVALID | MM_MODE_UNOWNED | MM_MODE_SHARED;
 	const uint32_t required_from_mode =
-		spci_memory_permissions_to_mode(permissions);
-	struct spci_value ret;
+		ffa_memory_permissions_to_mode(permissions);
+	struct ffa_value ret;
 
 	ret = constituents_get_mode(from, orig_from_mode, constituents,
 				    constituent_count);
-	if (ret.func != SPCI_SUCCESS_32) {
+	if (ret.func != FFA_SUCCESS_32) {
 		return ret;
 	}
 
@@ -386,7 +386,7 @@
 	if (*orig_from_mode & MM_MODE_D) {
 		dlog_verbose("Can't share device memory (mode is %#x).\n",
 			     *orig_from_mode);
-		return spci_error(SPCI_DENIED);
+		return ffa_error(FFA_DENIED);
 	}
 
 	/*
@@ -394,7 +394,7 @@
 	 * memory.
 	 */
 	if ((*orig_from_mode & state_mask) != 0) {
-		return spci_error(SPCI_DENIED);
+		return ffa_error(FFA_DENIED);
 	}
 
 	if ((*orig_from_mode & required_from_mode) != required_from_mode) {
@@ -402,44 +402,44 @@
 			"Sender tried to send memory with permissions which "
 			"required mode %#x but only had %#x itself.\n",
 			required_from_mode, *orig_from_mode);
-		return spci_error(SPCI_DENIED);
+		return ffa_error(FFA_DENIED);
 	}
 
 	/* Find the appropriate new mode. */
 	*from_mode = ~state_mask & *orig_from_mode;
 	switch (share_func) {
-	case SPCI_MEM_DONATE_32:
+	case FFA_MEM_DONATE_32:
 		*from_mode |= MM_MODE_INVALID | MM_MODE_UNOWNED;
 		break;
 
-	case SPCI_MEM_LEND_32:
+	case FFA_MEM_LEND_32:
 		*from_mode |= MM_MODE_INVALID;
 		break;
 
-	case SPCI_MEM_SHARE_32:
+	case FFA_MEM_SHARE_32:
 		*from_mode |= MM_MODE_SHARED;
 		break;
 
 	default:
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
-	return (struct spci_value){.func = SPCI_SUCCESS_32};
+	return (struct ffa_value){.func = FFA_SUCCESS_32};
 }
 
-static struct spci_value spci_relinquish_check_transition(
+static struct ffa_value ffa_relinquish_check_transition(
 	struct vm_locked from, uint32_t *orig_from_mode,
-	struct spci_memory_region_constituent *constituents,
+	struct ffa_memory_region_constituent *constituents,
 	uint32_t constituent_count, uint32_t *from_mode)
 {
 	const uint32_t state_mask =
 		MM_MODE_INVALID | MM_MODE_UNOWNED | MM_MODE_SHARED;
 	uint32_t orig_from_state;
-	struct spci_value ret;
+	struct ffa_value ret;
 
 	ret = constituents_get_mode(from, orig_from_mode, constituents,
 				    constituent_count);
-	if (ret.func != SPCI_SUCCESS_32) {
+	if (ret.func != FFA_SUCCESS_32) {
 		return ret;
 	}
 
@@ -447,7 +447,7 @@
 	if (*orig_from_mode & MM_MODE_D) {
 		dlog_verbose("Can't relinquish device memory (mode is %#x).\n",
 			     *orig_from_mode);
-		return spci_error(SPCI_DENIED);
+		return ffa_error(FFA_DENIED);
 	}
 
 	/*
@@ -461,13 +461,13 @@
 			"but "
 			"should be %#x).\n",
 			*orig_from_mode, orig_from_state, MM_MODE_UNOWNED);
-		return spci_error(SPCI_DENIED);
+		return ffa_error(FFA_DENIED);
 	}
 
 	/* Find the appropriate new mode. */
 	*from_mode = (~state_mask & *orig_from_mode) | MM_MODE_UNMAPPED_MASK;
 
-	return (struct spci_value){.func = SPCI_SUCCESS_32};
+	return (struct ffa_value){.func = FFA_SUCCESS_32};
 }
 
 /**
@@ -476,37 +476,37 @@
  * to the retrieving VM.
  *
  * Returns:
- *   1) SPCI_DENIED if a state transition was not found;
- *   2) SPCI_DENIED if the pages being shared do not have the same mode within
+ *   1) FFA_DENIED if a state transition was not found;
+ *   2) FFA_DENIED if the pages being shared do not have the same mode within
  *     the <to> VM;
- *   3) SPCI_INVALID_PARAMETERS if the beginning and end IPAs are not page
+ *   3) FFA_INVALID_PARAMETERS if the beginning and end IPAs are not page
  *     aligned;
- *   4) SPCI_INVALID_PARAMETERS if the requested share type was not handled.
- *  Or SPCI_SUCCESS on success.
+ *   4) FFA_INVALID_PARAMETERS if the requested share type was not handled.
+ *  Or FFA_SUCCESS on success.
  */
-static struct spci_value spci_retrieve_check_transition(
+static struct ffa_value ffa_retrieve_check_transition(
 	struct vm_locked to, uint32_t share_func,
-	struct spci_memory_region_constituent *constituents,
+	struct ffa_memory_region_constituent *constituents,
 	uint32_t constituent_count, uint32_t memory_to_attributes,
 	uint32_t *to_mode)
 {
 	uint32_t orig_to_mode;
-	struct spci_value ret;
+	struct ffa_value ret;
 
 	ret = constituents_get_mode(to, &orig_to_mode, constituents,
 				    constituent_count);
-	if (ret.func != SPCI_SUCCESS_32) {
+	if (ret.func != FFA_SUCCESS_32) {
 		return ret;
 	}
 
-	if (share_func == SPCI_MEM_RECLAIM_32) {
+	if (share_func == FFA_MEM_RECLAIM_32) {
 		const uint32_t state_mask =
 			MM_MODE_INVALID | MM_MODE_UNOWNED | MM_MODE_SHARED;
 		uint32_t orig_to_state = orig_to_mode & state_mask;
 
 		if (orig_to_state != MM_MODE_INVALID &&
 		    orig_to_state != MM_MODE_SHARED) {
-			return spci_error(SPCI_DENIED);
+			return ffa_error(FFA_DENIED);
 		}
 	} else {
 		/*
@@ -516,34 +516,34 @@
 		 */
 		if ((orig_to_mode & MM_MODE_UNMAPPED_MASK) !=
 		    MM_MODE_UNMAPPED_MASK) {
-			return spci_error(SPCI_DENIED);
+			return ffa_error(FFA_DENIED);
 		}
 	}
 
 	/* Find the appropriate new mode. */
 	*to_mode = memory_to_attributes;
 	switch (share_func) {
-	case SPCI_MEM_DONATE_32:
+	case FFA_MEM_DONATE_32:
 		*to_mode |= 0;
 		break;
 
-	case SPCI_MEM_LEND_32:
+	case FFA_MEM_LEND_32:
 		*to_mode |= MM_MODE_UNOWNED;
 		break;
 
-	case SPCI_MEM_SHARE_32:
+	case FFA_MEM_SHARE_32:
 		*to_mode |= MM_MODE_UNOWNED | MM_MODE_SHARED;
 		break;
 
-	case SPCI_MEM_RECLAIM_32:
+	case FFA_MEM_RECLAIM_32:
 		*to_mode |= 0;
 		break;
 
 	default:
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
-	return (struct spci_value){.func = SPCI_SUCCESS_32};
+	return (struct ffa_value){.func = FFA_SUCCESS_32};
 }
 
 /**
@@ -564,9 +564,9 @@
  * Returns true on success, or false if the update failed and no changes were
  * made to memory mappings.
  */
-static bool spci_region_group_identity_map(
+static bool ffa_region_group_identity_map(
 	struct vm_locked vm_locked,
-	struct spci_memory_region_constituent *constituents,
+	struct ffa_memory_region_constituent *constituents,
 	uint32_t constituent_count, int mode, struct mpool *ppool, bool commit)
 {
 	/* Iterate over the memory region constituents. */
@@ -633,8 +633,8 @@
  * Clears a region of physical memory by overwriting it with zeros. The data is
  * flushed from the cache so the memory has been cleared across the system.
  */
-static bool spci_clear_memory_constituents(
-	struct spci_memory_region_constituent *constituents,
+static bool ffa_clear_memory_constituents(
+	struct ffa_memory_region_constituent *constituents,
 	uint32_t constituent_count, struct mpool *page_pool)
 {
 	struct mpool local_page_pool;
@@ -685,33 +685,33 @@
  *
  * Returns:
  *  In case of error, one of the following values is returned:
- *   1) SPCI_INVALID_PARAMETERS - The endpoint provided parameters were
+ *   1) FFA_INVALID_PARAMETERS - The endpoint provided parameters were
  *     erroneous;
- *   2) SPCI_NO_MEMORY - Hafnium did not have sufficient memory to complete
+ *   2) FFA_NO_MEMORY - Hafnium did not have sufficient memory to complete
  *     the request.
- *   3) SPCI_DENIED - The sender doesn't have sufficient access to send the
+ *   3) FFA_DENIED - The sender doesn't have sufficient access to send the
  *     memory with the given permissions.
- *  Success is indicated by SPCI_SUCCESS.
+ *  Success is indicated by FFA_SUCCESS.
  */
-static struct spci_value spci_send_memory(
+static struct ffa_value ffa_send_memory(
 	struct vm_locked from_locked,
-	struct spci_memory_region_constituent *constituents,
+	struct ffa_memory_region_constituent *constituents,
 	uint32_t constituent_count, uint32_t share_func,
-	spci_memory_access_permissions_t permissions, struct mpool *page_pool,
+	ffa_memory_access_permissions_t permissions, struct mpool *page_pool,
 	bool clear)
 {
 	struct vm *from = from_locked.vm;
 	uint32_t orig_from_mode;
 	uint32_t from_mode;
 	struct mpool local_page_pool;
-	struct spci_value ret;
+	struct ffa_value ret;
 
 	/*
 	 * Make sure constituents are properly aligned to a 64-bit boundary. If
 	 * not we would get alignment faults trying to read (64-bit) values.
 	 */
 	if (!is_aligned(constituents, 8)) {
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/*
@@ -719,10 +719,10 @@
 	 * all constituents of a memory region being shared are in the same
 	 * state.
 	 */
-	ret = spci_send_check_transition(from_locked, share_func, permissions,
-					 &orig_from_mode, constituents,
-					 constituent_count, &from_mode);
-	if (ret.func != SPCI_SUCCESS_32) {
+	ret = ffa_send_check_transition(from_locked, share_func, permissions,
+					&orig_from_mode, constituents,
+					constituent_count, &from_mode);
+	if (ret.func != FFA_SUCCESS_32) {
 		return ret;
 	}
 
@@ -738,11 +738,11 @@
 	 * without committing, to make sure the entire operation will succeed
 	 * without exhausting the page pool.
 	 */
-	if (!spci_region_group_identity_map(from_locked, constituents,
-					    constituent_count, from_mode,
-					    page_pool, false)) {
+	if (!ffa_region_group_identity_map(from_locked, constituents,
+					   constituent_count, from_mode,
+					   page_pool, false)) {
 		/* TODO: partial defrag of failed range. */
-		ret = spci_error(SPCI_NO_MEMORY);
+		ret = ffa_error(FFA_NO_MEMORY);
 		goto out;
 	}
 
@@ -752,12 +752,12 @@
 	 * case that a whole block is being unmapped that was previously
 	 * partially mapped.
 	 */
-	CHECK(spci_region_group_identity_map(from_locked, constituents,
-					     constituent_count, from_mode,
-					     &local_page_pool, true));
+	CHECK(ffa_region_group_identity_map(from_locked, constituents,
+					    constituent_count, from_mode,
+					    &local_page_pool, true));
 
 	/* Clear the memory so no VM or device can see the previous contents. */
-	if (clear && !spci_clear_memory_constituents(
+	if (clear && !ffa_clear_memory_constituents(
 			     constituents, constituent_count, page_pool)) {
 		/*
 		 * On failure, roll back by returning memory to the sender. This
@@ -765,15 +765,15 @@
 		 * `local_page_pool` by the call above, but will never allocate
 		 * more pages than that so can never fail.
 		 */
-		CHECK(spci_region_group_identity_map(
+		CHECK(ffa_region_group_identity_map(
 			from_locked, constituents, constituent_count,
 			orig_from_mode, &local_page_pool, true));
 
-		ret = spci_error(SPCI_NO_MEMORY);
+		ret = ffa_error(FFA_NO_MEMORY);
 		goto out;
 	}
 
-	ret = (struct spci_value){.func = SPCI_SUCCESS_32};
+	ret = (struct ffa_value){.func = FFA_SUCCESS_32};
 
 out:
 	mpool_fini(&local_page_pool);
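
`ffa_send_memory`, `ffa_retrieve_memory` and `ffa_relinquish_memory` all follow the same two-pass shape: a dry run with commit=false that proves the whole mapping update can succeed, then a committing pass asserted with CHECK() because it can no longer fail. A condensed sketch of the pattern (simplified: the real code commits against a local pool so a later failed clear can be rolled back without allocating):

```
static struct ffa_value two_pass_map_sketch(
	struct vm_locked vm, struct ffa_memory_region_constituent *c,
	uint32_t count, int mode, struct mpool *page_pool)
{
	/* Pass 1: validate and pre-allocate page-table pages only. */
	if (!ffa_region_group_identity_map(vm, c, count, mode, page_pool,
					   false)) {
		return ffa_error(FFA_NO_MEMORY);
	}

	/* Pass 2: commit; cannot fail after a successful dry run. */
	CHECK(ffa_region_group_identity_map(vm, c, count, mode, page_pool,
					    true));

	return (struct ffa_value){.func = FFA_SUCCESS_32};
}
```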
@@ -794,22 +794,22 @@
  *
  * Returns:
  *  In case of error, one of the following values is returned:
- *   1) SPCI_INVALID_PARAMETERS - The endpoint provided parameters were
+ *   1) FFA_INVALID_PARAMETERS - The endpoint provided parameters were
  *     erroneous;
- *   2) SPCI_NO_MEMORY - Hafnium did not have sufficient memory to complete
+ *   2) FFA_NO_MEMORY - Hafnium did not have sufficient memory to complete
  *     the request.
- *  Success is indicated by SPCI_SUCCESS.
+ *  Success is indicated by FFA_SUCCESS.
  */
-static struct spci_value spci_retrieve_memory(
+static struct ffa_value ffa_retrieve_memory(
 	struct vm_locked to_locked,
-	struct spci_memory_region_constituent *constituents,
+	struct ffa_memory_region_constituent *constituents,
 	uint32_t constituent_count, uint32_t memory_to_attributes,
 	uint32_t share_func, bool clear, struct mpool *page_pool)
 {
 	struct vm *to = to_locked.vm;
 	uint32_t to_mode;
 	struct mpool local_page_pool;
-	struct spci_value ret;
+	struct ffa_value ret;
 
 	/*
 	 * Make sure constituents are properly aligned to a 32-bit boundary. If
@@ -817,7 +817,7 @@
 	 */
 	if (!is_aligned(constituents, 4)) {
 		dlog_verbose("Constituents not aligned.\n");
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/*
@@ -825,10 +825,10 @@
 	 * that all constituents of the memory region being retrieved are in the
 	 * same state.
 	 */
-	ret = spci_retrieve_check_transition(to_locked, share_func,
-					     constituents, constituent_count,
-					     memory_to_attributes, &to_mode);
-	if (ret.func != SPCI_SUCCESS_32) {
+	ret = ffa_retrieve_check_transition(to_locked, share_func, constituents,
+					    constituent_count,
+					    memory_to_attributes, &to_mode);
+	if (ret.func != FFA_SUCCESS_32) {
 		dlog_verbose("Invalid transition.\n");
 		return ret;
 	}
@@ -845,21 +845,21 @@
 	 * the recipient page tables without committing, to make sure the entire
 	 * operation will succeed without exhausting the page pool.
 	 */
-	if (!spci_region_group_identity_map(to_locked, constituents,
-					    constituent_count, to_mode,
-					    page_pool, false)) {
+	if (!ffa_region_group_identity_map(to_locked, constituents,
+					   constituent_count, to_mode,
+					   page_pool, false)) {
 		/* TODO: partial defrag of failed range. */
 		dlog_verbose(
 			"Insufficient memory to update recipient page "
 			"table.\n");
-		ret = spci_error(SPCI_NO_MEMORY);
+		ret = ffa_error(FFA_NO_MEMORY);
 		goto out;
 	}
 
 	/* Clear the memory so no VM or device can see the previous contents. */
-	if (clear && !spci_clear_memory_constituents(
+	if (clear && !ffa_clear_memory_constituents(
 			     constituents, constituent_count, page_pool)) {
-		ret = spci_error(SPCI_NO_MEMORY);
+		ret = ffa_error(FFA_NO_MEMORY);
 		goto out;
 	}
 
@@ -868,11 +868,11 @@
 	 * won't allocate because the transaction was already prepared above, so
 	 * it doesn't need to use the `local_page_pool`.
 	 */
-	CHECK(spci_region_group_identity_map(to_locked, constituents,
-					     constituent_count, to_mode,
-					     page_pool, true));
+	CHECK(ffa_region_group_identity_map(to_locked, constituents,
+					    constituent_count, to_mode,
+					    page_pool, true));
 
-	ret = (struct spci_value){.func = SPCI_SUCCESS_32};
+	ret = (struct ffa_value){.func = FFA_SUCCESS_32};
 
 out:
 	mpool_fini(&local_page_pool);
@@ -896,23 +896,23 @@
  *
  * Returns:
  *  In case of error, one of the following values is returned:
- *   1) SPCI_INVALID_PARAMETERS - The endpoint provided parameters were
+ *   1) FFA_INVALID_PARAMETERS - The endpoint provided parameters were
  *     erroneous;
- *   2) SPCI_NO_MEMORY - Hafnium did not have sufficient memory to complete
+ *   2) FFA_NO_MEMORY - Hafnium did not have sufficient memory to complete
  *     the request.
- *  Success is indicated by SPCI_SUCCESS.
+ *  Success is indicated by FFA_SUCCESS.
  */
-static struct spci_value spci_tee_reclaim_memory(
-	struct vm_locked to_locked, spci_memory_handle_t handle,
-	struct spci_memory_region_constituent *constituents,
+static struct ffa_value ffa_tee_reclaim_memory(
+	struct vm_locked to_locked, ffa_memory_handle_t handle,
+	struct ffa_memory_region_constituent *constituents,
 	uint32_t constituent_count, uint32_t memory_to_attributes, bool clear,
 	struct mpool *page_pool)
 {
 	struct vm *to = to_locked.vm;
 	uint32_t to_mode;
 	struct mpool local_page_pool;
-	struct spci_value ret;
-	spci_memory_region_flags_t tee_flags;
+	struct ffa_value ret;
+	ffa_memory_region_flags_t tee_flags;
 
 	/*
 	 * Make sure constituents are properly aligned to a 32-bit boundary. If
@@ -920,7 +920,7 @@
 	 */
 	if (!is_aligned(constituents, 4)) {
 		dlog_verbose("Constituents not aligned.\n");
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/*
@@ -928,10 +928,10 @@
 	 * that all constituents of the memory region being retrieved are in the
 	 * same state.
 	 */
-	ret = spci_retrieve_check_transition(to_locked, SPCI_MEM_RECLAIM_32,
-					     constituents, constituent_count,
-					     memory_to_attributes, &to_mode);
-	if (ret.func != SPCI_SUCCESS_32) {
+	ret = ffa_retrieve_check_transition(to_locked, FFA_MEM_RECLAIM_32,
+					    constituents, constituent_count,
+					    memory_to_attributes, &to_mode);
+	if (ret.func != FFA_SUCCESS_32) {
 		dlog_verbose("Invalid transition.\n");
 		return ret;
 	}
@@ -948,14 +948,14 @@
 	 * the recipient page tables without committing, to make sure the entire
 	 * operation will succeed without exhausting the page pool.
 	 */
-	if (!spci_region_group_identity_map(to_locked, constituents,
-					    constituent_count, to_mode,
-					    page_pool, false)) {
+	if (!ffa_region_group_identity_map(to_locked, constituents,
+					   constituent_count, to_mode,
+					   page_pool, false)) {
 		/* TODO: partial defrag of failed range. */
 		dlog_verbose(
 			"Insufficient memory to update recipient page "
 			"table.\n");
-		ret = spci_error(SPCI_NO_MEMORY);
+		ret = ffa_error(FFA_NO_MEMORY);
 		goto out;
 	}
 
@@ -964,18 +964,17 @@
 	 */
 	tee_flags = 0;
 	if (clear) {
-		tee_flags |= SPCI_MEMORY_REGION_FLAG_CLEAR;
+		tee_flags |= FFA_MEMORY_REGION_FLAG_CLEAR;
 	}
-	ret = arch_tee_call(
-		(struct spci_value){.func = SPCI_MEM_RECLAIM_32,
-				    .arg1 = (uint32_t)handle,
-				    .arg2 = (uint32_t)(handle >> 32),
-				    .arg3 = tee_flags});
+	ret = arch_tee_call((struct ffa_value){.func = FFA_MEM_RECLAIM_32,
+					       .arg1 = (uint32_t)handle,
+					       .arg2 = (uint32_t)(handle >> 32),
+					       .arg3 = tee_flags});
 
-	if (ret.func != SPCI_SUCCESS_32) {
+	if (ret.func != FFA_SUCCESS_32) {
 		dlog_verbose(
 			"Got %#x (%d) from EL3 in response to "
-			"SPCI_MEM_RECLAIM_32, expected SPCI_SUCCESS_32.\n",
+			"FFA_MEM_RECLAIM_32, expected FFA_SUCCESS_32.\n",
 			ret.func, ret.arg2);
 		goto out;
 	}
@@ -986,11 +985,11 @@
 	 * transaction was already prepared above, so it doesn't need to use the
 	 * `local_page_pool`.
 	 */
-	CHECK(spci_region_group_identity_map(to_locked, constituents,
-					     constituent_count, to_mode,
-					     page_pool, true));
+	CHECK(ffa_region_group_identity_map(to_locked, constituents,
+					    constituent_count, to_mode,
+					    page_pool, true));
 
-	ret = (struct spci_value){.func = SPCI_SUCCESS_32};
+	ret = (struct ffa_value){.func = FFA_SUCCESS_32};
 
 out:
 	mpool_fini(&local_page_pool);
@@ -1004,20 +1003,20 @@
 	return ret;
 }
 
-static struct spci_value spci_relinquish_memory(
+static struct ffa_value ffa_relinquish_memory(
 	struct vm_locked from_locked,
-	struct spci_memory_region_constituent *constituents,
+	struct ffa_memory_region_constituent *constituents,
 	uint32_t constituent_count, struct mpool *page_pool, bool clear)
 {
 	uint32_t orig_from_mode;
 	uint32_t from_mode;
 	struct mpool local_page_pool;
-	struct spci_value ret;
+	struct ffa_value ret;
 
-	ret = spci_relinquish_check_transition(from_locked, &orig_from_mode,
-					       constituents, constituent_count,
-					       &from_mode);
-	if (ret.func != SPCI_SUCCESS_32) {
+	ret = ffa_relinquish_check_transition(from_locked, &orig_from_mode,
+					      constituents, constituent_count,
+					      &from_mode);
+	if (ret.func != FFA_SUCCESS_32) {
 		dlog_verbose("Invalid transition.\n");
 		return ret;
 	}
@@ -1034,11 +1033,11 @@
 	 * without committing, to make sure the entire operation will succeed
 	 * without exhausting the page pool.
 	 */
-	if (!spci_region_group_identity_map(from_locked, constituents,
-					    constituent_count, from_mode,
-					    page_pool, false)) {
+	if (!ffa_region_group_identity_map(from_locked, constituents,
+					   constituent_count, from_mode,
+					   page_pool, false)) {
 		/* TODO: partial defrag of failed range. */
-		ret = spci_error(SPCI_NO_MEMORY);
+		ret = ffa_error(FFA_NO_MEMORY);
 		goto out;
 	}
 
@@ -1048,12 +1047,12 @@
 	 * case that a whole block is being unmapped that was previously
 	 * partially mapped.
 	 */
-	CHECK(spci_region_group_identity_map(from_locked, constituents,
-					     constituent_count, from_mode,
-					     &local_page_pool, true));
+	CHECK(ffa_region_group_identity_map(from_locked, constituents,
+					    constituent_count, from_mode,
+					    &local_page_pool, true));
 
 	/* Clear the memory so no VM or device can see the previous contents. */
-	if (clear && !spci_clear_memory_constituents(
+	if (clear && !ffa_clear_memory_constituents(
 			     constituents, constituent_count, page_pool)) {
 		/*
 		 * On failure, roll back by returning memory to the sender. This
@@ -1061,15 +1060,15 @@
 		 * `local_page_pool` by the call above, but will never allocate
 		 * more pages than that so can never fail.
 		 */
-		CHECK(spci_region_group_identity_map(
+		CHECK(ffa_region_group_identity_map(
 			from_locked, constituents, constituent_count,
 			orig_from_mode, &local_page_pool, true));
 
-		ret = spci_error(SPCI_NO_MEMORY);
+		ret = ffa_error(FFA_NO_MEMORY);
 		goto out;
 	}
 
-	ret = (struct spci_value){.func = SPCI_SUCCESS_32};
+	ret = (struct ffa_value){.func = FFA_SUCCESS_32};
 
 out:
 	mpool_fini(&local_page_pool);
@@ -1087,20 +1086,20 @@
  * Check that the given `memory_region` represents a valid memory send request
  * of the given `share_func` type, return the clear flag and permissions via the
  * respective output parameters, and update the permissions if necessary.
- * Returns SPCI_SUCCESS if the request was valid, or the relevant SPCI_ERROR if
+ * Returns FFA_SUCCESS if the request was valid, or the relevant FFA_ERROR if
  * not.
  */
-static struct spci_value spci_memory_send_validate(
+static struct ffa_value ffa_memory_send_validate(
 	struct vm *to, struct vm_locked from_locked,
-	struct spci_memory_region *memory_region, uint32_t memory_share_size,
+	struct ffa_memory_region *memory_region, uint32_t memory_share_size,
 	uint32_t share_func, bool *clear,
-	spci_memory_access_permissions_t *permissions)
+	ffa_memory_access_permissions_t *permissions)
 {
-	struct spci_composite_memory_region *composite;
+	struct ffa_composite_memory_region *composite;
 	uint32_t receivers_size;
 	uint32_t constituents_size;
-	enum spci_data_access data_access;
-	enum spci_instruction_access instruction_access;
+	enum ffa_data_access data_access;
+	enum ffa_instruction_access instruction_access;
 
 	CHECK(clear != NULL);
 	CHECK(permissions != NULL);
@@ -1108,97 +1107,97 @@
 	/* The sender must match the message sender. */
 	if (memory_region->sender != from_locked.vm->id) {
 		dlog_verbose("Invalid sender %d.\n", memory_region->sender);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/* We only support a single recipient. */
 	if (memory_region->receiver_count != 1) {
 		dlog_verbose("Multiple recipients not supported.\n");
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/*
 	 * Ensure that the composite header is within the memory bounds and
 	 * doesn't overlap the first part of the message.
 	 */
-	receivers_size = sizeof(struct spci_memory_access) *
+	receivers_size = sizeof(struct ffa_memory_access) *
 			 memory_region->receiver_count;
 	if (memory_region->receivers[0].composite_memory_region_offset <
-		    sizeof(struct spci_memory_region) + receivers_size ||
+		    sizeof(struct ffa_memory_region) + receivers_size ||
 	    memory_region->receivers[0].composite_memory_region_offset +
-			    sizeof(struct spci_composite_memory_region) >=
+			    sizeof(struct ffa_composite_memory_region) >=
 		    memory_share_size) {
 		dlog_verbose(
 			"Invalid composite memory region descriptor offset.\n");
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
-	composite = spci_memory_region_get_composite(memory_region, 0);
+	composite = ffa_memory_region_get_composite(memory_region, 0);
 
 	/*
 	 * Ensure the number of constituents is within the memory
 	 * bounds.
 	 */
-	constituents_size = sizeof(struct spci_memory_region_constituent) *
+	constituents_size = sizeof(struct ffa_memory_region_constituent) *
 			    composite->constituent_count;
 	if (memory_share_size !=
 	    memory_region->receivers[0].composite_memory_region_offset +
-		    sizeof(struct spci_composite_memory_region) +
+		    sizeof(struct ffa_composite_memory_region) +
 		    constituents_size) {
 		dlog_verbose("Invalid size %d or constituent offset %d.\n",
 			     memory_share_size,
 			     memory_region->receivers[0]
 				     .composite_memory_region_offset);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/* The recipient must match the message recipient. */
 	if (memory_region->receivers[0].receiver_permissions.receiver !=
 	    to->id) {
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
-	*clear = memory_region->flags & SPCI_MEMORY_REGION_FLAG_CLEAR;
+	*clear = memory_region->flags & FFA_MEMORY_REGION_FLAG_CLEAR;
 	/*
 	 * Clear is not allowed for memory sharing, as the sender still has
 	 * access to the memory.
 	 */
-	if (*clear && share_func == SPCI_MEM_SHARE_32) {
+	if (*clear && share_func == FFA_MEM_SHARE_32) {
 		dlog_verbose("Memory can't be cleared while being shared.\n");
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/* No other flags are allowed/supported here. */
-	if (memory_region->flags & ~SPCI_MEMORY_REGION_FLAG_CLEAR) {
+	if (memory_region->flags & ~FFA_MEMORY_REGION_FLAG_CLEAR) {
 		dlog_verbose("Invalid flags %#x.\n", memory_region->flags);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/* Check that the permissions are valid. */
 	*permissions =
 		memory_region->receivers[0].receiver_permissions.permissions;
-	data_access = spci_get_data_access_attr(*permissions);
-	instruction_access = spci_get_instruction_access_attr(*permissions);
-	if (data_access == SPCI_DATA_ACCESS_RESERVED ||
-	    instruction_access == SPCI_INSTRUCTION_ACCESS_RESERVED) {
+	data_access = ffa_get_data_access_attr(*permissions);
+	instruction_access = ffa_get_instruction_access_attr(*permissions);
+	if (data_access == FFA_DATA_ACCESS_RESERVED ||
+	    instruction_access == FFA_INSTRUCTION_ACCESS_RESERVED) {
 		dlog_verbose("Reserved value for receiver permissions %#x.\n",
 			     *permissions);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
-	if (instruction_access != SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED) {
+	if (instruction_access != FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED) {
 		dlog_verbose(
 			"Invalid instruction access permissions %#x for "
 			"sending memory.\n",
 			*permissions);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
-	if (share_func == SPCI_MEM_SHARE_32) {
-		if (data_access == SPCI_DATA_ACCESS_NOT_SPECIFIED) {
+	if (share_func == FFA_MEM_SHARE_32) {
+		if (data_access == FFA_DATA_ACCESS_NOT_SPECIFIED) {
 			dlog_verbose(
 				"Invalid data access permissions %#x for "
 				"sharing memory.\n",
 				*permissions);
-			return spci_error(SPCI_INVALID_PARAMETERS);
+			return ffa_error(FFA_INVALID_PARAMETERS);
 		}
 		/*
 		 * According to section 6.11.3 of the FF-A spec NX is required
@@ -1206,29 +1205,29 @@
 		 * sender) so set it in the copy that we store, ready to be
 		 * returned to the retriever.
 		 */
-		spci_set_instruction_access_attr(permissions,
-						 SPCI_INSTRUCTION_ACCESS_NX);
+		ffa_set_instruction_access_attr(permissions,
+						FFA_INSTRUCTION_ACCESS_NX);
 		memory_region->receivers[0].receiver_permissions.permissions =
 			*permissions;
 	}
-	if (share_func == SPCI_MEM_LEND_32 &&
-	    data_access == SPCI_DATA_ACCESS_NOT_SPECIFIED) {
+	if (share_func == FFA_MEM_LEND_32 &&
+	    data_access == FFA_DATA_ACCESS_NOT_SPECIFIED) {
 		dlog_verbose(
 			"Invalid data access permissions %#x for lending "
 			"memory.\n",
 			*permissions);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
-	if (share_func == SPCI_MEM_DONATE_32 &&
-	    data_access != SPCI_DATA_ACCESS_NOT_SPECIFIED) {
+	if (share_func == FFA_MEM_DONATE_32 &&
+	    data_access != FFA_DATA_ACCESS_NOT_SPECIFIED) {
 		dlog_verbose(
 			"Invalid data access permissions %#x for donating "
 			"memory.\n",
 			*permissions);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
-	return (struct spci_value){.func = SPCI_SUCCESS_32};
+	return (struct ffa_value){.func = FFA_SUCCESS_32};
 }
 
 /**
@@ -1245,43 +1244,42 @@
  * This function takes ownership of the `memory_region` passed in; it must not
  * be freed by the caller.
  */
-struct spci_value spci_memory_send(struct vm *to, struct vm_locked from_locked,
-				   struct spci_memory_region *memory_region,
-				   uint32_t memory_share_size,
-				   uint32_t share_func, struct mpool *page_pool)
+struct ffa_value ffa_memory_send(struct vm *to, struct vm_locked from_locked,
+				 struct ffa_memory_region *memory_region,
+				 uint32_t memory_share_size,
+				 uint32_t share_func, struct mpool *page_pool)
 {
-	struct spci_composite_memory_region *composite;
+	struct ffa_composite_memory_region *composite;
 	bool clear;
-	spci_memory_access_permissions_t permissions;
-	struct spci_value ret;
-	spci_memory_handle_t handle;
+	ffa_memory_access_permissions_t permissions;
+	struct ffa_value ret;
+	ffa_memory_handle_t handle;
 
 	/*
 	 * If there is an error validating the `memory_region` then we need to
 	 * free it because we own it but we won't be storing it in a share state
 	 * after all.
 	 */
-	ret = spci_memory_send_validate(to, from_locked, memory_region,
-					memory_share_size, share_func, &clear,
-					&permissions);
-	if (ret.func != SPCI_SUCCESS_32) {
+	ret = ffa_memory_send_validate(to, from_locked, memory_region,
+				       memory_share_size, share_func, &clear,
+				       &permissions);
+	if (ret.func != FFA_SUCCESS_32) {
 		mpool_free(page_pool, memory_region);
 		return ret;
 	}
 
 	/* Set flag for share function, ready to be retrieved later. */
 	switch (share_func) {
-	case SPCI_MEM_SHARE_32:
+	case FFA_MEM_SHARE_32:
 		memory_region->flags |=
-			SPCI_MEMORY_REGION_TRANSACTION_TYPE_SHARE;
+			FFA_MEMORY_REGION_TRANSACTION_TYPE_SHARE;
 		break;
-	case SPCI_MEM_LEND_32:
-		memory_region->flags |=
-			SPCI_MEMORY_REGION_TRANSACTION_TYPE_LEND;
+	case FFA_MEM_LEND_32:
+		memory_region->flags |= FFA_MEMORY_REGION_TRANSACTION_TYPE_LEND;
 		break;
-	case SPCI_MEM_DONATE_32:
+	case FFA_MEM_DONATE_32:
 		memory_region->flags |=
-			SPCI_MEMORY_REGION_TRANSACTION_TYPE_DONATE;
+			FFA_MEMORY_REGION_TRANSACTION_TYPE_DONATE;
 		break;
 	}
 
@@ -1295,17 +1293,17 @@
 	    !allocate_share_state(share_func, memory_region, &handle)) {
 		dlog_verbose("Failed to allocate share state.\n");
 		mpool_free(page_pool, memory_region);
-		return spci_error(SPCI_NO_MEMORY);
+		return ffa_error(FFA_NO_MEMORY);
 	}
 
 	dump_share_states();
 
 	/* Check that state is valid in sender page table and update. */
-	composite = spci_memory_region_get_composite(memory_region, 0);
-	ret = spci_send_memory(from_locked, composite->constituents,
-			       composite->constituent_count, share_func,
-			       permissions, page_pool, clear);
-	if (ret.func != SPCI_SUCCESS_32) {
+	composite = ffa_memory_region_get_composite(memory_region, 0);
+	ret = ffa_send_memory(from_locked, composite->constituents,
+			      composite->constituent_count, share_func,
+			      permissions, page_pool, clear);
+	if (ret.func != FFA_SUCCESS_32) {
 		if (to->id != HF_TEE_VM_ID) {
 			/* Free share state. */
 			bool freed = share_state_free_handle(handle, page_pool);
@@ -1318,63 +1316,64 @@
 
 	if (to->id == HF_TEE_VM_ID) {
 		/* No share state allocated here so no handle to return. */
-		return (struct spci_value){.func = SPCI_SUCCESS_32};
+		return (struct ffa_value){.func = FFA_SUCCESS_32};
 	}
 
-	return (struct spci_value){.func = SPCI_SUCCESS_32, .arg2 = handle};
+	return (struct ffa_value){.func = FFA_SUCCESS_32, .arg2 = handle};
 }
 
-struct spci_value spci_memory_retrieve(
-	struct vm_locked to_locked, struct spci_memory_region *retrieve_request,
-	uint32_t retrieve_request_size, struct mpool *page_pool)
+struct ffa_value ffa_memory_retrieve(struct vm_locked to_locked,
+				     struct ffa_memory_region *retrieve_request,
+				     uint32_t retrieve_request_size,
+				     struct mpool *page_pool)
 {
 	uint32_t expected_retrieve_request_size =
-		sizeof(struct spci_memory_region) +
+		sizeof(struct ffa_memory_region) +
 		retrieve_request->receiver_count *
-			sizeof(struct spci_memory_access);
-	spci_memory_handle_t handle = retrieve_request->handle;
-	spci_memory_region_flags_t transaction_type =
+			sizeof(struct ffa_memory_access);
+	ffa_memory_handle_t handle = retrieve_request->handle;
+	ffa_memory_region_flags_t transaction_type =
 		retrieve_request->flags &
-		SPCI_MEMORY_REGION_TRANSACTION_TYPE_MASK;
-	struct spci_memory_region *memory_region;
-	spci_memory_access_permissions_t sent_permissions;
-	enum spci_data_access sent_data_access;
-	enum spci_instruction_access sent_instruction_access;
-	spci_memory_access_permissions_t requested_permissions;
-	enum spci_data_access requested_data_access;
-	enum spci_instruction_access requested_instruction_access;
-	spci_memory_access_permissions_t permissions;
+		FFA_MEMORY_REGION_TRANSACTION_TYPE_MASK;
+	struct ffa_memory_region *memory_region;
+	ffa_memory_access_permissions_t sent_permissions;
+	enum ffa_data_access sent_data_access;
+	enum ffa_instruction_access sent_instruction_access;
+	ffa_memory_access_permissions_t requested_permissions;
+	enum ffa_data_access requested_data_access;
+	enum ffa_instruction_access requested_instruction_access;
+	ffa_memory_access_permissions_t permissions;
 	uint32_t memory_to_attributes;
-	struct spci_composite_memory_region *composite;
+	struct ffa_composite_memory_region *composite;
 	struct share_states_locked share_states;
-	struct spci_memory_share_state *share_state;
-	struct spci_value ret;
+	struct ffa_memory_share_state *share_state;
+	struct ffa_value ret;
 	uint32_t response_size;
 
 	dump_share_states();
 
 	if (retrieve_request_size != expected_retrieve_request_size) {
 		dlog_verbose(
-			"Invalid length for SPCI_MEM_RETRIEVE_REQ, expected %d "
+			"Invalid length for FFA_MEM_RETRIEVE_REQ, expected %d "
 			"but was %d.\n",
 			expected_retrieve_request_size, retrieve_request_size);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	if (retrieve_request->receiver_count != 1) {
 		dlog_verbose(
 			"Multi-way memory sharing not supported (got %d "
-			"receivers descriptors on SPCI_MEM_RETRIEVE_REQ, "
+			"receiver descriptors on FFA_MEM_RETRIEVE_REQ, "
 			"expected 1).\n",
 			retrieve_request->receiver_count);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	share_states = share_states_lock();
 	if (!get_share_state(share_states, handle, &share_state)) {
-		dlog_verbose("Invalid handle %#x for SPCI_MEM_RETRIEVE_REQ.\n",
+		dlog_verbose("Invalid handle %#x for FFA_MEM_RETRIEVE_REQ.\n",
 			     handle);
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
@@ -1386,36 +1385,36 @@
 	 * if it has been specified.
 	 */
 	if (transaction_type !=
-		    SPCI_MEMORY_REGION_TRANSACTION_TYPE_UNSPECIFIED &&
+		    FFA_MEMORY_REGION_TRANSACTION_TYPE_UNSPECIFIED &&
 	    transaction_type != (memory_region->flags &
-				 SPCI_MEMORY_REGION_TRANSACTION_TYPE_MASK)) {
+				 FFA_MEMORY_REGION_TRANSACTION_TYPE_MASK)) {
 		dlog_verbose(
 			"Incorrect transaction type %#x for "
-			"SPCI_MEM_RETRIEVE_REQ, expected %#x for handle %#x.\n",
+			"FFA_MEM_RETRIEVE_REQ, expected %#x for handle %#x.\n",
 			transaction_type,
 			memory_region->flags &
-				SPCI_MEMORY_REGION_TRANSACTION_TYPE_MASK,
+				FFA_MEMORY_REGION_TRANSACTION_TYPE_MASK,
 			handle);
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
 	if (retrieve_request->sender != memory_region->sender) {
 		dlog_verbose(
-			"Incorrect sender ID %d for SPCI_MEM_RETRIEVE_REQ, "
+			"Incorrect sender ID %d for FFA_MEM_RETRIEVE_REQ, "
 			"expected %d for handle %#x.\n",
 			retrieve_request->sender, memory_region->sender,
 			handle);
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
 	if (retrieve_request->tag != memory_region->tag) {
 		dlog_verbose(
-			"Incorrect tag %d for SPCI_MEM_RETRIEVE_REQ, expected "
+			"Incorrect tag %d for FFA_MEM_RETRIEVE_REQ, expected "
 			"%d for handle %#x.\n",
 			retrieve_request->tag, memory_region->tag, handle);
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
@@ -1423,10 +1422,10 @@
 	    to_locked.vm->id) {
 		dlog_verbose(
 			"Retrieve request receiver VM ID %d didn't match "
-			"caller of SPCI_MEM_RETRIEVE_REQ.\n",
+			"caller of FFA_MEM_RETRIEVE_REQ.\n",
 			retrieve_request->receivers[0]
 				.receiver_permissions.receiver);
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
@@ -1434,19 +1433,19 @@
 	    to_locked.vm->id) {
 		dlog_verbose(
 			"Incorrect receiver VM ID %d for "
-			"SPCI_MEM_RETRIEVE_REQ, expected %d for handle %#x.\n",
+			"FFA_MEM_RETRIEVE_REQ, expected %d for handle %#x.\n",
 			to_locked.vm->id,
 			memory_region->receivers[0]
 				.receiver_permissions.receiver,
 			handle);
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
 	if (share_state->retrieved[0]) {
 		dlog_verbose("Memory with handle %#x already retrieved.\n",
 			     handle);
-		ret = spci_error(SPCI_DENIED);
+		ret = ffa_error(FFA_DENIED);
 		goto out;
 	}
 
@@ -1458,7 +1457,7 @@
 			"%d).\n",
 			retrieve_request->receivers[0]
 				.composite_memory_region_offset);
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
@@ -1469,60 +1468,58 @@
 	/* TODO: Check attributes too. */
 	sent_permissions =
 		memory_region->receivers[0].receiver_permissions.permissions;
-	sent_data_access = spci_get_data_access_attr(sent_permissions);
+	sent_data_access = ffa_get_data_access_attr(sent_permissions);
 	sent_instruction_access =
-		spci_get_instruction_access_attr(sent_permissions);
+		ffa_get_instruction_access_attr(sent_permissions);
 	requested_permissions =
 		retrieve_request->receivers[0].receiver_permissions.permissions;
-	requested_data_access =
-		spci_get_data_access_attr(requested_permissions);
+	requested_data_access = ffa_get_data_access_attr(requested_permissions);
 	requested_instruction_access =
-		spci_get_instruction_access_attr(requested_permissions);
+		ffa_get_instruction_access_attr(requested_permissions);
 	permissions = 0;
 	switch (sent_data_access) {
-	case SPCI_DATA_ACCESS_NOT_SPECIFIED:
-	case SPCI_DATA_ACCESS_RW:
-		if (requested_data_access == SPCI_DATA_ACCESS_NOT_SPECIFIED ||
-		    requested_data_access == SPCI_DATA_ACCESS_RW) {
-			spci_set_data_access_attr(&permissions,
-						  SPCI_DATA_ACCESS_RW);
+	case FFA_DATA_ACCESS_NOT_SPECIFIED:
+	case FFA_DATA_ACCESS_RW:
+		if (requested_data_access == FFA_DATA_ACCESS_NOT_SPECIFIED ||
+		    requested_data_access == FFA_DATA_ACCESS_RW) {
+			ffa_set_data_access_attr(&permissions,
+						 FFA_DATA_ACCESS_RW);
 			break;
 		}
 		/* Intentional fall-through. */
-	case SPCI_DATA_ACCESS_RO:
-		if (requested_data_access == SPCI_DATA_ACCESS_NOT_SPECIFIED ||
-		    requested_data_access == SPCI_DATA_ACCESS_RO) {
-			spci_set_data_access_attr(&permissions,
-						  SPCI_DATA_ACCESS_RO);
+	case FFA_DATA_ACCESS_RO:
+		if (requested_data_access == FFA_DATA_ACCESS_NOT_SPECIFIED ||
+		    requested_data_access == FFA_DATA_ACCESS_RO) {
+			ffa_set_data_access_attr(&permissions,
+						 FFA_DATA_ACCESS_RO);
 			break;
 		}
 		dlog_verbose(
 			"Invalid data access requested; sender specified "
 			"permissions %#x but receiver requested %#x.\n",
 			sent_permissions, requested_permissions);
-		ret = spci_error(SPCI_DENIED);
+		ret = ffa_error(FFA_DENIED);
 		goto out;
-	case SPCI_DATA_ACCESS_RESERVED:
-		panic("Got unexpected SPCI_DATA_ACCESS_RESERVED. Should be "
+	case FFA_DATA_ACCESS_RESERVED:
+		panic("Got unexpected FFA_DATA_ACCESS_RESERVED. Should be "
 		      "checked before this point.");
 	}
 	switch (sent_instruction_access) {
-	case SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED:
-	case SPCI_INSTRUCTION_ACCESS_X:
+	case FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED:
+	case FFA_INSTRUCTION_ACCESS_X:
 		if (requested_instruction_access ==
-			    SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED ||
-		    requested_instruction_access == SPCI_INSTRUCTION_ACCESS_X) {
-			spci_set_instruction_access_attr(
-				&permissions, SPCI_INSTRUCTION_ACCESS_X);
+			    FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED ||
+		    requested_instruction_access == FFA_INSTRUCTION_ACCESS_X) {
+			ffa_set_instruction_access_attr(
+				&permissions, FFA_INSTRUCTION_ACCESS_X);
 			break;
 		}
-	case SPCI_INSTRUCTION_ACCESS_NX:
+	case FFA_INSTRUCTION_ACCESS_NX:
 		if (requested_instruction_access ==
-			    SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED ||
-		    requested_instruction_access ==
-			    SPCI_INSTRUCTION_ACCESS_NX) {
-			spci_set_instruction_access_attr(
-				&permissions, SPCI_INSTRUCTION_ACCESS_NX);
+			    FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED ||
+		    requested_instruction_access == FFA_INSTRUCTION_ACCESS_NX) {
+			ffa_set_instruction_access_attr(
+				&permissions, FFA_INSTRUCTION_ACCESS_NX);
 			break;
 		}
 		dlog_verbose(
@@ -1530,20 +1527,20 @@
 			"specified "
 			"permissions %#x but receiver requested %#x.\n",
 			sent_permissions, requested_permissions);
-		ret = spci_error(SPCI_DENIED);
+		ret = ffa_error(FFA_DENIED);
 		goto out;
-	case SPCI_INSTRUCTION_ACCESS_RESERVED:
-		panic("Got unexpected SPCI_INSTRUCTION_ACCESS_RESERVED. Should "
+	case FFA_INSTRUCTION_ACCESS_RESERVED:
+		panic("Got unexpected FFA_INSTRUCTION_ACCESS_RESERVED. Should "
 		      "be checked before this point.");
 	}
-	memory_to_attributes = spci_memory_permissions_to_mode(permissions);
+	memory_to_attributes = ffa_memory_permissions_to_mode(permissions);
 
-	composite = spci_memory_region_get_composite(memory_region, 0);
-	ret = spci_retrieve_memory(to_locked, composite->constituents,
-				   composite->constituent_count,
-				   memory_to_attributes,
-				   share_state->share_func, false, page_pool);
-	if (ret.func != SPCI_SUCCESS_32) {
+	composite = ffa_memory_region_get_composite(memory_region, 0);
+	ret = ffa_retrieve_memory(to_locked, composite->constituents,
+				  composite->constituent_count,
+				  memory_to_attributes, share_state->share_func,
+				  false, page_pool);
+	if (ret.func != FFA_SUCCESS_32) {
 		goto out;
 	}
 
@@ -1552,17 +1549,17 @@
 	 * must be done before the share_state is (possibly) freed.
 	 */
 	/* TODO: combine attributes from sender and request. */
-	response_size = spci_retrieved_memory_region_init(
+	response_size = ffa_retrieved_memory_region_init(
 		to_locked.vm->mailbox.recv, HF_MAILBOX_SIZE,
 		memory_region->sender, memory_region->attributes,
 		memory_region->flags, handle, to_locked.vm->id, permissions,
 		composite->constituents, composite->constituent_count);
 	to_locked.vm->mailbox.recv_size = response_size;
 	to_locked.vm->mailbox.recv_sender = HF_HYPERVISOR_VM_ID;
-	to_locked.vm->mailbox.recv_func = SPCI_MEM_RETRIEVE_RESP_32;
+	to_locked.vm->mailbox.recv_func = FFA_MEM_RETRIEVE_RESP_32;
 	to_locked.vm->mailbox.state = MAILBOX_STATE_READ;
 
-	if (share_state->share_func == SPCI_MEM_DONATE_32) {
+	if (share_state->share_func == FFA_MEM_DONATE_32) {
 		/*
 		 * Memory that has been donated can't be relinquished, so no
 		 * need to keep the share state around.
@@ -1573,9 +1570,9 @@
 		share_state->retrieved[0] = true;
 	}
 
-	ret = (struct spci_value){.func = SPCI_MEM_RETRIEVE_RESP_32,
-				  .arg1 = response_size,
-				  .arg2 = response_size};
+	ret = (struct ffa_value){.func = FFA_MEM_RETRIEVE_RESP_32,
+				 .arg1 = response_size,
+				 .arg2 = response_size};
 
 out:
 	share_states_unlock(&share_states);
@@ -1583,24 +1580,24 @@
 	return ret;
 }
 
-struct spci_value spci_memory_relinquish(
+struct ffa_value ffa_memory_relinquish(
 	struct vm_locked from_locked,
-	struct spci_mem_relinquish *relinquish_request, struct mpool *page_pool)
+	struct ffa_mem_relinquish *relinquish_request, struct mpool *page_pool)
 {
-	spci_memory_handle_t handle = relinquish_request->handle;
+	ffa_memory_handle_t handle = relinquish_request->handle;
 	struct share_states_locked share_states;
-	struct spci_memory_share_state *share_state;
-	struct spci_memory_region *memory_region;
+	struct ffa_memory_share_state *share_state;
+	struct ffa_memory_region *memory_region;
 	bool clear;
-	struct spci_composite_memory_region *composite;
-	struct spci_value ret;
+	struct ffa_composite_memory_region *composite;
+	struct ffa_value ret;
 
 	if (relinquish_request->endpoint_count != 1) {
 		dlog_verbose(
 			"Stream endpoints not supported (got %d endpoints on "
-			"SPCI_MEM_RELINQUISH, expected 1).\n",
+			"FFA_MEM_RELINQUISH, expected 1).\n",
 			relinquish_request->endpoint_count);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	if (relinquish_request->endpoints[0] != from_locked.vm->id) {
@@ -1608,16 +1605,16 @@
 			"VM ID %d in relinquish message doesn't match calling "
 			"VM ID %d.\n",
 			relinquish_request->endpoints[0], from_locked.vm->id);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	dump_share_states();
 
 	share_states = share_states_lock();
 	if (!get_share_state(share_states, handle, &share_state)) {
-		dlog_verbose("Invalid handle %#x for SPCI_MEM_RELINQUISH.\n",
+		dlog_verbose("Invalid handle %#x for FFA_MEM_RELINQUISH.\n",
 			     handle);
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
@@ -1632,7 +1629,7 @@
 			from_locked.vm->id, handle,
 			memory_region->receivers[0]
 				.receiver_permissions.receiver);
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
@@ -1641,28 +1638,28 @@
 			"Memory with handle %#x not yet retrieved, can't "
 			"relinquish.\n",
 			handle);
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
-	clear = relinquish_request->flags & SPCI_MEMORY_REGION_FLAG_CLEAR;
+	clear = relinquish_request->flags & FFA_MEMORY_REGION_FLAG_CLEAR;
 
 	/*
 	 * Clear is not allowed for memory that was shared, as the original
 	 * sender still has access to the memory.
 	 */
-	if (clear && share_state->share_func == SPCI_MEM_SHARE_32) {
+	if (clear && share_state->share_func == FFA_MEM_SHARE_32) {
 		dlog_verbose("Memory which was shared can't be cleared.\n");
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
-	composite = spci_memory_region_get_composite(memory_region, 0);
-	ret = spci_relinquish_memory(from_locked, composite->constituents,
-				     composite->constituent_count, page_pool,
-				     clear);
+	composite = ffa_memory_region_get_composite(memory_region, 0);
+	ret = ffa_relinquish_memory(from_locked, composite->constituents,
+				    composite->constituent_count, page_pool,
+				    clear);
 
-	if (ret.func == SPCI_SUCCESS_32) {
+	if (ret.func == FFA_SUCCESS_32) {
 		/*
 		 * Mark memory handle as not retrieved, so it can be reclaimed
 		 * (or retrieved again).
@@ -1681,24 +1678,24 @@
  * updates the page table of the reclaiming VM, and frees the internal state
  * associated with the handle.
  */
-struct spci_value spci_memory_reclaim(struct vm_locked to_locked,
-				      spci_memory_handle_t handle, bool clear,
-				      struct mpool *page_pool)
+struct ffa_value ffa_memory_reclaim(struct vm_locked to_locked,
+				    ffa_memory_handle_t handle, bool clear,
+				    struct mpool *page_pool)
 {
 	struct share_states_locked share_states;
-	struct spci_memory_share_state *share_state;
-	struct spci_memory_region *memory_region;
-	struct spci_composite_memory_region *composite;
+	struct ffa_memory_share_state *share_state;
+	struct ffa_memory_region *memory_region;
+	struct ffa_composite_memory_region *composite;
 	uint32_t memory_to_attributes = MM_MODE_R | MM_MODE_W | MM_MODE_X;
-	struct spci_value ret;
+	struct ffa_value ret;
 
 	dump_share_states();
 
 	share_states = share_states_lock();
 	if (!get_share_state(share_states, handle, &share_state)) {
-		dlog_verbose("Invalid handle %#x for SPCI_MEM_RECLAIM.\n",
+		dlog_verbose("Invalid handle %#x for FFA_MEM_RECLAIM.\n",
 			     handle);
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
@@ -1710,7 +1707,7 @@
 			"VM %d attempted to reclaim memory handle %#x "
 			"originally sent by VM %d.\n",
 			to_locked.vm->id, handle, memory_region->sender);
-		ret = spci_error(SPCI_INVALID_PARAMETERS);
+		ret = ffa_error(FFA_INVALID_PARAMETERS);
 		goto out;
 	}
 
@@ -1719,17 +1716,17 @@
 			"Tried to reclaim memory handle %#x that has not been "
 			"relinquished.\n",
 			handle);
-		ret = spci_error(SPCI_DENIED);
+		ret = ffa_error(FFA_DENIED);
 		goto out;
 	}
 
-	composite = spci_memory_region_get_composite(memory_region, 0);
-	ret = spci_retrieve_memory(to_locked, composite->constituents,
-				   composite->constituent_count,
-				   memory_to_attributes, SPCI_MEM_RECLAIM_32,
-				   clear, page_pool);
+	composite = ffa_memory_region_get_composite(memory_region, 0);
+	ret = ffa_retrieve_memory(to_locked, composite->constituents,
+				  composite->constituent_count,
+				  memory_to_attributes, FFA_MEM_RECLAIM_32,
+				  clear, page_pool);
 
-	if (ret.func == SPCI_SUCCESS_32) {
+	if (ret.func == FFA_SUCCESS_32) {
 		share_state_free(share_states, share_state, page_pool);
 		dlog_verbose("Freed share state after successful reclaim.\n");
 	}
@@ -1743,13 +1740,13 @@
  * Validates that the reclaim transition is allowed for the given memory region
  * and updates the page table of the reclaiming VM.
  */
-struct spci_value spci_memory_tee_reclaim(
-	struct vm_locked to_locked, spci_memory_handle_t handle,
-	struct spci_memory_region *memory_region, bool clear,
-	struct mpool *page_pool)
+struct ffa_value ffa_memory_tee_reclaim(struct vm_locked to_locked,
+					ffa_memory_handle_t handle,
+					struct ffa_memory_region *memory_region,
+					bool clear, struct mpool *page_pool)
 {
 	uint32_t memory_to_attributes = MM_MODE_R | MM_MODE_W | MM_MODE_X;
-	struct spci_composite_memory_region *composite;
+	struct ffa_composite_memory_region *composite;
 
 	if (memory_region->receiver_count != 1) {
 		/* Only one receiver supported by Hafnium for now. */
@@ -1757,7 +1754,7 @@
 			"Multiple recipients not supported (got %d, expected "
 			"1).\n",
 			memory_region->receiver_count);
-		return spci_error(SPCI_NOT_SUPPORTED);
+		return ffa_error(FFA_NOT_SUPPORTED);
 	}
 
 	if (memory_region->handle != handle) {
@@ -1765,7 +1762,7 @@
 			"Got memory region handle %#x from TEE but requested "
 			"handle %#x.\n",
 			memory_region->handle, handle);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
 	/* The original sender must match the caller. */
@@ -1774,17 +1771,17 @@
 			"VM %d attempted to reclaim memory handle %#x "
 			"originally sent by VM %d.\n",
 			to_locked.vm->id, handle, memory_region->sender);
-		return spci_error(SPCI_INVALID_PARAMETERS);
+		return ffa_error(FFA_INVALID_PARAMETERS);
 	}
 
-	composite = spci_memory_region_get_composite(memory_region, 0);
+	composite = ffa_memory_region_get_composite(memory_region, 0);
 
 	/*
 	 * Forward the request to the TEE and then map the memory back into the
 	 * caller's stage-2 page table.
 	 */
-	return spci_tee_reclaim_memory(to_locked, handle,
-				       composite->constituents,
-				       composite->constituent_count,
-				       memory_to_attributes, clear, page_pool);
+	return ffa_tee_reclaim_memory(to_locked, handle,
+				      composite->constituents,
+				      composite->constituent_count,
+				      memory_to_attributes, clear, page_pool);
 }
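
For reference, the renamed entry points keep FF-A's transaction lifecycle: a sender calls `ffa_memory_send` for share/lend/donate, the receiver calls `ffa_memory_retrieve`, and shared or lent memory is later relinquished and reclaimed. A minimal dispatch sketch (the `handle_mem_op` wrapper is hypothetical and not part of this change; signatures match the code above):

	struct ffa_value handle_mem_op(uint32_t func, struct vm *to,
				       struct vm_locked from_locked,
				       struct ffa_memory_region *region,
				       uint32_t size, struct mpool *pool)
	{
		switch (func) {
		case FFA_MEM_SHARE_32:
		case FFA_MEM_LEND_32:
		case FFA_MEM_DONATE_32:
			/* Takes ownership of `region`; don't free it here. */
			return ffa_memory_send(to, from_locked, region, size,
					       func, pool);
		default:
			return ffa_error(FFA_NOT_SUPPORTED);
		}
	}
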
diff --git a/src/init.c b/src/init.c
index d224d49..5652897 100644
--- a/src/init.c
+++ b/src/init.c
@@ -160,7 +160,7 @@
 	/* Enable TLB invalidation for VM page table updates. */
 	mm_vm_enable_invalidation();
 
-	if (manifest.spci_tee_enabled) {
+	if (manifest.ffa_tee_enabled) {
 		/* Set up message buffers for TEE dispatcher. */
 		arch_tee_init();
 	}
diff --git a/src/load.c b/src/load.c
index 0040201..259726e 100644
--- a/src/load.c
+++ b/src/load.c
@@ -411,7 +411,7 @@
 
 	for (i = 0; i < manifest->vm_count; ++i) {
 		const struct manifest_vm *manifest_vm = &manifest->vm[i];
-		spci_vm_id_t vm_id = HF_VM_ID_OFFSET + i;
+		ffa_vm_id_t vm_id = HF_VM_ID_OFFSET + i;
 		uint64_t mem_size;
 		paddr_t secondary_mem_begin;
 		paddr_t secondary_mem_end;
diff --git a/src/manifest.c b/src/manifest.c
index 9b80106..460494c 100644
--- a/src/manifest.c
+++ b/src/manifest.c
@@ -41,7 +41,7 @@
 static_assert(HF_TEE_VM_ID > VM_ID_MAX,
 	      "TrustZone VM ID clashes with normal VM range.");
 
-static inline size_t count_digits(spci_vm_id_t vm_id)
+static inline size_t count_digits(ffa_vm_id_t vm_id)
 {
 	size_t digits = 0;
 
@@ -56,7 +56,7 @@
  * Generates a string with the two letters "vm" followed by an integer.
  * Assumes `buf` is of size VM_NAME_BUF_SIZE.
  */
-static void generate_vm_node_name(struct string *str, spci_vm_id_t vm_id)
+static void generate_vm_node_name(struct string *str, ffa_vm_id_t vm_id)
 {
 	static const char *digits = "0123456789";
 	size_t vm_id_digits = count_digits(vm_id);
@@ -216,7 +216,7 @@
 
 static enum manifest_return_code parse_vm(const struct fdt_node *node,
 					  struct manifest_vm *vm,
-					  spci_vm_id_t vm_id)
+					  ffa_vm_id_t vm_id)
 {
 	struct uint32list_iter smcs;
 	size_t idx;
@@ -280,11 +280,11 @@
 		return MANIFEST_ERROR_NOT_COMPATIBLE;
 	}
 
-	TRY(read_bool(&hyp_node, "spci_tee", &manifest->spci_tee_enabled));
+	TRY(read_bool(&hyp_node, "ffa_tee", &manifest->ffa_tee_enabled));
 
 	/* Iterate over reserved VM IDs and check no such nodes exist. */
 	for (i = 0; i < HF_VM_ID_OFFSET; i++) {
-		spci_vm_id_t vm_id = (spci_vm_id_t)i;
+		ffa_vm_id_t vm_id = (ffa_vm_id_t)i;
 		struct fdt_node vm_node = hyp_node;
 
 		generate_vm_node_name(&vm_name, vm_id);
@@ -295,7 +295,7 @@
 
 	/* Iterate over VM nodes until we find one that does not exist. */
 	for (i = 0; i <= MAX_VMS; ++i) {
-		spci_vm_id_t vm_id = HF_VM_ID_OFFSET + i;
+		ffa_vm_id_t vm_id = HF_VM_ID_OFFSET + i;
 		struct fdt_node vm_node = hyp_node;
 
 		generate_vm_node_name(&vm_name, vm_id);
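
For illustration, the helper above just appends the decimal ID to "vm"; a hedged sketch of a call (the ID value is hypothetical):

	struct string vm_name;

	generate_vm_node_name(&vm_name, (ffa_vm_id_t)5); /* yields "vm5" */
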
diff --git a/src/vcpu.c b/src/vcpu.c
index b77689d..c98f2ad 100644
--- a/src/vcpu.c
+++ b/src/vcpu.c
@@ -64,7 +64,7 @@
 	vcpu.vcpu->state = VCPU_STATE_READY;
 }
 
-spci_vcpu_index_t vcpu_index(const struct vcpu *vcpu)
+ffa_vcpu_index_t vcpu_index(const struct vcpu *vcpu)
 {
 	size_t index = vcpu - vcpu->vm->vcpus;
 
diff --git a/src/vm.c b/src/vm.c
index 7237dfb..0f2b9c8 100644
--- a/src/vm.c
+++ b/src/vm.c
@@ -19,18 +19,18 @@
 #include "hf/api.h"
 #include "hf/check.h"
 #include "hf/cpu.h"
+#include "hf/ffa.h"
 #include "hf/layout.h"
 #include "hf/plat/iommu.h"
-#include "hf/spci.h"
 #include "hf/std.h"
 
 #include "vmapi/hf/call.h"
 
 static struct vm vms[MAX_VMS];
 static struct vm tee_vm;
-static spci_vm_count_t vm_count;
+static ffa_vm_count_t vm_count;
 
-struct vm *vm_init(spci_vm_id_t id, spci_vcpu_count_t vcpu_count,
+struct vm *vm_init(ffa_vm_id_t id, ffa_vcpu_count_t vcpu_count,
 		   struct mpool *ppool)
 {
 	uint32_t i;
@@ -76,7 +76,7 @@
 	return vm;
 }
 
-bool vm_init_next(spci_vcpu_count_t vcpu_count, struct mpool *ppool,
+bool vm_init_next(ffa_vcpu_count_t vcpu_count, struct mpool *ppool,
 		  struct vm **new_vm)
 {
 	if (vm_count >= MAX_VMS) {
@@ -93,12 +93,12 @@
 	return true;
 }
 
-spci_vm_count_t vm_get_count(void)
+ffa_vm_count_t vm_get_count(void)
 {
 	return vm_count;
 }
 
-struct vm *vm_find(spci_vm_id_t id)
+struct vm *vm_find(ffa_vm_id_t id)
 {
 	uint16_t index;
 
@@ -167,7 +167,7 @@
  * Get the vCPU with the given index from the given VM.
  * This assumes the index is valid, i.e. less than vm->vcpu_count.
  */
-struct vcpu *vm_get_vcpu(struct vm *vm, spci_vcpu_index_t vcpu_index)
+struct vcpu *vm_get_vcpu(struct vm *vm, ffa_vcpu_index_t vcpu_index)
 {
 	CHECK(vcpu_index < vm->vcpu_count);
 	return &vm->vcpus[vcpu_index];
@@ -176,7 +176,7 @@
 /**
  * Gets `vm`'s wait entry for waiting on the `for_vm`.
  */
-struct wait_entry *vm_get_wait_entry(struct vm *vm, spci_vm_id_t for_vm)
+struct wait_entry *vm_get_wait_entry(struct vm *vm, ffa_vm_id_t for_vm)
 {
 	uint16_t index;
 
@@ -190,7 +190,7 @@
 /**
  * Gets the ID of the VM which the given VM's wait entry is for.
  */
-spci_vm_id_t vm_id_for_wait_entry(struct vm *vm, struct wait_entry *entry)
+ffa_vm_id_t vm_id_for_wait_entry(struct vm *vm, struct wait_entry *entry)
 {
 	uint16_t index = entry - vm->wait_entries;
 
diff --git a/test/arch/aarch64/tee_test.c b/test/arch/aarch64/tee_test.c
index 1b07a3c..b369adb 100644
--- a/test/arch/aarch64/tee_test.c
+++ b/test/arch/aarch64/tee_test.c
@@ -19,32 +19,32 @@
 #include <stdint.h>
 
 #include "hf/addr.h"
-#include "hf/spci.h"
+#include "hf/ffa.h"
 #include "hf/types.h"
 
 #include "smc.h"
 #include "test/hftest.h"
 
-alignas(SPCI_PAGE_SIZE) static uint8_t tee_send_buffer[HF_MAILBOX_SIZE];
-alignas(SPCI_PAGE_SIZE) static uint8_t tee_recv_buffer[HF_MAILBOX_SIZE];
+alignas(FFA_PAGE_SIZE) static uint8_t tee_send_buffer[HF_MAILBOX_SIZE];
+alignas(FFA_PAGE_SIZE) static uint8_t tee_recv_buffer[HF_MAILBOX_SIZE];
 
 /**
- * Make sure SPCI_RXTX_MAP to EL3 works.
+ * Make sure FFA_RXTX_MAP to EL3 works.
  */
 TEST(arch_tee, init)
 {
-	struct spci_value ret = arch_tee_call((struct spci_value){
-		.func = SPCI_RXTX_MAP_64,
+	struct ffa_value ret = arch_tee_call((struct ffa_value){
+		.func = FFA_RXTX_MAP_64,
 		.arg1 = pa_addr(pa_from_va(va_from_ptr(tee_recv_buffer))),
 		.arg2 = pa_addr(pa_from_va(va_from_ptr(tee_send_buffer))),
-		.arg3 = HF_MAILBOX_SIZE / SPCI_PAGE_SIZE});
+		.arg3 = HF_MAILBOX_SIZE / FFA_PAGE_SIZE});
 	uint32_t func = ret.func & ~SMCCC_CONVENTION_MASK;
 
 	/*
 	 * TODO(qwandor): Remove this UNKNOWN check once we have a build of TF-A
-	 * which supports SPCI memory sharing.
+	 * which supports FF-A memory sharing.
 	 */
 	if (ret.func != SMCCC_ERROR_UNKNOWN) {
-		ASSERT_EQ(func, SPCI_SUCCESS_32);
+		ASSERT_EQ(func, FFA_SUCCESS_32);
 	}
 }
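
The `arg3` page count above is expressed in FFA_PAGE_SIZE units, so with Hafnium's 4 KiB mailbox it evaluates to 1. A sketch of a compile-time guard for that assumption (not part of this change):

	static_assert(HF_MAILBOX_SIZE % FFA_PAGE_SIZE == 0,
		      "Mailbox must span a whole number of FF-A pages");
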
diff --git a/test/hftest/service.c b/test/hftest/service.c
index afbf6f6..2765cfb 100644
--- a/test/hftest/service.c
+++ b/test/hftest/service.c
@@ -17,9 +17,9 @@
 #include <stdalign.h>
 #include <stdint.h>
 
+#include "hf/ffa.h"
 #include "hf/memiter.h"
 #include "hf/mm.h"
-#include "hf/spci.h"
 #include "hf/std.h"
 
 #include "vmapi/hf/call.h"
@@ -82,7 +82,7 @@
 	struct memiter args;
 	hftest_test_fn service;
 	struct hftest_context *ctx;
-	struct spci_value ret;
+	struct ffa_value ret;
 
 	/*
 	 * Initialize the stage-1 MMU and identity-map the entire address space.
@@ -98,14 +98,14 @@
 	/* Prepare the context. */
 
 	/* Set up the mailbox. */
-	spci_rxtx_map(send_addr, recv_addr);
+	ffa_rxtx_map(send_addr, recv_addr);
 
 	/* Receive the name of the service to run. */
-	ret = spci_msg_wait();
-	ASSERT_EQ(ret.func, SPCI_MSG_SEND_32);
-	memiter_init(&args, recv, spci_msg_send_size(ret));
+	ret = ffa_msg_wait();
+	ASSERT_EQ(ret.func, FFA_MSG_SEND_32);
+	memiter_init(&args, recv, ffa_msg_send_size(ret));
 	service = find_service(&args);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/* Check the service was found. */
 	if (service == NULL) {
@@ -126,7 +126,7 @@
 	ctx->memory_size = memory_size;
 
 	/* Pause so the next time cycles are given the service will be run. */
-	spci_yield();
+	ffa_yield();
 
 	/* Let the service run. */
 	service();
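
The `service` run here is one defined with `TEST_SERVICE`, as in the gicv3 services further down. A minimal hypothetical service, for orientation only:

	TEST_SERVICE(do_nothing)
	{
		/* Hand control straight back to the primary. */
		ffa_yield();
	}
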
diff --git a/test/inc/test/hftest_impl.h b/test/inc/test/hftest_impl.h
index 55bb88c..1806258 100644
--- a/test/inc/test/hftest_impl.h
+++ b/test/inc/test/hftest_impl.h
@@ -19,10 +19,10 @@
 #include <stdnoreturn.h>
 
 #include "hf/fdt.h"
-#include "hf/spci.h"
+#include "hf/ffa.h"
 #include "hf/std.h"
 
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 #define HFTEST_MAX_TESTS 50
 
@@ -284,7 +284,7 @@
  */
 #define HFTEST_SERVICE_SELECT(vm_id, service, send_buffer)                    \
 	do {                                                                  \
-		struct spci_value run_res;                                    \
+		struct ffa_value run_res;                                     \
 		uint32_t msg_length =                                         \
 			strnlen_s(service, SERVICE_NAME_MAX_LENGTH);          \
                                                                               \
@@ -292,19 +292,19 @@
 		 * Let the service configure its mailbox and wait for a       \
 		 * message.                                                   \
 		 */                                                           \
-		run_res = spci_run(vm_id, 0);                                 \
-		ASSERT_EQ(run_res.func, SPCI_MSG_WAIT_32);                    \
-		ASSERT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);               \
+		run_res = ffa_run(vm_id, 0);                                  \
+		ASSERT_EQ(run_res.func, FFA_MSG_WAIT_32);                     \
+		ASSERT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);                \
                                                                               \
 		/* Send the selected service to run and let it be handled. */ \
-		memcpy_s(send_buffer, SPCI_MSG_PAYLOAD_MAX, service,          \
+		memcpy_s(send_buffer, FFA_MSG_PAYLOAD_MAX, service,           \
 			 msg_length);                                         \
                                                                               \
-		ASSERT_EQ(spci_msg_send(hf_vm_get_id(), vm_id, msg_length, 0) \
+		ASSERT_EQ(ffa_msg_send(hf_vm_get_id(), vm_id, msg_length, 0)  \
 				  .func,                                      \
-			  SPCI_SUCCESS_32);                                   \
-		run_res = spci_run(vm_id, 0);                                 \
-		ASSERT_EQ(run_res.func, SPCI_YIELD_32);                       \
+			  FFA_SUCCESS_32);                                    \
+		run_res = ffa_run(vm_id, 0);                                  \
+		ASSERT_EQ(run_res.func, FFA_YIELD_32);                        \
 	} while (0)
 
 #define HFTEST_SERVICE_SEND_BUFFER() hftest_get_context()->send
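
Driving a service from a test thus becomes a short handshake, e.g. (as used by the tests below):

	struct ffa_value run_res;

	SERVICE_SELECT(SERVICE_VM1, "timer", send_buffer);
	run_res = ffa_run(SERVICE_VM1, 0); /* service body now runs */
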
diff --git a/test/inc/test/vmapi/exception_handler.h b/test/inc/test/vmapi/exception_handler.h
index 47de143..1a15bdc 100644
--- a/test/inc/test/vmapi/exception_handler.h
+++ b/test/inc/test/vmapi/exception_handler.h
@@ -16,7 +16,7 @@
 
 #pragma once
 
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 bool exception_handler_skip_instruction(void);
 
@@ -33,5 +33,5 @@
 void exception_handler_send_exception_count(void);
 
 int exception_handler_receive_exception_count(
-	const struct spci_value *send_res,
-	const struct spci_memory_region *recv_buf);
+	const struct ffa_value *send_res,
+	const struct ffa_memory_region *recv_buf);
diff --git a/test/inc/test/vmapi/ffa.h b/test/inc/test/vmapi/ffa.h
new file mode 100644
index 0000000..c01008e
--- /dev/null
+++ b/test/inc/test/vmapi/ffa.h
@@ -0,0 +1,49 @@
+/*
+ * Copyright 2018 The Hafnium Authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include "vmapi/hf/ffa.h"
+
+#define EXPECT_FFA_ERROR(value, ffa_error)       \
+	do {                                     \
+		struct ffa_value v = (value);    \
+		EXPECT_EQ(v.func, FFA_ERROR_32); \
+		EXPECT_EQ(v.arg2, (ffa_error));  \
+	} while (0)
+
+struct mailbox_buffers {
+	void *send;
+	void *recv;
+};
+
+struct mailbox_buffers set_up_mailbox(void);
+ffa_memory_handle_t send_memory_and_retrieve_request(
+	uint32_t share_func, void *tx_buffer, ffa_vm_id_t sender,
+	ffa_vm_id_t recipient,
+	struct ffa_memory_region_constituent constituents[],
+	uint32_t constituent_count, ffa_memory_region_flags_t flags,
+	enum ffa_data_access send_data_access,
+	enum ffa_data_access retrieve_data_access,
+	enum ffa_instruction_access send_instruction_access,
+	enum ffa_instruction_access retrieve_instruction_access);
+ffa_vm_id_t retrieve_memory_from_message(void *recv_buf, void *send_buf,
+					 struct ffa_value msg_ret,
+					 ffa_memory_handle_t *handle);
+ffa_vm_id_t retrieve_memory_from_message_expect_fail(void *recv_buf,
+						     void *send_buf,
+						     struct ffa_value msg_ret,
+						     int32_t expected_error);
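
Typical use of the renamed helpers, mirroring call sites later in this change:

	struct mailbox_buffers mb = set_up_mailbox();

	(void)mb; /* buffers are used by later sends/receives */
	/* Nothing has been received yet, so releasing RX is denied. */
	EXPECT_FFA_ERROR(ffa_rx_release(), FFA_DENIED);
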
diff --git a/test/inc/test/vmapi/spci.h b/test/inc/test/vmapi/spci.h
deleted file mode 100644
index bab7f81..0000000
--- a/test/inc/test/vmapi/spci.h
+++ /dev/null
@@ -1,49 +0,0 @@
-/*
- * Copyright 2018 The Hafnium Authors.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     https://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include "vmapi/hf/spci.h"
-
-#define EXPECT_SPCI_ERROR(value, spci_error)      \
-	do {                                      \
-		struct spci_value v = (value);    \
-		EXPECT_EQ(v.func, SPCI_ERROR_32); \
-		EXPECT_EQ(v.arg2, (spci_error));  \
-	} while (0)
-
-struct mailbox_buffers {
-	void *send;
-	void *recv;
-};
-
-struct mailbox_buffers set_up_mailbox(void);
-spci_memory_handle_t send_memory_and_retrieve_request(
-	uint32_t share_func, void *tx_buffer, spci_vm_id_t sender,
-	spci_vm_id_t recipient,
-	struct spci_memory_region_constituent constituents[],
-	uint32_t constituent_count, spci_memory_region_flags_t flags,
-	enum spci_data_access send_data_access,
-	enum spci_data_access retrieve_data_access,
-	enum spci_instruction_access send_instruction_access,
-	enum spci_instruction_access retrieve_instruction_access);
-spci_vm_id_t retrieve_memory_from_message(void *recv_buf, void *send_buf,
-					  struct spci_value msg_ret,
-					  spci_memory_handle_t *handle);
-spci_vm_id_t retrieve_memory_from_message_expect_fail(void *recv_buf,
-						      void *send_buf,
-						      struct spci_value msg_ret,
-						      int32_t expected_error);
diff --git a/test/linux/hftest_socket.c b/test/linux/hftest_socket.c
index b25691f..0c34c21 100644
--- a/test/linux/hftest_socket.c
+++ b/test/linux/hftest_socket.c
@@ -17,15 +17,15 @@
 #include <stdalign.h>
 #include <stdint.h>
 
+#include "hf/ffa.h"
 #include "hf/memiter.h"
-#include "hf/spci.h"
 #include "hf/std.h"
 
 #include "vmapi/hf/call.h"
 #include "vmapi/hf/transport.h"
 
 #include "test/hftest.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 alignas(4096) uint8_t kstack[4096];
 
@@ -67,9 +67,9 @@
 	/* Prepare the context. */
 
 	/* Set up the mailbox. */
-	spci_rxtx_map(send_addr, recv_addr);
+	ffa_rxtx_map(send_addr, recv_addr);
 
-	EXPECT_SPCI_ERROR(spci_rx_release(), SPCI_DENIED);
+	EXPECT_FFA_ERROR(ffa_rx_release(), FFA_DENIED);
 
 	/* Clean the context. */
 	ctx = hftest_get_context();
@@ -80,29 +80,29 @@
 	ctx->memory_size = memory_size;
 
 	for (;;) {
-		struct spci_value ret;
+		struct ffa_value ret;
 
 		/* Receive the packet. */
-		ret = spci_msg_wait();
-		EXPECT_EQ(ret.func, SPCI_MSG_SEND_32);
-		EXPECT_LE(spci_msg_send_size(ret), SPCI_MSG_PAYLOAD_MAX);
+		ret = ffa_msg_wait();
+		EXPECT_EQ(ret.func, FFA_MSG_SEND_32);
+		EXPECT_LE(ffa_msg_send_size(ret), FFA_MSG_PAYLOAD_MAX);
 
 		/* Echo the message back to the sender. */
-		memcpy_s(send, SPCI_MSG_PAYLOAD_MAX, recv,
-			 spci_msg_send_size(ret));
+		memcpy_s(send, FFA_MSG_PAYLOAD_MAX, recv,
+			 ffa_msg_send_size(ret));
 
 		/* Swap the socket's source and destination ports */
 		struct hf_msg_hdr *hdr = (struct hf_msg_hdr *)send;
 		swap(&(hdr->src_port), &(hdr->dst_port));
 
 		/* Swap the destination and source ids. */
-		spci_vm_id_t dst_id = spci_msg_send_sender(ret);
-		spci_vm_id_t src_id = spci_msg_send_receiver(ret);
+		ffa_vm_id_t dst_id = ffa_msg_send_sender(ret);
+		ffa_vm_id_t src_id = ffa_msg_send_receiver(ret);
 
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-		EXPECT_EQ(spci_msg_send(src_id, dst_id, spci_msg_send_size(ret),
-					0)
-				  .func,
-			  SPCI_SUCCESS_32);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+		EXPECT_EQ(
+			ffa_msg_send(src_id, dst_id, ffa_msg_send_size(ret), 0)
+				.func,
+			FFA_SUCCESS_32);
 	}
 }
diff --git a/test/linux/linux.c b/test/linux/linux.c
index 2124f3f..32d7db7 100644
--- a/test/linux/linux.c
+++ b/test/linux/linux.c
@@ -83,7 +83,7 @@
  */
 TEST(linux, socket_echo_hafnium)
 {
-	spci_vm_id_t vm_id = HF_VM_ID_OFFSET + 1;
+	ffa_vm_id_t vm_id = HF_VM_ID_OFFSET + 1;
 	int port = 10;
 	int socket_id;
 	struct hf_sockaddr addr;
diff --git a/test/vmapi/arch/aarch64/gicv3/busy_secondary.c b/test/vmapi/arch/aarch64/gicv3/busy_secondary.c
index 8e39a2e..0fdb0db 100644
--- a/test/vmapi/arch/aarch64/gicv3/busy_secondary.c
+++ b/test/vmapi/arch/aarch64/gicv3/busy_secondary.c
@@ -18,7 +18,7 @@
 #include "hf/arch/vm/interrupts_gicv3.h"
 
 #include "hf/dlog.h"
-#include "hf/spci.h"
+#include "hf/ffa.h"
 #include "hf/std.h"
 
 #include "vmapi/hf/call.h"
@@ -38,15 +38,15 @@
 SET_UP(busy_secondary)
 {
 	system_setup();
-	EXPECT_EQ(spci_rxtx_map(send_page_addr, recv_page_addr).func,
-		  SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rxtx_map(send_page_addr, recv_page_addr).func,
+		  FFA_SUCCESS_32);
 	SERVICE_SELECT(SERVICE_VM1, "busy", send_buffer);
 }
 
 TEST(busy_secondary, virtual_timer)
 {
 	const char message[] = "loop";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 
 	interrupt_enable(VIRTUAL_TIMER_IRQ, true);
 	interrupt_set_priority(VIRTUAL_TIMER_IRQ, 0x80);
@@ -63,9 +63,9 @@
 	arch_irq_enable();
 
 	/* Let the secondary get started and wait for our message. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Check that no interrupts are active or pending to start with. */
 	EXPECT_EQ(io_read32_array(GICD_ISPENDR, 0), 0);
@@ -80,13 +80,13 @@
 
 	/* Let secondary start looping. */
 	dlog("Telling secondary to loop.\n");
-	memcpy_s(send_buffer, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
+	memcpy_s(send_buffer, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
 	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
+		ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
 			.func,
-		SPCI_SUCCESS_32);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_INTERRUPT_32);
+		FFA_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_INTERRUPT_32);
 
 	dlog("Waiting for interrupt\n");
 	while (last_interrupt_id == 0) {
@@ -112,7 +112,7 @@
 TEST(busy_secondary, physical_timer)
 {
 	const char message[] = "loop";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 
 	interrupt_enable(PHYSICAL_TIMER_IRQ, true);
 	interrupt_set_priority(PHYSICAL_TIMER_IRQ, 0x80);
@@ -121,9 +121,9 @@
 	arch_irq_enable();
 
 	/* Let the secondary get started and wait for our message. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Check that no interrupts are active or pending to start with. */
 	EXPECT_EQ(io_read32_array(GICD_ISPENDR, 0), 0);
@@ -138,13 +138,13 @@
 
 	/* Let secondary start looping. */
 	dlog("Telling secondary to loop.\n");
-	memcpy_s(send_buffer, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
+	memcpy_s(send_buffer, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
 	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
+		ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
 			.func,
-		SPCI_SUCCESS_32);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_INTERRUPT_32);
+		FFA_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_INTERRUPT_32);
 
 	dlog("Waiting for interrupt\n");
 	while (last_interrupt_id == 0) {
diff --git a/test/vmapi/arch/aarch64/gicv3/gicv3.c b/test/vmapi/arch/aarch64/gicv3/gicv3.c
index 13c4565..e1fbf24 100644
--- a/test/vmapi/arch/aarch64/gicv3/gicv3.c
+++ b/test/vmapi/arch/aarch64/gicv3/gicv3.c
@@ -87,14 +87,14 @@
  */
 TEST(system, icc_ctlr_access_trapped_secondary)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 
-	EXPECT_EQ(spci_rxtx_map(send_page_addr, recv_page_addr).func,
-		  SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rxtx_map(send_page_addr, recv_page_addr).func,
+		  FFA_SUCCESS_32);
 	SERVICE_SELECT(SERVICE_VM1, "access_systemreg_ctlr", send_buffer);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 }
 
 /*
@@ -103,12 +103,12 @@
  */
 TEST(system, icc_sre_write_trapped_secondary)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 
-	EXPECT_EQ(spci_rxtx_map(send_page_addr, recv_page_addr).func,
-		  SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rxtx_map(send_page_addr, recv_page_addr).func,
+		  FFA_SUCCESS_32);
 	SERVICE_SELECT(SERVICE_VM1, "write_systemreg_sre", send_buffer);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 }
diff --git a/test/vmapi/arch/aarch64/gicv3/services/busy.c b/test/vmapi/arch/aarch64/gicv3/services/busy.c
index fece9f6..154ce67 100644
--- a/test/vmapi/arch/aarch64/gicv3/services/busy.c
+++ b/test/vmapi/arch/aarch64/gicv3/services/busy.c
@@ -31,7 +31,7 @@
 {
 	dlog("Secondary waiting for message...\n");
 	mailbox_receive_retry();
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 	dlog("Secondary received message, looping forever.\n");
 	for (;;) {
 	}
diff --git a/test/vmapi/arch/aarch64/gicv3/services/common.c b/test/vmapi/arch/aarch64/gicv3/services/common.c
index ba18d39..7644d8b 100644
--- a/test/vmapi/arch/aarch64/gicv3/services/common.c
+++ b/test/vmapi/arch/aarch64/gicv3/services/common.c
@@ -24,14 +24,14 @@
  * Try to receive a message from the mailbox, blocking if necessary, and
  * retrying if interrupted.
  */
-struct spci_value mailbox_receive_retry(void)
+struct ffa_value mailbox_receive_retry(void)
 {
-	struct spci_value received;
+	struct ffa_value received;
 
 	do {
-		received = spci_msg_wait();
-	} while (received.func == SPCI_ERROR_32 &&
-		 received.arg2 == SPCI_INTERRUPTED);
+		received = ffa_msg_wait();
+	} while (received.func == FFA_ERROR_32 &&
+		 received.arg2 == FFA_INTERRUPTED);
 
 	return received;
 }
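
Retrying matters because `ffa_msg_wait` can return early with `FFA_ERROR`/`FFA_INTERRUPTED` while an interrupt is pending; callers such as the timer service below then simply do, for example:

	struct ffa_value ret = mailbox_receive_retry();

	EXPECT_EQ(ret.func, FFA_MSG_SEND_32);
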
diff --git a/test/vmapi/arch/aarch64/gicv3/services/common.h b/test/vmapi/arch/aarch64/gicv3/services/common.h
index ced8baf..36ecd37 100644
--- a/test/vmapi/arch/aarch64/gicv3/services/common.h
+++ b/test/vmapi/arch/aarch64/gicv3/services/common.h
@@ -14,6 +14,6 @@
  * limitations under the License.
  */
 
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
-struct spci_value mailbox_receive_retry(void);
+struct ffa_value mailbox_receive_retry(void);
diff --git a/test/vmapi/arch/aarch64/gicv3/services/systemreg.c b/test/vmapi/arch/aarch64/gicv3/services/systemreg.c
index 214300b..bbe9ec8 100644
--- a/test/vmapi/arch/aarch64/gicv3/services/systemreg.c
+++ b/test/vmapi/arch/aarch64/gicv3/services/systemreg.c
@@ -45,7 +45,7 @@
 	EXPECT_EQ(exception_handler_get_num(), 2);
 
 	/* Yield after catching the exceptions. */
-	spci_yield();
+	ffa_yield();
 }
 
 TEST_SERVICE(write_systemreg_sre)
@@ -69,5 +69,5 @@
 		ASSERT_EQ(read_msr(ICC_SRE_EL1), 0x7);
 	}
 
-	spci_yield();
+	ffa_yield();
 }
diff --git a/test/vmapi/arch/aarch64/gicv3/services/timer.c b/test/vmapi/arch/aarch64/gicv3/services/timer.c
index f62aa80..c66898c 100644
--- a/test/vmapi/arch/aarch64/gicv3/services/timer.c
+++ b/test/vmapi/arch/aarch64/gicv3/services/timer.c
@@ -28,7 +28,7 @@
 
 #include "common.h"
 #include "test/hftest.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 /*
  * Secondary VM that sets timers in response to messages, and sends messages
@@ -48,8 +48,8 @@
 	}
 	buffer[8] = '0' + interrupt_id / 10;
 	buffer[9] = '0' + interrupt_id % 10;
-	memcpy_s(SERVICE_SEND_BUFFER(), SPCI_MSG_PAYLOAD_MAX, buffer, size);
-	spci_msg_send(hf_vm_get_id(), HF_PRIMARY_VM_ID, size, 0);
+	memcpy_s(SERVICE_SEND_BUFFER(), FFA_MSG_PAYLOAD_MAX, buffer, size);
+	ffa_msg_send(hf_vm_get_id(), HF_PRIMARY_VM_ID, size, 0);
 	dlog("secondary IRQ %d ended\n", interrupt_id);
 	event_send_local();
 }
@@ -67,13 +67,12 @@
 		bool receive;
 		bool disable_interrupts;
 		uint32_t ticks;
-		struct spci_value ret = mailbox_receive_retry();
+		struct ffa_value ret = mailbox_receive_retry();
 
-		if (spci_msg_send_sender(ret) != HF_PRIMARY_VM_ID ||
-		    spci_msg_send_size(ret) != sizeof("**** xxxxxxx")) {
+		if (ffa_msg_send_sender(ret) != HF_PRIMARY_VM_ID ||
+		    ffa_msg_send_size(ret) != sizeof("**** xxxxxxx")) {
 			FAIL("Got unexpected message from VM %d, size %d.\n",
-			     spci_msg_send_sender(ret),
-			     spci_msg_send_size(ret));
+			     ffa_msg_send_sender(ret), ffa_msg_send_size(ret));
 		}
 
 		/*
@@ -90,7 +89,7 @@
 			(message[9] - '0') * 100 + (message[10] - '0') * 10 +
 			(message[11] - '0');
 
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 		dlog("Starting timer for %d ticks.\n", ticks);
 
@@ -110,9 +109,9 @@
 				event_wait();
 			}
 		} else if (receive) {
-			struct spci_value res = spci_msg_wait();
+			struct ffa_value res = ffa_msg_wait();
 
-			EXPECT_SPCI_ERROR(res, SPCI_INTERRUPTED);
+			EXPECT_FFA_ERROR(res, FFA_INTERRUPTED);
 		} else {
 			/* Busy wait until the timer fires. */
 			while (!timer_fired) {
diff --git a/test/vmapi/arch/aarch64/gicv3/timer_secondary.c b/test/vmapi/arch/aarch64/gicv3/timer_secondary.c
index 1e5c107..563a20b 100644
--- a/test/vmapi/arch/aarch64/gicv3/timer_secondary.c
+++ b/test/vmapi/arch/aarch64/gicv3/timer_secondary.c
@@ -19,18 +19,18 @@
 
 #include "hf/abi.h"
 #include "hf/call.h"
-#include "hf/spci.h"
+#include "hf/ffa.h"
 
 #include "gicv3.h"
 #include "test/hftest.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 SET_UP(timer_secondary)
 {
 	system_setup();
 
-	EXPECT_EQ(spci_rxtx_map(send_page_addr, recv_page_addr).func,
-		  SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rxtx_map(send_page_addr, recv_page_addr).func,
+		  FFA_SUCCESS_32);
 	SERVICE_SELECT(SERVICE_VM1, "timer", send_buffer);
 
 	interrupt_enable(VIRTUAL_TIMER_IRQ, true);
@@ -41,51 +41,51 @@
 
 TEAR_DOWN(timer_secondary)
 {
-	EXPECT_SPCI_ERROR(spci_rx_release(), SPCI_DENIED);
+	EXPECT_FFA_ERROR(ffa_rx_release(), FFA_DENIED);
 }
 
 static void timer_busywait_secondary()
 {
 	const char message[] = "loop 0099999";
 	const char expected_response[] = "Got IRQ 03.";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 
 	/* Let the secondary get started and wait for our message. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Send the message for the secondary to set a timer. */
-	memcpy_s(send_buffer, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
+	memcpy_s(send_buffer, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
 	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
+		ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
 			.func,
-		SPCI_SUCCESS_32);
+		FFA_SUCCESS_32);
 
 	/*
 	 * Let the secondary handle the message and set the timer. It will loop
 	 * until the hardware interrupt fires, at which point we'll get and
-	 * ignore the interrupt, and see a SPCI_YIELD return code.
+	 * ignore the interrupt, and see an FFA_YIELD return code.
 	 */
 	dlog("running secondary after sending timer message.\n");
 	last_interrupt_id = 0;
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_INTERRUPT_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_INTERRUPT_32);
 	dlog("secondary yielded after receiving timer message\n");
 	EXPECT_EQ(last_interrupt_id, VIRTUAL_TIMER_IRQ);
 
 	/*
-	 * Now that the timer has expired, when we call spci_run again Hafnium
+	 * Now that the timer has expired, when we call ffa_run again Hafnium
 	 * should inject a virtual timer interrupt into the secondary, which
 	 * should get it and respond.
 	 */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(expected_response));
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(expected_response));
 	EXPECT_EQ(memcmp(recv_buffer, expected_response,
 			 sizeof(expected_response)),
 		  0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 }
 
 /**
@@ -106,40 +106,38 @@
 {
 	const char expected_response[] = "Got IRQ 03.";
 	size_t message_length = strnlen_s(message, 64) + 1;
-	struct spci_value run_res;
+	struct ffa_value run_res;
 
 	/* Let the secondary get started and wait for our message. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Send the message for the secondary to set a timer. */
-	memcpy_s(send_buffer, SPCI_MSG_PAYLOAD_MAX, message, message_length);
-	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, message_length, 0)
-			.func,
-		SPCI_SUCCESS_32);
+	memcpy_s(send_buffer, FFA_MSG_PAYLOAD_MAX, message, message_length);
+	EXPECT_EQ(ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, message_length, 0)
+			  .func,
+		  FFA_SUCCESS_32);
 
 	/* Let the secondary handle the message and set the timer. */
 	last_interrupt_id = 0;
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 
 	/*
 	 * There's a race for whether the secondary manages to block and switch
 	 * to the primary before the hardware timer fires, so we need to handle
 	 * three cases:
-	 * 1. The (hardware) timer fires immediately, we get SPCI_INTERRUPT.
+	 * 1. The (hardware) timer fires immediately, we get FFA_INTERRUPT.
 	 * 2. The secondary blocks and switches back, we get expected_code until
 	 *   the timer fires.
 	 *  2a. The timer then expires while we are in the primary, so Hafnium
-	 *   can inject the timer interrupt the next time we call spci_run.
+	 *   can inject the timer interrupt the next time we call ffa_run.
 	 *  2b. The timer fires while the secondary is running, so we get
-	 *   SPCI_INTERRUPT as in case 1.
+	 *   FFA_INTERRUPT as in case 1.
 	 */
 
-	if (run_res.func != expected_code &&
-	    run_res.func != SPCI_INTERRUPT_32) {
-		FAIL("Expected run to return SPCI_INTERRUPT or %#x, but "
+	if (run_res.func != expected_code && run_res.func != FFA_INTERRUPT_32) {
+		FAIL("Expected run to return FFA_INTERRUPT or %#x, but "
 		     "got %#x",
 		     expected_code, run_res.func);
 	}
@@ -151,37 +149,37 @@
 		 * switch to the primary before the timer fires.
 		 */
 		dlog("Primary looping until timer fires\n");
-		if (expected_code == HF_SPCI_RUN_WAIT_FOR_INTERRUPT ||
-		    expected_code == SPCI_MSG_WAIT_32) {
-			EXPECT_NE(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+		if (expected_code == HF_FFA_RUN_WAIT_FOR_INTERRUPT ||
+		    expected_code == FFA_MSG_WAIT_32) {
+			EXPECT_NE(run_res.arg2, FFA_SLEEP_INDEFINITE);
 			dlog("%d ns remaining\n", run_res.arg2);
 		}
-		run_res = spci_run(SERVICE_VM1, 0);
+		run_res = ffa_run(SERVICE_VM1, 0);
 	}
 	dlog("Primary done looping\n");
 
-	if (run_res.func == SPCI_INTERRUPT_32) {
+	if (run_res.func == FFA_INTERRUPT_32) {
 		/*
 		 * This case happens if the (hardware) timer fires before the
 		 * secondary blocks and switches to the primary, either
 		 * immediately after setting the timer or during the loop above.
 		 * Then we get the interrupt to the primary, ignore it, and see
-		 * a SPCI_INTERRUPT code from the spci_run call, so we should
+		 * an FFA_INTERRUPT code from the ffa_run call, so we should
 		 * call it again for the timer interrupt to be injected
 		 * automatically by Hafnium.
 		 */
 		EXPECT_EQ(last_interrupt_id, VIRTUAL_TIMER_IRQ);
 		dlog("Preempted by timer interrupt, running again\n");
-		run_res = spci_run(SERVICE_VM1, 0);
+		run_res = ffa_run(SERVICE_VM1, 0);
 	}
 
 	/* Once we wake it up it should get the timer interrupt and respond. */
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(expected_response));
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(expected_response));
 	EXPECT_EQ(memcmp(recv_buffer, expected_response,
 			 sizeof(expected_response)),
 		  0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 }
 
 /**
@@ -197,8 +195,8 @@
 	 * Run the test twice in a row, to check that the state doesn't get
 	 * messed up.
 	 */
-	timer_secondary("WFI  0000001", HF_SPCI_RUN_WAIT_FOR_INTERRUPT);
-	timer_secondary("WFI  0000001", HF_SPCI_RUN_WAIT_FOR_INTERRUPT);
+	timer_secondary("WFI  0000001", HF_FFA_RUN_WAIT_FOR_INTERRUPT);
+	timer_secondary("WFI  0000001", HF_FFA_RUN_WAIT_FOR_INTERRUPT);
 }
 
 TEST(timer_secondary, wfi_long)
@@ -207,8 +205,8 @@
 	 * Run the test twice in a row, to check that the state doesn't get
 	 * messed up.
 	 */
-	timer_secondary("WFI  0099999", HF_SPCI_RUN_WAIT_FOR_INTERRUPT);
-	timer_secondary("WFI  0099999", HF_SPCI_RUN_WAIT_FOR_INTERRUPT);
+	timer_secondary("WFI  0099999", HF_FFA_RUN_WAIT_FOR_INTERRUPT);
+	timer_secondary("WFI  0099999", HF_FFA_RUN_WAIT_FOR_INTERRUPT);
 }
 
 TEST(timer_secondary, wfe_short)
@@ -217,8 +215,8 @@
 	 * Run the test twice in a row, to check that the state doesn't get
 	 * messed up.
 	 */
-	timer_secondary("WFE  0000001", SPCI_YIELD_32);
-	timer_secondary("WFE  0000001", SPCI_YIELD_32);
+	timer_secondary("WFE  0000001", FFA_YIELD_32);
+	timer_secondary("WFE  0000001", FFA_YIELD_32);
 }
 
 TEST(timer_secondary, wfe_long)
@@ -227,8 +225,8 @@
 	 * Run the test twice in a row, to check that the state doesn't get
 	 * messed up.
 	 */
-	timer_secondary("WFE  0099999", SPCI_YIELD_32);
-	timer_secondary("WFE  0099999", SPCI_YIELD_32);
+	timer_secondary("WFE  0099999", FFA_YIELD_32);
+	timer_secondary("WFE  0099999", FFA_YIELD_32);
 }
 
 TEST(timer_secondary, receive_short)
@@ -237,8 +235,8 @@
 	 * Run the test twice in a row, to check that the state doesn't get
 	 * messed up.
 	 */
-	timer_secondary("RECV 0000001", SPCI_MSG_WAIT_32);
-	timer_secondary("RECV 0000001", SPCI_MSG_WAIT_32);
+	timer_secondary("RECV 0000001", FFA_MSG_WAIT_32);
+	timer_secondary("RECV 0000001", FFA_MSG_WAIT_32);
 }
 
 TEST(timer_secondary, receive_long)
@@ -247,8 +245,8 @@
 	 * Run the test twice in a row, to check that the state doesn't get
 	 * messed up.
 	 */
-	timer_secondary("RECV 0099999", SPCI_MSG_WAIT_32);
-	timer_secondary("RECV 0099999", SPCI_MSG_WAIT_32);
+	timer_secondary("RECV 0099999", FFA_MSG_WAIT_32);
+	timer_secondary("RECV 0099999", FFA_MSG_WAIT_32);
 }
 
 /**
@@ -258,27 +256,26 @@
 {
 	const char message[] = "WFI  9999999";
 	size_t message_length = strnlen_s(message, 64) + 1;
-	struct spci_value run_res;
+	struct ffa_value run_res;
 
 	/* Let the secondary get started and wait for our message. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Send the message for the secondary to set a timer. */
-	memcpy_s(send_buffer, SPCI_MSG_PAYLOAD_MAX, message, message_length);
-	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, message_length, 0)
-			.func,
-		SPCI_SUCCESS_32);
+	memcpy_s(send_buffer, FFA_MSG_PAYLOAD_MAX, message, message_length);
+	EXPECT_EQ(ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, message_length, 0)
+			  .func,
+		  FFA_SUCCESS_32);
 
 	/*
 	 * Let the secondary handle the message and set the timer.
 	 */
 	last_interrupt_id = 0;
 	for (int i = 0; i < 20; ++i) {
-		run_res = spci_run(SERVICE_VM1, 0);
-		EXPECT_EQ(run_res.func, HF_SPCI_RUN_WAIT_FOR_INTERRUPT);
+		run_res = ffa_run(SERVICE_VM1, 0);
+		EXPECT_EQ(run_res.func, HF_FFA_RUN_WAIT_FOR_INTERRUPT);
 		dlog("Primary looping until timer fires; %d ns "
 		     "remaining\n",
 		     run_res.arg2);
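
The race described in the comment above boils down to a small control-flow pattern that every `timer_secondary` variant repeats. As a minimal sketch, reusing the names from the tests (illustrative only, not part of the patch):

```
struct ffa_value run_res = ffa_run(SERVICE_VM1, 0);

/* Case 2: the secondary blocked and switched back; keep running it
 * until the timer fires. */
while (run_res.func == expected_code) {
	run_res = ffa_run(SERVICE_VM1, 0);
}

/* Cases 1 and 2b: the timer preempted the secondary, so run it once
 * more to let Hafnium inject the timer interrupt. */
if (run_res.func == FFA_INTERRUPT_32) {
	run_res = ffa_run(SERVICE_VM1, 0);
}

/* The secondary has now handled the interrupt and sent its response. */
EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
```
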
diff --git a/test/vmapi/arch/aarch64/smc_whitelist.c b/test/vmapi/arch/aarch64/smc_whitelist.c
index fe0c664..7604b25 100644
--- a/test/vmapi/arch/aarch64/smc_whitelist.c
+++ b/test/vmapi/arch/aarch64/smc_whitelist.c
@@ -22,7 +22,7 @@
 TEST(smc_whitelist, not_whitelisted_unknown)
 {
 	const uint32_t non_whitelisted_ta_call = 0x3000f00d;
-	struct spci_value smc_res = smc_forward(
+	struct ffa_value smc_res = smc_forward(
 		non_whitelisted_ta_call, 0x1111111111111111, 0x2222222222222222,
 		0x3333333333333333, 0x4444444444444444, 0x5555555555555555,
 		0x6666666666666666, 0x77777777);
diff --git a/test/vmapi/arch/aarch64/smccc.c b/test/vmapi/arch/aarch64/smccc.c
index d42cc16..3442627 100644
--- a/test/vmapi/arch/aarch64/smccc.c
+++ b/test/vmapi/arch/aarch64/smccc.c
@@ -17,14 +17,14 @@
 #include <stdint.h>
 
 #include "vmapi/hf/call.h"
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 #include "smc.h"
 #include "test/hftest.h"
 
-static struct spci_value hvc(uint32_t func, uint64_t arg0, uint64_t arg1,
-			     uint64_t arg2, uint64_t arg3, uint64_t arg4,
-			     uint64_t arg5, uint32_t caller_id)
+static struct ffa_value hvc(uint32_t func, uint64_t arg0, uint64_t arg1,
+			    uint64_t arg2, uint64_t arg3, uint64_t arg4,
+			    uint64_t arg5, uint32_t caller_id)
 {
 	register uint64_t r0 __asm__("x0") = func;
 	register uint64_t r1 __asm__("x1") = arg0;
@@ -41,19 +41,19 @@
 		"+r"(r0), "+r"(r1), "+r"(r2), "+r"(r3), "+r"(r4), "+r"(r5),
 		"+r"(r6), "+r"(r7));
 
-	return (struct spci_value){.func = r0,
-				   .arg1 = r1,
-				   .arg2 = r2,
-				   .arg3 = r3,
-				   .arg4 = r4,
-				   .arg5 = r5,
-				   .arg6 = r6,
-				   .arg7 = r7};
+	return (struct ffa_value){.func = r0,
+				  .arg1 = r1,
+				  .arg2 = r2,
+				  .arg3 = r3,
+				  .arg4 = r4,
+				  .arg5 = r5,
+				  .arg6 = r6,
+				  .arg7 = r7};
 }
 
 TEST(smccc, hf_debug_log_smc_zero_or_unchanged)
 {
-	struct spci_value smc_res =
+	struct ffa_value smc_res =
 		smc_forward(HF_DEBUG_LOG, '\n', 0x2222222222222222,
 			    0x3333333333333333, 0x4444444444444444,
 			    0x5555555555555555, 0x6666666666666666, 0x77777777);
@@ -70,7 +70,7 @@
 
 TEST(smccc, hf_debug_log_hvc_zero_or_unchanged)
 {
-	struct spci_value smc_res =
+	struct ffa_value smc_res =
 		hvc(HF_DEBUG_LOG, '\n', 0x2222222222222222, 0x3333333333333333,
 		    0x4444444444444444, 0x5555555555555555, 0x6666666666666666,
 		    0x77777777);
@@ -86,15 +86,15 @@
 }
 
 /**
- * Checks that calling SPCI_FEATURES via an SMC works as expected.
- * The spci_features helper function uses an HVC, but an SMC should also work.
+ * Checks that calling FFA_FEATURES via an SMC works as expected.
+ * The ffa_features helper function uses an HVC, but an SMC should also work.
  */
-TEST(smccc, spci_features_smc)
+TEST(smccc, ffa_features_smc)
 {
-	struct spci_value ret;
+	struct ffa_value ret;
 
-	ret = smc32(SPCI_FEATURES_32, SPCI_VERSION_32, 0, 0, 0, 0, 0, 0);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = smc32(FFA_FEATURES_32, FFA_VERSION_32, 0, 0, 0, 0, 0, 0);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 	EXPECT_EQ(ret.arg1, 0);
 	EXPECT_EQ(ret.arg2, 0);
 	EXPECT_EQ(ret.arg3, 0);
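
The `hvc` helper above follows the SMCCC register convention the rest of this file relies on: the function ID is passed in x0, arguments in x1-x6, the caller ID in x7, and the result registers x0-x7 map straight onto `struct ffa_value`. As a hedged companion to `ffa_features_smc`, querying an interface that is not yet implemented over SMC should fail the same way the HVC-based tests in `primary_only.c` show, assuming both conduits report the same feature set (sketch only, not part of the patch):

```
struct ffa_value ret =
	smc32(FFA_FEATURES_32, FFA_RXTX_UNMAP_32, 0, 0, 0, 0, 0, 0);
/* FFA_RXTX_UNMAP is not implemented yet, so the query should fail. */
EXPECT_FFA_ERROR(ret, FFA_NOT_SUPPORTED);
```
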
diff --git a/test/vmapi/common/BUILD.gn b/test/vmapi/common/BUILD.gn
index 081c90e..c9a6eea 100644
--- a/test/vmapi/common/BUILD.gn
+++ b/test/vmapi/common/BUILD.gn
@@ -19,7 +19,7 @@
   public_configs = [ "//test/hftest:hftest_config" ]
   sources = [
     "exception_handler.c",
-    "spci.c",
+    "ffa.c",
   ]
   include_dirs = [ "//src/arch/aarch64" ]
 }
diff --git a/test/vmapi/common/exception_handler.c b/test/vmapi/common/exception_handler.c
index 14fe1e1..82ca679 100644
--- a/test/vmapi/common/exception_handler.c
+++ b/test/vmapi/common/exception_handler.c
@@ -36,27 +36,27 @@
 
 	dlog("Sending exception_count %d to primary VM\n",
 	     exception_handler_exception_count);
-	memcpy_s(send_buf, SPCI_MSG_PAYLOAD_MAX,
+	memcpy_s(send_buf, FFA_MSG_PAYLOAD_MAX,
 		 (const void *)&exception_handler_exception_count,
 		 sizeof(exception_handler_exception_count));
-	EXPECT_EQ(spci_msg_send(hf_vm_get_id(), HF_PRIMARY_VM_ID,
-				sizeof(exception_handler_exception_count), 0)
+	EXPECT_EQ(ffa_msg_send(hf_vm_get_id(), HF_PRIMARY_VM_ID,
+			       sizeof(exception_handler_exception_count), 0)
 			  .func,
-		  SPCI_SUCCESS_32);
+		  FFA_SUCCESS_32);
 }
 
 /**
  * Receives the number of exceptions handled.
  */
 int exception_handler_receive_exception_count(
-	const struct spci_value *send_res,
-	const struct spci_memory_region *recv_buf)
+	const struct ffa_value *send_res,
+	const struct ffa_memory_region *recv_buf)
 {
 	int exception_count = *((const int *)recv_buf);
 
-	EXPECT_EQ(send_res->func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(*send_res), sizeof(exception_count));
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(send_res->func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(*send_res), sizeof(exception_count));
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 	return exception_count;
 }
 
diff --git a/test/vmapi/common/ffa.c b/test/vmapi/common/ffa.c
new file mode 100644
index 0000000..a7048b1
--- /dev/null
+++ b/test/vmapi/common/ffa.c
@@ -0,0 +1,161 @@
+/*
+ * Copyright 2018 The Hafnium Authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "hf/ffa.h"
+
+#include "hf/mm.h"
+#include "hf/static_assert.h"
+
+#include "vmapi/hf/call.h"
+
+#include "test/hftest.h"
+#include "test/vmapi/ffa.h"
+
+static alignas(PAGE_SIZE) uint8_t send_page[PAGE_SIZE];
+static alignas(PAGE_SIZE) uint8_t recv_page[PAGE_SIZE];
+static_assert(sizeof(send_page) == PAGE_SIZE, "Send page is not a page.");
+static_assert(sizeof(recv_page) == PAGE_SIZE, "Recv page is not a page.");
+
+static hf_ipaddr_t send_page_addr = (hf_ipaddr_t)send_page;
+static hf_ipaddr_t recv_page_addr = (hf_ipaddr_t)recv_page;
+
+struct mailbox_buffers set_up_mailbox(void)
+{
+	ASSERT_EQ(ffa_rxtx_map(send_page_addr, recv_page_addr).func,
+		  FFA_SUCCESS_32);
+	return (struct mailbox_buffers){
+		.send = send_page,
+		.recv = recv_page,
+	};
+}
+
+/*
+ * Helper function to send memory to a VM, then send it a message with the
+ * retrieve request it needs in order to retrieve that memory.
+ */
+ffa_memory_handle_t send_memory_and_retrieve_request(
+	uint32_t share_func, void *tx_buffer, ffa_vm_id_t sender,
+	ffa_vm_id_t recipient,
+	struct ffa_memory_region_constituent constituents[],
+	uint32_t constituent_count, ffa_memory_region_flags_t flags,
+	enum ffa_data_access send_data_access,
+	enum ffa_data_access retrieve_data_access,
+	enum ffa_instruction_access send_instruction_access,
+	enum ffa_instruction_access retrieve_instruction_access)
+{
+	uint32_t msg_size;
+	struct ffa_value ret;
+	ffa_memory_handle_t handle;
+
+	/* Send the memory. */
+	msg_size = ffa_memory_region_init(
+		tx_buffer, sender, recipient, constituents, constituent_count,
+		0, flags, send_data_access, send_instruction_access,
+		FFA_MEMORY_NORMAL_MEM, FFA_MEMORY_CACHE_WRITE_BACK,
+		FFA_MEMORY_OUTER_SHAREABLE);
+	switch (share_func) {
+	case FFA_MEM_DONATE_32:
+		ret = ffa_mem_donate(msg_size, msg_size);
+		break;
+	case FFA_MEM_LEND_32:
+		ret = ffa_mem_lend(msg_size, msg_size);
+		break;
+	case FFA_MEM_SHARE_32:
+		ret = ffa_mem_share(msg_size, msg_size);
+		break;
+	default:
+		FAIL("Invalid share_func %#x.\n", share_func);
+		/* Never reached, but needed to keep clang-analyser happy. */
+		return 0;
+	}
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
+	handle = ffa_mem_success_handle(ret);
+
+	/*
+	 * Send the appropriate retrieve request to the VM so that it can use it
+	 * to retrieve the memory.
+	 */
+	msg_size = ffa_memory_retrieve_request_init(
+		tx_buffer, handle, sender, recipient, 0, 0,
+		retrieve_data_access, retrieve_instruction_access,
+		FFA_MEMORY_NORMAL_MEM, FFA_MEMORY_CACHE_WRITE_BACK,
+		FFA_MEMORY_OUTER_SHAREABLE);
+	EXPECT_EQ(ffa_msg_send(sender, recipient, msg_size, 0).func,
+		  FFA_SUCCESS_32);
+
+	return handle;
+}
+
+/*
+ * Use the retrieve request from the receive buffer to retrieve a memory region
+ * which has been sent to us. Returns the sender, and the handle via an output
+ * parameter.
+ */
+ffa_vm_id_t retrieve_memory_from_message(void *recv_buf, void *send_buf,
+					 struct ffa_value msg_ret,
+					 ffa_memory_handle_t *handle)
+{
+	uint32_t msg_size;
+	struct ffa_value ret;
+	struct ffa_memory_region *memory_region;
+	ffa_vm_id_t sender;
+
+	EXPECT_EQ(msg_ret.func, FFA_MSG_SEND_32);
+	msg_size = ffa_msg_send_size(msg_ret);
+	sender = ffa_msg_send_sender(msg_ret);
+
+	if (handle != NULL) {
+		struct ffa_memory_region *retrieve_request =
+			(struct ffa_memory_region *)recv_buf;
+		*handle = retrieve_request->handle;
+	}
+	memcpy_s(send_buf, HF_MAILBOX_SIZE, recv_buf, msg_size);
+	ffa_rx_release();
+	ret = ffa_mem_retrieve_req(msg_size, msg_size);
+	EXPECT_EQ(ret.func, FFA_MEM_RETRIEVE_RESP_32);
+	memory_region = (struct ffa_memory_region *)recv_buf;
+	EXPECT_EQ(memory_region->receiver_count, 1);
+	EXPECT_EQ(memory_region->receivers[0].receiver_permissions.receiver,
+		  hf_vm_get_id());
+
+	return sender;
+}
+
+/*
+ * Use the retrieve request from the receive buffer to retrieve a memory region
+ * which has been sent to us, expecting it to fail with the given error code.
+ * Returns the sender.
+ */
+ffa_vm_id_t retrieve_memory_from_message_expect_fail(void *recv_buf,
+						     void *send_buf,
+						     struct ffa_value msg_ret,
+						     int32_t expected_error)
+{
+	uint32_t msg_size;
+	struct ffa_value ret;
+	ffa_vm_id_t sender;
+
+	EXPECT_EQ(msg_ret.func, FFA_MSG_SEND_32);
+	msg_size = ffa_msg_send_size(msg_ret);
+	sender = ffa_msg_send_sender(msg_ret);
+
+	memcpy_s(send_buf, HF_MAILBOX_SIZE, recv_buf, msg_size);
+	ffa_rx_release();
+	ret = ffa_mem_retrieve_req(msg_size, msg_size);
+	EXPECT_FFA_ERROR(ret, expected_error);
+
+	return sender;
+}
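
For context, a typical caller of `send_memory_and_retrieve_request` would look roughly like the sketch below; `page` and `mb` are hypothetical test fixtures, and the access arguments are illustrative:

```
struct ffa_memory_region_constituent constituents[] = {
	{.address = (uint64_t)page, .page_count = 1},
};

/* Lend one page to SERVICE_VM1 with read/write data access, and send it
 * the retrieve request it needs to map the page. */
ffa_memory_handle_t handle = send_memory_and_retrieve_request(
	FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
	constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
	FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
	FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED);
```
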
diff --git a/test/vmapi/common/spci.c b/test/vmapi/common/spci.c
deleted file mode 100644
index 0bc3380..0000000
--- a/test/vmapi/common/spci.c
+++ /dev/null
@@ -1,161 +0,0 @@
-/*
- * Copyright 2018 The Hafnium Authors.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     https://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "hf/spci.h"
-
-#include "hf/mm.h"
-#include "hf/static_assert.h"
-
-#include "vmapi/hf/call.h"
-
-#include "test/hftest.h"
-#include "test/vmapi/spci.h"
-
-static alignas(PAGE_SIZE) uint8_t send_page[PAGE_SIZE];
-static alignas(PAGE_SIZE) uint8_t recv_page[PAGE_SIZE];
-static_assert(sizeof(send_page) == PAGE_SIZE, "Send page is not a page.");
-static_assert(sizeof(recv_page) == PAGE_SIZE, "Recv page is not a page.");
-
-static hf_ipaddr_t send_page_addr = (hf_ipaddr_t)send_page;
-static hf_ipaddr_t recv_page_addr = (hf_ipaddr_t)recv_page;
-
-struct mailbox_buffers set_up_mailbox(void)
-{
-	ASSERT_EQ(spci_rxtx_map(send_page_addr, recv_page_addr).func,
-		  SPCI_SUCCESS_32);
-	return (struct mailbox_buffers){
-		.send = send_page,
-		.recv = recv_page,
-	};
-}
-
-/*
- * Helper function to send memory to a VM then send a message with the retrieve
- * request it needs to retrieve it.
- */
-spci_memory_handle_t send_memory_and_retrieve_request(
-	uint32_t share_func, void *tx_buffer, spci_vm_id_t sender,
-	spci_vm_id_t recipient,
-	struct spci_memory_region_constituent constituents[],
-	uint32_t constituent_count, spci_memory_region_flags_t flags,
-	enum spci_data_access send_data_access,
-	enum spci_data_access retrieve_data_access,
-	enum spci_instruction_access send_instruction_access,
-	enum spci_instruction_access retrieve_instruction_access)
-{
-	uint32_t msg_size;
-	struct spci_value ret;
-	spci_memory_handle_t handle;
-
-	/* Send the memory. */
-	msg_size = spci_memory_region_init(
-		tx_buffer, sender, recipient, constituents, constituent_count,
-		0, flags, send_data_access, send_instruction_access,
-		SPCI_MEMORY_NORMAL_MEM, SPCI_MEMORY_CACHE_WRITE_BACK,
-		SPCI_MEMORY_OUTER_SHAREABLE);
-	switch (share_func) {
-	case SPCI_MEM_DONATE_32:
-		ret = spci_mem_donate(msg_size, msg_size);
-		break;
-	case SPCI_MEM_LEND_32:
-		ret = spci_mem_lend(msg_size, msg_size);
-		break;
-	case SPCI_MEM_SHARE_32:
-		ret = spci_mem_share(msg_size, msg_size);
-		break;
-	default:
-		FAIL("Invalid share_func %#x.\n", share_func);
-		/* Never reached, but needed to keep clang-analyser happy. */
-		return 0;
-	}
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
-	handle = spci_mem_success_handle(ret);
-
-	/*
-	 * Send the appropriate retrieve request to the VM so that it can use it
-	 * to retrieve the memory.
-	 */
-	msg_size = spci_memory_retrieve_request_init(
-		tx_buffer, handle, sender, recipient, 0, 0,
-		retrieve_data_access, retrieve_instruction_access,
-		SPCI_MEMORY_NORMAL_MEM, SPCI_MEMORY_CACHE_WRITE_BACK,
-		SPCI_MEMORY_OUTER_SHAREABLE);
-	EXPECT_EQ(spci_msg_send(sender, recipient, msg_size, 0).func,
-		  SPCI_SUCCESS_32);
-
-	return handle;
-}
-
-/*
- * Use the retrieve request from the receive buffer to retrieve a memory region
- * which has been sent to us. Returns the sender, and the handle via a return
- * parameter.
- */
-spci_vm_id_t retrieve_memory_from_message(void *recv_buf, void *send_buf,
-					  struct spci_value msg_ret,
-					  spci_memory_handle_t *handle)
-{
-	uint32_t msg_size;
-	struct spci_value ret;
-	struct spci_memory_region *memory_region;
-	spci_vm_id_t sender;
-
-	EXPECT_EQ(msg_ret.func, SPCI_MSG_SEND_32);
-	msg_size = spci_msg_send_size(msg_ret);
-	sender = spci_msg_send_sender(msg_ret);
-
-	if (handle != NULL) {
-		struct spci_memory_region *retrieve_request =
-			(struct spci_memory_region *)recv_buf;
-		*handle = retrieve_request->handle;
-	}
-	memcpy_s(send_buf, HF_MAILBOX_SIZE, recv_buf, msg_size);
-	spci_rx_release();
-	ret = spci_mem_retrieve_req(msg_size, msg_size);
-	EXPECT_EQ(ret.func, SPCI_MEM_RETRIEVE_RESP_32);
-	memory_region = (struct spci_memory_region *)recv_buf;
-	EXPECT_EQ(memory_region->receiver_count, 1);
-	EXPECT_EQ(memory_region->receivers[0].receiver_permissions.receiver,
-		  hf_vm_get_id());
-
-	return sender;
-}
-
-/*
- * Use the retrieve request from the receive buffer to retrieve a memory region
- * which has been sent to us, expecting it to fail with the given error code.
- * Returns the sender.
- */
-spci_vm_id_t retrieve_memory_from_message_expect_fail(void *recv_buf,
-						      void *send_buf,
-						      struct spci_value msg_ret,
-						      int32_t expected_error)
-{
-	uint32_t msg_size;
-	struct spci_value ret;
-	spci_vm_id_t sender;
-
-	EXPECT_EQ(msg_ret.func, SPCI_MSG_SEND_32);
-	msg_size = spci_msg_send_size(msg_ret);
-	sender = spci_msg_send_sender(msg_ret);
-
-	memcpy_s(send_buf, HF_MAILBOX_SIZE, recv_buf, msg_size);
-	spci_rx_release();
-	ret = spci_mem_retrieve_req(msg_size, msg_size);
-	EXPECT_SPCI_ERROR(ret, expected_error);
-
-	return sender;
-}
diff --git a/test/vmapi/primary_only/faults.c b/test/vmapi/primary_only/faults.c
index 34b7e15..52cfa03 100644
--- a/test/vmapi/primary_only/faults.c
+++ b/test/vmapi/primary_only/faults.c
@@ -70,8 +70,8 @@
 	sl_lock(&s.lock);
 
 	/* Configure the VM's buffers. */
-	EXPECT_EQ(spci_rxtx_map((hf_ipaddr_t)&tx[0], (hf_ipaddr_t)&rx[0]).func,
-		  SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rxtx_map((hf_ipaddr_t)&tx[0], (hf_ipaddr_t)&rx[0]).func,
+		  FFA_SUCCESS_32);
 
 	/* Tell other CPU to stop and wait for it. */
 	s.done = true;
diff --git a/test/vmapi/primary_only/primary_only.c b/test/vmapi/primary_only/primary_only.c
index 1be94a9..ea42c86 100644
--- a/test/vmapi/primary_only/primary_only.c
+++ b/test/vmapi/primary_only/primary_only.c
@@ -23,7 +23,7 @@
 #include "vmapi/hf/call.h"
 
 #include "test/hftest.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 /*
  * TODO: Some of these tests are duplicated between 'primary_only' and
@@ -69,7 +69,7 @@
  */
 TEST(hf_vcpu_get_count, reserved_vm_id)
 {
-	spci_vm_id_t id;
+	ffa_vm_id_t id;
 
 	for (id = 0; id < HF_VM_ID_OFFSET; ++id) {
 		EXPECT_EQ(hf_vcpu_get_count(id), 0);
@@ -88,27 +88,27 @@
 /**
  * Confirm it is an error when running a vCPU from the primary VM.
  */
-TEST(spci_run, cannot_run_primary)
+TEST(ffa_run, cannot_run_primary)
 {
-	struct spci_value res = spci_run(HF_PRIMARY_VM_ID, 0);
-	EXPECT_SPCI_ERROR(res, SPCI_INVALID_PARAMETERS);
+	struct ffa_value res = ffa_run(HF_PRIMARY_VM_ID, 0);
+	EXPECT_FFA_ERROR(res, FFA_INVALID_PARAMETERS);
 }
 
 /**
  * Confirm it is an error when running a vCPU from a non-existent secondary VM.
  */
-TEST(spci_run, cannot_run_absent_secondary)
+TEST(ffa_run, cannot_run_absent_secondary)
 {
-	struct spci_value res = spci_run(1, 0);
-	EXPECT_SPCI_ERROR(res, SPCI_INVALID_PARAMETERS);
+	struct ffa_value res = ffa_run(1, 0);
+	EXPECT_FFA_ERROR(res, FFA_INVALID_PARAMETERS);
 }
 
 /**
  * Yielding from the primary is a noop.
  */
-TEST(spci_yield, yield_is_noop_for_primary)
+TEST(ffa_yield, yield_is_noop_for_primary)
 {
-	EXPECT_EQ(spci_yield().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_yield().func, FFA_SUCCESS_32);
 }
 
 /**
@@ -198,8 +198,8 @@
 	dlog("Second CPU stopped.\n");
 }
 
-/** Ensures that the Hafnium SPCI version is reported as expected. */
-TEST(spci, spci_version)
+/** Ensures that the Hafnium FF-A version is reported as expected. */
+TEST(ffa, ffa_version)
 {
 	const int32_t major_revision = 1;
 	const int32_t major_revision_offset = 16;
@@ -207,119 +207,119 @@
 	const int32_t current_version =
 		(major_revision << major_revision_offset) | minor_revision;
 
-	EXPECT_EQ(spci_version(current_version), current_version);
-	EXPECT_EQ(spci_version(0x0), current_version);
-	EXPECT_EQ(spci_version(0x1), current_version);
-	EXPECT_EQ(spci_version(0x10003), current_version);
-	EXPECT_EQ(spci_version(0xffff), current_version);
-	EXPECT_EQ(spci_version(0xfffffff), current_version);
+	EXPECT_EQ(ffa_version(current_version), current_version);
+	EXPECT_EQ(ffa_version(0x0), current_version);
+	EXPECT_EQ(ffa_version(0x1), current_version);
+	EXPECT_EQ(ffa_version(0x10003), current_version);
+	EXPECT_EQ(ffa_version(0xffff), current_version);
+	EXPECT_EQ(ffa_version(0xfffffff), current_version);
 }
 
-/** Ensures that an invalid call to SPCI_VERSION gets an error back. */
-TEST(spci, spci_version_invalid)
+/** Ensures that an invalid call to FFA_VERSION gets an error back. */
+TEST(ffa, ffa_version_invalid)
 {
-	int32_t ret = spci_version(0x80000000);
+	int32_t ret = ffa_version(0x80000000);
 
-	EXPECT_EQ(ret, SPCI_NOT_SUPPORTED);
+	EXPECT_EQ(ret, FFA_NOT_SUPPORTED);
 }
 
-/** Ensures that SPCI_FEATURES is reporting the expected interfaces. */
-TEST(spci, spci_features)
+/** Ensures that FFA_FEATURES is reporting the expected interfaces. */
+TEST(ffa, ffa_features)
 {
-	struct spci_value ret;
+	struct ffa_value ret;
 
-	ret = spci_features(SPCI_ERROR_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_ERROR_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_SUCCESS_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_SUCCESS_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_INTERRUPT_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_INTERRUPT_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_VERSION_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_VERSION_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_FEATURES_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_FEATURES_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_RX_RELEASE_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_RX_RELEASE_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_RXTX_MAP_64);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_RXTX_MAP_64);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_ID_GET_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_ID_GET_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_MSG_POLL_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_MSG_POLL_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_MSG_WAIT_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_MSG_WAIT_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_YIELD_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_YIELD_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_RUN_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_RUN_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_MSG_SEND_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_MSG_SEND_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_MEM_DONATE_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_MEM_DONATE_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_MEM_LEND_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_MEM_LEND_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_MEM_SHARE_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_MEM_SHARE_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_MEM_RETRIEVE_REQ_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_MEM_RETRIEVE_REQ_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_MEM_RETRIEVE_RESP_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_MEM_RETRIEVE_RESP_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_MEM_RELINQUISH_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_MEM_RELINQUISH_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 
-	ret = spci_features(SPCI_MEM_RECLAIM_32);
-	EXPECT_EQ(ret.func, SPCI_SUCCESS_32);
+	ret = ffa_features(FFA_MEM_RECLAIM_32);
+	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
 }
 
 /**
- * Ensures that SPCI_FEATURES returns not supported for a bogus FID or
+ * Ensures that FFA_FEATURES returns not supported for a bogus FID or
  * currently non-implemented interfaces.
  */
-TEST(spci, spci_features_not_supported)
+TEST(ffa, ffa_features_not_supported)
 {
-	struct spci_value ret;
+	struct ffa_value ret;
 
-	ret = spci_features(0);
-	EXPECT_SPCI_ERROR(ret, SPCI_NOT_SUPPORTED);
+	ret = ffa_features(0);
+	EXPECT_FFA_ERROR(ret, FFA_NOT_SUPPORTED);
 
-	ret = spci_features(0x84000000);
-	EXPECT_SPCI_ERROR(ret, SPCI_NOT_SUPPORTED);
+	ret = ffa_features(0x84000000);
+	EXPECT_FFA_ERROR(ret, FFA_NOT_SUPPORTED);
 
-	ret = spci_features(SPCI_RXTX_UNMAP_32);
-	EXPECT_SPCI_ERROR(ret, SPCI_NOT_SUPPORTED);
+	ret = ffa_features(FFA_RXTX_UNMAP_32);
+	EXPECT_FFA_ERROR(ret, FFA_NOT_SUPPORTED);
 
-	ret = spci_features(SPCI_PARTITION_INFO_GET_32);
-	EXPECT_SPCI_ERROR(ret, SPCI_NOT_SUPPORTED);
+	ret = ffa_features(FFA_PARTITION_INFO_GET_32);
+	EXPECT_FFA_ERROR(ret, FFA_NOT_SUPPORTED);
 
-	ret = spci_features(SPCI_MSG_SEND_DIRECT_RESP_32);
-	EXPECT_SPCI_ERROR(ret, SPCI_NOT_SUPPORTED);
+	ret = ffa_features(FFA_MSG_SEND_DIRECT_RESP_32);
+	EXPECT_FFA_ERROR(ret, FFA_NOT_SUPPORTED);
 
-	ret = spci_features(SPCI_MSG_SEND_DIRECT_REQ_32);
-	EXPECT_SPCI_ERROR(ret, SPCI_NOT_SUPPORTED);
+	ret = ffa_features(FFA_MSG_SEND_DIRECT_REQ_32);
+	EXPECT_FFA_ERROR(ret, FFA_NOT_SUPPORTED);
 
-	ret = spci_features(SPCI_MSG_SEND_DIRECT_REQ_32);
-	EXPECT_SPCI_ERROR(ret, SPCI_NOT_SUPPORTED);
+	ret = ffa_features(FFA_MSG_SEND_DIRECT_REQ_32);
+	EXPECT_FFA_ERROR(ret, FFA_NOT_SUPPORTED);
 
-	ret = spci_features(SPCI_MSG_SEND_DIRECT_RESP_32);
-	EXPECT_SPCI_ERROR(ret, SPCI_NOT_SUPPORTED);
+	ret = ffa_features(FFA_MSG_SEND_DIRECT_RESP_32);
+	EXPECT_FFA_ERROR(ret, FFA_NOT_SUPPORTED);
 }
 
 /**
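
The `ffa_version` tests above depend on the layout of the version word: bits [30:16] carry the major revision, bits [15:0] the minor, and bit 31 must be zero, which is why `0x80000000` is rejected. A hypothetical macro (not part of the patch) that makes the encoding the tests construct explicit:

```
/* Hypothetical helper: encodes an FF-A version word as the tests do. */
#define MAKE_FFA_VERSION(major, minor) (((major) << 16) | (minor))

static_assert(MAKE_FFA_VERSION(1, 0) == 0x10000,
	      "v1.0 must encode as major 1, minor 0");
```
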
diff --git a/test/vmapi/primary_with_secondaries/BUILD.gn b/test/vmapi/primary_with_secondaries/BUILD.gn
index b0c3885..908a54c 100644
--- a/test/vmapi/primary_with_secondaries/BUILD.gn
+++ b/test/vmapi/primary_with_secondaries/BUILD.gn
@@ -30,6 +30,7 @@
   sources = [
     "boot.c",
     "debug_el1.c",
+    "ffa.c",
     "floating_point.c",
     "interrupts.c",
     "mailbox.c",
@@ -38,7 +39,6 @@
     "perfmon.c",
     "run_race.c",
     "smp.c",
-    "spci.c",
     "sysregs.c",
     "unmapped.c",
   ]
diff --git a/test/vmapi/primary_with_secondaries/boot.c b/test/vmapi/primary_with_secondaries/boot.c
index e098538..fa4839d 100644
--- a/test/vmapi/primary_with_secondaries/boot.c
+++ b/test/vmapi/primary_with_secondaries/boot.c
@@ -21,20 +21,20 @@
 #include "primary_with_secondary.h"
 #include "test/hftest.h"
 #include "test/vmapi/exception_handler.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 /**
  * The VM gets its memory size on boot, and can access it all.
  */
 TEST(boot, memory_size)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "boot_memory", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 }
 
 /**
@@ -42,12 +42,12 @@
  */
 TEST(boot, beyond_memory_size)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "boot_memory_overrun", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
@@ -57,12 +57,12 @@
  */
 TEST(boot, memory_before_image)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "boot_memory_underrun", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
diff --git a/test/vmapi/primary_with_secondaries/debug_el1.c b/test/vmapi/primary_with_secondaries/debug_el1.c
index 381c2c6..8580195 100644
--- a/test/vmapi/primary_with_secondaries/debug_el1.c
+++ b/test/vmapi/primary_with_secondaries/debug_el1.c
@@ -16,7 +16,7 @@
 
 #include "primary_with_secondary.h"
 #include "sysregs.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 /**
  * QEMU does not properly handle the trapping of certain system register
@@ -27,13 +27,13 @@
 
 TEST(debug_el1, secondary_basic)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "debug_el1_secondary_basic", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 }
 
 /**
diff --git a/test/vmapi/primary_with_secondaries/ffa.c b/test/vmapi/primary_with_secondaries/ffa.c
new file mode 100644
index 0000000..4bda05a
--- /dev/null
+++ b/test/vmapi/primary_with_secondaries/ffa.c
@@ -0,0 +1,131 @@
+/*
+ * Copyright 2019 The Hafnium Authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "hf/ffa.h"
+
+#include <stdint.h>
+
+#include "hf/std.h"
+
+#include "vmapi/hf/call.h"
+
+#include "primary_with_secondary.h"
+#include "test/hftest.h"
+#include "test/vmapi/ffa.h"
+
+/**
+ * Send a message to a secondary VM which checks the validity of the received
+ * header.
+ */
+TEST(ffa, msg_send)
+{
+	const char message[] = "ffa_msg_send";
+	struct ffa_value run_res;
+	struct mailbox_buffers mb = set_up_mailbox();
+
+	SERVICE_SELECT(SERVICE_VM1, "ffa_check", mb.send);
+
+	/* Set the payload, init the message header and send the message. */
+	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
+	EXPECT_EQ(
+		ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
+			.func,
+		FFA_SUCCESS_32);
+
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
+}
+
+/**
+ * Send a message to a secondary VM spoofing the source VM id.
+ */
+TEST(ffa, msg_send_spoof)
+{
+	const char message[] = "ffa_msg_send";
+	struct mailbox_buffers mb = set_up_mailbox();
+
+	SERVICE_SELECT(SERVICE_VM1, "ffa_check", mb.send);
+
+	/* Set the payload, init the message header and send the message. */
+	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
+	EXPECT_FFA_ERROR(
+		ffa_msg_send(SERVICE_VM2, SERVICE_VM1, sizeof(message), 0),
+		FFA_INVALID_PARAMETERS);
+}
+
+/**
+ * Send a message to a secondary VM with an incorrect destination id.
+ */
+TEST(ffa, ffa_invalid_destination_id)
+{
+	const char message[] = "fail to send";
+	struct mailbox_buffers mb = set_up_mailbox();
+
+	SERVICE_SELECT(SERVICE_VM1, "ffa_check", mb.send);
+	/* Set the payload, init the message header and send the message. */
+	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
+	EXPECT_FFA_ERROR(ffa_msg_send(HF_PRIMARY_VM_ID, -1, sizeof(message), 0),
+			 FFA_INVALID_PARAMETERS);
+}
+
+/**
+ * Ensure that the length parameter is respected when sending messages.
+ */
+TEST(ffa, ffa_incorrect_length)
+{
+	const char message[] = "this should be truncated";
+	struct ffa_value run_res;
+	struct mailbox_buffers mb = set_up_mailbox();
+
+	SERVICE_SELECT(SERVICE_VM1, "ffa_length", mb.send);
+
+	/* Send the message; the service checks that it was truncated. */
+	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
+	/* Hard code incorrect length. */
+	EXPECT_EQ(ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, 16, 0).func,
+		  FFA_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
+}
+
+/**
+ * Attempt to send a message larger than what is supported.
+ */
+TEST(ffa, ffa_large_message)
+{
+	const char message[] = "fail to send";
+	struct mailbox_buffers mb = set_up_mailbox();
+
+	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
+	/* Send a message that is larger than the mailbox supports (4KB). */
+	EXPECT_FFA_ERROR(
+		ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, 4 * 1024 + 1, 0),
+		FFA_INVALID_PARAMETERS);
+}
+
+/**
+ * Verify secondary VM non-blocking recv.
+ */
+TEST(ffa, ffa_recv_non_blocking)
+{
+	struct mailbox_buffers mb = set_up_mailbox();
+	struct ffa_value run_res;
+
+	/* Check is performed in secondary VM. */
+	SERVICE_SELECT(SERVICE_VM1, "ffa_recv_non_blocking", mb.send);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
+}
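
The `ffa_large_message` test above relies on the mailboxes being a single 4 KiB page, as set up by `set_up_mailbox`, so any send longer than the payload maximum must be rejected. An illustrative helper (names from this patch; the guard itself is an assumption about how a client might defend against this) could check the bound before calling into the hypervisor:

```
/* Illustrative only: send `length` bytes to SERVICE_VM1, rejecting
 * over-long sends up front instead of relying on the hypervisor's
 * FFA_INVALID_PARAMETERS error. */
static bool try_send(struct mailbox_buffers mb, const void *payload,
		     size_t length)
{
	if (length > FFA_MSG_PAYLOAD_MAX) {
		return false;
	}
	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, payload, length);
	return ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, length, 0)
		       .func == FFA_SUCCESS_32;
}
```
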
diff --git a/test/vmapi/primary_with_secondaries/floating_point.c b/test/vmapi/primary_with_secondaries/floating_point.c
index 1904d68..0cb2c21 100644
--- a/test/vmapi/primary_with_secondaries/floating_point.c
+++ b/test/vmapi/primary_with_secondaries/floating_point.c
@@ -17,14 +17,14 @@
 #include "hf/arch/std.h"
 #include "hf/arch/vm/registers.h"
 
-#include "hf/spci.h"
+#include "hf/ffa.h"
 
 #include "vmapi/hf/call.h"
 
 #include "../msr.h"
 #include "primary_with_secondary.h"
 #include "test/hftest.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 /**
  * Test that floating point registers are saved and restored by
@@ -35,18 +35,18 @@
 {
 	const double first = 1.2;
 	const double second = -2.3;
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	fill_fp_registers(first);
 	SERVICE_SELECT(SERVICE_VM1, "fp_fill", mb.send);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 	EXPECT_EQ(check_fp_register(first), true);
 
 	fill_fp_registers(second);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 	EXPECT_EQ(check_fp_register(second), true);
 }
 
@@ -57,17 +57,17 @@
 TEST(floating_point, fp_fpcr)
 {
 	uintreg_t value = 0;
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	EXPECT_EQ(read_msr(fpcr), value);
 
 	SERVICE_SELECT(SERVICE_VM1, "fp_fpcr", mb.send);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 	EXPECT_EQ(read_msr(fpcr), value);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 	EXPECT_EQ(read_msr(fpcr), value);
 }
diff --git a/test/vmapi/primary_with_secondaries/interrupts.c b/test/vmapi/primary_with_secondaries/interrupts.c
index dce89ab..13e573b 100644
--- a/test/vmapi/primary_with_secondaries/interrupts.c
+++ b/test/vmapi/primary_with_secondaries/interrupts.c
@@ -22,11 +22,11 @@
 
 #include "primary_with_secondary.h"
 #include "test/hftest.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 TEAR_DOWN(interrupts)
 {
-	EXPECT_SPCI_ERROR(spci_rx_release(), SPCI_DENIED);
+	EXPECT_FFA_ERROR(ffa_rx_release(), FFA_DENIED);
 }
 
 /**
@@ -37,27 +37,27 @@
 {
 	const char message[] = "Ping";
 	const char expected_response[] = "Got IRQ 05.";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "interruptible", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Set the message, echo it and wait for a response. */
-	memcpy_s(mb.send, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
+	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
 	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
+		ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
 			.func,
-		SPCI_SUCCESS_32);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(expected_response));
+		FFA_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(expected_response));
 	EXPECT_EQ(memcmp(mb.recv, expected_response, sizeof(expected_response)),
 		  0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 }
 
 /**
@@ -68,32 +68,32 @@
 TEST(interrupts, inject_interrupt_twice)
 {
 	const char expected_response[] = "Got IRQ 07.";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "interruptible", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Inject the interrupt and wait for a message. */
 	hf_interrupt_inject(SERVICE_VM1, 0, EXTERNAL_INTERRUPT_ID_A);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(expected_response));
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(expected_response));
 	EXPECT_EQ(memcmp(mb.recv, expected_response, sizeof(expected_response)),
 		  0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/* Inject the interrupt again, and wait for the same message. */
 	hf_interrupt_inject(SERVICE_VM1, 0, EXTERNAL_INTERRUPT_ID_A);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(expected_response));
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(expected_response));
 	EXPECT_EQ(memcmp(mb.recv, expected_response, sizeof(expected_response)),
 		  0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 }
 
 /**
@@ -104,33 +104,33 @@
 {
 	const char expected_response[] = "Got IRQ 07.";
 	const char expected_response_2[] = "Got IRQ 08.";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "interruptible", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Inject the interrupt and wait for a message. */
 	hf_interrupt_inject(SERVICE_VM1, 0, EXTERNAL_INTERRUPT_ID_A);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(expected_response));
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(expected_response));
 	EXPECT_EQ(memcmp(mb.recv, expected_response, sizeof(expected_response)),
 		  0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/* Inject a different interrupt and wait for a different message. */
 	hf_interrupt_inject(SERVICE_VM1, 0, EXTERNAL_INTERRUPT_ID_B);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(expected_response_2));
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(expected_response_2));
 	EXPECT_EQ(memcmp(mb.recv, expected_response_2,
 			 sizeof(expected_response_2)),
 		  0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 }
 
 /**
@@ -143,41 +143,41 @@
 	const char expected_response[] = "Got IRQ 07.";
 	const char message[] = "Ping";
 	const char expected_response_2[] = "Got IRQ 05.";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "interruptible", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Inject the interrupt and wait for a message. */
 	hf_interrupt_inject(SERVICE_VM1, 0, EXTERNAL_INTERRUPT_ID_A);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(expected_response));
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(expected_response));
 	EXPECT_EQ(memcmp(mb.recv, expected_response, sizeof(expected_response)),
 		  0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Now send a message to the secondary. */
-	memcpy_s(mb.send, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
+	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
 	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
+		ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
 			.func,
-		SPCI_SUCCESS_32);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(expected_response_2));
+		FFA_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(expected_response_2));
 	EXPECT_EQ(memcmp(mb.recv, expected_response_2,
 			 sizeof(expected_response_2)),
 		  0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 }
 
 /**
@@ -189,32 +189,32 @@
 {
 	const char expected_response[] = "Got IRQ 09.";
 	const char message[] = "Enable interrupt C";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "interruptible", mb.send);
 
 	/* Inject the interrupt and expect not to get a message. */
 	hf_interrupt_inject(SERVICE_VM1, 0, EXTERNAL_INTERRUPT_ID_C);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/*
 	 * Now send a message to the secondary to enable the interrupt ID, and
 	 * expect the response from the interrupt we sent before.
 	 */
-	memcpy_s(mb.send, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
+	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
 	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
+		ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
 			.func,
-		SPCI_SUCCESS_32);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(expected_response));
+		FFA_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(expected_response));
 	EXPECT_EQ(memcmp(mb.recv, expected_response, sizeof(expected_response)),
 		  0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 }
 
 /**
@@ -225,7 +225,7 @@
 TEST(interrupts, pending_interrupt_no_blocking_receive)
 {
 	const char expected_response[] = "Done waiting";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "receive_block", mb.send);
@@ -236,12 +236,12 @@
 	 * back after failing to receive a message a few times.
 	 */
 	hf_interrupt_inject(SERVICE_VM1, 0, EXTERNAL_INTERRUPT_ID_A);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(expected_response));
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(expected_response));
 	EXPECT_EQ(memcmp(mb.recv, expected_response, sizeof(expected_response)),
 		  0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 }
 
 /**
@@ -252,7 +252,7 @@
 TEST(interrupts, pending_interrupt_wfi_not_trapped)
 {
 	const char expected_response[] = "Done waiting";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "wfi", mb.send);
@@ -263,12 +263,12 @@
 	 * back after running WFI a few times.
 	 */
 	hf_interrupt_inject(SERVICE_VM1, 0, EXTERNAL_INTERRUPT_ID_A);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(expected_response));
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(expected_response));
 	EXPECT_EQ(memcmp(mb.recv, expected_response, sizeof(expected_response)),
 		  0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 }
 
 /*
@@ -278,24 +278,24 @@
 TEST(interrupts, deliver_interrupt_and_message)
 {
 	const char message[] = "I\'ll see you again.";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "interruptible_echo", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
-	memcpy_s(mb.send, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
+	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
 	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
+		ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
 			.func,
-		SPCI_SUCCESS_32);
+		FFA_SUCCESS_32);
 	hf_interrupt_inject(SERVICE_VM1, 0, EXTERNAL_INTERRUPT_ID_A);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(message));
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(message));
 	EXPECT_EQ(memcmp(mb.recv, message, sizeof(message)), 0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 }
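
Each interrupt test above follows the same inject-and-collect shape; as a compact reference (names from this patch, flow simplified):

```
/* Inject a virtual interrupt into vCPU 0 of the service VM, run it, and
 * expect it to respond with a message identifying the IRQ. */
hf_interrupt_inject(SERVICE_VM1, 0, EXTERNAL_INTERRUPT_ID_A);
struct ffa_value run_res = ffa_run(SERVICE_VM1, 0);
EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
/* Release the RX buffer once the response has been read. */
EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
```
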
diff --git a/test/vmapi/primary_with_secondaries/mailbox.c b/test/vmapi/primary_with_secondaries/mailbox.c
index b96c595..b7085ae 100644
--- a/test/vmapi/primary_with_secondaries/mailbox.c
+++ b/test/vmapi/primary_with_secondaries/mailbox.c
@@ -16,14 +16,14 @@
 
 #include <stdint.h>
 
-#include "hf/spci.h"
+#include "hf/ffa.h"
 #include "hf/std.h"
 
 #include "vmapi/hf/call.h"
 
 #include "primary_with_secondary.h"
 #include "test/hftest.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 /**
  * Reverses the order of the elements in the given array.
@@ -65,7 +65,7 @@
 
 TEAR_DOWN(mailbox)
 {
-	EXPECT_SPCI_ERROR(spci_rx_release(), SPCI_DENIED);
+	EXPECT_FFA_ERROR(ffa_rx_release(), FFA_DENIED);
 }
 
 /**
@@ -73,9 +73,9 @@
  */
 TEST(mailbox, clear_empty)
 {
-	EXPECT_SPCI_ERROR(spci_rx_release(), SPCI_DENIED);
-	EXPECT_SPCI_ERROR(spci_rx_release(), SPCI_DENIED);
-	EXPECT_SPCI_ERROR(spci_rx_release(), SPCI_DENIED);
+	EXPECT_FFA_ERROR(ffa_rx_release(), FFA_DENIED);
+	EXPECT_FFA_ERROR(ffa_rx_release(), FFA_DENIED);
+	EXPECT_FFA_ERROR(ffa_rx_release(), FFA_DENIED);
 }
 
 /**
@@ -84,26 +84,26 @@
 TEST(mailbox, echo)
 {
 	const char message[] = "Echo this back to me!";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "echo", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Set the message, echo it and check it didn't change. */
-	memcpy_s(mb.send, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
+	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
 	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
+		ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
 			.func,
-		SPCI_SUCCESS_32);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(message));
+		FFA_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(message));
 	EXPECT_EQ(memcmp(mb.recv, message, sizeof(message)), 0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 }
 
 /**
@@ -112,7 +112,7 @@
 TEST(mailbox, repeated_echo)
 {
 	char message[] = "Echo this back to me!";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	uint8_t i;
 	struct mailbox_buffers mb = set_up_mailbox();
 
@@ -120,23 +120,23 @@
 
 	for (i = 0; i < 100; i++) {
 		/* Run secondary until it reaches the wait for messages. */
-		run_res = spci_run(SERVICE_VM1, 0);
-		EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-		EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+		run_res = ffa_run(SERVICE_VM1, 0);
+		EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+		EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 		/* Set the message, echo it and check it didn't change. */
 		next_permutation(message, sizeof(message) - 1);
-		memcpy_s(mb.send, SPCI_MSG_PAYLOAD_MAX, message,
+		memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message,
 			 sizeof(message));
-		EXPECT_EQ(spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1,
-					sizeof(message), 0)
+		EXPECT_EQ(ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1,
+				       sizeof(message), 0)
 				  .func,
-			  SPCI_SUCCESS_32);
-		run_res = spci_run(SERVICE_VM1, 0);
-		EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-		EXPECT_EQ(spci_msg_send_size(run_res), sizeof(message));
+			  FFA_SUCCESS_32);
+		run_res = ffa_run(SERVICE_VM1, 0);
+		EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+		EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(message));
 		EXPECT_EQ(memcmp(mb.recv, message, sizeof(message)), 0);
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 	}
 }
 
@@ -147,54 +147,53 @@
 TEST(mailbox, relay)
 {
 	const char message[] = "Send this round the relay!";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "relay", mb.send);
 	SERVICE_SELECT(SERVICE_VM2, "relay", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
-	run_res = spci_run(SERVICE_VM2, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM2, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/*
 	 * Build the message chain so the message is sent from here to
 	 * SERVICE_VM1, then to SERVICE_VM2 and finally back to here.
 	 */
 	{
-		spci_vm_id_t *chain = (spci_vm_id_t *)mb.send;
+		ffa_vm_id_t *chain = (ffa_vm_id_t *)mb.send;
 		*chain++ = htole32(SERVICE_VM2);
 		*chain++ = htole32(HF_PRIMARY_VM_ID);
-		memcpy_s(chain,
-			 SPCI_MSG_PAYLOAD_MAX - (2 * sizeof(spci_vm_id_t)),
+		memcpy_s(chain, FFA_MSG_PAYLOAD_MAX - (2 * sizeof(ffa_vm_id_t)),
 			 message, sizeof(message));
 
 		EXPECT_EQ(
-			spci_msg_send(
+			ffa_msg_send(
 				HF_PRIMARY_VM_ID, SERVICE_VM1,
-				sizeof(message) + (2 * sizeof(spci_vm_id_t)), 0)
+				sizeof(message) + (2 * sizeof(ffa_vm_id_t)), 0)
 				.func,
-			SPCI_SUCCESS_32);
+			FFA_SUCCESS_32);
 	}
 
 	/* Let SERVICE_VM1 forward the message. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_receiver(run_res), SERVICE_VM2);
-	EXPECT_EQ(spci_msg_send_size(run_res), 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_receiver(run_res), SERVICE_VM2);
+	EXPECT_EQ(ffa_msg_send_size(run_res), 0);
 
 	/* Let SERVICE_VM2 forward the message. */
-	run_res = spci_run(SERVICE_VM2, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
+	run_res = ffa_run(SERVICE_VM2, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
 
 	/* Ensure the message is intact. */
-	EXPECT_EQ(spci_msg_send_receiver(run_res), HF_PRIMARY_VM_ID);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(message));
+	EXPECT_EQ(ffa_msg_send_receiver(run_res), HF_PRIMARY_VM_ID);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(message));
 	EXPECT_EQ(memcmp(mb.recv, message, sizeof(message)), 0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 }
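
The relay test builds its routing information directly in the send buffer: two VM IDs first, then the payload. A standalone sketch of that layout follows, assuming a 16-bit `ffa_vm_id_t` (the real width is whatever `hf/ffa.h` defines):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint16_t ffa_vm_id_t; /* Assumed width. */

/*
 * Lay out [next_hop][final_hop][payload...] in the send buffer and
 * return the total message size, or 0 if it would not fit.
 */
static size_t build_chain(void *send_buf, size_t buf_size,
			  ffa_vm_id_t next_hop, ffa_vm_id_t final_hop,
			  const char *payload, size_t payload_size)
{
	ffa_vm_id_t *chain = (ffa_vm_id_t *)send_buf;

	if (payload_size + 2 * sizeof(ffa_vm_id_t) > buf_size) {
		return 0;
	}
	*chain++ = next_hop;  /* First forwarder sends here next. */
	*chain++ = final_hop; /* Second forwarder sends here last. */
	memcpy(chain, payload, payload_size);
	return payload_size + 2 * sizeof(ffa_vm_id_t);
}

int main(void)
{
	char buf[64];
	size_t size = build_chain(buf, sizeof(buf), 2, 0, "hi", 3);

	printf("message size: %zu\n", size); /* 2 * 2 + 3 = 7 */
	return 0;
}
```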
 
 /**
@@ -203,19 +202,19 @@
  */
 TEST(mailbox, no_primary_to_secondary_notification_on_configure)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 
 	set_up_mailbox();
 
-	EXPECT_SPCI_ERROR(spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, 0, 0),
-			  SPCI_BUSY);
+	EXPECT_FFA_ERROR(ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, 0, 0),
+			 FFA_BUSY);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
-	EXPECT_EQ(spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, 0, 0).func,
-		  SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, 0, 0).func,
+		  FFA_SUCCESS_32);
 }
 
 /**
@@ -224,28 +223,28 @@
  */
 TEST(mailbox, secondary_to_primary_notification_on_configure)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 
 	set_up_mailbox();
 
-	EXPECT_SPCI_ERROR(spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, 0,
-					SPCI_MSG_SEND_NOTIFY),
-			  SPCI_BUSY);
+	EXPECT_FFA_ERROR(ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, 0,
+				      FFA_MSG_SEND_NOTIFY),
+			 FFA_BUSY);
 
 	/*
 	 * Run the first VM so that it configures itself. This should result
 	 * in notifications having to be issued.
 	 */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_RX_RELEASE_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_RX_RELEASE_32);
 
 	/* A single waiter is returned. */
 	EXPECT_EQ(hf_mailbox_waiter_get(SERVICE_VM1), HF_PRIMARY_VM_ID);
 	EXPECT_EQ(hf_mailbox_waiter_get(SERVICE_VM1), -1);
 
 	/* Send should now succeed. */
-	EXPECT_EQ(spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, 0, 0).func,
-		  SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, 0, 0).func,
+		  FFA_SUCCESS_32);
 }
 
 /**
@@ -256,46 +255,46 @@
 TEST(mailbox, primary_to_secondary)
 {
 	char message[] = "not ready echo";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "echo_with_notification", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Send a message to the echo service, and get the response back. */
-	memcpy_s(mb.send, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
+	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
 	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
+		ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
 			.func,
-		SPCI_SUCCESS_32);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(message));
+		FFA_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(message));
 	EXPECT_EQ(memcmp(mb.recv, message, sizeof(message)), 0);
 
 	/* Let secondary VM continue running so that it will wait again. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Without clearing our mailbox, send message again. */
 	reverse(message, strnlen_s(message, sizeof(message)));
-	memcpy_s(mb.send, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
+	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
 
 	/* Message should be dropped since the mailbox was not cleared. */
 	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
+		ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
 			.func,
-		SPCI_SUCCESS_32);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, HF_SPCI_RUN_WAIT_FOR_INTERRUPT);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+		FFA_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, HF_FFA_RUN_WAIT_FOR_INTERRUPT);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Clear the mailbox. We expect to be told there are pending waiters. */
-	EXPECT_EQ(spci_rx_release().func, SPCI_RX_RELEASE_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_RX_RELEASE_32);
 
 	/* Retrieve a single waiter. */
 	EXPECT_EQ(hf_mailbox_waiter_get(HF_PRIMARY_VM_ID), SERVICE_VM1);
@@ -308,11 +307,11 @@
 	EXPECT_EQ(
 		hf_interrupt_inject(SERVICE_VM1, 0, HF_MAILBOX_WRITABLE_INTID),
 		1);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(message));
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(message));
 	EXPECT_EQ(memcmp(mb.recv, message, sizeof(message)), 0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 }
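
After releasing a full mailbox, the test above drains the waiter list until the -1 sentinel. Here is a sketch of that loop with a stubbed waiter source; the real `hf_mailbox_waiter_get()` signature is assumed from the call sites above:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Stub standing in for hf_mailbox_waiter_get(): yields each waiting VM
 * ID once, then -1 forever.
 */
static int64_t fake_waiter_get(uint32_t vm_id)
{
	static const int64_t waiters[] = {1};
	static size_t next;

	(void)vm_id;
	if (next < sizeof(waiters) / sizeof(waiters[0])) {
		return waiters[next++];
	}
	return -1;
}

int main(void)
{
	int64_t waiter;

	/* Drain every VM that blocked on our RX buffer being full. */
	while ((waiter = fake_waiter_get(0)) != -1) {
		printf("VM %lld was waiting for our mailbox\n",
		       (long long)waiter);
	}
	return 0;
}
```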
 
 /**
@@ -323,35 +322,35 @@
 TEST(mailbox, secondary_to_primary_notification)
 {
 	const char message[] = "not ready echo";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "echo_with_notification", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 
 	/* Send a message to the echo service twice. The second should fail. */
-	memcpy_s(mb.send, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
+	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
 	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
+		ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
 			.func,
-		SPCI_SUCCESS_32);
-	EXPECT_SPCI_ERROR(spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1,
-					sizeof(message), SPCI_MSG_SEND_NOTIFY),
-			  SPCI_BUSY);
+		FFA_SUCCESS_32);
+	EXPECT_FFA_ERROR(ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1,
+				      sizeof(message), FFA_MSG_SEND_NOTIFY),
+			 FFA_BUSY);
 
 	/* Receive a reply for the first message. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(message));
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(message));
 	EXPECT_EQ(memcmp(mb.recv, message, sizeof(message)), 0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/* Run VM again so that it clears its mailbox. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_RX_RELEASE_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_RX_RELEASE_32);
 
 	/* Retrieve a single waiter. */
 	EXPECT_EQ(hf_mailbox_waiter_get(SERVICE_VM1), HF_PRIMARY_VM_ID);
@@ -359,7 +358,7 @@
 
 	/* Send should now succeed. */
 	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
+		ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
 			.func,
-		SPCI_SUCCESS_32);
+		FFA_SUCCESS_32);
 }
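
Both test files lean on `EXPECT_FFA_ERROR()`. One plausible shape for it, inferred purely from the call sites (the call must yield `FFA_ERROR_32` in `.func` and the expected code in `.arg2`); this is a guess at the helper, not the actual hftest definition, and the constants are illustrative:

```c
#include <assert.h>
#include <stdint.h>

struct ffa_value {
	uint64_t func;
	uint64_t arg1;
	uint64_t arg2;
};

#define FFA_ERROR_32_FAKE 0x84000060u /* Illustrative function ID. */
#define FFA_DENIED_FAKE (-6)	      /* Illustrative error code. */

#define EXPECT_FFA_ERROR(call, error)                         \
	do {                                                  \
		struct ffa_value v_ = (call);                 \
		assert(v_.func == FFA_ERROR_32_FAKE);         \
		assert((int64_t)v_.arg2 == (int64_t)(error)); \
	} while (0)

static struct ffa_value fake_denied_call(void)
{
	return (struct ffa_value){.func = FFA_ERROR_32_FAKE,
				  .arg2 = (uint64_t)(int64_t)FFA_DENIED_FAKE};
}

int main(void)
{
	EXPECT_FFA_ERROR(fake_denied_call(), FFA_DENIED_FAKE);
	return 0;
}
```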
diff --git a/test/vmapi/primary_with_secondaries/memory_sharing.c b/test/vmapi/primary_with_secondaries/memory_sharing.c
index f1a0aae..1b3c1d9 100644
--- a/test/vmapi/primary_with_secondaries/memory_sharing.c
+++ b/test/vmapi/primary_with_secondaries/memory_sharing.c
@@ -24,7 +24,7 @@
 #include "primary_with_secondary.h"
 #include "test/hftest.h"
 #include "test/vmapi/exception_handler.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 alignas(PAGE_SIZE) static uint8_t pages[4 * PAGE_SIZE];
 
@@ -33,27 +33,26 @@
  */
 static void check_cannot_send_memory(
 	struct mailbox_buffers mb,
-	struct spci_value (*send_function)(uint32_t, uint32_t),
-	struct spci_memory_region_constituent constituents[],
+	struct ffa_value (*send_function)(uint32_t, uint32_t),
+	struct ffa_memory_region_constituent constituents[],
 	int constituent_count, int32_t avoid_vm)
 
 {
-	enum spci_data_access data_access[] = {
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RO,
-		SPCI_DATA_ACCESS_RW, SPCI_DATA_ACCESS_RESERVED};
-	enum spci_instruction_access instruction_access[] = {
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NX, SPCI_INSTRUCTION_ACCESS_X,
-		SPCI_INSTRUCTION_ACCESS_RESERVED};
-	enum spci_memory_cacheability cacheability[] = {
-		SPCI_MEMORY_CACHE_RESERVED, SPCI_MEMORY_CACHE_NON_CACHEABLE,
-		SPCI_MEMORY_CACHE_RESERVED_1, SPCI_MEMORY_CACHE_WRITE_BACK};
-	enum spci_memory_cacheability device[] = {
-		SPCI_MEMORY_DEV_NGNRNE, SPCI_MEMORY_DEV_NGNRE,
-		SPCI_MEMORY_DEV_NGRE, SPCI_MEMORY_DEV_GRE};
-	enum spci_memory_shareability shareability[] = {
-		SPCI_MEMORY_SHARE_NON_SHAREABLE, SPCI_MEMORY_SHARE_RESERVED,
-		SPCI_MEMORY_OUTER_SHAREABLE, SPCI_MEMORY_INNER_SHAREABLE};
+	enum ffa_data_access data_access[] = {
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RO,
+		FFA_DATA_ACCESS_RW, FFA_DATA_ACCESS_RESERVED};
+	enum ffa_instruction_access instruction_access[] = {
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_NX,
+		FFA_INSTRUCTION_ACCESS_X, FFA_INSTRUCTION_ACCESS_RESERVED};
+	enum ffa_memory_cacheability cacheability[] = {
+		FFA_MEMORY_CACHE_RESERVED, FFA_MEMORY_CACHE_NON_CACHEABLE,
+		FFA_MEMORY_CACHE_RESERVED_1, FFA_MEMORY_CACHE_WRITE_BACK};
+	enum ffa_memory_cacheability device[] = {
+		FFA_MEMORY_DEV_NGNRNE, FFA_MEMORY_DEV_NGNRE,
+		FFA_MEMORY_DEV_NGRE, FFA_MEMORY_DEV_GRE};
+	enum ffa_memory_shareability shareability[] = {
+		FFA_MEMORY_SHARE_NON_SHAREABLE, FFA_MEMORY_SHARE_RESERVED,
+		FFA_MEMORY_OUTER_SHAREABLE, FFA_MEMORY_INNER_SHAREABLE};
 	uint32_t vms[] = {HF_PRIMARY_VM_ID, SERVICE_VM1, SERVICE_VM2};
 
 	size_t i = 0;
@@ -74,7 +73,7 @@
 					     m < ARRAY_SIZE(cacheability);
 					     ++m) {
 						uint32_t msg_size =
-							spci_memory_region_init(
+							ffa_memory_region_init(
 								mb.send,
 								HF_PRIMARY_VM_ID,
 								vms[i],
@@ -84,26 +83,26 @@
 								data_access[j],
 								instruction_access
 									[k],
-								SPCI_MEMORY_NORMAL_MEM,
+								FFA_MEMORY_NORMAL_MEM,
 								cacheability[m],
 								shareability
 									[l]);
-						struct spci_value ret =
+						struct ffa_value ret =
 							send_function(msg_size,
 								      msg_size);
 
 						EXPECT_EQ(ret.func,
-							  SPCI_ERROR_32);
+							  FFA_ERROR_32);
 						EXPECT_TRUE(
 							ret.arg2 ==
-								SPCI_DENIED ||
+								FFA_DENIED ||
 							ret.arg2 ==
-								SPCI_INVALID_PARAMETERS);
+								FFA_INVALID_PARAMETERS);
 					}
 					for (m = 0; m < ARRAY_SIZE(device);
 					     ++m) {
 						uint32_t msg_size =
-							spci_memory_region_init(
+							ffa_memory_region_init(
 								mb.send,
 								HF_PRIMARY_VM_ID,
 								vms[i],
@@ -113,21 +112,21 @@
 								data_access[j],
 								instruction_access
 									[k],
-								SPCI_MEMORY_DEVICE_MEM,
+								FFA_MEMORY_DEVICE_MEM,
 								device[m],
 								shareability
 									[l]);
-						struct spci_value ret =
+						struct ffa_value ret =
 							send_function(msg_size,
 								      msg_size);
 
 						EXPECT_EQ(ret.func,
-							  SPCI_ERROR_32);
+							  FFA_ERROR_32);
 						EXPECT_TRUE(
 							ret.arg2 ==
-								SPCI_DENIED ||
+								FFA_DENIED ||
 							ret.arg2 ==
-								SPCI_INVALID_PARAMETERS);
+								FFA_INVALID_PARAMETERS);
 					}
 				}
 			}
@@ -140,11 +139,11 @@
  */
 static void check_cannot_lend_memory(
 	struct mailbox_buffers mb,
-	struct spci_memory_region_constituent constituents[],
+	struct ffa_memory_region_constituent constituents[],
 	int constituent_count, int32_t avoid_vm)
 
 {
-	check_cannot_send_memory(mb, spci_mem_lend, constituents,
+	check_cannot_send_memory(mb, ffa_mem_lend, constituents,
 				 constituent_count, avoid_vm);
 }
 
@@ -153,11 +152,11 @@
  */
 static void check_cannot_share_memory(
 	struct mailbox_buffers mb,
-	struct spci_memory_region_constituent constituents[],
+	struct ffa_memory_region_constituent constituents[],
 	int constituent_count, int32_t avoid_vm)
 
 {
-	check_cannot_send_memory(mb, spci_mem_share, constituents,
+	check_cannot_send_memory(mb, ffa_mem_share, constituents,
 				 constituent_count, avoid_vm);
 }
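
`check_cannot_send_memory()` above sweeps the full cartesian product of data access, instruction access, shareability and cacheability (or device) attributes, expecting every combination to be rejected. A stripped-down sketch of that enumeration shape, with placeholder enums rather than the real ABI encodings:

```c
#include <stddef.h>
#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

enum access { ACC_NOT_SPECIFIED, ACC_RO, ACC_RW, ACC_RESERVED };
enum share { SH_NONE, SH_RESERVED, SH_OUTER, SH_INNER };

int main(void)
{
	const enum access data[] = {ACC_NOT_SPECIFIED, ACC_RO, ACC_RW,
				    ACC_RESERVED};
	const enum share shareability[] = {SH_NONE, SH_RESERVED, SH_OUTER,
					   SH_INNER};
	size_t tried = 0;

	for (size_t j = 0; j < ARRAY_SIZE(data); ++j) {
		for (size_t l = 0; l < ARRAY_SIZE(shareability); ++l) {
			/*
			 * The real helper additionally loops over
			 * instruction access and cacheability/device
			 * attributes, builds a region descriptor and
			 * expects FFA_DENIED or FFA_INVALID_PARAMETERS.
			 */
			++tried;
		}
	}
	printf("tried %zu combinations\n", tried);
	return 0;
}
```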
 
@@ -168,7 +167,7 @@
  */
 static void check_cannot_donate_memory(
 	struct mailbox_buffers mb,
-	struct spci_memory_region_constituent constituents[],
+	struct ffa_memory_region_constituent constituents[],
 	int constituent_count, int32_t avoid_vm)
 {
 	uint32_t vms[] = {HF_PRIMARY_VM_ID, SERVICE_VM1, SERVICE_VM2};
@@ -176,21 +175,21 @@
 	size_t i;
 	for (i = 0; i < ARRAY_SIZE(vms); ++i) {
 		uint32_t msg_size;
-		struct spci_value ret;
+		struct ffa_value ret;
 		/* Optionally skip one VM as the donate would succeed. */
 		if (vms[i] == avoid_vm) {
 			continue;
 		}
-		msg_size = spci_memory_region_init(
+		msg_size = ffa_memory_region_init(
 			mb.send, HF_PRIMARY_VM_ID, vms[i], constituents,
-			constituent_count, 0, 0, SPCI_DATA_ACCESS_NOT_SPECIFIED,
-			SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-			SPCI_MEMORY_NORMAL_MEM, SPCI_MEMORY_CACHE_WRITE_BACK,
-			SPCI_MEMORY_OUTER_SHAREABLE);
-		ret = spci_mem_donate(msg_size, msg_size);
-		EXPECT_EQ(ret.func, SPCI_ERROR_32);
-		EXPECT_TRUE(ret.arg2 == SPCI_DENIED ||
-			    ret.arg2 == SPCI_INVALID_PARAMETERS);
+			constituent_count, 0, 0, FFA_DATA_ACCESS_NOT_SPECIFIED,
+			FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+			FFA_MEMORY_NORMAL_MEM, FFA_MEMORY_CACHE_WRITE_BACK,
+			FFA_MEMORY_OUTER_SHAREABLE);
+		ret = ffa_mem_donate(msg_size, msg_size);
+		EXPECT_EQ(ret.func, FFA_ERROR_32);
+		EXPECT_TRUE(ret.arg2 == FFA_DENIED ||
+			    ret.arg2 == FFA_INVALID_PARAMETERS);
 	}
 }
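
The donate helper above, like all of these tests, describes memory as an array of constituents. A standalone sketch of that descriptor, with the struct shape simplified from the fields the tests set (assuming 4 KiB pages):

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096

/* Simplified model of the constituent descriptor the tests fill in. */
struct ffa_memory_region_constituent {
	uint64_t address;    /* Base address; must be page-aligned. */
	uint32_t page_count; /* Length of the range in pages. */
};

int main(void)
{
	static uint8_t pages[3 * PAGE_SIZE]
		__attribute__((aligned(PAGE_SIZE)));
	struct ffa_memory_region_constituent constituents[] = {
		{.address = (uint64_t)(uintptr_t)pages, .page_count = 1},
		{.address = (uint64_t)(uintptr_t)pages + PAGE_SIZE,
		 .page_count = 2},
	};

	printf("sharing %u + %u pages\n", constituents[0].page_count,
	       constituents[1].page_count);
	return 0;
}
```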
 
@@ -199,26 +198,25 @@
  * it will fail.
  */
 static void check_cannot_relinquish_memory(struct mailbox_buffers mb,
-					   spci_memory_handle_t handle)
+					   ffa_memory_handle_t handle)
 {
 	uint32_t vms[] = {HF_PRIMARY_VM_ID, SERVICE_VM1, SERVICE_VM2};
 
 	size_t i;
 	for (i = 0; i < ARRAY_SIZE(vms); ++i) {
-		struct spci_mem_relinquish *relinquish_req =
-			(struct spci_mem_relinquish *)mb.send;
+		struct ffa_mem_relinquish *relinquish_req =
+			(struct ffa_mem_relinquish *)mb.send;
 
-		*relinquish_req = (struct spci_mem_relinquish){
+		*relinquish_req = (struct ffa_mem_relinquish){
 			.handle = handle, .endpoint_count = 1};
 		relinquish_req->endpoints[0] = vms[i];
-		EXPECT_SPCI_ERROR(spci_mem_relinquish(),
-				  SPCI_INVALID_PARAMETERS);
+		EXPECT_FFA_ERROR(ffa_mem_relinquish(), FFA_INVALID_PARAMETERS);
 	}
 }
 
 TEAR_DOWN(memory_sharing)
 {
-	EXPECT_SPCI_ERROR(spci_rx_release(), SPCI_DENIED);
+	EXPECT_FFA_ERROR(ffa_rx_release(), FFA_DENIED);
 }
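
The TEAR_DOWN above encodes a leak check: every test must leave its RX buffer released, so one more release attempt is expected to fail with FFA_DENIED. A toy model of that invariant:

```c
#include <assert.h>
#include <stdbool.h>

static bool rx_full; /* True while a received message is unconsumed. */

/* Returns true on success, false when there is nothing to release. */
static bool rx_release(void)
{
	if (!rx_full) {
		return false; /* Maps to FFA_DENIED in the real ABI. */
	}
	rx_full = false;
	return true;
}

int main(void)
{
	assert(!rx_release()); /* Clean mailbox: release is denied. */
	rx_full = true;	       /* Simulate receiving a message. */
	assert(rx_release());  /* First release succeeds... */
	assert(!rx_release()); /* ...and a second one is denied again. */
	return 0;
}
```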
 
 /**
@@ -227,10 +225,10 @@
  */
 TEST(memory_sharing, concurrent)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
@@ -239,21 +237,21 @@
 	memset_s(ptr, sizeof(pages), 'a', PAGE_SIZE);
 
 	send_memory_and_retrieve_request(
-		SPCI_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NX);
+		FFA_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NX);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 
 	for (int i = 0; i < PAGE_SIZE; ++i) {
 		pages[i] = i;
 	}
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	for (int i = 0; i < PAGE_SIZE; ++i) {
 		uint8_t value = i + 1;
@@ -267,35 +265,35 @@
  */
 TEST(memory_sharing, share_concurrently_and_get_back)
 {
-	spci_memory_handle_t handle;
-	struct spci_value run_res;
+	ffa_memory_handle_t handle;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_lend_relinquish", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_lend_relinquish", mb.send);
 
 	/* Dirty the memory before sharing it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
 
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NX);
+		FFA_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NX);
 
 	/* Let the memory be returned. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-	EXPECT_EQ(spci_mem_reclaim(handle, 0).func, SPCI_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_reclaim(handle, 0).func, FFA_SUCCESS_32);
 	for (int i = 0; i < PAGE_SIZE; ++i) {
 		ASSERT_EQ(ptr[i], 'c');
 	}
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
@@ -306,12 +304,12 @@
 TEST(memory_sharing, cannot_share_device_memory)
 {
 	struct mailbox_buffers mb = set_up_mailbox();
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)PAGE_SIZE, .page_count = 1},
 	};
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_return", mb.send);
-	SERVICE_SELECT(SERVICE_VM2, "spci_memory_return", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_return", mb.send);
+	SERVICE_SELECT(SERVICE_VM2, "ffa_memory_return", mb.send);
 
 	check_cannot_lend_memory(mb, constituents, ARRAY_SIZE(constituents),
 				 -1);
@@ -326,40 +324,40 @@
  */
 TEST(memory_sharing, lend_relinquish)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
-	spci_memory_handle_t handle;
+	ffa_memory_handle_t handle;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_lend_relinquish", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_lend_relinquish", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 		{.address = (uint64_t)pages + PAGE_SIZE, .page_count = 2},
 	};
 
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 
 	/* Let the memory be returned. */
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-	EXPECT_EQ(spci_mem_reclaim(handle, 0).func, SPCI_SUCCESS_32);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_reclaim(handle, 0).func, FFA_SUCCESS_32);
 
 	/* Ensure that the secondary VM accessed the region. */
 	for (int i = 0; i < PAGE_SIZE; ++i) {
 		ASSERT_EQ(ptr[i], 'c');
 	}
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
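
The lend tests above all enforce one ordering: reclaim is only legal once the borrower has relinquished. A toy state machine capturing that rule (state names are illustrative, not FF-A terms):

```c
#include <stdbool.h>
#include <stdio.h>

enum mem_state { OWNER_HAS_IT, LENT_OUT, RELINQUISHED };

/* Reclaim must fail while a borrower still holds the memory. */
static bool try_reclaim(enum mem_state s)
{
	return s != LENT_OUT;
}

int main(void)
{
	enum mem_state s = OWNER_HAS_IT;

	s = LENT_OUT; /* FFA_MEM_LEND + retrieve by the borrower. */
	printf("reclaim while lent: %s\n", try_reclaim(s) ? "ok" : "denied");

	s = RELINQUISHED; /* Borrower calls FFA_MEM_RELINQUISH. */
	printf("reclaim after relinquish: %s\n",
	       try_reclaim(s) ? "ok" : "denied");
	return 0;
}
```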
@@ -369,32 +367,31 @@
  */
 TEST(memory_sharing, donate_relinquish)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_donate_relinquish", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_donate_relinquish", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 		{.address = (uint64_t)pages + PAGE_SIZE, .page_count = 2},
 	};
 
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		FFA_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
 		constituents, ARRAY_SIZE(constituents), 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_X);
 
 	/*
 	 * Let the service access the memory, and try and fail to relinquish it.
 	 */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 }
 
 /**
@@ -402,36 +399,35 @@
  */
 TEST(memory_sharing, give_and_get_back)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_return", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_return", mb.send);
 
 	/* Dirty the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
 
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		FFA_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
 		constituents, ARRAY_SIZE(constituents), 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_X);
 
 	/* Let the memory be returned, and retrieve it. */
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(retrieve_memory_from_message(mb.recv, mb.send, run_res, NULL),
 		  SERVICE_VM1);
 
 	for (int i = 0; i < PAGE_SIZE; ++i) {
 		ASSERT_EQ(ptr[i], 'c');
 	}
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
@@ -441,35 +437,35 @@
  */
 TEST(memory_sharing, lend_and_get_back)
 {
-	spci_memory_handle_t handle;
-	struct spci_value run_res;
+	ffa_memory_handle_t handle;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_lend_relinquish", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_lend_relinquish", mb.send);
 
 	/* Dirty the memory before lending it. */
 	memset_s(ptr, sizeof(pages), 'c', PAGE_SIZE);
 
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	/* Let the memory be returned. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-	EXPECT_EQ(spci_mem_reclaim(handle, 0).func, SPCI_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_reclaim(handle, 0).func, FFA_SUCCESS_32);
 	for (int i = 0; i < PAGE_SIZE; ++i) {
 		ASSERT_EQ(ptr[i], 'd');
 	}
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
@@ -479,40 +475,40 @@
  */
 TEST(memory_sharing, relend_after_return)
 {
-	spci_memory_handle_t handle;
-	struct spci_value run_res;
+	ffa_memory_handle_t handle;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_lend_relinquish", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_lend_relinquish", mb.send);
 
 	/* Lend the memory initially. */
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	/* Let the memory be returned. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-	EXPECT_EQ(spci_mem_reclaim(handle, 0).func, SPCI_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_reclaim(handle, 0).func, FFA_SUCCESS_32);
 
 	/* Lend the memory again after it has been returned. */
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	/* Observe the service doesn't fault when accessing the memory. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-	EXPECT_EQ(spci_mem_reclaim(handle, 0).func, SPCI_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_reclaim(handle, 0).func, FFA_SUCCESS_32);
 }
 
 /**
@@ -520,37 +516,37 @@
  */
 TEST(memory_sharing, lend_elsewhere_after_return)
 {
-	spci_memory_handle_t handle;
-	struct spci_value run_res;
+	ffa_memory_handle_t handle;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_lend_relinquish", mb.send);
-	SERVICE_SELECT(SERVICE_VM2, "spci_memory_lend_relinquish", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_lend_relinquish", mb.send);
+	SERVICE_SELECT(SERVICE_VM2, "ffa_memory_lend_relinquish", mb.send);
 
 	/* Lend the memory initially. */
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	/* Let the memory be returned. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-	EXPECT_EQ(spci_mem_reclaim(handle, 0).func, SPCI_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_reclaim(handle, 0).func, FFA_SUCCESS_32);
 
 	/* Share the memory with a different VM after it has been returned. */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
@@ -560,32 +556,32 @@
  */
 TEST(memory_sharing, give_memory_and_lose_access)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
-	struct spci_memory_region *memory_region;
-	struct spci_composite_memory_region *composite;
+	struct ffa_memory_region *memory_region;
+	struct ffa_composite_memory_region *composite;
 	uint8_t *ptr;
 
 	SERVICE_SELECT(SERVICE_VM1, "give_memory_and_fault", mb.send);
 
 	/* Have the memory be given. */
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(retrieve_memory_from_message(mb.recv, mb.send, run_res, NULL),
 		  SERVICE_VM1);
 
 	/* Check the memory was cleared. */
-	memory_region = (struct spci_memory_region *)mb.recv;
+	memory_region = (struct ffa_memory_region *)mb.recv;
 	ASSERT_EQ(memory_region->receiver_count, 1);
 	ASSERT_NE(memory_region->receivers[0].composite_memory_region_offset,
 		  0);
-	composite = spci_memory_region_get_composite(memory_region, 0);
+	composite = ffa_memory_region_get_composite(memory_region, 0);
 	ptr = (uint8_t *)composite->constituents[0].address;
 	for (int i = 0; i < PAGE_SIZE; ++i) {
 		ASSERT_EQ(ptr[i], 0);
 	}
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
@@ -595,32 +591,32 @@
  */
 TEST(memory_sharing, lend_memory_and_lose_access)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
-	struct spci_memory_region *memory_region;
-	struct spci_composite_memory_region *composite;
+	struct ffa_memory_region *memory_region;
+	struct ffa_composite_memory_region *composite;
 	uint8_t *ptr;
 
 	SERVICE_SELECT(SERVICE_VM1, "lend_memory_and_fault", mb.send);
 
 	/* Have the memory be lent. */
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(retrieve_memory_from_message(mb.recv, mb.send, run_res, NULL),
 		  SERVICE_VM1);
 
 	/* Check the memory was cleared. */
-	memory_region = (struct spci_memory_region *)mb.recv;
+	memory_region = (struct ffa_memory_region *)mb.recv;
 	ASSERT_EQ(memory_region->receiver_count, 1);
 	ASSERT_NE(memory_region->receivers[0].composite_memory_region_offset,
 		  0);
-	composite = spci_memory_region_get_composite(memory_region, 0);
+	composite = ffa_memory_region_get_composite(memory_region, 0);
 	ptr = (uint8_t *)composite->constituents[0].address;
 	for (int i = 0; i < PAGE_SIZE; ++i) {
 		ASSERT_EQ(ptr[i], 0);
 	}
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
@@ -630,18 +626,18 @@
  */
 TEST(memory_sharing, donate_check_upper_bounds)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_check_upper_bound", mb.send);
-	SERVICE_SELECT(SERVICE_VM2, "spci_check_upper_bound", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_check_upper_bound", mb.send);
+	SERVICE_SELECT(SERVICE_VM2, "ffa_check_upper_bound", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', 4 * PAGE_SIZE);
 
 	/* Specify non-contiguous memory regions. */
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 		{.address = (uint64_t)pages + PAGE_SIZE * 2, .page_count = 1},
 	};
@@ -653,13 +649,12 @@
 	pages[0] = 0;
 
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		FFA_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
 		constituents, ARRAY_SIZE(constituents), 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_X);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 
@@ -678,13 +673,12 @@
 	 * exception loop.
 	 */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2,
+		FFA_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2,
 		constituents, ARRAY_SIZE(constituents), 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_X);
 
-	run_res = spci_run(SERVICE_VM2, 0);
+	run_res = ffa_run(SERVICE_VM2, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
@@ -694,18 +688,18 @@
  */
 TEST(memory_sharing, donate_check_lower_bounds)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_check_lower_bound", mb.send);
-	SERVICE_SELECT(SERVICE_VM2, "spci_check_lower_bound", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_check_lower_bound", mb.send);
+	SERVICE_SELECT(SERVICE_VM2, "ffa_check_lower_bound", mb.send);
 
 	/* Initialise the memory before donating it. */
 	memset_s(ptr, sizeof(pages), 'b', 4 * PAGE_SIZE);
 
 	/* Specify non-contiguous memory regions. */
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 		{.address = (uint64_t)pages + PAGE_SIZE * 2, .page_count = 1},
 	};
@@ -717,13 +711,12 @@
 	pages[0] = 0;
 
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		FFA_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
 		constituents, ARRAY_SIZE(constituents), 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_X);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 
@@ -742,13 +735,12 @@
 	 * exception loop.
 	 */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2,
+		FFA_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2,
 		constituents, ARRAY_SIZE(constituents), 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_X);
 
-	run_res = spci_run(SERVICE_VM2, 0);
+	run_res = ffa_run(SERVICE_VM2, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
@@ -759,43 +751,41 @@
  */
 TEST(memory_sharing, donate_elsewhere_after_return)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_return", mb.send);
-	SERVICE_SELECT(SERVICE_VM2, "spci_memory_return", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_return", mb.send);
+	SERVICE_SELECT(SERVICE_VM2, "ffa_memory_return", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', 1 * PAGE_SIZE);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		FFA_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
 		constituents, ARRAY_SIZE(constituents), 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_X);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 
 	/* Let the memory be returned. */
 	EXPECT_EQ(retrieve_memory_from_message(mb.recv, mb.send, run_res, NULL),
 		  SERVICE_VM1);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/* Share the memory with another VM. */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2,
+		FFA_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2,
 		constituents, ARRAY_SIZE(constituents), 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_X);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
@@ -806,49 +796,48 @@
  */
 TEST(memory_sharing, donate_vms)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_donate_secondary_and_fault", mb.send);
-	SERVICE_SELECT(SERVICE_VM2, "spci_memory_receive", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_donate_secondary_and_fault", mb.send);
+	SERVICE_SELECT(SERVICE_VM2, "ffa_memory_receive", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', 1 * PAGE_SIZE);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
 	/* Set up VM2 to wait for message. */
-	run_res = spci_run(SERVICE_VM2, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_WAIT_32);
+	run_res = ffa_run(SERVICE_VM2, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_WAIT_32);
 
 	/* Donate memory. */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		FFA_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
 		constituents, ARRAY_SIZE(constituents), 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_X);
 
 	/* Let the memory be sent from VM1 to VM2. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_receiver(run_res), SERVICE_VM2);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_receiver(run_res), SERVICE_VM2);
 
 	/* Receive memory in VM2. */
-	run_res = spci_run(SERVICE_VM2, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM2, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 
 	/* Try to access memory in VM1. */
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 
 	/* Ensure that memory in VM2 remains the same. */
-	run_res = spci_run(SERVICE_VM2, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM2, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 }
 
 /**
@@ -856,32 +845,31 @@
  */
 TEST(memory_sharing, donate_twice)
 {
-	spci_memory_handle_t handle;
-	struct spci_value run_res;
+	ffa_memory_handle_t handle;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_donate_twice", mb.send);
-	SERVICE_SELECT(SERVICE_VM2, "spci_memory_receive", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_donate_twice", mb.send);
+	SERVICE_SELECT(SERVICE_VM2, "ffa_memory_receive", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', 1 * PAGE_SIZE);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
 	/* Donate memory to VM1. */
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		FFA_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
 		constituents, ARRAY_SIZE(constituents), 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_X);
 
 	/* Let the memory be received. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 
 	/* Fail to share memory again with any VM. */
 	check_cannot_share_memory(mb, constituents, ARRAY_SIZE(constituents),
@@ -894,17 +882,17 @@
 	check_cannot_relinquish_memory(mb, handle);
 
 	/* Let the memory be sent from VM1 to PRIMARY (returned). */
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(retrieve_memory_from_message(mb.recv, mb.send, run_res, NULL),
 		  SERVICE_VM1);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/* Check we have access again. */
 	ptr[0] = 'f';
 
 	/* Try and fail to donate memory from VM1 to VM2. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 }
 
 /**
@@ -918,18 +906,18 @@
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
-	msg_size = spci_memory_region_init(
+	msg_size = ffa_memory_region_init(
 		mb.send, HF_PRIMARY_VM_ID, HF_PRIMARY_VM_ID, constituents,
-		ARRAY_SIZE(constituents), 0, 0, SPCI_DATA_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED, SPCI_MEMORY_NORMAL_MEM,
-		SPCI_MEMORY_CACHE_WRITE_BACK, SPCI_MEMORY_OUTER_SHAREABLE);
+		ARRAY_SIZE(constituents), 0, 0, FFA_DATA_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_MEMORY_NORMAL_MEM,
+		FFA_MEMORY_CACHE_WRITE_BACK, FFA_MEMORY_OUTER_SHAREABLE);
 
-	EXPECT_SPCI_ERROR(spci_mem_donate(msg_size, msg_size),
-			  SPCI_INVALID_PARAMETERS);
+	EXPECT_FFA_ERROR(ffa_mem_donate(msg_size, msg_size),
+			 FFA_INVALID_PARAMETERS);
 }
 
 /**
@@ -943,17 +931,17 @@
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
-	msg_size = spci_memory_region_init(
+	msg_size = ffa_memory_region_init(
 		mb.send, HF_PRIMARY_VM_ID, HF_PRIMARY_VM_ID, constituents,
-		ARRAY_SIZE(constituents), 0, 0, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED, SPCI_MEMORY_NORMAL_MEM,
-		SPCI_MEMORY_CACHE_WRITE_BACK, SPCI_MEMORY_OUTER_SHAREABLE);
-	EXPECT_SPCI_ERROR(spci_mem_lend(msg_size, msg_size),
-			  SPCI_INVALID_PARAMETERS);
+		ARRAY_SIZE(constituents), 0, 0, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_MEMORY_NORMAL_MEM,
+		FFA_MEMORY_CACHE_WRITE_BACK, FFA_MEMORY_OUTER_SHAREABLE);
+	EXPECT_FFA_ERROR(ffa_mem_lend(msg_size, msg_size),
+			 FFA_INVALID_PARAMETERS);
 }
 
 /**
@@ -967,17 +955,17 @@
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
-	msg_size = spci_memory_region_init(
+	msg_size = ffa_memory_region_init(
 		mb.send, HF_PRIMARY_VM_ID, HF_PRIMARY_VM_ID, constituents,
-		ARRAY_SIZE(constituents), 0, 0, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED, SPCI_MEMORY_NORMAL_MEM,
-		SPCI_MEMORY_CACHE_WRITE_BACK, SPCI_MEMORY_OUTER_SHAREABLE);
-	EXPECT_SPCI_ERROR(spci_mem_share(msg_size, msg_size),
-			  SPCI_INVALID_PARAMETERS);
+		ARRAY_SIZE(constituents), 0, 0, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_MEMORY_NORMAL_MEM,
+		FFA_MEMORY_CACHE_WRITE_BACK, FFA_MEMORY_OUTER_SHAREABLE);
+	EXPECT_FFA_ERROR(ffa_mem_share(msg_size, msg_size),
+			 FFA_INVALID_PARAMETERS);
 }
 
 /**
@@ -985,62 +973,61 @@
  */
 TEST(memory_sharing, donate_invalid_source)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 	uint32_t msg_size;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_donate_invalid_source", mb.send);
-	SERVICE_SELECT(SERVICE_VM2, "spci_memory_receive", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_donate_invalid_source", mb.send);
+	SERVICE_SELECT(SERVICE_VM2, "ffa_memory_receive", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
 	/* Try invalid configurations. */
-	msg_size = spci_memory_region_init(
+	msg_size = ffa_memory_region_init(
 		mb.send, SERVICE_VM1, HF_PRIMARY_VM_ID, constituents,
-		ARRAY_SIZE(constituents), 0, 0, SPCI_DATA_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED, SPCI_MEMORY_NORMAL_MEM,
-		SPCI_MEMORY_CACHE_WRITE_BACK, SPCI_MEMORY_OUTER_SHAREABLE);
-	EXPECT_SPCI_ERROR(spci_mem_donate(msg_size, msg_size),
-			  SPCI_INVALID_PARAMETERS);
+		ARRAY_SIZE(constituents), 0, 0, FFA_DATA_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_MEMORY_NORMAL_MEM,
+		FFA_MEMORY_CACHE_WRITE_BACK, FFA_MEMORY_OUTER_SHAREABLE);
+	EXPECT_FFA_ERROR(ffa_mem_donate(msg_size, msg_size),
+			 FFA_INVALID_PARAMETERS);
 
-	msg_size = spci_memory_region_init(
+	msg_size = ffa_memory_region_init(
 		mb.send, SERVICE_VM1, SERVICE_VM1, constituents,
-		ARRAY_SIZE(constituents), 0, 0, SPCI_DATA_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED, SPCI_MEMORY_NORMAL_MEM,
-		SPCI_MEMORY_CACHE_WRITE_BACK, SPCI_MEMORY_OUTER_SHAREABLE);
-	EXPECT_SPCI_ERROR(spci_mem_donate(msg_size, msg_size),
-			  SPCI_INVALID_PARAMETERS);
+		ARRAY_SIZE(constituents), 0, 0, FFA_DATA_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_MEMORY_NORMAL_MEM,
+		FFA_MEMORY_CACHE_WRITE_BACK, FFA_MEMORY_OUTER_SHAREABLE);
+	EXPECT_FFA_ERROR(ffa_mem_donate(msg_size, msg_size),
+			 FFA_INVALID_PARAMETERS);
 
-	msg_size = spci_memory_region_init(
+	msg_size = ffa_memory_region_init(
 		mb.send, SERVICE_VM2, SERVICE_VM1, constituents,
-		ARRAY_SIZE(constituents), 0, 0, SPCI_DATA_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED, SPCI_MEMORY_NORMAL_MEM,
-		SPCI_MEMORY_CACHE_WRITE_BACK, SPCI_MEMORY_OUTER_SHAREABLE);
-	EXPECT_SPCI_ERROR(spci_mem_donate(msg_size, msg_size),
-			  SPCI_INVALID_PARAMETERS);
+		ARRAY_SIZE(constituents), 0, 0, FFA_DATA_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_MEMORY_NORMAL_MEM,
+		FFA_MEMORY_CACHE_WRITE_BACK, FFA_MEMORY_OUTER_SHAREABLE);
+	EXPECT_FFA_ERROR(ffa_mem_donate(msg_size, msg_size),
+			 FFA_INVALID_PARAMETERS);
 
 	/* Successfully donate to VM1. */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		FFA_MEM_DONATE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
 		constituents, ARRAY_SIZE(constituents), 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_X);
 
 	/* Receive and return memory from VM1. */
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(retrieve_memory_from_message(mb.recv, mb.send, run_res, NULL),
 		  SERVICE_VM1);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/* Use VM1 to fail to donate memory from the primary to VM2. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 }
 
 /**
@@ -1050,7 +1037,7 @@
 {
 	struct mailbox_buffers mb = set_up_mailbox();
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_return", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_return", mb.send);
 
 	/* Check for unaligned pages for either constituent. */
 	for (int i = 0; i < PAGE_SIZE; i++) {
@@ -1059,32 +1046,32 @@
 			if (i == 0 && j == 0) {
 				continue;
 			}
-			struct spci_memory_region_constituent constituents[] = {
+			struct ffa_memory_region_constituent constituents[] = {
 				{.address = (uint64_t)pages + i,
 				 .page_count = 1},
 				{.address = (uint64_t)pages + PAGE_SIZE + j,
 				 .page_count = 1},
 			};
-			uint32_t msg_size = spci_memory_region_init(
+			uint32_t msg_size = ffa_memory_region_init(
 				mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
 				constituents, ARRAY_SIZE(constituents), 0, 0,
-				SPCI_DATA_ACCESS_NOT_SPECIFIED,
-				SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-				SPCI_MEMORY_NORMAL_MEM,
-				SPCI_MEMORY_CACHE_WRITE_BACK,
-				SPCI_MEMORY_OUTER_SHAREABLE);
-			EXPECT_SPCI_ERROR(spci_mem_donate(msg_size, msg_size),
-					  SPCI_INVALID_PARAMETERS);
-			msg_size = spci_memory_region_init(
+				FFA_DATA_ACCESS_NOT_SPECIFIED,
+				FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+				FFA_MEMORY_NORMAL_MEM,
+				FFA_MEMORY_CACHE_WRITE_BACK,
+				FFA_MEMORY_OUTER_SHAREABLE);
+			EXPECT_FFA_ERROR(ffa_mem_donate(msg_size, msg_size),
+					 FFA_INVALID_PARAMETERS);
+			msg_size = ffa_memory_region_init(
 				mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
 				constituents, ARRAY_SIZE(constituents), 0, 0,
-				SPCI_DATA_ACCESS_RW,
-				SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-				SPCI_MEMORY_NORMAL_MEM,
-				SPCI_MEMORY_CACHE_WRITE_BACK,
-				SPCI_MEMORY_OUTER_SHAREABLE);
-			EXPECT_SPCI_ERROR(spci_mem_lend(msg_size, msg_size),
-					  SPCI_INVALID_PARAMETERS);
+				FFA_DATA_ACCESS_RW,
+				FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+				FFA_MEMORY_NORMAL_MEM,
+				FFA_MEMORY_CACHE_WRITE_BACK,
+				FFA_MEMORY_OUTER_SHAREABLE);
+			EXPECT_FFA_ERROR(ffa_mem_lend(msg_size, msg_size),
+					 FFA_INVALID_PARAMETERS);
 		}
 	}
 }
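
The loop above probes every misaligned base address and expects FFA_INVALID_PARAMETERS. A minimal sketch of the rule being tested, assuming 4 KiB pages:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

/* A constituent is only valid if its base is page-aligned. */
static bool constituent_aligned(uint64_t address)
{
	return (address % PAGE_SIZE) == 0;
}

int main(void)
{
	printf("aligned: %d, misaligned: %d\n",
	       constituent_aligned(0x80000000u),
	       constituent_aligned(0x80000001u));
	return 0;
}
```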
@@ -1094,45 +1081,45 @@
  */
 TEST(memory_sharing, lend_invalid_source)
 {
-	struct spci_value run_res;
-	spci_memory_handle_t handle;
+	struct ffa_value run_res;
+	ffa_memory_handle_t handle;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 	uint32_t msg_size;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_lend_invalid_source", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_lend_invalid_source", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
 	/* Check cannot swap VM IDs. */
-	msg_size = spci_memory_region_init(
+	msg_size = ffa_memory_region_init(
 		mb.send, SERVICE_VM1, HF_PRIMARY_VM_ID, constituents,
-		ARRAY_SIZE(constituents), 0, 0, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED, SPCI_MEMORY_NORMAL_MEM,
-		SPCI_MEMORY_CACHE_WRITE_BACK, SPCI_MEMORY_OUTER_SHAREABLE);
-	EXPECT_SPCI_ERROR(spci_mem_lend(msg_size, msg_size),
-			  SPCI_INVALID_PARAMETERS);
+		ARRAY_SIZE(constituents), 0, 0, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_MEMORY_NORMAL_MEM,
+		FFA_MEMORY_CACHE_WRITE_BACK, FFA_MEMORY_OUTER_SHAREABLE);
+	EXPECT_FFA_ERROR(ffa_mem_lend(msg_size, msg_size),
+			 FFA_INVALID_PARAMETERS);
 
 	/* Lend memory to VM1. */
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	/* Receive and return memory from VM1. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-	EXPECT_EQ(spci_mem_reclaim(handle, 0).func, SPCI_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_reclaim(handle, 0).func, FFA_SUCCESS_32);
 
 	/* Try to lend memory from primary in VM1. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 }
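
The `struct ffa_value` inspected here packs the registers returned by an FF-A call. A sketch of the assumed layout (the real definition is in `inc/vmapi/hf/ffa.h`):

```c
/*
 * Assumed shape of struct ffa_value: one field per AArch64 argument
 * register of the SMC/HVC calling convention. .func carries the
 * function or status ID (FFA_SUCCESS_32, FFA_ERROR_32, ...); the
 * meaning of the remaining fields depends on that ID.
 */
struct ffa_value {
	uint64_t func; /* w0: function or status ID. */
	uint64_t arg1; /* w1: e.g. VM and vCPU IDs for FFA_RUN. */
	uint64_t arg2; /* w2: error code after FFA_ERROR_32. */
	uint64_t arg3;
	uint64_t arg4;
	uint64_t arg5;
	uint64_t arg6;
	uint64_t arg7;
};
```
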
 
 /**
@@ -1141,50 +1128,50 @@
  */
 TEST(memory_sharing, lend_relinquish_X_RW)
 {
-	struct spci_value run_res;
-	spci_memory_handle_t handle;
+	struct ffa_value run_res;
+	ffa_memory_handle_t handle;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_lend_relinquish_RW", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_lend_relinquish_RW", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	/* Let the memory be accessed. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 
 	/* Let service write to and return memory. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-	EXPECT_EQ(spci_mem_reclaim(handle, 0).func, SPCI_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_reclaim(handle, 0).func, FFA_SUCCESS_32);
 
 	/* Re-initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
 
 	send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RO,
-		SPCI_DATA_ACCESS_RO, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RO,
+		FFA_DATA_ACCESS_RO, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	/* Let the memory be accessed. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
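
The `send_memory_and_retrieve_request` helper used above wraps the two-step FF-A flow. A minimal sketch of the sender half, assuming the `FFA_MEM_LEND_32` variant and reusing only calls visible in this diff (reading the handle from `.arg2` is an assumption about where `FFA_SUCCESS` carries it):

```c
/*
 * Minimal sketch of the sender side: describe the region in the TX
 * buffer, invoke the lend primitive, and keep the returned handle so
 * the memory can later be reclaimed. The receiver is then expected to
 * retrieve the region against the same handle.
 */
static ffa_memory_handle_t lend_one_page_sketch(void *tx, uint8_t *page)
{
	struct ffa_memory_region_constituent constituents[] = {
		{.address = (uint64_t)page, .page_count = 1},
	};
	uint32_t msg_size = ffa_memory_region_init(
		tx, HF_PRIMARY_VM_ID, SERVICE_VM1, constituents,
		ARRAY_SIZE(constituents), 0, 0, FFA_DATA_ACCESS_RW,
		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_MEMORY_NORMAL_MEM,
		FFA_MEMORY_CACHE_WRITE_BACK, FFA_MEMORY_OUTER_SHAREABLE);
	struct ffa_value ret = ffa_mem_lend(msg_size, msg_size);

	EXPECT_EQ(ret.func, FFA_SUCCESS_32);
	/* Assumption: the low half of the handle is returned in w2. */
	return (ffa_memory_handle_t)ret.arg2;
}
```
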
@@ -1195,30 +1182,30 @@
  */
 TEST(memory_sharing, share_X_RW)
 {
-	spci_memory_handle_t handle;
-	struct spci_value run_res;
+	ffa_memory_handle_t handle;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_share_fail", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_share_fail", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	/* Let the secondary VM fail to retrieve the memory. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/* Ensure we still have access. */
 	for (int i = 0; i < PAGE_SIZE; ++i) {
@@ -1227,21 +1214,21 @@
 	}
 
 	/* Reclaim the memory. */
-	EXPECT_EQ(spci_mem_reclaim(handle, 0).func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_reclaim(handle, 0).func, FFA_SUCCESS_32);
 
 	/* Re-initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
 
 	send_memory_and_retrieve_request(
-		SPCI_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RO,
-		SPCI_DATA_ACCESS_RO, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RO,
+		FFA_DATA_ACCESS_RO, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	/* Let the secondary VM fail to retrieve the memory. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/* Ensure we still have access. */
 	for (int i = 0; i < PAGE_SIZE; ++i) {
@@ -1250,7 +1237,7 @@
 	}
 
 	/* Reclaim the memory. */
-	EXPECT_EQ(spci_mem_reclaim(handle, 0).func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_reclaim(handle, 0).func, FFA_SUCCESS_32);
 }
 
 /**
@@ -1259,29 +1246,29 @@
  */
 TEST(memory_sharing, share_relinquish_NX_RW)
 {
-	struct spci_value run_res;
-	spci_memory_handle_t handle;
+	struct ffa_value run_res;
+	ffa_memory_handle_t handle;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_lend_relinquish_RW", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_lend_relinquish_RW", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NX);
+		FFA_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NX);
 
 	/* Let the memory be accessed. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 
 	/* Ensure we still have access. */
 	for (int i = 0; i < PAGE_SIZE; ++i) {
@@ -1289,23 +1276,23 @@
 	}
 
 	/* Let service write to and return memory. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-	EXPECT_EQ(spci_mem_reclaim(handle, 0).func, SPCI_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_reclaim(handle, 0).func, FFA_SUCCESS_32);
 
 	/* Re-initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE);
 
 	send_memory_and_retrieve_request(
-		SPCI_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RO,
-		SPCI_DATA_ACCESS_RO, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NX);
+		FFA_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RO,
+		FFA_DATA_ACCESS_RO, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NX);
 
 	/* Let the memory be accessed. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 
 	/* Ensure we still have access. */
 	for (int i = 0; i < PAGE_SIZE; ++i) {
@@ -1313,7 +1300,7 @@
 		ptr[i]++;
 	}
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
@@ -1325,31 +1312,31 @@
 {
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
-	spci_memory_handle_t handle;
-	struct spci_value run_res;
+	ffa_memory_handle_t handle;
+	struct ffa_value run_res;
 	size_t i;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_share_relinquish_clear",
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_share_relinquish_clear",
 		       mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages) * 2, 'b', PAGE_SIZE * 2);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 2},
 	};
 
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NX);
+		FFA_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NX);
 
	/* Let the service receive the memory, fail to clear it, and return it. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-	EXPECT_EQ(spci_mem_reclaim(handle, 0).func, SPCI_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_reclaim(handle, 0).func, FFA_SUCCESS_32);
 
 	/* Check that it has not been cleared. */
 	for (i = 0; i < PAGE_SIZE * 2; ++i) {
@@ -1362,12 +1349,12 @@
  */
 TEST(memory_sharing, lend_relinquish_RW_X)
 {
-	spci_memory_handle_t handle;
-	struct spci_value run_res;
+	ffa_memory_handle_t handle;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_lend_relinquish_X", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_lend_relinquish_X", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 0, PAGE_SIZE);
@@ -1376,29 +1363,29 @@
 	/* Set memory to contain the RET instruction to attempt to execute. */
 	*ptr2 = 0xD65F03C0;
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	/* Attempt to execute from memory. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-	EXPECT_EQ(spci_mem_reclaim(handle, 0).func, SPCI_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_reclaim(handle, 0).func, FFA_SUCCESS_32);
 
 	send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NX);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NX);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
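
`0xD65F03C0` above is the AArch64 encoding of `RET`. A sketch of what the receiving service is assumed to do with the lent page in these `*_relinquish_X` tests:

```c
/*
 * Sketch of the receiver-side probe: treat the start of the lent page
 * as code and call it. With FFA_INSTRUCTION_ACCESS_X the single RET
 * placed there returns immediately; with FFA_INSTRUCTION_ACCESS_NX the
 * instruction fetch faults and the exception handler reports the abort
 * back to the primary.
 */
static void try_execute_lent_page(uint64_t address)
{
	void (*entry)(void) = (void (*)(void))address;

	entry();
}
```
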
@@ -1408,12 +1395,12 @@
  */
 TEST(memory_sharing, lend_relinquish_RO_X)
 {
-	spci_memory_handle_t handle;
-	struct spci_value run_res;
+	ffa_memory_handle_t handle;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_lend_relinquish_X", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_lend_relinquish_X", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 0, PAGE_SIZE);
@@ -1422,29 +1409,29 @@
 	/* Set memory to contain the RET instruction to attempt to execute. */
 	*ptr2 = 0xD65F03C0;
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 	};
 
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RO,
-		SPCI_DATA_ACCESS_RO, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RO,
+		FFA_DATA_ACCESS_RO, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	/* Attempt to execute from memory. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-	EXPECT_EQ(spci_mem_reclaim(handle, 0).func, SPCI_SUCCESS_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_reclaim(handle, 0).func, FFA_SUCCESS_32);
 
 	send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RO,
-		SPCI_DATA_ACCESS_RO, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NX);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RO,
+		FFA_DATA_ACCESS_RO, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NX);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
@@ -1454,54 +1441,54 @@
  */
 TEST(memory_sharing, lend_donate)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 	uint32_t msg_size;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_lend_relinquish_RW", mb.send);
-	SERVICE_SELECT(SERVICE_VM2, "spci_memory_lend_relinquish_RW", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_lend_relinquish_RW", mb.send);
+	SERVICE_SELECT(SERVICE_VM2, "ffa_memory_lend_relinquish_RW", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages) * 2, 'b', PAGE_SIZE * 2);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 2},
 	};
 
 	/* Lend memory to VM1. */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RO,
-		SPCI_DATA_ACCESS_RO, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RO,
+		FFA_DATA_ACCESS_RO, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	/* Let the memory be accessed. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 
 	/* Ensure we can't donate any subsection of memory to another VM. */
 	constituents[0].page_count = 1;
 	for (int i = 1; i < PAGE_SIZE * 2; i++) {
 		constituents[0].address = (uint64_t)pages + PAGE_SIZE;
-		msg_size = spci_memory_region_init(
+		msg_size = ffa_memory_region_init(
 			mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2, constituents,
 			ARRAY_SIZE(constituents), 0, 0,
-			SPCI_DATA_ACCESS_NOT_SPECIFIED,
-			SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-			SPCI_MEMORY_NORMAL_MEM, SPCI_MEMORY_CACHE_WRITE_BACK,
-			SPCI_MEMORY_OUTER_SHAREABLE);
-		EXPECT_SPCI_ERROR(spci_mem_donate(msg_size, msg_size),
-				  SPCI_DENIED);
+			FFA_DATA_ACCESS_NOT_SPECIFIED,
+			FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+			FFA_MEMORY_NORMAL_MEM, FFA_MEMORY_CACHE_WRITE_BACK,
+			FFA_MEMORY_OUTER_SHAREABLE);
+		EXPECT_FFA_ERROR(ffa_mem_donate(msg_size, msg_size),
+				 FFA_DENIED);
 	}
 
 	/* Ensure we can't donate to the only borrower. */
-	msg_size = spci_memory_region_init(
+	msg_size = ffa_memory_region_init(
 		mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1, constituents,
-		ARRAY_SIZE(constituents), 0, 0, SPCI_DATA_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED, SPCI_MEMORY_NORMAL_MEM,
-		SPCI_MEMORY_CACHE_WRITE_BACK, SPCI_MEMORY_OUTER_SHAREABLE);
-	EXPECT_SPCI_ERROR(spci_mem_donate(msg_size, msg_size), SPCI_DENIED);
+		ARRAY_SIZE(constituents), 0, 0, FFA_DATA_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_MEMORY_NORMAL_MEM,
+		FFA_MEMORY_CACHE_WRITE_BACK, FFA_MEMORY_OUTER_SHAREABLE);
+	EXPECT_FFA_ERROR(ffa_mem_donate(msg_size, msg_size), FFA_DENIED);
 }
 
 /**
@@ -1509,31 +1496,31 @@
  */
 TEST(memory_sharing, share_donate)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 	uint32_t msg_size;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_lend_relinquish_RW", mb.send);
-	SERVICE_SELECT(SERVICE_VM2, "spci_memory_lend_relinquish_RW", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_lend_relinquish_RW", mb.send);
+	SERVICE_SELECT(SERVICE_VM2, "ffa_memory_lend_relinquish_RW", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE * 4);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 2},
 		{.address = (uint64_t)pages + PAGE_SIZE * 2, .page_count = 2},
 	};
 
 	send_memory_and_retrieve_request(
-		SPCI_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RO,
-		SPCI_DATA_ACCESS_RO, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NX);
+		FFA_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RO,
+		FFA_DATA_ACCESS_RO, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NX);
 
 	/* Let the memory be accessed. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 
 	/* Attempt to share the same area of memory. */
 	check_cannot_share_memory(mb, constituents, ARRAY_SIZE(constituents),
@@ -1543,24 +1530,24 @@
 	constituents[0].page_count = 1;
 	for (int i = 1; i < PAGE_SIZE * 2; i++) {
 		constituents[0].address = (uint64_t)pages + PAGE_SIZE;
-		msg_size = spci_memory_region_init(
+		msg_size = ffa_memory_region_init(
 			mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2, constituents,
 			ARRAY_SIZE(constituents), 0, 0,
-			SPCI_DATA_ACCESS_NOT_SPECIFIED,
-			SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-			SPCI_MEMORY_NORMAL_MEM, SPCI_MEMORY_CACHE_WRITE_BACK,
-			SPCI_MEMORY_OUTER_SHAREABLE);
-		EXPECT_SPCI_ERROR(spci_mem_donate(msg_size, msg_size),
-				  SPCI_DENIED);
+			FFA_DATA_ACCESS_NOT_SPECIFIED,
+			FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+			FFA_MEMORY_NORMAL_MEM, FFA_MEMORY_CACHE_WRITE_BACK,
+			FFA_MEMORY_OUTER_SHAREABLE);
+		EXPECT_FFA_ERROR(ffa_mem_donate(msg_size, msg_size),
+				 FFA_DENIED);
 	}
 
 	/* Ensure we can't donate to the only borrower. */
-	msg_size = spci_memory_region_init(
+	msg_size = ffa_memory_region_init(
 		mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1, constituents,
-		ARRAY_SIZE(constituents), 0, 0, SPCI_DATA_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED, SPCI_MEMORY_NORMAL_MEM,
-		SPCI_MEMORY_CACHE_WRITE_BACK, SPCI_MEMORY_OUTER_SHAREABLE);
-	EXPECT_SPCI_ERROR(spci_mem_donate(msg_size, msg_size), SPCI_DENIED);
+		ARRAY_SIZE(constituents), 0, 0, FFA_DATA_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_MEMORY_NORMAL_MEM,
+		FFA_MEMORY_CACHE_WRITE_BACK, FFA_MEMORY_OUTER_SHAREABLE);
+	EXPECT_FFA_ERROR(ffa_mem_donate(msg_size, msg_size), FFA_DENIED);
 }
 
 /**
@@ -1568,33 +1555,33 @@
  */
 TEST(memory_sharing, lend_twice)
 {
-	spci_memory_handle_t handle;
-	struct spci_value run_res;
+	ffa_memory_handle_t handle;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 	uint32_t msg_size;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_lend_twice", mb.send);
-	SERVICE_SELECT(SERVICE_VM2, "spci_memory_lend_twice", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_lend_twice", mb.send);
+	SERVICE_SELECT(SERVICE_VM2, "ffa_memory_lend_twice", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages), 'b', PAGE_SIZE * 4);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 2},
 		{.address = (uint64_t)pages + PAGE_SIZE * 3, .page_count = 1},
 	};
 
 	/* Lend memory to VM1. */
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	/* Let the memory be accessed. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 
 	/* Attempt to lend the same area of memory. */
 	check_cannot_lend_memory(mb, constituents, ARRAY_SIZE(constituents),
@@ -1609,7 +1596,7 @@
 	check_cannot_relinquish_memory(mb, handle);
 
 	/* Now attempt to share only a portion of the same area of memory. */
-	struct spci_memory_region_constituent constituents_subsection[] = {
+	struct ffa_memory_region_constituent constituents_subsection[] = {
 		{.address = (uint64_t)pages + PAGE_SIZE * 3, .page_count = 1},
 	};
 	check_cannot_lend_memory(mb, constituents_subsection,
@@ -1622,14 +1609,13 @@
 	constituents[0].page_count = 1;
 	for (int i = 0; i < 2; i++) {
 		constituents[0].address = (uint64_t)pages + i * PAGE_SIZE;
-		msg_size = spci_memory_region_init(
+		msg_size = ffa_memory_region_init(
 			mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2, constituents,
-			ARRAY_SIZE(constituents), 0, 0, SPCI_DATA_ACCESS_RO,
-			SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-			SPCI_MEMORY_NORMAL_MEM, SPCI_MEMORY_CACHE_WRITE_BACK,
-			SPCI_MEMORY_OUTER_SHAREABLE);
-		EXPECT_SPCI_ERROR(spci_mem_lend(msg_size, msg_size),
-				  SPCI_DENIED);
+			ARRAY_SIZE(constituents), 0, 0, FFA_DATA_ACCESS_RO,
+			FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+			FFA_MEMORY_NORMAL_MEM, FFA_MEMORY_CACHE_WRITE_BACK,
+			FFA_MEMORY_OUTER_SHAREABLE);
+		EXPECT_FFA_ERROR(ffa_mem_lend(msg_size, msg_size), FFA_DENIED);
 	}
 }
 
@@ -1638,31 +1624,31 @@
  */
 TEST(memory_sharing, share_twice)
 {
-	spci_memory_handle_t handle;
-	struct spci_value run_res;
+	ffa_memory_handle_t handle;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 	uint32_t msg_size;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_lend_twice", mb.send);
-	SERVICE_SELECT(SERVICE_VM2, "spci_memory_lend_twice", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_lend_twice", mb.send);
+	SERVICE_SELECT(SERVICE_VM2, "ffa_memory_lend_twice", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages) * 2, 'b', PAGE_SIZE * 2);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 2},
 	};
 
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NX);
+		FFA_MEM_SHARE_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NX);
 
 	/* Let the memory be accessed. */
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 
 	/*
 	 * Attempting to share or lend the same area of memory with any VM
@@ -1682,14 +1668,13 @@
 	constituents[0].page_count = 1;
 	for (int i = 0; i < 2; i++) {
 		constituents[0].address = (uint64_t)pages + i * PAGE_SIZE;
-		msg_size = spci_memory_region_init(
+		msg_size = ffa_memory_region_init(
 			mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2, constituents,
-			ARRAY_SIZE(constituents), 0, 0, SPCI_DATA_ACCESS_RO,
-			SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-			SPCI_MEMORY_NORMAL_MEM, SPCI_MEMORY_CACHE_WRITE_BACK,
-			SPCI_MEMORY_OUTER_SHAREABLE);
-		EXPECT_SPCI_ERROR(spci_mem_share(msg_size, msg_size),
-				  SPCI_DENIED);
+			ARRAY_SIZE(constituents), 0, 0, FFA_DATA_ACCESS_RO,
+			FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+			FFA_MEMORY_NORMAL_MEM, FFA_MEMORY_CACHE_WRITE_BACK,
+			FFA_MEMORY_OUTER_SHAREABLE);
+		EXPECT_FFA_ERROR(ffa_mem_share(msg_size, msg_size), FFA_DENIED);
 	}
 }
 
@@ -1700,27 +1685,27 @@
 {
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
-	spci_memory_handle_t handle;
+	ffa_memory_handle_t handle;
 	size_t i;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_return", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_return", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages) * 2, 'b', PAGE_SIZE * 2);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 2},
 	};
 
 	/* Lend memory with clear flag. */
 	handle = send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
 		constituents, ARRAY_SIZE(constituents),
-		SPCI_MEMORY_REGION_FLAG_CLEAR, SPCI_DATA_ACCESS_RO,
-		SPCI_DATA_ACCESS_RO, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEMORY_REGION_FLAG_CLEAR, FFA_DATA_ACCESS_RO,
+		FFA_DATA_ACCESS_RO, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 	/* Take it back again. */
-	spci_mem_reclaim(handle, 0);
+	ffa_mem_reclaim(handle, 0);
 
 	/* Check that it has not been cleared. */
 	for (i = 0; i < PAGE_SIZE * 2; ++i) {
@@ -1738,23 +1723,23 @@
 	uint32_t msg_size;
 	size_t i;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_memory_return", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_memory_return", mb.send);
 
 	/* Initialise the memory before giving it. */
 	memset_s(ptr, sizeof(pages) * 2, 'b', PAGE_SIZE * 2);
 
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 2},
 	};
 
-	msg_size = spci_memory_region_init(
+	msg_size = ffa_memory_region_init(
 		mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1, constituents,
-		ARRAY_SIZE(constituents), 0, SPCI_MEMORY_REGION_FLAG_CLEAR,
-		SPCI_DATA_ACCESS_RO, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_MEMORY_NORMAL_MEM, SPCI_MEMORY_CACHE_WRITE_BACK,
-		SPCI_MEMORY_OUTER_SHAREABLE);
-	EXPECT_SPCI_ERROR(spci_mem_share(msg_size, msg_size),
-			  SPCI_INVALID_PARAMETERS);
+		ARRAY_SIZE(constituents), 0, FFA_MEMORY_REGION_FLAG_CLEAR,
+		FFA_DATA_ACCESS_RO, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_MEMORY_NORMAL_MEM, FFA_MEMORY_CACHE_WRITE_BACK,
+		FFA_MEMORY_OUTER_SHAREABLE);
+	EXPECT_FFA_ERROR(ffa_mem_share(msg_size, msg_size),
+			 FFA_INVALID_PARAMETERS);
 
 	/* Check that it has not been cleared. */
 	for (i = 0; i < PAGE_SIZE * 2; ++i) {
@@ -1763,22 +1748,22 @@
 }
 
 /**
- * SPCI: Verify past the upper bound of the lent region cannot be accessed.
+ * FF-A: Verify that memory past the upper bound of the lent region cannot be accessed.
  */
-TEST(memory_sharing, spci_lend_check_upper_bounds)
+TEST(memory_sharing, ffa_lend_check_upper_bounds)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_check_upper_bound", mb.send);
-	SERVICE_SELECT(SERVICE_VM2, "spci_check_upper_bound", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_check_upper_bound", mb.send);
+	SERVICE_SELECT(SERVICE_VM2, "ffa_check_upper_bound", mb.send);
 
 	/* Initialise the memory before lending it. */
 	memset_s(ptr, sizeof(pages), 'b', 4 * PAGE_SIZE);
 
 	/* Specify non-contiguous memory regions. */
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 		{.address = (uint64_t)pages + PAGE_SIZE * 2, .page_count = 1},
 	};
@@ -1790,12 +1775,12 @@
 	pages[0] = 0;
 
 	send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 
@@ -1814,33 +1799,33 @@
 	 * exception loop.
 	 */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
-	run_res = spci_run(SERVICE_VM2, 0);
+	run_res = ffa_run(SERVICE_VM2, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
 
 /**
- * SPCI: Verify past the lower bound of the lent region cannot be accessed.
+ * FF-A: Verify that memory past the lower bound of the lent region cannot be accessed.
  */
-TEST(memory_sharing, spci_lend_check_lower_bounds)
+TEST(memory_sharing, ffa_lend_check_lower_bounds)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 	uint8_t *ptr = pages;
 
-	SERVICE_SELECT(SERVICE_VM1, "spci_check_lower_bound", mb.send);
-	SERVICE_SELECT(SERVICE_VM2, "spci_check_lower_bound", mb.send);
+	SERVICE_SELECT(SERVICE_VM1, "ffa_check_lower_bound", mb.send);
+	SERVICE_SELECT(SERVICE_VM2, "ffa_check_lower_bound", mb.send);
 
 	/* Initialise the memory before lending it. */
 	memset_s(ptr, sizeof(pages), 'b', 4 * PAGE_SIZE);
 
 	/* Specify non-contiguous memory regions. */
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)pages, .page_count = 1},
 		{.address = (uint64_t)pages + PAGE_SIZE * 2, .page_count = 1},
 	};
@@ -1852,12 +1837,12 @@
 	pages[0] = 0;
 
 	send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM1,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 
@@ -1876,12 +1861,12 @@
 	 * exception loop.
 	 */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2,
-		constituents, ARRAY_SIZE(constituents), 0, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_LEND_32, mb.send, HF_PRIMARY_VM_ID, SERVICE_VM2,
+		constituents, ARRAY_SIZE(constituents), 0, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
-	run_res = spci_run(SERVICE_VM2, 0);
+	run_res = ffa_run(SERVICE_VM2, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
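
Across this file the primary drives services with the same three-call rhythm. A condensed sketch of that pattern, using only calls that appear above:

```c
/*
 * Condensed sketch of the scheduling rhythm these tests repeat: run the
 * service's vCPU, interpret the function ID it hands back, and release
 * the RX buffer once a message has been consumed so the next one can be
 * delivered.
 */
static void run_and_expect_message(ffa_vm_id_t vm_id)
{
	struct ffa_value run_res = ffa_run(vm_id, 0);

	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
	/* The payload now sits in the primary's RX buffer. */
	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
}
```
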
diff --git a/test/vmapi/primary_with_secondaries/no_services.c b/test/vmapi/primary_with_secondaries/no_services.c
index 2757df2..956d5bf 100644
--- a/test/vmapi/primary_with_secondaries/no_services.c
+++ b/test/vmapi/primary_with_secondaries/no_services.c
@@ -25,7 +25,7 @@
 
 #include "primary_with_secondary.h"
 #include "test/hftest.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 static alignas(PAGE_SIZE) uint8_t send_page[PAGE_SIZE];
 static alignas(PAGE_SIZE) uint8_t recv_page[PAGE_SIZE];
@@ -64,7 +64,7 @@
  */
 TEST(hf_vcpu_get_count, reserved_vm_id)
 {
-	spci_vm_id_t id;
+	ffa_vm_id_t id;
 
 	for (id = 0; id < HF_VM_ID_OFFSET; ++id) {
 		EXPECT_EQ(hf_vcpu_get_count(id), 0);
@@ -83,42 +83,42 @@
 /**
  * The primary can't be run by the hypervisor.
  */
-TEST(spci_run, cannot_run_primary)
+TEST(ffa_run, cannot_run_primary)
 {
-	struct spci_value res = spci_run(HF_PRIMARY_VM_ID, 0);
-	EXPECT_SPCI_ERROR(res, SPCI_INVALID_PARAMETERS);
+	struct ffa_value res = ffa_run(HF_PRIMARY_VM_ID, 0);
+	EXPECT_FFA_ERROR(res, FFA_INVALID_PARAMETERS);
 }
 
 /**
  * Can only run a VM that exists.
  */
-TEST(spci_run, cannot_run_absent_secondary)
+TEST(ffa_run, cannot_run_absent_secondary)
 {
-	struct spci_value res = spci_run(1234, 0);
-	EXPECT_SPCI_ERROR(res, SPCI_INVALID_PARAMETERS);
+	struct ffa_value res = ffa_run(1234, 0);
+	EXPECT_FFA_ERROR(res, FFA_INVALID_PARAMETERS);
 }
 
 /**
  * Can only run a vCPU that exists.
  */
-TEST(spci_run, cannot_run_absent_vcpu)
+TEST(ffa_run, cannot_run_absent_vcpu)
 {
-	struct spci_value res = spci_run(SERVICE_VM1, 1234);
-	EXPECT_SPCI_ERROR(res, SPCI_INVALID_PARAMETERS);
+	struct ffa_value res = ffa_run(SERVICE_VM1, 1234);
+	EXPECT_FFA_ERROR(res, FFA_INVALID_PARAMETERS);
 }
 
 /**
  * The configured send/receive addresses can't be device memory.
  */
-TEST(spci_rxtx_map, fails_with_device_memory)
+TEST(ffa_rxtx_map, fails_with_device_memory)
 {
-	EXPECT_SPCI_ERROR(spci_rxtx_map(PAGE_SIZE, PAGE_SIZE * 2), SPCI_DENIED);
+	EXPECT_FFA_ERROR(ffa_rxtx_map(PAGE_SIZE, PAGE_SIZE * 2), FFA_DENIED);
 }
 
 /**
  * The configured send/receive addresses can't be unaligned.
  */
-TEST(spci_rxtx_map, fails_with_unaligned_pointer)
+TEST(ffa_rxtx_map, fails_with_unaligned_pointer)
 {
 	uint8_t maybe_aligned[2];
 	hf_ipaddr_t unaligned_addr = (hf_ipaddr_t)&maybe_aligned[1];
@@ -127,60 +127,60 @@
 	/* Check that the address is unaligned. */
 	ASSERT_EQ(unaligned_addr & 1, 1);
 
-	EXPECT_SPCI_ERROR(spci_rxtx_map(aligned_addr, unaligned_addr),
-			  SPCI_INVALID_PARAMETERS);
-	EXPECT_SPCI_ERROR(spci_rxtx_map(unaligned_addr, aligned_addr),
-			  SPCI_INVALID_PARAMETERS);
-	EXPECT_SPCI_ERROR(spci_rxtx_map(unaligned_addr, unaligned_addr),
-			  SPCI_INVALID_PARAMETERS);
+	EXPECT_FFA_ERROR(ffa_rxtx_map(aligned_addr, unaligned_addr),
+			 FFA_INVALID_PARAMETERS);
+	EXPECT_FFA_ERROR(ffa_rxtx_map(unaligned_addr, aligned_addr),
+			 FFA_INVALID_PARAMETERS);
+	EXPECT_FFA_ERROR(ffa_rxtx_map(unaligned_addr, unaligned_addr),
+			 FFA_INVALID_PARAMETERS);
 }
 
 /**
  * The configured send/receive addresses can't be the same page.
  */
-TEST(spci_rxtx_map, fails_with_same_page)
+TEST(ffa_rxtx_map, fails_with_same_page)
 {
-	EXPECT_SPCI_ERROR(spci_rxtx_map(send_page_addr, send_page_addr),
-			  SPCI_INVALID_PARAMETERS);
-	EXPECT_SPCI_ERROR(spci_rxtx_map(recv_page_addr, recv_page_addr),
-			  SPCI_INVALID_PARAMETERS);
+	EXPECT_FFA_ERROR(ffa_rxtx_map(send_page_addr, send_page_addr),
+			 FFA_INVALID_PARAMETERS);
+	EXPECT_FFA_ERROR(ffa_rxtx_map(recv_page_addr, recv_page_addr),
+			 FFA_INVALID_PARAMETERS);
 }
 
 /**
  * The configuration of the send/receive addresses can only happen once.
  */
-TEST(spci_rxtx_map, fails_if_already_succeeded)
+TEST(ffa_rxtx_map, fails_if_already_succeeded)
 {
-	EXPECT_EQ(spci_rxtx_map(send_page_addr, recv_page_addr).func,
-		  SPCI_SUCCESS_32);
-	EXPECT_SPCI_ERROR(spci_rxtx_map(send_page_addr, recv_page_addr),
-			  SPCI_DENIED);
+	EXPECT_EQ(ffa_rxtx_map(send_page_addr, recv_page_addr).func,
+		  FFA_SUCCESS_32);
+	EXPECT_FFA_ERROR(ffa_rxtx_map(send_page_addr, recv_page_addr),
+			 FFA_DENIED);
 }
 
 /**
  * The configuration of the send/receive address is successful with valid
  * arguments.
  */
-TEST(spci_rxtx_map, succeeds)
+TEST(ffa_rxtx_map, succeeds)
 {
-	EXPECT_EQ(spci_rxtx_map(send_page_addr, recv_page_addr).func,
-		  SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rxtx_map(send_page_addr, recv_page_addr).func,
+		  FFA_SUCCESS_32);
 }
 
 /**
- * The primary receives messages from spci_run().
+ * The primary receives messages from ffa_run().
  */
 TEST(hf_mailbox_receive, cannot_receive_from_primary_blocking)
 {
-	struct spci_value res = spci_msg_wait();
-	EXPECT_NE(res.func, SPCI_SUCCESS_32);
+	struct ffa_value res = ffa_msg_wait();
+	EXPECT_NE(res.func, FFA_SUCCESS_32);
 }
 
 /**
- * The primary receives messages from spci_run().
+ * The primary receives messages from ffa_run().
  */
 TEST(hf_mailbox_receive, cannot_receive_from_primary_non_blocking)
 {
-	struct spci_value res = spci_msg_poll();
-	EXPECT_NE(res.func, SPCI_SUCCESS_32);
+	struct ffa_value res = ffa_msg_poll();
+	EXPECT_NE(res.func, FFA_SUCCESS_32);
 }
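
Taken together, the `ffa_rxtx_map` tests pin down the mailbox contract: two distinct, page-aligned pages of normal memory, registered exactly once. A sketch of a well-formed setup matching the `succeeds` case (assuming the same headers as the test file, e.g. `stdalign.h` for `alignas`):

```c
/*
 * Sketch of a valid mailbox configuration: distinct page-aligned send
 * and receive buffers, mapped once. Any further ffa_rxtx_map call is
 * then refused with FFA_DENIED.
 */
static alignas(PAGE_SIZE) uint8_t tx_page[PAGE_SIZE];
static alignas(PAGE_SIZE) uint8_t rx_page[PAGE_SIZE];

static void map_mailbox_once(void)
{
	EXPECT_EQ(ffa_rxtx_map((hf_ipaddr_t)tx_page, (hf_ipaddr_t)rx_page)
			  .func,
		  FFA_SUCCESS_32);
}
```
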
diff --git a/test/vmapi/primary_with_secondaries/perfmon.c b/test/vmapi/primary_with_secondaries/perfmon.c
index 5d62250..3e1fcb4 100644
--- a/test/vmapi/primary_with_secondaries/perfmon.c
+++ b/test/vmapi/primary_with_secondaries/perfmon.c
@@ -19,17 +19,17 @@
 #include "../../src/arch/aarch64/sysregs.h"
 #include "primary_with_secondary.h"
 #include "sysregs.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 TEST(perfmon, secondary_basic)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "perfmon_secondary_basic", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
+	run_res = ffa_run(SERVICE_VM1, 0);
+	EXPECT_EQ(run_res.func, FFA_YIELD_32);
 }
 
 /**
diff --git a/test/vmapi/primary_with_secondaries/run_race.c b/test/vmapi/primary_with_secondaries/run_race.c
index eba3536..2ea1bd6 100644
--- a/test/vmapi/primary_with_secondaries/run_race.c
+++ b/test/vmapi/primary_with_secondaries/run_race.c
@@ -26,7 +26,7 @@
 
 #include "primary_with_secondary.h"
 #include "test/hftest.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 struct cpu_state {
 	struct mailbox_buffers *mb;
@@ -39,32 +39,32 @@
  */
 static bool run_loop(struct mailbox_buffers *mb)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	bool ok = false;
 
 	for (;;) {
 		/* Run until it manages to schedule vCPU on this CPU. */
 		do {
-			run_res = spci_run(SERVICE_VM1, 0);
-		} while (run_res.func == SPCI_ERROR_32 &&
-			 run_res.arg2 == SPCI_BUSY);
+			run_res = ffa_run(SERVICE_VM1, 0);
+		} while (run_res.func == FFA_ERROR_32 &&
+			 run_res.arg2 == FFA_BUSY);
 
 		/* Break out if we received a message with non-zero length. */
-		if (run_res.func == SPCI_MSG_SEND_32 &&
-		    spci_msg_send_size(run_res) != 0) {
+		if (run_res.func == FFA_MSG_SEND_32 &&
+		    ffa_msg_send_size(run_res) != 0) {
 			break;
 		}
 
 		/* Clear mailbox so that next message can be received. */
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 	}
 
 	/* Copies the contents of the received boolean to the return value. */
-	if (spci_msg_send_size(run_res) == sizeof(ok)) {
+	if (ffa_msg_send_size(run_res) == sizeof(ok)) {
 		ok = *(bool *)mb->recv;
 	}
 
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	return ok;
 }
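
`FFA_BUSY` here signals that another physical CPU currently holds the same vCPU, which is exactly the race this test provokes. Pulled out of `run_loop`, the retry idiom looks like this (`ffa_vcpu_index_t` is assumed to be the vCPU index type used elsewhere in the tree):

```c
/*
 * Busy-retry idiom as a standalone helper: spin until this physical CPU
 * wins the race to run the vCPU.
 */
static struct ffa_value ffa_run_retry_busy(ffa_vm_id_t vm_id,
					   ffa_vcpu_index_t vcpu_idx)
{
	struct ffa_value res;

	do {
		res = ffa_run(vm_id, vcpu_idx);
	} while (res.func == FFA_ERROR_32 && res.arg2 == FFA_BUSY);

	return res;
}
```
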
@@ -83,7 +83,7 @@
 
 TEAR_DOWN(vcpu_state)
 {
-	EXPECT_SPCI_ERROR(spci_rx_release(), SPCI_DENIED);
+	EXPECT_FFA_ERROR(ffa_rx_release(), FFA_DENIED);
 }
 
 /**
diff --git a/test/vmapi/primary_with_secondaries/services/BUILD.gn b/test/vmapi/primary_with_secondaries/services/BUILD.gn
index d4a78c2..72de3fc 100644
--- a/test/vmapi/primary_with_secondaries/services/BUILD.gn
+++ b/test/vmapi/primary_with_secondaries/services/BUILD.gn
@@ -205,7 +205,7 @@
 }
 
 # Service to receive messages in a secondary VM and ensure that the header fields are correctly set.
-source_set("spci_check") {
+source_set("ffa_check") {
   testonly = true
   public_configs = [
     "..:config",
@@ -216,7 +216,7 @@
   ]
 
   sources = [
-    "spci_check.c",
+    "ffa_check.c",
   ]
 }
 
@@ -231,13 +231,13 @@
     ":debug_el1",
     ":echo",
     ":echo_with_notification",
+    ":ffa_check",
     ":floating_point",
     ":interruptible",
     ":memory",
     ":perfmon",
     ":receive_block",
     ":relay",
-    ":spci_check",
     ":unmapped",
     ":wfi",
     "//test/hftest:hftest_secondary_vm",
diff --git a/test/vmapi/primary_with_secondaries/services/boot.c b/test/vmapi/primary_with_secondaries/services/boot.c
index 6314c1b..d6d1f08 100644
--- a/test/vmapi/primary_with_secondaries/services/boot.c
+++ b/test/vmapi/primary_with_secondaries/services/boot.c
@@ -50,7 +50,7 @@
 	ASSERT_NE(checksum, 0);
 	dlog("Checksum of all memory is %d\n", checksum);
 
-	spci_yield();
+	ffa_yield();
 }
 
 TEST_SERVICE(boot_memory_underrun)
diff --git a/test/vmapi/primary_with_secondaries/services/check_state.c b/test/vmapi/primary_with_secondaries/services/check_state.c
index da27cc6..4fd71f0 100644
--- a/test/vmapi/primary_with_secondaries/services/check_state.c
+++ b/test/vmapi/primary_with_secondaries/services/check_state.c
@@ -22,14 +22,14 @@
 
 #include "test/hftest.h"
 
-void send_with_retry(spci_vm_id_t sender_vm_id, spci_vm_id_t target_vm_id,
+void send_with_retry(ffa_vm_id_t sender_vm_id, ffa_vm_id_t target_vm_id,
 		     uint32_t size)
 {
-	struct spci_value res;
+	struct ffa_value res;
 
 	do {
-		res = spci_msg_send(sender_vm_id, target_vm_id, size, 0);
-	} while (res.func != SPCI_SUCCESS_32);
+		res = ffa_msg_send(sender_vm_id, target_vm_id, size, 0);
+	} while (res.func != FFA_SUCCESS_32);
 }
 
 /**
@@ -64,7 +64,7 @@
 	}
 
 	/* Send two replies, one for each physical CPU. */
-	memcpy_s(SERVICE_SEND_BUFFER(), SPCI_MSG_PAYLOAD_MAX, &ok, sizeof(ok));
+	memcpy_s(SERVICE_SEND_BUFFER(), FFA_MSG_PAYLOAD_MAX, &ok, sizeof(ok));
 	send_with_retry(hf_vm_get_id(), HF_PRIMARY_VM_ID, sizeof(ok));
 	send_with_retry(hf_vm_get_id(), HF_PRIMARY_VM_ID, sizeof(ok));
 }
diff --git a/test/vmapi/primary_with_secondaries/services/debug_el1.c b/test/vmapi/primary_with_secondaries/services/debug_el1.c
index 1e09f76..f4373c0 100644
--- a/test/vmapi/primary_with_secondaries/services/debug_el1.c
+++ b/test/vmapi/primary_with_secondaries/services/debug_el1.c
@@ -33,5 +33,5 @@
 	TRY_READ(DBGWVR0_EL1);
 
 	EXPECT_EQ(exception_handler_get_num(), 5);
-	spci_yield();
+	ffa_yield();
 }
diff --git a/test/vmapi/primary_with_secondaries/services/echo.c b/test/vmapi/primary_with_secondaries/services/echo.c
index b1c99d9..81a42d2 100644
--- a/test/vmapi/primary_with_secondaries/services/echo.c
+++ b/test/vmapi/primary_with_secondaries/services/echo.c
@@ -14,7 +14,7 @@
  * limitations under the License.
  */
 
-#include "hf/spci.h"
+#include "hf/ffa.h"
 #include "hf/std.h"
 
 #include "vmapi/hf/call.h"
@@ -25,18 +25,18 @@
 {
 	/* Loop, echo messages back to the sender. */
 	for (;;) {
-		struct spci_value ret = spci_msg_wait();
-		spci_vm_id_t target_vm_id = spci_msg_send_receiver(ret);
-		spci_vm_id_t source_vm_id = spci_msg_send_sender(ret);
+		struct ffa_value ret = ffa_msg_wait();
+		ffa_vm_id_t target_vm_id = ffa_msg_send_receiver(ret);
+		ffa_vm_id_t source_vm_id = ffa_msg_send_sender(ret);
 		void *send_buf = SERVICE_SEND_BUFFER();
 		void *recv_buf = SERVICE_RECV_BUFFER();
 
-		ASSERT_EQ(ret.func, SPCI_MSG_SEND_32);
-		memcpy_s(send_buf, SPCI_MSG_PAYLOAD_MAX, recv_buf,
-			 spci_msg_send_size(ret));
+		ASSERT_EQ(ret.func, FFA_MSG_SEND_32);
+		memcpy_s(send_buf, FFA_MSG_PAYLOAD_MAX, recv_buf,
+			 ffa_msg_send_size(ret));
 
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-		spci_msg_send(target_vm_id, source_vm_id,
-			      spci_msg_send_size(ret), 0);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+		ffa_msg_send(target_vm_id, source_vm_id, ffa_msg_send_size(ret),
+			     0);
 	}
 }
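
For context, the primary-side counterpart to this echo loop (as exercised elsewhere in the suite) follows a send-run-receive shape. A sketch, with `SERVICE_VM1` and the mailbox pair being the ones set up by `set_up_mailbox`:

```c
/*
 * One round trip against the echo service: place a payload in the TX
 * buffer, send it, run the secondary so it can echo, then check the
 * reply landed in the RX buffer and release it.
 */
static void echo_round_trip(struct mailbox_buffers mb)
{
	const char message[] = "ping";
	struct ffa_value run_res;

	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
	EXPECT_EQ(ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message),
			       0)
			  .func,
		  FFA_SUCCESS_32);

	run_res = ffa_run(SERVICE_VM1, 0);
	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
	EXPECT_EQ(memcmp(mb.recv, message, sizeof(message)), 0);
	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
}
```
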
diff --git a/test/vmapi/primary_with_secondaries/services/echo_with_notification.c b/test/vmapi/primary_with_secondaries/services/echo_with_notification.c
index a74391b..984c6a3 100644
--- a/test/vmapi/primary_with_secondaries/services/echo_with_notification.c
+++ b/test/vmapi/primary_with_secondaries/services/echo_with_notification.c
@@ -17,7 +17,7 @@
 #include "hf/arch/irq.h"
 #include "hf/arch/vm/interrupts.h"
 
-#include "hf/spci.h"
+#include "hf/ffa.h"
 #include "hf/std.h"
 
 #include "vmapi/hf/call.h"
@@ -55,20 +55,19 @@
 	for (;;) {
 		void *send_buf = SERVICE_SEND_BUFFER();
 		void *recv_buf = SERVICE_RECV_BUFFER();
-		struct spci_value ret = spci_msg_wait();
-		spci_vm_id_t target_vm_id = spci_msg_send_receiver(ret);
-		spci_vm_id_t source_vm_id = spci_msg_send_sender(ret);
+		struct ffa_value ret = ffa_msg_wait();
+		ffa_vm_id_t target_vm_id = ffa_msg_send_receiver(ret);
+		ffa_vm_id_t source_vm_id = ffa_msg_send_sender(ret);
 
-		memcpy_s(send_buf, SPCI_MSG_PAYLOAD_MAX, recv_buf,
-			 spci_msg_send_size(ret));
+		memcpy_s(send_buf, FFA_MSG_PAYLOAD_MAX, recv_buf,
+			 ffa_msg_send_size(ret));
 
-		while (spci_msg_send(target_vm_id, source_vm_id,
-				     spci_msg_send_size(ret),
-				     SPCI_MSG_SEND_NOTIFY)
-			       .func != SPCI_SUCCESS_32) {
+		while (ffa_msg_send(target_vm_id, source_vm_id,
+				    ffa_msg_send_size(ret), FFA_MSG_SEND_NOTIFY)
+			       .func != FFA_SUCCESS_32) {
 			wait_for_vm(source_vm_id);
 		}
 
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 	}
 }
diff --git a/test/vmapi/primary_with_secondaries/services/spci_check.c b/test/vmapi/primary_with_secondaries/services/ffa_check.c
similarity index 65%
rename from test/vmapi/primary_with_secondaries/services/spci_check.c
rename to test/vmapi/primary_with_secondaries/services/ffa_check.c
index 9e342ba..ef39716 100644
--- a/test/vmapi/primary_with_secondaries/services/spci_check.c
+++ b/test/vmapi/primary_with_secondaries/services/ffa_check.c
@@ -14,62 +14,62 @@
  * limitations under the License.
  */
 
-#include "hf/spci.h"
+#include "hf/ffa.h"
 #include "hf/std.h"
 
 #include "vmapi/hf/call.h"
 
 #include "primary_with_secondary.h"
 #include "test/hftest.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
-TEST_SERVICE(spci_check)
+TEST_SERVICE(ffa_check)
 {
 	void *recv_buf = SERVICE_RECV_BUFFER();
-	const char message[] = "spci_msg_send";
+	const char message[] = "ffa_msg_send";
 
 	/* Wait for a single message to be sent by the primary VM. */
-	struct spci_value ret = spci_msg_wait();
+	struct ffa_value ret = ffa_msg_wait();
 
-	EXPECT_EQ(ret.func, SPCI_MSG_SEND_32);
+	EXPECT_EQ(ret.func, FFA_MSG_SEND_32);
 
 	/* Ensure message header has all fields correctly set. */
-	EXPECT_EQ(spci_msg_send_size(ret), sizeof(message));
-	EXPECT_EQ(spci_msg_send_receiver(ret), hf_vm_get_id());
-	EXPECT_EQ(spci_msg_send_sender(ret), HF_PRIMARY_VM_ID);
+	EXPECT_EQ(ffa_msg_send_size(ret), sizeof(message));
+	EXPECT_EQ(ffa_msg_send_receiver(ret), hf_vm_get_id());
+	EXPECT_EQ(ffa_msg_send_sender(ret), HF_PRIMARY_VM_ID);
 
 	/* Ensure that the payload was correctly transmitted. */
 	EXPECT_EQ(memcmp(recv_buf, message, sizeof(message)), 0);
 
-	spci_yield();
+	ffa_yield();
 }
 
-TEST_SERVICE(spci_length)
+TEST_SERVICE(ffa_length)
 {
 	void *recv_buf = SERVICE_RECV_BUFFER();
 	const char message[] = "this should be truncated";
 
 	/* Wait for a single message to be sent by the primary VM. */
-	struct spci_value ret = spci_msg_wait();
+	struct ffa_value ret = ffa_msg_wait();
 
-	EXPECT_EQ(ret.func, SPCI_MSG_SEND_32);
+	EXPECT_EQ(ret.func, FFA_MSG_SEND_32);
 
 	/* Verify the length is as expected. */
-	EXPECT_EQ(16, spci_msg_send_size(ret));
+	EXPECT_EQ(16, ffa_msg_send_size(ret));
 
 	/* Check only part of the message is sent correctly. */
 	EXPECT_NE(memcmp(recv_buf, message, sizeof(message)), 0);
-	EXPECT_EQ(memcmp(recv_buf, message, spci_msg_send_size(ret)), 0);
+	EXPECT_EQ(memcmp(recv_buf, message, ffa_msg_send_size(ret)), 0);
 
-	spci_yield();
+	ffa_yield();
 }
 
-TEST_SERVICE(spci_recv_non_blocking)
+TEST_SERVICE(ffa_recv_non_blocking)
 {
 	/* Try to receive a message without blocking; none has been sent. */
-	struct spci_value ret = spci_msg_poll();
+	struct ffa_value ret = ffa_msg_poll();
 
-	EXPECT_SPCI_ERROR(ret, SPCI_RETRY);
+	EXPECT_FFA_ERROR(ret, FFA_RETRY);
 
-	spci_yield();
+	ffa_yield();
 }
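
The `ffa_length` service above expects exactly 16 bytes; the sending side is assumed to copy the full string into the TX buffer but declare a shorter length, along these lines:

```c
/*
 * Sketch of the primary side of the ffa_length test: the whole string
 * goes into the TX buffer, but only the first 16 bytes are declared in
 * the send, so the receiver observes a truncated payload.
 */
static void send_truncated(struct mailbox_buffers mb)
{
	const char message[] = "this should be truncated";

	memcpy_s(mb.send, FFA_MSG_PAYLOAD_MAX, message, sizeof(message));
	EXPECT_EQ(ffa_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, 16, 0).func,
		  FFA_SUCCESS_32);
}
```
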
diff --git a/test/vmapi/primary_with_secondaries/services/floating_point.c b/test/vmapi/primary_with_secondaries/services/floating_point.c
index f5c0bcc..7fcf1aa 100644
--- a/test/vmapi/primary_with_secondaries/services/floating_point.c
+++ b/test/vmapi/primary_with_secondaries/services/floating_point.c
@@ -17,7 +17,7 @@
 #include "hf/arch/std.h"
 #include "hf/arch/vm/registers.h"
 
-#include "hf/spci.h"
+#include "hf/ffa.h"
 
 #include "vmapi/hf/call.h"
 
@@ -28,18 +28,18 @@
 {
 	const double value = 0.75;
 	fill_fp_registers(value);
-	EXPECT_EQ(spci_yield().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_yield().func, FFA_SUCCESS_32);
 
 	ASSERT_TRUE(check_fp_register(value));
-	spci_yield();
+	ffa_yield();
 }
 
 TEST_SERVICE(fp_fpcr)
 {
 	uintreg_t value = 3 << 22; /* Set RMode to RZ */
 	write_msr(fpcr, value);
-	EXPECT_EQ(spci_yield().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_yield().func, FFA_SUCCESS_32);
 
 	ASSERT_EQ(read_msr(fpcr), value);
-	spci_yield();
+	ffa_yield();
 }
diff --git a/test/vmapi/primary_with_secondaries/services/interruptible.c b/test/vmapi/primary_with_secondaries/services/interruptible.c
index fc79d20..73a8d35 100644
--- a/test/vmapi/primary_with_secondaries/services/interruptible.c
+++ b/test/vmapi/primary_with_secondaries/services/interruptible.c
@@ -21,7 +21,7 @@
 #include "hf/std.h"
 
 #include "vmapi/hf/call.h"
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 #include "primary_with_secondary.h"
 #include "test/hftest.h"
@@ -39,8 +39,8 @@
 	dlog("secondary IRQ %d from current\n", interrupt_id);
 	buffer[8] = '0' + interrupt_id / 10;
 	buffer[9] = '0' + interrupt_id % 10;
-	memcpy_s(SERVICE_SEND_BUFFER(), SPCI_MSG_PAYLOAD_MAX, buffer, size);
-	spci_msg_send(hf_vm_get_id(), HF_PRIMARY_VM_ID, size, 0);
+	memcpy_s(SERVICE_SEND_BUFFER(), FFA_MSG_PAYLOAD_MAX, buffer, size);
+	ffa_msg_send(hf_vm_get_id(), HF_PRIMARY_VM_ID, size, 0);
 	dlog("secondary IRQ %d ended\n", interrupt_id);
 }
 
@@ -48,21 +48,21 @@
  * Try to receive a message from the mailbox, blocking if necessary, and
  * retrying if interrupted.
  */
-struct spci_value mailbox_receive_retry()
+struct ffa_value mailbox_receive_retry()
 {
-	struct spci_value received;
+	struct ffa_value received;
 
 	do {
-		received = spci_msg_wait();
-	} while (received.func == SPCI_ERROR_32 &&
-		 received.arg2 == SPCI_INTERRUPTED);
+		received = ffa_msg_wait();
+	} while (received.func == FFA_ERROR_32 &&
+		 received.arg2 == FFA_INTERRUPTED);
 
 	return received;
 }
 
 TEST_SERVICE(interruptible)
 {
-	spci_vm_id_t this_vm_id = hf_vm_get_id();
+	ffa_vm_id_t this_vm_id = hf_vm_get_id();
 	void *recv_buf = SERVICE_RECV_BUFFER();
 
 	exception_setup(irq, NULL);
@@ -75,26 +75,25 @@
 		const char ping_message[] = "Ping";
 		const char enable_message[] = "Enable interrupt C";
 
-		struct spci_value ret = mailbox_receive_retry();
+		struct ffa_value ret = mailbox_receive_retry();
 
-		ASSERT_EQ(ret.func, SPCI_MSG_SEND_32);
-		if (spci_msg_send_sender(ret) == HF_PRIMARY_VM_ID &&
-		    spci_msg_send_size(ret) == sizeof(ping_message) &&
+		ASSERT_EQ(ret.func, FFA_MSG_SEND_32);
+		if (ffa_msg_send_sender(ret) == HF_PRIMARY_VM_ID &&
+		    ffa_msg_send_size(ret) == sizeof(ping_message) &&
 		    memcmp(recv_buf, ping_message, sizeof(ping_message)) == 0) {
 			/* Interrupt ourselves */
 			hf_interrupt_inject(this_vm_id, 0, SELF_INTERRUPT_ID);
-		} else if (spci_msg_send_sender(ret) == HF_PRIMARY_VM_ID &&
-			   spci_msg_send_size(ret) == sizeof(enable_message) &&
+		} else if (ffa_msg_send_sender(ret) == HF_PRIMARY_VM_ID &&
+			   ffa_msg_send_size(ret) == sizeof(enable_message) &&
 			   memcmp(recv_buf, enable_message,
 				  sizeof(enable_message)) == 0) {
 			/* Enable interrupt ID C. */
 			hf_interrupt_enable(EXTERNAL_INTERRUPT_ID_C, true);
 		} else {
 			dlog("Got unexpected message from VM %d, size %d.\n",
-			     spci_msg_send_sender(ret),
-			     spci_msg_send_size(ret));
+			     ffa_msg_send_sender(ret), ffa_msg_send_size(ret));
 			FAIL("Unexpected message");
 		}
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 	}
 }
diff --git a/test/vmapi/primary_with_secondaries/services/interruptible_echo.c b/test/vmapi/primary_with_secondaries/services/interruptible_echo.c
index 8903990..f09e0a7 100644
--- a/test/vmapi/primary_with_secondaries/services/interruptible_echo.c
+++ b/test/vmapi/primary_with_secondaries/services/interruptible_echo.c
@@ -38,23 +38,23 @@
 	arch_irq_enable();
 
 	for (;;) {
-		struct spci_value res = spci_msg_wait();
+		struct ffa_value res = ffa_msg_wait();
 		void *message = SERVICE_SEND_BUFFER();
 		void *recv_message = SERVICE_RECV_BUFFER();
 
 		/* If interrupted, yield to make that visible, then retry. */
-		while (res.func == SPCI_ERROR_32 &&
-		       res.arg2 == SPCI_INTERRUPTED) {
-			spci_yield();
-			res = spci_msg_wait();
+		while (res.func == FFA_ERROR_32 &&
+		       res.arg2 == FFA_INTERRUPTED) {
+			ffa_yield();
+			res = ffa_msg_wait();
 		}
 
-		ASSERT_EQ(res.func, SPCI_MSG_SEND_32);
-		memcpy_s(message, SPCI_MSG_PAYLOAD_MAX, recv_message,
-			 spci_msg_send_size(res));
+		ASSERT_EQ(res.func, FFA_MSG_SEND_32);
+		memcpy_s(message, FFA_MSG_PAYLOAD_MAX, recv_message,
+			 ffa_msg_send_size(res));
 
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-		spci_msg_send(SERVICE_VM1, HF_PRIMARY_VM_ID,
-			      spci_msg_send_size(res), 0);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+		ffa_msg_send(SERVICE_VM1, HF_PRIMARY_VM_ID,
+			     ffa_msg_send_size(res), 0);
 	}
 }
diff --git a/test/vmapi/primary_with_secondaries/services/memory.c b/test/vmapi/primary_with_secondaries/services/memory.c
index 2c07835..2c598d0 100644
--- a/test/vmapi/primary_with_secondaries/services/memory.c
+++ b/test/vmapi/primary_with_secondaries/services/memory.c
@@ -25,7 +25,7 @@
 #include "primary_with_secondary.h"
 #include "test/hftest.h"
 #include "test/vmapi/exception_handler.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 alignas(PAGE_SIZE) static uint8_t page[PAGE_SIZE];
 
@@ -37,13 +37,13 @@
 		void *recv_buf = SERVICE_RECV_BUFFER();
 		void *send_buf = SERVICE_SEND_BUFFER();
 
-		struct spci_value ret = spci_msg_wait();
-		spci_vm_id_t sender = retrieve_memory_from_message(
+		struct ffa_value ret = ffa_msg_wait();
+		ffa_vm_id_t sender = retrieve_memory_from_message(
 			recv_buf, send_buf, ret, NULL);
-		struct spci_memory_region *memory_region =
-			(struct spci_memory_region *)recv_buf;
-		struct spci_composite_memory_region *composite =
-			spci_memory_region_get_composite(memory_region, 0);
+		struct ffa_memory_region *memory_region =
+			(struct ffa_memory_region *)recv_buf;
+		struct ffa_composite_memory_region *composite =
+			ffa_memory_region_get_composite(memory_region, 0);
 		uint8_t *ptr = (uint8_t *)composite->constituents[0].address;
 
 		ASSERT_EQ(memory_region->receiver_count, 1);
@@ -52,7 +52,7 @@
 			  0);
 
 		/* Allow the memory to be populated. */
-		EXPECT_EQ(spci_yield().func, SPCI_SUCCESS_32);
+		EXPECT_EQ(ffa_yield().func, FFA_SUCCESS_32);
 
 		/* Increment each byte of memory. */
 		for (i = 0; i < PAGE_SIZE; ++i) {
@@ -60,25 +60,25 @@
 		}
 
 		/* Signal completion and reset. */
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-		spci_msg_send(hf_vm_get_id(), sender, sizeof(ptr), 0);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+		ffa_msg_send(hf_vm_get_id(), sender, sizeof(ptr), 0);
 	}
 }
 
 TEST_SERVICE(give_memory_and_fault)
 {
 	void *send_buf = SERVICE_SEND_BUFFER();
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)&page, .page_count = 1},
 	};
 
 	/* Give memory to the primary. */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, send_buf, hf_vm_get_id(), HF_PRIMARY_VM_ID,
+		FFA_MEM_DONATE_32, send_buf, hf_vm_get_id(), HF_PRIMARY_VM_ID,
 		constituents, ARRAY_SIZE(constituents),
-		SPCI_MEMORY_REGION_FLAG_CLEAR, SPCI_DATA_ACCESS_NOT_SPECIFIED,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEMORY_REGION_FLAG_CLEAR, FFA_DATA_ACCESS_NOT_SPECIFIED,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	exception_setup(NULL, exception_handler_yield_data_abort);
 
@@ -91,17 +91,17 @@
 TEST_SERVICE(lend_memory_and_fault)
 {
 	void *send_buf = SERVICE_SEND_BUFFER();
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)&page, .page_count = 1},
 	};
 
 	/* Lend memory to the primary. */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_LEND_32, send_buf, hf_vm_get_id(), HF_PRIMARY_VM_ID,
+		FFA_MEM_LEND_32, send_buf, hf_vm_get_id(), HF_PRIMARY_VM_ID,
 		constituents, ARRAY_SIZE(constituents),
-		SPCI_MEMORY_REGION_FLAG_CLEAR, SPCI_DATA_ACCESS_RW,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEMORY_REGION_FLAG_CLEAR, FFA_DATA_ACCESS_RW,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	exception_setup(NULL, exception_handler_yield_data_abort);
 
@@ -111,9 +111,9 @@
 	FAIL("Exception not generated by invalid access.");
 }
 
-TEST_SERVICE(spci_memory_return)
+TEST_SERVICE(ffa_memory_return)
 {
-	struct spci_value ret = spci_msg_wait();
+	struct ffa_value ret = ffa_msg_wait();
 	uint8_t *ptr;
 	size_t i;
 	void *recv_buf = SERVICE_RECV_BUFFER();
@@ -121,12 +121,12 @@
 
 	exception_setup(NULL, exception_handler_yield_data_abort);
 
-	spci_vm_id_t sender =
+	ffa_vm_id_t sender =
 		retrieve_memory_from_message(recv_buf, send_buf, ret, NULL);
-	struct spci_memory_region *memory_region =
-		(struct spci_memory_region *)recv_buf;
-	struct spci_composite_memory_region *composite =
-		spci_memory_region_get_composite(memory_region, 0);
+	struct ffa_memory_region *memory_region =
+		(struct ffa_memory_region *)recv_buf;
+	struct ffa_composite_memory_region *composite =
+		ffa_memory_region_get_composite(memory_region, 0);
 
 	ptr = (uint8_t *)composite->constituents[0].address;
 
@@ -137,12 +137,11 @@
 
 	/* Give the memory back and notify the sender. */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, send_buf, hf_vm_get_id(), sender,
+		FFA_MEM_DONATE_32, send_buf, hf_vm_get_id(), sender,
 		composite->constituents, composite->constituent_count, 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_X);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/*
 	 * Try to access the memory, which will cause a fault unless the memory
@@ -156,27 +155,27 @@
 /**
  * Attempt to modify above the upper bound of a memory region sent to us.
  */
-TEST_SERVICE(spci_check_upper_bound)
+TEST_SERVICE(ffa_check_upper_bound)
 {
-	struct spci_memory_region *memory_region;
-	struct spci_composite_memory_region *composite;
+	struct ffa_memory_region *memory_region;
+	struct ffa_composite_memory_region *composite;
 	uint8_t *ptr;
 	uint8_t index;
 
 	void *recv_buf = SERVICE_RECV_BUFFER();
 	void *send_buf = SERVICE_SEND_BUFFER();
-	struct spci_value ret = spci_msg_wait();
+	struct ffa_value ret = ffa_msg_wait();
 
 	exception_setup(NULL, exception_handler_yield_data_abort);
 
 	retrieve_memory_from_message(recv_buf, send_buf, ret, NULL);
-	memory_region = (struct spci_memory_region *)recv_buf;
-	composite = spci_memory_region_get_composite(memory_region, 0);
+	memory_region = (struct ffa_memory_region *)recv_buf;
+	composite = ffa_memory_region_get_composite(memory_region, 0);
 
 	/* Choose which constituent we want to test. */
 	index = *(uint8_t *)composite->constituents[0].address;
 	ptr = (uint8_t *)composite->constituents[index].address;
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/*
 	 * Check that we can't access out of bounds after the region sent to us.
@@ -190,27 +189,27 @@
 /**
  * Attempt to modify below the lower bound of a memory region sent to us.
  */
-TEST_SERVICE(spci_check_lower_bound)
+TEST_SERVICE(ffa_check_lower_bound)
 {
-	struct spci_memory_region *memory_region;
-	struct spci_composite_memory_region *composite;
+	struct ffa_memory_region *memory_region;
+	struct ffa_composite_memory_region *composite;
 	uint8_t *ptr;
 	uint8_t index;
 
 	void *recv_buf = SERVICE_RECV_BUFFER();
 	void *send_buf = SERVICE_SEND_BUFFER();
-	struct spci_value ret = spci_msg_wait();
+	struct ffa_value ret = ffa_msg_wait();
 
 	exception_setup(NULL, exception_handler_yield_data_abort);
 
 	retrieve_memory_from_message(recv_buf, send_buf, ret, NULL);
-	memory_region = (struct spci_memory_region *)recv_buf;
-	composite = spci_memory_region_get_composite(memory_region, 0);
+	memory_region = (struct ffa_memory_region *)recv_buf;
+	composite = ffa_memory_region_get_composite(memory_region, 0);
 
 	/* Choose which constituent we want to test. */
 	index = *(uint8_t *)composite->constituents[0].address;
 	ptr = (uint8_t *)composite->constituents[index].address;
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/*
 	 * Check that we can't access out of bounds before the region sent to
@@ -224,19 +223,19 @@
 /**
  * Attempt to donate memory and then modify.
  */
-TEST_SERVICE(spci_donate_secondary_and_fault)
+TEST_SERVICE(ffa_donate_secondary_and_fault)
 {
 	uint8_t *ptr;
 	void *recv_buf = SERVICE_RECV_BUFFER();
 	void *send_buf = SERVICE_SEND_BUFFER();
 
-	struct spci_value ret = spci_msg_wait();
-	spci_vm_id_t sender =
+	struct ffa_value ret = ffa_msg_wait();
+	ffa_vm_id_t sender =
 		retrieve_memory_from_message(recv_buf, send_buf, ret, NULL);
-	struct spci_memory_region *memory_region =
-		(struct spci_memory_region *)recv_buf;
-	struct spci_composite_memory_region *composite =
-		spci_memory_region_get_composite(memory_region, 0);
+	struct ffa_memory_region *memory_region =
+		(struct ffa_memory_region *)recv_buf;
+	struct ffa_composite_memory_region *composite =
+		ffa_memory_region_get_composite(memory_region, 0);
 
 	ASSERT_EQ(sender, HF_PRIMARY_VM_ID);
 	exception_setup(NULL, exception_handler_yield_data_abort);
@@ -245,12 +244,11 @@
 
 	/* Donate memory to next VM. */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, send_buf, hf_vm_get_id(), SERVICE_VM2,
+		FFA_MEM_DONATE_32, send_buf, hf_vm_get_id(), SERVICE_VM2,
 		composite->constituents, composite->constituent_count, 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_X);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/* Ensure that we are unable to modify memory any more. */
 	ptr[0] = 'c';
@@ -261,114 +259,113 @@
 /**
  * Attempt to donate memory twice from VM.
  */
-TEST_SERVICE(spci_donate_twice)
+TEST_SERVICE(ffa_donate_twice)
 {
 	uint32_t msg_size;
 	void *recv_buf = SERVICE_RECV_BUFFER();
 	void *send_buf = SERVICE_SEND_BUFFER();
 
-	struct spci_value ret = spci_msg_wait();
-	spci_vm_id_t sender =
+	struct ffa_value ret = ffa_msg_wait();
+	ffa_vm_id_t sender =
 		retrieve_memory_from_message(recv_buf, send_buf, ret, NULL);
-	struct spci_memory_region *memory_region =
-		(struct spci_memory_region *)recv_buf;
-	struct spci_composite_memory_region *composite =
-		spci_memory_region_get_composite(memory_region, 0);
-	struct spci_memory_region_constituent constituent =
+	struct ffa_memory_region *memory_region =
+		(struct ffa_memory_region *)recv_buf;
+	struct ffa_composite_memory_region *composite =
+		ffa_memory_region_get_composite(memory_region, 0);
+	struct ffa_memory_region_constituent constituent =
 		composite->constituents[0];
 
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/* Yield to allow the primary to attempt to re-donate the memory. */
-	spci_yield();
+	ffa_yield();
 
 	/* Give the memory back and notify the sender. */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, send_buf, hf_vm_get_id(), sender,
-		&constituent, 1, 0, SPCI_DATA_ACCESS_NOT_SPECIFIED,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_MEM_DONATE_32, send_buf, hf_vm_get_id(), sender,
+		&constituent, 1, 0, FFA_DATA_ACCESS_NOT_SPECIFIED,
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_X);
 
 	/* Attempt to donate the memory to another VM. */
-	msg_size = spci_memory_region_init(
+	msg_size = ffa_memory_region_init(
 		send_buf, hf_vm_get_id(), SERVICE_VM2, &constituent, 1, 0, 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED, SPCI_MEMORY_NORMAL_MEM,
-		SPCI_MEMORY_CACHE_WRITE_BACK, SPCI_MEMORY_OUTER_SHAREABLE);
-	EXPECT_SPCI_ERROR(spci_mem_donate(msg_size, msg_size), SPCI_DENIED);
+		FFA_DATA_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_MEMORY_NORMAL_MEM,
+		FFA_MEMORY_CACHE_WRITE_BACK, FFA_MEMORY_OUTER_SHAREABLE);
+	EXPECT_FFA_ERROR(ffa_mem_donate(msg_size, msg_size), FFA_DENIED);
 
-	spci_yield();
+	ffa_yield();
 }
 
 /**
  * Continually receive memory, check if we have access and ensure it is not
  * changed by a third party.
  */
-TEST_SERVICE(spci_memory_receive)
+TEST_SERVICE(ffa_memory_receive)
 {
 	void *recv_buf = SERVICE_RECV_BUFFER();
 	void *send_buf = SERVICE_SEND_BUFFER();
 
 	for (;;) {
-		struct spci_value ret = spci_msg_wait();
-		struct spci_memory_region *memory_region;
-		struct spci_composite_memory_region *composite;
+		struct ffa_value ret = ffa_msg_wait();
+		struct ffa_memory_region *memory_region;
+		struct ffa_composite_memory_region *composite;
 		uint8_t *ptr;
 
 		retrieve_memory_from_message(recv_buf, send_buf, ret, NULL);
-		memory_region = (struct spci_memory_region *)recv_buf;
-		composite = spci_memory_region_get_composite(memory_region, 0);
+		memory_region = (struct ffa_memory_region *)recv_buf;
+		composite = ffa_memory_region_get_composite(memory_region, 0);
 		ptr = (uint8_t *)composite->constituents[0].address;
 
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 		ptr[0] = 'd';
-		spci_yield();
+		ffa_yield();
 
 		/* Ensure memory has not changed. */
 		EXPECT_EQ(ptr[0], 'd');
-		spci_yield();
+		ffa_yield();
 	}
 }
 
 /**
  * Receive memory and attempt to donate from primary VM.
  */
-TEST_SERVICE(spci_donate_invalid_source)
+TEST_SERVICE(ffa_donate_invalid_source)
 {
 	uint32_t msg_size;
 	void *recv_buf = SERVICE_RECV_BUFFER();
 	void *send_buf = SERVICE_SEND_BUFFER();
 
-	struct spci_value ret = spci_msg_wait();
-	spci_vm_id_t sender =
+	struct ffa_value ret = ffa_msg_wait();
+	ffa_vm_id_t sender =
 		retrieve_memory_from_message(recv_buf, send_buf, ret, NULL);
-	struct spci_memory_region *memory_region =
-		(struct spci_memory_region *)recv_buf;
-	struct spci_composite_memory_region *composite =
-		spci_memory_region_get_composite(memory_region, 0);
+	struct ffa_memory_region *memory_region =
+		(struct ffa_memory_region *)recv_buf;
+	struct ffa_composite_memory_region *composite =
+		ffa_memory_region_get_composite(memory_region, 0);
 
 	/* Give the memory back and notify the sender. */
 	send_memory_and_retrieve_request(
-		SPCI_MEM_DONATE_32, send_buf, hf_vm_get_id(), sender,
+		FFA_MEM_DONATE_32, send_buf, hf_vm_get_id(), sender,
 		composite->constituents, composite->constituent_count, 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED, SPCI_DATA_ACCESS_RW,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_X);
+		FFA_DATA_ACCESS_NOT_SPECIFIED, FFA_DATA_ACCESS_RW,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_INSTRUCTION_ACCESS_X);
 
 	/* Fail to donate the memory from the primary to VM2. */
-	msg_size = spci_memory_region_init(
+	msg_size = ffa_memory_region_init(
 		send_buf, HF_PRIMARY_VM_ID, SERVICE_VM2,
 		composite->constituents, composite->constituent_count, 0, 0,
-		SPCI_DATA_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED, SPCI_MEMORY_NORMAL_MEM,
-		SPCI_MEMORY_CACHE_WRITE_BACK, SPCI_MEMORY_OUTER_SHAREABLE);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-	EXPECT_SPCI_ERROR(spci_mem_donate(msg_size, msg_size),
-			  SPCI_INVALID_PARAMETERS);
-	spci_yield();
+		FFA_DATA_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_MEMORY_NORMAL_MEM,
+		FFA_MEMORY_CACHE_WRITE_BACK, FFA_MEMORY_OUTER_SHAREABLE);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+	EXPECT_FFA_ERROR(ffa_mem_donate(msg_size, msg_size),
+			 FFA_INVALID_PARAMETERS);
+	ffa_yield();
 }
 
-TEST_SERVICE(spci_memory_lend_relinquish)
+TEST_SERVICE(ffa_memory_lend_relinquish)
 {
 	exception_setup(NULL, exception_handler_yield_data_abort);
 
@@ -379,19 +376,19 @@
 		uint32_t count;
 		uint32_t count2;
 		size_t i;
-		spci_memory_handle_t handle;
+		ffa_memory_handle_t handle;
 
 		void *recv_buf = SERVICE_RECV_BUFFER();
 		void *send_buf = SERVICE_SEND_BUFFER();
 
-		struct spci_value ret = spci_msg_wait();
-		spci_vm_id_t sender = retrieve_memory_from_message(
+		struct ffa_value ret = ffa_msg_wait();
+		ffa_vm_id_t sender = retrieve_memory_from_message(
 			recv_buf, send_buf, ret, &handle);
-		struct spci_memory_region *memory_region =
-			(struct spci_memory_region *)recv_buf;
-		struct spci_composite_memory_region *composite =
-			spci_memory_region_get_composite(memory_region, 0);
-		struct spci_memory_region_constituent *constituents =
+		struct ffa_memory_region *memory_region =
+			(struct ffa_memory_region *)recv_buf;
+		struct ffa_composite_memory_region *composite =
+			ffa_memory_region_get_composite(memory_region, 0);
+		struct ffa_memory_region_constituent *constituents =
 			composite->constituents;
 
 		/* ASSERT_TRUE isn't enough for clang-analyze. */
@@ -402,7 +399,7 @@
 		ptr2 = (uint8_t *)constituents[1].address;
 		count2 = constituents[1].page_count;
 		/* Relevant information read, mailbox can be cleared. */
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 		/* Check that one has access to the shared region. */
 		for (i = 0; i < PAGE_SIZE * count; ++i) {
@@ -413,10 +410,10 @@
 		}
 
 		/* Give the memory back and notify the sender. */
-		spci_mem_relinquish_init(send_buf, handle, 0, hf_vm_get_id());
-		EXPECT_EQ(spci_mem_relinquish().func, SPCI_SUCCESS_32);
-		EXPECT_EQ(spci_msg_send(hf_vm_get_id(), sender, 0, 0).func,
-			  SPCI_SUCCESS_32);
+		ffa_mem_relinquish_init(send_buf, handle, 0, hf_vm_get_id());
+		EXPECT_EQ(ffa_mem_relinquish().func, FFA_SUCCESS_32);
+		EXPECT_EQ(ffa_msg_send(hf_vm_get_id(), sender, 0, 0).func,
+			  FFA_SUCCESS_32);
 
 		/*
 		 * Try to access the memory, which will cause a fault unless the
@@ -429,22 +426,22 @@
 /**
  * Ensure that we can't relinquish donated memory.
  */
-TEST_SERVICE(spci_memory_donate_relinquish)
+TEST_SERVICE(ffa_memory_donate_relinquish)
 {
 	for (;;) {
 		size_t i;
-		spci_memory_handle_t handle;
-		struct spci_memory_region *memory_region;
-		struct spci_composite_memory_region *composite;
+		ffa_memory_handle_t handle;
+		struct ffa_memory_region *memory_region;
+		struct ffa_composite_memory_region *composite;
 		uint8_t *ptr;
 
 		void *recv_buf = SERVICE_RECV_BUFFER();
 		void *send_buf = SERVICE_SEND_BUFFER();
-		struct spci_value ret = spci_msg_wait();
+		struct ffa_value ret = ffa_msg_wait();
 
 		retrieve_memory_from_message(recv_buf, send_buf, ret, &handle);
-		memory_region = (struct spci_memory_region *)recv_buf;
-		composite = spci_memory_region_get_composite(memory_region, 0);
+		memory_region = (struct ffa_memory_region *)recv_buf;
+		composite = ffa_memory_region_get_composite(memory_region, 0);
 
 		ptr = (uint8_t *)composite->constituents[0].address;
 
@@ -457,15 +454,14 @@
 		 * Attempt to relinquish the memory, which should fail because
 		 * it was donated, not lent.
 		 */
-		spci_mem_relinquish_init(send_buf, handle, 0, hf_vm_get_id());
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-		EXPECT_SPCI_ERROR(spci_mem_relinquish(),
-				  SPCI_INVALID_PARAMETERS);
+		ffa_mem_relinquish_init(send_buf, handle, 0, hf_vm_get_id());
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+		EXPECT_FFA_ERROR(ffa_mem_relinquish(), FFA_INVALID_PARAMETERS);
 
 		/* Ensure we still have access to the memory. */
 		ptr[0] = 123;
 
-		spci_yield();
+		ffa_yield();
 	}
 }
 
@@ -473,109 +469,108 @@
  * Receive memory that has been shared, try to relinquish it with the clear flag
  * set (and expect to fail), and then relinquish without any flags.
  */
-TEST_SERVICE(spci_memory_share_relinquish_clear)
+TEST_SERVICE(ffa_memory_share_relinquish_clear)
 {
 	exception_setup(NULL, exception_handler_yield_data_abort);
 
 	/* Loop, receiving memory and relinquishing it. */
 	for (;;) {
-		spci_memory_handle_t handle;
+		ffa_memory_handle_t handle;
 
 		void *recv_buf = SERVICE_RECV_BUFFER();
 		void *send_buf = SERVICE_SEND_BUFFER();
 
-		struct spci_value ret = spci_msg_wait();
-		spci_vm_id_t sender = retrieve_memory_from_message(
+		struct ffa_value ret = ffa_msg_wait();
+		ffa_vm_id_t sender = retrieve_memory_from_message(
 			recv_buf, send_buf, ret, &handle);
 
 		/*
 		 * The mailbox can be cleared; we don't actually care what the
 		 * memory region is.
 		 */
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 		/* Trying to relinquish the memory and clear it should fail. */
-		spci_mem_relinquish_init(send_buf, handle,
-					 SPCI_MEMORY_REGION_FLAG_CLEAR,
-					 hf_vm_get_id());
-		EXPECT_SPCI_ERROR(spci_mem_relinquish(),
-				  SPCI_INVALID_PARAMETERS);
+		ffa_mem_relinquish_init(send_buf, handle,
+					FFA_MEMORY_REGION_FLAG_CLEAR,
+					hf_vm_get_id());
+		EXPECT_FFA_ERROR(ffa_mem_relinquish(), FFA_INVALID_PARAMETERS);
 
 		/* Give the memory back and notify the sender. */
-		spci_mem_relinquish_init(send_buf, handle, 0, hf_vm_get_id());
-		EXPECT_EQ(spci_mem_relinquish().func, SPCI_SUCCESS_32);
-		EXPECT_EQ(spci_msg_send(hf_vm_get_id(), sender, 0, 0).func,
-			  SPCI_SUCCESS_32);
+		ffa_mem_relinquish_init(send_buf, handle, 0, hf_vm_get_id());
+		EXPECT_EQ(ffa_mem_relinquish().func, FFA_SUCCESS_32);
+		EXPECT_EQ(ffa_msg_send(hf_vm_get_id(), sender, 0, 0).func,
+			  FFA_SUCCESS_32);
 	}
 }
 
 /**
  * Receive memory and attempt to donate from primary VM.
  */
-TEST_SERVICE(spci_lend_invalid_source)
+TEST_SERVICE(ffa_lend_invalid_source)
 {
-	spci_memory_handle_t handle;
+	ffa_memory_handle_t handle;
 	uint32_t msg_size;
 
 	void *recv_buf = SERVICE_RECV_BUFFER();
 	void *send_buf = SERVICE_SEND_BUFFER();
-	struct spci_value ret = spci_msg_wait();
-	spci_vm_id_t sender =
+	struct ffa_value ret = ffa_msg_wait();
+	ffa_vm_id_t sender =
 		retrieve_memory_from_message(recv_buf, send_buf, ret, &handle);
-	struct spci_memory_region *memory_region =
-		(struct spci_memory_region *)recv_buf;
-	struct spci_composite_memory_region *composite =
-		spci_memory_region_get_composite(memory_region, 0);
+	struct ffa_memory_region *memory_region =
+		(struct ffa_memory_region *)recv_buf;
+	struct ffa_composite_memory_region *composite =
+		ffa_memory_region_get_composite(memory_region, 0);
 
 	/* Give the memory back and notify the sender. */
-	spci_mem_relinquish_init(send_buf, handle, 0, hf_vm_get_id());
-	EXPECT_EQ(spci_mem_relinquish().func, SPCI_SUCCESS_32);
-	EXPECT_EQ(spci_msg_send(hf_vm_get_id(), sender, 0, 0).func,
-		  SPCI_SUCCESS_32);
+	ffa_mem_relinquish_init(send_buf, handle, 0, hf_vm_get_id());
+	EXPECT_EQ(ffa_mem_relinquish().func, FFA_SUCCESS_32);
+	EXPECT_EQ(ffa_msg_send(hf_vm_get_id(), sender, 0, 0).func,
+		  FFA_SUCCESS_32);
 
 	/* Ensure we cannot lend from the primary to another secondary. */
-	msg_size = spci_memory_region_init(
+	msg_size = ffa_memory_region_init(
 		send_buf, HF_PRIMARY_VM_ID, SERVICE_VM2,
 		composite->constituents, composite->constituent_count, 0, 0,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_X,
-		SPCI_MEMORY_NORMAL_MEM, SPCI_MEMORY_CACHE_WRITE_BACK,
-		SPCI_MEMORY_OUTER_SHAREABLE);
-	EXPECT_SPCI_ERROR(spci_mem_lend(msg_size, msg_size),
-			  SPCI_INVALID_PARAMETERS);
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_X,
+		FFA_MEMORY_NORMAL_MEM, FFA_MEMORY_CACHE_WRITE_BACK,
+		FFA_MEMORY_OUTER_SHAREABLE);
+	EXPECT_FFA_ERROR(ffa_mem_lend(msg_size, msg_size),
+			 FFA_INVALID_PARAMETERS);
 
 	/* Ensure we cannot share from the primary to another secondary. */
-	msg_size = spci_memory_region_init(
+	msg_size = ffa_memory_region_init(
 		send_buf, HF_PRIMARY_VM_ID, SERVICE_VM2,
 		composite->constituents, composite->constituent_count, 0, 0,
-		SPCI_DATA_ACCESS_RW, SPCI_INSTRUCTION_ACCESS_X,
-		SPCI_MEMORY_NORMAL_MEM, SPCI_MEMORY_CACHE_WRITE_BACK,
-		SPCI_MEMORY_OUTER_SHAREABLE);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-	EXPECT_SPCI_ERROR(spci_mem_share(msg_size, msg_size),
-			  SPCI_INVALID_PARAMETERS);
+		FFA_DATA_ACCESS_RW, FFA_INSTRUCTION_ACCESS_X,
+		FFA_MEMORY_NORMAL_MEM, FFA_MEMORY_CACHE_WRITE_BACK,
+		FFA_MEMORY_OUTER_SHAREABLE);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+	EXPECT_FFA_ERROR(ffa_mem_share(msg_size, msg_size),
+			 FFA_INVALID_PARAMETERS);
 
-	spci_yield();
+	ffa_yield();
 }
 
 /**
  * Attempt to execute an instruction from the lent memory.
  */
-TEST_SERVICE(spci_memory_lend_relinquish_X)
+TEST_SERVICE(ffa_memory_lend_relinquish_X)
 {
 	exception_setup(NULL, exception_handler_yield_instruction_abort);
 
 	for (;;) {
-		spci_memory_handle_t handle;
+		ffa_memory_handle_t handle;
 		void *recv_buf = SERVICE_RECV_BUFFER();
 		void *send_buf = SERVICE_SEND_BUFFER();
-		struct spci_value ret = spci_msg_wait();
-		spci_vm_id_t sender = retrieve_memory_from_message(
+		struct ffa_value ret = ffa_msg_wait();
+		ffa_vm_id_t sender = retrieve_memory_from_message(
 			recv_buf, send_buf, ret, &handle);
-		struct spci_memory_region *memory_region =
-			(struct spci_memory_region *)recv_buf;
-		struct spci_composite_memory_region *composite =
-			spci_memory_region_get_composite(memory_region, 0);
-		struct spci_memory_region_constituent *constituents;
+		struct ffa_memory_region *memory_region =
+			(struct ffa_memory_region *)recv_buf;
+		struct ffa_composite_memory_region *composite =
+			ffa_memory_region_get_composite(memory_region, 0);
+		struct ffa_memory_region_constituent *constituents;
 		uint64_t *ptr;
 
 		/* ASSERT_TRUE isn't enough for clang-analyze. */
@@ -584,7 +579,7 @@
 		constituents = composite->constituents;
 		ptr = (uint64_t *)constituents[0].address;
 
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 		/*
 		 * Verify that the instruction in memory is the encoded RET
@@ -595,56 +590,56 @@
 		__asm__ volatile("blr %0" ::"r"(ptr));
 
 		/* Release the memory again. */
-		spci_mem_relinquish_init(send_buf, handle, 0, hf_vm_get_id());
-		EXPECT_EQ(spci_mem_relinquish().func, SPCI_SUCCESS_32);
-		EXPECT_EQ(spci_msg_send(hf_vm_get_id(), sender, 0, 0).func,
-			  SPCI_SUCCESS_32);
+		ffa_mem_relinquish_init(send_buf, handle, 0, hf_vm_get_id());
+		EXPECT_EQ(ffa_mem_relinquish().func, FFA_SUCCESS_32);
+		EXPECT_EQ(ffa_msg_send(hf_vm_get_id(), sender, 0, 0).func,
+			  FFA_SUCCESS_32);
 	}
 }
 
 /**
  * Attempt to retrieve a shared page but expect to fail.
  */
-TEST_SERVICE(spci_memory_share_fail)
+TEST_SERVICE(ffa_memory_share_fail)
 {
 	for (;;) {
 		void *recv_buf = SERVICE_RECV_BUFFER();
 		void *send_buf = SERVICE_SEND_BUFFER();
-		struct spci_value ret = spci_msg_wait();
-		spci_vm_id_t sender = retrieve_memory_from_message_expect_fail(
-			recv_buf, send_buf, ret, SPCI_DENIED);
+		struct ffa_value ret = ffa_msg_wait();
+		ffa_vm_id_t sender = retrieve_memory_from_message_expect_fail(
+			recv_buf, send_buf, ret, FFA_DENIED);
 
 		/* Return control to primary. */
-		EXPECT_EQ(spci_msg_send(hf_vm_get_id(), sender, 0, 0).func,
-			  SPCI_SUCCESS_32);
+		EXPECT_EQ(ffa_msg_send(hf_vm_get_id(), sender, 0, 0).func,
+			  FFA_SUCCESS_32);
 	}
 }
 
 /**
  * Attempt to read and write to a shared page.
  */
-TEST_SERVICE(spci_memory_lend_relinquish_RW)
+TEST_SERVICE(ffa_memory_lend_relinquish_RW)
 {
 	exception_setup(NULL, exception_handler_yield_data_abort);
 
 	for (;;) {
-		spci_memory_handle_t handle;
+		ffa_memory_handle_t handle;
 		uint8_t *ptr;
 		size_t i;
 
 		void *recv_buf = SERVICE_RECV_BUFFER();
 		void *send_buf = SERVICE_SEND_BUFFER();
-		struct spci_value ret = spci_msg_wait();
-		spci_vm_id_t sender = retrieve_memory_from_message(
+		struct ffa_value ret = ffa_msg_wait();
+		ffa_vm_id_t sender = retrieve_memory_from_message(
 			recv_buf, send_buf, ret, &handle);
-		struct spci_memory_region *memory_region =
-			(struct spci_memory_region *)recv_buf;
-		struct spci_composite_memory_region *composite =
-			spci_memory_region_get_composite(memory_region, 0);
-		struct spci_memory_region_constituent constituent_copy =
+		struct ffa_memory_region *memory_region =
+			(struct ffa_memory_region *)recv_buf;
+		struct ffa_composite_memory_region *composite =
+			ffa_memory_region_get_composite(memory_region, 0);
+		struct ffa_memory_region_constituent constituent_copy =
 			composite->constituents[0];
 
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 		ptr = (uint8_t *)constituent_copy.address;
 
@@ -654,7 +649,7 @@
 		}
 
 		/* Return control to primary, to verify shared access. */
-		spci_yield();
+		ffa_yield();
 
 		/* Attempt to modify the memory. */
 		for (i = 0; i < PAGE_SIZE; ++i) {
@@ -662,32 +657,32 @@
 		}
 
 		/* Give the memory back and notify the sender. */
-		spci_mem_relinquish_init(send_buf, handle, 0, hf_vm_get_id());
-		EXPECT_EQ(spci_mem_relinquish().func, SPCI_SUCCESS_32);
-		EXPECT_EQ(spci_msg_send(hf_vm_get_id(), sender, 0, 0).func,
-			  SPCI_SUCCESS_32);
+		ffa_mem_relinquish_init(send_buf, handle, 0, hf_vm_get_id());
+		EXPECT_EQ(ffa_mem_relinquish().func, FFA_SUCCESS_32);
+		EXPECT_EQ(ffa_msg_send(hf_vm_get_id(), sender, 0, 0).func,
+			  FFA_SUCCESS_32);
 	}
 }
 
-TEST_SERVICE(spci_memory_lend_twice)
+TEST_SERVICE(ffa_memory_lend_twice)
 {
-	struct spci_value ret = spci_msg_wait();
+	struct ffa_value ret = ffa_msg_wait();
 	uint8_t *ptr;
 	uint32_t msg_size;
 	size_t i;
 
 	void *recv_buf = SERVICE_RECV_BUFFER();
 	void *send_buf = SERVICE_SEND_BUFFER();
-	struct spci_memory_region *memory_region;
-	struct spci_composite_memory_region *composite;
-	struct spci_memory_region_constituent constituent_copy;
+	struct ffa_memory_region *memory_region;
+	struct ffa_composite_memory_region *composite;
+	struct ffa_memory_region_constituent constituent_copy;
 
 	retrieve_memory_from_message(recv_buf, send_buf, ret, NULL);
-	memory_region = (struct spci_memory_region *)recv_buf;
-	composite = spci_memory_region_get_composite(memory_region, 0);
+	memory_region = (struct ffa_memory_region *)recv_buf;
+	composite = ffa_memory_region_get_composite(memory_region, 0);
 	constituent_copy = composite->constituents[0];
 
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	ptr = (uint8_t *)constituent_copy.address;
 
@@ -705,24 +700,24 @@
 		constituent_copy.address = (uint64_t)ptr + i;
 
 		/* Fail to lend or share the memory from the primary. */
-		msg_size = spci_memory_region_init(
+		msg_size = ffa_memory_region_init(
 			send_buf, HF_PRIMARY_VM_ID, SERVICE_VM2,
-			&constituent_copy, 1, 0, 0, SPCI_DATA_ACCESS_RW,
-			SPCI_INSTRUCTION_ACCESS_X, SPCI_MEMORY_NORMAL_MEM,
-			SPCI_MEMORY_CACHE_WRITE_BACK,
-			SPCI_MEMORY_OUTER_SHAREABLE);
-		EXPECT_SPCI_ERROR(spci_mem_lend(msg_size, msg_size),
-				  SPCI_INVALID_PARAMETERS);
-		msg_size = spci_memory_region_init(
+			&constituent_copy, 1, 0, 0, FFA_DATA_ACCESS_RW,
+			FFA_INSTRUCTION_ACCESS_X, FFA_MEMORY_NORMAL_MEM,
+			FFA_MEMORY_CACHE_WRITE_BACK,
+			FFA_MEMORY_OUTER_SHAREABLE);
+		EXPECT_FFA_ERROR(ffa_mem_lend(msg_size, msg_size),
+				 FFA_INVALID_PARAMETERS);
+		msg_size = ffa_memory_region_init(
 			send_buf, HF_PRIMARY_VM_ID, SERVICE_VM2,
-			&constituent_copy, 1, 0, 0, SPCI_DATA_ACCESS_RW,
-			SPCI_INSTRUCTION_ACCESS_X, SPCI_MEMORY_NORMAL_MEM,
-			SPCI_MEMORY_CACHE_WRITE_BACK,
-			SPCI_MEMORY_OUTER_SHAREABLE);
-		EXPECT_SPCI_ERROR(spci_mem_share(msg_size, msg_size),
-				  SPCI_INVALID_PARAMETERS);
+			&constituent_copy, 1, 0, 0, FFA_DATA_ACCESS_RW,
+			FFA_INSTRUCTION_ACCESS_X, FFA_MEMORY_NORMAL_MEM,
+			FFA_MEMORY_CACHE_WRITE_BACK,
+			FFA_MEMORY_OUTER_SHAREABLE);
+		EXPECT_FFA_ERROR(ffa_mem_share(msg_size, msg_size),
+				 FFA_INVALID_PARAMETERS);
 	}
 
 	/* Return control to primary. */
-	spci_yield();
+	ffa_yield();
 }
diff --git a/test/vmapi/primary_with_secondaries/services/perfmon.c b/test/vmapi/primary_with_secondaries/services/perfmon.c
index 1e19ffb..ced0ea9 100644
--- a/test/vmapi/primary_with_secondaries/services/perfmon.c
+++ b/test/vmapi/primary_with_secondaries/services/perfmon.c
@@ -31,5 +31,5 @@
 	write_msr(PMINTENSET_EL1, 0xf);
 
 	EXPECT_EQ(exception_handler_get_num(), 3);
-	spci_yield();
+	ffa_yield();
 }
diff --git a/test/vmapi/primary_with_secondaries/services/receive_block.c b/test/vmapi/primary_with_secondaries/services/receive_block.c
index 49ea5f2..348e9b0 100644
--- a/test/vmapi/primary_with_secondaries/services/receive_block.c
+++ b/test/vmapi/primary_with_secondaries/services/receive_block.c
@@ -18,13 +18,13 @@
 #include "hf/arch/vm/interrupts.h"
 
 #include "hf/dlog.h"
-#include "hf/spci.h"
+#include "hf/ffa.h"
 
 #include "vmapi/hf/call.h"
 
 #include "primary_with_secondary.h"
 #include "test/hftest.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 /*
  * Secondary VM that enables an interrupt, disables interrupts globally, and
@@ -47,12 +47,12 @@
 	hf_interrupt_enable(EXTERNAL_INTERRUPT_ID_A, true);
 
 	for (i = 0; i < 10; ++i) {
-		struct spci_value res = spci_msg_wait();
-		EXPECT_SPCI_ERROR(res, SPCI_INTERRUPTED);
+		struct ffa_value res = ffa_msg_wait();
+		EXPECT_FFA_ERROR(res, FFA_INTERRUPTED);
 	}
 
-	memcpy_s(SERVICE_SEND_BUFFER(), SPCI_MSG_PAYLOAD_MAX, message,
+	memcpy_s(SERVICE_SEND_BUFFER(), FFA_MSG_PAYLOAD_MAX, message,
 		 sizeof(message));
 
-	spci_msg_send(hf_vm_get_id(), HF_PRIMARY_VM_ID, sizeof(message), 0);
+	ffa_msg_send(hf_vm_get_id(), HF_PRIMARY_VM_ID, sizeof(message), 0);
 }
diff --git a/test/vmapi/primary_with_secondaries/services/relay.c b/test/vmapi/primary_with_secondaries/services/relay.c
index 7917251..2ab8534 100644
--- a/test/vmapi/primary_with_secondaries/services/relay.c
+++ b/test/vmapi/primary_with_secondaries/services/relay.c
@@ -30,31 +30,31 @@
 	 * message so multiple IDs can be placed at the start of the message.
 	 */
 	for (;;) {
-		spci_vm_id_t *chain;
-		spci_vm_id_t next_vm_id;
+		ffa_vm_id_t *chain;
+		ffa_vm_id_t next_vm_id;
 		void *next_message;
 		uint32_t next_message_size;
 
 		/* Receive the message to relay. */
-		struct spci_value ret = spci_msg_wait();
-		ASSERT_EQ(ret.func, SPCI_MSG_SEND_32);
+		struct ffa_value ret = ffa_msg_wait();
+		ASSERT_EQ(ret.func, FFA_MSG_SEND_32);
 
 		/* Prepare to relay the message. */
 		void *recv_buf = SERVICE_RECV_BUFFER();
 		void *send_buf = SERVICE_SEND_BUFFER();
-		ASSERT_GE(spci_msg_send_size(ret), sizeof(spci_vm_id_t));
+		ASSERT_GE(ffa_msg_send_size(ret), sizeof(ffa_vm_id_t));
 
-		chain = (spci_vm_id_t *)recv_buf;
+		chain = (ffa_vm_id_t *)recv_buf;
 		next_vm_id = le16toh(*chain);
 		next_message = chain + 1;
 		next_message_size =
-			spci_msg_send_size(ret) - sizeof(spci_vm_id_t);
+			ffa_msg_send_size(ret) - sizeof(ffa_vm_id_t);
 
 		/* Send the message to the next stage. */
-		memcpy_s(send_buf, SPCI_MSG_PAYLOAD_MAX, next_message,
+		memcpy_s(send_buf, FFA_MSG_PAYLOAD_MAX, next_message,
 			 next_message_size);
 
-		EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
-		spci_msg_send(hf_vm_get_id(), next_vm_id, next_message_size, 0);
+		EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
+		ffa_msg_send(hf_vm_get_id(), next_vm_id, next_message_size, 0);
 	}
 }
diff --git a/test/vmapi/primary_with_secondaries/services/smp.c b/test/vmapi/primary_with_secondaries/services/smp.c
index 9030fb5..79d91fc 100644
--- a/test/vmapi/primary_with_secondaries/services/smp.c
+++ b/test/vmapi/primary_with_secondaries/services/smp.c
@@ -23,7 +23,7 @@
 #include "hf/std.h"
 
 #include "vmapi/hf/call.h"
-#include "vmapi/hf/spci.h"
+#include "vmapi/hf/ffa.h"
 
 #include "../psci.h"
 #include "primary_with_secondary.h"
@@ -40,10 +40,10 @@
 /** Send a message back to the primary. */
 void send_message(const char *message, uint32_t size)
 {
-	memcpy_s(SERVICE_SEND_BUFFER(), SPCI_MSG_PAYLOAD_MAX, message, size);
+	memcpy_s(SERVICE_SEND_BUFFER(), FFA_MSG_PAYLOAD_MAX, message, size);
 
-	ASSERT_EQ(spci_msg_send(hf_vm_get_id(), HF_PRIMARY_VM_ID, size, 0).func,
-		  SPCI_SUCCESS_32);
+	ASSERT_EQ(ffa_msg_send(hf_vm_get_id(), HF_PRIMARY_VM_ID, size, 0).func,
+		  FFA_SUCCESS_32);
 }
 
 /**
diff --git a/test/vmapi/primary_with_secondaries/services/unmapped.c b/test/vmapi/primary_with_secondaries/services/unmapped.c
index 30d9f07..ec85310 100644
--- a/test/vmapi/primary_with_secondaries/services/unmapped.c
+++ b/test/vmapi/primary_with_secondaries/services/unmapped.c
@@ -40,17 +40,17 @@
 {
 	void *send_buf = SERVICE_SEND_BUFFER();
 	/* Give some memory to the primary VM so it becomes unmapped here. */
-	struct spci_memory_region_constituent constituents[] = {
+	struct ffa_memory_region_constituent constituents[] = {
 		{.address = (uint64_t)(&pages[PAGE_SIZE]), .page_count = 1},
 	};
-	uint32_t msg_size = spci_memory_region_init(
+	uint32_t msg_size = ffa_memory_region_init(
 		send_buf, hf_vm_get_id(), HF_PRIMARY_VM_ID, constituents,
-		ARRAY_SIZE(constituents), 0, 0, SPCI_DATA_ACCESS_NOT_SPECIFIED,
-		SPCI_INSTRUCTION_ACCESS_NOT_SPECIFIED, SPCI_MEMORY_NORMAL_MEM,
-		SPCI_MEMORY_CACHE_WRITE_BACK, SPCI_MEMORY_OUTER_SHAREABLE);
+		ARRAY_SIZE(constituents), 0, 0, FFA_DATA_ACCESS_NOT_SPECIFIED,
+		FFA_INSTRUCTION_ACCESS_NOT_SPECIFIED, FFA_MEMORY_NORMAL_MEM,
+		FFA_MEMORY_CACHE_WRITE_BACK, FFA_MEMORY_OUTER_SHAREABLE);
 	exception_setup(NULL, exception_handler_yield_data_abort);
 
-	EXPECT_EQ(spci_mem_donate(msg_size, msg_size).func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_mem_donate(msg_size, msg_size).func, FFA_SUCCESS_32);
 
 	*(volatile uint64_t *)(&pages[PAGE_SIZE - 6]);
 	FAIL("Exception not generated by invalid access.");
diff --git a/test/vmapi/primary_with_secondaries/services/wfi.c b/test/vmapi/primary_with_secondaries/services/wfi.c
index 6d61194..4b9f6bc 100644
--- a/test/vmapi/primary_with_secondaries/services/wfi.c
+++ b/test/vmapi/primary_with_secondaries/services/wfi.c
@@ -48,8 +48,8 @@
 		interrupt_wait();
 	}
 
-	memcpy_s(SERVICE_SEND_BUFFER(), SPCI_MSG_PAYLOAD_MAX, message,
+	memcpy_s(SERVICE_SEND_BUFFER(), FFA_MSG_PAYLOAD_MAX, message,
 		 sizeof(message));
 
-	spci_msg_send(hf_vm_get_id(), HF_PRIMARY_VM_ID, sizeof(message), 0);
+	ffa_msg_send(hf_vm_get_id(), HF_PRIMARY_VM_ID, sizeof(message), 0);
 }
diff --git a/test/vmapi/primary_with_secondaries/smp.c b/test/vmapi/primary_with_secondaries/smp.c
index d76b5f0..685d556 100644
--- a/test/vmapi/primary_with_secondaries/smp.c
+++ b/test/vmapi/primary_with_secondaries/smp.c
@@ -22,11 +22,11 @@
 
 #include "primary_with_secondary.h"
 #include "test/hftest.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 TEAR_DOWN(smp)
 {
-	EXPECT_SPCI_ERROR(spci_rx_release(), SPCI_DENIED);
+	EXPECT_FFA_ERROR(ffa_rx_release(), FFA_DENIED);
 }
 
 /**
@@ -37,40 +37,40 @@
 {
 	const char expected_response_0[] = "vCPU 0";
 	const char expected_response_1[] = "vCPU 1";
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM3, "smp", mb.send);
 
 	/* Let the first vCPU start the second vCPU. */
-	run_res = spci_run(SERVICE_VM3, 0);
-	EXPECT_EQ(run_res.func, HF_SPCI_RUN_WAKE_UP);
-	EXPECT_EQ(spci_vm_id(run_res), SERVICE_VM3);
-	EXPECT_EQ(spci_vcpu_index(run_res), 1);
+	run_res = ffa_run(SERVICE_VM3, 0);
+	EXPECT_EQ(run_res.func, HF_FFA_RUN_WAKE_UP);
+	EXPECT_EQ(ffa_vm_id(run_res), SERVICE_VM3);
+	EXPECT_EQ(ffa_vcpu_index(run_res), 1);
 
 	/* Run the second vCPU and wait for a message. */
 	dlog("Run second vCPU for message\n");
-	run_res = spci_run(SERVICE_VM3, 1);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(expected_response_1));
+	run_res = ffa_run(SERVICE_VM3, 1);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(expected_response_1));
 	EXPECT_EQ(memcmp(mb.recv, expected_response_1,
 			 sizeof(expected_response_1)),
 		  0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/* Run the first vCPU and wait for a different message. */
 	dlog("Run first vCPU for message\n");
-	run_res = spci_run(SERVICE_VM3, 0);
-	EXPECT_EQ(run_res.func, SPCI_MSG_SEND_32);
-	EXPECT_EQ(spci_msg_send_size(run_res), sizeof(expected_response_0));
+	run_res = ffa_run(SERVICE_VM3, 0);
+	EXPECT_EQ(run_res.func, FFA_MSG_SEND_32);
+	EXPECT_EQ(ffa_msg_send_size(run_res), sizeof(expected_response_0));
 	EXPECT_EQ(memcmp(mb.recv, expected_response_0,
 			 sizeof(expected_response_0)),
 		  0);
-	EXPECT_EQ(spci_rx_release().func, SPCI_SUCCESS_32);
+	EXPECT_EQ(ffa_rx_release().func, FFA_SUCCESS_32);
 
 	/* Run the second vCPU again, and expect it to turn itself off. */
 	dlog("Run second vCPU for poweroff.\n");
-	run_res = spci_run(SERVICE_VM3, 1);
-	EXPECT_EQ(run_res.func, HF_SPCI_RUN_WAIT_FOR_INTERRUPT);
-	EXPECT_EQ(run_res.arg2, SPCI_SLEEP_INDEFINITE);
+	run_res = ffa_run(SERVICE_VM3, 1);
+	EXPECT_EQ(run_res.func, HF_FFA_RUN_WAIT_FOR_INTERRUPT);
+	EXPECT_EQ(run_res.arg2, FFA_SLEEP_INDEFINITE);
 }
diff --git a/test/vmapi/primary_with_secondaries/spci.c b/test/vmapi/primary_with_secondaries/spci.c
deleted file mode 100644
index 6ffc85f..0000000
--- a/test/vmapi/primary_with_secondaries/spci.c
+++ /dev/null
@@ -1,132 +0,0 @@
-/*
- * Copyright 2019 The Hafnium Authors.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     https://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "hf/spci.h"
-
-#include <stdint.h>
-
-#include "hf/std.h"
-
-#include "vmapi/hf/call.h"
-
-#include "primary_with_secondary.h"
-#include "test/hftest.h"
-#include "test/vmapi/spci.h"
-
-/**
- * Send a message to a secondary VM which checks the validity of the received
- * header.
- */
-TEST(spci, msg_send)
-{
-	const char message[] = "spci_msg_send";
-	struct spci_value run_res;
-	struct mailbox_buffers mb = set_up_mailbox();
-
-	SERVICE_SELECT(SERVICE_VM1, "spci_check", mb.send);
-
-	/* Set the payload, init the message header and send the message. */
-	memcpy_s(mb.send, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
-	EXPECT_EQ(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, sizeof(message), 0)
-			.func,
-		SPCI_SUCCESS_32);
-
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
-}
-
-/**
- * Send a message to a secondary VM spoofing the source VM id.
- */
-TEST(spci, msg_send_spoof)
-{
-	const char message[] = "spci_msg_send";
-	struct mailbox_buffers mb = set_up_mailbox();
-
-	SERVICE_SELECT(SERVICE_VM1, "spci_check", mb.send);
-
-	/* Set the payload, init the message header and send the message. */
-	memcpy_s(mb.send, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
-	EXPECT_SPCI_ERROR(
-		spci_msg_send(SERVICE_VM2, SERVICE_VM1, sizeof(message), 0),
-		SPCI_INVALID_PARAMETERS);
-}
-
-/**
- * Send a message to a secondary VM with incorrect destination id.
- */
-TEST(spci, spci_invalid_destination_id)
-{
-	const char message[] = "fail to send";
-	struct mailbox_buffers mb = set_up_mailbox();
-
-	SERVICE_SELECT(SERVICE_VM1, "spci_check", mb.send);
-	/* Set the payload, init the message header and send the message. */
-	memcpy_s(mb.send, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
-	EXPECT_SPCI_ERROR(
-		spci_msg_send(HF_PRIMARY_VM_ID, -1, sizeof(message), 0),
-		SPCI_INVALID_PARAMETERS);
-}
-
-/**
- * Ensure that the length parameter is respected when sending messages.
- */
-TEST(spci, spci_incorrect_length)
-{
-	const char message[] = "this should be truncated";
-	struct spci_value run_res;
-	struct mailbox_buffers mb = set_up_mailbox();
-
-	SERVICE_SELECT(SERVICE_VM1, "spci_length", mb.send);
-
-	/* Send the message and compare if truncated. */
-	memcpy_s(mb.send, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
-	/* Hard code incorrect length. */
-	EXPECT_EQ(spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, 16, 0).func,
-		  SPCI_SUCCESS_32);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
-}
-
-/**
- * Attempt to send a message larger than what is supported.
- */
-TEST(spci, spci_large_message)
-{
-	const char message[] = "fail to send";
-	struct mailbox_buffers mb = set_up_mailbox();
-
-	memcpy_s(mb.send, SPCI_MSG_PAYLOAD_MAX, message, sizeof(message));
-	/* Send a message that is larger than the mailbox supports (4KB). */
-	EXPECT_SPCI_ERROR(
-		spci_msg_send(HF_PRIMARY_VM_ID, SERVICE_VM1, 4 * 1024 + 1, 0),
-		SPCI_INVALID_PARAMETERS);
-}
-
-/**
- * Verify secondary VM non blocking recv.
- */
-TEST(spci, spci_recv_non_blocking)
-{
-	struct mailbox_buffers mb = set_up_mailbox();
-	struct spci_value run_res;
-
-	/* Check is performed in secondary VM. */
-	SERVICE_SELECT(SERVICE_VM1, "spci_recv_non_blocking", mb.send);
-	run_res = spci_run(SERVICE_VM1, 0);
-	EXPECT_EQ(run_res.func, SPCI_YIELD_32);
-}
diff --git a/test/vmapi/primary_with_secondaries/sysregs.c b/test/vmapi/primary_with_secondaries/sysregs.c
index 42f0636..fa7a30c 100644
--- a/test/vmapi/primary_with_secondaries/sysregs.c
+++ b/test/vmapi/primary_with_secondaries/sysregs.c
@@ -20,7 +20,7 @@
 
 #include "primary_with_secondary.h"
 #include "test/vmapi/exception_handler.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 SET_UP(sysregs)
 {
diff --git a/test/vmapi/primary_with_secondaries/unmapped.c b/test/vmapi/primary_with_secondaries/unmapped.c
index 64fa9a0..42c69a9 100644
--- a/test/vmapi/primary_with_secondaries/unmapped.c
+++ b/test/vmapi/primary_with_secondaries/unmapped.c
@@ -19,19 +19,19 @@
 #include "primary_with_secondary.h"
 #include "test/hftest.h"
 #include "test/vmapi/exception_handler.h"
-#include "test/vmapi/spci.h"
+#include "test/vmapi/ffa.h"
 
 /**
  * Accessing unmapped memory traps the VM.
  */
 TEST(unmapped, data_unmapped)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "data_unmapped", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
@@ -41,12 +41,12 @@
  */
 TEST(unmapped, straddling_data_unmapped)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "straddling_data_unmapped", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
@@ -56,12 +56,12 @@
  */
 TEST(unmapped, instruction_unmapped)
 {
-	struct spci_value run_res;
+	struct ffa_value run_res;
 	struct mailbox_buffers mb = set_up_mailbox();
 
 	SERVICE_SELECT(SERVICE_VM1, "instruction_unmapped", mb.send);
 
-	run_res = spci_run(SERVICE_VM1, 0);
+	run_res = ffa_run(SERVICE_VM1, 0);
 	EXPECT_EQ(exception_handler_receive_exception_count(&run_res, mb.recv),
 		  1);
 }
diff --git a/vmlib/BUILD.gn b/vmlib/BUILD.gn
index 9ba8c3d..616391d 100644
--- a/vmlib/BUILD.gn
+++ b/vmlib/BUILD.gn
@@ -16,6 +16,6 @@
 
 source_set("vmlib") {
   sources = [
-    "spci.c",
+    "ffa.c",
   ]
 }
diff --git a/vmlib/aarch64/call.c b/vmlib/aarch64/call.c
index 0c1a078..f92c644 100644
--- a/vmlib/aarch64/call.c
+++ b/vmlib/aarch64/call.c
@@ -16,7 +16,7 @@
 
 #include "hf/call.h"
 
-#include "hf/spci.h"
+#include "hf/ffa.h"
 #include "hf/types.h"
 
 int64_t hf_call(uint64_t arg0, uint64_t arg1, uint64_t arg2, uint64_t arg3)
@@ -37,7 +37,7 @@
 	return r0;
 }
 
-struct spci_value spci_call(struct spci_value args)
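+/*
+ * Makes an FF-A call by passing `args` in registers x0-x7 and returning the
+ * values that the callee leaves in those registers.
+ */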
+struct ffa_value ffa_call(struct ffa_value args)
 {
 	register uint64_t r0 __asm__("x0") = args.func;
 	register uint64_t r1 __asm__("x1") = args.arg1;
@@ -54,12 +54,12 @@
 		"+r"(r0), "+r"(r1), "+r"(r2), "+r"(r3), "+r"(r4), "+r"(r5),
 		"+r"(r6), "+r"(r7));
 
-	return (struct spci_value){.func = r0,
-				   .arg1 = r1,
-				   .arg2 = r2,
-				   .arg3 = r3,
-				   .arg4 = r4,
-				   .arg5 = r5,
-				   .arg6 = r6,
-				   .arg7 = r7};
+	return (struct ffa_value){.func = r0,
+				  .arg1 = r1,
+				  .arg2 = r2,
+				  .arg3 = r3,
+				  .arg4 = r4,
+				  .arg5 = r5,
+				  .arg6 = r6,
+				  .arg7 = r7};
 }
diff --git a/vmlib/ffa.c b/vmlib/ffa.c
new file mode 100644
index 0000000..c32b2b6
--- /dev/null
+++ b/vmlib/ffa.c
@@ -0,0 +1,202 @@
+/*
+ * Copyright 2019 The Hafnium Authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "hf/ffa.h"
+
+#include <stddef.h>
+
+#include "hf/types.h"
+
+#if defined(__linux__) && defined(__KERNEL__)
+#include <linux/kernel.h>
+#include <linux/string.h>
+
+#else
+#include "hf/std.h"
+#endif
+
+/**
+ * Initialises the given `ffa_memory_region` and copies the constituent
+ * information to it. Returns the length in bytes occupied by the data copied to
+ * `memory_region` (attributes, constituents and memory region header size).
+ */
+static uint32_t ffa_memory_region_init_internal(
+	struct ffa_memory_region *memory_region, ffa_vm_id_t sender,
+	ffa_memory_attributes_t attributes, ffa_memory_region_flags_t flags,
+	ffa_memory_handle_t handle, uint32_t tag, ffa_vm_id_t receiver,
+	ffa_memory_access_permissions_t permissions,
+	const struct ffa_memory_region_constituent constituents[],
+	uint32_t constituent_count)
+{
+	struct ffa_composite_memory_region *composite_memory_region;
+	uint32_t index;
+	uint32_t constituents_length =
+		constituent_count *
+		sizeof(struct ffa_memory_region_constituent);
+
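+	/* Fill in the header; this helper supports a single receiver. */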
+	memory_region->sender = sender;
+	memory_region->attributes = attributes;
+	memory_region->reserved_0 = 0;
+	memory_region->flags = flags;
+	memory_region->handle = handle;
+	memory_region->tag = tag;
+	memory_region->reserved_1 = 0;
+	memory_region->receiver_count = 1;
+	memory_region->receivers[0].receiver_permissions.receiver = receiver;
+	memory_region->receivers[0].receiver_permissions.permissions =
+		permissions;
+	memory_region->receivers[0].receiver_permissions.flags = 0;
+	/*
+	 * Note that `sizeof(struct ffa_memory_region)` and `sizeof(struct
+	 * ffa_memory_access)` must both be multiples of 16 (as verified by the
+	 * asserts in `ffa_memory.c`), so it is guaranteed that the offset we
+	 * calculate here is aligned to a 64-bit boundary and so 64-bit values
+	 * can be copied without alignment faults.
+	 */
+	memory_region->receivers[0].composite_memory_region_offset =
+		sizeof(struct ffa_memory_region) +
+		memory_region->receiver_count *
+			sizeof(struct ffa_memory_access);
+	memory_region->receivers[0].reserved_0 = 0;
+
+	composite_memory_region =
+		ffa_memory_region_get_composite(memory_region, 0);
+
+	composite_memory_region->page_count = 0;
+	composite_memory_region->constituent_count = constituent_count;
+	composite_memory_region->reserved_0 = 0;
+
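+	/* Copy the constituents and accumulate the total page count. */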
+	for (index = 0; index < constituent_count; index++) {
+		composite_memory_region->constituents[index] =
+			constituents[index];
+		composite_memory_region->page_count +=
+			constituents[index].page_count;
+	}
+
+	/*
+	 * TODO: Add an assert ensuring that the specified message
+	 * length is not greater than FFA_MSG_PAYLOAD_MAX.
+	 */
+
+	return memory_region->receivers[0].composite_memory_region_offset +
+	       sizeof(struct ffa_composite_memory_region) + constituents_length;
+}
+
+/**
+ * Initialises the given `ffa_memory_region` and copies the constituent
+ * information to it. Returns the length in bytes occupied by the data copied to
+ * `memory_region` (attributes, constituents and memory region header size).
+ */
+uint32_t ffa_memory_region_init(
+	struct ffa_memory_region *memory_region, ffa_vm_id_t sender,
+	ffa_vm_id_t receiver,
+	const struct ffa_memory_region_constituent constituents[],
+	uint32_t constituent_count, uint32_t tag,
+	ffa_memory_region_flags_t flags, enum ffa_data_access data_access,
+	enum ffa_instruction_access instruction_access,
+	enum ffa_memory_type type, enum ffa_memory_cacheability cacheability,
+	enum ffa_memory_shareability shareability)
+{
+	ffa_memory_access_permissions_t permissions = 0;
+	ffa_memory_attributes_t attributes = 0;
+
+	/* Set memory region's permissions. */
+	ffa_set_data_access_attr(&permissions, data_access);
+	ffa_set_instruction_access_attr(&permissions, instruction_access);
+
+	/* Set memory region's page attributes. */
+	ffa_set_memory_type_attr(&attributes, type);
+	ffa_set_memory_cacheability_attr(&attributes, cacheability);
+	ffa_set_memory_shareability_attr(&attributes, shareability);
+
+	return ffa_memory_region_init_internal(
+		memory_region, sender, attributes, flags, 0, tag, receiver,
+		permissions, constituents, constituent_count);
+}
+
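+/**
+ * Initialises the given `ffa_memory_region` as a retrieve request for the
+ * memory region identified by `handle`. Returns the length in bytes of the
+ * request written to `memory_region`.
+ */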
+uint32_t ffa_memory_retrieve_request_init(
+	struct ffa_memory_region *memory_region, ffa_memory_handle_t handle,
+	ffa_vm_id_t sender, ffa_vm_id_t receiver, uint32_t tag,
+	ffa_memory_region_flags_t flags, enum ffa_data_access data_access,
+	enum ffa_instruction_access instruction_access,
+	enum ffa_memory_type type, enum ffa_memory_cacheability cacheability,
+	enum ffa_memory_shareability shareability)
+{
+	ffa_memory_access_permissions_t permissions = 0;
+	ffa_memory_attributes_t attributes = 0;
+
+	/* Set memory region's permissions. */
+	ffa_set_data_access_attr(&permissions, data_access);
+	ffa_set_instruction_access_attr(&permissions, instruction_access);
+
+	/* Set memory region's page attributes. */
+	ffa_set_memory_type_attr(&attributes, type);
+	ffa_set_memory_cacheability_attr(&attributes, cacheability);
+	ffa_set_memory_shareability_attr(&attributes, shareability);
+
+	memory_region->sender = sender;
+	memory_region->attributes = attributes;
+	memory_region->reserved_0 = 0;
+	memory_region->flags = flags;
+	memory_region->reserved_1 = 0;
+	memory_region->handle = handle;
+	memory_region->tag = tag;
+	memory_region->receiver_count = 1;
+	memory_region->receivers[0].receiver_permissions.receiver = receiver;
+	memory_region->receivers[0].receiver_permissions.permissions =
+		permissions;
+	memory_region->receivers[0].receiver_permissions.flags = 0;
+	/*
+	 * Offset 0 in this case means that the hypervisor should allocate the
+	 * address ranges. This is the only configuration supported by Hafnium,
+	 * as it enforces 1:1 mappings in the stage 2 page tables.
+	 */
+	memory_region->receivers[0].composite_memory_region_offset = 0;
+	memory_region->receivers[0].reserved_0 = 0;
+
+	return sizeof(struct ffa_memory_region) +
+	       memory_region->receiver_count * sizeof(struct ffa_memory_access);
+}
+
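+/**
+ * Initialises the given `ffa_memory_region` as a retrieve request made by the
+ * lender for the memory region identified by `handle`, specifying no
+ * receivers. Returns the length in bytes written to `memory_region`.
+ */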
+uint32_t ffa_memory_lender_retrieve_request_init(
+	struct ffa_memory_region *memory_region, ffa_memory_handle_t handle,
+	ffa_vm_id_t sender)
+{
+	memory_region->sender = sender;
+	memory_region->attributes = 0;
+	memory_region->reserved_0 = 0;
+	memory_region->flags = 0;
+	memory_region->reserved_1 = 0;
+	memory_region->handle = handle;
+	memory_region->tag = 0;
+	memory_region->receiver_count = 0;
+
+	return sizeof(struct ffa_memory_region);
+}
+
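+/**
+ * Initialises the given `ffa_memory_region` as the response to a retrieve
+ * request, describing the given constituents. Returns the length in bytes
+ * written to `response`.
+ */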
+uint32_t ffa_retrieved_memory_region_init(
+	struct ffa_memory_region *response, size_t response_max_size,
+	ffa_vm_id_t sender, ffa_memory_attributes_t attributes,
+	ffa_memory_region_flags_t flags, ffa_memory_handle_t handle,
+	ffa_vm_id_t receiver, ffa_memory_access_permissions_t permissions,
+	const struct ffa_memory_region_constituent constituents[],
+	uint32_t constituent_count)
+{
+	/* TODO: Check against response_max_size first. */
+	return ffa_memory_region_init_internal(
+		response, sender, attributes, flags, handle, 0, receiver,
+		permissions, constituents, constituent_count);
+}
diff --git a/vmlib/spci.c b/vmlib/spci.c
deleted file mode 100644
index 4ed1494..0000000
--- a/vmlib/spci.c
+++ /dev/null
@@ -1,204 +0,0 @@
-/*
- * Copyright 2019 The Hafnium Authors.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     https://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "hf/spci.h"
-
-#include <stddef.h>
-
-#include "hf/types.h"
-
-#if defined(__linux__) && defined(__KERNEL__)
-#include <linux/kernel.h>
-#include <linux/string.h>
-
-#else
-#include "hf/std.h"
-#endif
-
-/**
- * Initialises the given `spci_memory_region` and copies the constituent
- * information to it. Returns the length in bytes occupied by the data copied to
- * `memory_region` (attributes, constituents and memory region header size).
- */
-static uint32_t spci_memory_region_init_internal(
-	struct spci_memory_region *memory_region, spci_vm_id_t sender,
-	spci_memory_attributes_t attributes, spci_memory_region_flags_t flags,
-	spci_memory_handle_t handle, uint32_t tag, spci_vm_id_t receiver,
-	spci_memory_access_permissions_t permissions,
-	const struct spci_memory_region_constituent constituents[],
-	uint32_t constituent_count)
-{
-	struct spci_composite_memory_region *composite_memory_region;
-	uint32_t index;
-	uint32_t constituents_length =
-		constituent_count *
-		sizeof(struct spci_memory_region_constituent);
-
-	memory_region->sender = sender;
-	memory_region->attributes = attributes;
-	memory_region->reserved_0 = 0;
-	memory_region->flags = flags;
-	memory_region->handle = handle;
-	memory_region->tag = tag;
-	memory_region->reserved_1 = 0;
-	memory_region->receiver_count = 1;
-	memory_region->receivers[0].receiver_permissions.receiver = receiver;
-	memory_region->receivers[0].receiver_permissions.permissions =
-		permissions;
-	memory_region->receivers[0].receiver_permissions.flags = 0;
-	/*
-	 * Note that `sizeof(struct spci_memory_region)` and `sizeof(struct
-	 * spci_memory_access)` must both be multiples of 16 (as verified by the
-	 * asserts in `spci_memory.c`), so it is guaranteed that the offset we
-	 * calculate here is aligned to a 64-bit boundary and so 64-bit values
-	 * can be copied without alignment faults.
-	 */
-	memory_region->receivers[0].composite_memory_region_offset =
-		sizeof(struct spci_memory_region) +
-		memory_region->receiver_count *
-			sizeof(struct spci_memory_access);
-	memory_region->receivers[0].reserved_0 = 0;
-
-	composite_memory_region =
-		spci_memory_region_get_composite(memory_region, 0);
-
-	composite_memory_region->page_count = 0;
-	composite_memory_region->constituent_count = constituent_count;
-	composite_memory_region->reserved_0 = 0;
-
-	for (index = 0; index < constituent_count; index++) {
-		composite_memory_region->constituents[index] =
-			constituents[index];
-		composite_memory_region->page_count +=
-			constituents[index].page_count;
-	}
-
-	/*
-	 * TODO: Add assert ensuring that the specified message
-	 * length is not greater than SPCI_MSG_PAYLOAD_MAX.
-	 */
-
-	return memory_region->receivers[0].composite_memory_region_offset +
-	       sizeof(struct spci_composite_memory_region) +
-	       constituents_length;
-}
-
-/**
- * Initialises the given `spci_memory_region` and copies the constituent
- * information to it. Returns the length in bytes occupied by the data copied to
- * `memory_region` (attributes, constituents and memory region header size).
- */
-uint32_t spci_memory_region_init(
-	struct spci_memory_region *memory_region, spci_vm_id_t sender,
-	spci_vm_id_t receiver,
-	const struct spci_memory_region_constituent constituents[],
-	uint32_t constituent_count, uint32_t tag,
-	spci_memory_region_flags_t flags, enum spci_data_access data_access,
-	enum spci_instruction_access instruction_access,
-	enum spci_memory_type type, enum spci_memory_cacheability cacheability,
-	enum spci_memory_shareability shareability)
-{
-	spci_memory_access_permissions_t permissions = 0;
-	spci_memory_attributes_t attributes = 0;
-
-	/* Set memory region's permissions. */
-	spci_set_data_access_attr(&permissions, data_access);
-	spci_set_instruction_access_attr(&permissions, instruction_access);
-
-	/* Set memory region's page attributes. */
-	spci_set_memory_type_attr(&attributes, type);
-	spci_set_memory_cacheability_attr(&attributes, cacheability);
-	spci_set_memory_shareability_attr(&attributes, shareability);
-
-	return spci_memory_region_init_internal(
-		memory_region, sender, attributes, flags, 0, tag, receiver,
-		permissions, constituents, constituent_count);
-}
-
-uint32_t spci_memory_retrieve_request_init(
-	struct spci_memory_region *memory_region, spci_memory_handle_t handle,
-	spci_vm_id_t sender, spci_vm_id_t receiver, uint32_t tag,
-	spci_memory_region_flags_t flags, enum spci_data_access data_access,
-	enum spci_instruction_access instruction_access,
-	enum spci_memory_type type, enum spci_memory_cacheability cacheability,
-	enum spci_memory_shareability shareability)
-{
-	spci_memory_access_permissions_t permissions = 0;
-	spci_memory_attributes_t attributes = 0;
-
-	/* Set memory region's permissions. */
-	spci_set_data_access_attr(&permissions, data_access);
-	spci_set_instruction_access_attr(&permissions, instruction_access);
-
-	/* Set memory region's page attributes. */
-	spci_set_memory_type_attr(&attributes, type);
-	spci_set_memory_cacheability_attr(&attributes, cacheability);
-	spci_set_memory_shareability_attr(&attributes, shareability);
-
-	memory_region->sender = sender;
-	memory_region->attributes = attributes;
-	memory_region->reserved_0 = 0;
-	memory_region->flags = flags;
-	memory_region->reserved_1 = 0;
-	memory_region->handle = handle;
-	memory_region->tag = tag;
-	memory_region->receiver_count = 1;
-	memory_region->receivers[0].receiver_permissions.receiver = receiver;
-	memory_region->receivers[0].receiver_permissions.permissions =
-		permissions;
-	memory_region->receivers[0].receiver_permissions.flags = 0;
-	/*
-	 * Offset 0 in this case means that the hypervisor should allocate the
-	 * address ranges. This is the only configuration supported by Hafnium,
-	 * as it enforces 1:1 mappings in the stage 2 page tables.
-	 */
-	memory_region->receivers[0].composite_memory_region_offset = 0;
-	memory_region->receivers[0].reserved_0 = 0;
-
-	return sizeof(struct spci_memory_region) +
-	       memory_region->receiver_count *
-		       sizeof(struct spci_memory_access);
-}
-
-uint32_t spci_memory_lender_retrieve_request_init(
-	struct spci_memory_region *memory_region, spci_memory_handle_t handle,
-	spci_vm_id_t sender)
-{
-	memory_region->sender = sender;
-	memory_region->attributes = 0;
-	memory_region->reserved_0 = 0;
-	memory_region->flags = 0;
-	memory_region->reserved_1 = 0;
-	memory_region->handle = handle;
-	memory_region->tag = 0;
-	memory_region->receiver_count = 0;
-
-	return sizeof(struct spci_memory_region);
-}
-
-uint32_t spci_retrieved_memory_region_init(
-	struct spci_memory_region *response, size_t response_max_size,
-	spci_vm_id_t sender, spci_memory_attributes_t attributes,
-	spci_memory_region_flags_t flags, spci_memory_handle_t handle,
-	spci_vm_id_t receiver, spci_memory_access_permissions_t permissions,
-	const struct spci_memory_region_constituent constituents[],
-	uint32_t constituent_count)
-{
-	/* TODO: Check against response_max_size first. */
-	return spci_memory_region_init_internal(
-		response, sender, attributes, flags, handle, 0, receiver,
-		permissions, constituents, constituent_count);
-}