# Reference

`Base.copy!`

— Method `copy!(to::TypeFutures, from::TypeFutures[, copymethod!=DistributedOperations.paralleloperations_copy!, pids])`

Copy `from` into `to` using `copymethod!`.
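**Example**

A minimal sketch, mirroring the other examples on this page; it assumes the default `copymethod!` copies each pid's local part:

```
using Distributed
addprocs(2)
@everywhere using DistributedOperations
from = ArrayFutures(Float64, (3,))
to = ArrayFutures(Float64, (3,))
fill!(from, 1, workers())
copy!(to, from)            # copy the local part of `from` into `to` on each pid
remotecall_fetch(localpart, workers()[1], to)
rmprocs(workers())
```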

`Base.fill!`

— Method `fill!(x::TypeFutures, a[, fillmethod!=DistributedOperations.fillmethod!, pids])`

Fill `x` with `a::Number` using `fillmethod!::Function`.
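**Example**

A short sketch in the style of the `reduce!` example below, assuming the default `fillmethod!` fills the local part on each of `pids`:

```
using Distributed
addprocs(2)
@everywhere using DistributedOperations
x = ArrayFutures(Float64, (3,))
fill!(x, 3.14, workers())  # fill the local part on each worker with 3.14
remotecall_fetch(localpart, workers()[1], x)
rmprocs(workers())
```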

`DistributedArrays.localpart`

— Method `localpart(x::TypeFutures)`

Get the piece of `x::TypeFutures` that is local to `myid()`.
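**Example**

A minimal sketch: `localpart` on the master process returns the master's piece, while wrapping it in `remotecall_fetch` (as in the `bcast` example below) retrieves a worker's piece:

```
using Distributed
addprocs(2)
@everywhere using DistributedOperations
x = bcast(rand(5))
localpart(x)                                  # piece local to myid()
remotecall_fetch(localpart, workers()[1], x)  # piece local to the first worker
rmprocs(workers())
```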

`DistributedOperations.bcast!`

— Method `bcast!(x::TypeFutures, pids)`

Broadcast an existing `x::TypeFutures` to `pids`. This is useful for elastic computing, where the cluster may grow after the construction and broadcast of `x::TypeFutures`.
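**Example**

A sketch of the elastic use case described above (compare the `TypeFutures` example further down, which uses the same pattern):

```
using Distributed, DistributedOperations
x = bcast(rand(10))    # broadcast to the current processes
addprocs(2)            # the cluster grows
@everywhere using DistributedOperations
bcast!(x, workers())   # broadcast the existing x to the new workers
rmprocs(workers())
```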

`DistributedOperations.bcast`

— Method `bcast(x[, pids=procs()])`

Broadcast `x` to `pids`.

**Example**

```
using Distributed
addprocs(2)
@everywhere using DistributedOperations
x = rand(10)
_x = bcast(x)
y = remotecall_fetch(localpart, workers()[1], _x)
y ≈ x # true
rmprocs(workers())
```

`DistributedOperations.reduce!`

— Method `y = reduce!(x::TypeFutures[, reducemethod!=DistributedOperations.paralleloperations_reduce!])`

Parallel reduction of `x::TypeFutures` using `reducemethod!`. By default, the reduction is a mutating, in-place, element-wise addition, such that `y = localpart(x)`.

**Example**

```
using Distributed
addprocs(2)
@everywhere using DistributedOperations
x = ArrayFutures(Float64, (3,))
fill!(x, 1, workers())
y = reduce!(x)
y ≈ [2.0,2.0,2.0] # true
localpart(x) ≈ [2.0,2.0,2.0] # true
rmprocs(workers())
```

`DistributedOperations.ArrayFutures`

— Method `x = ArrayFutures(x::Array[, pids=procs()])`

Create `x::TypeFutures`, where `myid()` is assigned `x`, and all other processes are assigned `zeros(eltype(x), size(x))`.
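**Example**

A minimal sketch of the behavior described above: the master process holds the original array, while other pids hold zeros of the same element type and size:

```
using Distributed
addprocs(2)
@everywhere using DistributedOperations
x = rand(5)
f = ArrayFutures(x, procs())
localpart(f)                                  # the original x on myid()
remotecall_fetch(localpart, workers()[1], f)  # zeros(eltype(x), size(x)) elsewhere
rmprocs(workers())
```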

`DistributedOperations.ArrayFutures`

— Method `x = ArrayFutures(T, n::NTuple{N,Int}[, pids=procs()])`

Create `x::TypeFutures`, where each process id (pid) in `pids` is assigned `zeros(T, n)`.

**Example**

```
using Distributed
addprocs(2)
@everywhere using DistributedOperations
x = ArrayFutures(Float32, (10,20), procs())
localpart(x)
rmprocs(workers())
```

`DistributedOperations.TypeFutures`

— Method `x = TypeFutures(y::T, pids)`

Construct `x::TypeFutures` from `y::T` on the master process. This is useful for creating `x` prior to the construction of a cluster. Subsequently, `x` can be used to broadcast `y` to workers.

**Example**

```
using Distributed, DistributedOperations
y = (x=rand(2),y=rand(2))
x = TypeFutures(y)
addprocs(2)
@everywhere using DistributedOperations
bcast!(x, workers())
```

`DistributedOperations.TypeFutures`

— Method `x = TypeFutures(y::T, f[, pids=procs()], fargs...)`

Construct `x::TypeFutures` from `y::T` on the workers defined by the process ids `pids`. On each worker `pid`, `f` is evaluated, and a future for the value returned by `f` is stored.

**Example**

```
using Distributed
addprocs(2)
@everywhere using DistributedOperations
@everywhere struct MyStruct
    x::Vector{Float64}
    y::Vector{Float64}
end
@everywhere foo() = MyStruct(rand(10), rand(10))
x = foo()
x = TypeFutures(x, foo, procs())
@show remotecall_fetch(localpart, workers()[1], x)
rmprocs(workers())
```

`DistributedOperations.TypeFutures`

— Method `x = TypeFutures(T, f, pids, fargs...)`

Construct `x::TypeFutures` of type `T` on the workers defined by the process ids `pids`. On each worker `pid`, `f` is evaluated, and a future for the value returned by `f` is stored.

**Example**

```
using Distributed
addprocs(2)
@everywhere using DistributedOperations
@everywhere struct MyStruct
    x::Vector{Float64}
    y::Vector{Float64}
end
@everywhere foo() = MyStruct(rand(10), rand(10))
x = TypeFutures(MyStruct, foo, procs())
@show remotecall_fetch(localpart, workers()[1], x)
rmprocs(workers())
```