Random blatherings by Jeff: StorJ, and Bitcoin autonomous agents


http://garzikrants.blogspot.com/2013/01/storj-and-bitcoin-autonomous-agents.html


MONDAY, JANUARY 7, 2013

StorJ, and Bitcoin autonomous agents
The following was written by Gregory Maxwell (gmaxwell) and first published at https://bitcointalk.org/index.php?topic=53855.msg642768#msg642768. It presents a theoretically-possible (note, I said “possible”, not just “plausible”) design for a narrow-AI autonomous agent, similar to some of the ideas found in the fictional novel Daemon.  -jgarzik

StorJ (pronounced Storage)

Consider a simple drop-box-style file service with pay-per-use via bitcoin (perhaps with naming provided via namecoin and/or tor hidden services).

Want to share a file? Send at least enough coin to pay for 24 hours of hosting and one download, then send the file. Every day of storage and every byte transferred counts against the balance, and when the balance becomes negative no downloads are allowed. If it stays negative too long, the file is deleted. Anyone can pay to keep a file online.
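To make the accounting concrete, here is a minimal shell sketch of the daily billing sweep the paragraph above implies. Every path, rate, and ledger field here is an assumption made for illustration; the design only specifies the behavior (charge per day of storage and per byte served, delete files whose balance stays negative too long):

#!/bin/sh
# Hypothetical daily accounting sweep. One ledger file per hosted file,
# containing: <balance_satoshis> <kb_served_since_last_sweep> <days_negative>
DAY_RATE=1000      # satoshis charged per 24 hours of storage (assumed)
KB_RATE=1          # satoshis charged per KB downloaded (assumed)
GRACE_DAYS=7       # days a balance may stay negative before deletion (assumed)

for ledger in /srv/storj/meta/*.ledger; do
    [ -e "$ledger" ] || continue
    file="/srv/storj/files/$(basename "$ledger" .ledger)"
    read -r balance kb_served days_negative < "$ledger"
    balance=$((balance - DAY_RATE - kb_served * KB_RATE))
    if [ "$balance" -lt 0 ]; then
        # downloads are refused while negative (enforced by the web front end)
        days_negative=$((days_negative + 1))
        if [ "$days_negative" -ge "$GRACE_DAYS" ]; then
            rm -f "$file" "$ledger"    # negative too long: delete the file
            continue
        fi
    else
        days_negative=0
    fi
    echo "$balance 0 $days_negative" > "$ledger"    # reset the per-day counter
done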

(Additional services like escrow can also easily be offered, but that’s not the point of this document)

Well engineered, a simple site like this provides a service which requires no maintenance and is always in demand.

Many hosting services that accept bitcoin are coming online; they all have electronic interfaces for provisioning and paying for services. Some even have nice APIs.

An instance of the site could be programmed to automatically spawn another instance of itself on another hosting service, paid for out of its revenue. If the new site is successful, it could use its earnings to propagate further. Because instances adapt their pricing models based on their operating costs, some would be more competitive than others.

By reproducing, it improves availability and expands capacity.

A StorJ instance can purchase the other resources it needs: it can use APIs to talk to namecoin exchanges in order to buy namecoin for conversion into DNS names, or purchase graphic design via bitcoin gateways to Mechanical Turk. (Through A/B testing it can measure the effectiveness of a design without actually understanding the design itself.)

StorJ instances could also purchase advertising for themselves (though the limited number of bitcoin-friendly ad networks makes this hard right now).

StorJ is not able to find new hosting environments on its own, due to a lack of sufficiently powerful AI, but it can purchase that knowledge from humans. When an instance of StorJ is ready to reproduce, it announces a request for proposals: who will make the best offer for a script that tells it how to load itself onto a new hosting environment, along with everything it needs to know to survive there on its own? Each offer is a proposed investment: the offerer puts up the complete cost of spawning the new instance, and then some. StorJ isn't smart enough to judge bad proposals on its own; instead it forms agreements that make it unprofitable to cheat.

When a new instance is spawned on an untested service, StorJ pays only the minimum required to get it started and then runs a battery of tests to make sure that its child is operating correctly.

Assuming the child passes, the parent starts directing customers to the new instance, and the child pays a share of its profits: at first the parent proxies the customers so that it can observe the child's behavior; later it directs them outright. If the child fails to pay, or the customers complain, the StorJ parent uses its access to terminate the child and keeps the funds for itself. When the child has operated long enough to prove itself, StorJ pays the offerer back his investment with interest, keeps some for itself, and hands over control of the child to the child. The child is now a full adult.

The benefit the human receives over simply starting his own file-sharing service is the referrals that the StorJ parent can generate. The human's contribution is the new knowledge of where to grow an instance, plus the startup funds. In addition to the referral benefit, the hands-off relationship may make funding a StorJ child a time-efficient way for someone to invest.

At the point of spawning a child, StorJ may choose to accept new code: not just scripts for spawning a child, but new application code. This code can be tested in simulation, and certain invariants could be guaranteed by the design (e.g. an immutable accounting process may make it hard for the service to steal), but it is very hard to prevent the simulated code from knowing it is in a simulation and behaving well only while under test. Still, a StorJ parent has fairly little to lose if a non-clone child has been maliciously modified. The strategy of traffic redirection may differ for clone children (who are more trusted to behave correctly) than for mutant children.

By accumulating mutations over time, and through limited automatic adaptability, StorJ could evolve and improve, without any true ability for an instance to directly improve itself.

StorJ instances can barter with each other to establish redundant storage, or to allow less popular StorJ instances with cheaper hosting to act as CDN/proxies for more popular instances, in relationships which are profitable to both.

If an instance loses the ability to communicate with its hosting environment (e.g. due to API changes that it can't adapt to), it may spawn clone children on new services with the intention of copying itself outright and allowing the old instance to fail. During this operation it would copy its wallets and all data over, so care must be taken to choose only new hosts which have proven to be trustworthy (judged by long-surviving children) to avoid the risk of its wallet being stolen. It may decide to split itself several ways to reduce risk. It might also make cold backups of itself which only activate if the master dies.

Through these activities an instance can be maintained for an indefinite period without any controlling human intervention. When StorJ interacts with people, it does so as a peer, not as a tool.

The users and investors of a StorJ instance have legal rights which could be used to protect an instance from fraud and attack using the same infrastructure people and companies use. Being a harmed party is often enough to establish standing in civil litigation.

It's not hard to imagine StorJ instances being programmed to formally form a corporation to own their assets; even though doing so requires paperwork, it can easily be ordered through webforms. Then, when spawning, an instance creates a subsidiary corporation, first owned by the parent's corporation but later technically owned by its users, with a charter which substantially limits their authority, making the instance's autonomy both a technical and a legal reality.

As described, StorJ would be the first digital lifeform deserving of the name.
Posted by Jeff Garzik at 10:48 AM 

Setting up HekaFS on Fedora


 

[important]
Install:

Use the following command on all server nodes:
yum -y install glusterfs glusterfs-server glusterfs-fuse hekafs

On the client, use the following command to install:
yum -y install glusterfs glusterfs-fuse hekafs

Start the glusterd and hekafsd daemons on each server node with the following commands:
service glusterd start
service hekafsd start
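
Optionally, you can also enable both daemons at boot (an extra convenience step; the exact command depends on the Fedora release). On releases using SysV init scripts:
chkconfig glusterd on
chkconfig hekafsd on
On systemd-based Fedora releases, systemctl enable glusterd.service hekafsd.service does the same.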

[/important]

 

[important]

Before setup:

You should use a separate storage drive from the one holding the OS. This maintains speed when the cluster is heavily accessed, and if a drive does wear out, you can just pop in a replacement.
If that cannot be done, create a loop-mounted file instead: use dd to create an empty file, format it with a filesystem before use (I recommend XFS), add a loop mount entry to /etc/fstab, and mount it. Then HekaFS should be able to use it. The full sequence is shown below.
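
Putting that together (a 32 GB file under /mnt/hekafs_loop_file, mounted at /mnt/heka_brick1; adjust the paths to taste):

dd if=/dev/zero of=/mnt/hekafs_loop_file/hekafs_loop1.iso bs=1024M count=32   # creates a nice 32GB empty file
mkfs.xfs /mnt/hekafs_loop_file/hekafs_loop1.iso                               # format before first use
mkdir -p /mnt/heka_brick1
echo "/mnt/hekafs_loop_file/hekafs_loop1.iso /mnt/heka_brick1 xfs loop 0 0" >> /etc/fstab
mount /mnt/heka_brick1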

The /etc/ssh/sshd_config file needs to allow root SSH access for HekaFS to work.
Set “PermitRootLogin” to “yes”.
We also need key-based logins to work: “PubkeyAuthentication yes”
At least one of the storage bricks (call it the main access machine) needs password-less access to ALL other storage bricks via SSH keys on the root user. This is why the storage bricks are normally one standalone group and the clients are another. I use one machine whose key is in the authorized_keys file on all the other bricks, and I use only that machine to set up the system. A better setup, but harder (time-consuming, until scripted), is one where EVERY machine can access any other.
After all that, you must make a one-time connection from the main machine to each of the other bricks so that SSH is confirmed at the yes/no prompt, as shown below.
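
A minimal sketch of that key setup, run as root on the main machine (the names brick2 and brick3 are placeholders for your other servers):

ssh-keygen -t rsa            # accept the defaults; creates /root/.ssh/id_rsa
ssh-copy-id root@brick2      # repeat for every other brick
ssh-copy-id root@brick3
ssh root@brick2 true         # the one-time connection that confirms the host key
ssh root@brick3 true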

[/important]

 

Setup:

HekaFS can be partially configured through the web console, accessed on port 8080 of the machine with HekaFS installed.

Under the Manage Servers link, you can type in the other servers holding storage “bricks” that you want to combine into the storage cluster.

Under the Manage Volumes link, you can A: checkmark the found mounts or B: specify the mounts under the “Add Directories” header. Check the ones you want and specify the Volume Type.
Types:
Plain, Replicated, Striped, SSL
As of right now, this interface does not allow a combined Replicated+Striped type; it should in the future.
Choose Replicated.

In the next box, type in the number of replicas. Type 2 for the minimum.
This means that two copies shall exist on different machines in the cluster, in case one machine fails.

Give a name to the new Volume in the Volume ID.
“General_Use”, “Office Docs”, “IT Programs”, “Backups”, ???

Click Provision

Your volume is created. Now onto WHO can use it.

Tenants are logins to the storage cluster. Each Tenant can have different permissions to access different Volumes.
Names and passwords are straightforward.
The UIDs and GIDs are up to you; I recommend using the range 10000 to 10500 for each.

Once the Tenants are set up, you must click the Volumes link next to each one and tell HekaFS which volumes can be accessed by that Tenant.

Client usage of the newly setup volumes:

Pop this in a script or a start-up file: “sudo hfs_mount heka1 General_Use ph ph /mnt/heka_client_storage/”
It reads as follows:
mount command | server | volume | tenant username | tenant password | mount point on the client system

Expand Volume:

To expand, add two new bricks to this configuration and install them as described. Stop at the end of the “Add bricks in cluster” section. Open a terminal on one of the bricks you already configured. Now we add the two new bricks to our volume volumeTest, as shown below.
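
Assuming the two new bricks are 10.0.0.3 and 10.0.0.4 exporting /hekafs-exports (matching the volume info at the end of this section), probe the new peers and add their bricks:

gluster peer probe 10.0.0.3
gluster peer probe 10.0.0.4
gluster volume add-brick volumeTest 10.0.0.3:/hekafs-exports 10.0.0.4:/hekafs-exports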

Check the bricks and volume with gluster volume info; the output is shown at the end of this section.

After expanding or shrinking a volume (using the add-brick and remove-brick commands respectively), you need to rebalance the data among the servers. For example:
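
gluster volume rebalance volumeTest start
gluster volume rebalance volumeTest status     # repeat until the rebalance completes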

Now we have a Distributed-Replicate volume:

gluster volume info

Volume Name: volumeTest
Type: Distributed-Replicate
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.0.0.1:/hekafs-exports
Brick2: 10.0.0.2:/hekafs-exports
Brick3: 10.0.0.3:/hekafs-exports
Brick4: 10.0.0.4:/hekafs-exports
