S3FS is a PyFilesystem interface to Amazon S3 cloud storage.

As a PyFilesystem concrete class, S3FS allows you to work with S3 in the same way as any other supported filesystem.

Installing

S3FS may be installed with pip using the following command:

pip install fs-s3fs

This will install the most recent stable version.

Alternatively, if you want the cutting-edge code, you can check out the GitHub repository at https://github.com/pyfilesystem/s3fs

Opening an S3 Filesystem

There are two options for constructing an S3FS instance. The simplest is to use an opener, which takes a simple URL-like syntax. Here is an example:

from fs import open_fs
s3fs = open_fs('s3://mybucket/')

For more granular control, you may import the S3FS class and construct it explicitly:

from fs_s3fs import S3FS
s3fs = S3FS('mybucket')

S3FS Constructor

class fs_s3fs.S3FS(bucket_name, dir_path=u'/', aws_access_key_id=None, aws_secret_access_key=None, aws_session_token=None, endpoint_url=None, region=None, delimiter=u'/', strict=True)

Construct an Amazon S3 filesystem for PyFilesystem

  • bucket_name (str) – The S3 bucket name.
  • dir_path (str) – The root directory within the S3 Bucket. Defaults to "/".
  • aws_access_key_id (str) – The access key, or None to read the key from standard configuration files.
  • aws_secret_access_key (str) – The secret key, or None to read the key from standard configuration files.
  • endpoint_url (str) – Alternative endpoint url (None to use default).
  • aws_session_token (str) – The session token when using temporary credentials, or None.
  • region (str) – Optional S3 region.
  • delimiter (str) – The delimiter to separate folders, defaults to a forward slash.
  • strict (bool) – When True (default) S3FS will follow the PyFilesystem specification exactly. Set to False to disable validation of destination paths which may speed up uploads / downloads.
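The remaining keyword arguments can be combined to target a prefix within a bucket or an S3-compatible endpoint. The following is a minimal sketch under assumed values; the bucket name, directory, and endpoint (a local MinIO server) are hypothetical:

```python
# Hypothetical settings; pass them to the constructor as
# S3FS('my-bucket', **config). The endpoint_url here assumes an
# S3-compatible service running locally (e.g. MinIO).
config = {
    'dir_path': '/uploads',                   # root directory within the bucket
    'endpoint_url': 'http://localhost:9000',  # alternative S3-compatible endpoint
    'strict': False,                          # skip destination path validation
}

# from fs_s3fs import S3FS
# s3fs = S3FS('my-bucket', **config)
```

Setting strict=False trades the PyFilesystem specification's destination-path validation for faster uploads and downloads, as described above.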


If you don’t supply any credentials, then S3FS will use the access key and secret key configured on your system. You may also specify credentials when creating the filesystem instance. Here’s how you would do that with an opener:

s3fs = open_fs('s3://<access key>:<secret key>@mybucket')

Here’s how you specify credentials with the constructor:

s3fs = S3FS(
    'mybucket',
    aws_access_key_id=<access key>,
    aws_secret_access_key=<secret key>
)


Amazon recommends against specifying credentials explicitly like this in production.
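One alternative, sketched below, is to rely on the standard credential chain of boto3 (which S3FS uses internally): set the usual AWS environment variables, or use a shared credentials file, and construct S3FS without passing any keys. The values here are placeholders, not real credentials:

```python
import os

# Placeholder values; boto3 reads these standard variables when no
# keys are passed to S3FS explicitly. A ~/.aws/credentials file
# works the same way without touching the environment.
os.environ['AWS_ACCESS_KEY_ID'] = '<access key>'
os.environ['AWS_SECRET_ACCESS_KEY'] = '<secret key>'

# With the variables set, no credentials need to appear in code:
# s3fs = S3FS('mybucket')
```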

S3 Info

You can retrieve S3 info via the s3 namespace. Here’s an example:

>>> info = s3fs.getinfo('foo', namespaces=['s3'])
>>> info.raw['s3']
{'metadata': {}, 'delete_marker': None, 'version_id': None, 'parts_count': None, 'accept_ranges': 'bytes', 'last_modified': 1501935315, 'content_length': 3, 'content_encoding': None, 'request_charged': None, 'replication_status': None, 'server_side_encryption': None, 'expires': None, 'restore': None, 'content_type': 'binary/octet-stream', 'sse_customer_key_md5': None, 'content_disposition': None, 'storage_class': None, 'expiration': None, 'missing_meta': None, 'content_language': None, 'ssekms_key_id': None, 'sse_customer_algorithm': None, 'e_tag': '"37b51d194a7513e45b56f6524f2d51f2"', 'website_redirect_location': None, 'cache_control': None}
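The raw 's3' namespace is a plain dict, so individual fields can be read directly from info.raw['s3']. A small sketch using values from the sample output above:

```python
# A subset of the raw 's3' namespace from the sample output above.
raw_s3 = {
    'content_length': 3,
    'content_type': 'binary/octet-stream',
    'last_modified': 1501935315,
    'e_tag': '"37b51d194a7513e45b56f6524f2d51f2"',
}

# ETags are quoted in the raw response; strip the quotes to compare
# against a locally computed MD5 hex digest.
etag = raw_s3['e_tag'].strip('"')
size = raw_s3['content_length']
print(etag, size)  # 37b51d194a7513e45b56f6524f2d51f2 3
```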


You can use the geturl method to generate an externally accessible URL for an S3 object. Here’s an example:

>>> s3fs.geturl('foo')

More Information

See the PyFilesystem Docs for documentation on the rest of the PyFilesystem interface.
