Running `juicefs` by itself will print all available commands. In addition, you can add the `-h`/`--help` flag after any command to get more information, e.g., `juicefs format -h`.
```
NAME:
   juicefs - A POSIX file system built on Redis and object storage.

USAGE:
   juicefs [global options] command [command options] [arguments...]

VERSION:
   1.2.0

COMMANDS:
   ADMIN:
     format   Format a volume
     config   Change configuration of a volume
     quota    Manage directory quotas
     destroy  Destroy an existing volume
     gc       Garbage collector of objects in data storage
     fsck     Check consistency of a volume
     restore  Restore files from trash
     dump     Dump metadata into a JSON file
     load     Load metadata from a previously dumped JSON file
     version  Show version
   INSPECTOR:
     status   Show status of a volume
     stats    Show real time performance statistics of JuiceFS
     profile  Show profiling of operations completed in JuiceFS
     info     Show internal information of a path or inode
     debug    Collect and display system static and runtime information
     summary  Show tree summary of a directory
   SERVICE:
     mount    Mount a volume
     umount   Unmount a volume
     gateway  Start an S3-compatible gateway
     webdav   Start a WebDAV server
   TOOL:
     bench     Run benchmarks on a path
     objbench  Run benchmarks on an object storage
     warmup    Build cache for target directories/files
     rmr       Remove directories recursively
     sync      Sync between two storages
     clone     Clone a file or directory without copying the underlying data
     compact   Trigger compaction of chunks

GLOBAL OPTIONS:
   --verbose, --debug, -v  enable debug log (default: false)
   --quiet, -q             show warning and errors only (default: false)
   --trace                 enable trace log (default: false)
   --log-id value          append the given log id in log, use "random" to use random uuid
   --no-agent              disable pprof (:6060) agent (default: false)
   --pyroscope value       pyroscope address
   --no-color              disable colors (default: false)
   --help, -h              show help (default: false)
   --version, -V           print version only (default: false)

COPYRIGHT:
   Apache License 2.0
```
Auto completion
To enable command completion, simply source the script provided within the hack/autocomplete directory. For example:
```shell
# For bash
source hack/autocomplete/bash_autocomplete
# For zsh
source hack/autocomplete/zsh_autocomplete
```
Please note that auto-completion is only enabled for the current session. To apply it to all new sessions, add the source command to .bashrc or .zshrc:
```shell
echo "source path/to/bash_autocomplete" >> ~/.bashrc
echo "source path/to/zsh_autocomplete" >> ~/.zshrc
```
Alternatively, if you are using bash on a Linux system, you may simply copy the script to /etc/bash_completion.d and rename it to juicefs:
```shell
cp hack/autocomplete/bash_autocomplete /etc/bash_completion.d/juicefs
source /etc/bash_completion.d/juicefs
```
Admin
juicefs format
Create and format a file system. If a volume already exists with the same META-URL, this command will skip the formatting step. To adjust configurations for existing volumes, use juicefs config.
Synopsis
```shell
juicefs format [command options] META-URL NAME

juicefs format sqlite3://myjfs.db myjfs
juicefs format redis://localhost myjfs --storage=s3 --bucket=https://mybucket.s3.us-east-2.amazonaws.com
juicefs format mysql://jfs:mypassword@(127.0.0.1:3306)/juicefs myjfs
META_PASSWORD=mypassword juicefs format mysql://jfs:@(127.0.0.1:3306)/juicefs myjfs
juicefs format sqlite3://myjfs.db myjfs --inodes=1000000 --capacity=102400
juicefs format sqlite3://myjfs.db myjfs --trash-days=0
```
Options
Items | Description |
---|---
META-URL | Database URL for metadata storage, see JuiceFS supported metadata engines for details. |
NAME | Name of the file system |
--force | overwrite existing format (default: false) |
--no-update | don't update existing volume (default: false) |
Items | Description |
---|---
--storage=file | Object storage type (e.g. s3 , gs , oss , cos ) (default: file , refer to documentation for all supported object storage types) |
--bucket=/var/jfs | A bucket URL to store data (default: $HOME/.juicefs/local or /var/jfs ) |
--access-key=value | Access Key for object storage (can also be set via the environment variable ACCESS_KEY ), see How to Set Up Object Storage for more. |
--secret-key value | Secret Key for object storage (can also be set via the environment variable SECRET_KEY ), see How to Set Up Object Storage for more. |
--session-token=value | session token for object storage, see How to Set Up Object Storage for more. |
--storage-class value Added in v1.1 | the default storage class |
| Items | Description |
|---|---|
| --block-size=4M | size of block in KiB (default: 4M). 4M is usually a good default because many object storage services use 4M as their internal block size, so using the same block size in JuiceFS usually yields better performance. |
| --compress=none | compression algorithm, choose from lz4, zstd, none (default). Enabling compression will inevitably affect performance. Among the two supported algorithms, lz4 offers better performance, while zstd comes with a higher compression ratio; Google for their detailed comparison. |
| --encrypt-rsa-key=value | A path to RSA private key (PEM) |
| --encrypt-algo=aes256gcm-rsa | encrypt algorithm (aes256gcm-rsa, chacha20-rsa) (default: "aes256gcm-rsa") |
| --hash-prefix | For most object storages, if object storage blocks are sequentially named, they will also be closely stored in the underlying physical regions. When loaded with intensive concurrent consecutive reads, this can cause hotspots and hinder object storage performance.<br/>Enabling --hash-prefix will add a hash prefix to the name of the blocks (slice ID mod 256, see internal implementation); this distributes data blocks evenly across actual object storage regions, offering more consistent performance. Obviously, this option dictates the object naming pattern, so it must be specified when a file system is created and cannot be changed on-the-fly.<br/>Currently, AWS S3 has already made improvements and no longer requires application-side optimization, but for other types of object storages this option is still recommended for large-scale scenarios. |
| --shards=0 | If your object storage limits speed at the bucket level (or you're using a self-hosted object storage with limited performance), you can store the blocks into N buckets by hash of key (default: 0). When N is greater than 0, the bucket name should be in the form of %d, e.g. --bucket "juicefs-%d". --shards cannot be changed afterwards and must be planned carefully ahead; see the example after this table. |
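A minimal sketch of how sharding might be set up (the bucket names and region below are placeholders), spreading one volume across 4 buckets:
```shell
# Hypothetical example: blocks are hashed into juicefs-0 ... juicefs-3;
# the %d placeholder in --bucket is expanded with the shard index
juicefs format redis://localhost myjfs \
    --storage=s3 \
    --shards=4 \
    --bucket="https://juicefs-%d.s3.us-east-2.amazonaws.com"
```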
| Items | Description |
|---|---|
| --capacity=0 | storage space limit in GiB, default to 0 which means no limit. Capacity will include trash files, if trash is enabled. |
| --inodes=0 | limit the number of inodes, default to 0 which means no limit. |
| --trash-days=1 | By default, deleted files are moved into trash; this option controls the number of days before trash files expire (default: 1, set to 0 to disable trash). |
| --enable-acl=true Added in v1.2 | enable POSIX ACL, which is irreversible. |
juicefs config
Change the configuration of a volume. Note that some updated settings may not take effect on clients immediately; clients need to wait for a certain period, which is controlled by the --heartbeat option.
Synopsis
```shell
juicefs config [command options] META-URL

juicefs config redis://localhost
juicefs config redis://localhost --inodes 10000000 --capacity 1048576
juicefs config redis://localhost --trash-days 7
juicefs config redis://localhost --min-client-version 1.0.0 --max-client-version 1.1.0
```
Options
Items | Description |
---|---
--yes, -y | automatically answer 'yes' to all prompts and run non-interactively (default: false) |
--force | skip sanity check and force update the configurations (default: false) |
Data storage options
Items | Description |
---|---
--storage=file Added in v1.1 | Object storage type (e.g. s3 , gs , oss , cos ) (default: "file" , refer to documentation for all supported object storage types). |
--bucket=/var/jfs | A bucket URL to store data (default: $HOME/.juicefs/local or /var/jfs ) |
--access-key=value | Access Key for object storage (can also be set via the environment variable ACCESS_KEY ), see How to Set Up Object Storage for more. |
--secret-key value | Secret Key for object storage (can also be set via the environment variable SECRET_KEY ), see How to Set Up Object Storage for more. |
--session-token=value | session token for object storage, see How to Set Up Object Storage for more. |
--storage-class value Added in v1.1 | the default storage class |
--upload-limit=0 | bandwidth limit for upload in Mbps (default: 0) |
--download-limit=0 | bandwidth limit for download in Mbps (default: 0) |
Management options
Items | Description |
---|---
--capacity value | limit for space in GiB |
--inodes value | limit for number of inodes |
--trash-days value | number of days after which removed files will be permanently deleted |
--enable-acl Added in v1.2 | enable POSIX ACL (irreversible); this also raises the minimum client version allowed to connect to v1.2 |
--encrypt-secret | encrypt the secret key if it was previously stored in plain format (default: false) |
--min-client-version value Added in v1.1 | minimum client version allowed to connect |
--max-client-version value Added in v1.1 | maximum client version allowed to connect |
--dir-stats Added in v1.1 | enable dir stats, which is necessary for fast summary and dir quota (default: false) |
juicefs quota
Added in v1.1
Manage directory quotas
Synopsis
```shell
juicefs quota command [command options] META-URL

juicefs quota set redis://localhost --path /dir1 --capacity 1 --inodes 100
juicefs quota get redis://localhost --path /dir1
juicefs quota list redis://localhost
juicefs quota delete redis://localhost --path /dir1
juicefs quota check redis://localhost
```
Options
Items | Description |
---|---
META-URL | Database URL for metadata storage, see "JuiceFS supported metadata engines" for details. |
--path value | full path of the directory within the volume |
--capacity value | hard quota of the directory limiting its usage of space in GiB (default: 0) |
--inodes value | hard quota of the directory limiting its number of inodes (default: 0) |
--repair | repair inconsistent quota (default: false) |
--strict | calculate total usage of directory in strict mode (NOTE: may be slow for huge directory) (default: false) |
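Combining the flags above, a consistency check that uses strict accounting and repairs anything found broken might look like this (a sketch; the path is a placeholder):
```shell
# Recalculate usage of /dir1 in strict mode and repair inconsistent quota
juicefs quota check redis://localhost --path /dir1 --strict --repair
```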
juicefs destroy
Destroy an existing volume; this deletes the relevant data in the metadata engine and object storage. See How to destroy a file system.
Synopsis
```shell
juicefs destroy [command options] META-URL UUID

juicefs destroy redis://localhost e94d66a8-2339-4abd-b8d8-6812df737892
```
Options
Items | Description |
---|---
--yes, -y Added in v1.1 | automatically answer 'yes' to all prompts and run non-interactively (default: false) |
--force | skip sanity check and force destroy the volume (default: false) |
juicefs gc
If for some reason an object storage block escapes JuiceFS management completely, i.e. the metadata is gone but the block still persists in the object storage and cannot be released, this is called an "object leak". If this happens without any special file system manipulation, it could well indicate a bug within JuiceFS; file a GitHub Issue to let us know.
Meanwhile, you can run this command to deal with leaked objects. It also deletes stale slices produced by file overwrites. See Status Check & Maintenance.
Synopsis
```shell
juicefs gc [command options] META-URL

juicefs gc redis://localhost
juicefs gc redis://localhost --compact
juicefs gc redis://localhost --delete
```
Options
Items | Description |
---|---
--compact | compact all chunks with more than one slice (default: false) |
--delete | delete leaked objects (default: false) |
--threads=10 | number of threads to delete leaked objects (default: 10) |
juicefs fsck
Check the consistency of a file system.
Synopsis
```shell
juicefs fsck [command options] META-URL

juicefs fsck redis://localhost
```
Options
Items | Description |
---|---
--path value Added in v1.1 | absolute path within JuiceFS to check |
--repair Added in v1.1 | repair specified path if it's broken (default: false) |
--recursive, -r Added in v1.1 | recursively check or repair (default: false) |
--sync-dir-stat Added in v1.1 | sync stat of all directories, even if they exist and are not broken (NOTE: it may take a long time for huge trees) (default: false) |
juicefs restore
Added in v1.1
Rebuild the tree structure for trash files and put them back into their original directories.
Synopsis
```shell
juicefs restore [command options] META HOUR ...

juicefs restore redis://localhost/1 2023-05-10-01
```
Options
Items | Description |
---|---
--put-back | move the recovered files into their original directories (default: false) |
--threads value | number of threads (default: 10) |
juicefs dump
Dump metadata into a JSON file. Refer to "Metadata backup" for more information.
Synopsis
```shell
juicefs dump [command options] META-URL [FILE]

juicefs dump redis://localhost meta-dump.json
juicefs dump redis://localhost sub-meta-dump.json --subdir /dir/in/jfs
```
Options
Items | Description |
---|---
META-URL | Database URL for metadata storage, see JuiceFS supported metadata engines for details. |
FILE | Export file path, if not specified, it will be exported to standard output. If the filename ends with .gz , it will be automatically compressed. |
--subdir=path | Only export metadata for the specified subdirectory. |
--keep-secret-key Added in v1.1 | Export object storage authentication information (default: false). Since it is exported in plain text, pay attention to data security when using it. If the export file does not contain object storage authentication information, you will need to use juicefs config to reconfigure it after the subsequent import completes. |
--threads=10 Added in v1.2 | number of threads to dump metadata. (default: 10) |
--fast Added in v1.2 | Use more memory to speedup dump. |
--skip-trash Added in v1.2 | Skip files and directories in trash. |
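For instance (file names here are arbitrary), the dump target can be standard output, or a .gz file for automatic compression:
```shell
# Dump metadata to standard output
juicefs dump redis://localhost > meta-dump.json

# A FILE ending in .gz is compressed automatically; more threads speed up the dump
juicefs dump redis://localhost meta-dump.json.gz --threads=20
```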
juicefs load
Load metadata from a previously dumped JSON file. Read "Metadata recovery and migration" to learn more.
Synopsis
```shell
juicefs load [command options] META-URL [FILE]

juicefs load redis://127.0.0.1:6379/1 meta-dump.json
```
Options
Items | Description |
---|---
META-URL | Database URL for metadata storage, see JuiceFS supported metadata engines for details. |
FILE | Import file path, if not specified, it will be imported from standard input. If the filename ends with .gz , it will be automatically decompressed. |
--encrypt-rsa-key=path Added in v1.0.4 | The path to the RSA private key file used for encryption. |
--encrypt-alg=aes256gcm-rsa Added in v1.0.4 | Encryption algorithm, the default is aes256gcm-rsa . |
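Correspondingly, a gzipped dump (such as the one produced above) can be loaded directly, since a FILE ending in .gz is decompressed automatically:
```shell
# Load metadata from a compressed dump into an empty database
juicefs load redis://127.0.0.1:6379/1 meta-dump.json.gz
```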
Inspector
juicefs status
Show status of JuiceFS.
Synopsis
```shell
juicefs status [command options] META-URL

juicefs status redis://localhost
```
Options
Items | Description |
---|---
--session=0, -s 0 | show detailed information (sustained inodes, locks) of the specified session (SID) (default: 0) |
--more, -m Added in v1.1 | show more statistic information, may take a long time (default: false) |
juicefs stats
Show runtime statistics, read Real-time performance monitoring for more.
Synopsis
```shell
juicefs stats [command options] MOUNTPOINT

juicefs stats /mnt/jfs
juicefs stats /mnt/jfs -l 1
```
Options
Items | Description |
---|---
--schema=ufmco | schema string that controls the output sections (u : usage, f : FUSE, m : metadata, c : block cache, o : object storage, g : Go) (default: ufmco ) |
--interval=1 | interval in seconds between each update (default: 1) |
--verbosity=0 | verbosity level, 0 or 1 is enough for most cases (default: 0) |
juicefs profile
Show profiling of operations completed in JuiceFS, based on the access log. Read Real-time performance monitoring for more.
Synopsis
```shell
juicefs profile [command options] MOUNTPOINT/LOGFILE

juicefs profile /mnt/jfs
cat /mnt/jfs/.accesslog > /tmp/jfs.alog
juicefs profile /tmp/jfs.alog
juicefs profile /tmp/jfs.alog --interval 0
```
Options
Items | Description |
---|---
--uid=value, -u value | only track specified UIDs (separated by comma) |
--gid=value, -g value | only track specified GIDs (separated by comma) |
--pid=value, -p value | only track specified PIDs (separated by comma) |
--interval=2 | flush interval in seconds; set it to 0 when replaying a log file to get an immediate result (default: 2) |
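For example (the UID values are placeholders), profiling can be restricted to specific users:
```shell
# Only profile operations issued by UID 1000 or 1001
juicefs profile /mnt/jfs --uid 1000,1001
```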
juicefs info
Show internal information for given paths or inodes.
Synopsis
```shell
juicefs info [command options] PATH or INODE

juicefs info /mnt/jfs/foo
cd /mnt/jfs
juicefs info -i 100
```
Options
Items | Description |
---|---
--inode, -i | use inode instead of path (current dir should be inside JuiceFS) (default: false) |
--recursive, -r | get summary of directories recursively (NOTE: it may take a long time for huge trees) (default: false) |
--strict Added in v1.1 | get accurate summary of directories (NOTE: it may take a long time for huge trees) (default: false) |
--raw | show internal raw information (default: false) |
juicefs debug
Added in v1.1
Collects and displays information from multiple dimensions, such as the operating environment and system logs, to help locate errors more easily.
Synopsis
```shell
juicefs debug [command options] MOUNTPOINT

juicefs debug /mnt/jfs
juicefs debug --out-dir=/var/log /mnt/jfs
juicefs debug --out-dir=/var/log --limit=1000 /mnt/jfs
```
Options
Items | Description |
---|---
--out-dir=./debug/ | The output directory of the results, automatically created if the directory does not exist (default: ./debug/ ) |
--limit=value | The number of log entries collected, from newest to oldest, if not specified, all entries will be collected |
--stats-sec=5 | The number of seconds to sample .stats file (default: 5) |
--trace-sec=5 | The number of seconds to sample trace metrics (default: 5) |
--profile-sec=30 | The number of seconds to sample profile metrics (default: 30) |
juicefs summary
Added in v1.1
Show the tree summary of a target directory.
Synopsis
```shell
juicefs summary [command options] PATH

juicefs summary /mnt/jfs/foo
juicefs summary --depth 5 /mnt/jfs/foo
juicefs summary --entries 20 /mnt/jfs/foo
juicefs summary --strict /mnt/jfs/foo
```
Options
Items | Description |
---|---
--depth value, -d value | depth of tree to show (zero means only show root) (default: 2) |
--entries value, -e value | show top N entries (sort by size) (default: 10) |
--strict | show accurate summary, including directories and files (may be slow) (default: false) |
--csv | print summary in csv format (default: false) |
Service
juicefs mount
Mount a volume. The volume must be formatted in advance.
JuiceFS can be mounted by root or a normal user, but due to their privilege differences, the cache directory and log path will vary; read the descriptions below for more.
Synopsis
```shell
juicefs mount [command options] META-URL MOUNTPOINT

juicefs mount redis://localhost /mnt/jfs
juicefs mount redis://:mypassword@localhost /mnt/jfs -d
META_PASSWORD=mypassword juicefs mount redis://localhost /mnt/jfs -d
juicefs mount redis://localhost /mnt/jfs --subdir /dir/in/jfs
juicefs mount redis://localhost /mnt/jfs -d --writeback
juicefs mount redis://localhost /mnt/jfs -d --read-only
juicefs mount redis://localhost /mnt/jfs --backup-meta 0
```
Options
Items | Description |
---|---
META-URL | Database URL for metadata storage, see JuiceFS supported metadata engines for details. |
MOUNTPOINT | file system mount point, e.g. /mnt/jfs , Z: . |
-d, --background | run in background (default: false) |
--no-syslog | disable syslog (default: false) |
--log=path | path of log file when running in background (default: $HOME/.juicefs/juicefs.log or /var/log/juicefs.log ) |
--force | force to mount even if the mount point is already mounted by the same filesystem. |
--update-fstab Added in v1.1 | add / update entry in /etc/fstab , will create a symlink from /sbin/mount.juicefs to JuiceFS executable if not existing (default: false) |
FUSE related options
Items | Description |
---|---
--enable-xattr | enable extended attributes (xattr) (default: false) |
--enable-ioctl Added in v1.1 | enable ioctl (support GETFLAGS/SETFLAGS only) (default: false) |
--root-squash value Added in v1.1 | mapping local root user (UID = 0) to another one specified as UID:GID |
--prefix-internal Added in v1.1 | add '.jfs' prefix to all internal files (default: false) |
-o value | other FUSE options, see FUSE Mount Options |
Items | Description |
---|---
--subdir=value | mount a sub-directory as root (default: "") |
--backup-meta=3600 | interval (in seconds) to automatically backup metadata in the object storage (0 means disable backup) (default: "3600") |
--backup-skip-trash Added in v1.2 | skip files and directories in trash when backup metadata. |
--heartbeat=12 | interval (in seconds) to send heartbeat; it's recommended that all clients use the same heartbeat value (default: "12") |
--read-only | allow lookup/read operations only (default: false) |
--no-bgjob | Disable background jobs, default to false, which means clients by default carry out background jobs, including:<br/>- Clean up expired files in Trash (look for cleanupDeletedFiles, cleanupTrash in pkg/meta/base.go)<br/>- Delete slices that are not referenced (look for cleanupSlices in pkg/meta/base.go)<br/>- Clean up stale client sessions (look for CleanStaleSessions in pkg/meta/base.go)<br/>Note that compaction isn't affected by this option; it happens automatically with file reads and writes, the client will check if compaction is needed and run it in the background (take Redis for example, look for compactChunk in pkg/meta/base.go). |
--atime-mode=noatime Added in v1.1 | Control atime (last time the file was accessed) behavior, supporting the following modes:<br/>- noatime (default): atime is set when the file is created or when SetAttr is explicitly called. Accessing and modifying the file will not affect atime; tracking atime comes at a performance cost, so this is the default behavior<br/>- relatime: update inode access times relative to mtime (last time the file data was modified) or ctime (last time the file metadata was changed). Only update atime if it was earlier than the current mtime or ctime, or if the file's atime is more than 1 day old<br/>- strictatime: always update atime on access |
--skip-dir-nlink=20 Added in v1.1 | number of retries after which the update of directory nlink will be skipped (used for tkv only, 0 means never) (default: 20) |
--skip-dir-mtime=100ms Added in v1.2 | skip updating attribute of a directory if the mtime difference is smaller than this value (default: 100ms) |
For metadata cache description and usage, refer to Kernel metadata cache and Client memory metadata cache.
Items | Description |
---|---
--attr-cache=1 | attributes cache timeout in seconds (default: 1), read Kernel metadata cache |
--entry-cache=1 | file entry cache timeout in seconds (default: 1), read Kernel metadata cache |
--dir-entry-cache=1 | dir entry cache timeout in seconds (default: 1), read Kernel metadata cache |
--open-cache=0 | open file cache timeout in seconds (0 means disable this feature) (default: 0) |
--open-cache-limit value Added in v1.1 | max number of open files to cache (soft limit, 0 means unlimited) (default: 10000) |
Data storage related options
Items | Description |
---|---
--storage=file | Object storage type (e.g. s3 , gs , oss , cos ) (default: "file" , refer to documentation for all supported object storage types). |
--bucket=value | customized endpoint to access object storage |
--storage-class value Added in v1.1 | the storage class for data written by current client |
--get-timeout=60 | the max number of seconds to download an object (default: 60) |
--put-timeout=60 | the max number of seconds to upload an object (default: 60) |
--io-retries=10 | The number of retries when the network is abnormal; the number of retries for metadata requests is also controlled by this option. If the retry count is exceeded, an EIO (Input/output error) will be returned. (default: 10) |
--max-uploads=20 | Upload concurrency, defaults to 20. This is already a reasonably high value for 4M writes; with such a write pattern, increasing upload concurrency usually demands a higher --buffer-size, learn more at Read/Write Buffer. But for random writes around 100K, 20 might not be enough and can cause congestion at high load, so consider using a larger upload concurrency, or try to consolidate small writes on the application end. |
--max-stage-write=0 Added in v1.2 | The maximum number of concurrent asynchronous writes of data blocks to the cache disk. If this limit is reached, blocks are uploaded directly to the object storage (this option is only valid when "Client write data cache" is enabled) (default: 0, i.e. no concurrency limit) |
--max-deletes=10 | number of threads to delete objects (default: 10) |
--upload-limit=0 | bandwidth limit for upload in Mbps (default: 0) |
--download-limit=0 | bandwidth limit for download in Mbps (default: 0) |
Data cache related options
Items | Description |
---|---
--buffer-size=300 | total read/write buffering in MiB (default: 300), see Read/Write buffer |
--prefetch=1 | prefetch N blocks in parallel (default: 1), see Client read data cache |
--writeback | upload objects in background (default: false), see Client write data cache |
--upload-delay=0 | When --writeback is enabled, you can use this option to add a delay to object storage upload; default to 0, meaning that upload will begin immediately after write. Different units are supported, including s (second), m (minute), h (hour). If files are deleted during this delay, upload will be skipped entirely; when using JuiceFS for temporary storage, use this option to reduce resource usage. Refer to Client write data cache. |
--upload-hours Added in v1.2 | When --writeback is enabled, data blocks are only uploaded during the specified time of day. The format of the parameter is <start hour>,<end hour> (the start hour is included, the end hour is not, and the start hour may be greater than the end hour, in which case the window wraps around midnight), where <hour> ranges from 0 to 23. For example, 0,6 means that data blocks are only uploaded between 0:00 and 5:59 every day, and 23,3 means that data blocks are only uploaded between 23:00 and 2:59 the next day. |
--cache-dir=value | directory paths of local cache, use : (Linux, macOS) or ; (Windows) to separate multiple paths (default: $HOME/.juicefs/cache or /var/jfsCache ), see Client read data cache |
--cache-mode value Added in v1.1 | file permissions for cached blocks (default: "0600") |
--cache-size=102400 | size of cached object for read in MiB (default: 102400), see Client read data cache |
--free-space-ratio=0.1 | min free space ratio (default: 0.1), if Client write data cache is enabled, this option also controls write cache size, see Client read data cache |
--cache-partial-only | cache random/small read only (default: false), see Client read data cache |
--verify-cache-checksum=full Added in v1.1 | Checksum level for cache data. After it is enabled, checksums are calculated on divided parts of the cache blocks and stored on disk, to be used for verification during reads. The following strategies are supported:<br/>- none: disable checksum verification; if local cache data is tampered with, bad data will be read<br/>- full (default): perform verification when reading the full block, use this for sequential read scenarios<br/>- shrink: perform verification on parts that are fully included within the read range, use this for random read scenarios<br/>- extend: perform verification on parts that fully include the read range; this causes read amplification and is only used for random read scenarios demanding absolute data integrity |
--cache-eviction=2-random Added in v1.1 | cache eviction policy (none or 2-random) (default: "2-random") |
--cache-scan-interval=1h Added in v1.1 | interval to scan cache-dir to rebuild the in-memory index (default: "1h") |
--cache-expire=0 Added in v1.2 | Cache blocks that have not been accessed for more than the set time, in seconds, will be automatically cleared (even if the value of --cache-eviction is none , these cache blocks will be deleted). A value of 0 means never expires (default: 0) |
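Putting several of the cache options above together, a mount tuned for dedicated cache disks might look like this (a sketch; paths and sizes are illustrative):
```shell
# Cache on two SSDs, allow up to 200 GiB of cached blocks,
# and keep 20% of the cache disks free
juicefs mount redis://localhost /mnt/jfs -d \
    --cache-dir /ssd1/jfsCache:/ssd2/jfsCache \
    --cache-size 204800 \
    --free-space-ratio 0.2
```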
Metrics related options
| Items | Description |
|---|---|
| --metrics=127.0.0.1:9567 | address to export metrics (default: 127.0.0.1:9567) |
| --custom-labels | custom labels for metrics, format: key1:value1;key2:value2 (default: "") |
| --consul=127.0.0.1:8500 | Consul address to register (default: 127.0.0.1:8500) |
| --no-usage-report | do not send usage report (default: false) |
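For example (label values are placeholders), metrics can be exposed on all interfaces with custom labels attached:
```shell
# Export Prometheus metrics on all interfaces, tagged with custom labels
juicefs mount redis://localhost /mnt/jfs -d \
    --metrics 0.0.0.0:9567 \
    --custom-labels "cluster:prod;zone:us-east-1a"
```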
juicefs umount
Unmount a volume.
Synopsis
```shell
juicefs umount [command options] MOUNTPOINT

juicefs umount /mnt/jfs
```
Options
Items | Description |
---|---
-f, --force | force unmount a busy mount point (default: false) |
--flush Added in v1.1 | wait for all staging chunks to be flushed (default: false) |
juicefs gateway
Start an S3-compatible gateway, read Deploy JuiceFS S3 Gateway for more.
Synopsis
```shell
juicefs gateway [command options] META-URL ADDRESS

export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=12345678
juicefs gateway redis://localhost localhost:9000
```
Options
Items | Description |
---|---
META-URL | Database URL for metadata storage, see JuiceFS supported metadata engines for details. |
ADDRESS | S3 gateway address and listening port, for example: localhost:9000 |
--log value Added in v1.2 | path for gateway log |
--access-log=path | path for JuiceFS access log. |
--background, -d Added in v1.2 | run in background (default: false) |
--no-banner | disable MinIO startup information (default: false) |
--multi-buckets | use top level of directories as buckets (default: false) |
--keep-etag | save the ETag for uploaded objects (default: false) |
--umask=022 | umask for new file and directory in octal (default: 022) |
--object-tag Added in v1.2 | enable object tagging API |
--domain value Added in v1.2 | domain for virtual-host-style requests |
--refresh-iam-interval=5m Added in v1.2 | interval to reload gateway IAM from configuration (default: 5m) |
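For instance, building on the synopsis above, the gateway can expose each top-level directory as its own bucket while persisting ETags:
```shell
export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=12345678
# Each top-level directory becomes a bucket; ETags of uploaded objects are saved
juicefs gateway redis://localhost localhost:9000 --multi-buckets --keep-etag
```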
Items | Description |
---|---
--subdir=value | mount a sub-directory as root (default: "") |
--backup-meta=3600 | interval (in seconds) to automatically backup metadata in the object storage (0 means disable backup) (default: "3600") |
--backup-skip-trash Added in v1.2 | skip files and directories in trash when backup metadata. |
--heartbeat=12 | interval (in seconds) to send heartbeat; it's recommended that all clients use the same heartbeat value (default: "12") |
--read-only | allow lookup/read operations only (default: false) |
--no-bgjob | Disable background jobs, default to false, which means clients by default carry out background jobs, including:<br/>- Clean up expired files in Trash (look for cleanupDeletedFiles, cleanupTrash in pkg/meta/base.go)<br/>- Delete slices that are not referenced (look for cleanupSlices in pkg/meta/base.go)<br/>- Clean up stale client sessions (look for CleanStaleSessions in pkg/meta/base.go)<br/>Note that compaction isn't affected by this option; it happens automatically with file reads and writes, the client will check if compaction is needed and run it in the background (take Redis for example, look for compactChunk in pkg/meta/base.go). |
--atime-mode=noatime Added in v1.1 | Control atime (last time the file was accessed) behavior, supporting the following modes:<br/>- noatime (default): atime is set when the file is created or when SetAttr is explicitly called. Accessing and modifying the file will not affect atime; tracking atime comes at a performance cost, so this is the default behavior<br/>- relatime: update inode access times relative to mtime (last time the file data was modified) or ctime (last time the file metadata was changed). Only update atime if it was earlier than the current mtime or ctime, or if the file's atime is more than 1 day old<br/>- strictatime: always update atime on access |
--skip-dir-nlink=20 Added in v1.1 | number of retries after which the update of directory nlink will be skipped (used for tkv only, 0 means never) (default: 20) |
--skip-dir-mtime=100ms Added in v1.2 | skip updating attribute of a directory if the mtime difference is smaller than this value (default: 100ms) |
For metadata cache description and usage, refer to Kernel metadata cache and Client memory metadata cache.
Items | Description |
---|---
--attr-cache=1 | attributes cache timeout in seconds (default: 1), read Kernel metadata cache |
--entry-cache=1 | file entry cache timeout in seconds (default: 1), read Kernel metadata cache |
--dir-entry-cache=1 | dir entry cache timeout in seconds (default: 1), read Kernel metadata cache |
--open-cache=0 | open file cache timeout in seconds (0 means disable this feature) (default: 0) |
--open-cache-limit value Added in v1.1 | max number of open files to cache (soft limit, 0 means unlimited) (default: 10000) |
Data storage related options
Items | Description |
---|---
--storage=file | Object storage type (e.g. s3 , gs , oss , cos ) (default: "file" , refer to documentation for all supported object storage types). |
--bucket=value | customized endpoint to access object storage |
--storage-class value Added in v1.1 | the storage class for data written by current client |
--get-timeout=60 | the max number of seconds to download an object (default: 60) |
--put-timeout=60 | the max number of seconds to upload an object (default: 60) |
--io-retries=10 | The number of retries when the network is abnormal; the number of retries for metadata requests is also controlled by this option. If the retry count is exceeded, an EIO (Input/output error) will be returned. (default: 10) |
--max-uploads=20 | Upload concurrency, defaults to 20. This is already a reasonably high value for 4M writes; with such a write pattern, increasing upload concurrency usually demands a higher --buffer-size, learn more at Read/Write Buffer. But for random writes around 100K, 20 might not be enough and can cause congestion at high load, so consider using a larger upload concurrency, or try to consolidate small writes on the application end. |
--max-stage-write=0 Added in v1.2 | The maximum number of concurrent asynchronous writes of data blocks to the cache disk. If this limit is reached, blocks are uploaded directly to the object storage (this option is only valid when "Client write data cache" is enabled) (default: 0, i.e. no concurrency limit) |
--max-deletes=10 | number of threads to delete objects (default: 10) |
--upload-limit=0 | bandwidth limit for upload in Mbps (default: 0) |
--download-limit=0 | bandwidth limit for download in Mbps (default: 0) |
Data cache related options
Items | Description |
---|---
--buffer-size=300 | total read/write buffering in MiB (default: 300), see Read/Write buffer |
--prefetch=1 | prefetch N blocks in parallel (default: 1), see Client read data cache |
--writeback | upload objects in background (default: false), see Client write data cache |
--upload-delay=0 | When --writeback is enabled, you can use this option to add a delay to object storage upload; default to 0, meaning that upload will begin immediately after write. Different units are supported, including s (second), m (minute), h (hour). If files are deleted during this delay, upload will be skipped entirely; when using JuiceFS for temporary storage, use this option to reduce resource usage. Refer to Client write data cache. |
--upload-hours Added in v1.2 | When --writeback is enabled, data blocks are only uploaded during the specified time of day. The format of the parameter is <start hour>,<end hour> (the start hour is included, the end hour is not, and the start hour may be greater than the end hour, in which case the window wraps around midnight), where <hour> ranges from 0 to 23. For example, 0,6 means that data blocks are only uploaded between 0:00 and 5:59 every day, and 23,3 means that data blocks are only uploaded between 23:00 and 2:59 the next day. |
--cache-dir=value | directory paths of local cache, use : (Linux, macOS) or ; (Windows) to separate multiple paths (default: $HOME/.juicefs/cache or /var/jfsCache ), see Client read data cache |
--cache-mode value Added in v1.1 | file permissions for cached blocks (default: "0600") |
--cache-size=102400 | size of cached object for read in MiB (default: 102400), see Client read data cache |
--free-space-ratio=0.1 | min free space ratio (default: 0.1), if Client write data cache is enabled, this option also controls write cache size, see Client read data cache |
--cache-partial-only | cache random/small read only (default: false), see Client read data cache |
--verify-cache-checksum=full Added in v1.1 | Checksum level for cache data. After it is enabled, checksums are calculated on divided parts of the cache blocks and stored on disk, to be used for verification during reads. The following strategies are supported:<br/>- none: disable checksum verification; if local cache data is tampered with, bad data will be read<br/>- full (default): perform verification when reading the full block, use this for sequential read scenarios<br/>- shrink: perform verification on parts that are fully included within the read range, use this for random read scenarios<br/>- extend: perform verification on parts that fully include the read range; this causes read amplification and is only used for random read scenarios demanding absolute data integrity |
--cache-eviction=2-random Added in v1.1 | cache eviction policy (none or 2-random) (default: "2-random") |
--cache-scan-interval=1h Added in v1.1 | interval to scan cache-dir to rebuild the in-memory index (default: "1h") |
--cache-expire=0 Added in v1.2 | Cache blocks that have not been accessed for more than the set time, in seconds, will be automatically cleared (even if the value of --cache-eviction is none , these cache blocks will be deleted). A value of 0 means never expires (default: 0) |
Metrics related options
| Items | Description |
|---|---|
| --metrics=127.0.0.1:9567 | address to export metrics (default: 127.0.0.1:9567) |
| --custom-labels | custom labels for metrics, format: key1:value1;key2:value2 (default: "") |
| --consul=127.0.0.1:8500 | Consul address to register (default: 127.0.0.1:8500) |
| --no-usage-report | do not send usage report (default: false) |
juicefs webdav
Start a WebDAV server, refer to Deploy WebDAV Server for more.
Synopsis
```shell
juicefs webdav [command options] META-URL ADDRESS

juicefs webdav redis://localhost localhost:9007
```
Options
Items | Description |
---|---
META-URL | Database URL for metadata storage, see JuiceFS supported metadata engines for details. |
ADDRESS | WebDAV address and listening port, for example: localhost:9007 . |
--cert-file Added in v1.1 | certificate file for HTTPS |
--key-file Added in v1.1 | key file for HTTPS |
--gzip | compress served files via gzip (default: false) |
--disallowList | disallow listing directories (default: false) |
--log value Added in v1.2 | path for WebDAV log |
--access-log=path | path for JuiceFS access log |
--background, -d Added in v1.2 | run in background (default: false) |
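For instance (certificate paths are placeholders), the WebDAV server can be started in the background over HTTPS:
```shell
# Serve WebDAV over HTTPS with an existing certificate/key pair
juicefs webdav redis://localhost localhost:9007 -d \
    --cert-file /etc/juicefs/cert.pem \
    --key-file /etc/juicefs/key.pem
```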
Items | Description |
---|---
--subdir=value | mount a sub-directory as root (default: "") |
--backup-meta=3600 | interval (in seconds) to automatically backup metadata in the object storage (0 means disable backup) (default: "3600") |
--backup-skip-trash Added in v1.2 | skip files and directories in trash when backup metadata. |
--heartbeat=12 | interval (in seconds) to send heartbeat; it's recommended that all clients use the same heartbeat value (default: "12") |
--read-only | allow lookup/read operations only (default: false) |
--no-bgjob | Disable background jobs, default to false, which means clients by default carry out background jobs, including:<br/>- Clean up expired files in Trash (look for cleanupDeletedFiles, cleanupTrash in pkg/meta/base.go)<br/>- Delete slices that are not referenced (look for cleanupSlices in pkg/meta/base.go)<br/>- Clean up stale client sessions (look for CleanStaleSessions in pkg/meta/base.go)<br/>Note that compaction isn't affected by this option; it happens automatically with file reads and writes, the client will check if compaction is needed and run it in the background (take Redis for example, look for compactChunk in pkg/meta/base.go). |
--atime-mode=noatime Added in v1.1 | Control atime (last time the file was accessed) behavior, supporting the following modes:<br/>- noatime (default): atime is set when the file is created or when SetAttr is explicitly called. Accessing and modifying the file will not affect atime; tracking atime comes at a performance cost, so this is the default behavior<br/>- relatime: update inode access times relative to mtime (last time the file data was modified) or ctime (last time the file metadata was changed). Only update atime if it was earlier than the current mtime or ctime, or if the file's atime is more than 1 day old<br/>- strictatime: always update atime on access |
--skip-dir-nlink=20 Added in v1.1 | number of retries after which the update of directory nlink will be skipped (used for tkv only, 0 means never) (default: 20) |
--skip-dir-mtime=100ms Added in v1.2 | skip updating attribute of a directory if the mtime difference is smaller than this value (default: 100ms) |
For metadata cache description and usage, refer to Kernel metadata cache and Client memory metadata cache.
Items | Description |
---|---
--attr-cache=1 | attributes cache timeout in seconds (default: 1), read Kernel metadata cache |
--entry-cache=1 | file entry cache timeout in seconds (default: 1), read Kernel metadata cache |
--dir-entry-cache=1 | dir entry cache timeout in seconds (default: 1), read Kernel metadata cache |
--open-cache=0 | open file cache timeout in seconds (0 means disable this feature) (default: 0) |
--open-cache-limit value Added in v1.1 | max number of open files to cache (soft limit, 0 means unlimited) (default: 10000) |
Data storage related options
Items | Description |
---|---
--storage=file | Object storage type (e.g. s3 , gs , oss , cos ) (default: "file" , refer to documentation for all supported object storage types). |
--bucket=value | customized endpoint to access object storage |
--storage-class value Added in v1.1 | the storage class for data written by current client |
--get-timeout=60 | the max number of seconds to download an object (default: 60) |
--put-timeout=60 | the max number of seconds to upload an object (default: 60) |
--io-retries=10 | The number of retries when the network is abnormal; the number of retries for metadata requests is also controlled by this option. If the retry count is exceeded, an EIO (Input/output error) will be returned. (default: 10) |
--max-uploads=20 | Upload concurrency, defaults to 20. This is already a reasonably high value for 4M writes; with such a write pattern, increasing upload concurrency usually demands a higher --buffer-size, learn more at Read/Write Buffer. But for random writes around 100K, 20 might not be enough and can cause congestion at high load, so consider using a larger upload concurrency, or try to consolidate small writes on the application end. |
--max-stage-write=0 Added in v1.2 | The maximum number of concurrent asynchronous writes of data blocks to the cache disk. If this limit is reached, blocks are uploaded directly to the object storage (this option is only valid when "Client write data cache" is enabled) (default: 0, i.e. no concurrency limit) |
--max-deletes=10 | number of threads to delete objects (default: 10) |
--upload-limit=0 | bandwidth limit for upload in Mbps (default: 0) |
--download-limit=0 | bandwidth limit for download in Mbps (default: 0) |
Data cache related options
Items | Description |
---|---
--buffer-size=300 | total read/write buffering in MiB (default: 300), see Read/Write buffer |
--prefetch=1 | prefetch N blocks in parallel (default: 1), see Client read data cache |
--writeback | upload objects in background (default: false), see Client write data cache |
--upload-delay=0 | When --writeback is enabled, you can use this option to add a delay to object storage upload; default to 0, meaning that upload will begin immediately after write. Different units are supported, including s (second), m (minute), h (hour). If files are deleted during this delay, upload will be skipped entirely; when using JuiceFS for temporary storage, use this option to reduce resource usage. Refer to Client write data cache. |
--upload-hours Added in v1.2 | When --writeback is enabled, data blocks are only uploaded during the specified time of day. The format of the parameter is <start hour>,<end hour> (the start hour is included, the end hour is not, and the start hour may be greater than the end hour, in which case the window wraps around midnight), where <hour> ranges from 0 to 23. For example, 0,6 means that data blocks are only uploaded between 0:00 and 5:59 every day, and 23,3 means that data blocks are only uploaded between 23:00 and 2:59 the next day. |
--cache-dir=value | directory paths of local cache, use : (Linux, macOS) or ; (Windows) to separate multiple paths (default: $HOME/.juicefs/cache or /var/jfsCache ), see Client read data cache |
--cache-mode value Added in v1.1 | file permissions for cached blocks (default: "0600") |
--cache-size=102400 | size of cached object for read in MiB (default: 102400), see Client read data cache |
--free-space-ratio=0.1 | min free space ratio (default: 0.1), if Client write data cache is enabled, this option also controls write cache size, see Client read data cache |
--cache-partial-only | cache random/small read only (default: false), see Client read data cache |
--verify-cache-checksum=full Added in v1.1 | Checksum level for cache data. After it is enabled, checksums are calculated on divided parts of the cache blocks and stored on disk, to be used for verification during reads. The following strategies are supported:<br/>- none: disable checksum verification; if local cache data is tampered with, bad data will be read<br/>- full (default): perform verification when reading the full block, use this for sequential read scenarios<br/>- shrink: perform verification on parts that are fully included within the read range, use this for random read scenarios<br/>- extend: perform verification on parts that fully include the read range; this causes read amplification and is only used for random read scenarios demanding absolute data integrity |
--cache-eviction=2-random Added in v1.1 | cache eviction policy (none or 2-random) (default: "2-random") |
--cache-scan-interval=1h Added in v1.1 | interval to scan cache-dir to rebuild the in-memory index (default: "1h") |
--cache-expire=0 Added in v1.2 | Cache blocks that have not been accessed for more than the set time, in seconds, will be automatically cleared (even if the value of --cache-eviction is none , these cache blocks will be deleted). A value of 0 means never expires (default: 0) |
Metrics related options
| Items | Description |
|---|---|
| --metrics=127.0.0.1:9567 | address to export metrics (default: 127.0.0.1:9567) |
| --custom-labels | custom labels for metrics, format: key1:value1;key2:value2 (default: "") |
| --consul=127.0.0.1:8500 | Consul address to register (default: 127.0.0.1:8500) |
| --no-usage-report | do not send usage report (default: false) |
Tool
juicefs bench
Run benchmarks, including read/write/stat tests on big and small files.
For a detailed introduction to the bench subcommand, refer to the documentation.
Synopsis
```shell
juicefs bench [command options] PATH

juicefs bench /mnt/jfs -p 4
juicefs bench /mnt/jfs --big-file-size 0
```
Options
Items | Description |
---|---
--block-size=1 | block size in MiB (default: 1) |
--big-file-size=1024 | size of big file in MiB (default: 1024) |
--small-file-size=128 | size of small file in KiB (default: 128) |
--small-file-count=100 | number of small files (default: 100) |
--threads=1, -p 1 | number of concurrent threads (default: 1) |
juicefs objbench
Run basic benchmarks on the target object storage to test if it works as expected. Read documentation for more.
Synopsis
```shell
juicefs objbench [command options] BUCKET

ACCESS_KEY=myAccessKey SECRET_KEY=mySecretKey juicefs objbench --storage=s3 https://mybucket.s3.us-east-2.amazonaws.com -p 6
```
Options
Items | Description |
---|---
--storage=file | Object storage type (e.g. s3 , gs , oss , cos ) (default: file , refer to documentation for all supported object storage types) |
--access-key=value | Access Key for object storage (can also be set via the environment variable ACCESS_KEY ), see How to Set Up Object Storage for more. |
--secret-key value | Secret Key for object storage (can also be set via the environment variable SECRET_KEY ), see How to Set Up Object Storage for more. |
--session-token value Added in v1.0 | session token for object storage |
--block-size=4096 | size of each IO block in KiB (default: 4096) |
--big-object-size=1024 | size of each big object in MiB (default: 1024) |
--small-object-size=128 | size of each small object in KiB (default: 128) |
--small-objects=100 | number of small objects (default: 100) |
--skip-functional-tests | skip functional tests (default: false) |
--threads=4, -p 4 | number of concurrent threads (default: 4) |
juicefs warmup
Download data to the local cache in advance, to achieve better performance on the application's first read. You can specify a mount point path to recursively warm up all files under it, or specify a file through the --file option to warm up only the files it lists.
If the files needing warm-up reside in many different directories, you should specify their names in a text file and pass it to the warmup command using the --file option, allowing juicefs warmup to download them concurrently, which is significantly faster than calling juicefs warmup multiple times with a single file each.
Synopsis
```shell
juicefs warmup [command options] [PATH ...]

juicefs warmup /mnt/jfs/datadir

echo '/jfs/f1
/jfs/f2
/jfs/f3' > /tmp/filelist.txt
juicefs warmup -f /tmp/filelist.txt
```
Options
Items | Description |
---|---
--file=path, -f path | file containing a list of paths (each line is a file path) |
--threads=50, -p 50 | number of concurrent workers, default to 50. Reduce this number in low bandwidth environment to avoid download timeouts |
--background, -b | run in background (default: false) |
--evict Added in v1.2 | evict cached blocks |
--check Added in v1.2 | check whether the data blocks are cached or not |
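As a quick illustration of the two v1.2 flags above:
```shell
# Report how much of the data under the directory is already cached
juicefs warmup --check /mnt/jfs/datadir

# Drop the cached blocks of the same directory from the local cache
juicefs warmup --evict /mnt/jfs/datadir
```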
juicefs rmr
Remove all the files and subdirectories, similar to rm -rf, except that this command deals with metadata directly (bypassing the kernel), and is thus much faster.
If trash is enabled, deleted files are moved into trash. Read more at Trash.
Synopsis
```shell
juicefs rmr PATH ...

juicefs rmr /mnt/jfs/foo
```
juicefs sync
Sync between two storage systems; read Data migration for more.
Synopsis
```shell
juicefs sync [command options] SRC DST

juicefs sync oss://mybucket.oss-cn-shanghai.aliyuncs.com s3://mybucket.s3.us-east-2.amazonaws.com
juicefs sync s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/
juicefs sync --exclude='a?/b*' s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/
juicefs sync --include='a1/b1' --exclude='a[1-9]/b*' s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/
juicefs sync --include='a1/b1' --exclude='a*' --include='b2' --exclude='b?' s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/
```
As shown in the examples, the format of both the source (SRC) and destination (DST) paths is:
```
[NAME://][ACCESS_KEY:SECRET_KEY[:TOKEN]@]BUCKET[.ENDPOINT][/PREFIX]
```
In which:
- NAME: JuiceFS supported data storage types like s3, oss; refer to this document for a full list.
- ACCESS_KEY and SECRET_KEY: the credentials required to access the data storage; refer to this document.
- TOKEN: the token used to access the object storage; some object storages support temporary tokens to grant permission for a limited time.
- BUCKET[.ENDPOINT]: the access address of the data storage service. The format may differ between storage types; refer to the document.
- [/PREFIX]: optional, a prefix for the source and destination paths that can be used to limit synchronization to data in certain paths only.
Items | Description |
---|---
--start=KEY, -s KEY, --end=KEY, -e KEY | Provide an object storage key range for syncing. |
--exclude=PATTERN | Exclude keys matching PATTERN . Refer to the "Filtering" document to learn how to use it. |
--include=PATTERN | Include keys matching PATTERN , need to be used with --exclude . Refer to the "Filtering" document to learn how to use it. |
--match-full-path Added in v1.2 | Use "Full path filtering mode", default is false. Refer to the "Filtering modes" document to learn how to use it. |
--max-size=SIZE Added in v1.2 | skip files larger than SIZE |
--min-size=SIZE Added in v1.2 | skip files smaller than SIZE |
--max-age=DURATION Added in v1.2 | Skip files whose last modification time exceeds DURATION , in seconds. For example, --max-age=3600 means to synchronize only files that have been modified within 1 hour. |
--min-age=DURATION Added in v1.2 | Skip files whose last modification time is no more than DURATION , in seconds. For example, --min-age=3600 means to synchronize only files whose last modification time is more than 1 hour from the current time. |
--limit=-1 | Limit the number of objects that will be processed, default to -1 which means unlimited. |
--update, -u | Update existing files if the source files' mtime is newer, default to false. |
--force-update, -f | Always update existing file, default to false. |
--existing, --ignore-non-existing Added in v1.1 | Skip creating new files on destination, default to false. |
--ignore-existing Added in v1.1 | Skip updating files that already exist on destination, default to false. |
Items | Description |
---|---
--dirs | Sync empty directories as well. |
--perms | Preserve permissions, default to false. |
--links, -l | Copy symlinks as symlinks, default to false. |
--inplace Added in v1.2 | When a file in the source path is modified, directly modify the file with the same name in the destination path, instead of first writing a temporary file in the destination path and then atomically renaming it to the real file name. This option only makes sense when the --update option is enabled and the storage system of the destination path supports in-place modification of files (such as JuiceFS, HDFS, NFS); if the storage system of the destination path is object storage, enabling this option has no effect. (default: false) |
--delete-src, --deleteSrc | Delete objects from the source once they already exist in the destination. Unlike rsync, files won't be deleted on the first run; they will be deleted on the next run, after files are confirmed to have been successfully copied to the destination. |
--delete-dst, --deleteDst | Delete extraneous objects from destination. |
--check-all | Verify the integrity of all files in source and destination, default to false. Comparison is done on byte streams, which comes at a performance cost. |
--check-new | Verify the integrity of newly copied files, default to false. Comparison is done on byte streams, which comes at a performance cost. |
--dry | Don't actually copy any file. |
Items | Description |
---|---
--threads=10, -p 10 | Number of concurrent threads, default to 10. |
--list-threads=1 Added in v1.1 | Number of list threads, default to 1. Read concurrent list to learn its usage. |
--list-depth=1 Added in v1.1 | Depth of concurrent list operation, default to 1. Read concurrent list to learn its usage. |
--no-https | Do not use HTTPS, default to false. |
--storage-class value Added in v1.1 | the storage class for destination |
--bwlimit=0 | Limit bandwidth in Mbps, default to 0 which means unlimited. |
Items | Description |
---|---
--manager-addr=ADDR | The listening address of the Manager node in distributed synchronization mode, in the format <IP>:[port]. If the port is not specified, a random port is used; if this option is omitted entirely, the Manager listens on a random local IPv4 address and a random port. |
--worker=ADDR,ADDR | Worker node addresses used in distributed syncing, comma separated. |
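A minimal sketch of distributed syncing, assuming the current node can SSH to the worker addresses listed (all addresses are placeholders):
```shell
# Spread the sync workload across two worker nodes
juicefs sync --worker bob@192.168.1.20,tom@192.168.1.21 \
    s3://srcbucket.s3.us-east-2.amazonaws.com/ \
    oss://dstbucket.oss-cn-shanghai.aliyuncs.com/
```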
Items | Description |
---|---
--metrics value Added in v1.2 | address to export metrics (default: "127.0.0.1:9567") |
--consul value Added in v1.2 | Consul address to register (default: "127.0.0.1:8500") |
juicefs clone
Added in v1.1
Quickly clone directories or files within a single JuiceFS mount point. The cloning process involves copying only the metadata without copying the data blocks, making it extremely fast. Read Clone Files or Directories for more.
Synopsis
```shell
juicefs clone [command options] SRC DST

juicefs clone /mnt/jfs/file1 /mnt/jfs/file2
juicefs clone /mnt/jfs/dir1 /mnt/jfs/dir2
juicefs clone -p /mnt/jfs/file1 /mnt/jfs/file2
```
Options
Items | Description |
---|---
--preserve, -p | By default, the executor's UID and GID are used for the clone result, and the mode is recalculated based on the user's umask. Use this option to preserve the UID, GID, and mode of the file. |
juicefs compact
Added in v1.2
Performs fragmentation optimization on non-contiguous slices in the given directory, merging or cleaning them to improve read performance. For detailed information, refer to "Status Check & Maintenance".
Synopsis
```shell
juicefs compact [command options] PATH

juicefs compact /mnt/jfs
```
Options
| Items | Description |
|---|---|
| --threads, -p | number of threads to concurrently execute tasks (default: 10) |