Compare commits

...

398 Commits

Author SHA1 Message Date
Andrea Cavalli
69951d5a2d Remove debug code 2023-01-24 16:14:19 +01:00
Andrea Cavalli
12cd93f7c1 Fix exports 2023-01-24 16:11:00 +01:00
Andrea Cavalli
c43992ba08 Change version 2023-01-24 16:06:39 +01:00
Andrea Cavalli
cd30267633 Update to java 17 2023-01-24 14:58:50 +01:00
Takuya ASADA
88d9bdc5b2 install.sh: add --without-systemd option
Since we fail to write files to $USER/.config on Jenkins jobs, we need
an option to skip installing systemd units.
Let's add --without-systemd to do that.

Also, so that callers can detect the option's availability, we need to
increment the relocatable package version.

See scylladb/scylla-dtest#2819
2022-09-12 13:00:59 +03:00
Takuya ASADA
06f27357b4 build_reloc.sh: rename relocatable packages
Currently, we use the following naming convention for relocatable package
filenames:
  ${package_name}-${arch}-package-${version}.${release}.tar.gz
But this is very different from standard Linux packaging systems such as
.rpm and .deb.
Let's align the convention with the .rpm style, so the new convention is:
  ${package_name}-${version}-${release}.${arch}.tar.gz

See scylladb/scylla#9799

Closes #185
2022-07-19 15:32:05 +03:00
Piotr Grabowski
fe351e8491 Update jackson dependency
Update the jackson dependency to a newer version without any known
vulnerabilities. I have checked the changelogs of all versions between
2.12.1 and 2.12.6.1, and none of the changes looked potentially
problematic (minor fixes, etc.).

2.12.6.1 version of jackson-databind is compatible with 2.12.6 versions
of other jackson packages.
2022-05-31 13:46:06 +03:00
Nadav Har'El
53f7f55e8c pom.xml: drop unneeded logging dependencies
pom.xml specifies a dependency on slf4j (the Simple Logging Facade for
Java) and its ancient log4j backend (slf4j-log4j12), but we don't
actually use it in the scylla-jmx project - we use the standard
java.util.logging.

So let's drop the unnecessary (and these log4shell days, scary) dependencies.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20211213083055.1383507-1-nyh@scylladb.com>
2021-12-16 11:39:40 +02:00
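A minimal sketch of the java.util.logging usage the commit above refers to; the class name is a hypothetical stand-in, and the point is that the JDK ships this facility, so no slf4j/log4j jars are needed:

    import java.util.logging.Logger;

    public class JmxService {
        // The JDK's built-in logging facility; no external backend required.
        private static final Logger logger = Logger.getLogger(JmxService.class.getName());

        public void start() {
            logger.info("Starting the JMX server");
        }
    }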
Benny Halevy
2c43d99aa5 removeNode: support ignoreNodes options
Refs scylladb/scylla-tools-java#225

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>

Closes #178
2021-11-15 15:27:07 +02:00
Avi Kivity
26a6919714 build: replace yum with dnf
dnf has replaced yum on Fedora and CentOS. On modern versions of Fedora,
you have to install an extra package to get the old name working, so
avoid that inconvenience and use dnf directly.

Closes #181
2021-11-15 15:19:22 +02:00
Avi Kivity
d6225c5231 build: use utc for build datestamp
This helps packages built on different machines get the same datestamp,
if the builds start at the same time.
2021-11-07 15:58:09 +02:00
Benny Halevy
48d37f3402 StorageService: scrub: fix scrubMode is empty condition
`!=` compares references, not values.

Use !"".equals(scrubMode) instead, as it also covers
the null scrubMode case.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>

Closes #179
2021-11-02 15:21:08 +02:00
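As a minimal sketch of the pitfall fixed above (startScrub is a hypothetical stand-in for the real call):

    void maybeScrub(String scrubMode) {
        // Broken: != compares object identity, so a scrubMode string built at
        // runtime never matches the "" literal even when its contents are empty.
        // if (scrubMode != "") { ... }

        // Fixed: value comparison that also handles a null scrubMode safely.
        if (!"".equals(scrubMode)) {
            startScrub(scrubMode);
        }
    }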
Takuya ASADA
5c383b641b reloc: stop removing entire $BUILDDIR
We found that a user can mistakenly break the system with the --builddir
option, with something like './reloc/build_deb.sh --builddir /'.
To avoid that, stop removing the entire $BUILDDIR and remove only the
directories we have to clean up before building the .deb package.

See: https://github.com/scylladb/scylla-python3/pull/23#discussion_r707088453

Closes #177
2021-09-19 10:01:40 +03:00
Juliusz Stasiewicz
658818b2d0 Support --load-and-stream option from nodetool refresh
This option is translated to a {"load_and_stream", "true"} entry in the
POST request to Scylla's HTTP API at the `storage_service/sstables/{keyspace}`
endpoint.

More about this feature: scylladb/scylla#7846

This change is a consequence of scylladb/scylla-tools-java#253.
2021-09-13 18:22:19 +03:00
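A hedged sketch of such a request using the JAX-RS client style the project's apiclient module uses elsewhere; the port and the empty POST body are assumptions, not taken from the log:

    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;
    import javax.ws.rs.client.Entity;
    import javax.ws.rs.core.Response;

    static Response loadAndStream(Client client, String keyspace) {
        // POST to the sstables endpoint with load_and_stream=true attached.
        return client.target("http://127.0.0.1:10000")
                .path("storage_service/sstables/" + keyspace)
                .queryParam("load_and_stream", "true")
                .request()
                .post(Entity.text(""));
    }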
Benny Halevy
70b19e6270 scrub: support scrubMode and deprecate skipCorrupted
Support the new scrubMode option and deprecate skipCorrupted,
which is equivalent to scrubMode="SKIP".

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>

Closes #175
2021-08-24 14:51:05 +03:00
Benny Halevy
5311e9bae3 storage_service: takeSnapshot: support the skipFlush option
Fixes #167

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>

Closes #168
2021-06-18 12:58:27 +03:00
dependabot[bot]
fbfbdaa298 build(deps): bump snakeyaml from 1.16 to 1.26 in /scylla-apiclient
Bumps [snakeyaml](https://bitbucket.org/asomov/snakeyaml) from 1.16 to 1.26.
- [Commits](https://bitbucket.org/asomov/snakeyaml/branches/compare/snakeyaml-1.26..v1.16)

---
updated-dependencies:
- dependency-name: org.yaml:snakeyaml
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Closes #169
2021-06-10 09:59:48 +03:00
Piotr Wojtczak
a7c4c39dd0 storage_service: Fix getToppartitions to always return both reads and writes
In line with the previous API, the getToppartitions function returned
results for one specified sampler (reads OR writes). This forced
the user to call the function once for each sampler, which is
suboptimal.
This commit changes the signature so that results for both samplers
are returned and the user can then pick whichever they need.
2021-05-10 18:07:07 +03:00
Piotr Wojtczak
440313eb72 storage_service: Add a generic toppartitions endpoint
As part of making the toppartitions API more generic
(i.e. being able to consider multiple tables
and keyspaces specified by the user) this commit adds
a JMX endpoint to call the generic Scylla REST API
introduced in #7864. It has been put inside
storage_service because, now that it can query more than
one column family, it no longer fits the
'column_family' group.

Fixes #4520
2021-03-25 12:35:18 +02:00
Takuya ASADA
9c687b562e dist/redhat: add support SLES
CentOS/RHEL and SLES have different package names for openjdk, so use the
common name of the JRE.
Note that using the common Java package name is also useful when a user
wants to use a different JRE implementation for Scylla.

Also, disable AutoReqProv, which was mistakenly enabled; disabling it is
required for cross-building the rpm.
2021-03-15 17:23:14 +02:00
Amnon Heiman
15c1d4f43f StorageService: Add a method to return the uptime
Currently, nodetool takes the uptime from the JMX server, which is
confusing when what we expect is Scylla's uptime.

This patch exposes the API uptime via an MBean.

Relates to #154

Signed-off-by: Amnon Heiman <amnon@scylladb.com>

Closes #155
2021-03-04 10:52:08 +02:00
dependabot[bot]
ffab41d714 Bump Jackson version in scylla-apiclient
Bumps `jackson.version` from 2.10.4 to 2.12.1.

Updates `jackson-annotations` from 2.10.4 to 2.12.1
- [Release notes](https://github.com/FasterXML/jackson/releases)
- [Commits](https://github.com/FasterXML/jackson/commits)

Updates `jackson-databind` from 2.10.4 to 2.12.1
- [Release notes](https://github.com/FasterXML/jackson/releases)
- [Commits](https://github.com/FasterXML/jackson/commits)

Updates `jackson-jaxrs-json-provider` from 2.10.4 to 2.12.1

Signed-off-by: dependabot[bot] <support@github.com>

Closes #159
2021-03-04 10:48:34 +02:00
Pekka Enberg
bac7d0b31e Merge 'Fix locking in APIBuilder.remove()' from Pekka Enberg
This pull request reverts the commit c2fc96b ("APIBuilder: Remove
RW-lock in JMX server repository wrapper") and fixes a missing unlock
in APIBuilder.remove().

Closes #163

* github.com:scylladb/scylla-jmx:
  APIBuilder: Unlock RW-lock in remove()
  Revert "APIBuilder: Remove RW-lock in JMX server repository wrapper"
2021-03-03 18:29:57 +02:00
Pekka Enberg
59fd4d2b03 APIBuilder: Unlock RW-lock in remove()
The remove() function accidentally called lock() in the finally
block, leaving the RW-lock locked instead of releasing it.

Refs: scylladb/scylla#7991
2021-03-03 18:23:41 +02:00
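The corrected pattern, as a minimal sketch (the repository internals are elided; per the commit, the finally block previously called lock() again):

    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class Repository {
        private final ReadWriteLock lock = new ReentrantReadWriteLock();

        public void remove(String name) {
            lock.writeLock().lock();
            try {
                // ... drop the mbean entry from the repository ...
            } finally {
                // The bug: this line called lock() instead of unlock(), so the
                // write lock was never released and later JMX queries hung.
                lock.writeLock().unlock();
            }
        }
    }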
Pekka Enberg
9d7ee8af3c Revert "APIBuilder: Remove RW-lock in JMX server repository wrapper"
This reverts commit c2fc96be71. The
RW-lock usage had a bug, which will be fixed in a follow up patch.
2021-03-03 18:20:46 +02:00
Calle Wilund
c2fc96be71 APIBuilder: Remove RW-lock in JMX server repository wrapper
This is a seemingly pointless change. The RW-lock code is 100%
correct (afaict), yet we've seen repeated cases of test runs
hanging in JMX query because this lock is seemingly left held
by what seems to be the reaper task.

There is no explanation for this, no sign of exceptions/errors
that could explain the lock being broken. Nor any known JDK/JVM
bugs.

Yet, in tests, it seems that replacing the lock with a more
coarse, yet proven, synchronized, fixes the issue. So there.

I officially hate this patch, and it should not exist.
2021-03-03 15:40:33 +02:00
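The coarse replacement described above amounts to something like this sketch (names are illustrative):

    import java.util.HashMap;
    import java.util.Map;
    import javax.management.DynamicMBean;
    import javax.management.ObjectName;

    public class Repository {
        private final Map<ObjectName, DynamicMBean> beans = new HashMap<>();

        // One monitor instead of a RW-lock: coarser, but proven not to wedge
        // in the hanging test runs the commit describes.
        public synchronized DynamicMBean retrieve(ObjectName name) {
            return beans.get(name);
        }

        public synchronized void add(ObjectName name, DynamicMBean bean) {
            beans.put(name, bean);
        }
    }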
Amnon Heiman
8073af6e06 CompactionManager: add the compaction id when available
This patch adds the compaction id to getCompactions if it is returned by the
API; if it is not, the current behaviour is kept and none is returned.

After this patch, a call to nodetool compactionstats -H

will return:

id                                   compaction type keyspace  table     completed total unit progress
c942bd30-7a62-11eb-84bc-576502584f9a COMPACTION      keyspace1 standard1 1062      8576  keys 12.38%
c9429620-7a62-11eb-8afb-576402584f9a COMPACTION      keyspace1 standard1 972       8448  keys 11.51%
Active compaction remaining time :   0h00m00s

Fixes scylladb/scylla#7927

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2021-03-01 10:04:08 +02:00
Pekka Enberg
bf8bb16b52 Merge 'dist/debian: fix renaming debian/scylla-* files rule' from Takuya ASADA
The current renaming rule for debian/scylla-* files is buggy: it fails to
install some .service files when a custom product name is specified.

Introduce regex-based rewriting instead of ad-hoc renaming, and fix the
wrong renaming rule.

Fixes scylladb/scylla#8113

Closes #158

* github.com:scylladb/scylla-jmx:
  dist/debian: fix renaming debian/scylla-* files rule
  dist/debian: sync packaging script with scylla main repo
2021-02-18 10:34:29 +02:00
Takuya ASADA
8f62d71e11 dist/debian: fix renaming debian/scylla-* files rule
The current renaming rule for debian/scylla-* files is buggy: it fails to
install some .service files when a custom product name is specified.

Introduce regex-based rewriting instead of ad-hoc renaming, and fix the
wrong renaming rule.

Related scylladb/scylla#8113
2021-02-18 04:38:36 +09:00
Takuya ASADA
3618481e23 dist/debian: sync packaging script with scylla main repo 2021-02-18 04:25:48 +09:00
Takuya ASADA
949cefc251 dist/redhat: stop using systemd macros, call systemctl directly
The Fedora version of the systemd macros does not work correctly on
CentOS 7, since CentOS 7 does not support the "file trigger" feature.
Even after 05d4378, the scriptlets in the old and new scylla .rpm are not
completely the same.

To fix the issue we need to stop using the systemd macros and call
systemctl directly.

Fixes #94
2021-02-02 11:28:55 +02:00
Piotr Wojtczak
611d586981 Remove obsolete FIXME
The cardinality problem has already been fixed in #149.
2021-01-25 13:07:40 +02:00
Amos Kong
2c9565024f install.sh: set a valid WorkingDirectory for nonroot offline install
In commit 6311525, we set an empty WorkingDirectory in the nonroot.conf
of scylla-jmx.service. That works on Ubuntu 16, Debian 9 and Debian 10,
but it doesn't work on Ubuntu 18.

This patch changes the WorkingDirectory of the nonroot offline install to
the default install directory (/home/scylla-test/scylladb).

Fixes #151

Signed-off-by: Amos Kong <amos@scylladb.com>
2020-12-28 21:18:35 +02:00
Piotr Wojtczak
20469bf749 column_family: Return proper cardinality for toppartitions requests
Right now, in the finishLocalSampling method of the ColumnFamilyStore,
we return the size of the list of returned partitions. Instead, we should
propagate the actual cardinality of the sampled set.
Let's just read the read_cardinality and write_cardinality properties
of Scylla's REST API response.

Fixes #148
2020-12-13 13:50:56 +02:00
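A hedged sketch of reading those two properties with Jackson; the two field names come from the commit message, everything else is assumed:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    static long[] cardinalities(String json) throws java.io.IOException {
        JsonNode resp = new ObjectMapper().readTree(json);
        // Propagate the sampler's real cardinality, not the size of the
        // returned partition list.
        return new long[] {
                resp.path("read_cardinality").asLong(),
                resp.path("write_cardinality").asLong()
        };
    }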
Eliran Sinvani
6174a47924 Relocatable Package: create product prefixed relocatable archive
The build system was hardcoded to produce a package
prefixed with scylla instead of the product name. This is not
in line with our CI system requirements and can also be a source
of confusion.
This commit makes the packaging system generate a package of
the format {product}-jmx-package.tar.gz instead of
scylla-jmx-package.tar.gz.

Closes #146
2020-10-15 17:10:21 +03:00
dependabot[bot]
91e7adffb1 build(deps-dev): bump junit from 4.8.2 to 4.13.1
Bumps [junit](https://github.com/junit-team/junit4) from 4.8.2 to 4.13.1.
- [Release notes](https://github.com/junit-team/junit4/releases)
- [Changelog](https://github.com/junit-team/junit4/blob/main/doc/ReleaseNotes4.13.1.md)
- [Commits](https://github.com/junit-team/junit4/compare/r4.8.2...r4.13.1)

Signed-off-by: dependabot[bot] <support@github.com>
2020-10-15 14:22:38 +03:00
Amnon Heiman
c51906ed01 StorageService.java: Use the endpoint for getRangeToEndpointMap
Now that the range_to_endpoint_map endpoint is implemented, update the
API call to use it.

Fixes #36

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2020-10-08 11:53:33 +03:00
Takuya ASADA
c55f3f292b dist/debian/debian_files_gen.py: don't ignore permission error on shutil.rmtree()
shutil.rmtree(ignore_errors=True) was meant to ignore the error when the
directory does not exist, but it also ignores permission errors, so we
shouldn't use it. Check os.path.exists() before shutil.rmtree() instead.

See scylladb/scylla#7337
2020-10-08 11:48:30 +03:00
Takuya ASADA
e3a381d5a1 install.sh: show warning nonroot mode when systemd does not support user mode
Older distributions such as CentOS 7 do not support systemd user mode.
On such distributions nonroot mode does not work, so show a warning
message and skip running systemctl --user.

See scylladb/scylla#7071
2020-10-07 11:33:22 +03:00
Takuya ASADA
25bcd76017 install.sh: stop using symlinks for systemd units on nonroot mode
On some environments, systemctl enable <service> fails when we use a
symlink. So just copy the systemd units directly to ~/.config/systemd/user
instead of creating symlinks.

See scylladb/scylla#7288
2020-09-29 13:31:30 +03:00
Avi Kivity
45e4f28766 build: support passing product-version-release as a parameter
Instead of using the baked-in values from SCYLLA-VERSION-GEN,
allow passing an override. This will be used by the supermodule
to have an identical product-version-release (especially release,
which contains the git hash) across all packages.
2020-09-23 12:57:50 +03:00
Avi Kivity
6795a22afe Merge "dist: do not install build dependencies on build script" from Takuya
"
We do not want to install dependencies at package build time;
we want to install them in the dbuild container.
So drop package installation from the scripts.

scylladb/scylla#7219
"

Closes #138.

* 'dont_install_builddep' of https://github.com/syuu1228/scylla-jmx:
  reloc: simplified .deb build process
  reloc: simplified .rpm build process
  dist: do not install build dependencies on build script
  dist/debian: Remove conflict tag for Java 11
2020-09-16 10:29:23 +03:00
Takuya ASADA
5fa422c4ea reloc: simplified .deb build process
We don't really need two build_deb.sh scripts; merge them into reloc.
2020-09-16 16:19:24 +09:00
Takuya ASADA
f1612ef508 reloc: simplified .rpm build process
We don't really need two build_rpm.sh scripts; merge them into reloc.
2020-09-16 16:19:24 +09:00
Pekka Enberg
d3096f32e0 dist: debian: fix detection of debuild
Fix the path of debuild, similar to what commit
f57fbb77b0a3f8d240c9924f3fa4529f5b5c8122 ("dist: debian: fix detection
of debuild") did in scylla-tools.

Message-Id: <20200911164153.34699-1-penberg@scylladb.com>
2020-09-13 16:26:22 +03:00
Takuya ASADA
99e491df40 dist: do not install build dependencies on build script
We do not want to install dependencies at package build time;
we want to install them in the dbuild container.
So drop package installation from the scripts.

scylladb/scylla#7219
2020-09-13 20:19:16 +09:00
Pekka Enberg
8d92e5450e Merge 'JMX footprint work' from Calle
"
Fixes #133
Fixes #134
Refs #135

Makes the CF mbean refresh code synchronized and tries to remove redundant
calls if we contend. Adds background reaping of dead objects to reduce
memory load in (test) scenarios where we manage to refresh to add, but
not cause removal (i.e. no wildcard queries).

TableMetricsObjectName serialization is fixed in the series because
without it we see loads of exceptions when refreshing the mbean set.
"

* elcallio-jmx-fixes:
  scylla-jmx: Use registration checker objects
  scylla-jmx: Introduce a registration check object
  scylla-jmx: Fix TableMetricObjectName serialization
2020-09-07 13:54:56 +03:00
Calle Wilund
ba3f58c63c scylla-jmx: Use registration checker objects
Fixes #134
Refs #135

Replaces the previous refresh calls with ones bound to registration
check objects, which provides some synchronization between threads doing
refresh and reduces redundant calls.

Also adds repeated reaping of dead objects, i.e. every 5 minutes
we try to remove dead CFs (without adding new ones), to reduce the
idle footprint.
2020-09-07 11:00:42 +02:00
Calle Wilund
771fe3e360 scylla-jmx: Introduce a registration check object
Allows for shared code for synchronized and optionally
partial update checks.
2020-09-07 11:00:42 +02:00
Pekka Enberg
12ab6aaeb8 Merge "Fix JMX startup after offline installation" from Amos
"Currently after offline installation, the scylla-jmx fails to start.
 This pull request fixes issues with openjdk version detection and
 working directory configuration to make scylla-jmx start.

 Fixes: scylladb/scylla#7098 by [PATCH] install.sh: check both openjdk-8 and openjdk-11

 Fixes: #129 by [PATCH] nonroot.conf: set WorkingDirectory to empty"

Reviewed-by: Takuya ASADA <syuu@scylladb.com>
* 'openjdk' of git://github.com/amoskong/scylla-jmx:
  install.sh: check both openjdk-8 and openjdk-11
  nonroot.conf: set WorkingDirectory to empty
2020-09-07 09:38:28 +03:00
Calle Wilund
1219faf9f1 scylla-jmx: Fix TableMetricObjectName serialization
Fixes #133

TableMetricObjectName is not serializable as such, since
it depends on a lexicon object etc.

Use writeReplace to put a regular ObjectName in
the stream instead.
2020-09-01 15:46:18 +02:00
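A minimal sketch of the writeReplace trick; the real TableMetricObjectName carries more state, this only shows the serialization hook:

    import java.io.InvalidObjectException;
    import java.io.ObjectStreamException;
    import javax.management.MalformedObjectNameException;
    import javax.management.ObjectName;

    public class TableMetricObjectName extends ObjectName {
        public TableMetricObjectName(String name) throws MalformedObjectNameException {
            super(name);
        }

        // Substitute a plain, serializable ObjectName when this object is
        // written to an object stream, so remote JMX clients can read it.
        private Object writeReplace() throws ObjectStreamException {
            try {
                return new ObjectName(getCanonicalName());
            } catch (MalformedObjectNameException e) {
                throw new InvalidObjectException(e.getMessage());
            }
        }
    }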
Amos Kong
d998ac2e1e install.sh: check both openjdk-8 and openjdk-11
On Debian 10, only OpenJDK 11 is available, so install.sh fails its Java
check. OpenJDK 8 and OpenJDK 11 both work well; we should check for both.
This patch also fixes the error message.

Signed-off-by: Amos Kong <amos@scylladb.com>
2020-08-28 01:24:37 +08:00
Amos Kong
6311525346 nonroot.conf: set WorkingDirectory to empty
After offline installation, scylla-jmx fails to start because of a chdir
error. WorkingDirectory is set to /var/lib/scylla in scylla-jmx.service,
but that directory doesn't exist in a nonroot install. This patch solves
the problem by setting WorkingDirectory to empty in nonroot.conf.

$ systemctl --user status scylla-jmx
● scylla-jmx.service - Scylla JMX
   Loaded: loaded (/home/scylla-test/.config/systemd/user/../../../scylladb/etc/systemd/scylla-jmx.service; linked; vendor preset: enabled)
  Drop-In: /home/scylla-test/.config/systemd/user/scylla-jmx.service.d
           └─nonroot.conf
   Active: failed (Result: exit-code) since Wed 2020-08-26 15:19:56 UTC; 2s ago
  Process: 66955 ExecStart=/home/scylla-test/install_root/jmx/scylla-jmx $SCYLLA_JMX_PORT $SCYLLA_API_PORT $SCYLLA_API_ADDR $SCYLLA_JMX_ADDR $SCYLLA_JMX_FILE $SCYLLA_JMX_LOCAL $SCYLLA_JMX_REMOTE $SCYLLA_JMX_DEBUG (code=exited, status=200/CHDIR)
 Main PID: 66955 (code=exited, status=200/CHDIR)

systemd[5654]: Started Scylla JMX.
systemd[66955]: scylla-jmx.service: Changing to the requested working directory failed: No such file or directory
systemd[66955]: scylla-jmx.service: Failed at step CHDIR spawning /home/scylla-test/scylladb/jmx/scylla-jmx: No such file or directory
systemd[5654]: scylla-jmx.service: Main process exited, code=exited, status=200/CHDIR
systemd[5654]: scylla-jmx.service: Failed with result 'exit-code'.

Signed-off-by: Amos Kong <amos@scylladb.com>
2020-08-26 23:34:28 +08:00
Yaron Kaikov
d5d1efd188 dist/debian: Remove conflict tag for Java 11
We currently require Java 8 to install the scylla-jmx package on Debian.
As Debian 10 defaults to Java 11, let's remove the conflict flag and add
Java 11 to the dependencies list.
2020-08-25 15:46:04 +03:00
Yaron Kaikov
23da40b559 dist/debian: Remove conflict tag for Java 11
We currently require Java 8 to install the scylla-jmx package on Debian.
As Debian 10 defaults to Java 11, let's remove the conflict flag and add
Java 11 to the dependencies list.
2020-08-24 09:13:34 +03:00
Takuya ASADA
be8f1ac511 dist/common/systemd: set WorkingDirectory to get heap dump correctly
Currently scylla-jmx.service's PWD is "/", so we get the following error
when the JVM tries to write a heap dump to the current directory:

Aug 17 05:52:15 localhost.localdomain scylla-jmx[3469]: Starting the JMX server
Aug 17 05:52:16 localhost.localdomain scylla-jmx[3469]: java.lang.OutOfMemoryError: Java heap space
Aug 17 05:52:16 localhost.localdomain scylla-jmx[3469]: Dumping heap to java_pid3469.hprof ...
Aug 17 05:52:16 localhost.localdomain scylla-jmx[3469]: Unable to create java_pid3469.hprof: Permission denied

To fix this, we need to specify WorkingDirectory in the systemd unit.
2020-08-17 09:54:38 +03:00
Avi Kivity
c5ed83178a dist: debian: support non-x86
The package is already arch-independent, so remove the artificial
restriction to x86.
2020-08-04 13:07:42 +03:00
Avi Kivity
626fd75173 dist: debian: do not require root during package build
Debian package builds provide a root environment for the installation
scripts, since that's what typical installation scripts expect. To
avoid providing actual root, a "fakeroot" system is used where syscalls
are intercepted and any effect that requires root (like chown) is emulated.

However, fakeroot sporadically fails for us, aborting the package build.
Since our install scripts don't really require root (when operating in
the --packaging mode), we can just tell dpkg-buildpackage that we don't
need fakeroot. This ought to fix the sporadic failures.

As a side effect, package builds are faster.

Follows scylla.git's b608af870b0a1ad88b91a72bddeff0c321877f9e.

Refs scylladb/scylla#6655.
2020-07-29 12:53:20 +03:00
Piotr Jastrzebski
c0d9d0f051 add build/ to gitignore
This directory is created by a build and shouldn't be committed, so
it's best for git to just ignore it.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <b20dba12fb726aebd51b2ab9494e7c52f8058feb.1595259605.git.piotr@scylladb.com>
2020-07-21 09:10:25 +03:00
Pekka Enberg
7578d359af reloc: Add "--builddir" option to build_{rpm,deb}.sh
We need the ability to control build directory in scylla.git build
system. Let's add support for the "--builddir" option like in other
variants of the same scripts.

Message-Id: <20200717085723.701209-1-penberg@scylladb.com>
2020-07-18 12:16:31 +03:00
Avi Kivity
aa94fe53e0 dist: redhat: reduce log spam from unpacking sources when building rpm
rpmbuild defaults to logging the name of every file it unpacks from
the archive. This is quite a lot for Java applications.

Make it quiet with the %setup -q flag.
2020-07-15 12:27:43 +03:00
Pekka Enberg
4727910b5e Merge 'gitignore: fix typo and add scylla-apiclient/target/' from Benny
When building scylla-jmx in place, `scylla-apiclient/target/` is left
behind and should be ignored by `.gitignore`; otherwise the scylla
submodule directory appears to be dirty.

* bhalevy-gitignore:
  gitignore: do not track scylla-apiclient/target/
  gitignore: fix typo in dependency-reduced-pom.xml
2020-07-14 10:21:44 +03:00
Pekka Enberg
15eb6adf92 apiclient: Bump Jackson version to 2.10.4
Jackson 2.9.x has various vulnerabilities that are fixed in 2.10 series:

https://github.com/FasterXML/jackson-databind/issues/2700#issuecomment-619590967

Let's update to the latest version of Jackson. This is a similar fix to
Github's Dependabot proposal, except we bump the version number across
all Jackson components:

https://github.com/scylladb/scylla-jmx/pull/116
2020-07-14 10:19:49 +03:00
Takuya ASADA
5820992a8e dist/debian: apply generated package version for .orig.tar.gz file
Same as scylladb/scylla#6752,
we are currently unable to apply the version number fixup to the .orig.tar.gz
file, even though we applied the correct fixup to debian/changelog, because it
just reads SCYLLA-VERSION-FILE.
We should parse debian/{changelog,control} instead.

Fixes #120
2020-07-06 12:49:15 +03:00
Benny Halevy
38eb871383 gitignore: do not track scylla-apiclient/target/
It is created when building jmx.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2020-06-24 13:59:34 +03:00
Benny Halevy
28fe33e588 gitignore: fix typo in dependency-reduced-pom.xml
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2020-06-24 13:59:12 +03:00
Pekka Enberg
b2195734cc Upgrade to Guava 29.0
CVE-2018-10237 impacts Guava 24.1.0 and earlier, so let's upgrade to the latest version.

Reported-by: GitHub and Shlomi Livne
2020-06-16 10:04:48 +03:00
Juliusz Stasiewicz
b2e4796901 Added support for checkAndRepairCdcStreams command 2020-06-15 14:58:13 +03:00
Takuya ASADA
78c3b7627f dist/debian: cleanup build/debian before building .deb
In 52bd496, we stopped running rm -rf debian/ in build_deb.sh, since we now
have a prebuilt debian/ directory.
However, this can cause a .deb build error when we modify the debian package
source, since the directory is never cleaned up.

To prevent build errors, we need to clean up build/debian in
reloc/build_deb.sh before extracting the contents of the relocatable package.
2020-06-08 18:15:04 +03:00
Takuya ASADA
e0b21b9a19 dist: add --packaging for .rpm/.deb build
354df10 mistakenly did not include the dist/redhat & dist/debian changes;
add the --packaging option to them.
2020-06-08 17:45:52 +03:00
Takuya ASADA
2883a8dc63 dist/debian: don't install systemd unit by install.sh, use debian/*.service
Installing *.service via the install.sh script causes an error when
installing the .deb package; use debian/*.service instead.

Fixes scylladb/scylla#6010
Related scylladb/scylla#5640
Related 29285b28e2
2020-06-08 12:24:03 +03:00
Takuya ASADA
354df10ea9 install.sh: add dependency check and postinst script for manual install
To install scylla-jmx easily using install.sh, we need to:
 - run a dependency check before install
 - run the postinst script after install

But we don't want to run these when building the .rpm/.deb packages,
so we also add a --packaging option to skip them.

See scylladb/scylla#5830
2020-06-08 12:22:28 +03:00
Takuya ASADA
3fb777a8f0 dist/debian: support version number containing '_'
The .deb packaging system does not support version numbers containing '_';
it should be replaced with '-'.
2020-06-04 05:27:04 +09:00
Takuya ASADA
f044c8988e dist/debian: move version number fixup to debian_files_gen.py
Now that we generate dist/changelog at relocatable package generation time,
we cannot run the '.rc' fixup at .deb package build time; we need to do it
in debian_files_gen.py.
2020-06-04 05:27:02 +09:00
Takuya ASADA
ec2a830876 reloc-pkg: move all files under project name directory
To build a unified relocatable package easily, we may want to merge the
tarballs into a single tarball like this:
zcat .tar.gz | gzip -c > scylla-unified.tar.xz
But that's not possible with the current relocatable package format, since
multiple files conflict: install.sh, SCYLLA--FILE, dist/, README.md, etc.

To support this, we need to archive everything inside a directory when
building the relocatable package.

See: scylladb/scylla#6315
2020-06-03 09:53:11 +03:00
Amnon Heiman
9628cc0728 StorageService: Add the scrub 3.11 command implementation
The scrub command was not supported from nodetool, but now that we want
to enable it, the current API is not compatible with the 3.11 MBean
definition.

This patch adds the definition to the MBean and the implementation to
StorageService.

It also addresses two problems with the old scrub implementation, just
in case someone uses it.

1. Implementation didn't pass the parameters to the API.
2. A stub implementation called itself instead of calling an actual
implementation.

This patch enables testing the command from nodetool; additional
changes may come on top of it if more command-line options are to be
supported.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2020-05-29 14:12:09 +03:00
Ivan Prisyazhnyy
c7dcbd7f42 Fix isAutoCompactionDisabled
Align the API to the recent changes in https://github.com/scylladb/scylla/pull/6176

Don't wrap API exceptions into IOException for enableAutoCompaction
2020-05-29 14:02:40 +03:00
dependabot[bot]
fc43c56369 build(deps): bump jackson-databind in /scylla-apiclient
Bumps [jackson-databind](https://github.com/FasterXML/jackson) from 2.9.10.1 to 2.9.10.4.
- [Release notes](https://github.com/FasterXML/jackson/releases)
- [Commits](https://github.com/FasterXML/jackson/commits)

Signed-off-by: dependabot[bot] <support@github.com>
2020-05-29 14:00:31 +03:00
Takuya ASADA
52bd496006 dist/debian: drop dependency on pystache
Drop the dependency on pystache since it is no longer present in Fedora 32.

To implement this, the debian package build process is simplified:
the debian/ directory is now generated when building the relocatable package,
and we just need to run debuild using the package.

To generate the debian/ directory, this commit adds debian_files_gen.py,
which constructs the whole directory, including the control and changelog
files, from template files.
Since we need to stop using pystache, these template files switched to the
string.Template class included in the python3 standard library.

see: https://github.com/scylladb/scylla/pull/6313
2020-05-23 06:08:01 +03:00
Takuya ASADA
18f8acc60e dist/redhat: drop dependency on pystache
Same as https://github.com/scylladb/scylla/pull/6313, drop the dependency
on pystache since it is no longer present in Fedora 32.
2020-05-19 08:15:59 +03:00
Takuya ASADA
773a82d539 dist: allow specify JVM options from sysconfig (#93)
Add SCYLLA_JMX_JVM_OPTS to sysconfig to specify JVM options.

Reviewed-by: Ľuboš Koščo <lubos@scylladb.com>

Fixes #58
2020-01-28 12:43:03 +02:00
Takuya ASADA
46681753cd dist: add /usr/lib/scylla/jmx for compatibility (#91)
In commit 4c8660d, we dropped /usr/lib/scylla/jmx since it is unlikely
that any user script invokes scripts under that directory.
However, we found that scylla-jmx.service may try to load the .jar
file from /usr/lib/scylla/jmx when the user upgrades from an older version
of scylla.
Because /etc/sysconfig/scylla-jmx is marked 'noreplace' in our rpm,
yum upgrade may keep the old sysconfig file if the user modified it, which
can cause the .jar to be loaded from /usr/lib/scylla/jmx since we specify
that path in the sysconfig file.

To avoid the issue, it's better to have symlinks at /usr/lib/scylla/jmx for
safety.

See #90
2020-01-16 15:51:39 +02:00
Takuya ASADA
29601254fc dist/redhat: call systemctl --daemon-reload when upgraded (#92)
Since %systemd_post does not call systemctl daemon-reload, we need to call
it manually to apply changes.

Fixes #90
2020-01-08 13:35:38 +02:00
Takuya ASADA
4c8660d41a dist: drop symlink to scripts (#89)
This is the scylla-jmx part of https://github.com/scylladb/scylla/pull/5530

After we stopped replacing /usr/lib/scylla with a symlink,
creating the symlink at /opt/scylladb/scripts/jmx became meaningless,
so we can drop it now.

We could introduce a new symlink at /usr/lib/scylla, but it is unlikely
that any user directly invokes scripts under /usr/lib/scylla/jmx, so we
don't have to do that.
2019-12-30 13:51:33 +02:00
Takuya ASADA
2f34a97c6e dist/debian: add AdoptOpenJDK for Debian 10
Since Debian 10 dropped OpenJDK 8, we need to switch to another
JVM distribution that still maintains OpenJDK 8 for Debian.
We can use AdoptOpenJDK for such environments.

See scylladb/scylla#5463

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20191218124838.35017-1-syuu@scylladb.com>
2019-12-18 15:32:04 +02:00
Takuya ASADA
236ffa6c98 dist/debian: add Conflicts with openjdk-11
Since Debian variants can install multiple JREs, "Depends: openjdk-8-jre"
may not mean the default JRE is openjdk-8.
We should block openjdk-11 for now, until we support it.

Related with scylladb/scylla#5463

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20191218111835.25618-1-syuu@scylladb.com>
2019-12-18 14:08:28 +02:00
Takuya ASADA
771cf6ea50 dist/redhat: force xz compression on rpm binary payload
Same as 301c835cbf,
Fedora 31 switched the default compression to zstd, which isn't readable
by some older rpm distributions (CentOS 7 in particular). Tell it to use
the older xz compression instead, so packages produced on Fedora 31 can
be installed on older distributions.

See: https://github.com/scylladb/scylla/pull/5310

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20191210191441.108774-1-syuu@scylladb.com>
2019-12-10 22:18:29 +02:00
Takuya ASADA
31e6bcf9be dist/redhat: fix rpmbuild error on Fedora 31
Same as scylladb/scylla-ami#53: it seems the rpm macro %systemd_postun
requires one argument starting from Fedora 31; otherwise it causes an error.
The solution is to pass the systemd unit name, just like with
%systemd_post and %systemd_preun.

see: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/TU3T2ZYY67SMAJFR2TD4HY6SCPPDVS5V/

Fixes #87

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20191205120514.9382-1-syuu@scylladb.com>
2019-12-05 17:03:10 +02:00
Alexandros Bantis
d8c47603d9 Create a HTTP client per instance (#86)
Create javax HTTP client once per instance instead of per request.

Fixes #82
2019-11-19 17:28:09 +02:00
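A hedged sketch of the change: build the javax.ws.rs client once per instance and reuse it for every request (URL, port, and method name are illustrative):

    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;

    public class APIClient {
        // Built once per APIClient instance; creating a client per request
        // is expensive and was the behaviour this commit removes.
        private final Client client = ClientBuilder.newClient();

        public String getRawValue(String path) {
            return client.target("http://127.0.0.1:10000")
                    .path(path).request().get(String.class);
        }
    }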
Pekka Enberg
f45ae1833e Add Pekka as a "code owner" on GitHub (#85)
Add myself as a "code owner" so that I am assigned a review
automatically:

https://help.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners

I also wanted to add Amnon and Calle, but apparently you need to have
write permissions in order to be a code owner.

The purpose of this automation is to ensure Scylla JMX pull requests
show up in my github.com/pulls page. Thanks Maciej Zimnoch for the tip!
2019-11-14 02:18:33 -08:00
dependabot[bot]
8e1beb11f4 Upgrade jackson-databind from 2.9.9 to 2.9.10.1 (#84)
This upgrades jackson-databind dependency from version 2.9.9 to 2.9.10.1, which fixes various security vulnerabilities:

https://www.cvedetails.com/vulnerability-list/vendor_id-15866/product_id-42991/Fasterxml-Jackson-databind.html
2019-11-13 19:57:55 +02:00
Glauber Costa
27fed6136a Run scylla-jmx in a systemd slice (#79)
Scylla now supports server-defined systemd slices that are used to provide
isolation between components.

This patch adds scylla-jmx to the helper slice. This will guarantee that
scylla-jmx does not use too many resources and influence server performance.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2019-11-13 18:28:14 +02:00
Amos Kong
fa00e84794 build_deb.sh: don't generate scylla-jmx.service from mustache template (#81)
Takuya untemplatized scylla-jmx.service in commit e8355087ea,
but build_deb.sh still tries to generate the service file from a deleted
mustache template file -- scylla-jmx.spec.mustache. It wrongly redirects
a path to the service file, and scylla-jmx then fails to start.

Fixes #80
2019-10-01 18:35:50 +03:00
Calle Wilund
f915f8fc7a sstableinfo: Fix deserialization of "properties"
Refs #76

Since the incoming json uses swagger "key", "value" syntax,
we need to do explicit deserialization of this property
as well (not just the extended props).

Message-Id: <20190930115432.27801-1-calle@scylladb.com>
2019-09-30 15:29:52 +03:00
Avi Kivity
dc7f37b901 Merge "nonroot installer" from Takuya
"This is nonroot installer patchset v4, for scylla-jmx."

* 'nonroot_v4' of https://github.com/syuu1228/scylla-jmx:
  install.sh: add --nonroot mode
  dist/common/systemd: untemplataize *.service, use drop-in units instead
  dist: move package build script to install.sh
2019-09-10 14:46:04 +03:00
Takuya ASADA
9ef12f4651 install.sh: add --nonroot mode
This implements a way to install Scylla without requiring root privileges;
it is not distribution dependent and does not use the package manager.
2019-09-04 09:54:42 +09:00
Takuya ASADA
e8355087ea dist/common/systemd: untemplataize *.service, use drop-in units instead
Since systemd units can override parameters using drop-in units, we don't
need mustache templates for them.

Also, drop the --disttype option from install.sh since it is not required
anymore, and introduce --sysconfdir instead for non-redhat distributions.
2019-09-04 08:54:05 +09:00
Takuya ASADA
a1044e3bd1 dist: move package build script to install.sh
Move the package build scripts for .rpm/.deb into a single script,
install.sh. We need this to support nonroot mode, and also to keep the
package build process consistent between .rpm and .deb.
2019-09-04 08:30:37 +09:00
Pekka Enberg
04ea3ab7e0 Merge 'Implement sstable_info command' from Calle
"Fixes #76

Implements JMX level call for "sstable_info" REST api command.

Requires seastar patch:
json: Make date formatter use RFC8601/RFC3339 format

Requires scylla patch set "Implement sstable_info API command (info on sstables)"

Forwards call to REST sstable_info and packs the data
into CompositeData for JMX consumption."
* 'sstabledesc' of git://github.com/elcallio/scylla-jmx:
  storage_service: Add "getSSTableInfo" command/attribute
  service: Add objects for deserializing sstable_info json
  scylla-apiclient: Add Date json serializer helper
  APIClient: Add jackson JSON serializer support to client object
  apiclient/pom.xml: Add jackson JSON support libs for REST client
2019-08-13 14:40:25 +03:00
Calle Wilund
133b2e4728 storage_service: Add "getSSTableInfo" command/attribute
Fixes #76

Requires seastar patch:
 json: Make date formatter use RFC8601/RFC3339 format

Requires scylla patch set "Sstabledesc"

Forwards call to REST sstable_info and packs the data
into CompositeData for JMX consumption.
2019-08-06 08:12:14 +00:00
Amnon Heiman
71170f5713 CompactionMetrics: use the pending compaction API (#75)
The PendingTasksByTableName metric should use the pending_tasks_by_table
API to get the real value of the pending compaction.

Fixes #74

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2019-08-05 14:12:48 +03:00
Amnon Heiman
ff0723abc6 ColumnFamilyStore: Mbean API support the hex format param (#69)
Cassandra 3.0 version of the JMX added a parameter that allows accepting
the parameter as hex.

This breaks the current implementation with a NoSuchMethodException.

This patch adds the missing implementation.

For a full support, a follow up patch in Scylla is needed, but for the
current functionality it would work.

After this patch usage example:

nodetool getsstables keyspace1 standard1 39303138374b4d343830

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2019-07-29 10:09:04 +03:00
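For illustration, a small hypothetical helper (not the project's code) that turns such a hex key back into bytes:

    // "39303138374b4d343830" -> the original partition key bytes.
    static byte[] fromHex(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }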
Calle Wilund
cb42205061 service: Add objects for deserializing sstable_info json
Objects + serial logic to automate the transform of
scylla REST json object for sstable_info into
compositedata that can be consumed by nodetool
2019-07-24 14:31:10 +00:00
Calle Wilund
b2f3eeee05 scylla-apiclient: Add Date json serializer helper
To handle RFC8601-formatted dates in JAXB
2019-07-24 14:30:02 +00:00
Calle Wilund
d8efa60ab7 APIClient: Add jackson JSON serializer support to client object
Allows java ws to deserialize json objects directly.
2019-07-24 14:28:38 +00:00
Calle Wilund
bbc817013e apiclient/pom.xml: Add jackson JSON support libs for REST client 2019-07-24 14:27:56 +00:00
Calle Wilund
263735379e scylla-jmx: Remove dependency-reduced-pom.xml from tracking
This file is (re-)generated by maven on occasions. It should
not be version controlled.

Add to .gitignore as well.
2019-07-23 11:05:34 +03:00
Amnon Heiman
f0d2df3d15 StorageProxy.java: Add view write metrics
The nodetool proxyhistograms command looks for the view write metric.

While we do not report that metric yet, we still want the command to
succeed.

After this patch:
$ nodetool proxyhistograms
proxy histograms
Percentile       Read Latency      Write Latency      Range Latency   CAS Read Latency  CAS Write Latency View Write Latency
                     (micros)           (micros)           (micros)           (micros)           (micros)           (micros)
50%                    326.00             110.00             424.50               0.00               0.00               0.00
75%                   1253.00             193.25             877.75               0.00               0.00               0.00
95%                   2935.90            1007.25            5182.55               0.00               0.00               0.00
98%                   3100.00            1040.60            5492.00               0.00               0.00               0.00
99%                   3100.00            1058.00            5492.00               0.00               0.00               0.00
Min                     34.00               9.00              36.00               0.00               0.00               0.00
Max                   3100.00            1058.00            5492.00               0.00               0.00               0.00

See scylladb/scylla#4470

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <20190514063316.28040-1-amnon@scylladb.com>
2019-07-21 19:20:37 +03:00
Amnon Heiman
c7bce65919 APIMBeanServer: Support both Table and Tables as metric name
Fixes #71

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2019-07-17 10:56:44 +03:00
Pekka Enberg
4303f06426 Merge "Make API client a separate module for reuse" from Ľuboš
There's an effort to implement a version of "nodetool" that uses
Scylla's REST API directly. Let's make the API client a separate module,
so nodetool can use it.

* 'scylla-apiclient' of https://github.com/tarzanek/scylla-jmx:
  fix README for building instructions
  trigger build from parent maven to have the local repo properly set up
  cleanup commented implicit steps in mvn
  make scylla-apiclient a separate module so the jar can be reused
2019-07-10 22:31:12 +03:00
Lubos Kosco
183eb6158a fix README for building instructions 2019-07-08 11:02:45 +02:00
Lubos Kosco
4296c7d3ae trigger build from parent maven to have the local repo properly set up 2019-07-08 10:54:04 +02:00
Lubos Kosco
222990d821 cleanup commented implicit steps in mvn 2019-07-08 10:04:24 +02:00
Lubos Kosco
91ae4ec8ee make scylla-apiclient a separate module so the jar can be reused 2019-07-01 17:33:08 +02:00
Amnon Heiman
9dae28e2f0 ColumnFamilyStore: finishLocalSampling should respect count limit
When calling nodetool toppartitions with a size limit, finishLocalSampling
should respect it and limit the number of results.

Example:
$ nodetool toppartitions -k 2 keyspace1 standard1 20
WRITES Sampler:
  Cardinality: ~2 (256 capacity)
  Top 2 partitions:
	Partition                Count       +/-
	38333032394d4f4d5030         4         3
	4e353937383137503330         4         3

READS Sampler:
  Cardinality: ~2 (256 capacity)
  Top 2 partitions:
	Nothing recorded during sampling period...

Fixes #66

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2019-06-23 15:31:18 +03:00
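A sketch of the truncation the fix implies, under the assumption that samples arrive as (partition, count) pairs; all names are illustrative:

    import java.util.Comparator;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    static List<Map.Entry<String, Long>> topN(Map<String, Long> counts, int count) {
        // Keep only the `count` most frequent partitions, as requested
        // by nodetool toppartitions.
        return counts.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue(Comparator.reverseOrder()))
                .limit(count)
                .collect(Collectors.toList());
    }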
Amnon Heiman
2fac82434b APIClient: delete command should check for errors
Delete commands do not return a value; still, it is possible that a
command returns a status other than OK.

In such a case, the error should be propagated to the caller via an
exception.

Fixes #65

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <20190618135312.2776-1-amnon@scylladb.com>
2019-06-18 18:56:30 +03:00
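A hedged sketch of the check in JAX-RS client style, matching the apiclient module; the exception type is illustrative:

    import javax.ws.rs.client.WebTarget;
    import javax.ws.rs.core.Response;

    static void checkedDelete(WebTarget target) {
        Response response = target.request().delete();
        // DELETE returns no payload, but a non-OK status still signals a
        // failure that must reach the caller as an exception.
        if (response.getStatus() != Response.Status.OK.getStatusCode()) {
            throw new RuntimeException("DELETE failed: " + response.readEntity(String.class));
        }
    }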
Takuya ASADA
eda6f4e1a8 dist/debian: run 'systemctl daemon-reload' automatically on package install/uninstall
Since we don't want to automatically enable systemd units (we manage them
with our own setup scripts), we cannot use dh --with=systemd, so we have
to run 'systemctl daemon-reload' manually.
(With dh --with=systemd, the systemd helper automatically provides such
scripts.)

See scylladb/scylla-enterprise#825

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20190618000414.29142-1-syuu@scylladb.com>
2019-06-18 15:46:29 +03:00
Takuya ASADA
f73da49f62 dist: merge /usr/lib/scylla to /opt/scylladb
Since scylla-jmx uses /usr/lib/scylla/jmx as its program directory, we also
need to move it under /opt/scylladb.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20190618122451.27721-1-syuu@scylladb.com>
2019-06-18 15:43:53 +03:00
Calle Wilund
512638ed6e APIMBeanServer: Handle nodeprobe wildcard queries in CF refresh
Fixes #63
Message-Id: <20190311082942.3310-2-calle@scylladb.com>
2019-05-05 18:10:37 +03:00
Calle Wilund
5f974bc2bb ColumnFamilyStore: Propagate exception cause in sampling wait
Message-Id: <20190311082942.3310-1-calle@scylladb.com>
2019-05-05 18:10:37 +03:00
Takuya ASADA
5e50090bfd dist: merge product name parameter on single place
When we added product name customization, we mistakenly defined the
parameter in each package build script.
The number of scripts is increasing since we recently added the relocatable
python3 package, so we should merge it into a single place.

Also, we should save the parameter in the relocatable package, just like
the version-release parameters.

So move the definition to SCYLLA-VERSION-GEN, save it to
build/SCYLLA-PRODUCT-FILE, then archive it into the relocatable package.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20190422105304.23454-1-syuu@scylladb.com>
2019-04-22 13:55:53 +03:00
Piotr Jastrzebski
cb1ac4a58c Capture heap dump on OutOfMemoryError
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <ae90bb6e4b18f51376d2e7009078bf8c8e6ed7fd.1552407088.git.piotr@scylladb.com>
2019-03-26 16:24:35 +02:00
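Presumably this is the standard HotSpot flag added to the JVM invocation; the exact launcher wiring is not shown in this log:

    -XX:+HeapDumpOnOutOfMemoryError

Commit be8f1ac511 above sets WorkingDirectory so that the resulting java_pid<pid>.hprof file can actually be written.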
Calle Wilund
da21305989 StorageService: Include the arguments in "upgrade" call.
Message-Id: <20190219133431.29009-1-calle@scylladb.com>
2019-02-27 10:33:43 +02:00
Amnon Heiman
27313ee2c4 ColumnFamilyStore: Add an implementation for table sampling
This patch adds the implementation for begin and finish local sampling
of a column family.

There is a difference between the Cassandra API implementation and Scylla's.

In Cassandra and the JMX, an external source starts and stops the sampling.

In Scylla, a single API call starts the sampling and returns with the
result, and the API call always returns sampling of both the reads and
the writes.

To bridge the difference, the begin sampling command will use a Future
when calling the API. The finish method will wait for the future to end.

Because of the different implementation, it is possible that two
consecutive calls will be made to start sampling, one for the reads and
one for the writes; similarly, two calls will be made to finish, for
reads and writes.

The implementation ignores the second call to start and stores the
result, so the second call to finish is served from the stored result.

Note that the use of a future is only for safety; the way we expect it to
work, the caller of begin sampling will sleep anyhow while waiting
for the result.

To avoid breaking MBean compatibility, we piggyback the duration on
top of the sampler string.

If no duration is given, a default duration is used; this too is just
a precaution, as we will modify the nodetool implementation to
pass that information.

There is a known issue with cardinality that will need to be addressed.
Also, we return a value in the raw column to match what the Cassandra JMX
returns, but it is a duplicate of the partition key.

See scylladb/scylla#2811

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <20190128143505.5241-1-amnon@scylladb.com>
2019-02-03 12:40:04 +02:00
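A hedged sketch of the bridging described above; names, shapes, and the REST call are illustrative, not the project's code:

    import java.util.concurrent.CompletableFuture;

    public class SamplingBridge {
        private CompletableFuture<String> pending;

        // Both the read and the write begin calls funnel here; the second
        // call reuses the future started by the first.
        public synchronized void beginSampling(int durationMillis) {
            if (pending == null) {
                pending = CompletableFuture.supplyAsync(() -> callRestApi(durationMillis));
            }
        }

        // finish waits for the shared future; since the result stays stored
        // in the completed future, the second finish call is served from it.
        public synchronized String finishSampling() {
            return pending.join();
        }

        private static String callRestApi(int durationMillis) {
            return "{}"; // placeholder for the single Scylla REST API call
        }
    }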
Takuya ASADA
d4493295ff dist/debian: skip running dh_strip_nondeterminism
On some Fedora environments the dh build tries to run
dh_strip_nondeterminism, and fails since Fedora does not provide such a
command.

To prevent the build error we need to skip it.

Fixes #62
See 5bf9a03d65

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20190125223351.20381-1-syuu@scylladb.com>
2019-01-28 09:12:50 +02:00
Calle Wilund
9eec9eabf6 scylla-jmx: Make scylla-jmx compatible with jdk9+
Adds explicit maven dependencies for libraries
removed from the JDK.
Removes reflection calls forbidden in jdk9+.

Message-Id: <20181120142550.22852-1-calle@scylladb.com>
2018-11-21 13:00:24 +02:00
Takuya ASADA
854e6072a2 dist/redhat: prevent build error on older Fedora/CentOS
The current scylla.spec fails to build on Fedora 27, since python2-pystache
is the new package name introduced with the rename in Fedora 28.
But Fedora 28's python2-pystache has the tag "Provides: pystache",
so we can depend on the old package name; this way we can build scylla.spec
on both Fedora 27 and 28.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20181028175757.32224-1-syuu@scylladb.com>
2018-10-29 11:37:25 +02:00
Takuya ASADA
21e22d4e1a dist/redhat: minor fixes for relocatable .rpm
- we don't use mock anymore, so drop the mock directory
- the build_rpm.sh usage text needs updating
- build_rpm.sh should install rpmbuild

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20181024222136.3332-2-syuu@scylladb.com>
2018-10-25 10:29:30 +03:00
Takuya ASADA
9ed8a01519 dist/debian: minor fixes for relocatable .deb
- we don't use pbuilder anymore, so drop pbuilderrc
- the is_debian/is_ubuntu functions in build_deb.sh are now unused; drop them

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20181024222136.3332-1-syuu@scylladb.com>
2018-10-25 10:29:27 +03:00
Avi Kivity
df2dee2402 Merge "relocatable package support for jmx" from Takuya
"
This patchset adds relocatable package support for scylla-jmx, and also
supports generating .rpm/.deb from the relocatable package.

 - Scripts are based on relocatable .rpm/.deb support patchset for main repo
   (not merged https://github.com/syuu1228/scylla/tree/reloc_rpmdeb_v4)
 - Single .rpm package provided for CentOS7/Fedora(unofficial)
 - Single .deb package provided for Ubuntu 14/16/18, Debian 8/9
"

* 'reloc_v1' of https://github.com/syuu1228/scylla-jmx:
  dist/debian: use relocatable package to produce .deb
  dist/redhat: use relocatable package to produce .rpm
  reloc: add support relocatable package
2018-10-24 11:51:34 +03:00
Takuya ASADA
e9bfbedfed dist/debian: use relocatable package to produce .deb 2018-10-24 11:45:11 +09:00
Takuya ASADA
5edfedf642 dist/redhat: use relocatable package to produce .rpm 2018-10-24 02:04:49 +00:00
Takuya ASADA
92847e3381 reloc: add support relocatable package
To align the build system with the scylla main repo, add relocatable
package support.

For scylla-jmx, we don't provide libraries and a linker since it's a Java
program; the package just contains the .jar file and the dist/ directory.
2018-10-24 02:02:25 +00:00
Calle Wilund
ca3fa8de20 scylla-jmx: Fix tablemetricsobjectname breakage
Fixes #57

The usage of TableMetricsObjectName-yada-yada relies on translating the
"fake" objectname to a canonical one on remote
publication/serialization. However, the implementation of
ObjectName.getInstance has changed in JDK (micro) updates so it no
longer applies overridable methods -> wrong name published.

Fix by doing explicit ObjectName instantiation.
Message-Id: <20181023132005.23099-1-calle@scylladb.com>
2018-10-23 16:30:29 +03:00
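The fix plausibly boils down to the following, with the surrounding publication context elided:

    import javax.management.MalformedObjectNameException;
    import javax.management.ObjectName;

    static ObjectName toCanonical(ObjectName name) throws MalformedObjectNameException {
        // ObjectName.getInstance(name) stopped applying overridable methods in
        // some JDK micro updates, so build the plain name explicitly instead.
        return new ObjectName(name.getCanonicalName());
    }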
Takuya ASADA
74fa1a40ca dist/debian: install GPG key for cross-building
We found that on some Debian environments the Ubuntu .deb build fails with
a gpg error due to a missing Ubuntu GPG key, so we need to install it
before starting pbuilder.
Likewise, on Ubuntu we need to install the Debian GPG key.

See scylladb/scylla#3823

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20181008110724.18335-1-syuu@scylladb.com>
2018-10-08 15:34:18 +03:00
Takuya ASADA
cd1c79f90f dist/debian: support package renaming on build script
To automatically rename packages for enterprise releases, the package name
prefix is added as a variable in build_deb.sh.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20180828010445.11920-2-syuu@scylladb.com>
2018-08-28 09:26:02 +03:00
Takuya ASADA
a27d9601f5 dist/redhat: support package renaming on build script
To automatically rename packages for enterprise releases, the package name
prefix is added as a variable in build_rpm.sh.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20180822072105.9420-2-syuu@scylladb.com>
2018-08-22 11:05:00 +03:00
Calle Wilund
c6aee9f63e scylla-jmx: Add "PendingTasksByTableName" gauge to CompactionMetrics
Required by origin 3.11 nodetool.

Message-Id: <20180801084545.23239-1-calle@scylladb.com>
2018-08-01 14:25:06 +03:00
Calle Wilund
9c3ac3e547 scylla-jmx: Update JMX interfaces to origin 3.11
Almost 100% null implementations, which is ok for most purposes
currently used by scylla. Some of these new calls (like dropped
mutations etc) should perhaps however be implemented.

Tested with the nodetool dtests. So sparsely.

Needed when/if scylla-tools-java is upgraded to origin 3.11,
otherwise nodetool breaks.

Message-Id: <20180730113741.14952-1-calle@scylladb.com>
2018-07-30 15:47:43 +03:00
Avi Kivity
b4d983b45a dist: redhat: fix up bad file ownership of rpms/srpms
mock outputs files owned by root. This causes attempts
by scripts that want to junk the working directory (typically
continuous integration) to fail on permission errors.

Fixup those permissions after the fact.
Message-Id: <20180719163258.4393-1-avi@scylladb.com>
2018-07-26 08:17:59 +03:00
Takuya ASADA
d6c408445e dist/debian/build_deb.sh: make build_deb.sh more simplified
Use is_debian()/is_ubuntu() to detect the target distribution; also install
pystache by path, since the package name differs between Fedora and
CentOS.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20180703183211.3455-1-syuu@scylladb.com>
2018-07-04 10:12:31 +03:00
Takuya ASADA
2af17c1f53 dist: simplified build script templates
Currently we use *.in files as templates, applying parameters with sed,
one by one.
This patch replaces them with Mustache, a template language with a simple
and easy syntax.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20180606183113.25275-1-syuu@scylladb.com>
2018-06-10 19:37:26 +03:00
Takuya ASADA
b8394f677b dist/debian: run update-ca-certificates to avoid security exception on Ubuntu 18.04
On Ubuntu 18.04 the build fails with java.security.InvalidAlgorithmParameterException
while downloading a .pom file from an HTTPS URL.
It looks like a ca-certificates problem, which can be fixed by running
update-ca-certificates.

Fixes #52

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20180601092954.28319-1-syuu@scylladb.com>
2018-06-03 10:51:11 +03:00
Piotr Jastrzebski
1ad2ba8507 TableRepository: wrap initial repository
Before, we were discarding the initial repository when
overriding it with TableRepository. This was a mistake that
caused dtests to fail. The proper solution is to keep the initial
repository inside TableRepository. That way, whatever was registered
at the time of JmxMBeanServer creation is still handled properly.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <22181859012fd20ddf37e049a145bc94a3a91a33.1527844328.git.piotr@scylladb.com>
2018-06-02 20:42:00 +03:00
Avi Kivity
71f857b1de Merge "Reduce memory usage and speed up start time" from Piotr
"
Use more memory efficient MBeans repository and remove quadratic behaviour on startup.

This reduces memory usage for 2000 tables from 127M to 82M and reduces start time
from 270 seconds to 2 seconds.

Changes since last version:
1. Fix registered map to handle multiple JMX servers and to properly deregister mbeans
2. Clean up TableRepository code.
"

* 'speedup_2' of https://github.com/haaawk/scylla-jmx:
  Use more efficient MBeans repository
  Remove unnecessary quadratic algorithm from MetricsMBean.register
2018-05-21 11:13:15 +03:00
Avi Kivity
e27312df10 Merge "Reduce memory usage" from Piotr
"
Introduce a more memory-efficient version of ObjectName and use it in the JMX server.
The original version stores the same data multiple times in different forms.
A big part of the data is shared by multiple instances of ObjectName, yet the
original class keeps a separate copy for each instance.
The new version keeps only one copy that's shared by all instances.
"

* 'speedup_1' of https://github.com/haaawk/scylla-jmx:
  Introduce and use TableMetricObjectName
  Ensure regular ObjectName is returned to remote callers
  Use JmxMBeanServer instead of MBeanServer
2018-05-21 11:10:22 +03:00
Piotr Jastrzebski
862aea4a33 Use more efficient MBeans repository
Default implementation stores MBeans in the following map:

<domain name> -> (<properties as a single string> -> NamedObject)

This is problematic because NamedObject contains ObjectName that
has both domain and properties inside itself.

This means we're storing the same data twice.

For domain "" we want to store MBeans in a more compact way using map:

ObjectName -> DynamicMBean

which is equivalent to NamedObject.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2018-05-16 16:53:09 +02:00
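The compact layout the commit describes, sketched with illustrative field placement:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.management.DynamicMBean;
    import javax.management.ObjectName;

    public class TableRepository {
        // The ObjectName is the key itself, so domain and properties are
        // stored once instead of being duplicated in both the key strings
        // and the NamedObject values.
        private final Map<ObjectName, DynamicMBean> beans = new ConcurrentHashMap<>();
    }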
Piotr Jastrzebski
5cba016962 Remove unnecessary quadratic algorithm from MetricsMBean.register
Before this change it was taking JMX Server 270 seconds to start
when Scylla had 2000 tables. After the change it takes only 2 seconds.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2018-05-16 16:21:21 +02:00
Piotr Jastrzebski
455f5717ea Introduce and use TableMetricObjectName
This is a new extension of ObjectName that uses less memory.

TableMetricNameFactory and AllTableMetricNameFactory can
create it instead of regular ObjectName to save memory.

It is possible to save memory because each name created by
TableMetricNameFactory (or AllTableMetricNameFactory) shares
most of its data with other names created by the same factory
and there's no need to create multiple copies.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2018-05-12 19:08:37 +02:00
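
A sketch of the sharing idea (simplified; the domain and key names are
illustrative, and the real TableMetricObjectName extends ObjectName and
overrides its accessors so the shared strings are never copied per instance):

  import javax.management.MalformedObjectNameException;
  import javax.management.ObjectName;

  final class TableMetricNameFactorySketch {
      private final String sharedPrefix; // built once, shared by every name

      TableMetricNameFactorySketch(String keyspace, String metricName) {
          this.sharedPrefix = "org.apache.cassandra.metrics:type=Table,keyspace="
                  + keyspace + ",name=" + metricName;
      }

      ObjectName createTableName(String table) throws MalformedObjectNameException {
          // A plain ObjectName still copies the whole string; the real class
          // instead keeps a reference to the shared parts.
          return new ObjectName(sharedPrefix + ",table=" + table);
      }
  }
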
Piotr Jastrzebski
48408dc6a3 Ensure regular ObjectName is returned to remote callers
The next patch will introduce a new ObjectName implementation that
will use less memory. This new object won't be serializable.
This means it won't be possible to transport it to a remote
caller. We want to keep this new object local to JMX server as well.

This patch makes sure that every ObjectName returned
from APIBeanServer is transformed into a regular ObjectName.

It also makes sure that every ObjectInstance returned from
APIBeanServer has its ObjectName swapped with a regular ObjectName.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2018-05-12 18:54:38 +02:00
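
The normalization can be as simple as the following sketch;
ObjectName.getInstance(ObjectName) returns a plain, serializable ObjectName
when given a subclass instance:

  import javax.management.ObjectInstance;
  import javax.management.ObjectName;

  final class Names {
      static ObjectName toRegular(ObjectName name) {
          if (name == null || name.getClass() == ObjectName.class) {
              return name; // already safe to serialize
          }
          return ObjectName.getInstance(name); // copy into a plain ObjectName
      }

      static ObjectInstance toRegular(ObjectInstance instance) {
          return new ObjectInstance(toRegular(instance.getObjectName()),
                  instance.getClassName());
      }
  }
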
Piotr Jastrzebski
2c48bab91a Use JmxMBeanServer instead of MBeanServer
JmxMBeanServer is a concrete implementation of MBeanServer.
We want to use it directly because we need to bypass calls to
JmxMBeanServer.registerMBean and JmxMBeanServer.unregisterMBean.
They take ObjectName as parameter, copy it using
ObjectName.getInstance(ObjectName) and pass it to registerMBean
and unregisterMBean of JmxMBeanServer.getMBeanServerInterceptor().
We want to avoid this copy and pass the ObjectName argument directly
to JmxMBeanServer.getMBeanServerInterceptor().

To do that this patch:
1. changes all MBeanServer variables to JmxMBeanServer
2. creates JmxMBeanServer in APIBuilder making sure accessing
   interceptors is allowed
3. makes sure that JmxMBeanServer.getMBeanServerInterceptor().registerMBean
   is called wherever JmxMBeanServer.registerMBean was called
4. makes sure that JmxMBeanServer.getMBeanServerInterceptor().unregisterMBean
   is called whenever JmxMBeanServer.unregisterMBean was called

The next patch will use a different ObjectName implementation that uses
less memory; this patch is crucial because without it every ObjectName
is transformed with ObjectName.getInstance, which turns the object into
a regular ObjectName.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2018-05-12 18:35:18 +02:00
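
A sketch of the bypass (this leans on a non-public JDK class, as the patch
itself does; the server must have been created with interceptor access
enabled, which is what APIBuilder arranges):

  import com.sun.jmx.mbeanserver.JmxMBeanServer;
  import javax.management.JMException;
  import javax.management.ObjectName;

  final class DirectRegistration {
      // Goes through the interceptor, skipping the ObjectName copy that
      // JmxMBeanServer.registerMBean itself would make.
      static void register(JmxMBeanServer server, Object mbean, ObjectName name)
              throws JMException {
          server.getMBeanServerInterceptor().registerMBean(mbean, name);
      }

      static void unregister(JmxMBeanServer server, ObjectName name)
              throws JMException {
          server.getMBeanServerInterceptor().unregisterMBean(name);
      }
  }
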
Avi Kivity
dd8d5c87ed dist: recognize epel-7-x86_64 mock target and enable networking
The default epel-7-x86_64 config wisely disables networking; however, our
maven build accesses the maven repository during the build process.

Recognize the target name and redirect it to our networking-enabled
configuration.

Message-Id: <20180408122138.16672-1-avi@scylladb.com>
2018-04-09 11:18:34 +03:00
Duarte Nunes
55abaa1bc8 StorageService: Allow querying the view build status
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20180327002342.11494-1-duarte@scylladb.com>
2018-04-03 14:43:27 +03:00
Amnon Heiman
4e4589ba6f FailureDetector: check that states is not null before use
When a node is part of a cluster but is down (as when a cluster is taken
down and brought up again but not all nodes are up), there is no
application_state information for that node.

This patch checks that the information exists before using it, to prevent a
null pointer exception.

After this patch, a call to nodetool gossipinfo would return the
available information without failing.

See scylladb/scylla#3330

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <20180329115345.29357-1-amnon@scylladb.com>
2018-03-29 15:18:48 +03:00
Avi Kivity
8d499401f0 Revert "dist: Rename package to scylla-enterprise-jmx"
This reverts commit df65f40bcd. Committed on
wrong branch.
2018-03-27 22:17:43 +03:00
Pekka Enberg
df65f40bcd dist: Rename package to scylla-enterprise-jmx
This patch renames all the packages generated from this repository from
"scylla" prefix to "scylla-enterprise" prefix. The changes are ported
from the 2017.1 enterprise repository.

Message-Id: <20180326132049.32320-1-penberg@scylladb.com>
2018-03-26 16:26:07 +03:00
Takuya ASADA
3c3d7ba8a7 dist/debian: support Ubuntu 18.04
We support Ubuntu 18.04 on scylla-server; we need to support it on jmx/tools too.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1521189263-17592-1-git-send-email-syuu@scylladb.com>
2018-03-19 11:28:12 +02:00
Amnon Heiman
f6f03172f1 scylla-jmx: Uses bash explicitly as the interpreter
Ubuntu 14's default shell does not support the string substitution and returns
an error when using the -Dcom.sun.management.jmxremote.host flag with
scylla-jmx.

For example:

$ scylla-jmx -Dcom.sun.management.jmxremote.host=10.0.0.1

/usr/lib/scylla/jmx/scylla-jmx: 101: /usr/lib/scylla/jmx/scylla-jmx: Bad substitution

This patch changes the shell interpreter to bash instead of the default shell.

Fixes #49

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <20180129131602.7380-1-amnon@scylladb.com>
2018-01-29 15:23:21 +02:00
Takuya ASADA
6ae1559bcd dist/redhat: avoid hardcoding GPG key file path on scylla-jmx-epel-7-x86_64.cfg
Since we want to support cross building, we shouldn't hardcode the GPG key
file path, even though these files are provided on recent versions of mock.

This fixes a build error on some older build environments such as CentOS 7.2.

See scylladb/scylla#3002

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1512668704-6775-1-git-send-email-syuu@scylladb.com>
2017-12-08 17:37:21 +02:00
Takuya ASADA
0f38eb221e dist/debian: requires root privilege to wipe build directory
Since we run pbuilder as root, the .deb packages are owned by the root user,
so we need to run 'rm -rf build' as root as well.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1512373941-7018-1-git-send-email-syuu@scylladb.com>
2017-12-04 10:04:46 +02:00
Amos Kong
f4ef4a5a3e scripts: process empty string in arguments
Examples:
 # scripts/scylla-jmx -l /usr/lib/scylla/jmx ""
 Result: script stuck

 # scripts/scylla-jmx "" -l /usr/lib/scylla/jmx
   Unknown parameter: /usr/lib/scylla/jmx
 Result: wrongly shifted arguments

The above two problems are caused by a redundant argument parse ("$PARAM_PORT"):
the variable isn't set, so it's a covert empty string. The other nine valid
options are all parsed with the right case, and API_ADDR is also set by
another case path, so it's safe to remove $PARAM_PORT.

This patch removes the redundant argument and skips empty strings in the
arguments.

Signed-off-by: Amos Kong <amos@scylladb.com>
Message-Id: <74c3f75d58bab1a8348f2c87f58825eed4c5b705.1510501133.git.amos@scylladb.com>
2017-11-12 17:43:43 +02:00
Amos Kong
01ba660fe7 sysconfig: correct the assignment in env file of systemd
Commit e80a5e3cb3 introduced an issue:
it wrongly passes an empty string to the scylla-jmx cmdline, which causes the
scylla-jmx script to get stuck. The cmdline that really executes is:

  # scylla-jmx -l /usr/lib/scylla/jmx ""

The wrapping quotation marks of an env variable can't be parsed away if we
use ${SCYLLA_PARAMS} in the ExecStart cmdline, but $SCYLLA_PARAMS works.

Another problem is that a variable can't be re-used inside the env file,
reference: [1]. Let's split the parameters into multiple env variables,
and combine them in the ExecStart cmdline.

[1] https://unix.stackexchange.com/questions/358998/systemd-environmentfile-re-using-variables-how

Fixes scylladb/scylla#2935

Signed-off-by: Amos Kong <amos@scylladb.com>
Message-Id: <c983103c08a3f901037fd282a14df5bb7f85dddd.1510494507.git.amos@scylladb.com>
2017-11-12 15:57:43 +02:00
Fred de Villamil
e80a5e3cb3 Make more configuration options available for sysconfig
Using systemd to run scylla-jmx won't load params from
/etc/defaults/scylla-jmx because it expects params to be sent using the
command line.

This patch enhances the sysconfig file by adding the available options
(commented) and passes the right options to systemd `ExecStart` when
defined. That way scylla-jmx is run with the params defined in
/etc/defaults/scylla-jmx.

Maybe not the most elegant way to do it, but 1/ it works 2/ I didn't
find a better solution to fix that problem.

Message-Id: <20170927085236.6704-1-fdevillamil@synthesio.com>
2017-10-09 15:52:35 +03:00
Pekka Enberg
3233e157cf Merge "Fix auto_compaction request strings" from Glauber
"Column family is not being passed, so the requests fail to reach the
 correct endpoint."

* 'compact' of git://github.com/glommer/scylla-jmx:
  fix auto_compaction request strings
2017-09-04 08:55:10 +03:00
Glauber Costa
2b1ed89ec3 fix auto_compaction request strings
Column family is not being passed, so the requests fail to reach the
correct endpoint.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-08-21 22:55:31 -04:00
Takuya ASADA
631605eff1 dist/debian: append postfix '~DISTRIBUTION' to scylla package version
We are moving to aptly to release .deb packages, which requires changes to
the debian repository structure.
After the change, we will share the 'pool' directory between distributions.
However, our .deb package name for a specific release is exactly the same
across distributions, so we have a file name conflict.
To avoid the problem, we need to append the distribution name to the package
version.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1502351777-11559-1-git-send-email-syuu@scylladb.com>
2017-08-10 11:39:19 +03:00
Avi Kivity
eb4dc47ba9 Merge "make sure mock config is always sane" from Glauber
"This patchset makes sure that we provide a mock configuration file and won't rely
on defaults that might not work for us"

* 'ourmock' of http://github.com/glommer/scylla-jmx:
  default to our in-tree mock config when building on CentOS
  provide mock configuration file
2017-07-31 17:45:50 +03:00
Glauber Costa
566a4f2639 default to our in-tree mock config when building on CentOS
scylla.git does a similar thing, albeit in a more complicated fashion,
testing for whether or not a rebuild is asked for, etc.

For us, the build process is a lot simpler, so it is better to just
point to the file when we detect that we are on CentOS and no explicit
target is given.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-07-27 16:02:16 +00:00
Glauber Costa
7416bc79cf provide mock configuration file
This adds the ability to build CentOS packages from Fedora - scylla
already has it. Aside from that, this guarantees that the build will
work in any system. In particular, by default rpmbuild is not given
network access in some of the systems I tested, causing the build to
fail as it tries to contact the maven repo.

This file is just a copy of the one provided by CentOS's mock, with
the option to allow rpmbuild network access added.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-07-27 16:02:16 +00:00
Takuya ASADA
81b0d12dfe dist/debian: use correct variable to detect distribution version
Since we switched to pbuilder and added cross building support, we are no
longer able to use $DISTRIBUTION and $RELEASE in the script.
Use $TARGET instead.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1500540534-27951-1-git-send-email-syuu@scylladb.com>
2017-07-20 12:00:52 +03:00
Takuya ASADA
8c32e4f8a4 dist/debian: add more versions of Debian/Ubuntu releases
This will add Debian 9(stretch) support, and also non-official support of
Debian testing/unstable and Ubuntu non-LTS versions.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1500501275-8419-1-git-send-email-syuu@scylladb.com>
2017-07-20 10:32:44 +03:00
Takuya ASADA
9147faed36 dist/debian: append 'scylla-jmx' prefix to pbuilder tgz and directories
We want to allocate a pbuilder chroot image and build directories for each
scylla repository, to make sure the build environment is always clean.

So append the 'scylla-jmx' prefix to them.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1497902645-29487-2-git-send-email-syuu@scylladb.com>
2017-07-20 10:32:24 +03:00
Takuya ASADA
2b33ce558d dist/debian: no need to rm -rf non-existent file
This line was copied from scylla-tools-java, and conf/hotspot_compiler does
not exist in scylla-jmx, so drop the entry.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1497902645-29487-1-git-send-email-syuu@scylladb.com>
2017-07-20 10:32:23 +03:00
Takuya ASADA
956eac3972 dist/debian: add 'sudo' for run yum on redhat variants
Since this script is allowed to run in non-root mode, sudo is required for yum.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1497891211-26574-2-git-send-email-syuu@scylladb.com>
2017-06-19 20:01:06 +03:00
Takuya ASADA
2a74a394a4 dist/debian: drop unused code for old versioning scheme
Since the development version is specified as "666.development", this code won't be called anymore, so drop it.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1497891211-26574-1-git-send-email-syuu@scylladb.com>
2017-06-19 20:01:06 +03:00
Takuya ASADA
2ca07ac215 dist/debian: drop meaningless lines on build script
These lines come from scylla-tools-java's build_deb.sh but do nothing.
So drop them.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20170616035130.16580-1-syuu@scylladb.com>
2017-06-19 12:31:52 +03:00
Takuya ASADA
8d404d8e9f dist/debian: Use pbuilder for Ubuntu/Debian debs
Enable pbuilder for Ubuntu/Debian to prevent build-environment-dependent issues.
Also support cross building by pbuilder.
(cross-building from Fedora 25 and Ubuntu 16.04 are tested)

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20170615100230.18239-1-syuu@scylladb.com>
2017-06-15 13:51:59 +03:00
Calle Wilund
e954db8444 TableMetrics: bugfix: local metrics registry is a proxy. It must proxy
Refs scylladb/scylla#2340 (trunk/1.7)

Must proxy "register" call, otherwise unregistration of mbeans
will instead try to double-register. Code for this somehow fell away.

Message-Id: <1494417610-9720-1-git-send-email-calle@scylladb.com>
2017-05-28 14:02:32 +03:00
Takuya ASADA
fa72ea29d3 dist/redhat: Use mock for CentOS/RHEL rpms
Enable mock for CentOS/RHEL, also support cross building by mock.

See scylladb/scylla#630

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20170522081759.8574-1-syuu@scylladb.com>
2017-05-23 12:09:03 +03:00
Tomasz Grabiec
8176d5729f Bring back old forceKeyspaceCompaction() overload
nodetool from scylla-tools-java still uses it. Currently `nodetool compact` fails like this:

error: forceKeyspaceCompaction(java.lang.String, [Ljava.lang.String;)
-- StackTrace --
java.lang.NoSuchMethodException: forceKeyspaceCompaction(java.lang.String, [Ljava.lang.String;)

Fixes scylladb/scylla#2261

Probably broken by 3e146845b4

Message-Id: <1491470483-6147-1-git-send-email-tgrabiec@scylladb.com>
2017-04-06 12:59:52 +03:00
Amnon Heiman
0c541d73e7 MetricsRegistry: Solving empty histograms in nodetool
This patch fixes two issues with the histogram implementation:
* Need to call update before trying to read values from the histogram.
* The histogram values returned from the API are in microseconds, not
  nanoseconds.

See Scylladb/scylla#2155

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <20170315130725.22261-1-amnon@scylladb.com>
2017-03-20 12:33:45 +02:00
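
A sketch of the unit fix, assuming the API reports microseconds and callers
expect nanoseconds, as the yammer timers did:

  import java.util.concurrent.TimeUnit;

  final class Units {
      // Convert an API latency value (microseconds) to nanoseconds
      // before handing it to histogram readers.
      static long toNanos(long micros) {
          return TimeUnit.MICROSECONDS.toNanos(micros);
      }
  }
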
Pekka Enberg
34c10fc91c README update
Fix the last remaining mention of "Urchin" and clean up the instructions on how to run it.
2017-03-03 12:35:00 +02:00
Amnon Heiman
9c11768b9d APIClient: Snapshot disk size should be long
When parsing the snapshot disk sizes, it should be long and not int.

See scylladb/scylla#2104

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1487760019-1354-1-git-send-email-amnon@scylladb.com>
2017-02-22 15:13:13 +02:00
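
The fix boils down to reading the JSON number as a long, since snapshot sizes
can exceed Integer.MAX_VALUE bytes (a sketch; the field name is illustrative):

  import javax.json.JsonObject;

  final class SnapshotSizes {
      static long trueSize(JsonObject snapshot) {
          // getInt() would overflow for snapshots larger than ~2 GB
          return snapshot.getJsonNumber("total").longValue();
      }
  }
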
Takuya ASADA
962a42dbd5 dist/debian: install ca-certificates-java/jessie-backports for Debian8
openjdk-8-jre-headless/jessie-backports requires
ca-certificates-java/jessie-backports, but apt doesn't override
ca-certificates-java/jessie by default.

So install it before installing openjdk-8.

Fixes #40

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1487269613-21196-1-git-send-email-syuu@scylladb.com>
2017-02-16 23:22:34 +02:00
Takuya ASADA
db711b4b53 dist/debian: install python for git-archive-all
Since git-archive-all is a python script, we need to install python if it's
unavailable.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1487054303-11556-1-git-send-email-syuu@scylladb.com>
2017-02-14 09:08:52 +02:00
Takuya ASADA
7f63fabeee dist: rename dist/ubuntu to dist/debian
Now we support both Ubuntu and Debian under dist/ubuntu, and since Ubuntu is
a Debian variant, dist/debian is a better name.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1486547332-3247-1-git-send-email-syuu@scylladb.com>
2017-02-08 17:31:24 +02:00
Calle Wilund
2ae7ccf945 scylla-jmx: Fix RMI bind address when using local only connector
Message-Id: <1486382699-25556-1-git-send-email-calle@scylladb.com>
2017-02-06 14:15:43 +02:00
Calle Wilund
ac8743269c scylla-jmx: Allow ssl, auth etc options for JMX connector
Actually removes a bunch of code managing the JMX connector,
since as of jdk8u102 the standard jmx connector honours a
property setting the bind address -> access can be restricted.
Note that the RMI connector will now (as is jdk normal)
_bind_ to 0.0.0.0, but it will not answer non-local requests
if "remote" is not enabled. This is the default jdk behaviour.

In any case, we rely on setting the appropriate properties
instead, and also allow pass-through of -D flags to java,
which in turn means those who wish can turn on both auth
and ssl, set key/trust stores etc etc.

Message-Id: <1485357178-20714-1-git-send-email-calle@scylladb.com>
2017-02-02 16:09:07 +02:00
Pekka Enberg
557b346c07 dist/redhat: Require Java 8
The build requires Java 8 since commit
9c2d6cec51 ("Remove yammer/codehale
dependencies and augumentations").

Message-Id: <1485437777-23626-1-git-send-email-penberg@scylladb.com>
2017-01-26 16:52:04 +02:00
Takuya ASADA
79b3f989fc dist/ubuntu: generate Ubuntu/Debian revision correctly
The Ubuntu Packaging Guide says that if there's no upstream package (meaning
it's not ported from Debian), the revision should be "0ubuntu1", not the
"ubuntu1" we are currently using.

On Debian, the Debian Policy Manual says it's conventional to restart the revision from 1 when the upstream version increases, so we should specify it as "1".

To do it in a single script, we will generate the revision at build time.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1483499161-22557-1-git-send-email-syuu@scylladb.com>
2017-01-09 09:46:21 +02:00
Takuya ASADA
ee0e460b26 dist/ubuntu: Fix package build error on Ubuntu 14.04/Debian 8
Since we moved to Java 8, we need to provide an additional repository for
distributions which don't have Java 8 in their default repository.

Fixes #37

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1483377878-17987-1-git-send-email-syuu@scylladb.com>
2017-01-02 19:37:53 +02:00
Pekka Enberg
cc8f5e275b Merge "getTokenToEndpointMap should be sorted by the API" from Amnon
"This series change getTokenToEndpointMap implementation, so that the result
 will be sorted by the API.

 The API returns the tokens sorted according to their type. The APIClient helper
 method was change to keep the original order and getTokenToEndpointMap was
 change so it will return the map as is, without re-sorting it."
2016-12-30 14:14:48 +02:00
Takuya ASADA
7f936862d2 dist/ubuntu: check existence of build tools
It normally won't be a problem, because scylla-jmx is usually built after building scylla.
However, some tools required by build_deb.sh aren't installed on a minimal installation, so we should check for and install them.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1483004036-2824-1-git-send-email-syuu@scylladb.com>
2016-12-30 10:25:32 +02:00
Takuya ASADA
db635738ad scripts: chmod a+rx git-archive-all to prevent a package build failure
build_deb.sh fails to run because git-archive-all doesn't have the execution
bit; to prevent the build error we need to add it.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1483002191-30263-1-git-send-email-syuu@scylladb.com>
2016-12-30 10:18:52 +02:00
Amnon Heiman
0c7afef8f4 Keep tokensEndPointMap sorted by the API order
This patch changes the sorting of tokensEndpointMap so it will use the
order returned by the API.

See Scylladb/scylla#1945

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-12-28 12:48:03 +02:00
Amnon Heiman
b9328960cc APIClient: Keep the map order return from the API
The getMapStrValue method returns a map from the API. This changes the
implementation to use a LinkedHashMap so the map will be ordered according
to the API order.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-12-28 12:45:33 +02:00
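
The difference is only the map implementation: LinkedHashMap iterates in
insertion order, so whatever order the API produced is what callers see
(a sketch; the JSON field names are assumptions):

  import java.util.LinkedHashMap;
  import java.util.Map;
  import javax.json.JsonArray;
  import javax.json.JsonObject;

  final class MapValues {
      static Map<String, String> getMapStrValue(JsonArray entries) {
          Map<String, String> result = new LinkedHashMap<>(); // keeps API order
          for (int i = 0; i < entries.size(); i++) {
              JsonObject entry = entries.getJsonObject(i);
              result.put(entry.getString("key"), entry.getString("value"));
          }
          return result;
      }
  }
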
Calle Wilund
6f916b9d8e scylla-jmx: Add dummy "compactionId" to compaction info
To keep nodetool happy. Should maybe add actual Ids to compactions.
Message-Id: <1482335118-9595-2-git-send-email-calle@scylladb.com>
2016-12-22 14:17:38 +02:00
Calle Wilund
85c3293ef1 scylla-jmx: Missing getter in MBean interface.
It got lost somewhere.
Message-Id: <1482335118-9595-1-git-send-email-calle@scylladb.com>
2016-12-22 14:17:13 +02:00
Pekka Enberg
3838921ca3 Merge branch 'cassandra3' into next 2016-12-16 12:35:06 +02:00
Pekka Enberg
de20954fd6 Revert "APIMbeanServer: Show column family in jconsole"
This reverts commit b9986396bb.
2016-12-16 12:35:04 +02:00
Amnon Heiman
b9986396bb APIMbeanServer: Show column family in jconsole
nodetool looks up column families by their name. When using
JConsole, it searches for all MBeans.

This patch adds a check for column family and stream with a null query
(wildcard).

This way the column families will appear in JConsole.

Fixes #35

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1479227289-15031-1-git-send-email-amnon@scylladb.com>
2016-11-16 09:39:50 +02:00
Pekka Enberg
1931b4b24c Fix SCYLLA-VERSION-GEN permissions
Commit 434ce947b0 ("Code formatting + source
cleanup (eclipse)") changed permissions of SCYLLA-VERSION-GEN for no reason.
Fix them up.
2016-11-09 11:07:29 +02:00
Pekka Enberg
12ba9915a2 Merge "Cassandra 3.x compatibility" from Calle
"This is a pretty hefty change set. Base goal is to update JMX impl to
 Cassandra 3.x compatibility. While the base MBeans are easy enough, the
 metrics parts of the stew are trickier, since Cassandra 3 changed a
 number of things around.  To that end, and to clean up this code
 somewhat, this basically changes all about metrics management in
 scylla-jmx. Some background:

 Cassandra uses Codahale metrics for actual measurement. Now,
 obviously, in this proxy project we need not measure anything, since
 actual happenings are in the Scylla process. Previous version of the
 code however still utilized a (not-so-pretty reflection-hacked-into)
 version of codehaul metrics because they also provided the system for
 exposing the data through JMX. I.e. we added a bunch of stuff we
 really did not need, to avoid dealing with some of that we did.

 In Cassandra 3, v3 of Codahale is used, whose JMX integration the
 Cassandra devs apparently did not like. Thus they decided to deal with
 JMX exposure themselves, which makes sense, because they
 want to control the syntax/structure. But given this, we no longer
 have any reason to utilize Codahale, since it does nothing for us.

 These change sets instead adapt the cassandra JMX bindings somewhat,
 and add our own structure of metric point binding, using
 java.util.function interfaces to provide flexible and late-ish binding
 to actual data query objects. End result is a much slimmer set of
 objects/functions to bind metrics (which of course are just queries to
 Scylla API).

 Also, MX4J has been dropped, since it is at best broken. Instead, we
 use simple wrapping of the system management server object to deal with
 dynamically populating transient objects like column families.

 Removed most statefulness (beyond binding) in MBean impls; all
 "bookkeeping" of sub-objects and bind status now uses the actual mbean
 server. I.e. removed race conditions + lighter bookkeeping.

 Since this is Java, and everything is tied together in a ball of yarn,
 most of the changes here are not self-contained. I.e. some of these
 will, applied individually, break the build. They are still kept as
 individual patches though, mainly for readability."
2016-11-09 11:03:53 +02:00
Pekka Enberg
b61a5b8439 Revert "Implement deprecated metrics in CommitLog and CompactionManager"
This reverts commit 9b63a35da6 in
preparation for Cassandra 3.x compatibility changes.
2016-11-09 11:03:31 +02:00
Pekka Enberg
c08442b158 Revert "Adding missing method implementation"
This reverts commit 8e5d649048 in
preparation for Cassandra 3.x compatibility changes.
2016-11-09 11:03:07 +02:00
Amnon Heiman
8e5d649048 Adding missing method implementation
This series fills in some missing functionality by using the already
existing API.

The idea is to use existing code in places where it was not used.

Also, in places where a stub value is in place, the methods return a
stub value.
Message-Id: <1478619479-10023-1-git-send-email-amnon@scylladb.com>
2016-11-09 10:29:46 +02:00
Pekka Enberg
9b63a35da6 Implement deprecated metrics in CommitLog and CompactionManager
Implement some deprecated metrics in CommitLog and CompactionManager,
that can easily just be a wrapper to the non-deprecated metrics API.

Message-Id: <1478591291-30344-1-git-send-email-penberg@scylladb.com>
2016-11-08 10:59:41 +02:00
Calle Wilund
ae6a000807 ColumnFamilyStore: Remove compaction parameter API usage
Do manual mangling of in/out data in JMX instead. Saves on
controversy over more or less pointless API additions.
2016-11-01 09:44:17 +00:00
Pekka Enberg
954f40e550 Revert "scylla-jmx.service.in: Depend on scylla-server"
This reverts commit 4672cd360f.

Fixes #34
2016-10-31 14:02:22 +02:00
elcallio
434ce947b0 Code formatting + source cleanup (eclipse) 2016-10-24 11:43:52 +00:00
elcallio
9c2d6cec51 Remove yammer/codahale dependencies and augmentations 2016-10-24 11:43:52 +00:00
elcallio
824638594b Clean up and simplify Main startup 2016-10-24 11:43:52 +00:00
elcallio
1709ff2d02 API accessor
* Make config an instance object
* Add functional interfaces
* http options
* Remove dead code
* Clean up/format
2016-10-24 11:43:52 +00:00
elcallio
f4f3c44dc1 Rework StreamManager 2016-10-24 11:43:52 +00:00
Calle Wilund
4ed049739a Storage service: Fix 3.x style notifications (repair) 2016-10-24 11:43:51 +00:00
elcallio
4ec7d58249 Rework service.* beans 2016-10-24 11:43:51 +00:00
elcallio
fec8b44942 Rework MessagingService 2016-10-24 11:43:51 +00:00
Calle Wilund
3fe9cfc232 EndpointSnitchInfo: Fix getRack/DC host handling
I.e. our localhost might be (and probably is) different from scylla's
"fb::broadcast", and if not, try to get a numerical address asap.
2016-10-24 11:43:51 +00:00
elcallio
21a343d003 Rework EndpointSnitchInfo 2016-10-24 11:43:51 +00:00
elcallio
80762eb60a Rework gms beans 2016-10-24 11:43:51 +00:00
elcallio
e49b4ef322 Rework CompactionManager 2016-10-24 11:43:51 +00:00
elcallio
1470b37193 Rework CommitLog 2016-10-24 11:43:51 +00:00
elcallio
e55863e375 Rework ColumnFamilyStore 2016-10-24 11:43:51 +00:00
elcallio
4b83a9388e Make APIMBeanServer simply wrap actual mbeanserver 2016-10-24 11:43:51 +00:00
elcallio
781821ac9e Make APIMBean name derivation check interface fields as well. 2016-10-24 11:43:51 +00:00
elcallio
cd9deafc51 Rework all org.apache.cassandra.metrics types to new style
I.e. bind only JMX object via registry.
2016-10-24 11:43:51 +00:00
elcallio
319dadb79c Add TableMetrics - c3 version of ColumnFamilyMetrics
Using new, slimmer, metrics binding
2016-10-24 11:43:51 +00:00
elcallio
a44c18c621 Add metric/mbean base types + metrics JMX object factory 2016-10-24 11:43:51 +00:00
Calle Wilund
3e146845b4 StorageService: update to c3 compat
Note: some calls that are not (yet) applicable to scylla are 
unimplemented.
2016-10-24 11:43:51 +00:00
Calle Wilund
b4e483b179 StorageProxy: update to c3 compat 2016-10-24 11:43:51 +00:00
Calle Wilund
de28e68532 GCInspector: Add SuppressWarnings("restriction") 2016-10-24 11:43:51 +00:00
Calle Wilund
3a4adcb676 CacheService: update to c3 compat 2016-10-24 11:43:51 +00:00
Calle Wilund
85e1b07544 MessagingService: update to c3 compat
Note: c3 adds configurable size-threshold counting of messages sent,
dividing into "large"/"small" partitions (+gossiper). Message bulk
queries in the v3 mbean reflect this.

Scylla does not (yet?) have such a threshold divider, so this is 
highly incomplete and just delegates to old apis that "sort-of" fit.
2016-10-24 11:43:51 +00:00
Calle Wilund
f4759f05e7 EndpointSnitchInfo: update to c3 compat 2016-10-24 11:43:51 +00:00
Calle Wilund
68ce437b03 Gossiper: update to c3 compat 2016-10-24 11:43:51 +00:00
Calle Wilund
b7a6554ee9 FailureDetector: update to c3 compat 2016-10-24 11:43:51 +00:00
Calle Wilund
3efcd5103b CompactionManager: update to c3 compat 2016-10-24 11:43:51 +00:00
Calle Wilund
39e4cd8f3f CommitLog: update to c3 compat 2016-10-24 11:43:51 +00:00
Calle Wilund
85b39d7fbe ColumnFamilyStore: update to c3 compat
Note: some calls still unimplemented
2016-10-24 11:43:51 +00:00
Calle Wilund
9a44228c71 APIClient: Add some "post" overloads 2016-10-24 11:43:51 +00:00
Pekka Enberg
45e2c982f7 Merge "Connect with remote cluster" from Yan
"Tools (like jconsole) cannot connect with remote cluster without this fix."
2016-10-18 12:05:44 +03:00
yan cui
82ae18605a connect with remote cluster 2016-10-17 18:38:10 -07:00
Pekka Enberg
c07f5c034f APIClient: Fix error handling for POST if API call fails
Currently, we have a scary-looking dtest failure when attempting to force flush a keyspace:

  Nodetool command '/data/jenkins/workspace/scylla-1.3-dtest/label/monster/mode/release/smp/1/scylla/resources/cassandra/bin/nodetool -h localhost -p 7100 flush' failed; exit status: 2; stderr: Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
  error: javax.ws.rs.ProcessingException (no security manager: RMI class loader disabled)
  -- StackTrace --
  java.lang.ClassNotFoundException: javax.ws.rs.ProcessingException (no security manager: RMI class loader disabled)
          at sun.rmi.server.LoaderHandler.loadClass(LoaderHandler.java:396)
          at sun.rmi.server.LoaderHandler.loadClass(LoaderHandler.java:186)
          at java.rmi.server.RMIClassLoader$2.loadClass(RMIClassLoader.java:637)
          at java.rmi.server.RMIClassLoader.loadClass(RMIClassLoader.java:264)
          at sun.rmi.server.MarshalInputStream.resolveClass(MarshalInputStream.java:219)
          at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1620)
          at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
          at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
          at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
          at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
          at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
          at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
          at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
          at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
          at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:245)
          at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:162)
          at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
          at javax.management.remote.rmi.RMIConnectionImpl_Stub.invoke(Unknown Source)
          at javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.invoke(RMIConnector.java:1020)
          at javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:298)
          at com.sun.proxy.$Proxy7.forceKeyspaceFlush(Unknown Source)
          at org.apache.cassandra.tools.NodeProbe.forceKeyspaceFlush(NodeProbe.java:290)
          at org.apache.cassandra.tools.NodeTool$Flush.execute(NodeTool.java:1227)
          at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:288)
          at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)

The problem is rather innocent: the API call fails and we leak
javax.ws.rs.ProcessingException, which is not available in nodetool's
classpath. In fact, we already fixed the problem for GETs in commit
02e0598 ("APIClient: Fix error handling if connection to API server
fails") so do the same thing for POSTs.
Message-Id: <1471589525-26435-1-git-send-email-penberg@scylladb.com>
2016-08-19 14:44:22 +03:00
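
A sketch of the shape of the fix: catch the JAX-RS exception on our side of
the RMI boundary and rethrow something from the standard library that every
client has (the wrapping type and message here follow the GET fix; the exact
code differs):

  import javax.ws.rs.ProcessingException;
  import javax.ws.rs.client.Entity;
  import javax.ws.rs.client.WebTarget;

  final class SafePost {
      static void post(WebTarget target) {
          try {
              target.request().post(Entity.text(""));
          } catch (ProcessingException e) {
              // nodetool lacks javax.ws.rs on its classpath, so never let
              // ProcessingException travel over JMX/RMI
              throw new IllegalStateException(
                      "Unable to connect to Scylla API server: " + e.getMessage());
          }
      }
  }
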
Tomasz Grabiec
27e3d1745a scylla-jmx: Exit on unknown parameter rather than infinite-loop
Ran into this while trying to use ccm with a not-up-to-date scylla-jmx.

Symptoms:

  $ ccm start
  Error starting node node1

and empty ~/.ccm/scylla-3/node1/logs/system.log.jmx
Message-Id: <1468399926-3565-1-git-send-email-tgrabiec@scylladb.com>
2016-07-13 12:05:42 +03:00
Amnon Heiman
4672cd360f scylla-jmx.service.in: Depend on scylla-server
The correct dependency between the jmx and the scylla-server is:
The scylla-jmx should not run if the scylla-server is not running, it
should shutdown when the scylla-server shuts down.

Starting the scylla-jmx should not start the scylla-server, instead, if
the scylla-server is not running it should fail to start.

This patch changes the setup to do so.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1467184319-3395-1-git-send-email-amnon@scylladb.com>
2016-06-29 11:36:43 +03:00
Pekka Enberg
36ae2fcfd7 dist: Execute Maven in batch mode
The batch mode produces much more readable logs because it's designed
for non-interactive builds and doesn't have the fancy download progress
meters.
Message-Id: <1464770158-32482-1-git-send-email-penberg@scylladb.com>
2016-06-06 16:19:22 +03:00
Pekka Enberg
c6edab5990 dist/redhat: Fix RPM package build
Commit 12daaf5 ("dist/redhat: fix rpm build error") did not fix the
error, at least not on our Jenkins build machines.

Looking at the RPM build logs, we create the build directory:

+ cd /builddir/build/BUILD
+ mkdir build

but then change directory to "scylla-jmx-1.2.rc1":

+ cd /builddir/build/BUILD
+ cd scylla-jmx-1.2.rc1
+ mvn install

and therefore fail the copy:

+ cp dist/common/systemd/scylla-jmx.service.in build/scylla-jmx.service
cp: cannot create regular file 'build/scylla-jmx.service': No such
file or directory

I don't know why Takuya put the "mkdir" in the "prep" section but
something like this should unblock the build.
2016-06-01 10:59:52 +03:00
Amnon Heiman
9e97cb530a APITimer: sum should return a value and values are in ns
When removing the pull based timers in the API, the sum method in
APITimer was left stubbed by mistake.

This patch takes the sum from the histogram, as it should.

Another missed change is the units: in the yammer library the Timer
does unit conversion before returning the values.

This patch takes the unit conversion from the yammer library to be
compatible.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1464173726-7482-1-git-send-email-amnon@scylladb.com>
2016-05-25 14:16:40 +03:00
Takuya ASADA
12daaf546d dist/redhat: fix rpm build error
Since build/ does not exist, 'cp dist/common/systemd/scylla-jmx.service.in build/scylla-jmx.service' will fail.
So create build/ before starting the build stage.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1464164524-20867-1-git-send-email-syuu@scylladb.com>
2016-05-25 13:24:04 +03:00
Pekka Enberg
4eb02743cd Merge "Removing counter pulling from the JMX" from Amnon
"This series uses the API that was added to scylla to remove the counter
 pulling and rely on the statistics collected by the API.

 The series extends the APIMeter, APIHistogram and APITimer to remove
 their pulling part and to fetch the information when needed from the
 API.

 For performence reason those objects will be cached, so that in the
 typical case of of multiple requests of different fields will cause a
 single API call."
2016-05-24 17:20:52 +03:00
Takuya ASADA
225d8ace17 dist/ubuntu: chmod a+rx on build_deb.sh
Add permission to execute build_deb.sh

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1464083473-1701-2-git-send-email-syuu@scylladb.com>
2016-05-24 12:53:15 +03:00
Takuya ASADA
e3c5acfcad dist: Support systemd for Ubuntu 15.10/16.04
Since Ubuntu 15.10/16.04 have moved to systemd, share CentOS/Fedora's systemd unit file with Ubuntu.

Fixes scylladb/scylla#1283

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1464083473-1701-1-git-send-email-syuu@scylladb.com>
2016-05-24 12:53:15 +03:00
Avi Kivity
f6710465ef dist: change scylla-jmx process name from 'java' to 'scylla-jmx'
Helps in top, pgrep and friends.  Unfortunately the only reasonable way
to do it is to create a symlink to /usr/bin/java and run that.
Message-Id: <1463580254-8369-1-git-send-email-avi@scylladb.com>
2016-05-20 13:38:28 +03:00
Amnon Heiman
1e4edeb858 MessagingService: Move to APITimer and drop the pulling
With the change to APITimer there is no longer a need to periodically
pull the API.

The verbs will be registered on object initialization and will be
updated whenever they are used.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 13:04:55 +03:00
Amnon Heiman
7756f4751a DroppedMessageMetrics: Change APISettableMeter to APIMeter
The APIMeter replaces the APISettableMeter as the Meter implementation.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 12:27:36 +03:00
Amnon Heiman
74d062851c CompactionMetrics: Use APITimer
This changes the type from Timer to APITimer.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 12:25:51 +03:00
Amnon Heiman
78bb2ef25a CacheService: Do not get non existing caches metrics
With the change in the meter implementation, retrieving a non-existing
metric would take time.

For this, the CacheService marks caches that are not supported with a
null url, so the metrics will be registered but will return 0 for all
requests (instead of going to the API, which would return 0).

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 12:20:35 +03:00
Amnon Heiman
8a73d6a840 CacheMetrics: Switch to APITimer
CacheMetrics is a general counter that is used for all possible caches.
For caches that we do not support, there is no need to go and fetch
their values.

When moving to the APITimer, each such request will take longer (the
value is fetched when it is used). Now it is possible to
supply null as a url, which causes the metrics to return 0 for
all counters.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 11:32:59 +03:00
Amnon Heiman
ad49d05780 ClientRequestMetrics: Using the APITimer
The APITimer uses a different endpoint so as not to break the existing API.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 11:30:55 +03:00
Amnon Heiman
2c07ca2e09 LatencyMetrics: Move to APITimer
The APITimer uses a different endpoint so as not to break the existing API.

The addNano functionality was removed, as all of the values are updated
from the API.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 11:26:18 +03:00
Amnon Heiman
801af00ddb APIMetrics, APIMetricsRegistry: Return APIMeter and APITimer
Some of the specific functionality is needed from the APIMeter and
APITimer.

The MetricsRegistry now returns the specific objects so they can be used
in their adapted form.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 11:22:43 +03:00
Amnon Heiman
d3b8ef1ae5 APITimer: Non pull based Timer
The yammer Timer object calculates rates based on a timer, which causes
periodic calls to the API.

This replaces the implementation so that a timer gets all its
values from the API.

For object registration the APITimer still inherits from Timer, but
overrides all its functionality.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 11:12:01 +03:00
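
A sketch of the pull-free idea, reduced to its core: nothing runs in the
background, and every read fetches (possibly cached) values from the API
(names and the value layout are illustrative):

  import java.util.function.Supplier;

  final class ApiBackedTimerSketch {
      // e.g. () -> GET the timer's histogram + rates from the REST API,
      // with a short-lived cache inside the supplier
      private final Supplier<double[]> fetch;

      ApiBackedTimerSketch(Supplier<double[]> fetch) {
          this.fetch = fetch;
      }

      double sum()      { return fetch.get()[0]; } // indices illustrative
      double meanRate() { return fetch.get()[1]; }
  }
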
Amnon Heiman
eca6451832 APIHistogram: Support APITimer
With the move to APITimer, on many occasions a histogram will not update
itself; instead it will be updated by the APITimer.

This adjusts the value-update functionality so that a histogram that is
included in an APITimer will not try to update itself.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 11:08:05 +03:00
Amnon Heiman
4d1f8ed7c9 APIMeter: Move out of pull mode
This replaces the APIMeter implementation so it will not pull the API
regularly.

To be compliant with the yammer object registration mechanism, it still
inherits from Meter, but all the functionality is overridden.

Now all the data is taken from the API, including the rate statistics.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 10:03:12 +03:00
Amnon Heiman
adf466519f APIClient: Support for non pull APIMeter
APIMeter will be modified not to use pulling and to retrieve the derived
information from the API.

To support that, the APIClient was changed so it is able to cache
json objects and histograms.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 09:35:42 +03:00
Amnon Heiman
a028e59699 CacheEntry: Support caching of jsonobject
This will allow storing json objects in the cache.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 09:32:50 +03:00
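
A sketch of a time-bound cache entry for a parsed JSON value (field names and
the TTL policy are illustrative):

  import javax.json.JsonObject;

  final class JsonCacheEntrySketch {
      private final JsonObject value;
      private final long loadedAtMillis = System.currentTimeMillis();

      JsonCacheEntrySketch(JsonObject value) {
          this.value = value;
      }

      boolean stillValid(long ttlMillis) {
          return System.currentTimeMillis() - loadedAtMillis < ttlMillis;
      }

      JsonObject get() {
          return value;
      }
  }
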
Amnon Heiman
2ff16fa2a5 scylla-jmx: set the APIBuilder in the command line
When setting the jmx through the command line, the jmx server is created
even before main is called.
For the APIServer to take effect, the builder should be set via system
properties.

This patch also adds an option to run the java process with debug ports
open, so an external debugger will be able to connect to the app.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1462447227-8367-2-git-send-email-amnon@scylladb.com>
2016-05-05 16:40:23 +03:00
Amnon Heiman
853963f833 APIMBeanServer: overload the queryName implementation
The mx4j implementation of queryNames does not handle pattern matching
correctly.

This patch identifies when a name contains a pattern, and does the pattern
matching as it should have been done by the mx4j MBeanServer.

Fixes #28

Message-Id: <1462447227-8367-1-git-send-email-amnon@scylladb.com>
2016-05-05 16:40:20 +03:00
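
Standard JMX already provides the matching primitive; a sketch of doing the
pattern match ourselves when the underlying server doesn't:

  import javax.management.ObjectName;

  final class PatternMatch {
      // A null query is the conventional wildcard; otherwise let the
      // pattern ObjectName test the candidate.
      static boolean matches(ObjectName pattern, ObjectName candidate) {
          return pattern == null || pattern.apply(candidate);
      }
  }
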
Avi Kivity
f8112b5d57 Merge "Remove the pulling mode for MBean registration" from Amnon
"This series replaces that mechanism with an implementation of the MBeanServer
that intercept the relevant MBean call and call the relevant registration
function.

The pulling mechanism was removed from Main."
2016-05-02 15:40:37 +03:00
Amnon Heiman
50c8ff548f Main: remove the pulling registration
With the addition of the APIMBeanServer, there is no longer a need for
the pulling functionality to be performed for MBean registration.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-02 15:03:23 +03:00
Amnon Heiman
3e95c89310 RMIServerSocketFactoryImpl: register the APIBuilder
This registers the APIBuilder as the MBeanServerBuilder, which will cause
the APIMBeanServer to be used as the MBeanServer.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-02 15:03:23 +03:00
Amnon Heiman
1daa5eb030 Adding the APIBuilder
The APIBuilder is an implementation of MBeanServerBuilder that is
used to instantiate the APIMBeanServer as the platform MBeanServer.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-02 15:03:16 +03:00
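
The minimal shape of such a builder, as a sketch (the real APIBuilder returns
the API-aware server here); it is selected at JVM startup with
-Djavax.management.builder.initial=<builder class>:

  import javax.management.MBeanServer;
  import javax.management.MBeanServerBuilder;
  import javax.management.MBeanServerDelegate;

  public class ApiBuilderSketch extends MBeanServerBuilder {
      @Override
      public MBeanServer newMBeanServer(String defaultDomain, MBeanServer outer,
              MBeanServerDelegate delegate) {
          MBeanServer inner = super.newMBeanServer(defaultDomain, outer, delegate);
          return inner; // the real builder wraps `inner` in the proxying server
      }
  }
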
Amnon Heiman
4e02c52aee Adding the APIMBeanServer
The APIMBeanServer serves as a proxy for the MBeanServer.
It intercepts calls to the MBeanServer and checks for column family
and stream registration before they are performed.

The current implementation overrides queryNames, as it's the one that is
used by nodetool.

Additional methods can be overridden in the future if needed.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-02 15:02:08 +03:00
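
A sketch of the interception (the refresh hook stands in for the column
family / stream registration checks):

  import java.util.Set;
  import javax.management.MBeanServer;
  import javax.management.ObjectName;
  import javax.management.QueryExp;

  final class ApiMBeanServerSketch {
      private final MBeanServer inner;

      ApiMBeanServerSketch(MBeanServer inner) {
          this.inner = inner;
      }

      public Set<ObjectName> queryNames(ObjectName name, QueryExp query) {
          refreshRegistrations(name);   // register new tables/streams first
          return inner.queryNames(name, query);
      }

      private void refreshRegistrations(ObjectName name) {
          // ask the API what exists now and (de)register MBeans accordingly
      }
  }
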
Amnon Heiman
645d04083c APIMBeanIntrospector: Creating an introspector for the MBeanServer
The MX4J introspector does not support *MXBean interface names. This
causes a problem with the garbage collector and java related MBeans.

To bypass that limitation, the APIMBeanIntrospector inherits from
MBeanIntrospector and overrides the relevant functionality so an MXBean
will be treated like an MBean.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-02 15:00:57 +03:00
Amnon Heiman
5f17e6e0db pom: Add mx4j dependency
mx4j is used to implement the API MBean Server.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-02 14:55:42 +03:00
Amnon Heiman
0bfaba0a82 StreamingMetrics: Preparation for removing pull mode
This patch exposes the stream registration check as an external static
method.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-02 14:55:42 +03:00
Amnon Heiman
5c33a8afa7 ColumnFamilyStore: Preparation for removing the pull mode
This exposes the ColumnFamilyStore registration via a static method.

It allows an external object (i.e. the MBeanServer) to update the
registration on demand.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-02 14:55:32 +03:00
Avi Kivity
9f2f92c379 Merge "Ubuntu 16.04 support" from Takuya 2016-05-01 11:14:50 +03:00
Takuya ASADA
19a0e72752 dist/ubuntu: fix build error on Ubuntu 16.04 2016-04-30 07:51:07 +00:00
Takuya ASADA
72aae6bf3b dist/ubuntu: resolve build time dependency by mk-build-deps command
Resolve dependency packages from debian/control, instead of doing apt-get manually.
2016-04-30 07:44:25 +00:00
Takuya ASADA
d8d3023334 dist/ubuntu: generate correct distribution codename on debian/changelog
Since we support more than one version of Ubuntu, we need to generate each codename in the changelog.
2016-04-30 07:42:35 +00:00
Takuya ASADA
ef15d95416 dist: #!/bin/bash for all scripts
Like we did on scylla-server, switch to bash.
2016-04-30 07:40:42 +00:00
Amnon Heiman
c63ec3e96b StorageService: Add takeMultipleColumnFamilySnapshot support
This patch adds the takeMultipleColumnFamilySnapshot functionality to
StorageService.

It follows origin's logic of first checking that all keyspaces and column
families exist and have no snapshot with that name, and then running the
snapshot on each of the combinations.

Two methods were added to simplify the implementation, and they can be
reused: one to get a map from keyspace to column family, and one for the
current snapshots.

Fixes scylladb/scylla#1133

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1461659678-22030-1-git-send-email-amnon@scylladb.com>
2016-04-26 14:28:33 +03:00
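
A sketch of the validate-then-execute flow described above (helper methods
are illustrative stubs):

  import java.util.List;
  import java.util.Map;

  abstract class MultiSnapshotSketch {
      void takeMultipleColumnFamilySnapshot(String tag,
              Map<String, List<String>> kscf) {
          // pass 1: fail before any snapshot is taken
          for (Map.Entry<String, List<String>> e : kscf.entrySet()) {
              verifyExistsAndNoSnapshot(e.getKey(), e.getValue(), tag);
          }
          // pass 2: snapshot every keyspace/column family combination
          for (Map.Entry<String, List<String>> e : kscf.entrySet()) {
              for (String cf : e.getValue()) {
                  takeColumnFamilySnapshot(e.getKey(), cf, tag);
              }
          }
      }

      abstract void verifyExistsAndNoSnapshot(String ks, List<String> cfs, String tag);

      abstract void takeColumnFamilySnapshot(String ks, String cf, String tag);
  }
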
Amnon Heiman
5903271c4d EndpointState: log and ignore not supported states
During an upgrade, or with version inconsistency, the API can return an
unsupported state.

Instead of throwing an exception, the state will be ignored and a warning
will be written to the log.

An example (a state was modified in the API):
$ nodetool gossipinfo
/127.0.0.1
  generation:1460450456
  heartbeat:32

The log shows:

Apr 12, 2016 3:40:20 PM org.apache.cassandra.gms.EndpointState
addApplicationState
WARNING: Unknown application state with id:25

Fixes scylladb/scylla#1164.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1460465073-3567-1-git-send-email-amnon@scylladb.com>
2016-04-12 15:53:16 +03:00
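
A sketch of the tolerant lookup, assuming an ApplicationState enum whose
ordinal matches the id sent by the API:

  import java.util.logging.Logger;

  final class StateLookup {
      enum ApplicationState { STATUS, LOAD, SCHEMA /* ... */ }

      static ApplicationState stateForId(int id, Logger log) {
          ApplicationState[] values = ApplicationState.values();
          if (id < 0 || id >= values.length) {
              log.warning("Unknown application state with id:" + id);
              return null; // caller skips null states instead of throwing
          }
          return values[id];
      }
  }
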
Takuya ASADA
a2ec7f2f60 dist/ubuntu: drop classical sysv init script, only support Upstart for Ubuntu 14.04LTS
Drop sysv init script on scylla-jmx.
Same as a5bb6c4b1b

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1459346746-3433-2-git-send-email-syuu@scylladb.com>
2016-03-30 18:10:18 +03:00
Takuya ASADA
0c5c1debde dist: do not auto-start scylla-server job on Ubuntu package install time
Same as f1d18e9980
Fixes scylladb/scylla#1134

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1459346746-3433-1-git-send-email-syuu@scylladb.com>
2016-03-30 18:10:17 +03:00
Pekka Enberg
524763cfed dist/ubuntu: Use tilde for release candidate builds
The version number ordering rules are different for rpm and deb. Use
tilde ('~') for the latter to ensure a release candidate is ordered
_before_ a final version.
2016-03-22 12:23:48 +02:00
Amnon Heiman
94f144e9b3 StorageService: Get the broadcast address from the API
When getting the tokens of the current node, we use the get_token api
call with the local broadcast address.

The current implementation, which tries to figure it out from the
configuration, is prone to error.

Currently, in a configuration where the broadcast address is set to the
local IP and the listening address is set to 127.0.0.1, a call to
nodetool info returns an exception:
ID                     : 54185d5d-6f62-4884-814c-5d17c2776de9
Gossip active          : true
Thrift active          : true
Native Transport active: true
Load                   : 178.09 KB
Generation No          : 1458349593
Uptime (seconds)       : 11
Heap Memory (MB)       : 47.23 / 247.50
Off Heap Memory (MB)   : 2.75
error: Index: 0, Size: 0
-- StackTrace --
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
	at java.util.ArrayList.rangeCheck(ArrayList.java:653)
	at java.util.ArrayList.get(ArrayList.java:429)
	at org.apache.cassandra.tools.NodeProbe.getEndpoint(NodeProbe.java:812)
	at org.apache.cassandra.tools.NodeProbe.getDataCenter(NodeProbe.java:830)
	at org.apache.cassandra.tools.NodeTool$Info.execute(NodeTool.java:425)
	at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:288)
	at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)

Because getTokens returns an empty list.

This patch changes how the broadcast address is deduced. It adds a reverse
mapping from hostid to ip address and uses it with the local host id to
find the ip address in use.

This implementation will probably be replaced by a single API call in
the future.

After the change, a call to nodetool info works.

Fixes scylladb/scylla#1027

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1458405434-8491-3-git-send-email-amnon@scylladb.com>
2016-03-22 09:43:13 +02:00
Amnon Heiman
a60c3156c6 ApiClient: Add getReverseMapStrValue method
Sometimes it is useful to get the reverse of the map returned by the API;
for example, instead of ip address to hostid, to get hostid to ip
address.

Though it is possible to reverse a map, there is no need; it's easier to
generate the reverse mapping directly.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1458405434-8491-2-git-send-email-amnon@scylladb.com>
2016-03-22 09:42:57 +02:00
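
A sketch of building the reverse map in one pass while reading the reply,
instead of inverting an already-built map (the JSON field names are
assumptions):

  import java.util.HashMap;
  import java.util.Map;
  import javax.json.JsonArray;
  import javax.json.JsonObject;

  final class ReverseMap {
      static Map<String, String> getReverseMapStrValue(JsonArray entries) {
          Map<String, String> reversed = new HashMap<>();
          for (int i = 0; i < entries.size(); i++) {
              JsonObject entry = entries.getJsonObject(i);
              // e.g. key=ip address, value=hostid -> hostid maps to ip
              reversed.put(entry.getString("value"), entry.getString("key"));
          }
          return reversed;
      }
  }
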
Amnon Heiman
8f90d413a1 ProcessingException was changed to IllegalStateException
This patch fixes the exception handling for connection problems: instead of
ProcessingException it now expects IllegalStateException.

The rest of the functionality remains the same.

Fixes #26

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1458602355-23601-1-git-send-email-amnon@scylladb.com>
2016-03-22 08:55:35 +02:00
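
A sketch of the updated expectation on the caller side (the error text
follows the connection-failure message used elsewhere; the exact code
differs):

  import java.util.function.Supplier;

  final class ConnectGuard {
      static <T> T getOrExplain(Supplier<T> apiCall) {
          try {
              return apiCall.get();
          } catch (IllegalStateException e) { // was: javax.ws.rs.ProcessingException
              throw new RuntimeException(
                      "Unable to connect to Scylla API server: " + e.getMessage());
          }
      }
  }
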
Pekka Enberg
d7e3dae323 dist/ubuntu: Relax Java dependencies
The requirement for Java 7 is too strict, especially as it's end-of-life.

Fixes #1029.
Message-Id: <1458132593-25935-1-git-send-email-penberg@scylladb.com>
2016-03-21 14:43:08 +02:00
Pekka Enberg
2cd5a5f048 StorageService: Fix scrub() variant API wiring
The 'nodetool scrub' command ends up calling the variant that is not
wired up to the Scylla API which causes the following error to be
printed out to the user:

  [penberg@nero scylla-tools-java]$ ./bin/nodetool scrub
  error: For input string: ""
  -- StackTrace --
  java.lang.NumberFormatException: For input string: ""
          at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
          at java.lang.Integer.parseInt(Integer.java:592)
          at java.lang.Integer.parseInt(Integer.java:615)
          at com.scylladb.jmx.api.APIClient.getIntValue(APIClient.java:216)
          at com.scylladb.jmx.api.APIClient.getIntValue(APIClient.java:220)
          at org.apache.cassandra.service.StorageService.scrub(StorageService.java:1291)

Fix the problem by implementing the said scrub() variant.
Message-Id: <1458035736-26349-1-git-send-email-penberg@scylladb.com>
2016-03-16 08:35:30 +02:00
Pekka Enberg
c4d8d7087e APIClient: Make API server errors human readable
Make the error messages returned by Scylla API server human readable
from 'nodetool'.

For example, if an API URL is missing, print out the following error:

  [penberg@nero scylla-tools-java]$ ./bin/nodetool getcompactionthreshold ks test4
  nodetool: Scylla API server HTTP GET to URL 'column_family/minimum_compaction/ks:test4' failed: Not found
  See 'nodetool help' or 'nodetool help <command>'.

instead of the scary-looking error that we now print:

  [penberg@nero scylla-tools-java]$ ./bin/nodetool getcompactionthreshold ks test4
  error: Not found
  -- StackTrace --
  java.lang.RuntimeException: Not found
          at com.scylladb.jmx.api.APIClient.getException(APIClient.java:116)
          at com.scylladb.jmx.api.APIClient.getRawValue(APIClient.java:160)
          at com.scylladb.jmx.api.APIClient.getRawValue(APIClient.java:174)
          at com.scylladb.jmx.api.APIClient.getIntValue(APIClient.java:216)
          at com.scylladb.jmx.api.APIClient.getIntValue(APIClient.java:220)
          at org.apache.cassandra.db.ColumnFamilyStore.getMinimumCompactionThreshold(ColumnFamilyStore.java:475)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          at java.lang.reflect.Method.invoke(Method.java:498)
          at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

          [snip]
Message-Id: <1458032300-17704-1-git-send-email-penberg@scylladb.com>
2016-03-16 08:34:47 +02:00
Pekka Enberg
02e0598506 APIClient: Fix error handling if connection to API server fails
Running 'nodetool status' now reports the following if the JMX proxy is
not able to connect to an API server:

  nodetool: Unable to connect to Scylla API server: java.net.ConnectException: Connection refused
  See 'nodetool help' or 'nodetool help <command>'.

instead of the scary-looking:

  error: javax.ws.rs.ProcessingException (no security manager: RMI class loader disabled)
  -- StackTrace --
  java.lang.ClassNotFoundException: javax.ws.rs.ProcessingException (no security manager: RMI class loader disabled)
          at sun.rmi.server.LoaderHandler.loadClass(LoaderHandler.java:393)
          at sun.rmi.server.LoaderHandler.loadClass(LoaderHandler.java:185)
          at java.rmi.server.RMIClassLoader$2.loadClass(RMIClassLoader.java:637)
          at java.rmi.server.RMIClassLoader.loadClass(RMIClassLoader.java:264)
          at sun.rmi.server.MarshalInputStream.resolveClass(MarshalInputStream.java:214)

That happens because the MBean propagates a
'javax.ws.rs.ProcessingException' to nodetool, which does not have it in
its classpath, so loading via RMI fails.

Fixes #25.

Message-Id: <1457697628-31792-1-git-send-email-penberg@scylladb.com>
2016-03-14 11:50:29 +02:00
Pekka Enberg
a38bbfd603 Merge "JMX to listen on local traffic by default" from Amnon
"By default Origin accept local JMX connection. This series import the
 code from origin to set the jmx to listen to local traffic only and
 change the run script so that the default behaviuor would be local only
 traffic."
2016-03-11 14:33:54 +02:00
Amnon Heiman
105a1b5a1b scylla-jmx: Support local only jmx port by default
This patch sets the jmx proxy to listen on local traffic by default and
adds a command line switch to allow remote connectivity.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-03-01 19:52:26 +02:00
Amnon Heiman
f3610f1a02 Main: use the RMIServerSocketFactoryImp jmx init
This patch initializes the jmx proxy from the RMIServerSocketFactoryImp init
function. This way the jmx can be set to listen on a local port only.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-03-01 19:49:31 +02:00
Amnon Heiman
39a19b144d Import RMIServerSocketFactoryImp from origin
The RMIServerSocketFactoryImp is the way origin handles local port
configuration.

When used, the jmx can be set to listen on local traffic only.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-03-01 19:47:55 +02:00
Pekka Enberg
00a62ca126 Merge "Adding file information to stream" from Amnon
"This series depends on scylla patch fixing the stream information.

Now that the API report on file information in the stream they need to be
populated to the jmx.

After this patch the nodetool netstats report about file information:

$ nodetool  netstats
Mode: NORMAL
Bootstrap ee150e80-dcef-11e5-bee0-000000000000
    /127.0.0.2
        Sending 1 files, 0 bytes total. Already sent 1 files, 8391192 bytes total
            txnofile 8391192/8391192 bytes(100%) sent to idx:0/127.0.0.2
Read Repair Statistics:
Attempted: 6
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name                    Active   Pending      Completed
Commands                        n/a         0          16268
Responses                       n/a         0              2

Fixes scylladb/scylla#948"
2016-02-27 20:26:09 +02:00
Amnon Heiman
767517f6be SessionInfo: Add receiving_files and sending_files support
This patch adds the streaming session's receiving and sending file
information. It is needed for the streaming information.

The constructor now expects the file information, so the
sessionInfoCompositeData was changed to add an empty value for them.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-02-27 03:34:22 +02:00
Amnon Heiman
afd49d7bd4 ProgressInfo: Add creation from json object and json array
This allows creating a ProgressInfo object from a json object or a json
array; it is needed to report stream file information.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-02-27 03:28:19 +02:00
Amnon Heiman
d589f3a3a3 StorageService: Sort the results of getTokenToEndpointMap
This patch takes the implementation of getTokenToEndpointMap from Origin
which sorts the map result.

Fixes scylladb/scylla#722

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1456142885-20838-1-git-send-email-amnon@scylladb.com>
2016-02-22 14:10:22 +02:00
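A minimal sketch of the sorting idea, with tokens simplified to Strings (origin compares real token values, so the comparator here is an illustration only):

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch: copy the unsorted token-to-endpoint map into a TreeMap so entries
// come back in sorted key order, as nodetool expects.
public class SortedTokenMap {
    static Map<String, String> sorted(Map<String, String> tokenToEndpoint) {
        // Natural String ordering here; a real implementation would compare
        // tokens numerically.
        return new TreeMap<>(tokenToEndpoint);
    }
}
```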
Nadav Har'El
15ad444c40 scylla-jmx: implement forceRepairRangeAsync
Fix the stubbed implementation of forceRepairRangeAsync() which is
used, for example, when the "--start-token"/"--end-token" options are
passed to "nodetool repair".

forceRepairRangeAsync() works similarly to the existing forceRepairAsync()
just sending the additional start/end tokens as two new options to the
REST API. Unlike the parallel Cassandra code, we don't do any fancy
processing on these tokens to intersect them with the node's token ranges -
we'll do this intersection in the C++ code, where the repair is actually
done.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1455808238-25692-1-git-send-email-nyh@scylladb.com>
2016-02-21 11:36:31 +02:00
Amnon Heiman
ea0c593a75 MessagingService: Ignore exception on the dropped messages thread
The dropped messages thread pulls information from the API; in various
scenarios it can face a connection problem (specifically on startup and
shutdown) or another related exception when scylla shuts down. It should
ignore the connection problem, as it is taken care of by another
thread that checks the status and will shut down when needed.

For other exceptions, it logs them while continuing to connect.

Fixes scylladb/scylla#902

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1455799819-17957-1-git-send-email-amnon@scylladb.com>
2016-02-18 14:54:53 +02:00
Takuya ASADA
af8ce2d2ea dist: run as scylla on Ubuntu as well
Fixes #873

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1454607403-28849-1-git-send-email-syuu@scylladb.com>
2016-02-07 10:19:29 +02:00
Amnon Heiman
691f86983b StorageService: getTokens should return the tokens of the current node
StorageService.getTokens should return only the tokens of the current
node, not all the tokens.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1454240935-21903-1-git-send-email-amnon@scylladb.com>
2016-02-01 11:01:13 +02:00
Lucas Meneghel Rodrigues
6692f5a3c0 pom.xml: Add log4j classes
After observing logs of scylla-jmx, I started to notice
the following message:

Running '/bin/journalctl --unit scylla-jmx.service'
[stdout] -- Logs begin at Sat 2016-01-23 10:02:51 UTC, end at Sat 2016-01-23 10:07:26 UTC. --
[stdout] Jan 23 10:05:15 ip-172-30-0-9 systemd[1]: Started Scylla JMX.
[stdout] Jan 23 10:05:15 ip-172-30-0-9 systemd[1]: Starting Scylla JMX...
[stdout] Jan 23 10:05:16 ip-172-30-0-9 scylla-jmx[2685]: Using config file: /etc/scylla/scylla.yaml
[stdout] Jan 23 10:05:22 ip-172-30-0-9 scylla-jmx[2685]: Connecting to http://127.0.0.1:10000
[stdout] Jan 23 10:05:22 ip-172-30-0-9 scylla-jmx[2685]: Starting the JMX server
[stdout] Jan 23 10:05:29 ip-172-30-0-9 scylla-jmx[2685]: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
[stdout] Jan 23 10:05:29 ip-172-30-0-9 scylla-jmx[2685]: SLF4J: Defaulting to no-operation (NOP) logger implementation
[stdout] Jan 23 10:05:29 ip-172-30-0-9 scylla-jmx[2685]: SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

So we're potentially losing a lot of information on our jmx service logs.
Let's update the log4j dependencies, and add the other ones that are
necessary for the logging to work.

Signed-off-by: Lucas Meneghel Rodrigues <lmr@scylladb.com>
Message-Id: <1453746313-15054-1-git-send-email-lmr@scylladb.com>
2016-01-25 20:32:46 +02:00
Amnon Heiman
3e1a8961a2 StorageService: setLoggingLevel
This patch uses the system api to set the log level.
After this patch, nodetool setloglevel supports modifying the log
level of a logger object.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1453367412-29722-1-git-send-email-amnon@scylladb.com>
2016-01-21 12:05:54 +02:00
Pekka Enberg
eec251805a CompactionManager: Fix compaction manager API URLs
The URLs had "compaction_manager" twice in them...
2016-01-21 09:22:51 +02:00
Amnon Heiman
b6d55f0623 Remove leftover println from StreamingMetrics
This removes a debug print that was left in the code by accident.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1452673361-8242-1-git-send-email-amnon@scylladb.com>
2016-01-18 10:50:23 +02:00
Avi Kivity
358e7387ea version: rename development version to 666.development
Follows scylla-server.
Message-Id: <1452774312-10210-1-git-send-email-avi@scylladb.com>
2016-01-14 14:40:45 +02:00
Avi Kivity
d090136085 Use serial garbage collector
The serial garbage collector has the smallest memory footprint and the
smallest impact on the rest of the system, esp. in large multicores.
Message-Id: <1452433737-4413-1-git-send-email-avi@scylladb.com>
2016-01-11 11:54:46 +02:00
Avi Kivity
24cebcc9a1 dist: do not always restart jmx on shutdown
Restart=always leads to the following loop:

 1. scylla terminates abnormally
 2. scylla-jmx sees that, and terminates
 3. systemd sees that scylla-jmx terminated, and restarts it.
 4. scylla-jmx requires scylla, so systemd starts it.
 5. goto 1.

To prevent the loop, set Restart=on-abnormal; systemd will restart scylla-jmx
if some JVM bug got it killed, but not otherwise.

The downside to this patch is that if scylla-server goes down, so does
scylla-jmx, but if scylla-server is then restarted, scylla-jmx stays down.
To get scylla and scylla-jmx to start together, we need to create
scylla.service that requires both of them.
2016-01-07 09:11:51 +02:00
Pekka Enberg
d0757c4505 CompactionManager: Fix JSON conversion in getCompactions()
This makes 'nodetool compactionstats' work:

  [penberg@nero cassandra]$ ./bin/nodetool compactionstats
  pending tasks: 0
     compaction type    keyspace       table   completed    total   unit   progress
          compaction   keyspace1   standard1      170719   500096   keys     34.14%
          compaction   keyspace1   standard1      174781   441600   keys     39.58%
  Active compaction remaining time :   0h00m00s

Fixes scylladb/scylla#745.
2016-01-05 15:15:49 +02:00
Pekka Enberg
10caab8590 Merge "Adding streaming metrics support" from Amnon
"This series will enable straming support and the nodetool netstats command.

After this series:
$ nodetool netstats
Mode: NORMAL
Bootstrap 331955a0-aeff-11e5-895c-000000000000
    /127.0.0.2
        Sending 1 files, 140724545317112 bytes total. Already sent 0 files, 0 bytes total
Read Repair Statistics:
Attempted: 6
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name                    Active   Pending      Completed
Commands                        n/a         0            121
Responses                       n/a         0             64

Fixes scylladb/scylla #731"
2015-12-31 13:14:33 +02:00
Amnon Heiman
2ccb657fca Main: start the stream metrics pulling
This patch adds a call in main to start the stream metrics polling.
2015-12-31 12:47:24 +02:00
Amnon Heiman
686207b59a Import the StreamingMetrics from origin
This patch imports and modifies the StreamingMetrics from origin. It
periodically polls the API to check for current streams and, when it
finds any, registers their MBeans.

After this patch, during streaming (i.e. while a node is being added to the
cluster) it is possible to check with jconsole and see the stream.

A nodetool netstats example:
$ nodetool netstats
Mode: NORMAL
Bootstrap 331955a0-aeff-11e5-895c-000000000000
    /127.0.0.2
        Sending 1 files, 140724545317112 bytes total. Already sent 0
files, 0 bytes total
Read Repair Statistics:
Attempted: 6
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name                    Active   Pending      Completed
Commands                        n/a         0             85
Responses                       n/a         0             46

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-31 12:44:34 +02:00
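An illustrative sketch of such a polling loop (the timer interval, class name, and callback are assumptions, not the project's code):

```java
import java.util.Timer;
import java.util.TimerTask;

// Sketch: a daemon timer periodically queries the REST API for active
// streams and registers an MBean for any stream not seen before.
public class StreamMetricsPoller {
    public static Timer start(Runnable registerNewStreams) {
        Timer timer = new Timer("stream-metrics", /* isDaemon = */ true);
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                registerNewStreams.run(); // query the API, register new MBeans
            }
        }, 0, 1000); // poll once a second (interval is an assumption)
        return timer;
    }
}
```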
Amnon Heiman
cda7448314 StreamSummary: Accept null values
This patch allows the StreamSummary to support missing values returned
from the API.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-31 12:44:34 +02:00
Amnon Heiman
2840880e95 SessionInfoCompositeData: to support null values
This patch allows the SessionInfoCompositeData to accept null values.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-31 12:44:34 +02:00
Amnon Heiman
36c4a7df27 SessionInfo: allow null and modified API
The API of the session info returns parameters in snake case instead of
camel case.

This patch changes the expected fields to match the API. It was also
modified to accept empty fields and store them as null.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-31 12:44:34 +02:00
Pekka Enberg
cd910aafa9 Merge "Support the changed load_map API" from Amnon
"The storage_service api was changed to return a map of string, double instead
 of formatted numbers.  This change update the JMX proxy to support this API.

 While going over the code a potential bug was found and was fix.

 The series adds method to the APIClient to return a map of string, double and
 uses that function to call the API."
2015-12-30 11:31:46 +02:00
Amnon Heiman
ccb474e424 StorageService: Support the update getLoadMap API
The API was modified to return the load map as a map of string to double
instead of a formatted string.

This patch changes the code to support the updated API.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-30 10:50:57 +02:00
Amnon Heiman
e0e7dcdb5c APIClient: Add a mapStringDouble method
This patch adds a method to the APIClient that returns a map of String
and Double.

It supports calls both with and without query parameters.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-30 10:50:49 +02:00
Amnon Heiman
71c4e892f6 APIClient: Fixing parsing long as int
The APIClient used getInt to return a long value, which can cause number
truncation.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-30 10:07:33 +02:00
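A small self-contained sketch of the truncation in question, using the javax.json API for illustration (not the project's actual parsing code):

```java
import java.io.StringReader;
import javax.json.Json;
import javax.json.JsonObject;

// Demonstrates why a long field must not be read with getInt().
public class LongParsing {
    public static void main(String[] args) {
        JsonObject o = Json.createReader(
                new StringReader("{\"value\": 8589934592}")).readObject();
        // o.getInt("value") narrows to 32 bits and silently truncates 2^33;
        // reading the field as a JsonNumber keeps the full 64-bit value.
        long value = o.getJsonNumber("value").longValue();
        System.out.println(value); // 8589934592
    }
}
```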
Amnon Heiman
2eb9f19236 Clean the jmxproxy output
This patch cleans up the redundant output the jmx proxy creates.
It sets the trace level of the called methods to finest and removes some
println leftovers.

Fixes #22

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-30 09:27:33 +02:00
Amnon Heiman
6c2bb34ca3 StorageService: change repair to the updated API
The API now uses explicit parameters to pass the parameters to repair.
This patch changes how the parameters are passed to the API to be
compatible with the changed API.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Reviewed-by: Nadav Har'El <nyh@scylladb.com>
2015-12-29 17:24:20 +02:00
Nadav Har'El
69c6913668 scylla-jmx: fix the forceRepairAsync() used by "nodetool repair"
"nodetool repair" ends up calling one of the dozen forceAsyncRepair()
functions. This function ignored its option rather than passing it on,
so this patch fixes that.

Note that there are still many more forceAsyncRepair() overloads which
similarly ignore their options, and it is possible that certain invocation
of "nodetool repair" will need them, so we will need to fix all of them
in the future.

After this patch, "nodetool repair" no longer works because now Scylla
needs to be fixed to understand the "parallelism" and "incremental" options
passed to it.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2015-12-29 09:25:11 +02:00
Nadav Har'El
f8b4dfed38 scylla-jmx: use ":", not "=", to build options list
Scylla's repair REST API (see scylla/api/storage_service.cc) takes all
repair options as one "options" string. The options are separated by ",",
and for each option, the name and value are separated by ":". The existing
code wrongly used "=" instead of ":", so this patch fixes it.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2015-12-28 15:55:32 +02:00
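A minimal sketch (not the project's actual helper) of serializing options in the format the message describes, "name:value" pairs joined with ",":

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: build the single "options" string Scylla's repair REST API expects.
public class RepairOptions {
    static String format(Map<String, String> options) {
        return options.entrySet().stream()
                .map(e -> e.getKey() + ":" + e.getValue())
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("parallelism", "parallel");
        opts.put("incremental", "false");
        System.out.println(format(opts)); // parallelism:parallel,incremental:false
    }
}
```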
Amnon Heiman
4f275cc44b StorageService: format the describering output
The describeRingJMX method returns a formatted output. The output should
be similar to origin, as opposed to the current implementation, which
returns a json representation.

After the change an example of nodetool describering:
$ nodetool describering keyspace1
Schema Version:1074c31b-1f39-3df2-90ff-7f0b64bb3ea4
TokenRange:
	TokenRange(start_token:7485973865401664349,
end_token:-338297331236877217, endpoints:[127.0.0.1],
rpc_endpoints:[127.0.0.1],
endpoint_details:[EndpointDetails(host:127.0.0.1,
datacenter:datacenter1, rack:rack1)])
	TokenRange(start_token:-338297331236877217,
end_token:7485973865401664349, endpoints:[127.0.0.2],
rpc_endpoints:[127.0.0.2],
endpoint_details:[EndpointDetails(host:127.0.0.2,
datacenter:datacenter1, rack:rack1)])

On scylla-jmx:
Fixes #21

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-28 09:56:44 +02:00
Nadav Har'El
9b03fa1074 scylla-jmx: repairAsync: don't ignore options
repairAsync() builds an "options" argument from the options map it gets,
but then forgot to pass this argument to the request :-)

This is part of issue scylladb/#714.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2015-12-27 19:56:37 +02:00
Amnon Heiman
c8b9198f3b FailureDetector: the ip address should have a leading slash
The ip address of the nodes should have a leading forward slash.

Fixes scylladb/scylla#508

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-24 17:21:07 +02:00
Amnon Heiman
75479531e0 StorageService: rename the dc parameter in rebuild
The API uses source_dc as a query parameter, so the jmx should use the
same.

In addition, the rebuild method can get null as a datacenter value, and
in that case it should not pass the parameter.

Fixes scylladb/scylla#668.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-18 11:01:47 +02:00
Pekka Enberg
7543882d6c Clean up after unused imports
Remove unused imports that Eclipse complains about.

Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-12-17 09:29:48 +02:00
Pekka Enberg
0f044e2f47 Rename "com.cloudius.urchin" package to "com.scylladb.jmx"
Move the Scylla JMX code under "com.scylladb.jmx" package.

Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-12-17 09:28:17 +02:00
Pekka Enberg
4dfa0737ac Rename urchin-mbean.jar to scylla-jmx.jar
Urchin is an obsolete working name for Scylla. Rename the Jar file to
scylla-jmx.jar.

Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-12-17 09:27:58 +02:00
Amnon Heiman
c452a9f7ba Limit JVM maximum heap size to 256 MB
This patch limits the JVM maximum heap size to 256MB.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-17 09:17:20 +02:00
Pekka Enberg
7c34fb8ce0 dist/ubuntu: Remove unneeded startup parameters
Fixes #19.

Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-12-16 17:56:07 +02:00
Pekka Enberg
4c2c4c85f1 Merge "Update the get history to be compatible with the API" from Amnon
"After this series the nodetool compacthistory displays:

$ nodetool compactionhistory
Compaction History:
id                                       keyspace_name      columnfamily_name            compacted_at              bytes_in       bytes_out      rows_merged
09d71860-a3f3-11e5-b1cf-000000000000     system             peers                        1450269994214             365            365
09d73f70-a3f3-11e5-88b4-000000000001     system             local                        1450269994215             816            690"
2015-12-16 15:16:52 +02:00
Amnon Heiman
107664dbf1 CompactionManager: Switch to the update compaction history API
This changes the CompactionManager getCompactionHistory to use the new
get_compaction_history API.

It uses the CompactionHistoryTabularData to parse and report the
results.

After this patch nodetool compactionhistory would work.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-16 14:52:21 +02:00
Amnon Heiman
8e7c432374 Importing CompactionHistoryTabularData from origin
This patch imports and modifies CompactionHistoryTabularData from origin.
It will be used by the getCompactionHistory method in CompactionManager.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-16 14:51:52 +02:00
Takuya ASADA
c25a844410 dist: remove unneeded parameters
Since scylla-jmx supports reading configuration from scylla.yaml, we
don't need to pass these parameters as program arguments.

Fixes #17.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-14 11:29:28 +02:00
Amnon Heiman
21696bec1f APIClient: snapshot details should align to the API
There was confusion in the API between key and keyspace.
It was changed in the API, so the JMX should be modified accordingly.
After this change

nodetool listsnapshots
will show the current snapshots.

On scylla-jmx:
Fixes #15

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-14 09:06:50 +02:00
Amnon Heiman
fb9f3c8961 StorageService: getLoadMap should format the load
Similar to origin, the load map should return a formatted load value.

After this patch the nodetool status command:
$nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Tokens  Owns    Host ID
Rack
UN  127.0.0.1  394.97 MB  256     ?
292a6c7f-2063-484c-b54d-9015216f1750  rack1
UN  127.0.0.2  151.07 MB  256     ?
102b6ecd-2081-4073-8172-bf818c35e27b  rack1

Under scylla-jmx
Fixes #18

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-14 09:05:21 +02:00
Amnon Heiman
bb7409fbc7 Do not add command line param by default
With the addition of the configuration file, scylla-jmx should not
add command line configuration parameters by default. Instead, it should
add those parameters only if they are explicitly given to it.

Fixes #16.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-14 08:38:21 +02:00
Amnon Heiman
67b244b8e4 StorageService: Fix a typo in the get snapshots API
Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-08 10:41:14 +02:00
Pekka Enberg
bb209e8ce7 Merge "Add SCYLLA_HOME, SCYLLA_CONF for systemd/upstart" from Takuya
"Fixes scylladb/scylla#607.

Also it merges dist/common/scripts/jmx_run and scripts/scylla-jmx, and marks the
sysconfig file as 'noreplace'."
2015-12-07 17:00:39 +02:00
Pekka Enberg
65e288c703 Merge "Adding the GCInspectorMBean" from Amnon
"This series import the GCInspectorMBean and its implementation from origin.
This would solve the warning given by cassandra-stress.

On syclla-jmx
Fixes #14"
2015-12-07 13:22:07 +02:00
Amnon Heiman
7f945be732 Main: call the GCInspector register method
Starts the GCInspector

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-07 13:00:07 +02:00
Amnon Heiman
7b9ea44354 Import the GCInspectorMXBean from origin
Although there is little relevant information in the GC inspector, some
applications like cassandra-stress look for it and fail if it cannot be
found.

This patch imports the GCInspectorMBean and its implementation.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-07 13:00:07 +02:00
Amnon Heiman
ba0ed7cbc7 createColumnFamilyGauge to support double values return from the API
There are cases where the API uses double to return a value that the JMX
expects to be long.

For example, the mean column row size. This type difference should not be
a problem and the result should be cast to long or int.

This patch allows the values to be double and casts the result to int or
long.

On scylla-jmx:
Fixes #12

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-03 12:38:35 +02:00
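A hedged sketch of the cast described above (names are illustrative, using javax.json types for the example):

```java
import javax.json.JsonNumber;

// Sketch: tolerate a JSON number that arrives as a double where the
// JMX gauge contract expects a long, casting instead of failing.
public class NumericGauge {
    static long asLong(JsonNumber n) {
        // doubleValue() works for both integral and fractional JSON numbers;
        // the cast truncates the fraction, which is fine for a size estimate.
        return (long) n.doubleValue();
    }
}
```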
Amnon Heiman
3a69e3d9da ColumnFamilyStore: getSSTableCountPerLevel should return null not empty array
When getSSTableCountPerLevel is called and the system is not using level
compaction, the expected return is null and not an empty array.

On scylla-jmx:
Fixes #11

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-03 12:20:30 +02:00
Amnon Heiman
e194ca85a4 ColumnFamilyStore: Use the combine API with metrics
The column family store API was changed so it would have a single API to
return the snapshot size.

This changes the JMX to use the same API regardless of whether it is called
from the ColumnFamilyMetrics or from ColumnFamilyStore.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-02 14:32:47 +02:00
Takuya ASADA
18674d67b0 dist: do not overwrite sysconfig file on 'yum update' 2015-12-02 01:27:47 +09:00
Takuya ASADA
bdc7f3fb57 dist: export SCYLLA_HOME, SCYLLA_CONF 2015-12-02 01:26:13 +09:00
Takuya ASADA
bcc99b274f dist: drop dist/common/scripts/jmx_run, use scripts/scylla-jmx 2015-12-02 01:19:03 +09:00
Pekka Enberg
c6fc4e8340 Merge "Ubuntu package cleanups" from Takuya 2015-11-25 16:34:11 +02:00
Takuya ASADA
bfe07efa6d dist: specify maven repo directory
Fixes #10.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-11-25 16:33:32 +02:00
Takuya ASADA
473702845f dist: fix 'ancient-standards-version 3.9.2 (current is 3.9.5)' message on building ubuntu package 2015-11-25 19:43:38 +09:00
Takuya ASADA
abb1838845 dist: fix 'init.d-script-not-included-in-package etc/init.d/scylla-jmx' message on building ubuntu package 2015-11-25 19:43:38 +09:00
Takuya ASADA
2310eb6844 dist: cleanup dependency-reduced-pom.xml when rebuilding ubuntu package 2015-11-25 19:43:38 +09:00
Takuya ASADA
378ed90653 dist: fix 'extra-license-file usr/share/doc/scylla-jmx/LICENSE.AGPL.gz' message on building ubuntu package 2015-11-25 19:43:38 +09:00
Takuya ASADA
fe3018cd35 dist: fix 'extended-description-line-too-long' message on building ubuntu package 2015-11-25 19:43:38 +09:00
Takuya ASADA
793f53e403 dist: fix 'missing-license-paragraph-in-dep5-copyright' message on building ubuntu package 2015-11-25 19:43:38 +09:00
Takuya ASADA
c079c5922c dist: make ubuntu package as 'debian non-native package' 2015-11-25 19:43:38 +09:00
Takuya ASADA
b007b3ae1b dist: specify maven repo directory 2015-11-24 19:22:54 +09:00
Takuya ASADA
dd2192ef97 dist: support ./SCYLLA-VERSION-GEN on ubuntu package
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-11-22 17:55:27 +02:00
Pekka Enberg
c7df07ac70 Merge "Support empty histogram from the API" from Amnon
"This series support API calls that returns an empty histogram. This is a
 typical scenario with counters that are not implemented yet, for example range
 latency.  The trigger for this series is the nodetoold proxyhistograms command

 After this series:
 ./bin/nodetool proxyhistograms
 proxy histograms
 Percentile      Read Latency     Write Latency     Range Latency
                     (micros)          (micros)          (micros)
 50%                654949.00         315852.00               NaN
 75%               8409007.00        4055269.00               NaN
 95%              20924300.00       17436917.00               NaN
 98%              25109160.00       20924300.00               NaN
 99%              25109160.00       25109160.00               NaN
 Min                 11865.00          11865.00               NaN
 Max              25109160.00       25109160.00               NaN"
2015-11-19 12:36:31 +02:00
Amnon Heiman
695e23bd4e RecentEstimatedHistogram: Support empty histogram
The RecentEstimatedHistogram updates its value from the API with an
array of recent values.

This array can be empty; in that case the getBuckets method should just
return a zero-size array.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-19 11:55:34 +02:00
Amnon Heiman
e0dea0c27e EstimatedHistogram: Support empty histogram
When creating an estimated histogram from buckets, it is valid
to get a zero-size array as the buckets array.

In that case the newOffsets method would get a negative value for its
size, which should result in a zero-length array of offsets.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-19 11:52:17 +02:00
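A sketch of the guard this describes, with a hypothetical signature; the ~1.2x bucket growth used by origin's estimated histograms is assumed for illustration:

```java
// Sketch: a zero-size bucket array must produce a zero-length offsets array
// rather than an attempt to allocate a negative-size array.
public class Offsets {
    static long[] newOffsets(int size) {
        if (size <= 0) {
            return new long[0]; // empty histogram returned by the API
        }
        long[] offsets = new long[size];
        offsets[0] = 1;
        for (int i = 1; i < size; i++) {
            // each bucket grows ~20% over the previous one (assumption,
            // mirroring origin's estimated-histogram bucketing)
            offsets[i] = Math.max(offsets[i - 1] + 1,
                    (long) Math.ceil(offsets[i - 1] * 1.2));
        }
        return offsets;
    }
}
```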
Amnon Heiman
5362b2045d APIClient: Support Empty histogram
An empty histogram can be a valid response from the API, just without
any buckets.
This is a valid scenario and common for counters of features that are
not supported yet.
In those cases, the APIClient should return a zero-length array.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-19 11:49:31 +02:00
Pekka Enberg
85ab591d27 Merge "FailureDetector getAllEndpointStates to use the updated API" from Amnon
"This series complete the change in the API that replaces the formatted string
functionality with an API that return an array of objects.

The series import from origin the helper classes: ApplicationState,
EndpointState and HeartBeatState and modify them to accept their values from
the API.

After this seris will be applied nodetool gossipinfo will have the same output
as origin has."
2015-11-18 16:29:34 +02:00
Amnon Heiman
9f9dc88643 FailureDetector: Change getAllEndpointStates implementation
This patch changes the getAllEndpointStates implementation. The proxy now
gets a list of objects from the API, creates the endpoint map from it,
and creates the result string.

After this patch the nodetool gossipinfo should be formatted like
origin.

After this patch the nodetool gossipinfo return:

./bin/nodetool gossipinfo
127.0.0.2
  generation:1447850743
  heartbeat:78
  RACK:rack1
  DC:datacenter1
  HOST_ID:459137d7-2c7c-4b65-9ef8-f1c93b29dd6b
  RPC_ADDRESS:127.0.0.2
  RELEASE_VERSION:2.1.8
  LOAD:86677
  STATUS:NORMAL,9219539092146142451
  SCHEMA:59adb24e-f3cd-3e02-97f0-5b395827453f
  NET_VERSION:0
127.0.0.1
  generation:1447850742
  heartbeat:75
  RACK:rack1
  DC:datacenter1
  HOST_ID:5216770b-6fc5-4d5b-8c87-33304fd87bc8
  RPC_ADDRESS:127.0.0.1
  RELEASE_VERSION:2.1.8
  LOAD:12655
  STATUS:NORMAL,927478638459366287
  SCHEMA:59adb24e-f3cd-3e02-97f0-5b395827453f
  NET_VERSION:0

Fixes #508

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-18 14:48:15 +02:00
Amnon Heiman
e01ece2fcd Import ApplicationState, EndpointState and HeartBeatState from origin
This patch imports ApplicationState, EndpointState and HeartBeatState
from origin; they are used to report the endpoint state map.

The classes were modified to be created from the API objects.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-18 13:58:07 +02:00
Pekka Enberg
d06e6fdde1 Merge "Adding the messaging service counters" from Amnon
"This series adds the dropped, timeout and their recently version counter to the
 MessagingService."
2015-11-18 08:59:32 +02:00
Amnon Heiman
1292bd9ba4 MessagingService: Add the deprecated getRecentTimeoutsPerHost and
getRecentTotalTimeouts

This patch adds the implementation for the deprecated methods
getRecentTimeoutsPerHost and getRecentTotalTimeouts.

The implementation is based on origin; the 'recent' version of each method
returns the delta since the last call to the method.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-17 11:57:07 +02:00
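The "recent" pattern can be sketched as follows (class and method names are illustrative, not the project's code): each call returns the delta accumulated since the previous call.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: remember the last total per host; each call returns the increment
// observed since the previous call.
public class RecentCounter {
    private final Map<String, Long> lastSeen = new HashMap<>();

    synchronized long recentDelta(String host, long currentTotal) {
        long previous = lastSeen.getOrDefault(host, 0L);
        lastSeen.put(host, currentTotal);
        return currentTotal - previous;
    }
}
```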
Amnon Heiman
db7aad26f5 MessagingService add dropped and recently dropped messages impl
This patch adds the implementation of the dropped messages and the
recent dropped messages.

The MessagingService holds a timer that periodically loads the dropped
messages from the API and distributes the results between the
DroppedMessagesMetrics instances.

This mimics the timer behaviour in origin, except it does one API call for
all Verbs.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-17 11:56:03 +02:00
Amnon Heiman
896fd64de9 Import the DroppedMessageMetrics from origin
This patch imports the DroppedMessageMetrics from origin; as opposed to
origin, it does not run timers but relies on the Messaging service.

This saves a timer and an API call for each of the Verbs.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-16 11:52:29 +02:00
Amnon Heiman
1cb048effe Create and register the APISettableMetrics
This patch adds the ability to create and register the
APISettableMetrics.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-16 11:29:21 +02:00
Amnon Heiman
ac049129de Adding the APISettableMeter
Sometimes it is required that a meter not handle its own data, the way
the APIMeter does.

This patch breaks the added functionality of APIMeter into two classes:
APISettableMeter is a Meter with a set-value method, and APIMeter adds
the functionality that reads from the API.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-16 11:27:31 +02:00
Amnon Heiman
b783a0d09a MessagingService: Add dropped and timeout support
This adds the implementation for dropped messages and timeout messages
counters in MessagingService.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-11 09:49:50 +02:00
Pekka Enberg
e530c13f87 Merge "Adding deprecated implementation to cache" from Amnon
"This series adds some deprecated methods implemenetation to the CacheService
depnding on its metrics.

It also stub the getDrainProgress in StorageService."
2015-11-11 09:40:04 +02:00
Amnon Heiman
fadfb9443c StorageService: Add describering functionality
This patch adds the describering method to StorageService; the
implementation is based on the storage_service API that is defined in
storage_service.json.

The implementation reflects the changes in the API, which returns an
object rather than the jmx_describe ring.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-11-11 09:14:05 +02:00
Pekka Enberg
81aa2a8279 Merge "JMX proxy to support configuration file" from Amnon
"Scylla's configuration can change the listening address and port of the API,
the jmx proxy need to use this same configuration.

This series adds the ability to add a path to a yaml configuration file and the
jmx proxy would read its configuration from there.  Configuration from system
properties/command line is still supported and the configuration hirarchy is as
follow from highest to lowest:
* command line
* configuration file according to the hirarchy:
  - command line
  - SCYLLA_CONF
  - SCYLLA_HOME
  - relative conf directory
* default values

The configuration definition was moved to a configuraion class that responsible
for getting the information from the command line and the configuration file."
2015-11-11 09:08:08 +02:00
Amnon Heiman
27c0eb8c99 StorageService: move should pass the parameter to the API
When calling the API move method, the proxy should pass the new_token
parameter.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-11-11 08:53:27 +02:00
Amnon Heiman
07b319d827 StorageService: Stub the getDrainProgress
Drain progress is not implemented yet; it is stubbed so the nodetool
command will not fail.

This patches the functionality until the API is ready, at which time
it will be reverted.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-10 17:03:50 +02:00
Amnon Heiman
01477809ac CacheService: Add deprecated unimplemented methods
This patch follows origin in the implementation of the deprecated methods
in CacheService. It propagates the requests to the relevant metrics.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-10 16:54:48 +02:00
Amnon Heiman
54d451de88 CacheMetrics: Add recent hit rate
The deprecated recent hit rate implementation was added from Origin as it
is still being used by external systems.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-10 15:58:51 +02:00
Amnon Heiman
43e07fda0e Use the configuration object
This patch changes the APIClient to read the connection string from the
configuration object.

Main uses the same configuration API to print its connecting message and
calls the configuration setup.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-08 12:12:27 +02:00
Amnon Heiman
39b50f63c7 Adding a configuration object for the jmx proxy
This patch adds a configuration object to the jmx proxy that supports
both the system/command-line based properties and accepts a yaml
configuration file. The latter option allows the jmx to read scylla's
configuration file and connect based on this configuration.

The configuration file reader uses a yaml parser that was added to the
pom.xml.

If no configuration file is given on the command line, it looks for
SCYLLA_CONF, then SCYLLA_HOME, then for a relative 'conf' directory.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>

need merge apiconfig
2015-11-08 12:12:27 +02:00
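The lookup order can be sketched as follows (file names and the class are assumptions for illustration, not the project's code):

```java
import java.io.File;

// Sketch of the described hierarchy: command line first, then $SCYLLA_CONF,
// then $SCYLLA_HOME, then a relative conf directory.
public class ConfigLocator {
    static File findConfig(String fromCommandLine) {
        if (fromCommandLine != null) {
            return new File(fromCommandLine);
        }
        String conf = System.getenv("SCYLLA_CONF");
        if (conf != null) {
            return new File(conf, "scylla.yaml");
        }
        String home = System.getenv("SCYLLA_HOME");
        if (home != null) {
            return new File(home, "conf/scylla.yaml");
        }
        return new File("conf/scylla.yaml");
    }
}
```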
Avi Kivity
72f6f5dab4 Merge "Enabling nodetool netstats" from Amnon
"This series adds the jmx implementation to enable netstats.
After this series netstats should complete successfuly.
A run example:

$ ./bin/nodetool netstats
Mode: NORMAL
repair 397c91a0-8205-11e5-83e4-000000000001
repair 3977d5ba-8205-11e5-83e4-000000000001
repair 3977d624-8205-11e5-83e4-000000000001
repair 397c8fc8-8205-11e5-83e4-000000000001
.......
......
repair 3977d502-8205-11e5-83e4-000000000001
Read Repair Statistics:
Attempted: 1
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name                    Active   Pending      Completed
Commands                        n/a         0          21182
Responses                       n/a         0            597"
2015-11-08 11:48:40 +02:00
Amnon Heiman
9b14550d0a StorageService: register the StreamManagerMBean
This patch adds the registration of StreamManagerMBean to
StorageService, similar to the way it is done in origin.

After this patch the StreamManager will be available via Jconsole.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-03 11:33:40 +02:00
Amnon Heiman
3cb168bed3 MessagingService: Add tasks statistics
This patch adds the implementation of:
getResponsePendingTasks()
getResponseCompletedTasks()
getDroppedMessages()

The implementation is based on the messaging_service API that is defined in
messaging_service.json.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-03 11:33:40 +02:00
Amnon Heiman
20778b2df6 Add StreamManagerMBean with its subclasses
The StreamManager getStreams returns a hierarchy of classes. This patch
imports StreamManagerMBean with the class hierarchy and adds an
implementation to StreamManager.

The implementation is based on the stream_manager API.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-03 11:28:44 +02:00
Amnon Heiman
1db077618d APIClient: Add getMapStringIntegerValue and getMapStringLongValue
This patch adds two map reader functions to the APIClient: one that parses
a map<String,Integer> and one for a map<String,Long>.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-03 11:18:33 +02:00
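In the same spirit, a hedged sketch of a map<String,Long> reader over a JSON object body (the real signatures may differ), using the javax.json API for illustration:

```java
import java.io.StringReader;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.json.Json;
import javax.json.JsonObject;

// Sketch: parse a JSON object body into a Map<String, Long>.
public class MapReader {
    static Map<String, Long> mapStringLong(String body) {
        JsonObject obj = Json.createReader(new StringReader(body)).readObject();
        Map<String, Long> result = new LinkedHashMap<>();
        for (String key : obj.keySet()) {
            result.put(key, obj.getJsonNumber(key).longValue());
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(mapStringLong("{\"a\": 1, \"b\": 2}")); // {a=1, b=2}
    }
}
```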
Takuya ASADA
2e11cb5471 dist: stop scylla-jmx when scylla-server stopped, don't respawn
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-10-30 15:40:02 +02:00
Takuya ASADA
c596a9e462 dist: fix warning when building scylla-jmx ubuntu package
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-10-30 09:15:13 +02:00
Amnon Heiman
00694ce40a StorageService: getLoadString should return units
This addresses issue #512

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-10-29 15:54:29 +02:00
Takuya ASADA
b5d29a0796 dist: add devscripts to install debuild before start building package
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-10-29 07:41:58 +02:00
122 changed files with 10032 additions and 5165 deletions

.github/CODEOWNERS (new file)

@ -0,0 +1 @@
* @penberg

.gitignore (new file)

@ -0,0 +1,9 @@
/target/
/bin/
dependency-reduced-pom.xml
scylla-apiclient/target/
.classpath
.project
.settings
build/
/.idea/

README.md

@ -1,22 +1,25 @@
-# Urchin JMX Interface
-This is the JMX interface for urchin.
-## Compile
-To compile do:
-```
-mvn install
-```
-## Run
-The maven will create an uber-jar with all dependency under the target directory. You should run it with the remote jmx enable so the nodetool will be able to connect to it.
-```
-java -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=7199 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -jar target/urchin-mbean-1.0.jar
-```
-## Setting IP and Port
-By default the the JMX would connect to a node on the localhost
-on port 10000.
-The jmx API uses the system properties to set the IP address and Port.
-To change the ip address use the apiaddress property (e.g. -Dapiaddress=1.1.1.1)
-To change the port use the apiport (e.g. -Dapiport=10001)
+# Scylla JMX Server
+Scylla JMX server implements the Apache Cassandra JMX interface for compatibility with tooling such as `nodetool`. The JMX server uses Scylla's REST API to communicate with a Scylla server.
+## Compiling
+To compile JMX server, run:
+```console
+$ mvn --file scylla-jmx-parent/pom.xml package
+```
+## Running
+To start the JMX server, run:
+```console
+$ ./scripts/scylla-jmx
+```
+To get help on supported options:
+```console
+$ ./scripts/scylla-jmx --help
+```

SCYLLA-VERSION-GEN

@ -1,19 +1,49 @@
 #!/bin/sh
-VERSION=1.0
+PRODUCT=scylla
+VERSION=666.development
 if test -f version
 then
 SCYLLA_VERSION=$(cat version | awk -F'-' '{print $1}')
 SCYLLA_RELEASE=$(cat version | awk -F'-' '{print $2}')
 else
-DATE=$(date +%Y%m%d)
+DATE=$(date --utc +%Y%m%d)
 GIT_COMMIT=$(git log --pretty=format:'%h' -n 1)
 SCYLLA_VERSION=$VERSION
 SCYLLA_RELEASE=$DATE.$GIT_COMMIT
 fi
+usage() {
+    echo "usage: $0"
+    echo "       [--version product-version-release]  # override p-v-r"
+    exit 1
+}
+OVERRIDE=
+while [[ $# > 0 ]]; do
+    case "$1" in
+        --version)
+            OVERRIDE="$2"
+            shift 2
+            ;;
+        *)
+            usage
+            ;;
+    esac
+done
+if [[ -n "$OVERRIDE" ]]; then
+    # regular expression for p-v-r: alphabetic+dashes for product, trailing non-dashes
+    # for release, everything else for version
+    RE='^([-a-z]+)-(.+)-([^-]+)$'
+    PRODUCT="$(sed -E "s/$RE/\\1/" <<<"$OVERRIDE")"
+    SCYLLA_VERSION="$(sed -E "s/$RE/\\2/" <<<"$OVERRIDE")"
+    SCYLLA_RELEASE="$(sed -E "s/$RE/\\3/" <<<"$OVERRIDE")"
+fi
 echo "$SCYLLA_VERSION-$SCYLLA_RELEASE"
 mkdir -p build
 echo "$SCYLLA_VERSION" > build/SCYLLA-VERSION-FILE
 echo "$SCYLLA_RELEASE" > build/SCYLLA-RELEASE-FILE
+echo "$PRODUCT" > build/SCYLLA-PRODUCT-FILE

debian/control (deleted)

@ -1,13 +0,0 @@
Source: scylla-jmx
Maintainer: Takuya ASADA <syuu@scylladb.com>
Homepage: http://scylladb.com
Section: database
Priority: optional
Standards-Version: 3.9.2
Build-Depends: debhelper (>= 9), maven, openjdk-7-jdk
Package: scylla-jmx
Architecture: all
Depends: ${shlibs:Depends}, ${misc:Depends}, openjdk-7-jre-headless, scylla-server
Description: Scylla JMX server binaries
Scylla is a highly scalable, eventually consistent, distributed, partitioned row DB.

debian/copyright (deleted)

@ -1,12 +0,0 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: Scylla DB
Upstream-Contact: http://www.scylladb.com/
Source: https://github.com/scylladb/scylla-jmx
Files: *
Copyright: Copyright (C) 2015 ScyllaDB
License: AGPL-3.0
Files: debian/*
Copyright: Copyright (C) 2015 ScyllaDB
License: AGPL-3.0

debian/rules (deleted)

@ -1,27 +0,0 @@
#!/usr/bin/make -f
DOC = $(CURDIR)/debian/scylla-jmx/usr/share/doc/scylla-jmx
DEST = $(CURDIR)/debian/scylla-jmx/usr/lib/scylla/jmx
override_dh_auto_build:
mvn install
override_dh_auto_clean:
rm -rf target
override_dh_auto_install:
mkdir -p $(CURDIR)/debian/scylla-jmx/etc/default/ && \
cp $(CURDIR)/dist/common/sysconfig/scylla-jmx \
$(CURDIR)/debian/scylla-jmx/etc/default/
mkdir -p $(DOC) && \
cp $(CURDIR)/*.md $(DOC)
cp $(CURDIR)/NOTICE $(DOC)
cp $(CURDIR)/LICENSE.AGPL $(DOC)
mkdir -p $(DEST)
cp $(CURDIR)/dist/common/scripts/* $(DEST)
cp $(CURDIR)/target/urchin-mbean-1.0.jar $(DEST)
%:
dh $@


@ -1,22 +0,0 @@
# scylla-jmx - ScyllaDB
#
# ScyllaDB
description "ScyllaDB jmx"
start on starting scylla-server
stop on runlevel [!2345]
respawn
respawn limit 10 5
umask 022
expect fork
console log
script
. /etc/default/scylla-jmx
export JMX_LOCAL_PORT
/usr/lib/scylla/jmx/jmx_run
end script

dist/common/scripts/jmx_run (deleted)

@ -1,5 +0,0 @@
#!/bin/sh -e
args="-Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.port=$JMX_LOCAL_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_LOCAL_PORT -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
exec java $args -jar /usr/lib/scylla/jmx/urchin-mbean-1.0.jar

dist/common/sysconfig/scylla-jmx

@ -1 +1,32 @@
-JMX_LOCAL_PORT=7199
+# scylla home dir
+SCYLLA_HOME=/var/lib/scylla
+# scylla config dir
+SCYLLA_CONF=/etc/scylla
+# The jmx port to open
+# SCYLLA_JMX_PORT="-jp 7199"
+# The API port to connect to
+#SCYLLA_API_PORT="-p 10000"
+# API address to connect to
+#SCYLLA_API_ADDR="-a localhost"
+# use alternate jmx address
+#SCYLLA_JMX_ADDR="-ja localhost"
+# A configuration file to use
+#SCYLLA_JMX_FILE="-cf /etc/scylla.d/scylla-user.cfg"
+# The location of the jmx proxy jar file
+SCYLLA_JMX_LOCAL="-l /opt/scylladb/jmx"
+# allow to run remotely
+#SCYLLA_JMX_REMOTE="-r"
+# allow debug
+#SCYLLA_JMX_DEBUG="-d"
+# specify JVM options
+JAVA_TOOL_OPTIONS=""

dist/common/systemd/scylla-jmx.service (new file)

@ -0,0 +1,18 @@
[Unit]
Description=Scylla JMX
Requires=scylla-server.service
After=scylla-server.service
[Service]
Type=simple
EnvironmentFile=/etc/sysconfig/scylla-jmx
User=scylla
Group=scylla
ExecStart=/opt/scylladb/jmx/scylla-jmx $SCYLLA_JMX_PORT $SCYLLA_API_PORT $SCYLLA_API_ADDR $SCYLLA_JMX_ADDR $SCYLLA_JMX_FILE $SCYLLA_JMX_LOCAL $SCYLLA_JMX_REMOTE $SCYLLA_JMX_DEBUG
KillMode=process
Restart=on-abnormal
Slice=scylla-helper.slice
WorkingDirectory=/var/lib/scylla
[Install]
WantedBy=multi-user.target


@ -1,4 +1,4 @@
-scylla-jmx (0.10-1) unstable; urgency=medium
+%{product}-jmx (%{version}-%{release}-%{revision}) %{codename}; urgency=medium
 * Initial release.

dist/debian/control.template (new file)

@ -0,0 +1,14 @@
Source: %{product}-jmx
Maintainer: Takuya ASADA <syuu@scylladb.com>
Homepage: http://scylladb.com
Section: database
Priority: optional
Standards-Version: 3.9.5
Rules-Requires-Root: no
Package: %{product}-jmx
Architecture: all
Depends: ${shlibs:Depends}, ${misc:Depends}, openjdk-8-jre-headless | openjdk-8-jre | oracle-java8-set-default | adoptopenjdk-8-hotspot-jre | openjdk-11-jre-headless | openjdk-11-jre |oracle-java11-set-default , %{product}-server
Description: Scylla JMX server binaries
Scylla is a highly scalable, eventually consistent, distributed,
partitioned row DB.

dist/debian/debian/copyright (new file)

@ -0,0 +1,706 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: Scylla DB
Upstream-Contact: http://www.scylladb.com/
Source: https://github.com/scylladb/scylla-jmx
Files: *
Copyright: Copyright (C) 2015 ScyllaDB
License: AGPL-3.0
Files: debian/*
Copyright: Copyright (C) 2015 ScyllaDB
License: AGPL-3.0
Files: scripts/git-archive-all
Copyright: Copyright (c) 2010 Ilya Kulakov
License: MIT
License: AGPL-3.0
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
.
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
.
Preamble
.
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
.
The precise terms and conditions for copying, distribution and
modification follow.
.
TERMS AND CONDITIONS
.
0. Definitions.
.
"This License" refers to version 3 of the GNU Affero General Public License.
.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
.
A "covered work" means either the unmodified Program or a work based
on the Program.
.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
.
1. Source Code.
.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
.
The Corresponding Source for a work in source code form is that
same work.
.
2. Basic Permissions.
.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
.
4. Conveying Verbatim Copies.
.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
.
5. Conveying Modified Source Versions.
.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
.
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
.
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
.
6. Conveying Non-Source Forms.
.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
.
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
.
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
.
7. Additional Terms.
.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
.
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
.
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
.
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
.
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
.
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
.
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
.
8. Termination.
.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
.
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
.
9. Acceptance Not Required for Having Copies.
.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
.
10. Automatic Licensing of Downstream Recipients.
.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
.
11. Patents.
.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
.
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
.
12. No Surrender of Others' Freedom.
.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
.
13. Remote Network Interaction; Use with the GNU General Public License.
.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
.
14. Revised Versions of this License.
.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
.
15. Disclaimer of Warranty.
.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
.
16. Limitation of Liability.
.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
.
17. Interpretation of Sections 15 and 16.
.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
.
END OF TERMS AND CONDITIONS
.
How to Apply These Terms to Your New Programs
.
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
.
Also add information on how to contact you by electronic and paper mail.
.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<http://www.gnu.org/licenses/>.
License: MIT
Copyright (c) 2010 Ilya Kulakov
.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
.
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

23
dist/debian/debian/rules vendored Executable file

@ -0,0 +1,23 @@
#!/usr/bin/make -f
include /usr/share/dpkg/pkg-info.mk
override_dh_auto_build:
override_dh_auto_clean:
override_dh_auto_install:
dh_auto_install
cd scylla-jmx; ./install.sh --packaging --root "$(CURDIR)/debian/tmp" --sysconfdir /etc/default
override_dh_installinit:
ifeq ($(DEB_SOURCE),scylla-jmx)
dh_installinit --no-start
else
dh_installinit --no-start --name scylla-jmx
endif
override_dh_strip_nondeterminism:
%:
dh $@

4
dist/debian/debian/scylla-jmx.install vendored Normal file

@ -0,0 +1,4 @@
etc/default/scylla-jmx
etc/systemd/system/scylla-jmx.service.d/sysconfdir.conf
opt/scylladb/jmx/*
usr/lib/scylla/jmx/*

7
dist/debian/debian/scylla-jmx.postinst vendored Normal file

@ -0,0 +1,7 @@
#!/bin/sh
if [ -d /run/systemd/system ]; then
systemctl --system daemon-reload >/dev/null || true
fi
#DEBHELPER#

7
dist/debian/debian/scylla-jmx.postrm vendored Normal file

@ -0,0 +1,7 @@
#!/bin/sh
if [ -d /run/systemd/system ]; then
systemctl --system daemon-reload >/dev/null || true
fi
#DEBHELPER#

1
dist/debian/debian/scylla-jmx.service vendored Symbolic link

@ -0,0 +1 @@
../../common/systemd/scylla-jmx.service

1
dist/debian/debian/source/format vendored Normal file

@ -0,0 +1 @@
3.0 (quilt)

1
dist/debian/debian/source/options vendored Normal file

@ -0,0 +1 @@
extend-diff-ignore = ^build/

80
dist/debian/debian_files_gen.py vendored Executable file

@ -0,0 +1,80 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# Copyright (C) 2020 ScyllaDB
#
#
# This file is part of Scylla.
#
# Scylla is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Scylla is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Scylla. If not, see <http://www.gnu.org/licenses/>.
#
import string
import os
import shutil
import re
from pathlib import Path
class DebianFilesTemplate(string.Template):
delimiter = '%'
scriptdir = os.path.dirname(__file__)
with open(os.path.join(scriptdir, 'changelog.template')) as f:
changelog_template = f.read()
with open(os.path.join(scriptdir, 'control.template')) as f:
control_template = f.read()
with open('build/SCYLLA-PRODUCT-FILE') as f:
product = f.read().strip()
with open('build/SCYLLA-VERSION-FILE') as f:
version = f.read().strip().replace('.rc', '~rc').replace('_', '-')
with open('build/SCYLLA-RELEASE-FILE') as f:
release = f.read().strip()
if os.path.exists('build/debian/debian'):
shutil.rmtree('build/debian/debian')
shutil.copytree('dist/debian/debian', 'build/debian/debian')
if product != 'scylla':
for p in Path('build/debian/debian').glob('scylla-*'):
# pat1: scylla-server.service
# -> scylla-enterprise-server.scylla-server.service
# pat2: scylla-server.scylla-fstrim.service
# -> scylla-enterprise-server.scylla-fstrim.service
# pat3: scylla-conf.install
# -> scylla-enterprise-conf.install
if m := re.match(r'^scylla(-[^.]+)\.service$', p.name):
p.rename(p.parent / f'{product}{m.group(1)}.{p.name}')
elif m := re.match(r'^scylla(-[^.]+\.scylla-[^.]+\.[^.]+)$', p.name):
p.rename(p.parent / f'{product}{m.group(1)}')
else:
p.rename(p.parent / p.name.replace('scylla', product, 1))
s = DebianFilesTemplate(changelog_template)
changelog_applied = s.substitute(product=product, version=version, release=release, revision='1', codename='stable')
s = DebianFilesTemplate(control_template)
control_applied = s.substitute(product=product)
with open('build/debian/debian/changelog', 'w') as f:
f.write(changelog_applied)
with open('build/debian/debian/control', 'w') as f:
f.write(control_applied)
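
For reference, the DebianFilesTemplate class above relies on string.Template's pluggable delimiter: subclassing with delimiter = '%' makes %{product}-style placeholders substitutable. A minimal, runnable sketch of the same mechanism (the one-line template below is made up for illustration; the real changelog.template and control.template ship under dist/debian):

import string

class DebianFilesTemplate(string.Template):
    delimiter = '%'  # placeholders look like %{product} instead of $product

s = DebianFilesTemplate('%{product}-jmx (%{version}-%{release}-1) %{codename}; urgency=medium')
print(s.substitute(product='scylla', version='1.1', release='0.20230124', codename='stable'))
# -> scylla-jmx (1.1-0.20230124-1) stable; urgency=medium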

34
dist/redhat/build_rpm.sh vendored Executable file

@ -1,34 +0,0 @@
#!/bin/sh -e
RPMBUILD=`pwd`/build/rpmbuild
if [ ! -e dist/redhat/build_rpm.sh ]; then
echo "run build_rpm.sh in top of scylla-jmx dir"
exit 1
fi
sudo yum install -y rpm-build git
OS=`awk '{print $1}' /etc/redhat-release`
if [ "$OS" = "Fedora" ] && [ ! -f /usr/bin/mock ]; then
sudo yum -y install mock
elif [ "$OS" = "CentOS" ] && [ ! -f /usr/bin/yum-builddep ]; then
sudo yum -y install yum-utils
fi
VERSION=$(./SCYLLA-VERSION-GEN)
SCYLLA_VERSION=$(cat build/SCYLLA-VERSION-FILE)
SCYLLA_RELEASE=$(cat build/SCYLLA-RELEASE-FILE)
mkdir -p $RPMBUILD/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}
git archive --format=tar --prefix=scylla-jmx-$SCYLLA_VERSION/ HEAD -o build/rpmbuild/SOURCES/scylla-jmx-$VERSION.tar
cp dist/redhat/scylla-jmx.spec.in $RPMBUILD/SPECS/scylla-jmx.spec
sed -i -e "s/@@VERSION@@/$SCYLLA_VERSION/g" $RPMBUILD/SPECS/scylla-jmx.spec
sed -i -e "s/@@RELEASE@@/$SCYLLA_RELEASE/g" $RPMBUILD/SPECS/scylla-jmx.spec
if [ "$OS" = "Fedora" ]; then
rpmbuild -bs --define "_topdir $RPMBUILD" $RPMBUILD/SPECS/scylla-jmx.spec
/usr/bin/mock rebuild --resultdir=`pwd`/build/rpms $RPMBUILD/SRPMS/scylla-jmx-$VERSION*.src.rpm
else
sudo yum-builddep -y $RPMBUILD/SPECS/scylla-jmx.spec
rpmbuild -ba --define "_topdir $RPMBUILD" $RPMBUILD/SPECS/scylla-jmx.spec
fi

75
dist/redhat/scylla-jmx.spec vendored Normal file

@ -0,0 +1,75 @@
Name: %{product}-jmx
Version: %{version}
Release: %{release}%{?dist}
Summary: Scylla JMX
Group: Applications/Databases
License: AGPLv3
URL: http://www.scylladb.com/
Source0: %{reloc_pkg}
BuildArch: noarch
BuildRequires: systemd-units
Requires: %{product}-server jre-1.8.0-headless
AutoReqProv: no
%description
%prep
%setup -q -n scylla-jmx
%build
%install
./install.sh --packaging --root "$RPM_BUILD_ROOT"
%pre
/usr/sbin/groupadd scylla 2> /dev/null || :
/usr/sbin/useradd -g scylla -s /sbin/nologin -r -d ${_sharedstatedir}/scylla scylla 2> /dev/null || :
ping -c1 `hostname` > /dev/null 2>&1
if [ $? -ne 0 ]; then
echo
echo "**************************************************************"
echo "* WARNING: You need to add hostname on /etc/hosts, otherwise *"
echo "* scylla-jmx will not able to start up. *"
echo "**************************************************************"
echo
fi
%post
if [ $1 -eq 1 ] ; then
/usr/bin/systemctl preset scylla-jmx.service ||:
fi
/usr/bin/systemctl daemon-reload ||:
%preun
if [ $1 -eq 0 ] ; then
/usr/bin/systemctl --no-reload disable scylla-jmx.service ||:
/usr/bin/systemctl stop scylla-jmx.service ||:
fi
%postun
/usr/bin/systemctl daemon-reload ||:
%clean
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root)
%config(noreplace) %{_sysconfdir}/sysconfig/scylla-jmx
%{_unitdir}/scylla-jmx.service
/opt/scylladb/jmx/scylla-jmx
/opt/scylladb/jmx/scylla-jmx-1.1.jar
/opt/scylladb/jmx/symlinks/scylla-jmx
%{_prefix}/lib/scylla/jmx/scylla-jmx
%{_prefix}/lib/scylla/jmx/scylla-jmx-1.1.jar
%{_prefix}/lib/scylla/jmx/symlinks/scylla-jmx
%changelog
* Fri Aug 7 2015 Takuya ASADA <syuu@cloudius-systems.com>
- initial version of scylla-tools.spec

74
dist/redhat/scylla-jmx.spec.in vendored Normal file

@ -1,74 +0,0 @@
Name: scylla-jmx
Version: @@VERSION@@
Release: @@RELEASE@@%{?dist}
Summary: Scylla JMX
Group: Applications/Databases
License: AGPLv3
URL: http://www.scylladb.com/
Source0: %{name}-@@VERSION@@-@@RELEASE@@.tar
BuildArch: noarch
BuildRequires: maven systemd-units java-devel
Requires: scylla-server java-headless
%description
%prep
%setup -q
%build
mvn install
%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT%{_sysconfdir}/sysconfig/
mkdir -p $RPM_BUILD_ROOT%{_unitdir}
mkdir -p $RPM_BUILD_ROOT%{_prefix}/lib/scylla/
install -m644 dist/common/sysconfig/scylla-jmx $RPM_BUILD_ROOT%{_sysconfdir}/sysconfig/
install -m644 dist/redhat/systemd/scylla-jmx.service $RPM_BUILD_ROOT%{_unitdir}/
install -d -m755 $RPM_BUILD_ROOT%{_prefix}/lib/scylla
install -d -m755 $RPM_BUILD_ROOT%{_prefix}/lib/scylla/jmx
install -m644 target/urchin-mbean-1.0.jar $RPM_BUILD_ROOT%{_prefix}/lib/scylla/jmx/
install -m755 dist/common/scripts/* $RPM_BUILD_ROOT%{_prefix}/lib/scylla/jmx
%pre
/usr/sbin/groupadd scylla 2> /dev/null || :
/usr/sbin/useradd -g scylla -s /sbin/nologin -r -d ${_sharedstatedir}/scylla scylla 2> /dev/null || :
ping -c1 `hostname` > /dev/null 2>&1
if [ $? -ne 0 ]; then
echo
echo "**************************************************************"
echo "* WARNING: You need to add hostname on /etc/hosts, otherwise *"
echo "* scylla-jmx will not able to start up. *"
echo "**************************************************************"
echo
fi
%post
%systemd_post scylla-jmx.service
%preun
%systemd_preun scylla-jmx.service
%postun
%systemd_postun
%clean
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root)
%{_sysconfdir}/sysconfig/scylla-jmx
%{_unitdir}/scylla-jmx.service
%{_prefix}/lib/scylla/jmx/jmx_run
%{_prefix}/lib/scylla/jmx/urchin-mbean-1.0.jar
%changelog
* Fri Aug 7 2015 Takuya ASADA <syuu@cloudius-systems.com>
- initial version of scylla-tools.spec

16
dist/redhat/systemd/scylla-jmx.service vendored Normal file

@ -1,16 +0,0 @@
[Unit]
Description=Scylla JMX
Requires=scylla-server.service
After=scylla-server.service
[Service]
Type=simple
EnvironmentFile=/etc/sysconfig/scylla-jmx
User=scylla
Group=scylla
ExecStart=/usr/lib/scylla/jmx/jmx_run
KillMode=process
Restart=always
[Install]
WantedBy=multi-user.target

10
dist/ubuntu/build_deb.sh vendored Executable file

@ -1,10 +0,0 @@
#!/bin/sh -e
if [ ! -e dist/ubuntu/build_deb.sh ]; then
echo "run build_deb.sh in top of scylla dir"
exit 1
fi
sudo apt-get -y install debhelper maven openjdk-7-jdk
debuild -r fakeroot --no-tgz-check -us -uc


@ -0,0 +1,3 @@
License: MIT
https://github.com/Kentzo/git-archive-all

26
install-dependencies.sh Executable file

@ -0,0 +1,26 @@
#!/bin/bash
#
# This file is open source software, licensed to you under the terms
# of the Apache License, Version 2.0 (the "License"). See the NOTICE file
# distributed with this work for additional information regarding copyright
# ownership. You may not use this file except in compliance with the License.
#
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
. /etc/os-release
if [ "$ID" = "ubuntu" ] || [ "$ID" = "debian" ]; then
apt -y install maven openjdk-8-jdk-headless
elif [ "$ID" = "fedora" ] || [ "$ID" = "centos" ]; then
dnf install -y maven java-1.8.0-openjdk-devel
fi

173
install.sh Executable file

@ -0,0 +1,173 @@
#!/bin/bash
#
# Copyright (C) 2019 ScyllaDB
#
#
# This file is part of Scylla.
#
# Scylla is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Scylla is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Scylla. If not, see <http://www.gnu.org/licenses/>.
#
set -e
print_usage() {
cat <<EOF
Usage: install.sh [options]
Options:
--root /path/to/root alternative install root (default /)
--prefix /prefix directory prefix (default /usr)
--nonroot shortcut of '--disttype nonroot'
--sysconfdir /etc/sysconfig specify sysconfig directory name
--packaging use install.sh for packaging
--without-systemd skip installing systemd units
--help this helpful message
EOF
exit 1
}
root=/
sysconfdir=/etc/sysconfig
nonroot=false
packaging=false
without_systemd=false
while [ $# -gt 0 ]; do
case "$1" in
"--root")
root="$2"
shift 2
;;
"--prefix")
prefix="$2"
shift 2
;;
"--nonroot")
nonroot=true
shift 1
;;
"--sysconfdir")
sysconfdir="$2"
shift 2
;;
"--packaging")
packaging=true
shift 1
;;
"--without-systemd")
without_systemd=true
shift 1
;;
"--help")
shift 1
print_usage
;;
*)
print_usage
;;
esac
done
check_usermode_support() {
user=$(systemctl --help|grep -e '--user')
[ -n "$user" ]
}
if ! $packaging; then
has_java=false
if [ -x /usr/bin/java ]; then
javaver=$(/usr/bin/java -version 2>&1|head -n1|cut -f 3 -d " ")
has_java=true
fi
if ! $has_java; then
echo "Please install openjdk-8, openjdk-11, or openjdk-17 before running install.sh."
exit 1
fi
fi
if [ -z "$prefix" ]; then
if $nonroot; then
prefix=~/scylladb
else
prefix=/opt/scylladb
fi
fi
rprefix=$(realpath -m "$root/$prefix")
if ! $nonroot; then
retc="$root/etc"
rsysconfdir="$root/$sysconfdir"
rusr="$root/usr"
rsystemd="$rusr/lib/systemd/system"
else
retc="$rprefix/etc"
rsysconfdir="$rprefix/$sysconfdir"
rsystemd="$HOME/.config/systemd/user"
fi
install -d -m755 "$rsysconfdir"
if ! $without_systemd; then
install -d -m755 "$rsystemd"
fi
install -d -m755 "$rprefix/scripts" "$rprefix/jmx" "$rprefix/jmx/symlinks"
install -m644 dist/common/sysconfig/scylla-jmx -Dt "$rsysconfdir"
if ! $without_systemd; then
install -m644 dist/common/systemd/scylla-jmx.service -Dt "$rsystemd"
fi
if ! $nonroot && ! $without_systemd; then
if [ "$sysconfdir" != "/etc/sysconfig" ]; then
install -d -m755 "$retc"/systemd/system/scylla-jmx.service.d
cat << EOS > "$retc"/systemd/system/scylla-jmx.service.d/sysconfdir.conf
[Service]
EnvironmentFile=
EnvironmentFile=$sysconfdir/scylla-jmx
EOS
fi
elif ! $without_systemd; then
install -d -m755 "$rsystemd"/scylla-jmx.service.d
cat << EOS > "$rsystemd"/scylla-jmx.service.d/nonroot.conf
[Service]
EnvironmentFile=
EnvironmentFile=$retc/sysconfig/scylla-jmx
ExecStart=
ExecStart=$rprefix/jmx/scylla-jmx \$SCYLLA_JMX_PORT \$SCYLLA_API_PORT \$SCYLLA_API_ADDR \$SCYLLA_JMX_ADDR \$SCYLLA_JMX_FILE \$SCYLLA_JMX_LOCAL \$SCYLLA_JMX_REMOTE \$SCYLLA_JMX_DEBUG
User=
Group=
WorkingDirectory=$rprefix
EOS
fi
install -m644 scylla-jmx-1.1.jar "$rprefix/jmx"
install -m755 scylla-jmx "$rprefix/jmx"
ln -sf /usr/bin/java "$rprefix/jmx/symlinks/scylla-jmx"
if ! $nonroot; then
install -m755 -d "$rusr"/lib/scylla/jmx/symlinks
ln -srf "$rprefix"/jmx/scylla-jmx-1.1.jar "$rusr"/lib/scylla/jmx/
ln -srf "$rprefix"/jmx/scylla-jmx "$rusr"/lib/scylla/jmx/
ln -sf /usr/bin/java "$rusr"/lib/scylla/jmx/symlinks/scylla-jmx
fi
if $nonroot; then
sed -i -e "s#/var/lib/scylla#$rprefix#g" "$rsysconfdir"/scylla-jmx
sed -i -e "s#/etc/scylla#$rprefix/etc/scylla#g" "$rsysconfdir"/scylla-jmx
sed -i -e "s#/opt/scylladb/jmx#$rprefix/jmx#g" "$rsysconfdir"/scylla-jmx
if ! $without_systemd && check_usermode_support; then
systemctl --user daemon-reload
fi
echo "Scylla-JMX non-root install completed."
elif ! $without_systemd && ! $packaging; then
systemctl --system daemon-reload
fi

108
pom.xml

@ -2,73 +2,81 @@
 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
   <modelVersion>4.0.0</modelVersion>
-  <groupId>com.cloudius.urchin</groupId>
-  <artifactId>urchin-mbean</artifactId>
-  <version>1.0</version>
+  <artifactId>scylla-jmx</artifactId>
+  <version>1.1</version>
   <packaging>jar</packaging>
-  <name>Urchin MBean</name>
-  <properties>
-    <maven.compiler.target>1.7</maven.compiler.target>
-    <maven.compiler.source>1.7</maven.compiler.source>
-  </properties>
+  <parent>
+    <groupId>it.cavallium.scylladb.jmx</groupId>
+    <artifactId>scylla-jmx-parent</artifactId>
+    <version>1.1</version>
+    <relativePath>./scylla-jmx-parent/pom.xml</relativePath>
+  </parent>
+  <name>Scylla JMX</name>
   <dependencies>
     <dependency>
-      <groupId>org.glassfish.jersey.core</groupId>
-      <artifactId>jersey-common</artifactId>
-      <version>2.22.1</version>
-    </dependency>
-    <dependency>
-      <groupId>javax.ws.rs</groupId>
-      <artifactId>javax.ws.rs-api</artifactId>
-      <version>2.0.1</version>
-    </dependency>
-    <dependency>
-      <groupId>javax.ws.rs</groupId>
-      <artifactId>jsr311-api</artifactId>
-      <version>1.1.1</version>
-    </dependency>
-    <dependency>
-      <groupId>org.glassfish.jersey.core</groupId>
-      <artifactId>jersey-client</artifactId>
-      <version>2.22.1</version>
+      <groupId>it.cavallium.scylladb.jmx</groupId>
+      <artifactId>scylla-apiclient</artifactId>
+      <version>1.1</version>
     </dependency>
     <dependency>
       <groupId>junit</groupId>
       <artifactId>junit</artifactId>
-      <version>4.8.2</version>
+      <version>4.13.1</version>
       <scope>test</scope>
     </dependency>
-    <dependency>
-      <groupId>org.glassfish</groupId>
-      <artifactId>javax.json</artifactId>
-      <version>1.0.4</version>
-    </dependency>
-    <dependency>
-      <groupId>com.google.guava</groupId>
-      <artifactId>guava</artifactId>
-      <version>18.0</version>
-    </dependency>
-    <dependency>
-      <groupId>com.yammer.metrics</groupId>
-      <artifactId>metrics-core</artifactId>
-      <version>2.2.0</version>
-    </dependency>
-    <dependency>
-      <groupId>com.google.collections</groupId>
-      <artifactId>google-collections</artifactId>
-      <version>1.0</version>
-    </dependency>
   </dependencies>
   <build>
     <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-compiler-plugin</artifactId>
+        <version>3.10.1</version>
+        <configuration>
+          <source>11</source>
+          <target>11</target>
+          <compilerArgs>
+            <arg>--add-exports</arg>
+            <arg>java.management/com.sun.jmx.mbeanserver=scylla.jmx</arg>
+            <arg>--add-exports</arg>
+            <arg>java.management/com.sun.jmx.interceptor=scylla.jmx</arg>
+          </compilerArgs>
+        </configuration>
+      </plugin>
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-shade-plugin</artifactId>
-        <version>2.4.1</version>
+        <version>3.4.1</version>
+        <configuration>
+          <artifactSet>
+            <includes>
+              <include>*:*</include>
+            </includes>
+            <excludes>
+              <exclude>com.sun.activation:jakarta.activation</exclude>
+            </excludes>
+          </artifactSet>
+          <filters>
+            <filter>
+              <artifact>*:*</artifact>
+              <excludes>
+                <exclude>module-info.class</exclude>
+                <exclude>META-INF/versions/*/module-info.class</exclude>
+                <exclude>META-INF/*.SF</exclude>
+                <exclude>META-INF/*.DSA</exclude>
+                <exclude>META-INF/*.RSA</exclude>
+                <exclude>META-INF/MANIFEST.MF</exclude>
+                <exclude>META-INF/*.MD</exclude>
+                <exclude>META-INF/*.md</exclude>
+                <exclude>META-INF/LICENSE</exclude>
+                <exclude>META-INF/LICENSE.txt</exclude>
+                <exclude>META-INF/NOTICE</exclude>
+              </excludes>
+            </filter>
+          </filters>
+        </configuration>
         <executions>
           <execution>
             <phase>package</phase>
@ -79,7 +87,7 @@
             <transformers>
               <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                 <manifestEntries>
-                  <Main-Class>com.cloudius.urchin.main.Main</Main-Class>
+                  <Main-Class>com.scylladb.jmx.main.Main</Main-Class>
                 </manifestEntries>
               </transformer>
             </transformers>

42
reloc/build_deb.sh Executable file

@ -0,0 +1,42 @@
#!/bin/bash -e
print_usage() {
echo "build_deb.sh --reloc-pkg build/scylla-jmx-package.tar.gz"
echo " --reloc-pkg specify relocatable package path"
echo " --builddir specify Debian package build path"
exit 1
}
RELOC_PKG=build/scylla-jmx-package.tar.gz
BUILDDIR=build/debian
while [ $# -gt 0 ]; do
case "$1" in
"--reloc-pkg")
RELOC_PKG=$2
shift 2
;;
"--builddir")
BUILDDIR="$2"
shift 2
;;
*)
print_usage
;;
esac
done
RELOC_PKG=$(readlink -f $RELOC_PKG)
rm -rf "$BUILDDIR"/scylla-package "$BUILDDIR"/scylla-package.orig "$BUILDDIR"/debian
mkdir -p "$BUILDDIR"/scylla-package
tar -C "$BUILDDIR"/scylla-package -xpf $RELOC_PKG
cd "$BUILDDIR"/scylla-package
RELOC_PKG=$(readlink -f $RELOC_PKG)
mv scylla-jmx/debian debian
PKG_NAME=$(dpkg-parsechangelog --show-field Source)
# XXX: Drop revision number from version string.
# Since it's always '1', this should be okay for now.
PKG_VERSION=$(dpkg-parsechangelog --show-field Version |sed -e 's/-1$//')
ln -fv $RELOC_PKG ../"$PKG_NAME"_"$PKG_VERSION".orig.tar.gz
debuild -rfakeroot -us -uc

70
reloc/build_reloc.sh Executable file

@ -0,0 +1,70 @@
#!/bin/bash -e
. /etc/os-release
print_usage() {
echo "build_reloc.sh --clean --nodeps"
echo " --clean clean build directory"
echo " --nodeps skip installing dependencies"
echo " --version V product-version-release string (overriding SCYLLA-VERSION-GEN)"
exit 1
}
CLEAN=
NODEPS=
VERSION_OVERRIDE=
while [ $# -gt 0 ]; do
case "$1" in
"--clean")
CLEAN=yes
shift 1
;;
"--nodeps")
NODEPS=yes
shift 1
;;
"--version")
VERSION_OVERRIDE="$2"
shift 2
;;
*)
print_usage
;;
esac
done
VERSION=$(./SCYLLA-VERSION-GEN ${VERSION_OVERRIDE:+ --version "$VERSION_OVERRIDE"})
# the command above should generate build/SCYLLA-PRODUCT-FILE and some other
# version-related files
PRODUCT=`cat build/SCYLLA-PRODUCT-FILE`
DEST="build/$PRODUCT-jmx-$VERSION.noarch.tar.gz"
is_redhat_variant() {
[ -f /etc/redhat-release ]
}
is_debian_variant() {
[ -f /etc/debian_version ]
}
if [ ! -e reloc/build_reloc.sh ]; then
echo "run build_reloc.sh in top of scylla dir"
exit 1
fi
if [ "$CLEAN" = "yes" ]; then
rm -rf build target
fi
if [ -f "$DEST" ]; then
rm "$DEST"
fi
if [ -z "$NODEPS" ]; then
sudo ./install-dependencies.sh
fi
mvn -B --file scylla-jmx-parent/pom.xml install
./SCYLLA-VERSION-GEN ${VERSION_OVERRIDE:+ --version "$VERSION_OVERRIDE"}
./dist/debian/debian_files_gen.py
scripts/create-relocatable-package.py "$DEST"

52
reloc/build_rpm.sh Executable file

@ -0,0 +1,52 @@
#!/bin/bash -e
print_usage() {
echo "build_rpm.sh --reloc-pkg build/scylla-jmx-package.tar.gz"
echo " --reloc-pkg specify relocatable package path"
echo " --builddir specify rpmbuild directory"
exit 1
}
RELOC_PKG=build/scylla-jmx-package.tar.gz
BUILDDIR=build/redhat
while [ $# -gt 0 ]; do
case "$1" in
"--reloc-pkg")
RELOC_PKG=$2
shift 2
;;
"--builddir")
BUILDDIR="$2"
shift 2
;;
*)
print_usage
;;
esac
done
RELOC_PKG=$(readlink -f $RELOC_PKG)
RPMBUILD=$(readlink -f $BUILDDIR)
mkdir -p "$BUILDDIR"
tar -C "$BUILDDIR" -xpf $RELOC_PKG scylla-jmx/SCYLLA-RELEASE-FILE scylla-jmx/SCYLLA-RELOCATABLE-FILE scylla-jmx/SCYLLA-VERSION-FILE scylla-jmx/SCYLLA-PRODUCT-FILE scylla-jmx/dist/redhat
cd "$BUILDDIR"/scylla-jmx
RELOC_PKG_BASENAME=$(basename "$RELOC_PKG")
SCYLLA_VERSION=$(cat SCYLLA-VERSION-FILE)
SCYLLA_RELEASE=$(cat SCYLLA-RELEASE-FILE)
VERSION=$SCYLLA_VERSION-$SCYLLA_RELEASE
PRODUCT=$(cat SCYLLA-PRODUCT-FILE)
mkdir -p $RPMBUILD/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}
ln -fv $RELOC_PKG $RPMBUILD/SOURCES/
parameters=(
-D"version $SCYLLA_VERSION"
-D"release $SCYLLA_RELEASE"
-D"product $PRODUCT"
-D"reloc_pkg $RELOC_PKG_BASENAME"
)
cp dist/redhat/scylla-jmx.spec $RPMBUILD/SPECS
# this rpm can be installed on both Fedora and CentOS 7, so drop the distribution name from the file name
rpmbuild -ba "${parameters[@]}" --define '_binary_payload w2.xzdio' --define "_topdir $RPMBUILD" --undefine "dist" $RPMBUILD/SPECS/scylla-jmx.spec

64
scripts/create-relocatable-package.py Executable file

@ -0,0 +1,64 @@
#!/usr/bin/python3
#
# Copyright (C) 2018 ScyllaDB
#
#
# This file is part of Scylla.
#
# Scylla is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Scylla is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Scylla. If not, see <http://www.gnu.org/licenses/>.
#
import argparse
import io
import os
import tarfile
import pathlib
RELOC_PREFIX='scylla-jmx'
def reloc_add(self, name, arcname=None, recursive=True, *, filter=None):
if arcname:
return self.add(name, arcname="{}/{}".format(RELOC_PREFIX, arcname))
else:
return self.add(name, arcname="{}/{}".format(RELOC_PREFIX, name))
tarfile.TarFile.reloc_add = reloc_add
ap = argparse.ArgumentParser(description='Create a relocatable scylla package.')
ap.add_argument('dest',
help='Destination file (tar format)')
args = ap.parse_args()
output = args.dest
ar = tarfile.open(output, mode='w|gz')
# relocatable package format version = 2.2
with open('build/.relocatable_package_version', 'w') as f:
f.write('2.2\n')
ar.add('build/.relocatable_package_version', arcname='.relocatable_package_version')
pathlib.Path('build/SCYLLA-RELOCATABLE-FILE').touch()
ar.reloc_add('build/SCYLLA-RELOCATABLE-FILE', arcname='SCYLLA-RELOCATABLE-FILE')
ar.reloc_add('build/SCYLLA-RELEASE-FILE', arcname='SCYLLA-RELEASE-FILE')
ar.reloc_add('build/SCYLLA-VERSION-FILE', arcname='SCYLLA-VERSION-FILE')
ar.reloc_add('build/SCYLLA-PRODUCT-FILE', arcname='SCYLLA-PRODUCT-FILE')
ar.reloc_add('dist')
ar.reloc_add('install.sh')
ar.reloc_add('target/scylla-jmx-1.1.jar', arcname='scylla-jmx-1.1.jar')
ar.reloc_add('scripts/scylla-jmx', arcname='scylla-jmx')
ar.reloc_add('README.md')
ar.reloc_add('NOTICE')
ar.reloc_add('build/debian/debian', arcname='debian')
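
For reference, the reloc_add helper monkey-patched onto tarfile.TarFile above does one thing: it re-roots every member under the scylla-jmx/ prefix before delegating to the regular add(). A standalone sketch under the same assumptions (the demo file name is hypothetical):

import pathlib
import tarfile

RELOC_PREFIX = 'scylla-jmx'

def reloc_add(self, name, arcname=None, recursive=True, *, filter=None):
    # Same idea as the script above: prefix the archive name, then delegate.
    return self.add(name, arcname='{}/{}'.format(RELOC_PREFIX, arcname or name))

tarfile.TarFile.reloc_add = reloc_add

pathlib.Path('demo.txt').write_text('hello\n')  # hypothetical payload
with tarfile.open('demo.tar.gz', mode='w|gz') as ar:
    ar.reloc_add('demo.txt')
with tarfile.open('demo.tar.gz') as ar:
    print(ar.getnames())  # ['scylla-jmx/demo.txt']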

494
scripts/git-archive-all Executable file

@ -0,0 +1,494 @@
#! /usr/bin/env python
# coding=utf-8
from __future__ import print_function
from __future__ import unicode_literals
__version__ = "1.9"
import logging
from os import extsep, path, readlink, curdir
from subprocess import CalledProcessError, Popen, PIPE
import sys
import tarfile
from zipfile import ZipFile, ZipInfo, ZIP_DEFLATED
class GitArchiver(object):
"""
GitArchiver
Scan a git repository and export all tracked files and submodules.
Checks for .gitattributes files in each directory and uses 'export-ignore'
pattern entries to exclude matching files from the archive.
>>> archiver = GitArchiver(main_repo_abspath='my/repo/path')
>>> archiver.create('output.zip')
"""
LOG = logging.getLogger('GitArchiver')
def __init__(self, prefix='', exclude=True, force_sub=False, extra=None, main_repo_abspath=None):
"""
@param prefix: Prefix used to prepend all paths in the resulting archive.
Extra file paths are only prefixed if they are not relative.
E.g. if prefix is 'foo' and extra is ['bar', '/baz'] the resulting archive will look like this:
/
baz
foo/
bar
@type prefix: string
@param exclude: Determines whether archiver should follow rules specified in .gitattributes files.
@type exclude: bool
@param force_sub: Determines whether submodules are initialized and updated before archiving.
@type force_sub: bool
@param extra: List of extra paths to include in the resulting archive.
@type extra: list
@param main_repo_abspath: Absolute path to the main repository (or one of subdirectories).
If given path is path to a subdirectory (but not a submodule directory!) it will be replaced
with abspath to top-level directory of the repository.
If None, current cwd is used.
@type main_repo_abspath: string
"""
if extra is None:
extra = []
if main_repo_abspath is None:
main_repo_abspath = path.abspath('')
elif not path.isabs(main_repo_abspath):
raise ValueError("You MUST pass absolute path to the main git repository.")
try:
self.run_shell("[ -d .git ] || git rev-parse --git-dir > /dev/null 2>&1", main_repo_abspath)
except Exception as e:
raise ValueError("Not a git repository (or any of the parent directories).")
main_repo_abspath = path.abspath(self.read_git_shell('git rev-parse --show-toplevel', main_repo_abspath).rstrip())
self.prefix = prefix
self.exclude = exclude
self.extra = extra
self.force_sub = force_sub
self.main_repo_abspath = main_repo_abspath
def create(self, output_path, dry_run=False, output_format=None):
"""
Create the archive at output_path.
Type of the archive is determined either by extension of output_path or by output_format.
Supported formats are: gz, zip, bz2, xz, tar, tgz, txz
@param output_path: Output file path.
@type output_path: string
@param dry_run: Determines whether create should do nothing but print what it would archive.
@type dry_run: bool
@param output_format: Determines format of the output archive. If None, format is determined from extension
of output_path.
@type output_format: string
"""
if output_format is None:
file_name, file_ext = path.splitext(output_path)
output_format = file_ext[len(extsep):].lower()
self.LOG.debug("Output format is not explicitly set, determined format is {}.".format(output_format))
if not dry_run:
if output_format == 'zip':
archive = ZipFile(path.abspath(output_path), 'w')
def add_file(file_path, arcname):
if not path.islink(file_path):
archive.write(file_path, arcname, ZIP_DEFLATED)
else:
i = ZipInfo(arcname)
i.create_system = 3
i.external_attr = 0xA1ED0000
archive.writestr(i, readlink(file_path))
elif output_format in ['tar', 'bz2', 'gz', 'xz', 'tgz', 'txz']:
if output_format == 'tar':
t_mode = 'w'
elif output_format == 'tgz':
t_mode = 'w:gz'
elif output_format == 'txz':
t_mode = 'w:xz'
else:
t_mode = 'w:{}'.format(output_format)
archive = tarfile.open(path.abspath(output_path), t_mode)
add_file = lambda file_path, arcname: archive.add(file_path, arcname)
else:
raise RuntimeError("Unknown format: {}".format(output_format))
def archiver(file_path, arcname):
self.LOG.debug("Compressing {} => {}...".format(file_path, arcname))
add_file(file_path, arcname)
else:
archive = None
archiver = lambda file_path, arcname: self.LOG.info("{} => {}".format(file_path, arcname))
self.archive_all_files(archiver)
if archive is not None:
archive.close()
def get_exclude_patterns(self, repo_abspath, repo_file_paths):
"""
Returns exclude patterns for a given repo. It looks for .gitattributes files in repo_file_paths.
Resulting dictionary will contain exclude patterns per path (relative to the repo_abspath).
E.g. {('.', 'Catalyst', 'Editions', 'Base'): ['Foo*', '*Bar']}
@type repo_abspath: string
@param repo_abspath: Absolute path to the git repository.
@type repo_file_paths: list
@param repo_file_paths: List of paths relative to the repo_abspath that are under git control.
@rtype: dict
@return: Dictionary representing exclude patterns.
Keys are tuples of strings. Values are lists of strings.
Returns None if self.exclude is not set.
"""
if not self.exclude:
return None
def read_attributes(attributes_abspath):
patterns = []
if path.isfile(attributes_abspath):
attributes = open(attributes_abspath, 'r').readlines()
patterns = []
for line in attributes:
tokens = line.strip().split()
if "export-ignore" in tokens[1:]:
patterns.append(tokens[0])
return patterns
exclude_patterns = {(): []}
# There may be no gitattributes.
try:
global_attributes_abspath = self.read_shell("git config --get core.attributesfile", repo_abspath).rstrip()
exclude_patterns[()] = read_attributes(global_attributes_abspath)
except:
# And it's valid to not have them.
pass
for attributes_abspath in [path.join(repo_abspath, f) for f in repo_file_paths if f.endswith(".gitattributes")]:
# Each .gitattributes affects only files within its directory.
key = tuple(self.get_path_components(repo_abspath, path.dirname(attributes_abspath)))
exclude_patterns[key] = read_attributes(attributes_abspath)
local_attributes_abspath = path.join(repo_abspath, ".git", "info", "attributes")
key = tuple(self.get_path_components(repo_abspath, repo_abspath))
if key in exclude_patterns:
exclude_patterns[key].extend(read_attributes(local_attributes_abspath))
else:
exclude_patterns[key] = read_attributes(local_attributes_abspath)
return exclude_patterns
def is_file_excluded(self, repo_abspath, repo_file_path, exclude_patterns):
"""
Checks whether file at a given path is excluded.
@type repo_abspath: string
@param repo_abspath: Absolute path to the git repository.
@type repo_file_path: string
@param repo_file_path: Path to a file within repo_abspath.
@type exclude_patterns: dict
@param exclude_patterns: Exclude patterns with format specified for get_exclude_patterns.
@rtype: bool
@return: True if file should be excluded. Otherwise False.
"""
if exclude_patterns is None or not len(exclude_patterns):
return False
from fnmatch import fnmatch
file_name = path.basename(repo_file_path)
components = self.get_path_components(repo_abspath, path.join(repo_abspath, path.dirname(repo_file_path)))
is_excluded = False
# We should check all patterns specified in intermediate directories to the given file.
# At the end we should also check for the global patterns (key '()' or empty tuple).
while not is_excluded:
key = tuple(components)
if key in exclude_patterns:
patterns = exclude_patterns[key]
for p in patterns:
if fnmatch(file_name, p) or fnmatch(repo_file_path, p):
self.LOG.debug("Exclude pattern matched {}: {}".format(p, repo_file_path))
is_excluded = True
if not len(components):
break
components.pop()
return is_excluded
def archive_all_files(self, archiver):
"""
Archive all files using archiver.
@param archiver: Function that accepts 2 arguments: abspath to file on the system and relative path within archive.
"""
for file_path in self.extra:
archiver(path.abspath(file_path), path.join(self.prefix, file_path))
for file_path in self.walk_git_files():
archiver(path.join(self.main_repo_abspath, file_path), path.join(self.prefix, file_path))
def walk_git_files(self, repo_path=''):
"""
An iterator method that yields a file path relative to main_repo_abspath
for each file that should be included in the archive.
Skips those that match the exclusion patterns found in
any discovered .gitattributes files along the way.
Recurses into submodules as well.
@type repo_path: string
@param repo_path: Path to the git submodule repository relative to main_repo_abspath.
@rtype: iterator
@return: Iterator to traverse files under git control relative to main_repo_abspath.
"""
repo_abspath = path.join(self.main_repo_abspath, repo_path)
repo_file_paths = self.read_git_shell("git ls-files --cached --full-name --no-empty-directory", repo_abspath).splitlines()
exclude_patterns = self.get_exclude_patterns(repo_abspath, repo_file_paths)
for repo_file_path in repo_file_paths:
# Git puts path in quotes if file path has unicode characters.
repo_file_path = repo_file_path.strip('"') # file path relative to current repo
file_name = path.basename(repo_file_path)
main_repo_file_path = path.join(repo_path, repo_file_path) # file path relative to the main repo
# Only list symlinks and files whose names don't start with .git.
if file_name.startswith(".git") or (not path.islink(main_repo_file_path) and path.isdir(main_repo_file_path)):
continue
if self.is_file_excluded(repo_abspath, repo_file_path, exclude_patterns):
continue
yield main_repo_file_path
if self.force_sub:
self.run_shell("git submodule init", repo_abspath)
self.run_shell("git submodule update", repo_abspath)
for submodule_path in self.read_shell("git submodule --quiet foreach 'pwd -P'", repo_abspath).splitlines():
# Shell command returns absolute paths to submodules.
submodule_path = path.relpath(submodule_path, self.main_repo_abspath)
for file_path in self.walk_git_files(submodule_path):
yield file_path
@staticmethod
def get_path_components(repo_abspath, abspath):
"""
Split given abspath into components relative to repo_abspath.
These components are primarily used as unique keys of files and folders within a repository.
E.g. if repo_abspath is '/Documents/Hobby/ParaView/' and abspath is
'/Documents/Hobby/ParaView/Catalyst/Editions/Base/', function will return:
['.', 'Catalyst', 'Editions', 'Base']
First element is always '.' (concrete symbol depends on OS).
@param repo_abspath: Absolute path to the git repository. Normalized via os.path.normpath.
@type repo_abspath: string
@param abspath: Absolute path to a file within repo_abspath. Normalized via os.path.normpath.
@type abspath: string
@return: List of path components.
@rtype: list
"""
repo_abspath = path.normpath(repo_abspath)
abspath = path.normpath(abspath)
if not path.isabs(repo_abspath):
raise ValueError("repo_abspath MUST be absolute path.")
if not path.isabs(abspath):
raise ValueError("abspath MUST be absoulte path.")
if not path.commonprefix([repo_abspath, abspath]):
raise ValueError("abspath (\"{}\") MUST have common prefix with repo_abspath (\"{}\")".format(abspath, repo_abspath))
components = []
while not abspath == repo_abspath:
abspath, tail = path.split(abspath)
if tail:
components.insert(0, tail)
components.insert(0, curdir)
return components
@staticmethod
def run_shell(cmd, cwd=None):
"""
Runs shell command.
@type cmd: string
@param cmd: Command to be executed.
@type cwd: string
@param cwd: Working directory.
@rtype: int
@return: Return code of the command.
@raise CalledProcessError: Raises exception if return code of the command is non-zero.
"""
p = Popen(cmd, shell=True, cwd=cwd)
p.wait()
if p.returncode:
raise CalledProcessError(returncode=p.returncode, cmd=cmd)
return p.returncode
@staticmethod
def read_shell(cmd, cwd=None, encoding='utf-8'):
"""
Runs shell command and reads output.
@type cmd: string
@param cmd: Command to be executed.
@type cwd: string
@param cwd: Working directory.
@type encoding: string
@param encoding: Encoding used to decode bytes returned by Popen into string.
@rtype: string
@return: Output of the command.
@raise CalledProcessError: Raises exception if return code of the command is non-zero.
"""
p = Popen(cmd, shell=True, stdout=PIPE, cwd=cwd)
output, _ = p.communicate()
output = output.decode(encoding)
if p.returncode:
if sys.version_info > (2,6):
raise CalledProcessError(returncode=p.returncode, cmd=cmd, output=output)
else:
raise CalledProcessError(returncode=p.returncode, cmd=cmd)
return output
@staticmethod
def read_git_shell(cmd, cwd=None):
"""
Runs git shell command, reads output and decodes it into unicode string
@type cmd: string
@param cmd: Command to be executed.
@type cwd: string
@param cwd: Working directory.
@rtype: string
@return: Output of the command.
@raise CalledProcessError: Raises exception if return code of the command is non-zero.
"""
p = Popen(cmd, shell=True, stdout=PIPE, cwd=cwd)
output, _ = p.communicate()
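# Round-trip through unicode_escape/raw_unicode_escape to undo git's
# escaping of non-ASCII bytes in quoted paths before decoding as UTF-8.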
output = output.decode('unicode_escape').encode('raw_unicode_escape').decode('utf-8')
if p.returncode:
if sys.version_info > (2,6):
raise CalledProcessError(returncode=p.returncode, cmd=cmd, output=output)
else:
raise CalledProcessError(returncode=p.returncode, cmd=cmd)
return output
if __name__ == '__main__':
from optparse import OptionParser
parser = OptionParser(usage="usage: %prog [-v] [--prefix PREFIX] [--no-exclude] [--force-submodules] [--extra EXTRA1 [EXTRA2]] [--dry-run] OUTPUT_FILE",
version="%prog {}".format(__version__))
parser.add_option('--prefix',
type='string',
dest='prefix',
default=None,
help="prepend PREFIX to each filename in the archive. OUTPUT_FILE name is used by default to avoid tarbomb. You can set it to '' in order to explicitly request tarbomb")
parser.add_option('-v', '--verbose',
action='store_true',
dest='verbose',
help='enable verbose mode')
parser.add_option('--no-exclude',
action='store_false',
dest='exclude',
default=True,
help="don't read .gitattributes files for patterns containing export-ignore attrib")
parser.add_option('--force-submodules',
action='store_true',
dest='force_sub',
help="force a git submodule init && git submodule update at each level before iterating submodules")
parser.add_option('--extra',
action='append',
dest='extra',
default=[],
help="any additional files to include in the archive")
parser.add_option('--dry-run',
action='store_true',
dest='dry_run',
help="don't actually archive anything, just show what would be done")
options, args = parser.parse_args()
if len(args) != 1:
parser.error("You must specify exactly one output file")
output_file_path = args[0]
if path.isdir(output_file_path):
parser.error("You cannot use directory as output")
# avoid tarbomb
if options.prefix is not None:
options.prefix = path.join(options.prefix, '')
else:
import re
output_name = path.basename(output_file_path)
output_name = re.sub(r'(\.zip|\.tar|\.tgz|\.txz|\.gz|\.bz2|\.xz|\.tar\.gz|\.tar\.bz2|\.tar\.xz)$', '', output_name) or "Archive"
options.prefix = path.join(output_name, '')
try:
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter('%(message)s'))
GitArchiver.LOG.addHandler(handler)
GitArchiver.LOG.setLevel(logging.DEBUG if options.verbose else logging.INFO)
archiver = GitArchiver(options.prefix,
options.exclude,
options.force_sub,
options.extra)
archiver.create(output_file_path, options.dry_run)
except Exception as e:
parser.exit(2, "{}\n".format(e))
sys.exit(0)
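# Typical invocations (illustrative; assumes the script is saved as
# git-archive-all.py and is executable):
#     git-archive-all.py --prefix scylla-jmx/ --extra version.txt scylla-jmx.tar.gz
#     git-archive-all.py -v --dry-run scylla-jmx.tar.gz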

View File

@ -1,21 +1,38 @@
#!/bin/sh #!/bin/bash
# #
# Copyright (C) 2015 Cloudius Systems, Ltd. # Copyright (C) 2015 Cloudius Systems, Ltd.
JMX_PORT=7199 JMX_PORT="7199"
API_ADDR="127.0.0.1" JMX_ADDR=
API_PORT="10000"
API_ADDR=
API_PORT=
CONF_FILE=""
DEBUG=""
PARAM_HELP="-h" PARAM_HELP="-h"
PARAM_JMX_PORT="-jp" PARAM_JMX_PORT="-jp"
PARAM_JMX_ADDR="-ja"
PARAM_API_PORT="-p" PARAM_API_PORT="-p"
PARAM_ADDR="-a" PARAM_ADDR="-a"
PARAM_LOCATION="-l" PARAM_LOCATION="-l"
LOCATION="target" LOCATION="target"
LOCATION_SCRIPTS="scripts"
PARAM_FILE="-cf"
ALLOW_REMOTE="-r"
ALLOW_DEBUG="-d"
REMOTE=0
HOSTNAME=`hostname`
PROPERTIES=
JMX_AUTH=-Dcom.sun.management.jmxremote.authenticate=false
JMX_SSL=-Dcom.sun.management.jmxremote.ssl=false
print_help() { print_help() {
cat <<HLPEND cat <<HLPEND
scylla-jmx [$PARAM_HELP] [$PARAM_PORT port] [$PARAM_ADDR address] scylla-jmx [$PARAM_HELP] [$PARAM_API_PORT port] [$PARAM_ADDR address] [$PARAM_JMX_PORT port] [$PARAM_FILE file]
This script is used to run the jmx proxy This script is used to run the jmx proxy
@ -27,7 +44,11 @@ This script receives the following command line arguments:
$PARAM_JMX_PORT <port> - The jmx port to open $PARAM_JMX_PORT <port> - The jmx port to open
$PARAM_API_PORT <port> - The API port to connect to $PARAM_API_PORT <port> - The API port to connect to
$PARAM_ADDR <address> - The API address to connect to $PARAM_ADDR <address> - The API address to connect to
$PARAM_JMX_ADDR <address> - JMX bind address
$PARAM_FILE <file> - A configuration file to use
$PARAM_LOCATION <location> - The location of the jmx proxy jar file $PARAM_LOCATION <location> - The location of the jmx proxy jar file
$ALLOW_REMOTE - When set allow remote jmx connectivity
$ALLOW_DEBUG - When set open debug ports for remote debugger
HLPEND HLPEND
} }
@ -35,31 +56,88 @@ while test "$#" -ne 0
do do
case "$1" in case "$1" in
"$PARAM_API_PORT") "$PARAM_API_PORT")
API_PORT=$2 API_PORT="-Dapiport="$2
shift 2 shift 2
;; ;;
"$PARAM_ADDR") "$PARAM_ADDR")
API_ADDR=$2 API_ADDR="-Dapiaddress="$2
shift 2
;;
"$PARAM_PORT")
API_ADDR=$2
shift 2 shift 2
;; ;;
"$PARAM_JMX_PORT") "$PARAM_JMX_PORT")
JMX_PORT=$2 JMX_PORT=$2
shift 2 shift 2
;; ;;
"$PARAM_JMX_ADDR")
JMX_ADDR=-Dcom.sun.management.jmxremote.host=$2
shift 2
;;
"$PARAM_LOCATION") "$PARAM_LOCATION")
LOCATION=$2 LOCATION=$2
LOCATION_SCRIPTS="$2"
shift 2 shift 2
;; ;;
"$PARAM_FILE")
CONF_FILE="-Dapiconfig="$2
shift 2
;;
"$ALLOW_REMOTE")
REMOTE=1
shift 1
;;
"$PARAM_HELP") "$PARAM_HELP")
print_help print_help
exit 0 exit 0
;; ;;
"$ALLOW_DEBUG")
DEBUG="-agentlib:jdwp=transport=dt_socket,address=127.0.0.1:7690,server=y,suspend=n"
shift 1
;;
-Dcom.sun.management.jmxremote.host=*)
JMX_ADDR=$1
HOSTNAME=${1:36}
shift
;;
-Dcom.sun.management.jmxremote.authenticate=*)
JMX_AUTH=$1
shift 1
;;
-Dcom.sun.management.jmxremote.ssl=*)
JMX_SSL=$1
shift 1
;;
-Dcom.sun.management.jmxremote.local.only=*)
JMX_LOCAL=$1
shift 1
;;
-D*)
PROPERTIES="$PROPERTIES $1"
shift 1
;;
*) *)
echo "Unknown parameter: $1"
print_help
exit 1
esac esac
done done
java -Dapiaddress=$API_ADDR -Dapiport=$API_PORT -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -jar $LOCATION/urchin-mbean-1.0.jar if [ $REMOTE -eq 0 ]; then
if [ -z "$JMX_ADDR" ]; then
JMX_ADDR=-Dcom.sun.management.jmxremote.host=localhost
fi
HOSTNAME=localhost
else
if [ -z "$JMX_LOCAL" ]; then
JMX_LOCAL=-Dcom.sun.management.jmxremote.local.only=false
fi
fi
"$LOCATION_SCRIPTS"/symlinks/scylla-jmx $DEBUG \
$API_PORT $API_ADDR $CONF_FILE -Xmx256m -XX:+UseSerialGC \
-XX:+HeapDumpOnOutOfMemoryError \
$JMX_AUTH $JMX_SSL $JMX_ADDR $JMX_LOCAL \
--add-exports java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \
--add-exports java.management/com.sun.jmx.interceptor=ALL-UNNAMED \
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=$JMX_PORT \
-Djava.rmi.server.hostname=$HOSTNAME -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT \
-Djavax.management.builder.initial=com.scylladb.jmx.utils.APIBuilder \
$PROPERTIES -jar $LOCATION/scylla-jmx-1.1.jar
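# Example invocations (illustrative, not part of the script):
#     scylla-jmx -jp 7199 -a 127.0.0.1 -p 10000               # local-only JMX
#     scylla-jmx -r -ja 0.0.0.0 -cf /etc/scylla/scylla.yaml   # allow remote JMX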

1
scripts/symlinks/scylla-jmx Symbolic link
View File

@ -0,0 +1 @@
/usr/bin/java

99
scylla-apiclient/pom.xml Normal file
View File

@ -0,0 +1,99 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>scylla-apiclient</artifactId>
<packaging>jar</packaging>
<version>1.1</version>
<parent>
<relativePath>../scylla-jmx-parent/pom.xml</relativePath>
<groupId>it.cavallium.scylladb.jmx</groupId>
<artifactId>scylla-jmx-parent</artifactId>
<version>1.1</version>
</parent>
<name>Scylla REST API client</name>
<properties>
<jackson.version>2.14.0</jackson.version>
<jackson.databind.version>2.14.0</jackson.databind.version>
</properties>
<dependencies>
<dependency>
<groupId>org.eclipse.parsson</groupId>
<artifactId>parsson</artifactId>
<version>1.1.1</version>
</dependency>
<dependency>
<groupId>org.yaml</groupId>
<artifactId>snakeyaml</artifactId>
<version>1.33</version>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.core</groupId>
<artifactId>jersey-common</artifactId>
<version>3.1.0</version>
</dependency>
<dependency>
<groupId>jakarta.ws.rs</groupId>
<artifactId>jakarta.ws.rs-api</artifactId>
<version>3.1.0</version>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.core</groupId>
<artifactId>jersey-client</artifactId>
<version>3.1.0</version>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.inject</groupId>
<artifactId>jersey-hk2</artifactId>
<version>3.1.0</version>
</dependency>
<dependency>
<groupId>jakarta.json</groupId>
<artifactId>jakarta.json-api</artifactId>
<version>2.1.1</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>31.1-jre</version>
</dependency>
<dependency>
<groupId>jakarta.activation</groupId>
<artifactId>jakarta.activation-api</artifactId>
<version>2.1.1</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
<version>${jackson.version}</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>${jackson.databind.version}</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.jakarta.rs</groupId>
<artifactId>jackson-jakarta-rs-json-provider</artifactId>
<version>2.14.1</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.10.1</version>
<configuration>
<release>11</release>
</configuration>
</plugin>
</plugins>
</build>
</project>

View File

@ -1,44 +1,64 @@
/* /*
* Copyright 2015 Cloudius Systems * Copyright 2015 Cloudius Systems
*/ */
package com.cloudius.urchin.api; package com.scylladb.jmx.api;
import com.fasterxml.jackson.jakarta.rs.json.JacksonJsonProvider;
import jakarta.json.Json;
import jakarta.json.JsonArray;
import jakarta.json.JsonObject;
import jakarta.json.JsonReader;
import jakarta.json.JsonReaderFactory;
import jakarta.json.JsonString;
import jakarta.ws.rs.ProcessingException;
import jakarta.ws.rs.client.Client;
import jakarta.ws.rs.client.ClientBuilder;
import jakarta.ws.rs.client.Entity;
import jakarta.ws.rs.client.Invocation;
import jakarta.ws.rs.client.WebTarget;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.core.MultivaluedMap;
import jakarta.ws.rs.core.Response;
import java.io.StringReader; import java.io.StringReader;
import java.lang.System.Logger.Level;
import java.net.InetAddress; import java.net.InetAddress;
import java.net.UnknownHostException; import java.net.UnknownHostException;
import java.util.ArrayList; import java.util.ArrayList;
import java.util.HashMap; import java.util.HashMap;
import java.util.HashSet; import java.util.HashSet;
import java.util.Iterator; import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List; import java.util.List;
import java.util.Map; import java.util.Map;
import java.util.Map.Entry; import java.util.Map.Entry;
import java.util.Set; import java.util.Set;
import java.util.function.BiFunction;
import java.util.logging.Logger;
import javax.json.Json;
import javax.json.JsonArray;
import javax.json.JsonObject;
import javax.json.JsonReader;
import javax.json.JsonReaderFactory;
import javax.json.JsonString;
import javax.management.openmbean.TabularData; import javax.management.openmbean.TabularData;
import javax.management.openmbean.TabularDataSupport; import javax.management.openmbean.TabularDataSupport;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.client.Invocation;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.core.Response;
import org.glassfish.jersey.client.ClientConfig; import org.glassfish.jersey.client.ClientConfig;
import com.cloudius.urchin.utils.EstimatedHistogram;
import com.cloudius.urchin.utils.SnapshotDetailsTabularData; import com.scylladb.jmx.api.utils.SnapshotDetailsTabularData;
import com.yammer.metrics.core.HistogramValues;
public class APIClient { public class APIClient {
Map<String, CacheEntry> cache = new HashMap<String, CacheEntry>(); private Map<String, CacheEntry> cache = new HashMap<String, CacheEntry>();
String getCacheKey(String key, MultivaluedMap<String, String> param, long duration) { private final APIConfig config;
private final ClientConfig clientConfig;
private final Client client;
private JsonReaderFactory factory = Json.createReaderFactory(null);
private static final Logger logger = Logger.getLogger(APIClient.class.getName());
public APIClient(APIConfig config) {
this.config = config;
this.clientConfig = new ClientConfig();
clientConfig.register(new JacksonJsonProvider());
this.client = ClientBuilder.newClient(clientConfig);
}
private String getCacheKey(String key, MultivaluedMap<String, String> param, long duration) {
if (duration <= 0) { if (duration <= 0) {
return null; return null;
} }
@ -53,36 +73,31 @@ public class APIClient {
return key; return key;
} }
String getStringFromCache(String key, long duration) { private String getStringFromCache(String key, long duration) {
if (key == null) { if (key == null) {
return null; return null;
} }
CacheEntry value = cache.get(key); CacheEntry value = cache.get(key);
return (value!= null && value.valid(duration))? value.stringValue() : null; return (value != null && value.valid(duration)) ? value.stringValue() : null;
} }
EstimatedHistogram getEstimatedHistogramFromCache(String key, long duration) { private JsonObject getJsonObjectFromCache(String key, long duration) {
if (key == null) { if (key == null) {
return null; return null;
} }
CacheEntry value = cache.get(key); CacheEntry value = cache.get(key);
return (value!= null && value.valid(duration))? value.getEstimatedHistogram() : null; return (value != null && value.valid(duration)) ? value.jsonObject() : null;
} }
JsonReaderFactory factory = Json.createReaderFactory(null);
private static final java.util.logging.Logger logger = java.util.logging.Logger
.getLogger(APIClient.class.getName());
public static String getBaseUrl() { private String getBaseUrl() {
return "http://" + System.getProperty("apiaddress", "localhost") + ":" return config.getBaseUrl();
+ System.getProperty("apiport", "10000");
} }
public Invocation.Builder get(String path, MultivaluedMap<String, String> queryParams) { public Invocation.Builder get(String path, MultivaluedMap<String, String> queryParams) {
Client client = ClientBuilder.newClient( new ClientConfig());
WebTarget webTarget = client.target(getBaseUrl()).path(path); WebTarget webTarget = client.target(getBaseUrl()).path(path);
if (queryParams != null) { if (queryParams != null) {
for (Entry<String, List<String>> qp : queryParams.entrySet()) { for (Entry<String, List<String>> qp : queryParams.entrySet()) {
for (String e : qp.getValue()) { for (String e : qp.getValue()) {
webTarget = webTarget.queryParam(qp.getKey(), e); webTarget = webTarget.queryParam(qp.getKey(), e);
} }
@ -96,22 +111,34 @@ public class APIClient {
} }
public Response post(String path, MultivaluedMap<String, String> queryParams) { public Response post(String path, MultivaluedMap<String, String> queryParams) {
Response response = get(path, queryParams).post(Entity.entity(null, MediaType.TEXT_PLAIN)); return post(path, queryParams, null);
if (response.getStatus() != Response.Status.OK.getStatusCode() ) { }
throw getException(response.readEntity(String.class));
}
return response;
public Response post(String path, MultivaluedMap<String, String> queryParams, Object object, String type) {
try {
Response response = get(path, queryParams).post(Entity.entity(object, type));
if (response.getStatus() != Response.Status.OK.getStatusCode()) {
throw getException("Scylla API server HTTP POST to URL '" + path + "' failed",
response.readEntity(String.class));
}
return response;
} catch (ProcessingException e) {
throw new IllegalStateException("Unable to connect to Scylla API server: " + e.getMessage());
}
}
public Response post(String path, MultivaluedMap<String, String> queryParams, Object object) {
return post(path, queryParams, object, MediaType.TEXT_PLAIN);
} }
public void post(String path) { public void post(String path) {
post(path, null); post(path, null);
} }
public RuntimeException getException(String txt) { public IllegalStateException getException(String msg, String json) {
JsonReader reader = factory.createReader(new StringReader(txt)); JsonReader reader = factory.createReader(new StringReader(json));
JsonObject res = reader.readObject(); JsonObject res = reader.readObject();
return new RuntimeException(res.getString("message")); return new IllegalStateException(msg + ": " + res.getString("message"));
} }
public String postGetVal(String path, MultivaluedMap<String, String> queryParams) { public String postGetVal(String path, MultivaluedMap<String, String> queryParams) {
@ -131,40 +158,47 @@ public class APIClient {
get(path, queryParams).delete(); get(path, queryParams).delete();
return; return;
} }
get(path).delete(); Response response = get(path).delete();
if (response.getStatus() != Response.Status.OK.getStatusCode()) {
throw getException("Scylla API server HTTP delete to URL '" + path + "' failed",
response.readEntity(String.class));
}
} }
public void delete(String path) { public void delete(String path) {
delete(path, null); delete(path, null);
} }
public String getRawValue(String string, public String getRawValue(String string, MultivaluedMap<String, String> queryParams, long duration) {
MultivaluedMap<String, String> queryParams, long duration) { try {
if (string.equals("")) { if (string.equals("")) {
return ""; return "";
} }
String key = getCacheKey(string, queryParams, duration); String key = getCacheKey(string, queryParams, duration);
String res = getStringFromCache(key, duration); String res = getStringFromCache(key, duration);
if (res != null) { if (res != null) {
return res; return res;
} }
Response response = get(string, queryParams).get(Response.class); Response response = get(string, queryParams).get(Response.class);
if (response.getStatus() != Response.Status.OK.getStatusCode() ) { if (response.getStatus() != Response.Status.OK.getStatusCode()) {
// TBD // TBD
// We are currently not caching errors, // We are currently not caching errors,
// it should be reconsidered. // it should be reconsidered.
throw getException(response.readEntity(String.class)); throw getException("Scylla API server HTTP GET to URL '" + string + "' failed",
response.readEntity(String.class));
}
res = response.readEntity(String.class);
if (duration > 0) {
cache.put(key, new CacheEntry(res));
}
return res;
} catch (ProcessingException e) {
throw new IllegalStateException("Unable to connect to Scylla API server: " + e.getMessage());
} }
res = response.readEntity(String.class);
if (duration > 0) {
cache.put(key, new CacheEntry(res));
}
return res;
} }
public String getRawValue(String string, public String getRawValue(String string, MultivaluedMap<String, String> queryParams) {
MultivaluedMap<String, String> queryParams) {
return getRawValue(string, queryParams, 0); return getRawValue(string, queryParams, 0);
} }
@ -177,23 +211,19 @@ public class APIClient {
} }
public String getStringValue(String string, MultivaluedMap<String, String> queryParams) { public String getStringValue(String string, MultivaluedMap<String, String> queryParams) {
return getRawValue(string, return getRawValue(string, queryParams).replaceAll("^\"|\"$", "");
queryParams).replaceAll("^\"|\"$", "");
} }
public String getStringValue(String string, MultivaluedMap<String, String> queryParams, long duration) { public String getStringValue(String string, MultivaluedMap<String, String> queryParams, long duration) {
return getRawValue(string, return getRawValue(string, queryParams, duration).replaceAll("^\"|\"$", "");
queryParams, duration).replaceAll("^\"|\"$", "");
} }
public String getStringValue(String string) { public String getStringValue(String string) {
return getStringValue(string, null); return getStringValue(string, null);
} }
public JsonReader getReader(String string, public JsonReader getReader(String string, MultivaluedMap<String, String> queryParams) {
MultivaluedMap<String, String> queryParams) { return factory.createReader(new StringReader(getRawValue(string, queryParams)));
return factory.createReader(new StringReader(getRawValue(string,
queryParams)));
} }
public JsonReader getReader(String string) { public JsonReader getReader(String string) {
@ -205,8 +235,7 @@ public class APIClient {
return val.toArray(new String[val.size()]); return val.toArray(new String[val.size()]);
} }
public int getIntValue(String string, public int getIntValue(String string, MultivaluedMap<String, String> queryParams) {
MultivaluedMap<String, String> queryParams) {
return Integer.parseInt(getRawValue(string, queryParams)); return Integer.parseInt(getRawValue(string, queryParams));
} }
@ -214,6 +243,19 @@ public class APIClient {
return getIntValue(string, null); return getIntValue(string, null);
} }
public static <T> BiFunction<APIClient, String, T> getReader(Class<T> type) {
if (type == String.class) {
return (c, s) -> type.cast(c.getRawValue(s));
} else if (type == Integer.class) {
return (c, s) -> type.cast(c.getIntValue(s));
} else if (type == Double.class) {
return (c, s) -> type.cast(c.getDoubleValue(s));
} else if (type == Long.class) {
return (c, s) -> type.cast(c.getLongValue(s));
}
throw new IllegalArgumentException(type.getName());
}
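// Usage sketch for the reader factory above (the REST path is hypothetical):
//     BiFunction<APIClient, String, Long> read = APIClient.getReader(Long.class);
//     Long value = read.apply(client, "/some/long/endpoint");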
public boolean getBooleanValue(String string) { public boolean getBooleanValue(String string) {
return Boolean.parseBoolean(getRawValue(string)); return Boolean.parseBoolean(getRawValue(string));
} }
@ -222,8 +264,7 @@ public class APIClient {
return Double.parseDouble(getRawValue(string)); return Double.parseDouble(getRawValue(string));
} }
public List<String> getListStrValue(String string, public List<String> getListStrValue(String string, MultivaluedMap<String, String> queryParams) {
MultivaluedMap<String, String> queryParams) {
JsonReader reader = getReader(string, queryParams); JsonReader reader = getReader(string, queryParams);
JsonArray arr = reader.readArray(); JsonArray arr = reader.readArray();
List<String> res = new ArrayList<String>(arr.size()); List<String> res = new ArrayList<String>(arr.size());
@ -278,8 +319,7 @@ public class APIClient {
return join(arr, ","); return join(arr, ",");
} }
public static String mapToString(Map<String, String> mp, String pairJoin, public static String mapToString(Map<String, String> mp, String pairJoin, String joiner) {
String joiner) {
String res = ""; String res = "";
if (mp != null) { if (mp != null) {
for (String name : mp.keySet()) { for (String name : mp.keySet()) {
@ -296,19 +336,15 @@ public class APIClient {
return mapToString(mp, "=", ","); return mapToString(mp, "=", ",");
} }
public static boolean set_query_param( public static boolean set_query_param(MultivaluedMap<String, String> queryParams, String key, String value) {
MultivaluedMap<String, String> queryParams, String key, String value) { if (queryParams != null && key != null && value != null && !value.equals("")) {
if (queryParams != null && key != null && value != null
&& !value.equals("")) {
queryParams.add(key, value); queryParams.add(key, value);
return true; return true;
} }
return false; return false;
} }
public static boolean set_bool_query_param( public static boolean set_bool_query_param(MultivaluedMap<String, String> queryParams, String key, boolean value) {
MultivaluedMap<String, String> queryParams, String key,
boolean value) {
if (queryParams != null && key != null && value) { if (queryParams != null && key != null && value) {
queryParams.add(key, "true"); queryParams.add(key, "true");
return true; return true;
@ -327,8 +363,7 @@ public class APIClient {
for (int i = 0; i < arr.size(); i++) { for (int i = 0; i < arr.size(); i++) {
JsonObject obj = arr.getJsonObject(i); JsonObject obj = arr.getJsonObject(i);
if (obj.containsKey("key") && obj.containsKey("value")) { if (obj.containsKey("key") && obj.containsKey("value")) {
map.put(obj.getString("key"), map.put(obj.getString("key"), listStrFromJArr(obj.getJsonArray("value")));
listStrFromJArr(obj.getJsonArray("value")));
} }
} }
reader.close(); reader.close();
@ -350,8 +385,7 @@ public class APIClient {
for (int i = 0; i < arr.size(); i++) { for (int i = 0; i < arr.size(); i++) {
JsonObject obj = arr.getJsonObject(i); JsonObject obj = arr.getJsonObject(i);
if (obj.containsKey("key") && obj.containsKey("value")) { if (obj.containsKey("key") && obj.containsKey("value")) {
map.put(listStrFromJArr(obj.getJsonArray("key")), map.put(listStrFromJArr(obj.getJsonArray("key")), listStrFromJArr(obj.getJsonArray("value")));
listStrFromJArr(obj.getJsonArray("value")));
} }
} }
reader.close(); reader.close();
@ -362,8 +396,7 @@ public class APIClient {
return getMapListStrValue(string, null); return getMapListStrValue(string, null);
} }
public Set<String> getSetStringValue(String string, public Set<String> getSetStringValue(String string, MultivaluedMap<String, String> queryParams) {
MultivaluedMap<String, String> queryParams) {
JsonReader reader = getReader(string, queryParams); JsonReader reader = getReader(string, queryParams);
JsonArray arr = reader.readArray(); JsonArray arr = reader.readArray();
Set<String> res = new HashSet<String>(); Set<String> res = new HashSet<String>();
@ -378,14 +411,13 @@ public class APIClient {
return getSetStringValue(string, null); return getSetStringValue(string, null);
} }
public Map<String, String> getMapStrValue(String string, public Map<String, String> getMapStrValue(String string, MultivaluedMap<String, String> queryParams) {
MultivaluedMap<String, String> queryParams) {
if (string.equals("")) { if (string.equals("")) {
return null; return null;
} }
JsonReader reader = getReader(string, queryParams); JsonReader reader = getReader(string, queryParams);
JsonArray arr = reader.readArray(); JsonArray arr = reader.readArray();
Map<String, String> map = new HashMap<String, String>(); Map<String, String> map = new LinkedHashMap<String, String>();
for (int i = 0; i < arr.size(); i++) { for (int i = 0; i < arr.size(); i++) {
JsonObject obj = arr.getJsonObject(i); JsonObject obj = arr.getJsonObject(i);
if (obj.containsKey("key") && obj.containsKey("value")) { if (obj.containsKey("key") && obj.containsKey("value")) {
@ -400,8 +432,28 @@ public class APIClient {
return getMapStrValue(string, null); return getMapStrValue(string, null);
} }
public List<InetAddress> getListInetAddressValue(String string, public Map<String, String> getReverseMapStrValue(String string, MultivaluedMap<String, String> queryParams) {
MultivaluedMap<String, String> queryParams) { if (string.equals("")) {
return null;
}
JsonReader reader = getReader(string, queryParams);
JsonArray arr = reader.readArray();
Map<String, String> map = new HashMap<String, String>();
for (int i = 0; i < arr.size(); i++) {
JsonObject obj = arr.getJsonObject(i);
if (obj.containsKey("key") && obj.containsKey("value")) {
map.put(obj.getString("value"), obj.getString("key"));
}
}
reader.close();
return map;
}
public Map<String, String> getReverseMapStrValue(String string) {
return getReverseMapStrValue(string, null);
}
public List<InetAddress> getListInetAddressValue(String string, MultivaluedMap<String, String> queryParams) {
List<String> vals = getListStrValue(string, queryParams); List<String> vals = getListStrValue(string, queryParams);
List<InetAddress> res = new ArrayList<InetAddress>(); List<InetAddress> res = new ArrayList<InetAddress>();
for (String val : vals) { for (String val : vals) {
@ -424,23 +476,21 @@ public class APIClient {
return null; return null;
} }
private TabularDataSupport getSnapshotData(String ks, JsonArray arr) { private TabularDataSupport getSnapshotData(String key, JsonArray arr) {
TabularDataSupport data = new TabularDataSupport( TabularDataSupport data = new TabularDataSupport(SnapshotDetailsTabularData.TABULAR_TYPE);
SnapshotDetailsTabularData.TABULAR_TYPE);
for (int i = 0; i < arr.size(); i++) { for (int i = 0; i < arr.size(); i++) {
JsonObject obj = arr.getJsonObject(i); JsonObject obj = arr.getJsonObject(i);
if (obj.containsKey("key") && obj.containsKey("cf")) { if (obj.containsKey("ks") && obj.containsKey("cf")) {
SnapshotDetailsTabularData.from(obj.getString("key"), ks, SnapshotDetailsTabularData.from(key, obj.getString("ks"), obj.getString("cf"), obj.getJsonNumber("total").longValue(),
obj.getString("cf"), obj.getInt("total"), obj.getJsonNumber("live").longValue(), data);
obj.getInt("live"), data);
} }
} }
return data; return data;
} }
public Map<String, TabularData> getMapStringSnapshotTabularDataValue( public Map<String, TabularData> getMapStringSnapshotTabularDataValue(String string,
String string, MultivaluedMap<String, String> queryParams) { MultivaluedMap<String, String> queryParams) {
if (string.equals("")) { if (string.equals("")) {
return null; return null;
} }
@ -474,8 +524,7 @@ public class APIClient {
for (int i = 0; i < arr.size(); i++) { for (int i = 0; i < arr.size(); i++) {
try { try {
obj = arr.getJsonObject(i); obj = arr.getJsonObject(i);
res.put(InetAddress.getByName(obj.getString("key")), res.put(InetAddress.getByName(obj.getString("key")), Float.parseFloat(obj.getString("value")));
Float.parseFloat(obj.getString("value")));
} catch (UnknownHostException e) { } catch (UnknownHostException e) {
logger.warning("Bad formatted address " + obj.getString("key")); logger.warning("Bad formatted address " + obj.getString("key"));
} }
@ -486,13 +535,26 @@ public class APIClient {
public Map<InetAddress, Float> getMapInetAddressFloatValue(String string) { public Map<InetAddress, Float> getMapInetAddressFloatValue(String string) {
return getMapInetAddressFloatValue(string, null); return getMapInetAddressFloatValue(string, null);
} }
public Map<String, Long> getMapStringLongValue(String string) {
// TODO Auto-generated method stub public Map<String, Long> getMapStringLongValue(String string, MultivaluedMap<String, String> queryParams) {
return null; Map<String, Long> res = new HashMap<String, Long>();
JsonReader reader = getReader(string, queryParams);
JsonArray arr = reader.readArray();
JsonObject obj = null;
for (int i = 0; i < arr.size(); i++) {
obj = arr.getJsonObject(i);
res.put(obj.getString("key"), obj.getJsonNumber("value").longValue());
}
return res;
} }
public long[] getLongArrValue(String string, public Map<String, Long> getMapStringLongValue(String string) {
MultivaluedMap<String, String> queryParams) { return getMapStringLongValue(string, null);
}
public long[] getLongArrValue(String string, MultivaluedMap<String, String> queryParams) {
JsonReader reader = getReader(string, queryParams); JsonReader reader = getReader(string, queryParams);
JsonArray arr = reader.readArray(); JsonArray arr = reader.readArray();
long[] res = new long[arr.size()]; long[] res = new long[arr.size()];
@ -507,13 +569,25 @@ public class APIClient {
return getLongArrValue(string, null); return getLongArrValue(string, null);
} }
public Map<String, Integer> getMapStringIntegerValue(String string) { public Map<String, Integer> getMapStringIntegerValue(String string, MultivaluedMap<String, String> queryParams) {
// TODO Auto-generated method stub Map<String, Integer> res = new HashMap<String, Integer>();
return null;
JsonReader reader = getReader(string, queryParams);
JsonArray arr = reader.readArray();
JsonObject obj = null;
for (int i = 0; i < arr.size(); i++) {
obj = arr.getJsonObject(i);
res.put(obj.getString("key"), obj.getInt("value"));
}
return res;
} }
public int[] getIntArrValue(String string, public Map<String, Integer> getMapStringIntegerValue(String string) {
MultivaluedMap<String, String> queryParams) { return getMapStringIntegerValue(string, null);
}
public int[] getIntArrValue(String string, MultivaluedMap<String, String> queryParams) {
JsonReader reader = getReader(string, queryParams); JsonReader reader = getReader(string, queryParams);
JsonArray arr = reader.readArray(); JsonArray arr = reader.readArray();
int[] res = new int[arr.size()]; int[] res = new int[arr.size()];
@ -528,8 +602,7 @@ public class APIClient {
return getIntArrValue(string, null); return getIntArrValue(string, null);
} }
public Map<String, Long> getListMapStringLongValue(String string, public Map<String, Long> getListMapStringLongValue(String string, MultivaluedMap<String, String> queryParams) {
MultivaluedMap<String, String> queryParams) {
if (string.equals("")) { if (string.equals("")) {
return null; return null;
} }
@ -546,7 +619,7 @@ public class APIClient {
if (obj.get(k) instanceof JsonString) { if (obj.get(k) instanceof JsonString) {
key = obj.getString(k); key = obj.getString(k);
} else { } else {
val = obj.getInt(k); val = obj.getJsonNumber(k).longValue();
} }
} }
if (val > 0 && !key.equals("")) { if (val > 0 && !key.equals("")) {
@ -562,8 +635,7 @@ public class APIClient {
return getListMapStringLongValue(string, null); return getListMapStringLongValue(string, null);
} }
public JsonArray getJsonArray(String string, public JsonArray getJsonArray(String string, MultivaluedMap<String, String> queryParams) {
MultivaluedMap<String, String> queryParams) {
if (string.equals("")) { if (string.equals("")) {
return null; return null;
} }
@ -577,8 +649,7 @@ public class APIClient {
return getJsonArray(string, null); return getJsonArray(string, null);
} }
public List<Map<String, String>> getListMapStrValue(String string, public List<Map<String, String>> getListMapStrValue(String string, MultivaluedMap<String, String> queryParams) {
MultivaluedMap<String, String> queryParams) {
JsonArray arr = getJsonArray(string, queryParams); JsonArray arr = getJsonArray(string, queryParams);
List<Map<String, String>> res = new ArrayList<Map<String, String>>(); List<Map<String, String>> res = new ArrayList<Map<String, String>>();
for (int i = 0; i < arr.size(); i++) { for (int i = 0; i < arr.size(); i++) {
@ -596,61 +667,36 @@ public class APIClient {
return null; return null;
} }
public JsonObject getJsonObj(String string, public JsonObject getJsonObj(String string, MultivaluedMap<String, String> queryParams, long duration) {
MultivaluedMap<String, String> queryParams) {
if (string.equals("")) { if (string.equals("")) {
return null; return null;
} }
JsonReader reader = getReader(string, queryParams);
JsonObject res = reader.readObject();
reader.close();
return res;
}
public HistogramValues getHistogramValue(String url,
MultivaluedMap<String, String> queryParams) {
HistogramValues res = new HistogramValues();
JsonObject obj = getJsonObj(url, queryParams);
res.count = obj.getJsonNumber("count").longValue();
res.max = obj.getJsonNumber("max").longValue();
res.min = obj.getJsonNumber("min").longValue();
res.sum = obj.getJsonNumber("sum").longValue();
res.variance = obj.getJsonNumber("variance").doubleValue();
res.mean = obj.getJsonNumber("mean").doubleValue();
JsonArray arr = obj.getJsonArray("sample");
if (arr != null) {
res.sample = new long[arr.size()];
for (int i = 0; i < arr.size(); i++) {
res.sample[i] = arr.getJsonNumber(i).longValue();
}
}
return res;
}
public HistogramValues getHistogramValue(String url) {
return getHistogramValue(url, null);
}
public EstimatedHistogram getEstimatedHistogram(String string,
MultivaluedMap<String, String> queryParams, long duration) {
String key = getCacheKey(string, queryParams, duration); String key = getCacheKey(string, queryParams, duration);
EstimatedHistogram res = getEstimatedHistogramFromCache(key, duration); JsonObject res = getJsonObjectFromCache(key, duration);
if (res != null) { if (res != null) {
return res; return res;
} }
res = new EstimatedHistogram(getEstimatedHistogramAsLongArrValue(string, queryParams)); JsonReader reader = getReader(string, queryParams);
res = reader.readObject();
reader.close();
if (duration > 0) { if (duration > 0) {
cache.put(key, new CacheEntry(res)); cache.put(key, new CacheEntry(res));
} }
return res; return res;
} }
public long[] getEstimatedHistogramAsLongArrValue(String string,
MultivaluedMap<String, String> queryParams) { public JsonObject getJsonObj(String string, MultivaluedMap<String, String> queryParams) {
return getJsonObj(string, queryParams, 0);
}
public long[] getEstimatedHistogramAsLongArrValue(String string, MultivaluedMap<String, String> queryParams) {
JsonObject obj = getJsonObj(string, queryParams); JsonObject obj = getJsonObj(string, queryParams);
JsonArray arr = obj.getJsonArray("buckets"); JsonArray arr = obj.getJsonArray("buckets");
if (arr == null) {
return new long[0];
}
long res[] = new long[arr.size()]; long res[] = new long[arr.size()];
for (int i = 0; i< arr.size(); i++) { for (int i = 0; i < arr.size(); i++) {
res[i] = arr.getJsonNumber(i).longValue(); res[i] = arr.getJsonNumber(i).longValue();
} }
return res; return res;
@ -659,4 +705,37 @@ public class APIClient {
public long[] getEstimatedHistogramAsLongArrValue(String string) { public long[] getEstimatedHistogramAsLongArrValue(String string) {
return getEstimatedHistogramAsLongArrValue(string, null); return getEstimatedHistogramAsLongArrValue(string, null);
} }
public Map<String, Double> getMapStringDouble(String string, MultivaluedMap<String, String> queryParams) {
if (string.equals("")) {
return null;
}
JsonReader reader = getReader(string, queryParams);
JsonArray arr = reader.readArray();
Map<String, Double> map = new HashMap<String, Double>();
for (int i = 0; i < arr.size(); i++) {
JsonObject obj = arr.getJsonObject(i);
Iterator<String> it = obj.keySet().iterator();
String key = "";
double val = -1;
while (it.hasNext()) {
String k = it.next();
if (obj.get(k) instanceof JsonString) {
key = obj.getString(k);
} else {
val = obj.getJsonNumber(k).doubleValue();
}
}
if (!key.equals("")) {
map.put(key, val);
}
}
reader.close();
return map;
}
public Map<String, Double> getMapStringDouble(String string) {
return getMapStringDouble(string, null);
}
} }

View File

@ -0,0 +1,111 @@
package com.scylladb.jmx.api;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.util.Map;
import org.yaml.snakeyaml.Yaml;
/*
* Copyright (C) 2015 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
public class APIConfig {
private String address = "localhost";
private String port = "10000";
public String getAddress() {
return address;
}
public String getPort() {
return port;
}
public String getBaseUrl() {
return "http://" + address + ":" + port;
}
private void readFile(String name) {
System.out.println("Using config file: " + name);
InputStream input;
try {
input = new FileInputStream(new File(name));
Yaml yaml = new Yaml();
@SuppressWarnings("unchecked")
Map<String, Object> map = (Map<String, Object>) yaml.load(input);
if (map.containsKey("listen_address")) {
address = (String) map.get("listen_address");
}
if (map.containsKey("api_address")) {
address = (String) map.get("api_address");
}
if (map.containsKey("api_port")) {
port = map.get("api_port").toString();
}
} catch (FileNotFoundException e) {
System.err.println("fail reading from config file: " + name);
System.exit(-1);
}
}
public static boolean fileExists(String name) {
File varTmpDir = new File(name);
return varTmpDir.exists();
}
private boolean loadIfExists(String path, String name) {
if (path == null) {
return false;
}
if (!fileExists(path + name)) {
return false;
}
readFile(path + name);
return true;
}
/**
* setConfig loads the JMX proxy configuration. The configuration hierarchy
* is as follows: a command line argument takes precedence over everything;
* then a configuration file given on the command line (command line
* arguments can still override specific values in it); then
* SCYLLA_CONF/scylla.yaml; then SCYLLA_HOME/conf/scylla.yaml; then
* conf/scylla.yaml; and finally the default values. With file
* configuration, to make it clear which file is being used, only the
* single file with the highest precedence is loaded.
*/
public APIConfig() {
if (!System.getProperty("apiconfig", "").equals("")) {
readFile(System.getProperty("apiconfig"));
} else if (!loadIfExists(System.getenv("SCYLLA_CONF"), "/scylla.yaml")
&& !loadIfExists(System.getenv("SCYLLA_HOME"), "/conf/scylla.yaml")) {
loadIfExists("", "conf/scylla.yaml");
}
if (!System.getProperty("apiaddress", "").equals("")) {
address = System.getProperty("apiaddress");
}
if (!System.getProperty("apiport", "").equals("")) {
port = System.getProperty("apiport", "10000");
}
}
}
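// Example scylla.yaml fragment the constructor above can consume (values
// are illustrative):
//     api_address: 127.0.0.1
//     api_port: 10000
// System properties still win over the file, e.g.:
//     java -Dapiaddress=10.0.0.5 -Dapiport=10001 ...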

View File

@ -19,15 +19,16 @@
* along with Scylla. If not, see <http://www.gnu.org/licenses/>. * along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/ */
package com.cloudius.urchin.api; package com.scylladb.jmx.api;
import com.cloudius.urchin.utils.EstimatedHistogram;
public class CacheEntry { import jakarta.json.JsonObject;
long time;
Object value;
CacheEntry(Object res) { class CacheEntry {
private long time;
private Object value;
public CacheEntry(Object res) {
time = System.currentTimeMillis(); time = System.currentTimeMillis();
this.value = res; this.value = res;
} }
@ -40,7 +41,7 @@ public class CacheEntry {
return (String) value; return (String) value;
} }
public EstimatedHistogram getEstimatedHistogram() { public JsonObject jsonObject() {
return (EstimatedHistogram)value; return (JsonObject) value;
} }
} }

View File

@ -22,71 +22,59 @@
* Modified by Cloudius Systems * Modified by Cloudius Systems
*/ */
package com.cloudius.urchin.utils; package com.scylladb.jmx.api.utils;
import java.io.*; import java.io.File;
import java.text.DecimalFormat; import java.text.DecimalFormat;
public class FileUtils public class FileUtils {
{
private static final double KB = 1024d; private static final double KB = 1024d;
private static final double MB = 1024*1024d; private static final double MB = 1024 * 1024d;
private static final double GB = 1024*1024*1024d; private static final double GB = 1024 * 1024 * 1024d;
private static final double TB = 1024*1024*1024*1024d; private static final double TB = 1024 * 1024 * 1024 * 1024d;
private static final DecimalFormat df = new DecimalFormat("#.##"); private static final DecimalFormat df = new DecimalFormat("#.##");
public static String stringifyFileSize(double value) public static String stringifyFileSize(double value) {
{
double d; double d;
if ( value >= TB ) if (value >= TB) {
{
d = value / TB; d = value / TB;
String val = df.format(d); String val = df.format(d);
return val + " TB"; return val + " TB";
} } else if (value >= GB) {
else if ( value >= GB )
{
d = value / GB; d = value / GB;
String val = df.format(d); String val = df.format(d);
return val + " GB"; return val + " GB";
} } else if (value >= MB) {
else if ( value >= MB )
{
d = value / MB; d = value / MB;
String val = df.format(d); String val = df.format(d);
return val + " MB"; return val + " MB";
} } else if (value >= KB) {
else if ( value >= KB )
{
d = value / KB; d = value / KB;
String val = df.format(d); String val = df.format(d);
return val + " KB"; return val + " KB";
} } else {
else
{
String val = df.format(value); String val = df.format(value);
return val + " bytes"; return val + " bytes";
} }
} }
/** /**
* Get the size of a directory in bytes * Get the size of a directory in bytes
* @param directory The directory for which we need size. *
* @param directory
* The directory for which we need size.
* @return The size of the directory * @return The size of the directory
*/ */
public static long folderSize(File directory) public static long folderSize(File directory) {
{
long length = 0; long length = 0;
for (File file : directory.listFiles()) for (File file : directory.listFiles()) {
{ if (file.isFile()) {
if (file.isFile())
length += file.length(); length += file.length();
else } else {
length += folderSize(file); length += folderSize(file);
}
} }
return length; return length;
} }
} }

View File

@ -22,47 +22,42 @@
* Modified by Cloudius Systems * Modified by Cloudius Systems
*/ */
package com.cloudius.urchin.utils; package com.scylladb.jmx.api.utils;
import com.google.common.base.Objects; import com.google.common.base.Objects;
public class Pair<T1, T2> public class Pair<T1, T2> {
{
public final T1 left; public final T1 left;
public final T2 right; public final T2 right;
protected Pair(T1 left, T2 right) protected Pair(T1 left, T2 right) {
{
this.left = left; this.left = left;
this.right = right; this.right = right;
} }
@Override @Override
public final int hashCode() public final int hashCode() {
{
int hashCode = 31 + (left == null ? 0 : left.hashCode()); int hashCode = 31 + (left == null ? 0 : left.hashCode());
return 31*hashCode + (right == null ? 0 : right.hashCode()); return 31 * hashCode + (right == null ? 0 : right.hashCode());
} }
@Override @Override
public final boolean equals(Object o) public final boolean equals(Object o) {
{ if (!(o instanceof Pair)) {
if(!(o instanceof Pair))
return false; return false;
}
@SuppressWarnings("rawtypes") @SuppressWarnings("rawtypes")
Pair that = (Pair)o; Pair that = (Pair) o;
// handles nulls properly // handles nulls properly
return Objects.equal(left, that.left) && Objects.equal(right, that.right); return Objects.equal(left, that.left) && Objects.equal(right, that.right);
} }
@Override @Override
public String toString() public String toString() {
{
return "(" + left + "," + right + ")"; return "(" + left + "," + right + ")";
} }
public static <X, Y> Pair<X, Y> create(X x, Y y) public static <X, Y> Pair<X, Y> create(X x, Y y) {
{
return new Pair<X, Y>(x, y); return new Pair<X, Y>(x, y);
} }
} }

View File

@ -20,21 +20,27 @@
* *
* Modified by Cloudius Systems * Modified by Cloudius Systems
*/ */
package com.cloudius.urchin.utils; package com.scylladb.jmx.api.utils;
import java.util.Map; import java.util.Map;
import javax.management.openmbean.*;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.OpenDataException;
import javax.management.openmbean.OpenType;
import javax.management.openmbean.SimpleType;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;
import com.google.common.base.Throwables; import com.google.common.base.Throwables;
public class SnapshotDetailsTabularData { public class SnapshotDetailsTabularData {
private static final String[] ITEM_NAMES = new String[] { "Snapshot name", private static final String[] ITEM_NAMES = new String[] { "Snapshot name", "Keyspace name", "Column family name",
"Keyspace name", "Column family name", "True size", "Size on disk" }; "True size", "Size on disk" };
private static final String[] ITEM_DESCS = new String[] { "snapshot_name", private static final String[] ITEM_DESCS = new String[] { "snapshot_name", "keyspace_name", "columnfamily_name",
"keyspace_name", "columnfamily_name", "TrueDiskSpaceUsed", "TrueDiskSpaceUsed", "TotalDiskSpaceUsed" };
"TotalDiskSpaceUsed" };
private static final String TYPE_NAME = "SnapshotDetails"; private static final String TYPE_NAME = "SnapshotDetails";
@ -48,28 +54,22 @@ public class SnapshotDetailsTabularData {
static { static {
try { try {
ITEM_TYPES = new OpenType[] { SimpleType.STRING, SimpleType.STRING, ITEM_TYPES = new OpenType[] { SimpleType.STRING, SimpleType.STRING, SimpleType.STRING, SimpleType.STRING,
SimpleType.STRING, SimpleType.STRING, SimpleType.STRING }; SimpleType.STRING };
COMPOSITE_TYPE = new CompositeType(TYPE_NAME, ROW_DESC, ITEM_NAMES, COMPOSITE_TYPE = new CompositeType(TYPE_NAME, ROW_DESC, ITEM_NAMES, ITEM_DESCS, ITEM_TYPES);
ITEM_DESCS, ITEM_TYPES);
TABULAR_TYPE = new TabularType(TYPE_NAME, ROW_DESC, COMPOSITE_TYPE, TABULAR_TYPE = new TabularType(TYPE_NAME, ROW_DESC, COMPOSITE_TYPE, ITEM_NAMES);
ITEM_NAMES);
} catch (OpenDataException e) { } catch (OpenDataException e) {
throw Throwables.propagate(e); throw Throwables.propagate(e);
} }
} }
public static void from(final String snapshot, final String ks, public static void from(final String snapshot, final String ks, final String cf,
final String cf, Map.Entry<String, Pair<Long, Long>> snapshotDetail, TabularDataSupport result) {
Map.Entry<String, Pair<Long, Long>> snapshotDetail,
TabularDataSupport result) {
try { try {
final String totalSize = FileUtils.stringifyFileSize(snapshotDetail final String totalSize = FileUtils.stringifyFileSize(snapshotDetail.getValue().left);
.getValue().left); final String liveSize = FileUtils.stringifyFileSize(snapshotDetail.getValue().right);
final String liveSize = FileUtils.stringifyFileSize(snapshotDetail
.getValue().right);
result.put(new CompositeDataSupport(COMPOSITE_TYPE, ITEM_NAMES, result.put(new CompositeDataSupport(COMPOSITE_TYPE, ITEM_NAMES,
new Object[] { snapshot, ks, cf, liveSize, totalSize })); new Object[] { snapshot, ks, cf, liveSize, totalSize }));
} catch (OpenDataException e) { } catch (OpenDataException e) {
@ -77,8 +77,8 @@ public class SnapshotDetailsTabularData {
} }
} }
public static void from(final String snapshot, final String ks, public static void from(final String snapshot, final String ks, final String cf, long total, long live,
final String cf, long total, long live, TabularDataSupport result) { TabularDataSupport result) {
try { try {
final String totalSize = FileUtils.stringifyFileSize(total); final String totalSize = FileUtils.stringifyFileSize(total);
final String liveSize = FileUtils.stringifyFileSize(live); final String liveSize = FileUtils.stringifyFileSize(live);

View File

@ -0,0 +1,15 @@
module scylla.apiclient {
exports com.scylladb.jmx.api;
exports com.scylladb.jmx.api.utils;
requires org.eclipse.parsson;
requires jakarta.ws.rs;
requires com.fasterxml.jackson.jakarta.rs.json;
requires jersey.client;
requires java.logging;
requires jakarta.json;
requires java.management;
requires org.yaml.snakeyaml;
requires com.google.common;
requires jersey.common;
requires jersey.hk2;
}

29
scylla-jmx-parent/pom.xml Normal file
View File

@ -0,0 +1,29 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>it.cavallium.scylladb.jmx</groupId>
<artifactId>scylla-jmx-parent</artifactId>
<version>1.1</version>
<packaging>pom</packaging>
<modules>
<module>../</module>
<module>../scylla-apiclient</module>
</modules>
<name>Scylla JMX Parent</name>
<distributionManagement>
<repository>
<id>mchv-release-distribution</id>
<name>MCHV Release Apache Maven Packages Distribution</name>
<url>https://mvn.mchv.eu/repository/mchv</url>
</repository>
<snapshotRepository>
<id>mchv-snapshot-distribution</id>
<name>MCHV Snapshot Apache Maven Packages Distribution</name>
<url>https://mvn.mchv.eu/repository/mchv-snapshot</url>
</snapshotRepository>
</distributionManagement>
</project>

View File

@ -1,37 +0,0 @@
/*
* Copyright 2015 Cloudius Systems
*/
package com.cloudius.urchin.main;
import com.cloudius.urchin.api.APIClient;
import org.apache.cassandra.db.ColumnFamilyStore;
import org.apache.cassandra.db.commitlog.CommitLog;
import org.apache.cassandra.db.compaction.CompactionManager;
import org.apache.cassandra.gms.Gossiper;
import org.apache.cassandra.gms.FailureDetector;
import org.apache.cassandra.locator.EndpointSnitchInfo;
import org.apache.cassandra.net.MessagingService;
import org.apache.cassandra.service.CacheService;
import org.apache.cassandra.service.StorageProxy;
import org.apache.cassandra.service.StorageService;
public class Main {
public static void main(String[] args) throws Exception {
System.out.println("Connecting to " + APIClient.getBaseUrl());
System.out.println("Starting the JMX server");
StorageService.getInstance();
StorageProxy.getInstance();
MessagingService.getInstance();
CommitLog.getInstance();
Gossiper.getInstance();
EndpointSnitchInfo.getInstance();
FailureDetector.getInstance();
ColumnFamilyStore.register_mbeans();
CacheService.getInstance();
CompactionManager.getInstance();
Thread.sleep(Long.MAX_VALUE);
}
}

View File

@ -1,399 +0,0 @@
package com.cloudius.urchin.metrics;
/*
* Copyright 2015 Cloudius Systems
*
* Modified by Cloudius Systems
*/
import java.util.concurrent.TimeUnit;
import com.yammer.metrics.core.APIMetricsRegistry;
import com.yammer.metrics.core.Counter;
import com.yammer.metrics.core.Gauge;
import com.yammer.metrics.core.Histogram;
import com.yammer.metrics.core.Meter;
import com.yammer.metrics.core.MetricName;
import com.yammer.metrics.core.APITimer;
import com.yammer.metrics.core.Timer;
import com.yammer.metrics.reporting.JmxReporter;
public class APIMetrics {
private static final APIMetricsRegistry DEFAULT_REGISTRY = new APIMetricsRegistry();
private static final Thread SHUTDOWN_HOOK = new Thread() {
public void run() {
JmxReporter.shutdownDefault();
}
};
static {
JmxReporter.startDefault(DEFAULT_REGISTRY);
Runtime.getRuntime().addShutdownHook(SHUTDOWN_HOOK);
}
private APIMetrics() { /* unused */
}
/**
* Given a new {@link com.yammer.metrics.core.Gauge}, registers it under the
* given class and name.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param metric
* the metric
* @param <T>
* the type of the value returned by the metric
* @return {@code metric}
*/
public static <T> Gauge<T> newGauge(Class<?> klass, String name,
Gauge<T> metric) {
return DEFAULT_REGISTRY.newGauge(klass, name, metric);
}
/**
* Given a new {@link com.yammer.metrics.core.Gauge}, registers it under the
* given class and name.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param scope
* the scope of the metric
* @param metric
* the metric
* @param <T>
* the type of the value returned by the metric
* @return {@code metric}
*/
public static <T> Gauge<T> newGauge(Class<?> klass, String name,
String scope, Gauge<T> metric) {
return DEFAULT_REGISTRY.newGauge(klass, name, scope, metric);
}
/**
* Given a new {@link com.yammer.metrics.core.Gauge}, registers it under the
* given metric name.
*
* @param metricName
* the name of the metric
* @param metric
* the metric
* @param <T>
* the type of the value returned by the metric
* @return {@code metric}
*/
public static <T> Gauge<T> newGauge(MetricName metricName, Gauge<T> metric) {
return DEFAULT_REGISTRY.newGauge(metricName, metric);
}
/**
* Creates a new {@link com.yammer.metrics.core.Counter} and registers it
* under the given class and name.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @return a new {@link com.yammer.metrics.core.Counter}
*/
public static Counter newCounter(String url, Class<?> klass, String name) {
return DEFAULT_REGISTRY.newCounter(url, klass, name);
}
/**
* Creates a new {@link com.yammer.metrics.core.Counter} and registers it
* under the given class and name.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param scope
* the scope of the metric
* @return a new {@link com.yammer.metrics.core.Counter}
*/
public static Counter newCounter(String url, Class<?> klass, String name,
String scope) {
return DEFAULT_REGISTRY.newCounter(url, klass, name, scope);
}
/**
* Creates a new {@link com.yammer.metrics.core.Counter} and registers it
* under the given metric name.
*
* @param metricName
* the name of the metric
* @return a new {@link com.yammer.metrics.core.Counter}
*/
public static Counter newCounter(String url, MetricName metricName) {
return DEFAULT_REGISTRY.newCounter(url, metricName);
}
/**
* Creates a new {@link com.yammer.metrics.core.Histogram} and registers it
* under the given class and name.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param biased
* whether or not the histogram should be biased
* @return a new {@link com.yammer.metrics.core.Histogram}
*/
public static Histogram newHistogram(String url, Class<?> klass,
String name, boolean biased) {
return DEFAULT_REGISTRY.newHistogram(url, klass, name, biased);
}
/**
* Creates a new {@link com.yammer.metrics.core.Histogram} and registers it
* under the given class, name, and scope.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param scope
* the scope of the metric
* @param biased
* whether or not the histogram should be biased
* @return a new {@link com.yammer.metrics.core.Histogram}
*/
public static Histogram newHistogram(String url, Class<?> klass,
String name, String scope, boolean biased) {
return DEFAULT_REGISTRY.newHistogram(url, klass, name, scope, biased);
}
/**
* Creates a new {@link com.yammer.metrics.core.Histogram} and registers it
* under the given metric name.
*
* @param metricName
* the name of the metric
* @param biased
* whether or not the histogram should be biased
* @return a new {@link com.yammer.metrics.core.Histogram}
*/
public static Histogram newHistogram(String url, MetricName metricName,
boolean biased) {
return DEFAULT_REGISTRY.newHistogram(url, metricName, biased);
}
/**
* Creates a new non-biased {@link com.yammer.metrics.core.Histogram} and
* registers it under the given class and name.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @return a new {@link com.yammer.metrics.core.Histogram}
*/
public static Histogram newHistogram(String url, Class<?> klass, String name) {
return DEFAULT_REGISTRY.newHistogram(url, klass, name);
}
/**
* Creates a new non-biased {@link com.yammer.metrics.core.Histogram} and
* registers it under the given class, name, and scope.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param scope
* the scope of the metric
* @return a new {@link com.yammer.metrics.core.Histogram}
*/
public static Histogram newHistogram(String url, Class<?> klass,
String name, String scope) {
return DEFAULT_REGISTRY.newHistogram(url, klass, name, scope);
}
/**
* Creates a new non-biased {@link com.yammer.metrics.core.Histogram} and
* registers it under the given metric name.
*
* @param metricName
* the name of the metric
* @return a new {@link com.yammer.metrics.core.Histogram}
*/
public static Histogram newHistogram(String url, MetricName metricName) {
return newHistogram(url, metricName, false);
}
/**
* Creates a new {@link com.yammer.metrics.core.Meter} and registers it
* under the given class and name.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param eventType
* the plural name of the type of events the meter is measuring
* (e.g., {@code "requests"})
* @param unit
* the rate unit of the new meter
* @return a new {@link com.yammer.metrics.core.Meter}
*/
public static Meter newMeter(String url, Class<?> klass, String name,
String eventType, TimeUnit unit) {
return DEFAULT_REGISTRY.newMeter(url, klass, name, eventType, unit);
}
/**
* Creates a new {@link com.yammer.metrics.core.Meter} and registers it
* under the given class, name, and scope.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param scope
* the scope of the metric
* @param eventType
* the plural name of the type of events the meter is measuring
* (e.g., {@code "requests"})
* @param unit
* the rate unit of the new meter
* @return a new {@link com.yammer.metrics.core.Meter}
*/
public static Meter newMeter(String url, Class<?> klass, String name,
String scope, String eventType, TimeUnit unit) {
return DEFAULT_REGISTRY.newMeter(url, klass, name, scope, eventType,
unit);
}
/**
* Creates a new {@link com.yammer.metrics.core.Meter} and registers it
* under the given metric name.
*
* @param metricName
* the name of the metric
* @param eventType
* the plural name of the type of events the meter is measuring
* (e.g., {@code "requests"})
* @param unit
* the rate unit of the new meter
* @return a new {@link com.yammer.metrics.core.Meter}
*/
public static Meter newMeter(String url, MetricName metricName,
String eventType, TimeUnit unit) {
return DEFAULT_REGISTRY.newMeter(url, metricName, eventType, unit);
}
/**
* Creates a new {@link com.yammer.metrics.core.APITimer} and registers it
* under the given class and name.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param durationUnit
* the duration scale unit of the new timer
* @param rateUnit
* the rate scale unit of the new timer
* @return a new {@link com.yammer.metrics.core.APITimer}
*/
public static Timer newTimer(String url, Class<?> klass, String name,
TimeUnit durationUnit, TimeUnit rateUnit) {
return DEFAULT_REGISTRY.newTimer(url, klass, name, durationUnit, rateUnit);
}
/**
* Creates a new {@link com.yammer.metrics.core.APITimer} and registers it
* under the given class and name, measuring elapsed time in milliseconds
* and invocations per second.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @return a new {@link com.yammer.metrics.core.APITimer}
*/
public static Timer newTimer(String url, Class<?> klass, String name) {
return DEFAULT_REGISTRY.newTimer(url, klass, name);
}
/**
* Creates a new {@link com.yammer.metrics.core.APITimer} and registers it
* under the given class, name, and scope.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param scope
* the scope of the metric
* @param durationUnit
* the duration scale unit of the new timer
* @param rateUnit
* the rate scale unit of the new timer
* @return a new {@link com.yammer.metrics.core.APITimer}
*/
public static Timer newTimer(String url, Class<?> klass, String name, String scope,
TimeUnit durationUnit, TimeUnit rateUnit) {
return DEFAULT_REGISTRY.newTimer(url, klass, name, scope, durationUnit,
rateUnit);
}
/**
* Creates a new {@link com.yammer.metrics.core.APITimer} and registers it
* under the given class, name, and scope, measuring elapsed time in
* milliseconds and invocations per second.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param scope
* the scope of the metric
* @return a new {@link com.yammer.metrics.core.APITimer}
*/
public static Timer newTimer(String url, Class<?> klass, String name, String scope) {
return DEFAULT_REGISTRY.newTimer(url, klass, name, scope);
}
/**
* Creates a new {@link com.yammer.metrics.core.APITimer} and registers it
* under the given metric name.
*
* @param metricName
* the name of the metric
* @param durationUnit
* the duration scale unit of the new timer
* @param rateUnit
* the rate scale unit of the new timer
* @return a new {@link com.yammer.metrics.core.APITimer}
*/
public static Timer newTimer(String url, MetricName metricName, TimeUnit durationUnit,
TimeUnit rateUnit) {
return DEFAULT_REGISTRY.newTimer(url, metricName, durationUnit, rateUnit);
}
/**
* Returns the (static) default registry.
*
* @return the metrics registry
*/
public static APIMetricsRegistry defaultRegistry() {
return DEFAULT_REGISTRY;
}
/**
* Shuts down all thread pools for the default registry.
*/
public static void shutdown() {
DEFAULT_REGISTRY.shutdown();
JmxReporter.shutdownDefault();
Runtime.getRuntime().removeShutdownHook(SHUTDOWN_HOOK);
}
}
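
For reference, the registry facade above was consumed by the API-backed metric MBeans roughly as in the following sketch; the endpoint and metric names here are illustrative assumptions, not values from the codebase:

import java.util.concurrent.TimeUnit;

import com.cloudius.urchin.metrics.APIMetrics;
import com.yammer.metrics.core.Counter;
import com.yammer.metrics.core.Meter;

public class ExampleMetricsUsage {
    // Hypothetical REST endpoint; real URLs come from the Scylla HTTP API.
    private static final String URL = "/column_family/metrics/read";

    public static void main(String[] args) {
        // Counter backed by the API endpoint, owned by this class.
        Counter reads = APIMetrics.newCounter(URL, ExampleMetricsUsage.class, "reads");
        // Meter measuring read events per second, fed from the same endpoint.
        Meter readRate = APIMetrics.newMeter(URL, ExampleMetricsUsage.class,
                "read-rate", "reads", TimeUnit.SECONDS);
        System.out.println(reads.count() + " reads so far");
    }
}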

View File

@@ -1,312 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright 2015 Cloudius Systems
*
* Modified by Cloudius Systems
*/
package com.cloudius.urchin.utils;
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicLongArray;
import com.google.common.base.Objects;
import org.slf4j.Logger;
public class EstimatedHistogram {
/**
* The series of values to which the counts in `buckets` correspond: 1, 2,
* 3, 4, 5, 6, 7, 8, 10, 12, 14, 17, 20, etc. Thus, a `buckets` of [0, 0, 1,
* 10] would mean we had seen one value of 3 and 10 values of 4.
*
* The series starts at 1 and grows by 1.2 each time (rounding and removing
* duplicates). It goes from 1 to around 36M by default (creating 90+1
* buckets), which will give us timing resolution from microseconds to 36
* seconds, with less precision as the numbers get larger.
*
* Each bucket represents values from (previous bucket offset, current
* offset].
*/
private final long[] bucketOffsets;
// buckets is one element longer than bucketOffsets -- the last element is
// values greater than the last offset
final AtomicLongArray buckets;
public EstimatedHistogram() {
this(90);
}
public EstimatedHistogram(int bucketCount) {
bucketOffsets = newOffsets(bucketCount);
buckets = new AtomicLongArray(bucketOffsets.length + 1);
}
public EstimatedHistogram(long[] offsets, long[] bucketData) {
assert bucketData.length == offsets.length + 1;
bucketOffsets = offsets;
buckets = new AtomicLongArray(bucketData);
}
public EstimatedHistogram(long[] bucketData) {
bucketOffsets = newOffsets(bucketData.length - 1);
buckets = new AtomicLongArray(bucketData);
}
private static long[] newOffsets(int size) {
long[] result = new long[size];
long last = 1;
result[0] = last;
for (int i = 1; i < size; i++) {
long next = Math.round(last * 1.2);
if (next == last)
next++;
result[i] = next;
last = next;
}
return result;
}
/**
* @return the histogram values corresponding to each bucket index
*/
public long[] getBucketOffsets() {
return bucketOffsets;
}
/**
* Increments the count of the bucket closest to n, rounding UP.
*
* @param n
*/
public void add(long n) {
int index = Arrays.binarySearch(bucketOffsets, n);
if (index < 0) {
// inexact match, take the first bucket higher than n
index = -index - 1;
}
// else exact match; we're good
buckets.incrementAndGet(index);
}
/**
* @return the count in the given bucket
*/
long get(int bucket) {
return buckets.get(bucket);
}
/**
* @param reset
* zero out buckets afterwards if true
* @return a long[] containing the current histogram buckets
*/
public long[] getBuckets(boolean reset) {
final int len = buckets.length();
long[] rv = new long[len];
if (reset)
for (int i = 0; i < len; i++)
rv[i] = buckets.getAndSet(i, 0L);
else
for (int i = 0; i < len; i++)
rv[i] = buckets.get(i);
return rv;
}
/**
* @return the smallest value that could have been added to this histogram
*/
public long min() {
for (int i = 0; i < buckets.length(); i++) {
if (buckets.get(i) > 0)
return i == 0 ? 0 : 1 + bucketOffsets[i - 1];
}
return 0;
}
/**
* @return the largest value that could have been added to this histogram.
* If the histogram overflowed, returns Long.MAX_VALUE.
*/
public long max() {
int lastBucket = buckets.length() - 1;
if (buckets.get(lastBucket) > 0)
return Long.MAX_VALUE;
for (int i = lastBucket - 1; i >= 0; i--) {
if (buckets.get(i) > 0)
return bucketOffsets[i];
}
return 0;
}
/**
* @param percentile
* @return estimated value at given percentile
*/
public long percentile(double percentile) {
assert percentile >= 0 && percentile <= 1.0;
int lastBucket = buckets.length() - 1;
if (buckets.get(lastBucket) > 0)
throw new IllegalStateException(
"Unable to compute when histogram overflowed");
long pcount = (long) Math.floor(count() * percentile);
if (pcount == 0)
return 0;
long elements = 0;
for (int i = 0; i < lastBucket; i++) {
elements += buckets.get(i);
if (elements >= pcount)
return bucketOffsets[i];
}
return 0;
}
/**
* @return the mean histogram value (average of bucket offsets, weighted by
* count)
* @throws IllegalStateException
* if any values were greater than the largest bucket threshold
*/
public long mean() {
int lastBucket = buckets.length() - 1;
if (buckets.get(lastBucket) > 0)
throw new IllegalStateException(
"Unable to compute ceiling for max when histogram overflowed");
long elements = 0;
long sum = 0;
for (int i = 0; i < lastBucket; i++) {
long bCount = buckets.get(i);
elements += bCount;
sum += bCount * bucketOffsets[i];
}
return (long) Math.ceil((double) sum / elements);
}
/**
* @return the total number of non-zero values
*/
public long count() {
long sum = 0L;
for (int i = 0; i < buckets.length(); i++)
sum += buckets.get(i);
return sum;
}
/**
* @return true if this histogram has overflowed -- that is, a value larger
* than our largest bucket could bound was added
*/
public boolean isOverflowed() {
return buckets.get(buckets.length() - 1) > 0;
}
/**
* log.debug() every record in the histogram
*
* @param log
*/
public void log(Logger log) {
// only print overflow if there is any
int nameCount;
if (buckets.get(buckets.length() - 1) == 0)
nameCount = buckets.length() - 1;
else
nameCount = buckets.length();
String[] names = new String[nameCount];
int maxNameLength = 0;
for (int i = 0; i < nameCount; i++) {
names[i] = nameOfRange(bucketOffsets, i);
maxNameLength = Math.max(maxNameLength, names[i].length());
}
// emit log records
String formatstr = "%" + maxNameLength + "s: %d";
for (int i = 0; i < nameCount; i++) {
long count = buckets.get(i);
// sort-of-hack to not print empty ranges at the start that are only
// used to demarcate the
// first populated range. for code clarity we don't omit this record
// from the maxNameLength
// calculation, and accept the unnecessary whitespace prefixes that
// will occasionally occur
if (i == 0 && count == 0)
continue;
log.debug(String.format(formatstr, names[i], count));
}
}
private static String nameOfRange(long[] bucketOffsets, int index) {
StringBuilder sb = new StringBuilder();
appendRange(sb, bucketOffsets, index);
return sb.toString();
}
private static void appendRange(StringBuilder sb, long[] bucketOffsets,
int index) {
sb.append("[");
if (index == 0)
if (bucketOffsets[0] > 0)
// by original definition, this histogram is for values greater
// than zero only;
// if values of 0 or less are required, an entry of lb-1 must be
// inserted at the start
sb.append("1");
else
sb.append("-Inf");
else
sb.append(bucketOffsets[index - 1] + 1);
sb.append("..");
if (index == bucketOffsets.length)
sb.append("Inf");
else
sb.append(bucketOffsets[index]);
sb.append("]");
}
@Override
public boolean equals(Object o) {
if (this == o)
return true;
if (!(o instanceof EstimatedHistogram))
return false;
EstimatedHistogram that = (EstimatedHistogram) o;
return Arrays.equals(getBucketOffsets(), that.getBucketOffsets())
&& Arrays.equals(getBuckets(false), that.getBuckets(false));
}
@Override
public int hashCode() {
return Objects.hashCode(getBucketOffsets(), getBuckets(false));
}
}
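
A minimal sketch of the bucketing behaviour described in the class comment: values are rounded up to the nearest bucket offset (1, 2, 3, ..., 8, 10, 12, ...), and percentiles are estimated from the bucket counts:

import com.cloudius.urchin.utils.EstimatedHistogram;

public class EstimatedHistogramDemo {
    public static void main(String[] args) {
        EstimatedHistogram h = new EstimatedHistogram(); // 90 offsets + overflow bucket
        h.add(3); // exact match: lands in the "3" bucket
        h.add(9); // no "9" offset: rounded up into the "10" bucket
        System.out.println(h.count());         // 2
        System.out.println(h.max());           // 10 -- largest populated offset
        System.out.println(h.percentile(0.5)); // 3  -- estimated median
    }
}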

View File

@@ -1,62 +0,0 @@
package com.cloudius.urchin.utils;
/*
* Copyright (C) 2015 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
/**
*
* RecentEstimatedHistogram: in the (deprecated) "recent" functionality, each
* call that reads the values also clears them.
*
* RecentEstimatedHistogram adds this "recent" behaviour on top of
* EstimatedHistogram: it holds the latest totals, and a call to getBuckets
* returns the delta since the previous call.
*
*/
public class RecentEstimatedHistogram extends EstimatedHistogram {
public RecentEstimatedHistogram() {
}
public RecentEstimatedHistogram(int bucketCount) {
super(bucketCount);
}
public RecentEstimatedHistogram(long[] offsets, long[] bucketData) {
super(offsets, bucketData);
}
/**
* Set the current buckets to the new values and return the delta since the
* last getBuckets call.
*
* @param bucketData
* - new bucket value
* @return a long[] containing the current histogram difference buckets
*/
public long[] getBuckets(long[] bucketData) {
final int len = buckets.length();
long[] rv = new long[len];
for (int i = 0; i < len; i++) {
rv[i] = bucketData[i];
rv[i] -= buckets.getAndSet(i, bucketData[i]);
}
return rv;
}
}
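
The delta semantics are easiest to see in a short sketch: the first call returns the totals as-is, later calls return only the growth since the previous call:

import java.util.Arrays;

import com.cloudius.urchin.utils.RecentEstimatedHistogram;

public class RecentHistogramDemo {
    public static void main(String[] args) {
        RecentEstimatedHistogram h = new RecentEstimatedHistogram(3); // 3 offsets + overflow
        // First call: previous totals are all zero, so the delta equals the input.
        System.out.println(Arrays.toString(h.getBuckets(new long[] { 5, 0, 0, 0 })));
        // Second call: only the growth since the last call is returned.
        System.out.println(Arrays.toString(h.getBuckets(new long[] { 7, 2, 0, 0 })));
        // prints [5, 0, 0, 0] then [2, 2, 0, 0]
    }
}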

View File

@@ -0,0 +1,77 @@
/*
* Copyright 2015 Cloudius Systems
*/
package com.scylladb.jmx.main;
import static java.lang.management.ManagementFactory.getPlatformMBeanServer;
import static java.util.Arrays.asList;
import java.lang.reflect.Constructor;
import javax.management.MBeanServer;
import org.apache.cassandra.db.commitlog.CommitLog;
import org.apache.cassandra.db.compaction.CompactionManager;
import org.apache.cassandra.gms.FailureDetector;
import org.apache.cassandra.gms.Gossiper;
import org.apache.cassandra.locator.EndpointSnitchInfo;
import org.apache.cassandra.net.MessagingService;
import org.apache.cassandra.service.CacheService;
import org.apache.cassandra.service.GCInspector;
import org.apache.cassandra.service.StorageProxy;
import org.apache.cassandra.service.StorageService;
import org.apache.cassandra.streaming.StreamManager;
import com.scylladb.jmx.api.APIClient;
import com.scylladb.jmx.api.APIConfig;
import com.scylladb.jmx.metrics.APIMBean;
public class Main {
private static APIConfig config;
private static APIClient client;
public static synchronized APIConfig getApiConfig() {
if (config == null) {
config = new APIConfig();
}
return config;
}
public static synchronized APIClient getApiClient() {
if (client == null) {
client = new APIClient(getApiConfig());
}
return client;
}
public static void main(String[] args) throws Exception {
System.out.printf("Java %s%n", System.getProperty("java.version"));
System.out.printf("Connecting to %s%n", getApiConfig().getBaseUrl());
System.out.println("Starting the JMX server");
MBeanServer server = getPlatformMBeanServer();
for (Class<? extends APIMBean> clazz : asList(StorageService.class, StorageProxy.class, MessagingService.class,
CommitLog.class, Gossiper.class, EndpointSnitchInfo.class, FailureDetector.class, CacheService.class,
CompactionManager.class, GCInspector.class, StreamManager.class)) {
Constructor<? extends APIMBean> c = clazz.getDeclaredConstructor(APIClient.class);
APIMBean m = c.newInstance(getApiClient());
server.registerMBean(m, null);
}
try {
// forces check for dynamically created mbeans
server.queryNames(null, null);
} catch (IllegalStateException e) {
// ignore this. Just means we started before scylla.
}
String jmxPort = System.getProperty("com.sun.management.jmxremote.port");
System.out.println("JMX is enabled to receive remote connections on port: " + jmxPort);
for (;;) {
Thread.sleep(Long.MAX_VALUE);
}
}
}
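
Once Main is running, any standard JMX client can attach to the advertised port and query the beans registered above; a minimal sketch using only JDK classes (host and port are assumptions, not values from this file):

import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxClientDemo {
    public static void main(String[] args) throws Exception {
        // Assumed local endpoint; the port is whatever
        // com.sun.management.jmxremote.port was set to.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // List the StorageService bean registered in Main above.
            Set<ObjectName> names = conn.queryNames(
                    new ObjectName("org.apache.cassandra.db:type=StorageService"), null);
            names.forEach(System.out::println);
        }
    }
}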

View File

@@ -0,0 +1,195 @@
package com.scylladb.jmx.metrics;
import java.lang.reflect.Field;
import java.util.EnumSet;
import java.util.Set;
import java.util.function.Function;
import java.util.function.Predicate;
import javax.management.BadAttributeValueExpException;
import javax.management.BadBinaryOpValueExpException;
import javax.management.BadStringOperationException;
import javax.management.InstanceAlreadyExistsException;
import javax.management.InstanceNotFoundException;
import javax.management.InvalidApplicationException;
import javax.management.MBeanRegistration;
import javax.management.MBeanRegistrationException;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.NotCompliantMBeanException;
import javax.management.ObjectName;
import javax.management.QueryExp;
import com.scylladb.jmx.api.APIClient;
import com.sun.jmx.mbeanserver.JmxMBeanServer;
/**
* Base type for MBeans in scylla-jmx. Handles automatic naming and holds the
* {@link APIClient}.
*
* @author calle
*
*/
public class APIMBean implements MBeanRegistration {
protected final APIClient client;
protected final String mbeanName;
public APIMBean(APIClient client) {
this(null, client);
}
public APIMBean(String mbeanName, APIClient client) {
this.mbeanName = mbeanName;
this.client = client;
}
/**
* Helper method to add/remove dynamically created MBeans from a server
* instance.
*
* @param server
* The {@link MBeanServer} to check
* @param all
* All {@link ObjectName}s that should be bound
* @param mode
*            Whether to add missing MBeans, remove stale ones, or both
* @param predicate
*            {@link Predicate} used to filter relevant object names
* @param generator
*            {@link Function} to create a new MBean instance for a given
*            {@link ObjectName}
* @return true if at least one MBean was added
* @throws MalformedObjectNameException
*/
public static boolean checkRegistration(JmxMBeanServer server, Set<ObjectName> all,
EnumSet<RegistrationMode> mode, final Predicate<ObjectName> predicate,
Function<ObjectName, Object> generator) throws MalformedObjectNameException {
Set<ObjectName> registered = queryNames(server, predicate);
if (mode.contains(RegistrationMode.Remove)) {
for (ObjectName name : registered) {
if (!all.contains(name)) {
try {
server.getMBeanServerInterceptor().unregisterMBean(name);
} catch (MBeanRegistrationException | InstanceNotFoundException e) {
    // best effort: ignore beans that disappear under us
}
}
}
}
int added = 0;
if (mode.contains(RegistrationMode.Add)) {
for (ObjectName name : all) {
if (!registered.contains(name)) {
try {
server.getMBeanServerInterceptor().registerMBean(generator.apply(name), name);
added++;
} catch (InstanceAlreadyExistsException | MBeanRegistrationException
        | NotCompliantMBeanException e) {
    // best effort: skip beans that cannot be registered
}
}
}
}
return added > 0;
}
/**
* Helper method to query {@link ObjectName}s from an {@link MBeanServer}
* based on {@link Predicate}
*
* @param server
* @param predicate
* @return
*/
public static Set<ObjectName> queryNames(JmxMBeanServer server, final Predicate<ObjectName> predicate) {
@SuppressWarnings("serial")
Set<ObjectName> registered = server.queryNames(null, new QueryExp() {
@Override
public void setMBeanServer(MBeanServer s) {
}
@Override
public boolean apply(ObjectName name) throws BadStringOperationException, BadBinaryOpValueExpException,
BadAttributeValueExpException, InvalidApplicationException {
return predicate.test(name);
}
});
return registered;
}
JmxMBeanServer server;
ObjectName name;
protected final ObjectName getBoundName() {
return name;
}
/**
* Figure out an {@link ObjectName} for this object based on either
* constructor parameter, static field, or just package/class name.
*
* @return the resolved {@link ObjectName}
* @throws MalformedObjectNameException
*/
protected ObjectName generateName() throws MalformedObjectNameException {
String mbeanName = this.mbeanName;
if (mbeanName == null) {
Field f;
try {
f = getClass().getDeclaredField("MBEAN_NAME");
f.setAccessible(true);
mbeanName = (String) f.get(null);
} catch (Throwable t) {
    // ignore: fall through to the next naming strategy
}
}
if (mbeanName == null) {
for (Class<?> c : getClass().getInterfaces()) {
Field f;
try {
f = c.getDeclaredField("OBJECT_NAME");
f.setAccessible(true);
mbeanName = (String) f.get(null);
break;
} catch (Throwable t) {
    // ignore: try the next interface
}
}
}
if (mbeanName == null) {
String name = getClass().getName();
int i = name.lastIndexOf('.');
mbeanName = name.substring(0, i) + ":type=" + name.substring(i + 1);
}
return new ObjectName(mbeanName);
}
/**
* Keeps track of bound server and optionally generates an
* {@link ObjectName} for this instance.
*/
@Override
public ObjectName preRegister(MBeanServer server, ObjectName name) throws Exception {
if (this.server != null) {
throw new IllegalStateException("Can only exist in a single MBeanServer");
}
this.server = (JmxMBeanServer) server;
if (name == null) {
name = generateName();
}
this.name = name;
return name;
}
@Override
public void postRegister(Boolean registrationDone) {
}
@Override
public void preDeregister() throws Exception {
}
@Override
public void postDeregister() {
assert server != null;
assert name != null;
this.server = null;
this.name = null;
}
}
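
The naming fallback in generateName can be driven from a subclass; a sketch where a static MBEAN_NAME field supplies the name (the domain and type here are examples, not names from the codebase):

import com.scylladb.jmx.api.APIClient;
import com.scylladb.jmx.metrics.APIMBean;

public class ExampleMBean extends APIMBean {
    // Picked up reflectively by generateName() when no name is passed in.
    @SuppressWarnings("unused")
    private static final String MBEAN_NAME = "com.example:type=Example";

    public ExampleMBean(APIClient client) {
        super(client); // no explicit name: generateName() resolves MBEAN_NAME
    }
}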

View File

@@ -0,0 +1,137 @@
package com.scylladb.jmx.metrics;
import static java.util.Arrays.asList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;
import java.util.function.Supplier;
import javax.management.InstanceNotFoundException;
import javax.management.MBeanRegistrationException;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import org.apache.cassandra.metrics.Metrics;
import org.apache.cassandra.metrics.MetricsRegistry;
import com.scylladb.jmx.api.APIClient;
import com.sun.jmx.mbeanserver.JmxMBeanServer;
/**
* Base type for MBeans containing {@link Metrics}.
*
* @author calle
*
*/
public abstract class MetricsMBean extends APIMBean {
private static final Map<JmxMBeanServer, Map<String, Integer>> registered = new HashMap<>();
private static final Object registrationLock = new Object();
private final Collection<Metrics> metrics;
public MetricsMBean(APIClient client, Metrics... metrics) {
this(null, client, metrics);
}
public MetricsMBean(String mbeanName, APIClient client, Metrics... metrics) {
this(mbeanName, client, asList(metrics));
}
public MetricsMBean(String mbeanName, APIClient client, Collection<Metrics> metrics) {
super(mbeanName, client);
this.metrics = metrics;
}
protected Predicate<ObjectName> getTypePredicate() {
String domain = name.getDomain();
String type = name.getKeyProperty("type");
return n -> {
return domain.equals(n.getDomain()) && type.equals(n.getKeyProperty("type"));
};
}
// Has to be called with registrationLock held
private static boolean shouldRegisterGlobals(JmxMBeanServer server, String domainAndType, boolean reversed) {
Map<String, Integer> serverMap = registered.get(server);
if (serverMap == null) {
assert !reversed;
serverMap = new HashMap<>();
serverMap.put(domainAndType, 1);
registered.put(server, serverMap);
return true;
}
Integer count = serverMap.get(domainAndType);
if (count == null) {
assert !reversed;
serverMap.put(domainAndType, 1);
return true;
}
if (reversed) {
--count;
if (count == 0) {
serverMap.remove(domainAndType);
if (serverMap.isEmpty()) {
registered.remove(server);
}
return true;
}
serverMap.put(domainAndType, count);
return false;
} else {
serverMap.put(domainAndType, count + 1);
}
return false;
}
private void register(MetricsRegistry registry, JmxMBeanServer server, boolean reversed) throws MalformedObjectNameException {
// Check if we're the first/last of our type bound/removed.
synchronized (registrationLock) {
boolean registerGlobals = shouldRegisterGlobals(server, name.getDomain() + ":" + name.getKeyProperty("type"), reversed);
if (registerGlobals) {
for (Metrics m : metrics) {
m.registerGlobals(registry);
}
}
}
for (Metrics m : metrics) {
m.register(registry);
}
}
@Override
public ObjectName preRegister(MBeanServer server, ObjectName name) throws Exception {
// Get name etc.
name = super.preRegister(server, name);
// Register all metrics in server
register(new MetricsRegistry(client, (JmxMBeanServer) server), (JmxMBeanServer) server, false);
return name;
}
@Override
public void postDeregister() {
// We're officially unbound. Remove all metrics we added.
try {
register(new MetricsRegistry(client, server) {
// Unbind instead of bind. Yes.
@Override
public void register(Supplier<MetricMBean> s, ObjectName... objectNames) {
for (ObjectName name : objectNames) {
try {
server.getMBeanServerInterceptor().unregisterMBean(name);
} catch (MBeanRegistrationException | InstanceNotFoundException e) {
}
}
}
}, server, true);
} catch (MalformedObjectNameException e) {
// TODO : log?
}
super.postDeregister();
}
}
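
shouldRegisterGlobals is effectively a per-server reference count keyed by domain:type -- the first bean of a type registers the shared global metrics, and the last one removed drops them. A sketch of a Metrics implementation as consumed above, assuming the interface exposes exactly the two hooks invoked here:

import javax.management.MalformedObjectNameException;

import org.apache.cassandra.metrics.Metrics;
import org.apache.cassandra.metrics.MetricsRegistry;

// Sketch only: assumes Metrics declares the two hooks called by MetricsMBean.
public class ExampleTableMetrics implements Metrics {
    @Override
    public void register(MetricsRegistry registry) throws MalformedObjectNameException {
        // per-instance (e.g. per-table) metrics would be bound here
    }

    @Override
    public void registerGlobals(MetricsRegistry registry) throws MalformedObjectNameException {
        // bound once per MBeanServer, guarded by shouldRegisterGlobals()
    }
}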

View File

@@ -0,0 +1,69 @@
package com.scylladb.jmx.metrics;
import static com.scylladb.jmx.metrics.RegistrationMode.Remove;
import static com.scylladb.jmx.metrics.RegistrationMode.Wait;
import static java.util.EnumSet.allOf;
import static java.util.EnumSet.of;
import java.net.UnknownHostException;
import java.util.EnumSet;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import javax.management.OperationsException;
import com.scylladb.jmx.api.APIClient;
import com.sun.jmx.mbeanserver.JmxMBeanServer;
/**
* Helper type to do optional locking for registration. Allows for
* per-bind-point locks and registration, instead of per-type or per-instance
* locks, which could be misleading since, for example, one instance can be
* bound to many MBeanServers.
*
* Also allows for polled checks, i.e. try-lock and either wait or skip: wait,
* because the work hidden by this type should not be repeated too often, and
* skip, because a periodic checking task can simply skip its round when a
* user-initiated registration check is already in progress.
*
* @author calle
*
*/
@SuppressWarnings("restriction")
public abstract class RegistrationChecker {
private final Lock lock = new ReentrantLock();
public static final EnumSet<RegistrationMode> REMOVE_NO_WAIT = of(Remove);
public static final EnumSet<RegistrationMode> ADD_AND_REMOVE = allOf(RegistrationMode.class);
public final void reap(APIClient client, JmxMBeanServer server) throws OperationsException, UnknownHostException {
check(client, server, REMOVE_NO_WAIT);
}
public final void check(APIClient client, JmxMBeanServer server) throws OperationsException, UnknownHostException {
check(client, server, ADD_AND_REMOVE);
}
public final void check(APIClient client, JmxMBeanServer server, EnumSet<RegistrationMode> mode)
throws OperationsException, UnknownHostException {
if (!lock.tryLock()) {
if (mode.contains(Wait)) {
// someone is doing update.
// since this is jmx, and sloppy, we'll just
// assume that once he is done, things are
// good enough.
lock.lock();
lock.unlock();
}
return;
}
try {
doCheck(client, server, mode);
} finally {
lock.unlock();
}
}
protected abstract void doCheck(APIClient client, JmxMBeanServer server, EnumSet<RegistrationMode> mode)
throws OperationsException, UnknownHostException;
}
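
A concrete checker ties doCheck back to APIMBean.checkRegistration from earlier in this changeset; a hedged sketch where the wanted names would be derived from an API call (the predicate and bean factory are placeholders, not code from the repository):

import java.net.UnknownHostException;
import java.util.EnumSet;
import java.util.HashSet;
import java.util.Set;

import javax.management.ObjectName;
import javax.management.OperationsException;

import com.scylladb.jmx.api.APIClient;
import com.scylladb.jmx.metrics.APIMBean;
import com.scylladb.jmx.metrics.RegistrationChecker;
import com.scylladb.jmx.metrics.RegistrationMode;
import com.sun.jmx.mbeanserver.JmxMBeanServer;

public class ExampleChecker extends RegistrationChecker {
    @Override
    protected void doCheck(APIClient client, JmxMBeanServer server,
            EnumSet<RegistrationMode> mode) throws OperationsException, UnknownHostException {
        Set<ObjectName> wanted = new HashSet<>(); // would be filled from a client query
        APIMBean.checkRegistration(server, wanted, mode,
                n -> "com.example".equals(n.getDomain()), // placeholder predicate
                n -> new Object());                       // placeholder bean factory
    }
}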

View File

@@ -0,0 +1,5 @@
package com.scylladb.jmx.metrics;
public enum RegistrationMode {
Wait, Add, Remove,
}

View File

@@ -0,0 +1,496 @@
package com.scylladb.jmx.utils;
/**
* Copyright 2016 ScyllaDB
*/
import static com.scylladb.jmx.main.Main.getApiClient;
import static com.sun.jmx.mbeanserver.Util.wildmatch;
import static java.util.logging.Level.SEVERE;
import static javax.management.MBeanServerDelegate.DELEGATE_NAME;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.logging.Logger;
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
import javax.management.DynamicMBean;
import javax.management.InstanceAlreadyExistsException;
import javax.management.InstanceNotFoundException;
import javax.management.MBeanServer;
import javax.management.MBeanServerBuilder;
import javax.management.MBeanServerDelegate;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.QueryExp;
import javax.management.RuntimeOperationsException;
import com.sun.jmx.interceptor.DefaultMBeanServerInterceptor;
import com.sun.jmx.mbeanserver.JmxMBeanServer;
import com.sun.jmx.mbeanserver.NamedObject;
import com.sun.jmx.mbeanserver.Repository;
/**
* This class purposely knows far too much about the inner workings of the
* Oracle JDK MBeanServer, and perverts them for performance's sake. It is
* not portable to other MBean implementations.
*
*/
@SuppressWarnings("restriction")
public class APIBuilder extends MBeanServerBuilder {
private static final Logger logger = Logger.getLogger(APIBuilder.class.getName());
private static class TableRepository extends Repository {
private static final Logger logger = Logger.getLogger(TableRepository.class.getName());
private final Repository wrapped;
private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
private final Map<TableMetricParams, DynamicMBean> tableMBeans = new HashMap<>();
private static boolean isTableMetricName(ObjectName name) {
return isTableMetricDomain(name.getDomain());
}
private static boolean isTableMetricDomain(String domain) {
return TableMetricParams.TABLE_METRICS_DOMAIN.equals(domain);
}
public TableRepository(String defaultDomain, final Repository repository) {
super(defaultDomain);
wrapped = repository;
}
@Override
public String getDefaultDomain() {
return wrapped.getDefaultDomain();
}
@Override
public boolean contains(final ObjectName name) {
if (!isTableMetricName(name)) {
return wrapped.contains(name);
} else {
lock.readLock().lock();
try {
return tableMBeans.containsKey(new TableMetricParams(name));
} finally {
lock.readLock().unlock();
}
}
}
@Override
public String[] getDomains() {
final String[] domains = wrapped.getDomains();
if (tableMBeans.isEmpty()) {
return domains;
}
final String[] res = new String[domains.length + 1];
System.arraycopy(domains, 0, res, 0, domains.length);
res[domains.length] = TableMetricParams.TABLE_METRICS_DOMAIN;
return res;
}
@Override
public Integer getCount() {
lock.readLock().lock();
try {
return wrapped.getCount() + tableMBeans.size();
} finally {
lock.readLock().unlock();
}
}
@Override
public void addMBean(final DynamicMBean bean, final ObjectName name, final RegistrationContext ctx)
throws InstanceAlreadyExistsException {
if (!isTableMetricName(name)) {
wrapped.addMBean(bean, name, ctx);
} else {
final TableMetricParams key = new TableMetricParams(name);
lock.writeLock().lock();
try {
if (tableMBeans.containsKey(key)) {
throw new InstanceAlreadyExistsException(name.toString());
}
tableMBeans.put(key, bean);
if (ctx == null) return;
try {
ctx.registering();
} catch (RuntimeOperationsException x) {
throw x;
} catch (RuntimeException x) {
throw new RuntimeOperationsException(x);
}
} finally {
lock.writeLock().unlock();
}
}
}
@Override
public void remove(final ObjectName name, final RegistrationContext ctx) throws InstanceNotFoundException {
if (!isTableMetricName(name)) {
wrapped.remove(name, ctx);
} else {
final TableMetricParams key = new TableMetricParams(name);
lock.writeLock().lock();
try {
if (tableMBeans.remove(key) == null) {
throw new InstanceNotFoundException(name.toString());
}
if (ctx == null) {
return;
}
try {
ctx.unregistered();
} catch (Exception x) {
logger.log(SEVERE, "Unexpected error.", x);
}
} finally {
lock.writeLock().unlock();
}
}
}
@Override
public DynamicMBean retrieve(final ObjectName name) {
if (!isTableMetricName(name)) {
return wrapped.retrieve(name);
} else {
lock.readLock().lock();
try {
return tableMBeans.get(new TableMetricParams(name));
} finally {
lock.readLock().unlock();
}
}
}
private void addAll(final Set<NamedObject> res) {
for (Map.Entry<TableMetricParams, DynamicMBean> e : tableMBeans.entrySet()) {
try {
res.add(new NamedObject(e.getKey().toName(), e.getValue()));
} catch (MalformedObjectNameException e1) {
// This should never happen
logger.log(SEVERE, "Unexpected error.", e1);
}
}
}
private void addAllMatching(final Set<NamedObject> res,
final ObjectNamePattern pattern) {
for (Map.Entry<TableMetricParams, DynamicMBean> e : tableMBeans.entrySet()) {
try {
ObjectName name = e.getKey().toName();
if (pattern.matchKeys(name)) {
res.add(new NamedObject(name, e.getValue()));
}
} catch (MalformedObjectNameException e1) {
// This should never happen
logger.log(SEVERE, "Unexpected error.", e1);
}
}
}
@Override
public Set<NamedObject> query(final ObjectName pattern, final QueryExp query) {
Set<NamedObject> res = wrapped.query(pattern, query);
ObjectName name;
if (pattern == null ||
pattern.getCanonicalName().length() == 0 ||
pattern.equals(ObjectName.WILDCARD)) {
name = ObjectName.WILDCARD;
} else {
name = pattern;
}
lock.readLock().lock();
try {
// If the pattern is not actually a pattern, retrieve this MBean directly.
if (!name.isPattern() && isTableMetricName(name)) {
final DynamicMBean bean = tableMBeans.get(new TableMetricParams(name));
if (bean != null) {
res.add(new NamedObject(name, bean));
return res;
}
}
// All names in all domains
if (name == ObjectName.WILDCARD) {
addAll(res);
return res;
}
final String canonical_key_property_list_string =
name.getCanonicalKeyPropertyListString();
final boolean allNames =
(canonical_key_property_list_string.length()==0);
final ObjectNamePattern namePattern =
(allNames?null:new ObjectNamePattern(name));
// All names in default domain
if (name.getDomain().length() == 0) {
if (isTableMetricDomain(getDefaultDomain())) {
if (allNames) {
addAll(res);
} else {
addAllMatching(res, namePattern);
}
}
return res;
}
if (!name.isDomainPattern()) {
if (isTableMetricDomain(name.getDomain())) {
if (allNames) {
addAll(res);
} else {
addAllMatching(res, namePattern);
}
}
return res;
}
// Pattern matching in the domain name (*, ?)
final String dom2Match = name.getDomain();
if (wildmatch(TableMetricParams.TABLE_METRICS_DOMAIN, dom2Match)) {
if (allNames) {
addAll(res);
} else {
addAllMatching(res, namePattern);
}
}
} finally {
lock.readLock().unlock();
}
return res;
}
}
private final static class ObjectNamePattern {
private final String[] keys;
private final String[] values;
private final String properties;
private final boolean isPropertyListPattern;
private final boolean isPropertyValuePattern;
/**
* The ObjectName pattern against which ObjectNames are matched.
**/
public final ObjectName pattern;
/**
* Builds a new ObjectNamePattern object from an ObjectName pattern.
* @param pattern The ObjectName pattern under examination.
**/
public ObjectNamePattern(ObjectName pattern) {
this(pattern.isPropertyListPattern(),
pattern.isPropertyValuePattern(),
pattern.getCanonicalKeyPropertyListString(),
pattern.getKeyPropertyList(),
pattern);
}
/**
* Builds a new ObjectNamePattern object from an ObjectName pattern
* constituents.
* @param propertyListPattern pattern.isPropertyListPattern().
* @param propertyValuePattern pattern.isPropertyValuePattern().
* @param canonicalProps pattern.getCanonicalKeyPropertyListString().
* @param keyPropertyList pattern.getKeyPropertyList().
* @param pattern The ObjectName pattern under examination.
**/
ObjectNamePattern(boolean propertyListPattern,
boolean propertyValuePattern,
String canonicalProps,
Map<String,String> keyPropertyList,
ObjectName pattern) {
this.isPropertyListPattern = propertyListPattern;
this.isPropertyValuePattern = propertyValuePattern;
this.properties = canonicalProps;
final int len = keyPropertyList.size();
this.keys = new String[len];
this.values = new String[len];
int i = 0;
for (Map.Entry<String,String> entry : keyPropertyList.entrySet()) {
keys[i] = entry.getKey();
values[i] = entry.getValue();
i++;
}
this.pattern = pattern;
}
/**
* Return true if the given ObjectName matches the ObjectName pattern
* for which this object has been built.
* WARNING: domain name is not considered here because it is supposed
* not to be wildcard when called. PropertyList is also
* supposed not to be zero-length.
* @param name The ObjectName we want to match against the pattern.
* @return true if <code>name</code> matches the pattern.
**/
public boolean matchKeys(ObjectName name) {
// If key property value pattern but not key property list
// pattern, then the number of key properties must be equal
//
if (isPropertyValuePattern &&
!isPropertyListPattern &&
(name.getKeyPropertyList().size() != keys.length)) {
return false;
}
// If key property value pattern or key property list pattern,
// then every property inside pattern should exist in name
//
if (isPropertyValuePattern || isPropertyListPattern) {
for (int i = keys.length - 1; i >= 0 ; i--) {
// Find value in given object name for key at current
// index in receiver
//
String v = name.getKeyProperty(keys[i]);
// Did we find a value for this key ?
//
if (v == null) {
return false;
}
// If this property is ok (same key, same value), go to next
//
if (isPropertyValuePattern &&
pattern.isPropertyValuePattern(keys[i])) {
// wildmatch key property values
// values[i] is the pattern;
// v is the string
if (wildmatch(v,values[i])) {
continue;
} else {
return false;
}
}
if (v.equals(values[i])) {
continue;
}
return false;
}
return true;
}
// If no pattern, then canonical names must be equal
//
final String p1 = name.getCanonicalKeyPropertyListString();
final String p2 = properties;
return (p1.equals(p2));
}
}
public static class TableMetricParams {
public static final String TABLE_METRICS_DOMAIN = "org.apache.cassandra.metrics";
private final ObjectName name;
public TableMetricParams(ObjectName name) {
this.name = name;
}
public ObjectName toName() throws MalformedObjectNameException {
return name;
}
private static boolean equal(Object a, Object b) {
return (a == null) ? b == null : a.equals(b);
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (!(o instanceof TableMetricParams)) {
return false;
}
TableMetricParams oo = (TableMetricParams) o;
return equal(name.getKeyProperty("keyspace"), oo.name.getKeyProperty("keyspace"))
&& equal(name.getKeyProperty("scope"), oo.name.getKeyProperty("scope"))
&& equal(name.getKeyProperty("name"), oo.name.getKeyProperty("name"))
&& equal(name.getKeyProperty("type"), oo.name.getKeyProperty("type"));
}
private static int hash(Object o) {
return o == null ? 0 : o.hashCode();
}
private static int safeAdd(int ... nums) {
long res = 0;
for (int n : nums) {
res = (res + n) % Integer.MAX_VALUE;
}
return (int)res;
}
@Override
public int hashCode() {
return safeAdd(hash(name.getKeyProperty("keyspace")),
hash(name.getKeyProperty("scope")),
hash(name.getKeyProperty("name")),
hash(name.getKeyProperty("type")));
}
}
@Override
public MBeanServer newMBeanServer(String defaultDomain, MBeanServer outer, MBeanServerDelegate delegate) {
// It is important to set |interceptors| to true while creating the
// JmxMBeanServer. It is required for calls to
// JmxMBeanServer.setMBeanServerInterceptor() to be allowed.
JmxMBeanServer nested = (JmxMBeanServer) JmxMBeanServer.newMBeanServer(defaultDomain, outer, delegate, true);
// This is not very clean: we depend on knowledge of how the Sun/Oracle
// MBean chain looks internally. But we need haxxor support, so
// let's replace the interceptor.
// Note: Removed reflection gunk to eliminate jdk9+ warnings on
// execution. Also, if we can get by without reflection, it is
// better.
final DefaultMBeanServerInterceptor interceptor = new DefaultMBeanServerInterceptor(outer != null ? outer : nested,
delegate, nested.getMBeanInstantiator(),
new TableRepository(defaultDomain, new Repository(defaultDomain)));
nested.setMBeanServerInterceptor(interceptor);
final MBeanServerDelegate d = nested.getMBeanServerDelegate();
try {
// Interceptor needs the delegate present. Normally done
// by inaccessible method in JmxMBeanServer
AccessController.doPrivileged(new PrivilegedExceptionAction<Object>() {
public Object run() throws Exception {
interceptor.registerMBean(d, DELEGATE_NAME);
return null;
}
});
} catch (PrivilegedActionException e) {
logger.log(SEVERE, "Unexpected error.", e);
throw new RuntimeException(e);
}
return new APIMBeanServer(getApiClient(), nested);
}
}
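
A custom MBeanServerBuilder like this one is normally activated through the standard javax.management.builder.initial system property before the platform MBeanServer is first created; a minimal sketch:

import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;

public class BuilderBootstrap {
    public static void main(String[] args) {
        // Must be set before the platform MBeanServer is first touched.
        System.setProperty("javax.management.builder.initial",
                "com.scylladb.jmx.utils.APIBuilder");
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        System.out.println(server.getClass().getName()); // the APIMBeanServer wrapper
    }
}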

View File

@@ -0,0 +1,327 @@
package com.scylladb.jmx.utils;
import static java.util.Arrays.asList;
import static java.util.concurrent.Executors.newScheduledThreadPool;
import static java.util.concurrent.TimeUnit.MINUTES;
import java.io.ObjectInputStream;
import java.net.UnknownHostException;
import java.util.Set;
import java.util.concurrent.ScheduledExecutorService;
import java.util.logging.Logger;
import java.util.regex.Pattern;
import java.util.stream.Collectors;
import javax.management.Attribute;
import javax.management.AttributeList;
import javax.management.AttributeNotFoundException;
import javax.management.InstanceAlreadyExistsException;
import javax.management.InstanceNotFoundException;
import javax.management.IntrospectionException;
import javax.management.InvalidAttributeValueException;
import javax.management.ListenerNotFoundException;
import javax.management.MBeanException;
import javax.management.MBeanInfo;
import javax.management.MBeanRegistrationException;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.NotCompliantMBeanException;
import javax.management.NotificationFilter;
import javax.management.NotificationListener;
import javax.management.ObjectInstance;
import javax.management.ObjectName;
import javax.management.OperationsException;
import javax.management.QueryExp;
import javax.management.ReflectionException;
import javax.management.loading.ClassLoaderRepository;
import org.apache.cassandra.db.ColumnFamilyStore;
import org.apache.cassandra.metrics.StreamingMetrics;
import com.scylladb.jmx.api.APIClient;
import com.scylladb.jmx.metrics.RegistrationChecker;
import com.sun.jmx.mbeanserver.JmxMBeanServer;
@SuppressWarnings("restriction")
public class APIMBeanServer implements MBeanServer {
@SuppressWarnings("unused")
private static final Logger logger = Logger.getLogger(APIMBeanServer.class.getName());
private static final ScheduledExecutorService executor = newScheduledThreadPool(1);
private final RegistrationChecker columnFamilyStoreChecker = ColumnFamilyStore.createRegistrationChecker();
private final RegistrationChecker streamingMetricsChecker = StreamingMetrics.createRegistrationChecker();
private final APIClient client;
private final JmxMBeanServer server;
public APIMBeanServer(APIClient client, JmxMBeanServer server) {
this.client = client;
this.server = server;
executor.scheduleWithFixedDelay(() -> {
for (RegistrationChecker c : asList(columnFamilyStoreChecker, streamingMetricsChecker)) {
try {
c.reap(client, server);
} catch (OperationsException | UnknownHostException e) {
// TODO: log?
}
}
}, 1, 5, MINUTES);
}
private static ObjectInstance prepareForRemote(final ObjectInstance i) {
return new ObjectInstance(prepareForRemote(i.getObjectName()), i.getClassName());
}
private static ObjectName prepareForRemote(final ObjectName n) {
/*
* ObjectName.getInstance has changed in JDK (micro) updates so it no longer
* applies overridable methods -> the wrong name gets published.
* Fix by doing explicit ObjectName instantiation.
*/
try {
return new ObjectName(n.getCanonicalName());
} catch (MalformedObjectNameException e) {
throw new IllegalArgumentException(n.toString());
}
}
@Override
public ObjectInstance createMBean(String className, ObjectName name) throws ReflectionException,
InstanceAlreadyExistsException, MBeanRegistrationException, MBeanException, NotCompliantMBeanException {
return prepareForRemote(server.createMBean(className, name));
}
@Override
public ObjectInstance createMBean(String className, ObjectName name, ObjectName loaderName)
throws ReflectionException, InstanceAlreadyExistsException, MBeanRegistrationException, MBeanException,
NotCompliantMBeanException, InstanceNotFoundException {
return prepareForRemote(server.createMBean(className, name, loaderName));
}
@Override
public ObjectInstance createMBean(String className, ObjectName name, Object[] params, String[] signature)
throws ReflectionException, InstanceAlreadyExistsException, MBeanRegistrationException, MBeanException,
NotCompliantMBeanException {
return prepareForRemote(server.createMBean(className, name, params, signature));
}
@Override
public ObjectInstance createMBean(String className, ObjectName name, ObjectName loaderName, Object[] params,
String[] signature) throws ReflectionException, InstanceAlreadyExistsException, MBeanRegistrationException,
MBeanException, NotCompliantMBeanException, InstanceNotFoundException {
return prepareForRemote(server.createMBean(className, name, loaderName, params, signature));
}
@Override
public ObjectInstance registerMBean(Object object, ObjectName name)
throws InstanceAlreadyExistsException, MBeanRegistrationException, NotCompliantMBeanException {
return prepareForRemote(server.registerMBean(object, name));
}
@Override
public void unregisterMBean(ObjectName name) throws InstanceNotFoundException, MBeanRegistrationException {
server.unregisterMBean(name);
}
@Override
public ObjectInstance getObjectInstance(ObjectName name) throws InstanceNotFoundException {
checkRegistrations(name);
return prepareForRemote(server.getObjectInstance(name));
}
@Override
public Set<ObjectName> queryNames(ObjectName name, QueryExp query) {
checkRegistrations(name);
return server.queryNames(name, query).stream().map(n -> prepareForRemote(n)).collect(Collectors.toSet());
}
@Override
public Set<ObjectInstance> queryMBeans(ObjectName name, QueryExp query) {
checkRegistrations(name);
return server.queryMBeans(name, query).stream().map(i -> prepareForRemote(i)).collect(Collectors.toSet());
}
@Override
public boolean isRegistered(ObjectName name) {
checkRegistrations(name);
return server.isRegistered(name);
}
@Override
public Integer getMBeanCount() {
return server.getMBeanCount();
}
@Override
public Object getAttribute(ObjectName name, String attribute)
throws MBeanException, AttributeNotFoundException, InstanceNotFoundException, ReflectionException {
checkRegistrations(name);
return server.getAttribute(name, attribute);
}
@Override
public AttributeList getAttributes(ObjectName name, String[] attributes)
throws InstanceNotFoundException, ReflectionException {
checkRegistrations(name);
return server.getAttributes(name, attributes);
}
@Override
public void setAttribute(ObjectName name, Attribute attribute) throws InstanceNotFoundException,
AttributeNotFoundException, InvalidAttributeValueException, MBeanException, ReflectionException {
checkRegistrations(name);
server.setAttribute(name, attribute);
}
@Override
public AttributeList setAttributes(ObjectName name, AttributeList attributes)
throws InstanceNotFoundException, ReflectionException {
checkRegistrations(name);
return server.setAttributes(name, attributes);
}
@Override
public Object invoke(ObjectName name, String operationName, Object[] params, String[] signature)
throws InstanceNotFoundException, MBeanException, ReflectionException {
checkRegistrations(name);
return server.invoke(name, operationName, params, signature);
}
@Override
public String getDefaultDomain() {
return server.getDefaultDomain();
}
@Override
public String[] getDomains() {
return server.getDomains();
}
@Override
public void addNotificationListener(ObjectName name, NotificationListener listener, NotificationFilter filter,
Object handback) throws InstanceNotFoundException {
server.addNotificationListener(name, listener, filter, handback);
}
@Override
public void addNotificationListener(ObjectName name, ObjectName listener, NotificationFilter filter,
Object handback) throws InstanceNotFoundException {
server.addNotificationListener(name, listener, filter, handback);
}
@Override
public void removeNotificationListener(ObjectName name, ObjectName listener)
throws InstanceNotFoundException, ListenerNotFoundException {
server.removeNotificationListener(name, listener);
}
@Override
public void removeNotificationListener(ObjectName name, ObjectName listener, NotificationFilter filter,
Object handback) throws InstanceNotFoundException, ListenerNotFoundException {
server.removeNotificationListener(name, listener, filter, handback);
}
@Override
public void removeNotificationListener(ObjectName name, NotificationListener listener)
throws InstanceNotFoundException, ListenerNotFoundException {
server.removeNotificationListener(name, listener);
}
@Override
public void removeNotificationListener(ObjectName name, NotificationListener listener, NotificationFilter filter,
Object handback) throws InstanceNotFoundException, ListenerNotFoundException {
server.removeNotificationListener(name, listener, filter, handback);
}
@Override
public MBeanInfo getMBeanInfo(ObjectName name)
throws InstanceNotFoundException, IntrospectionException, ReflectionException {
checkRegistrations(name);
return server.getMBeanInfo(name);
}
@Override
public boolean isInstanceOf(ObjectName name, String className) throws InstanceNotFoundException {
return server.isInstanceOf(name, className);
}
@Override
public Object instantiate(String className) throws ReflectionException, MBeanException {
return server.instantiate(className);
}
@Override
public Object instantiate(String className, ObjectName loaderName)
throws ReflectionException, MBeanException, InstanceNotFoundException {
return server.instantiate(className, loaderName);
}
@Override
public Object instantiate(String className, Object[] params, String[] signature)
throws ReflectionException, MBeanException {
return server.instantiate(className, params, signature);
}
@Override
public Object instantiate(String className, ObjectName loaderName, Object[] params, String[] signature)
throws ReflectionException, MBeanException, InstanceNotFoundException {
return server.instantiate(className, loaderName, params, signature);
}
@Override
@Deprecated
public ObjectInputStream deserialize(ObjectName name, byte[] data)
throws InstanceNotFoundException, OperationsException {
return server.deserialize(name, data);
}
@Override
@Deprecated
public ObjectInputStream deserialize(String className, byte[] data)
throws OperationsException, ReflectionException {
return server.deserialize(className, data);
}
@Override
@Deprecated
public ObjectInputStream deserialize(String className, ObjectName loaderName, byte[] data)
throws InstanceNotFoundException, OperationsException, ReflectionException {
return server.deserialize(className, loaderName, data);
}
@Override
public ClassLoader getClassLoaderFor(ObjectName mbeanName) throws InstanceNotFoundException {
return server.getClassLoaderFor(mbeanName);
}
@Override
public ClassLoader getClassLoader(ObjectName loaderName) throws InstanceNotFoundException {
return server.getClassLoader(loaderName);
}
@Override
public ClassLoaderRepository getClassLoaderRepository() {
return server.getClassLoaderRepository();
}
static final Pattern tables = Pattern.compile("^\\*?((Index)?ColumnFamil(ies|y)|(Index)?(Table(s)?)?)$");
private void checkRegistrations(ObjectName name) {
if (name != null && server.isRegistered(name)) {
return;
}
try {
String type = name != null ? name.getKeyProperty("type") : null;
if (type == null || tables.matcher(type).matches()) {
columnFamilyStoreChecker.check(client, server);
}
if (type == null || StreamingMetrics.TYPE_NAME.equals(type)) {
streamingMetricsChecker.check(client, server);
}
} catch (OperationsException | UnknownHostException e) {
// Best effort: failures are swallowed here; the checkers run again on the next call.
}
}
}
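The checkRegistrations gate above only refreshes the column-family MBeans when the queried ObjectName has no type key or a table-like one, so most lookups skip the REST round trip. A standalone sketch of what the pattern accepts; the class name and sample values are illustrative, not part of the project:

import java.util.regex.Pattern;

public class TableTypePatternDemo {
    // Same pattern as the `tables` field above.
    static final Pattern tables = Pattern.compile("^\\*?((Index)?ColumnFamil(ies|y)|(Index)?(Table(s)?)?)$");

    public static void main(String[] args) {
        for (String type : new String[] { "ColumnFamilies", "IndexColumnFamily", "Tables", "Table", "*", "Streaming" }) {
            System.out.println(type + " -> " + tables.matcher(type).matches());
        }
    }
}

Note that every branch of the alternation is optional, so a bare "*" (and even the empty string) matches too; non-table types such as "Streaming" fall through to the streaming-metrics checker instead.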

View File

@ -0,0 +1,18 @@
package com.scylladb.jmx.utils;
import jakarta.xml.bind.annotation.adapters.XmlAdapter;
import java.time.Instant;
import java.util.Date;
public class DateXmlAdapter extends XmlAdapter<String, Date> {
@Override
public String marshal(Date v) throws Exception {
return Instant.ofEpochMilli(v.getTime()).toString();
}
@Override
public Date unmarshal(String v) throws Exception {
return new Date(Instant.parse(v).toEpochMilli());
}
}
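DateXmlAdapter maps java.util.Date to an ISO-8601 instant string and back; it is the sort of adapter one attaches with jakarta.xml.bind's @XmlJavaTypeAdapter. A direct round trip, assuming only the class above on the classpath:

import com.scylladb.jmx.utils.DateXmlAdapter;
import java.util.Date;

public class DateXmlAdapterDemo {
    public static void main(String[] args) throws Exception {
        DateXmlAdapter adapter = new DateXmlAdapter();
        String text = adapter.marshal(new Date(0));   // "1970-01-01T00:00:00Z"
        Date back = adapter.unmarshal(text);          // epoch millis == 0 again
        System.out.println(text + " -> " + back.getTime());
    }
}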

View File

@ -1,29 +0,0 @@
package com.yammer.metrics.core;
/*
* Copyright 2015 Cloudius Systems
*
* Modified by Cloudius Systems
*/
import com.cloudius.urchin.api.APIClient;
import com.yammer.metrics.core.Counter;
public class APICounter extends Counter {
String url;
private APIClient c = new APIClient();
public APICounter(String _url) {
super();
url = _url;
}
/**
* Returns the counter's current value.
*
* @return the counter's current value
*/
public long count() {
return c.getLongValue(url);
}
}

View File

@ -1,201 +0,0 @@
package com.yammer.metrics.core;
/*
* Copyright 2015 Cloudius Systems
*
* Modified by Cloudius Systems
*/
import java.lang.reflect.Field;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReference;
import com.cloudius.urchin.api.APIClient;
import com.yammer.metrics.stats.Sample;
import com.yammer.metrics.stats.Snapshot;
public class APIHistogram extends Histogram {
Field countField;
Field minField;
Field maxField;
Field sumField;
Field varianceField;
Field sampleField;
long last_update = 0;
static final long UPDATE_INTERVAL = 50;
long updateInterval;
String url;
private APIClient c = new APIClient();
private void setFields() {
try {
minField = Histogram.class.getDeclaredField("min");
minField.setAccessible(true);
maxField = Histogram.class.getDeclaredField("max");
maxField.setAccessible(true);
sumField = Histogram.class.getDeclaredField("sum");
sumField.setAccessible(true);
varianceField = Histogram.class.getDeclaredField("variance");
varianceField.setAccessible(true);
sampleField = Histogram.class.getDeclaredField("sample");
sampleField.setAccessible(true);
countField = Histogram.class.getDeclaredField("count");
countField.setAccessible(true);
try {
getCount().set(0);
} catch (IllegalArgumentException | IllegalAccessException e) {
// Should not happen: the field was made accessible just above;
// nothing useful to do if it does.
}
} catch (NoSuchFieldException | SecurityException e) {
e.printStackTrace();
}
}
public AtomicLong getMin() throws IllegalArgumentException,
IllegalAccessException {
return (AtomicLong) minField.get(this);
}
public AtomicLong getMax() throws IllegalArgumentException,
IllegalAccessException {
return (AtomicLong) maxField.get(this);
}
public AtomicLong getSum() throws IllegalArgumentException,
IllegalAccessException {
return (AtomicLong) sumField.get(this);
}
public AtomicLong getCount() throws IllegalArgumentException,
IllegalAccessException {
return (AtomicLong) countField.get(this);
}
@SuppressWarnings("unchecked")
public AtomicReference<double[]> getVariance()
throws IllegalArgumentException, IllegalAccessException {
return (AtomicReference<double[]>) varianceField.get(this);
}
public Sample getSample() throws IllegalArgumentException,
IllegalAccessException {
return (Sample) sampleField.get(this);
}
public APIHistogram(String url, Sample sample) {
super(sample);
setFields();
this.url = url;
}
public APIHistogram(String url, SampleType type, long updateInterval) {
super(type);
setFields();
this.url = url;
this.updateInterval = updateInterval;
}
public APIHistogram(String url, SampleType type) {
this(url, type, UPDATE_INTERVAL);
}
public void update() {
long now = System.currentTimeMillis();
if (now - last_update < UPDATE_INTERVAL) {
return;
}
last_update = now;
clear();
HistogramValues vals = c.getHistogramValue(url);
try {
if (vals.sample != null) {
for (long v : vals.sample) {
getSample().update(v);
}
}
getCount().set(vals.count);
getMax().set(vals.max);
getMin().set(vals.min);
getSum().set(vals.sum);
double[] newValue = new double[2];
newValue[0] = vals.mean;
newValue[1] = vals.variance;
getVariance().getAndSet(newValue);
} catch (IllegalArgumentException | IllegalAccessException e) {
e.printStackTrace();
}
}
/**
* Returns the number of values recorded.
*
* @return the number of values recorded
*/
public long count() {
update();
return super.count();
}
/*
* (non-Javadoc)
*
* @see com.yammer.metrics.core.Summarizable#max()
*/
@Override
public double max() {
update();
return super.max();
}
/*
* (non-Javadoc)
*
* @see com.yammer.metrics.core.Summarizable#min()
*/
@Override
public double min() {
update();
return super.min();
}
/*
* (non-Javadoc)
*
* @see com.yammer.metrics.core.Summarizable#mean()
*/
@Override
public double mean() {
update();
return super.mean();
}
/*
* (non-Javadoc)
*
* @see com.yammer.metrics.core.Summarizable#stdDev()
*/
@Override
public double stdDev() {
update();
return super.stdDev();
}
/*
* (non-Javadoc)
*
* @see com.yammer.metrics.core.Summarizable#sum()
*/
@Override
public double sum() {
update();
return super.sum();
}
@Override
public Snapshot getSnapshot() {
update();
return super.getSnapshot();
}
}
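All of the read-path overrides above funnel through update(), which re-fetches the histogram from the REST API at most once per UPDATE_INTERVAL (50 ms), so a burst of count()/max()/getSnapshot() calls costs a single round trip. A minimal sketch of that throttle in isolation, with a Supplier standing in for the APIClient call (class and member names here are illustrative):

import java.util.function.Supplier;

final class ThrottledValue<T> {
    private final Supplier<T> fetch;    // stands in for c.getHistogramValue(url)
    private final long intervalMillis;  // UPDATE_INTERVAL in the original
    private long lastUpdate;            // last_update in the original
    private T cached;

    ThrottledValue(Supplier<T> fetch, long intervalMillis) {
        this.fetch = fetch;
        this.intervalMillis = intervalMillis;
    }

    synchronized T get() {
        long now = System.currentTimeMillis();
        if (cached == null || now - lastUpdate >= intervalMillis) {
            lastUpdate = now;
            cached = fetch.get();       // one fetch serves every read in the window
        }
        return cached;
    }
}

Unlike this sketch, the original does not synchronize; a racing pair of callers may fetch twice, which is harmless for monitoring data.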

View File

@ -1,45 +0,0 @@
package com.yammer.metrics.core;
/*
* Copyright 2015 Cloudius Systems
*
* Modified by Cloudius Systems
*/
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import com.cloudius.urchin.api.APIClient;
public class APIMeter extends Meter {
String url;
private APIClient c = new APIClient();
public APIMeter(String _url, ScheduledExecutorService tickThread,
String eventType, TimeUnit rateUnit, Clock clock) {
super(tickThread, eventType, rateUnit, clock);
// TODO Auto-generated constructor stub
url = _url;
}
public long get_value() {
return c.getLongValue(url);
}
// Meter doesn't have a set-value method.
// To mimic one, we clear the old value and re-mark with the new one.
// This is safe because this is the only method used to update the value.
public long set(long new_value) {
long res = super.count();
mark(-res);
mark(new_value);
return res;
}
@Override
void tick() {
set(get_value());
super.tick();
}
}
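The comment above describes the only way to "set" a Meter: mark the negative of the current count, then mark the new value, so count() lands on the API's figure. The arithmetic in isolation, with a plain mark-only counter standing in for Meter (illustrative, not the yammer class):

final class MarkOnlyCounter {
    private long count;

    void mark(long n) { count += n; }   // the only mutator, like Meter.mark()

    long set(long newValue) {           // mirrors APIMeter.set()
        long old = count;
        mark(-old);                     // cancel everything recorded so far
        mark(newValue);                 // record the authoritative API value
        return old;
    }

    public static void main(String[] args) {
        MarkOnlyCounter m = new MarkOnlyCounter();
        m.set(10);
        m.set(25);
        System.out.println(m.count);    // 25
    }
}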

View File

@ -1,362 +0,0 @@
package com.yammer.metrics.core;
import java.lang.reflect.Field;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import com.yammer.metrics.core.APICounter;
import com.yammer.metrics.core.APIMeter;
import com.yammer.metrics.core.Clock;
import com.yammer.metrics.core.Counter;
import com.yammer.metrics.core.Meter;
import com.yammer.metrics.core.Metric;
import com.yammer.metrics.core.MetricName;
import com.yammer.metrics.core.MetricsRegistry;
import com.yammer.metrics.core.ThreadPools;
import com.yammer.metrics.core.Histogram.SampleType;
/*
* Copyright 2015 Cloudius Systems
*
* Modified by Cloudius Systems
*/
public class APIMetricsRegistry extends MetricsRegistry {
Field fieldMetrics;
Field fieldClock;
Field fieldThreadPool;
public APIMetricsRegistry() {
try {
fieldMetrics = MetricsRegistry.class.getDeclaredField("metrics");
fieldMetrics.setAccessible(true);
fieldClock = MetricsRegistry.class.getDeclaredField("clock");
fieldClock.setAccessible(true);
fieldThreadPool = MetricsRegistry.class
.getDeclaredField("threadPools");
fieldThreadPool.setAccessible(true);
} catch (NoSuchFieldException | SecurityException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
public ThreadPools getThreadPools() {
try {
return (ThreadPools) fieldThreadPool.get(this);
} catch (IllegalArgumentException | IllegalAccessException e) {
e.printStackTrace();
}
return null;
}
public Clock getClock() {
try {
return (Clock) fieldClock.get(this);
} catch (IllegalArgumentException | IllegalAccessException e) {
e.printStackTrace();
}
return null;
}
@SuppressWarnings("unchecked")
public ConcurrentMap<MetricName, Metric> getMetrics() {
try {
return (ConcurrentMap<MetricName, Metric>) fieldMetrics.get(this);
} catch (IllegalArgumentException | IllegalAccessException e) {
e.printStackTrace();
}
return null;
}
/**
* Creates a new {@link Counter} and registers it under the given class and
* name.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @return a new {@link Counter}
*/
public Counter newCounter(String url, Class<?> klass, String name) {
return newCounter(url, klass, name, null);
}
/**
* Creates a new {@link Counter} and registers it under the given class and
* name.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param scope
* the scope of the metric
* @return a new {@link Counter}
*/
public Counter newCounter(String url, Class<?> klass, String name,
String scope) {
return newCounter(url, createName(klass, name, scope));
}
/**
* Creates a new {@link Counter} and registers it under the given metric
* name.
*
* @param metricName
* the name of the metric
* @return a new {@link Counter}
*/
public Counter newCounter(String url, MetricName metricName) {
return getOrAdd(metricName, new APICounter(url));
}
/**
* Creates a new {@link Meter} and registers it under the given class and
* name.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param eventType
* the plural name of the type of events the meter is measuring
* (e.g., {@code "requests"})
* @param unit
* the rate unit of the new meter
* @return a new {@link Meter}
*/
public Meter newMeter(String url, Class<?> klass, String name,
String eventType, TimeUnit unit) {
return newMeter(url, klass, name, null, eventType, unit);
}
/**
* Creates a new {@link Meter} and registers it under the given class, name,
* and scope.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param scope
* the scope of the metric
* @param eventType
* the plural name of the type of events the meter is measuring
* (e.g., {@code "requests"})
* @param unit
* the rate unit of the new meter
* @return a new {@link Meter}
*/
public Meter newMeter(String url, Class<?> klass, String name,
String scope, String eventType, TimeUnit unit) {
return newMeter(url, createName(klass, name, scope), eventType, unit);
}
private ScheduledExecutorService newMeterTickThreadPool() {
return getThreadPools().newScheduledThreadPool(2, "meter-tick");
}
/**
* Creates a new {@link Meter} and registers it under the given metric name.
*
* @param metricName
* the name of the metric
* @param eventType
* the plural name of the type of events the meter is measuring
* (e.g., {@code "requests"})
* @param unit
* the rate unit of the new meter
* @return a new {@link Meter}
*/
public Meter newMeter(String url, MetricName metricName, String eventType,
TimeUnit unit) {
final Metric existingMetric = getMetrics().get(metricName);
if (existingMetric != null) {
return (Meter) existingMetric;
}
return getOrAdd(metricName, new APIMeter(url, newMeterTickThreadPool(),
eventType, unit, getClock()));
}
/**
* Creates a new {@link Histogram} and registers it under the given class
* and name.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param biased
* whether or not the histogram should be biased
* @return a new {@link Histogram}
*/
public Histogram newHistogram(String url, Class<?> klass, String name,
boolean biased) {
return newHistogram(url, klass, name, null, biased);
}
/**
* Creates a new {@link Histogram} and registers it under the given class,
* name, and scope.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param scope
* the scope of the metric
* @param biased
* whether or not the histogram should be biased
* @return a new {@link Histogram}
*/
public Histogram newHistogram(String url, Class<?> klass, String name,
String scope, boolean biased) {
return newHistogram(url, createName(klass, name, scope), biased);
}
/**
* Creates a new non-biased {@link Histogram} and registers it under the
* given class and name.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @return a new {@link Histogram}
*/
public Histogram newHistogram(String url, Class<?> klass, String name) {
return newHistogram(url, klass, name, false);
}
/**
* Creates a new non-biased {@link Histogram} and registers it under the
* given class, name, and scope.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param scope
* the scope of the metric
* @return a new {@link Histogram}
*/
public Histogram newHistogram(String url, Class<?> klass, String name,
String scope) {
return newHistogram(url, klass, name, scope, false);
}
/**
* Creates a new {@link Histogram} and registers it under the given metric
* name.
*
* @param metricName
* the name of the metric
* @param biased
* whether or not the histogram should be biased
* @return a new {@link Histogram}
*/
public Histogram newHistogram(String url, MetricName metricName,
boolean biased) {
return getOrAdd(metricName, new APIHistogram(url,
biased ? SampleType.BIASED : SampleType.UNIFORM));
}
/**
* Creates a new {@link Timer} and registers it under the given class and
* name, measuring elapsed time in milliseconds and invocations per second.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @return a new {@link Timer}
*/
public Timer newTimer(String url, Class<?> klass, String name) {
return newTimer(url, klass, name, null, TimeUnit.MILLISECONDS,
TimeUnit.SECONDS);
}
/**
* Creates a new {@link Timer} and registers it under the given class and
* name.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param durationUnit
* the duration scale unit of the new timer
* @param rateUnit
* the rate scale unit of the new timer
* @return a new {@link Timer}
*/
public Timer newTimer(String url, Class<?> klass, String name,
TimeUnit durationUnit, TimeUnit rateUnit) {
return newTimer(url, klass, name, null, durationUnit, rateUnit);
}
/**
* Creates a new {@link Timer} and registers it under the given class, name,
* and scope, measuring elapsed time in milliseconds and invocations per
* second.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param scope
* the scope of the metric
* @return a new {@link Timer}
*/
public Timer newTimer(String url, Class<?> klass, String name, String scope) {
return newTimer(url, klass, name, scope, TimeUnit.MILLISECONDS,
TimeUnit.SECONDS);
}
/**
* Creates a new {@link Timer} and registers it under the given class, name,
* and scope.
*
* @param klass
* the class which owns the metric
* @param name
* the name of the metric
* @param scope
* the scope of the metric
* @param durationUnit
* the duration scale unit of the new timer
* @param rateUnit
* the rate scale unit of the new timer
* @return a new {@link Timer}
*/
public Timer newTimer(String url, Class<?> klass, String name,
String scope, TimeUnit durationUnit, TimeUnit rateUnit) {
return newTimer(url, createName(klass, name, scope), durationUnit,
rateUnit);
}
/**
* Creates a new {@link Timer} and registers it under the given metric name.
*
* @param metricName
* the name of the metric
* @param durationUnit
* the duration scale unit of the new timer
* @param rateUnit
* the rate scale unit of the new timer
* @return a new {@link Timer}
*/
public Timer newTimer(String url, MetricName metricName,
TimeUnit durationUnit, TimeUnit rateUnit) {
final Metric existingMetric = getMetrics().get(metricName);
if (existingMetric != null) {
return (Timer) existingMetric;
}
return getOrAdd(metricName, new APITimer(url, newMeterTickThreadPool(),
durationUnit, rateUnit, getClock()));
}
}
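APIMetricsRegistry, like APIHistogram and APITimer, reaches the private state of the upstream yammer classes through getDeclaredField/setAccessible, since those classes expose no setters. The core move in isolation (illustrative names):

import java.lang.reflect.Field;

public class PrivateFieldAccessDemo {
    static class Upstream {
        private long count = 42;        // stands in for MetricsRegistry's private fields
    }

    public static void main(String[] args) throws Exception {
        Upstream u = new Upstream();
        Field f = Upstream.class.getDeclaredField("count");
        f.setAccessible(true);          // under JPMS this needs the target package to be opened
        f.set(u, 7L);
        System.out.println(f.get(u));   // 7
    }
}

This is exactly the kind of deep reflection that the `opens` directives in the module descriptor later in this changeset exist to permit.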

View File

@ -1,44 +0,0 @@
/*
* Copyright 2015 Cloudius Systems
*
*/
package com.yammer.metrics.core;
import java.lang.reflect.Field;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import com.yammer.metrics.core.Histogram.SampleType;
/**
* A timer metric which aggregates timing durations and provides duration
* statistics, plus throughput statistics via {@link Meter}.
*/
public class APITimer extends Timer {
public APITimer(String url, ScheduledExecutorService tickThread,
TimeUnit durationUnit, TimeUnit rateUnit) {
super(tickThread, durationUnit, rateUnit);
setHistogram(url);
}
public APITimer(String url, ScheduledExecutorService tickThread,
TimeUnit durationUnit, TimeUnit rateUnit, Clock clock) {
super(tickThread, durationUnit, rateUnit, clock);
setHistogram(url);
}
private void setHistogram(String url) {
Field histogram;
try {
histogram = Timer.class.getDeclaredField("histogram");
histogram.setAccessible(true);
histogram.set(this, new APIHistogram(url, SampleType.BIASED));
} catch (NoSuchFieldException | SecurityException
| IllegalArgumentException | IllegalAccessException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}

View File

@ -1,11 +0,0 @@
package com.yammer.metrics.core;
public class HistogramValues {
public long count;
public long min;
public long max;
public long sum;
public double variance;
public double mean;
public long[] sample;
}
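HistogramValues is a plain holder for one histogram snapshot fetched over REST. A hypothetical decoder sketch with jakarta.json; the field names in the JSON literal are assumed to mirror the struct, since the real wire format lives in APIClient.getHistogramValue and is not shown here:

import com.yammer.metrics.core.HistogramValues;
import jakarta.json.Json;
import jakarta.json.JsonObject;
import java.io.StringReader;

public class HistogramValuesDecodeDemo {
    static HistogramValues fromJson(String json) {
        JsonObject o = Json.createReader(new StringReader(json)).readObject();
        HistogramValues v = new HistogramValues();
        v.count = o.getJsonNumber("count").longValue();
        v.min = o.getJsonNumber("min").longValue();
        v.max = o.getJsonNumber("max").longValue();
        v.sum = o.getJsonNumber("sum").longValue();
        v.mean = o.getJsonNumber("mean").doubleValue();
        v.variance = o.getJsonNumber("variance").doubleValue();
        return v; // v.sample (the raw sample array) omitted for brevity
    }

    public static void main(String[] args) {
        HistogramValues v = fromJson("{\"count\":2,\"min\":1,\"max\":3,\"sum\":4,\"mean\":2.0,\"variance\":1.0}");
        System.out.println(v.count + " samples, mean " + v.mean);
    }
}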

View File

@ -0,0 +1,16 @@
module scylla.jmx {
opens com.scylladb.jmx.utils;
exports com.scylladb.jmx.utils;
opens com.scylladb.jmx.main;
exports com.scylladb.jmx.main;
opens com.scylladb.jmx.metrics;
exports com.scylladb.jmx.metrics;
requires java.logging;
requires java.management;
requires scylla.apiclient;
requires jakarta.json;
requires jakarta.ws.rs;
requires com.google.common;
requires jakarta.xml.bind;
requires com.fasterxml.jackson.annotation;
}
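The module descriptor pairs opens and exports for each package on purpose: exports lets other modules compile against and call the package, while opens additionally allows deep reflection (setAccessible) at run time, which jakarta.xml.bind needs for adapters like DateXmlAdapter. A hypothetical minimal module showing the distinction:

module demo.app {
    exports demo.api;   // callers can compile against and invoke demo.api
    opens demo.model;   // frameworks may reflect over demo.model at run time,
                        // e.g. jakarta.xml.bind calling setAccessible(true)
    requires java.management;
}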

View File

@ -23,61 +23,146 @@
 */
package org.apache.cassandra.db;
- import java.lang.management.ManagementFactory;
- import java.net.ConnectException;
- import java.util.*;
- import java.util.concurrent.*;
- import javax.json.JsonArray;
- import javax.json.JsonObject;
- import javax.management.*;
- import javax.ws.rs.ProcessingException;
- import javax.ws.rs.core.MultivaluedHashMap;
- import javax.ws.rs.core.MultivaluedMap;
- import org.apache.cassandra.metrics.ColumnFamilyMetrics;
- import com.cloudius.urchin.api.APIClient;
+ import static jakarta.json.Json.createObjectBuilder;
+ import static java.lang.String.valueOf;
+ import static java.util.Arrays.asList;
+ import static java.util.stream.Collectors.toMap;
+ import jakarta.json.Json;
+ import jakarta.json.JsonArray;
+ import jakarta.json.JsonObject;
+ import jakarta.json.JsonObjectBuilder;
+ import jakarta.json.JsonReader;
+ import jakarta.ws.rs.core.MultivaluedHashMap;
+ import jakarta.ws.rs.core.MultivaluedMap;
+ import java.io.StringReader;
+ import java.io.OutputStream;
+ import java.util.Collections;
+ import java.util.EnumSet;
+ import java.util.HashMap;
+ import java.util.HashSet;
+ import java.util.List;
+ import java.util.Map;
+ import java.util.Set;
+ import java.util.concurrent.ExecutionException;
+ import java.util.concurrent.ExecutorService;
+ import java.util.concurrent.Executors;
+ import java.util.concurrent.Future;
+ import java.util.concurrent.TimeUnit;
+ import java.util.logging.Logger;
+ import javax.management.MBeanServer;
+ import javax.management.MalformedObjectNameException;
+ import javax.management.ObjectName;
+ import javax.management.OperationsException;
import javax.management.openmbean.CompositeData;
+ import javax.management.openmbean.CompositeDataSupport;
+ import javax.management.openmbean.CompositeType;
import javax.management.openmbean.OpenDataException;
+ import javax.management.openmbean.OpenType;
+ import javax.management.openmbean.SimpleType;
+ import javax.management.openmbean.TabularDataSupport;
+ import javax.management.openmbean.TabularType;
+ import org.apache.cassandra.metrics.TableMetrics;
+ import com.scylladb.jmx.api.APIClient;
+ import com.scylladb.jmx.metrics.MetricsMBean;
+ import com.scylladb.jmx.metrics.RegistrationChecker;
+ import com.scylladb.jmx.metrics.RegistrationMode;
+ import com.sun.jmx.mbeanserver.JmxMBeanServer;
import com.google.common.base.Throwables;
- public class ColumnFamilyStore implements ColumnFamilyStoreMBean {
-     private static final java.util.logging.Logger logger = java.util.logging.Logger
-             .getLogger(ColumnFamilyStore.class.getName());
-     private APIClient c = new APIClient();
-     private String type;
-     private String keyspace;
-     private String name;
-     private String mbeanName;
-     static final int INTERVAL = 1000; // update every 1second
-     public final ColumnFamilyMetrics metric;
-     private static Map<String, ColumnFamilyStore> cf = new HashMap<String, ColumnFamilyStore>();
-     private static Timer timer = new Timer("Column Family");
+ public class ColumnFamilyStore extends MetricsMBean implements ColumnFamilyStoreMBean {
+     private static final Logger logger = Logger.getLogger(ColumnFamilyStore.class.getName());
+     @SuppressWarnings("unused")
+     private final String type;
+     private final String keyspace;
+     private final String name;
+     private static final String[] COUNTER_NAMES = new String[]{"raw", "count", "error", "string"};
+     private static final String[] COUNTER_DESCS = new String[]
+     { "partition key in raw hex bytes", // Table name and comments match Cassandra, we will use the partition key
+       "value of this partition for given sampler",
+       "value is within the error bounds plus or minus of this",
+       "the partition key turned into a human readable format" };
+     private static final CompositeType COUNTER_COMPOSITE_TYPE;
+     private static final TabularType COUNTER_TYPE;
+     private static final String[] SAMPLER_NAMES = new String[]{"cardinality", "partitions"};
+     private static final String[] SAMPLER_DESCS = new String[]
+     { "cardinality of partitions",
+       "list of counter results" };
+     private static final String SAMPLING_RESULTS_NAME = "SAMPLING_RESULTS";
+     private static final CompositeType SAMPLING_RESULT;
+     public static final String SNAPSHOT_TRUNCATE_PREFIX = "truncated";
+     public static final String SNAPSHOT_DROP_PREFIX = "dropped";
+     private JsonObject tableSamplerResult = null;
+     private Future<JsonObject> futureTableSamperResult = null;
+     private ExecutorService service = null;
+     static {
+         try {
+             OpenType<?>[] counterTypes = new OpenType[] { SimpleType.STRING, SimpleType.LONG, SimpleType.LONG, SimpleType.STRING };
+             COUNTER_COMPOSITE_TYPE = new CompositeType(SAMPLING_RESULTS_NAME, SAMPLING_RESULTS_NAME, COUNTER_NAMES, COUNTER_DESCS, counterTypes);
+             COUNTER_TYPE = new TabularType(SAMPLING_RESULTS_NAME, SAMPLING_RESULTS_NAME, COUNTER_COMPOSITE_TYPE, COUNTER_NAMES);
+             OpenType<?>[] samplerTypes = new OpenType[] { SimpleType.LONG, COUNTER_TYPE };
+             SAMPLING_RESULT = new CompositeType(SAMPLING_RESULTS_NAME, SAMPLING_RESULTS_NAME, SAMPLER_NAMES, SAMPLER_DESCS, samplerTypes);
+         } catch (OpenDataException e) {
+             throw Throwables.propagate(e);
+         }
+     }
+     protected synchronized void startTableSampling(MultivaluedMap<String, String> queryParams) {
+         if (futureTableSamperResult != null) {
+             return;
+         }
+         futureTableSamperResult = service.submit(() -> {
+             tableSamplerResult = client.getJsonObj("column_family/toppartitions/" + getCFName(), queryParams);
+             return null;
+         });
+     }
+     /*
+      * Wait until the action is completed
+      * It is safe to call this method multiple times
+      */
+     public synchronized void waitUntilSamplingCompleted() {
+         try {
+             if (futureTableSamperResult != null) {
+                 futureTableSamperResult.get();
+                 futureTableSamperResult = null;
+             }
+         } catch (InterruptedException | ExecutionException e) {
+             futureTableSamperResult = null;
+             throw new RuntimeException("Failed getting table statistics", e);
+         }
+     }
+     public static final Set<String> TYPE_NAMES = new HashSet<>(asList("ColumnFamilies", "IndexTables", "Tables"));
    public void log(String str) {
-         logger.info(str);
+         logger.finest(str);
    }
-     public static void register_mbeans() {
-         TimerTask taskToExecute = new CheckRegistration();
-         timer.schedule(taskToExecute, 100, INTERVAL);
-     }
-     public ColumnFamilyStore(String type, String keyspace, String name) {
+     public ColumnFamilyStore(APIClient client, String type, String keyspace, String name) {
+         super(client,
+                 new TableMetrics(keyspace, name, false /* hardcoded for now */));
        this.type = type;
        this.keyspace = keyspace;
        this.name = name;
-         mbeanName = getName(type, keyspace, name);
-         try {
-             MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
-             ObjectName nameObj = new ObjectName(mbeanName);
-             mbs.registerMBean(this, nameObj);
-         } catch (Exception e) {
-             throw new RuntimeException(e);
-         }
-         metric = new ColumnFamilyMetrics(this);
+         service = Executors.newSingleThreadExecutor();
+     }
+     public ColumnFamilyStore(APIClient client, ObjectName name) {
+         this(client, name.getKeyProperty("type"), name.getKeyProperty("keyspace"), name.getKeyProperty("columnfamily"));
    }
/** true if this CFS contains secondary index data */
@ -97,422 +182,96 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean {
    return keyspace + ":" + name;
}
- private static String getName(String type, String keyspace, String name) {
-     return "org.apache.cassandra.db:type=" + type + ",keyspace=" + keyspace
-             + ",columnfamily=" + name;
+ private static ObjectName getName(String type, String keyspace, String name) throws MalformedObjectNameException {
+     return new ObjectName(
+             "org.apache.cassandra.db:type=" + type + ",keyspace=" + keyspace + ",columnfamily=" + name);
}
- private static final class CheckRegistration extends TimerTask {
-     private APIClient c = new APIClient();
-     private int missed_response = 0;
-     // After MAX_RETRY retries we assume the API is not available
-     // and the jmx will shut down
-     private static final int MAX_RETRY = 30;
-     @Override
-     public void run() {
-         try {
-             JsonArray mbeans = c.getJsonArray("/column_family/");
-             Set<String> all_cf = new HashSet<String>();
-             for (int i = 0; i < mbeans.size(); i++) {
-                 JsonObject mbean = mbeans.getJsonObject(i);
-                 String name = getName(mbean.getString("type"),
-                         mbean.getString("ks"), mbean.getString("cf"));
-                 if (!cf.containsKey(name)) {
-                     ColumnFamilyStore cfs = new ColumnFamilyStore(
-                             mbean.getString("type"), mbean.getString("ks"),
-                             mbean.getString("cf"));
-                     cf.put(name, cfs);
-                 }
-                 all_cf.add(name);
-             }
-             // removing deleted column families
-             for (String n : cf.keySet()) {
-                 if (!all_cf.contains(n)) {
-                     cf.remove(n);
-                 }
-             }
-             missed_response = 0;
-         } catch (ProcessingException e) {
-             if (Throwables.getRootCause(e) instanceof ConnectException) {
-                 if (missed_response++ > MAX_RETRY) {
-                     System.err.println("API is not available, JMX is shutting down");
-                     System.exit(-1);
-                 }
-             } else {
-                 // ignoring exceptions, will retry on the next interval
-             }
-         } catch (Exception e) {
-             // ignoring exceptions, will retry on the next interval
-         }
-     }
- }
+ public static RegistrationChecker createRegistrationChecker() {
+     return new RegistrationChecker() {
+         @Override
+         protected void doCheck(APIClient client, JmxMBeanServer server, EnumSet<RegistrationMode> mode)
+                 throws OperationsException {
+             JsonArray mbeans = client.getJsonArray("/column_family/");
+             Set<ObjectName> all = new HashSet<ObjectName>();
+             for (int i = 0; i < mbeans.size(); i++) {
+                 JsonObject mbean = mbeans.getJsonObject(i);
+                 all.add(getName(mbean.getString("type"), mbean.getString("ks"), mbean.getString("cf")));
+             }
+             checkRegistration(server, all, mode,
+                     n -> TYPE_NAMES.contains(n.getKeyProperty("type")), n -> new ColumnFamilyStore(client, n));
+         }
+     };
+ }
/**
 * @return the name of the column family
 */
+ @Override
public String getColumnFamilyName() {
    log(" getColumnFamilyName()");
    return name;
}
/**
* Returns the total amount of data stored in the memtable, including column
* related overhead.
*
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#memtableOnHeapSize
* @return The size in bytes.
* @deprecated
*/
@Deprecated
public long getMemtableDataSize() {
log(" getMemtableDataSize()");
return c.getLongValue("/column_family/metrics/memtable_on_heap_size/" + getCFName());
}
/**
* Returns the total number of columns present in the memtable.
*
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#memtableColumnsCount
* @return The number of columns.
*/
@Deprecated
public long getMemtableColumnsCount() {
log(" getMemtableColumnsCount()");
return metric.memtableColumnsCount.value();
}
/**
* Returns the number of times that a flush has resulted in the memtable
* being switched out.
*
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#memtableSwitchCount
* @return the number of memtable switches
*/
@Deprecated
public int getMemtableSwitchCount() {
log(" getMemtableSwitchCount()");
return c.getIntValue("/column_family/metrics/memtable_switch_count/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#recentSSTablesPerRead
* @return a histogram of the number of sstable data files accessed per
* read: reading this property resets it
*/
@Deprecated
public long[] getRecentSSTablesPerReadHistogram() {
log(" getRecentSSTablesPerReadHistogram()");
return metric.getRecentSSTablesPerRead();
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#sstablesPerReadHistogram
* @return a histogram of the number of sstable data files accessed per read
*/
@Deprecated
public long[] getSSTablesPerReadHistogram() {
log(" getSSTablesPerReadHistogram()");
return metric.sstablesPerRead.getBuckets(false);
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#readLatency
* @return the number of read operations on this column family
*/
@Deprecated
public long getReadCount() {
log(" getReadCount()");
return c.getIntValue("/column_family/metrics/read/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#readLatency
* @return total read latency (divide by getReadCount() for average)
*/
@Deprecated
public long getTotalReadLatencyMicros() {
log(" getTotalReadLatencyMicros()");
return c.getLongValue("/column_family/metrics/read_latency/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#readLatency
* @return an array representing the latency histogram
*/
@Deprecated
public long[] getLifetimeReadLatencyHistogramMicros() {
log(" getLifetimeReadLatencyHistogramMicros()");
return metric.readLatency.totalLatencyHistogram.getBuckets(false);
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#readLatency
* @return an array representing the latency histogram
*/
@Deprecated
public long[] getRecentReadLatencyHistogramMicros() {
log(" getRecentReadLatencyHistogramMicros()");
return metric.readLatency.getRecentLatencyHistogram();
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#readLatency
* @return average latency per read operation since the last call
*/
@Deprecated
public double getRecentReadLatencyMicros() {
log(" getRecentReadLatencyMicros()");
return metric.readLatency.getRecentLatency();
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#writeLatency
* @return the number of write operations on this column family
*/
@Deprecated
public long getWriteCount() {
log(" getWriteCount()");
return c.getLongValue("/column_family/metrics/write/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#writeLatency
* @return total write latency (divide by getReadCount() for average)
*/
@Deprecated
public long getTotalWriteLatencyMicros() {
log(" getTotalWriteLatencyMicros()");
return c.getLongValue("/column_family/metrics/write_latency/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#writeLatency
* @return an array representing the latency histogram
*/
@Deprecated
public long[] getLifetimeWriteLatencyHistogramMicros() {
log(" getLifetimeWriteLatencyHistogramMicros()");
return metric.writeLatency.totalLatencyHistogram.getBuckets(false);
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#writeLatency
* @return an array representing the latency histogram
*/
@Deprecated
public long[] getRecentWriteLatencyHistogramMicros() {
log(" getRecentWriteLatencyHistogramMicros()");
return metric.writeLatency.getRecentLatencyHistogram();
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#writeLatency
* @return average latency per write operation since the last call
*/
@Deprecated
public double getRecentWriteLatencyMicros() {
log(" getRecentWriteLatencyMicros()");
return metric.writeLatency.getRecentLatency();
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#pendingFlushes
* @return the estimated number of tasks pending for this column family
*/
@Deprecated
public int getPendingTasks() {
log(" getPendingTasks()");
return c.getIntValue("/column_family/metrics/pending_flushes/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#liveSSTableCount
* @return the number of SSTables on disk for this CF
*/
@Deprecated
public int getLiveSSTableCount() {
log(" getLiveSSTableCount()");
return c.getIntValue("/column_family/metrics/live_ss_table_count/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#liveDiskSpaceUsed
* @return disk space used by SSTables belonging to this CF
*/
@Deprecated
public long getLiveDiskSpaceUsed() {
log(" getLiveDiskSpaceUsed()");
return c.getLongValue("/column_family/metrics/live_disk_space_used/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#totalDiskSpaceUsed
* @return total disk space used by SSTables belonging to this CF, including
* obsolete ones waiting to be GC'd
*/
@Deprecated
public long getTotalDiskSpaceUsed() {
log(" getTotalDiskSpaceUsed()");
return c.getLongValue("/column_family/metrics/total_disk_space_used/" + getCFName());
}
/**
 * force a major compaction of this column family
 */
- public void forceMajorCompaction()
-         throws ExecutionException, InterruptedException {
+ public void forceMajorCompaction() throws ExecutionException, InterruptedException {
    log(" forceMajorCompaction() throws ExecutionException, InterruptedException");
-     c.post("column_family/major_compaction/" + getCFName());
+     client.post("column_family/major_compaction/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#minRowSize
* @return the size of the smallest compacted row
*/
@Deprecated
public long getMinRowSize() {
log(" getMinRowSize()");
return c.getLongValue("/column_family/metrics/min_row_size/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#maxRowSize
* @return the size of the largest compacted row
*/
@Deprecated
public long getMaxRowSize() {
log(" getMaxRowSize()");
return c.getLongValue("/column_family/metrics/max_row_size/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#meanRowSize
* @return the average row size across all the sstables
*/
@Deprecated
public long getMeanRowSize() {
log(" getMeanRowSize()");
return c.getLongValue("/column_family/metrics/mean_row_size/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#bloomFilterFalsePositives
*/
@Deprecated
public long getBloomFilterFalsePositives() {
log(" getBloomFilterFalsePositives()");
return c.getLongValue("/column_family/metrics/bloom_filter_false_positives/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#recentBloomFilterFalsePositives
*/
@Deprecated
public long getRecentBloomFilterFalsePositives() {
log(" getRecentBloomFilterFalsePositives()");
return c.getLongValue("/column_family/metrics/recent_bloom_filter_false_positives/" +getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#bloomFilterFalseRatio
*/
@Deprecated
public double getBloomFilterFalseRatio() {
log(" getBloomFilterFalseRatio()");
return c.getDoubleValue("/column_family/metrics/bloom_filter_false_ratio/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#recentBloomFilterFalseRatio
*/
@Deprecated
public double getRecentBloomFilterFalseRatio() {
log(" getRecentBloomFilterFalseRatio()");
return c.getDoubleValue("/column_family/metrics/recent_bloom_filter_false_ratio/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#bloomFilterDiskSpaceUsed
*/
@Deprecated
public long getBloomFilterDiskSpaceUsed() {
log(" getBloomFilterDiskSpaceUsed()");
return c.getLongValue("/column_family/metrics/bloom_filter_disk_space_used/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#bloomFilterOffHeapMemoryUsed
*/
@Deprecated
public long getBloomFilterOffHeapMemoryUsed() {
log(" getBloomFilterOffHeapMemoryUsed()");
return c.getLongValue("/column_family/metrics/bloom_filter_off_heap_memory_used/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#indexSummaryOffHeapMemoryUsed
*/
@Deprecated
public long getIndexSummaryOffHeapMemoryUsed() {
log(" getIndexSummaryOffHeapMemoryUsed()");
return c.getLongValue("/column_family/metrics/index_summary_off_heap_memory_used/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#compressionMetadataOffHeapMemoryUsed
*/
@Deprecated
public long getCompressionMetadataOffHeapMemoryUsed() {
log(" getCompressionMetadataOffHeapMemoryUsed()");
return c.getLongValue("/column_family/metrics/compression_metadata_off_heap_memory_used/" + getCFName());
} }
/**
 * Gets the minimum number of sstables in queue before compaction kicks off
 */
+ @Override
public int getMinimumCompactionThreshold() {
    log(" getMinimumCompactionThreshold()");
-     return c.getIntValue("column_family/minimum_compaction/" + getCFName());
+     return client.getIntValue("column_family/minimum_compaction/" + getCFName());
}
/**
 * Sets the minimum number of sstables in queue before compaction kicks off
 */
+ @Override
public void setMinimumCompactionThreshold(int threshold) {
    log(" setMinimumCompactionThreshold(int threshold)");
    MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
    queryParams.add("value", Integer.toString(threshold));
-     c.post("column_family/minimum_compaction/" + getCFName(), queryParams);
+     client.post("column_family/minimum_compaction/" + getCFName(), queryParams);
}
/**
 * Gets the maximum number of sstables in queue before compaction kicks off
 */
+ @Override
public int getMaximumCompactionThreshold() {
    log(" getMaximumCompactionThreshold()");
-     return c.getIntValue("column_family/maximum_compaction/" + getCFName());
+     return client.getIntValue("column_family/maximum_compaction/" + getCFName());
}
/**
 * Sets the minimum and maximum number of SSTables in queue before
 * compaction kicks off
 */
+ @Override
public void setCompactionThresholds(int minThreshold, int maxThreshold) {
    log(" setCompactionThresholds(int minThreshold, int maxThreshold)");
    MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
    queryParams.add("minimum", Integer.toString(minThreshold));
    queryParams.add("maximum", Integer.toString(maxThreshold));
-     c.post("column_family/compaction" + getCFName(), queryParams);
+     client.post("column_family/compaction" + getCFName(), queryParams);
}
/**
 * Sets the maximum number of sstables in queue before compaction kicks off
 */
+ @Override
public void setMaximumCompactionThreshold(int threshold) {
    log(" setMaximumCompactionThreshold(int threshold)");
    MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
    queryParams.add("value", Integer.toString(threshold));
-     c.post("column_family/maximum_compaction/" + getCFName(), queryParams);
+     client.post("column_family/maximum_compaction/" + getCFName(), queryParams);
}
/**
@ -525,7 +284,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean {
log(" setCompactionStrategyClass(String className)"); log(" setCompactionStrategyClass(String className)");
MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>(); MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
queryParams.add("class_name", className); queryParams.add("class_name", className);
c.post("column_family/compaction_strategy/" + getCFName(), queryParams); client.post("column_family/compaction_strategy/" + getCFName(), queryParams);
} }
/** /**
@ -533,17 +292,16 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean {
 */
public String getCompactionStrategyClass() {
    log(" getCompactionStrategyClass()");
-     return c.getStringValue(
-             "column_family/compaction_strategy/" + getCFName());
+     return client.getStringValue("column_family/compaction_strategy/" + getCFName());
}
/**
 * Get the compression parameters
 */
+ @Override
public Map<String, String> getCompressionParameters() {
    log(" getCompressionParameters()");
-     return c.getMapStrValue(
-             "column_family/compression_parameters/" + getCFName());
+     return client.getMapStrValue("column_family/compression_parameters/" + getCFName());
}
/**
@ -552,73 +310,49 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean {
 * @param opts
 *            map of string names to values
 */
+ @Override
public void setCompressionParameters(Map<String, String> opts) {
    log(" setCompressionParameters(Map<String,String> opts)");
    MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
    queryParams.add("opts", APIClient.mapToString(opts));
-     c.post("column_family/compression_parameters/" + getCFName(),
-             queryParams);
+     client.post("column_family/compression_parameters/" + getCFName(), queryParams);
}
/**
 * Set new crc check chance
 */
+ @Override
public void setCrcCheckChance(double crcCheckChance) {
    log(" setCrcCheckChance(double crcCheckChance)");
    MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
    queryParams.add("check_chance", Double.toString(crcCheckChance));
-     c.post("column_family/crc_check_chance/" + getCFName(), queryParams);
+     client.post("column_family/crc_check_chance/" + getCFName(), queryParams);
}
+ @Override
public boolean isAutoCompactionDisabled() {
    log(" isAutoCompactionDisabled()");
-     return c.getBooleanValue("column_family/autocompaction/" + getCFName());
+     return !client.getBooleanValue("column_family/autocompaction/" + getCFName());
}
/** Number of tombstoned cells retrieved during the last slice query */
@Deprecated
public double getTombstonesPerSlice() {
    log(" getTombstonesPerSlice()");
-     return c.getDoubleValue("");
+     return client.getDoubleValue("");
}
/** Number of live cells retrieved during the last slice query */
@Deprecated
public double getLiveCellsPerSlice() {
    log(" getLiveCellsPerSlice()");
-     return c.getDoubleValue("");
+     return client.getDoubleValue("");
}
+ @Override
public long estimateKeys() {
    log(" estimateKeys()");
-     return c.getLongValue("column_family/estimate_keys/" + getCFName());
+     return client.getLongValue("column_family/estimate_keys/" + getCFName());
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#estimatedRowSizeHistogram
*/
@Deprecated
public long[] getEstimatedRowSizeHistogram() {
log(" getEstimatedRowSizeHistogram()");
return metric.estimatedRowSizeHistogram.value();
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#estimatedColumnCountHistogram
*/
@Deprecated
public long[] getEstimatedColumnCountHistogram() {
log(" getEstimatedColumnCountHistogram()");
return metric.estimatedColumnCountHistogram.value();
}
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#compressionRatio
*/
@Deprecated
public double getCompressionRatio() {
log(" getCompressionRatio()");
return c.getDoubleValue("/column_family/metrics/compression_ratio/" + getCFName());
} }
/**
@ -626,9 +360,10 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean {
 *
 * @return list of the index names
 */
+ @Override
public List<String> getBuiltIndexes() {
    log(" getBuiltIndexes()");
-     return c.getListStrValue("column_family/built_indexes/" + getCFName());
+     return client.getListStrValue("column_family/built_indexes/" + getCFName());
}
/**
@ -637,30 +372,49 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean {
 * @param key
 * @return list of filenames containing the key
 */
+ @Override
public List<String> getSSTablesForKey(String key) {
    log(" getSSTablesForKey(String key)");
    MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
    queryParams.add("key", key);
-     return c.getListStrValue("column_family/sstables/by_key/" + getCFName(),
-             queryParams);
+     return client.getListStrValue("column_family/sstables/by_key/" + getCFName(), queryParams);
}
/**
* Returns a list of filenames that contain the given key on this node
* @param key
* @param hexFormat if key is in hex string format
* @return list of filenames containing the key
*/
@Override
public List<String> getSSTablesForKey(String key, boolean hexFormat)
{
log(" getSSTablesForKey(String key)");
MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
queryParams.add("key", key);
if (hexFormat) {
queryParams.add("format", "hex");
}
return client.getListStrValue("column_family/sstables/by_key/" + getCFName(), queryParams);
}
/**
 * Scan through Keyspace/ColumnFamily's data directory determine which
 * SSTables should be loaded and load them
 */
+ @Override
public void loadNewSSTables() {
    log(" loadNewSSTables()");
-     c.post("column_family/sstable/" + getCFName());
+     client.post("column_family/sstable/" + getCFName());
}
/**
 * @return the number of SSTables in L0. Always return 0 if Leveled
 *         compaction is not enabled.
 */
+ @Override
public int getUnleveledSSTables() {
    log(" getUnleveledSSTables()");
-     return c.getIntValue("column_family/sstables/unleveled/" + getCFName());
+     return client.getIntValue("column_family/sstables/unleveled/" + getCFName());
}
/**
@ -668,10 +422,16 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean {
 *         used. array index corresponds to level(int[0] is for level 0,
 *         ...).
 */
+ @Override
public int[] getSSTableCountPerLevel() {
    log(" getSSTableCountPerLevel()");
-     return c.getIntArrValue(
-             "column_family/sstables/per_level/" + getCFName());
+     int[] res = client.getIntArrValue("column_family/sstables/per_level/" + getCFName());
+     if (res.length == 0) {
+         // no sstable count
+         // should return null
+         return null;
+     }
+     return res;
}
/**
@ -680,18 +440,20 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean {
 *
 * @return ratio
 */
+ @Override
public double getDroppableTombstoneRatio() {
    log(" getDroppableTombstoneRatio()");
-     return c.getDoubleValue("column_family/droppable_ratio/" + getCFName());
+     return client.getDoubleValue("column_family/droppable_ratio/" + getCFName());
}
/**
 * @return the size of SSTables in "snapshots" subdirectory which aren't
 *         live anymore
 */
+ @Override
public long trueSnapshotsSize() {
    log(" trueSnapshotsSize()");
-     return c.getLongValue("column_family/snapshots_size/" + getCFName());
+     return client.getLongValue("column_family/metrics/snapshots_size/" + getCFName());
}
public String getKeyspace() {
@ -699,48 +461,104 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean {
}
- @Override
- public long getRangeCount() {
-     log("getRangeCount()");
-     return metric.rangeLatency.latency.count();
- }
- @Override
- public long getTotalRangeLatencyMicros() {
-     log("getTotalRangeLatencyMicros()");
-     return metric.rangeLatency.totalLatency.count();
- }
- @Override
- public long[] getLifetimeRangeLatencyHistogramMicros() {
-     log("getLifetimeRangeLatencyHistogramMicros()");
-     return metric.rangeLatency.totalLatencyHistogram.getBuckets(false);
- }
- @Override
- public long[] getRecentRangeLatencyHistogramMicros() {
-     log("getRecentRangeLatencyHistogramMicros()");
-     return metric.rangeLatency.getRecentLatencyHistogram();
- }
- @Override
- public double getRecentRangeLatencyMicros() {
-     log("getRecentRangeLatencyMicros()");
-     return metric.rangeLatency.getRecentLatency();
- }
- @Override
- public void beginLocalSampling(String sampler, int capacity) {
-     // TODO Auto-generated method stub
-     log("beginLocalSampling()");
- }
- @Override
- public CompositeData finishLocalSampling(String sampler, int count)
-         throws OpenDataException {
-     // TODO Auto-generated method stub
-     log("finishLocalSampling()");
-     return null;
- }
+ @Override
+ public String getTableName() {
+     log(" getTableName()");
+     return name;
+ }
+ @Override
+ public void forceMajorCompaction(boolean splitOutput) throws ExecutionException, InterruptedException {
+     log(" forceMajorCompaction(boolean) throws ExecutionException, InterruptedException");
+     MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
+     queryParams.putSingle("value", valueOf(splitOutput));
+     client.post("column_family/major_compaction/" + getCFName(), queryParams);
+ }
+ @Override
+ public void setCompactionParametersJson(String options) {
+     log(" setCompactionParametersJson");
+     JsonReader reader = Json.createReaderFactory(null).createReader(new StringReader(options));
+     setCompactionParameters(
+             reader.readObject().entrySet().stream().collect(toMap(Map.Entry::getKey, e -> e.toString())));
+ }
+ @Override
+ public String getCompactionParametersJson() {
+     log(" getCompactionParametersJson");
+     JsonObjectBuilder b = createObjectBuilder();
+     getCompactionParameters().forEach(b::add);
+     return b.build().toString();
+ }
+ @Override
+ public void setCompactionParameters(Map<String, String> options) {
+     for (Map.Entry<String, String> e : options.entrySet()) {
+         // See below
+         if ("class".equals(e.getKey())) {
+             setCompactionStrategyClass(e.getValue());
+         } else {
+             throw new IllegalArgumentException(e.getKey());
+         }
+     }
+ }
+ @Override
+ public Map<String, String> getCompactionParameters() {
+     // We only currently support class. Here could have been a call that can
+     // be expanded only on the server side, but that raises controversy.
+     // Lets add some technical debt instead.
+     return Collections.singletonMap("class", getCompactionStrategyClass());
+ }
+ @Override
+ public boolean isCompactionDiskSpaceCheckEnabled() {
+     // TODO Auto-generated method stub
+     log(" isCompactionDiskSpaceCheckEnabled()");
+     return false;
+ }
+ @Override
+ public void compactionDiskSpaceCheck(boolean enable) {
+     // TODO Auto-generated method stub
+     log(" compactionDiskSpaceCheck()");
+ }
@Override
public void beginLocalSampling(String sampler_base, int capacity) {
MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
queryParams.add("capacity", Integer.toString(capacity));
if (sampler_base.contains(":")) {
String[] parts = sampler_base.split(":");
queryParams.add("duration", parts[1]);
} else {
queryParams.add("duration", "10000");
}
startTableSampling(queryParams);
log(" beginLocalSampling()");
}
@Override
public CompositeData finishLocalSampling(String samplerType, int count) throws OpenDataException {
log(" finishLocalSampling()");
waitUntilSamplingCompleted();
TabularDataSupport result = new TabularDataSupport(COUNTER_TYPE);
JsonArray counters = tableSamplerResult.getJsonArray((samplerType.equalsIgnoreCase("reads")) ? "read" : "write");
long cardinality = tableSamplerResult.getJsonNumber((samplerType.equalsIgnoreCase("reads")) ? "read_cardinality" : "write_cardinality").longValue();
long size = 0;
if (counters != null) {
size = (count > counters.size()) ? counters.size() : count;
for (int i = 0; i < size; i++) {
JsonObject counter = counters.getJsonObject(i);
result.put(new CompositeDataSupport(COUNTER_COMPOSITE_TYPE, COUNTER_NAMES,
new Object[] { counter.getString("partition"), // raw
counter.getJsonNumber("count").longValue(), // count
counter.getJsonNumber("error").longValue(), // error
counter.getString("partition") })); // string
}
}
return new CompositeDataSupport(SAMPLING_RESULT, SAMPLER_NAMES, new Object[] { cardinality, result });
}
}
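The new beginLocalSampling accepts either a bare sampler name ("reads") or a name:duration pair ("reads:5000"), and falls back to a 10000 ms toppartitions window. The convention in isolation, as a hypothetical helper:

import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

public class SamplerArgDemo {
    static Map.Entry<String, String> parse(String samplerBase) {
        if (samplerBase.contains(":")) {
            String[] parts = samplerBase.split(":");
            return new SimpleEntry<>(parts[0], parts[1]); // explicit duration in ms
        }
        return new SimpleEntry<>(samplerBase, "10000");   // default used by the MBean
    }

    public static void main(String[] args) {
        System.out.println(parse("reads"));        // reads=10000
        System.out.println(parse("writes:5000"));  // writes=5000
    }
}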

View File

@ -17,6 +17,7 @@
 */
package org.apache.cassandra.db;
+ import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
@ -27,263 +28,28 @@ import javax.management.openmbean.OpenDataException;
/**
 * The MBean interface for ColumnFamilyStore
 */
- public interface ColumnFamilyStoreMBean
- {
+ public interface ColumnFamilyStoreMBean {
    /**
     * @return the name of the column family
     */
+     @Deprecated
    public String getColumnFamilyName();
+     public String getTableName();
/**
* Returns the total amount of data stored in the memtable, including
* column related overhead.
*
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#memtableOnHeapSize
* @return The size in bytes.
* @deprecated
*/
@Deprecated
public long getMemtableDataSize();
/**
* Returns the total number of columns present in the memtable.
*
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#memtableColumnsCount
* @return The number of columns.
*/
@Deprecated
public long getMemtableColumnsCount();
/**
* Returns the number of times that a flush has resulted in the
* memtable being switched out.
*
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#memtableSwitchCount
* @return the number of memtable switches
*/
@Deprecated
public int getMemtableSwitchCount();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#recentSSTablesPerRead
* @return a histogram of the number of sstable data files accessed per read: reading this property resets it
*/
@Deprecated
public long[] getRecentSSTablesPerReadHistogram();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#sstablesPerReadHistogram
* @return a histogram of the number of sstable data files accessed per read
*/
@Deprecated
public long[] getSSTablesPerReadHistogram();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#readLatency
* @return the number of read operations on this column family
*/
@Deprecated
public long getReadCount();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#readLatency
* @return total read latency (divide by getReadCount() for average)
*/
@Deprecated
public long getTotalReadLatencyMicros();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#readLatency
* @return an array representing the latency histogram
*/
@Deprecated
public long[] getLifetimeReadLatencyHistogramMicros();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#readLatency
* @return an array representing the latency histogram
*/
@Deprecated
public long[] getRecentReadLatencyHistogramMicros();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#readLatency
* @return average latency per read operation since the last call
*/
@Deprecated
public double getRecentReadLatencyMicros();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#writeLatency
* @return the number of write operations on this column family
*/
@Deprecated
public long getWriteCount();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#writeLatency
* @return total write latency (divide by getReadCount() for average)
*/
@Deprecated
public long getTotalWriteLatencyMicros();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#writeLatency
* @return an array representing the latency histogram
*/
@Deprecated
public long[] getLifetimeWriteLatencyHistogramMicros();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#writeLatency
* @return an array representing the latency histogram
*/
@Deprecated
public long[] getRecentWriteLatencyHistogramMicros();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#writeLatency
* @return average latency per write operation since the last call
*/
@Deprecated
public double getRecentWriteLatencyMicros();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#rangeLatency
* @return the number of range slice operations on this column family
*/
@Deprecated
public long getRangeCount();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#rangeLatency
* @return total range slice latency (divide by getRangeCount() for average)
*/
@Deprecated
public long getTotalRangeLatencyMicros();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#rangeLatency
* @return an array representing the latency histogram
*/
@Deprecated
public long[] getLifetimeRangeLatencyHistogramMicros();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#rangeLatency
* @return an array representing the latency histogram
*/
@Deprecated
public long[] getRecentRangeLatencyHistogramMicros();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#rangeLatency
* @return average latency per range slice operation since the last call
*/
@Deprecated
public double getRecentRangeLatencyMicros();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#pendingFlushes
* @return the estimated number of tasks pending for this column family
*/
@Deprecated
public int getPendingTasks();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#liveSSTableCount
* @return the number of SSTables on disk for this CF
*/
@Deprecated
public int getLiveSSTableCount();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#liveDiskSpaceUsed
* @return disk space used by SSTables belonging to this CF
*/
@Deprecated
public long getLiveDiskSpaceUsed();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#totalDiskSpaceUsed
* @return total disk space used by SSTables belonging to this CF, including obsolete ones waiting to be GC'd
*/
@Deprecated
public long getTotalDiskSpaceUsed();
/** /**
* force a major compaction of this column family * force a major compaction of this column family
*
* @param splitOutput
* true if the output of the major compaction should be split in
* several sstables
*/ */
public void forceMajorCompaction() throws ExecutionException, InterruptedException; public void forceMajorCompaction(boolean splitOutput) throws ExecutionException, InterruptedException;
/** // NOT even default-throw implementing
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#minRowSize // forceCompactionForTokenRange
* @return the size of the smallest compacted row // as this is clearly a misplaced method that should not be in the mbean interface
*/ // (uses internal cassandra types)
@Deprecated
public long getMinRowSize();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#maxRowSize
* @return the size of the largest compacted row
*/
@Deprecated
public long getMaxRowSize();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#meanRowSize
* @return the average row size across all the sstables
*/
@Deprecated
public long getMeanRowSize();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#bloomFilterFalsePositives
*/
@Deprecated
public long getBloomFilterFalsePositives();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#recentBloomFilterFalsePositives
*/
@Deprecated
public long getRecentBloomFilterFalsePositives();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#bloomFilterFalseRatio
*/
@Deprecated
public double getBloomFilterFalseRatio();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#recentBloomFilterFalseRatio
*/
@Deprecated
public double getRecentBloomFilterFalseRatio();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#bloomFilterDiskSpaceUsed
*/
@Deprecated
public long getBloomFilterDiskSpaceUsed();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#bloomFilterOffHeapMemoryUsed
*/
@Deprecated
public long getBloomFilterOffHeapMemoryUsed();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#indexSummaryOffHeapMemoryUsed
*/
@Deprecated
public long getIndexSummaryOffHeapMemoryUsed();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#compressionMetadataOffHeapMemoryUsed
*/
@Deprecated
public long getCompressionMetadataOffHeapMemoryUsed();
/** /**
* Gets the minimum number of sstables in queue before compaction kicks off * Gets the minimum number of sstables in queue before compaction kicks off
@ -301,7 +67,8 @@ public interface ColumnFamilyStoreMBean
public int getMaximumCompactionThreshold(); public int getMaximumCompactionThreshold();
/** /**
* Sets the maximum and maximum number of SSTables in queue before compaction kicks off * Sets the maximum and maximum number of SSTables in queue before
* compaction kicks off
*/ */
public void setCompactionThresholds(int minThreshold, int maxThreshold); public void setCompactionThresholds(int minThreshold, int maxThreshold);
@ -311,26 +78,44 @@ public interface ColumnFamilyStoreMBean
public void setMaximumCompactionThreshold(int threshold); public void setMaximumCompactionThreshold(int threshold);
/** /**
* Sets the compaction strategy by class name * Sets the compaction parameters locally for this node
* @param className the name of the compaction strategy class *
* Note that this will be set until an ALTER with compaction = {..} is
* executed or the node is restarted
*
* @param options
* compaction options with the same syntax as when doing ALTER
* ... WITH compaction = {..}
*/ */
public void setCompactionStrategyClass(String className); public void setCompactionParametersJson(String options);
public String getCompactionParametersJson();
/** /**
* Gets the compaction strategy class name * Sets the compaction parameters locally for this node
*
* Note that this will be set until an ALTER with compaction = {..} is
* executed or the node is restarted
*
* @param options
* compaction options map
*/ */
public String getCompactionStrategyClass(); public void setCompactionParameters(Map<String, String> options);
public Map<String, String> getCompactionParameters();
/** /**
* Get the compression parameters * Get the compression parameters
*/ */
public Map<String,String> getCompressionParameters(); public Map<String, String> getCompressionParameters();
/** /**
* Set the compression parameters * Set the compression parameters
* @param opts map of string names to values *
* @param opts
* map of string names to values
*/ */
public void setCompressionParameters(Map<String,String> opts); public void setCompressionParameters(Map<String, String> opts);
/** /**
* Set new crc check chance * Set new crc check chance
@ -339,81 +124,92 @@ public interface ColumnFamilyStoreMBean
public boolean isAutoCompactionDisabled(); public boolean isAutoCompactionDisabled();
/** Number of tombstoned cells retreived during the last slicequery */
@Deprecated
public double getTombstonesPerSlice();
/** Number of live cells retreived during the last slicequery */
@Deprecated
public double getLiveCellsPerSlice();
public long estimateKeys(); public long estimateKeys();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#estimatedRowSizeHistogram
*/
@Deprecated
public long[] getEstimatedRowSizeHistogram();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#estimatedColumnCountHistogram
*/
@Deprecated
public long[] getEstimatedColumnCountHistogram();
/**
* @see org.apache.cassandra.metrics.ColumnFamilyMetrics#compressionRatio
*/
@Deprecated
public double getCompressionRatio();
/** /**
* Returns a list of the names of the built column indexes for current store * Returns a list of the names of the built column indexes for current store
*
* @return list of the index names * @return list of the index names
*/ */
public List<String> getBuiltIndexes(); public List<String> getBuiltIndexes();
/** /**
* Returns a list of filenames that contain the given key on this node * Returns a list of filenames that contain the given key on this node
*
* @param key * @param key
* @return list of filenames containing the key * @return list of filenames containing the key
*/ */
public List<String> getSSTablesForKey(String key); public List<String> getSSTablesForKey(String key);
/** /**
* Scan through Keyspace/ColumnFamily's data directory * Returns a list of filenames that contain the given key on this node
* determine which SSTables should be loaded and load them * @param key
* @param hexFormat if key is in hex string format
* @return list of filenames containing the key
*/
public List<String> getSSTablesForKey(String key, boolean hexFormat);
/**
* Scan through Keyspace/ColumnFamily's data directory determine which
* SSTables should be loaded and load them
*/ */
public void loadNewSSTables(); public void loadNewSSTables();
/** /**
* @return the number of SSTables in L0. Always return 0 if Leveled compaction is not enabled. * @return the number of SSTables in L0. Always return 0 if Leveled
* compaction is not enabled.
*/ */
public int getUnleveledSSTables(); public int getUnleveledSSTables();
/** /**
* @return sstable count for each level. null unless leveled compaction is used. * @return sstable count for each level. null unless leveled compaction is
* array index corresponds to level(int[0] is for level 0, ...). * used. array index corresponds to level(int[0] is for level 0,
* ...).
*/ */
public int[] getSSTableCountPerLevel(); public int[] getSSTableCountPerLevel();
/** /**
* Get the ratio of droppable tombstones to real columns (and non-droppable tombstones) * @return sstable fanout size for level compaction strategy.
*/
default public int getLevelFanoutSize() {
// TODO: implement for real. This is sort of default.
return 10;
}
/**
* Get the ratio of droppable tombstones to real columns (and non-droppable
* tombstones)
*
* @return ratio * @return ratio
*/ */
public double getDroppableTombstoneRatio(); public double getDroppableTombstoneRatio();
/** /**
* @return the size of SSTables in "snapshots" subdirectory which aren't live anymore * @return the size of SSTables in "snapshots" subdirectory which aren't
* live anymore
*/ */
public long trueSnapshotsSize(); public long trueSnapshotsSize();
/** /**
* begin sampling for a specific sampler with a given capacity. The cardinality may * begin sampling for a specific sampler with a given capacity. The
* be larger than the capacity, but depending on the use case it may affect its accuracy * cardinality may be larger than the capacity, but depending on the use
* case it may affect its accuracy
*/ */
public void beginLocalSampling(String sampler, int capacity); public void beginLocalSampling(String sampler, int capacity);
/** /**
* @return top <i>count</i> items for the sampler since beginLocalSampling was called * @return top <i>count</i> items for the sampler since beginLocalSampling
* was called
*/ */
public CompositeData finishLocalSampling(String sampler, int count) throws OpenDataException; public CompositeData finishLocalSampling(String sampler, int count) throws OpenDataException;
/*
* Is Compaction space check enabled
*/
public boolean isCompactionDiskSpaceCheckEnabled();
/*
* Enable/Disable compaction space check
*/
public void compactionDiskSpaceCheck(boolean enable);
} }
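Since the implementation shown earlier only honours the "class" key, a sketch of the new parameter API in use stays deliberately small. This is a hypothetical client helper; the object-name pattern is the same assumption as above:

    import java.util.Collections;
    import javax.management.JMX;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import org.apache.cassandra.db.ColumnFamilyStoreMBean;

    public class CompactionParamsExample {
        // Switches a table to LeveledCompactionStrategy until the next
        // ALTER TABLE ... WITH compaction = {..} or a node restart.
        static void switchToLcs(MBeanServerConnection conn, String ks, String cf) throws Exception {
            ObjectName name = new ObjectName(
                    "org.apache.cassandra.db:type=ColumnFamilies,keyspace=" + ks + ",columnfamily=" + cf);
            ColumnFamilyStoreMBean cfs = JMX.newMBeanProxy(conn, name, ColumnFamilyStoreMBean.class);
            // Any key other than "class" makes the scylla-jmx side throw IllegalArgumentException.
            cfs.setCompactionParameters(Collections.singletonMap("class", "LeveledCompactionStrategy"));
        }
    }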

View File: org/apache/cassandra/db/commitlog/CommitLog.java

@@ -22,85 +22,39 @@
  */
 package org.apache.cassandra.db.commitlog;
-import java.io.*;
-import java.lang.management.ManagementFactory;
-import java.util.*;
-import javax.management.MBeanServer;
-import javax.management.ObjectName;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
 import org.apache.cassandra.metrics.CommitLogMetrics;
-import com.cloudius.urchin.api.APIClient;
+import com.scylladb.jmx.api.APIClient;
+import com.scylladb.jmx.metrics.MetricsMBean;
 /*
  * Commit Log tracks every write operation into the system. The aim of the commit log is to be able to
  * successfully recover data that was not stored to disk via the Memtable.
 */
-public class CommitLog implements CommitLogMBean {
-    CommitLogMetrics metrics = new CommitLogMetrics();
+public class CommitLog extends MetricsMBean implements CommitLogMBean {
     private static final java.util.logging.Logger logger = java.util.logging.Logger
             .getLogger(CommitLog.class.getName());
-    private APIClient c = new APIClient();
     public void log(String str) {
-        logger.info(str);
+        logger.finest(str);
     }
-    private static final CommitLog instance = new CommitLog();
-    public static CommitLog getInstance() {
-        return instance;
-    }
-    private CommitLog() {
-        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
-        try {
-            mbs.registerMBean(this,
-                    new ObjectName("org.apache.cassandra.db:type=Commitlog"));
-        } catch (Exception e) {
-            throw new RuntimeException(e);
-        }
-    }
-    /**
-     * Get the number of completed tasks
-     *
-     * @see org.apache.cassandra.metrics.CommitLogMetrics#completedTasks
-     */
-    @Deprecated
-    public long getCompletedTasks() {
-        log(" getCompletedTasks()");
-        return c.getLongValue("");
-    }
-    /**
-     * Get the number of tasks waiting to be executed
-     *
-     * @see org.apache.cassandra.metrics.CommitLogMetrics#pendingTasks
-     */
-    @Deprecated
-    public long getPendingTasks() {
-        log(" getPendingTasks()");
-        return c.getLongValue("");
-    }
-    /**
-     * Get the current size used by all the commitlog segments.
-     *
-     * @see org.apache.cassandra.metrics.CommitLogMetrics#totalCommitLogSize
-     */
-    @Deprecated
-    public long getTotalCommitlogSize() {
-        log(" getTotalCommitlogSize()");
-        return c.getLongValue("");
-    }
+    public CommitLog(APIClient client) {
+        super("org.apache.cassandra.db:type=Commitlog", client, new CommitLogMetrics());
+    }
     /**
      * Recover a single file.
      */
+    @Override
     public void recover(String path) throws IOException {
         log(" recover(String path) throws IOException");
     }
@@ -109,9 +63,10 @@ public class CommitLog implements CommitLogMBean {
      * @return file names (not full paths) of active commit log segments
      *         (segments containing unflushed data)
      */
+    @Override
     public List<String> getActiveSegmentNames() {
         log(" getActiveSegmentNames()");
-        List<String> lst = c.getListStrValue("/commitlog/segments/active");
+        List<String> lst = client.getListStrValue("/commitlog/segments/active");
         Set<String> set = new HashSet<String>();
         for (String l : lst) {
             String name = l.substring(l.lastIndexOf("/") + 1, l.length());
@@ -124,9 +79,10 @@ public class CommitLog implements CommitLogMBean {
      * @return Files which are pending for archival attempt. Does NOT include
      *         failed archive attempts.
      */
+    @Override
     public List<String> getArchivingSegmentNames() {
         log(" getArchivingSegmentNames()");
-        List<String> lst = c.getListStrValue("/commitlog/segments/archiving");
+        List<String> lst = client.getListStrValue("/commitlog/segments/archiving");
         Set<String> set = new HashSet<String>();
         for (String l : lst) {
             String name = l.substring(l.lastIndexOf("/") + 1, l.length());
@@ -139,35 +95,54 @@ public class CommitLog implements CommitLogMBean {
     public String getArchiveCommand() {
         // TODO Auto-generated method stub
         log(" getArchiveCommand()");
-        return c.getStringValue("");
+        return client.getStringValue("");
     }
     @Override
     public String getRestoreCommand() {
         // TODO Auto-generated method stub
         log(" getRestoreCommand()");
-        return c.getStringValue("");
+        return client.getStringValue("");
     }
     @Override
     public String getRestoreDirectories() {
         // TODO Auto-generated method stub
         log(" getRestoreDirectories()");
-        return c.getStringValue("");
+        return client.getStringValue("");
     }
     @Override
     public long getRestorePointInTime() {
         // TODO Auto-generated method stub
         log(" getRestorePointInTime()");
-        return c.getLongValue("");
+        return client.getLongValue("");
     }
     @Override
     public String getRestorePrecision() {
         // TODO Auto-generated method stub
         log(" getRestorePrecision()");
-        return c.getStringValue("");
+        return client.getStringValue("");
     }
+    @Override
+    public long getActiveContentSize() {
+        // scylla does not compress commit log, so this is equivalent
+        return getActiveOnDiskSize();
+    }
+    @Override
+    public long getActiveOnDiskSize() {
+        return client.getLongValue("/commitlog/metrics/total_commit_log_size");
+    }
+    @Override
+    public Map<String, Double> getActiveSegmentCompressionRatios() {
+        HashMap<String, Double> res = new HashMap<>();
+        for (String name : getActiveSegmentNames()) {
+            res.put(name, 1.0);
+        }
+        return res;
+    }
 }

View File: org/apache/cassandra/db/commitlog/CommitLogMBean.java

@@ -19,32 +19,9 @@ package org.apache.cassandra.db.commitlog;
 import java.io.IOException;
 import java.util.List;
+import java.util.Map;
 public interface CommitLogMBean {
-    /**
-     * Get the number of completed tasks
-     *
-     * @see org.apache.cassandra.metrics.CommitLogMetrics#completedTasks
-     */
-    @Deprecated
-    public long getCompletedTasks();
-    /**
-     * Get the number of tasks waiting to be executed
-     *
-     * @see org.apache.cassandra.metrics.CommitLogMetrics#pendingTasks
-     */
-    @Deprecated
-    public long getPendingTasks();
-    /**
-     * Get the current size used by all the commitlog segments.
-     *
-     * @see org.apache.cassandra.metrics.CommitLogMetrics#totalCommitLogSize
-     */
-    @Deprecated
-    public long getTotalCommitlogSize();
     /**
      * Command to execute to archive a commitlog segment. Blank to disabled.
     */
@@ -92,4 +69,21 @@ public interface CommitLogMBean {
      *         failed archive attempts.
     */
     public List<String> getArchivingSegmentNames();
+    /**
+     * @return The size of the mutations in all active commit log segments
+     *         (uncompressed).
+     */
+    public long getActiveContentSize();
+    /**
+     * @return The space taken on disk by the commit log (compressed).
+     */
+    public long getActiveOnDiskSize();
+    /**
+     * @return A map between active log segments and the compression ratio
+     *         achieved for each.
+     */
+    public Map<String, Double> getActiveSegmentCompressionRatios();
 }
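The three new attributes compose naturally into an aggregate compression ratio. A small sketch (on Scylla this always yields 1.0, since the commit log is not compressed, as the implementation above shows):

    import org.apache.cassandra.db.commitlog.CommitLogMBean;

    public class CommitLogRatio {
        static double overallRatio(CommitLogMBean commitLog) {
            long content = commitLog.getActiveContentSize(); // uncompressed bytes
            long onDisk = commitLog.getActiveOnDiskSize();   // bytes on disk
            return content == 0 ? 1.0 : (double) onDisk / content;
        }
    }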

View File: org/apache/cassandra/db/compaction/CompactionHistoryTabularData.java

@@ -0,0 +1,98 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright 2015 ScyllaDB
*
* Modified by ScyllaDB
*/
package org.apache.cassandra.db.compaction;
import jakarta.json.JsonArray;
import jakarta.json.JsonObject;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.OpenDataException;
import javax.management.openmbean.OpenType;
import javax.management.openmbean.SimpleType;
import javax.management.openmbean.TabularData;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;
import com.google.common.base.Throwables;
public class CompactionHistoryTabularData {
private static final String[] ITEM_NAMES = new String[] { "id", "keyspace_name", "columnfamily_name",
"compacted_at", "bytes_in", "bytes_out", "rows_merged" };
private static final String[] ITEM_DESCS = new String[] { "time uuid", "keyspace name", "column family name",
"compaction finished at", "total bytes in", "total bytes out", "total rows merged" };
private static final String TYPE_NAME = "CompactionHistory";
private static final String ROW_DESC = "CompactionHistory";
private static final OpenType<?>[] ITEM_TYPES;
private static final CompositeType COMPOSITE_TYPE;
private static final TabularType TABULAR_TYPE;
static {
try {
ITEM_TYPES = new OpenType[] { SimpleType.STRING, SimpleType.STRING, SimpleType.STRING, SimpleType.LONG,
SimpleType.LONG, SimpleType.LONG, SimpleType.STRING };
COMPOSITE_TYPE = new CompositeType(TYPE_NAME, ROW_DESC, ITEM_NAMES, ITEM_DESCS, ITEM_TYPES);
TABULAR_TYPE = new TabularType(TYPE_NAME, ROW_DESC, COMPOSITE_TYPE, ITEM_NAMES);
} catch (OpenDataException e) {
throw Throwables.propagate(e);
}
}
public static TabularData from(JsonArray resultSet) throws OpenDataException {
TabularDataSupport result = new TabularDataSupport(TABULAR_TYPE);
for (int i = 0; i < resultSet.size(); i++) {
JsonObject row = resultSet.getJsonObject(i);
String id = row.getString("id");
String ksName = row.getString("ks");
String cfName = row.getString("cf");
long compactedAt = row.getJsonNumber("compacted_at").longValue();
long bytesIn = row.getJsonNumber("bytes_in").longValue();
long bytesOut = row.getJsonNumber("bytes_out").longValue();
JsonArray merged = row.getJsonArray("rows_merged");
StringBuilder sb = new StringBuilder();
if (merged != null) {
sb.append('{');
for (int m = 0; m < merged.size(); m++) {
JsonObject entry = merged.getJsonObject(m);
if (m > 0) {
sb.append(',');
}
sb.append(entry.getString("key")).append(':').append(entry.getString("value"));
}
sb.append('}');
}
result.put(new CompositeDataSupport(COMPOSITE_TYPE, ITEM_NAMES,
new Object[] { id, ksName, cfName, compactedAt, bytesIn, bytesOut, sb.toString() }));
}
return result;
}
}
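A sketch of the converter in use, fed a handcrafted payload shaped like the /compaction_manager/compaction_history REST reply; all field values below are made up:

    import jakarta.json.Json;
    import jakarta.json.JsonArray;
    import javax.management.openmbean.TabularData;
    import org.apache.cassandra.db.compaction.CompactionHistoryTabularData;

    public class CompactionHistoryExample {
        public static void main(String[] args) throws Exception {
            JsonArray resultSet = Json.createArrayBuilder()
                    .add(Json.createObjectBuilder()
                            .add("id", "f9a3e2c0-0000-1000-8000-000000000000")
                            .add("ks", "ks1")
                            .add("cf", "t1")
                            .add("compacted_at", 1650000000000L)
                            .add("bytes_in", 1048576L)
                            .add("bytes_out", 524288L)
                            .add("rows_merged", Json.createArrayBuilder()
                                    .add(Json.createObjectBuilder()
                                            .add("key", "1")
                                            .add("value", "100"))))
                    .build();
            TabularData table = CompactionHistoryTabularData.from(resultSet);
            System.out.println(table.size()); // 1 row; rows_merged rendered as {1:100}
        }
    }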

View File: org/apache/cassandra/db/compaction/CompactionManager.java

@@ -17,18 +17,23 @@
  */
 package org.apache.cassandra.db.compaction;
-import java.lang.management.ManagementFactory;
-import java.util.*;
+import jakarta.json.JsonArray;
+import jakarta.json.JsonObject;
+import jakarta.ws.rs.core.MultivaluedHashMap;
+import jakarta.ws.rs.core.MultivaluedMap;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.logging.Logger;
-import javax.management.MBeanServer;
-import javax.management.ObjectName;
+import javax.management.openmbean.OpenDataException;
 import javax.management.openmbean.TabularData;
-import javax.ws.rs.core.MultivaluedHashMap;
-import javax.ws.rs.core.MultivaluedMap;
 import org.apache.cassandra.metrics.CompactionMetrics;
-import com.cloudius.urchin.api.APIClient;
+import com.scylladb.jmx.api.APIClient;
+import com.scylladb.jmx.metrics.MetricsMBean;
 /**
  * A singleton which manages a private executor of ongoing compactions.
@@ -40,91 +45,58 @@ import com.cloudius.urchin.api.APIClient;
 /*
  * Copyright 2015 Cloudius Systems
  *
  * Modified by Cloudius Systems
 */
-public class CompactionManager implements CompactionManagerMBean {
+public class CompactionManager extends MetricsMBean implements CompactionManagerMBean {
     public static final String MBEAN_OBJECT_NAME = "org.apache.cassandra.db:type=CompactionManager";
-    private static final java.util.logging.Logger logger = java.util.logging.Logger
-            .getLogger(CompactionManager.class.getName());
+    private static final Logger logger = Logger.getLogger(CompactionManager.class.getName());
-    public static final CompactionManager instance;
-    private APIClient c = new APIClient();
-    CompactionMetrics metrics = new CompactionMetrics();
     public void log(String str) {
-        logger.info(str);
+        logger.finest(str);
     }
-    static {
-        instance = new CompactionManager();
-        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
-        try {
-            mbs.registerMBean(instance, new ObjectName(MBEAN_OBJECT_NAME));
-        } catch (Exception e) {
-            throw new RuntimeException(e);
-        }
-    }
-    public static CompactionManager getInstance() {
-        return instance;
-    }
+    public CompactionManager(APIClient client) {
+        super(MBEAN_OBJECT_NAME, client, new CompactionMetrics());
+    }
     /** List of running compaction objects. */
+    @Override
     public List<Map<String, String>> getCompactions() {
         log(" getCompactions()");
-        return c.getListMapStrValue("compaction_manager/compactions");
+        List<Map<String, String>> results = new ArrayList<Map<String, String>>();
+        JsonArray compactions = client.getJsonArray("compaction_manager/compactions");
+        for (int i = 0; i < compactions.size(); i++) {
+            JsonObject compaction = compactions.getJsonObject(i);
+            Map<String, String> result = new HashMap<String, String>();
+            result.put("total", Long.toString(compaction.getJsonNumber("total").longValue()));
+            result.put("completed", Long.toString(compaction.getJsonNumber("completed").longValue()));
+            result.put("taskType", compaction.getString("task_type"));
+            result.put("keyspace", compaction.getString("ks"));
+            result.put("columnfamily", compaction.getString("cf"));
+            result.put("unit", compaction.getString("unit"));
+            result.put("compactionId", (compaction.containsKey("id")) ? compaction.getString("id") : "<none>");
+            results.add(result);
+        }
+        return results;
     }
     /** List of running compaction summary strings. */
+    @Override
     public List<String> getCompactionSummary() {
         log(" getCompactionSummary()");
-        return c.getListStrValue("compaction_manager/compaction_summary");
+        return client.getListStrValue("compaction_manager/compaction_summary");
     }
     /** compaction history **/
+    @Override
     public TabularData getCompactionHistory() {
         log(" getCompactionHistory()");
-        return c.getCQLResult("SELECT * from system.compaction_history");
-    }
-    /**
-     * @see org.apache.cassandra.metrics.CompactionMetrics#pendingTasks
-     * @return estimated number of compactions remaining to perform
-     */
-    @Deprecated
-    public int getPendingTasks() {
-        log(" getPendingTasks()");
-        return c.getIntValue("");
-    }
-    /**
-     * @see org.apache.cassandra.metrics.CompactionMetrics#completedTasks
-     * @return number of completed compactions since server [re]start
-     */
-    @Deprecated
-    public long getCompletedTasks() {
-        log(" getCompletedTasks()");
-        return c.getLongValue("");
-    }
-    /**
-     * @see org.apache.cassandra.metrics.CompactionMetrics#bytesCompacted
-     * @return total number of bytes compacted since server [re]start
-     */
-    @Deprecated
-    public long getTotalBytesCompacted() {
-        log(" getTotalBytesCompacted()");
-        return c.getLongValue("");
-    }
-    /**
-     * @see org.apache.cassandra.metrics.CompactionMetrics#totalCompactionsCompleted
-     * @return total number of compactions since server [re]start
-     */
-    @Deprecated
-    public long getTotalCompactionsCompleted() {
-        log(" getTotalCompactionsCompleted()");
-        return c.getLongValue("");
-    }
+        try {
+            return CompactionHistoryTabularData.from(client.getJsonArray("/compaction_manager/compaction_history"));
+        } catch (OpenDataException e) {
+            return null;
+        }
+    }
     /**
@@ -138,12 +110,12 @@ public class CompactionManager implements CompactionManagerMBean {
      *            contain keyspace and columnfamily name in path(for 2.1+) or
      *            file name itself.
      */
+    @Override
     public void forceUserDefinedCompaction(String dataFiles) {
         log(" forceUserDefinedCompaction(String dataFiles)");
         MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
         queryParams.add("dataFiles", dataFiles);
-        c.post("compaction_manager/compaction_manager/force_user_defined_compaction",
-                queryParams);
+        client.post("compaction_manager/force_user_defined_compaction", queryParams);
     }
     /**
@@ -153,28 +125,30 @@ public class CompactionManager implements CompactionManagerMBean {
      *            the type of compaction to stop. Can be one of: - COMPACTION -
      *            VALIDATION - CLEANUP - SCRUB - INDEX_BUILD
      */
+    @Override
     public void stopCompaction(String type) {
         log(" stopCompaction(String type)");
         MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
         queryParams.add("type", type);
-        c.post("compaction_manager/compaction_manager/stop_compaction",
-                queryParams);
+        client.post("compaction_manager/stop_compaction", queryParams);
     }
     /**
      * Returns core size of compaction thread pool
      */
+    @Override
     public int getCoreCompactorThreads() {
         log(" getCoreCompactorThreads()");
-        return c.getIntValue("");
+        return client.getIntValue("");
     }
     /**
     * Allows user to resize maximum size of the compaction thread pool.
     *
     * @param number
     *            New maximum of compaction threads
     */
+    @Override
     public void setCoreCompactorThreads(int number) {
         log(" setCoreCompactorThreads(int number)");
     }
@@ -182,17 +156,19 @@ public class CompactionManager implements CompactionManagerMBean {
     /**
     * Returns maximum size of compaction thread pool
     */
+    @Override
     public int getMaximumCompactorThreads() {
         log(" getMaximumCompactorThreads()");
-        return c.getIntValue("");
+        return client.getIntValue("");
     }
     /**
     * Allows user to resize maximum size of the compaction thread pool.
     *
     * @param number
     *            New maximum of compaction threads
     */
+    @Override
     public void setMaximumCompactorThreads(int number) {
         log(" setMaximumCompactorThreads(int number)");
     }
@@ -200,17 +176,19 @@ public class CompactionManager implements CompactionManagerMBean {
     /**
     * Returns core size of validation thread pool
     */
+    @Override
     public int getCoreValidationThreads() {
         log(" getCoreValidationThreads()");
-        return c.getIntValue("");
+        return client.getIntValue("");
     }
     /**
     * Allows user to resize maximum size of the compaction thread pool.
     *
     * @param number
     *            New maximum of compaction threads
     */
+    @Override
     public void setCoreValidationThreads(int number) {
         log(" setCoreValidationThreads(int number)");
     }
@@ -218,19 +196,31 @@ public class CompactionManager implements CompactionManagerMBean {
     /**
     * Returns size of validator thread pool
     */
+    @Override
     public int getMaximumValidatorThreads() {
         log(" getMaximumValidatorThreads()");
-        return c.getIntValue("");
+        return client.getIntValue("");
     }
     /**
     * Allows user to resize maximum size of the validator thread pool.
     *
     * @param number
     *            New maximum of validator threads
     */
+    @Override
     public void setMaximumValidatorThreads(int number) {
         log(" setMaximumValidatorThreads(int number)");
     }
+    @Override
+    public void stopCompactionById(String compactionId) {
+        // Scylla has neither compaction ids nor the file described in:
+        // "Ids can be found in the transaction log files whose name starts with
+        // compaction_, located in the table transactions folder"
+        // (nodetool)
+        // TODO: throw?
+        log(" stopCompactionById");
+    }
 }
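The string map built by getCompactions() above is what nodetool compactionstats consumes. A sketch of polling it through a previously obtained MBean proxy (proxy creation assumed as in the earlier examples):

    import java.util.Map;
    import org.apache.cassandra.db.compaction.CompactionManagerMBean;

    public class CompactionPoll {
        static void printProgress(CompactionManagerMBean cm) {
            for (Map<String, String> c : cm.getCompactions()) {
                long total = Long.parseLong(c.get("total"));
                long done = Long.parseLong(c.get("completed"));
                double pct = total == 0 ? 100.0 : 100.0 * done / total;
                System.out.printf("%s %s.%s %s %.1f%%%n", c.get("taskType"),
                        c.get("keyspace"), c.get("columnfamily"), c.get("compactionId"), pct);
            }
        }
    }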

View File: org/apache/cassandra/db/compaction/CompactionManagerMBean.java

@@ -19,6 +19,7 @@ package org.apache.cassandra.db.compaction;
 import java.util.List;
 import java.util.Map;
 import javax.management.openmbean.TabularData;
 public interface CompactionManagerMBean {
@@ -31,34 +32,6 @@ public interface CompactionManagerMBean {
     /** compaction history **/
     public TabularData getCompactionHistory();
-    /**
-     * @see org.apache.cassandra.metrics.CompactionMetrics#pendingTasks
-     * @return estimated number of compactions remaining to perform
-     */
-    @Deprecated
-    public int getPendingTasks();
-    /**
-     * @see org.apache.cassandra.metrics.CompactionMetrics#completedTasks
-     * @return number of completed compactions since server [re]start
-     */
-    @Deprecated
-    public long getCompletedTasks();
-    /**
-     * @see org.apache.cassandra.metrics.CompactionMetrics#bytesCompacted
-     * @return total number of bytes compacted since server [re]start
-     */
-    @Deprecated
-    public long getTotalBytesCompacted();
-    /**
-     * @see org.apache.cassandra.metrics.CompactionMetrics#totalCompactionsCompleted
-     * @return total number of compactions since server [re]start
-     */
-    @Deprecated
-    public long getTotalCompactionsCompleted();
     /**
      * Triggers the compaction of user specified sstables. You can specify files
      * from various keyspaces and columnfamilies. If you do so, user defined
@@ -70,15 +43,37 @@ public interface CompactionManagerMBean {
     */
     public void forceUserDefinedCompaction(String dataFiles);
+    /**
+     * Triggers the cleanup of user specified sstables.
+     * You can specify files from various keyspaces and columnfamilies.
+     * If you do so, cleanup is performed each file individually
+     *
+     * @param dataFiles a comma separated list of sstable file to cleanup.
+     *                  must contain keyspace and columnfamily name in path(for 2.1+) or file name itself.
+     */
+    default public void forceUserDefinedCleanup(String dataFiles) {
+        throw new UnsupportedOperationException();
+    }
     /**
     * Stop all running compaction-like tasks having the provided {@code type}.
     *
     * @param type
     *            the type of compaction to stop. Can be one of: - COMPACTION -
     *            VALIDATION - CLEANUP - SCRUB - INDEX_BUILD
     */
     public void stopCompaction(String type);
+    /**
+     * Stop an individual running compaction using the compactionId.
+     *
+     * @param compactionId
+     *            Compaction ID of compaction to stop. Such IDs can be found in
+     *            the transaction log files whose name starts with compaction_,
+     *            located in the table transactions folder.
+     */
+    public void stopCompactionById(String compactionId);
     /**
     * Returns core size of compaction thread pool
     */
@@ -86,7 +81,7 @@ public interface CompactionManagerMBean {
     /**
     * Allows user to resize maximum size of the compaction thread pool.
     *
     * @param number
     *            New maximum of compaction threads
     */
@@ -99,7 +94,7 @@ public interface CompactionManagerMBean {
     /**
     * Allows user to resize maximum size of the compaction thread pool.
     *
     * @param number
     *            New maximum of compaction threads
     */
@@ -112,7 +107,7 @@ public interface CompactionManagerMBean {
     /**
     * Allows user to resize maximum size of the compaction thread pool.
     *
     * @param number
     *            New maximum of compaction threads
     */
@@ -125,7 +120,7 @@ public interface CompactionManagerMBean {
     /**
     * Allows user to resize maximum size of the validator thread pool.
     *
     * @param number
     *            New maximum of validator threads
     */

View File: org/apache/cassandra/gms/ApplicationState.java

@@ -0,0 +1,35 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2015 ScyllaDB
*/
/*
 * Modified by ScyllaDB
*/
package org.apache.cassandra.gms;
public enum ApplicationState {
    STATUS, LOAD, SCHEMA, DC, RACK, RELEASE_VERSION, REMOVAL_COORDINATOR, INTERNAL_IP, RPC_ADDRESS,
    X_11_PADDING, // padding specifically for 1.1
SEVERITY, NET_VERSION, HOST_ID, TOKENS,
// pad to allow adding new states to existing cluster
X1, X2, X3, X4, X5, X6, X7, X8, X9, X10,
}

View File: org/apache/cassandra/gms/EndpointState.java

@@ -0,0 +1,109 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2015 ScyllaDB
*/
/*
 * Modified by ScyllaDB
*/
package org.apache.cassandra.gms;
import java.util.HashMap;
import java.util.Map;
/**
* This abstraction represents both the HeartBeatState and the ApplicationState
* in an EndpointState instance. Any state for a given endpoint can be retrieved
* from this instance.
*/
public class EndpointState {
private volatile HeartBeatState hbState;
final Map<ApplicationState, String> applicationState = new HashMap<ApplicationState, String>();
private volatile long updateTimestamp;
private volatile boolean isAlive;
ApplicationState[] applicationValues;
private static final java.util.logging.Logger logger = java.util.logging.Logger
.getLogger(EndpointState.class.getName());
EndpointState(HeartBeatState initialHbState) {
applicationValues = ApplicationState.values();
hbState = initialHbState;
updateTimestamp = System.nanoTime();
isAlive = true;
}
HeartBeatState getHeartBeatState() {
return hbState;
}
void setHeartBeatState(HeartBeatState newHbState) {
hbState = newHbState;
}
public String getApplicationState(ApplicationState key) {
return applicationState.get(key);
}
/**
* TODO replace this with operations that don't expose private state
*/
@Deprecated
public Map<ApplicationState, String> getApplicationStateMap() {
return applicationState;
}
void addApplicationState(ApplicationState key, String value) {
applicationState.put(key, value);
}
void addApplicationState(int key, String value) {
if (key >= applicationValues.length) {
logger.warning("Unknown application state with id:" + key);
return;
}
addApplicationState(applicationValues[key], value);
}
/* getters and setters */
/**
* @return System.nanoTime() when state was updated last time.
*/
public long getUpdateTimestamp() {
return updateTimestamp;
}
public void setUpdateTimestamp(long ts) {
updateTimestamp = ts;
}
public boolean isAlive() {
return isAlive;
}
public void setAlive(boolean alive) {
isAlive = alive;
}
@Override
public String toString() {
return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;
}
}

View File: org/apache/cassandra/gms/FailureDetector.java

@@ -24,77 +24,155 @@
 package org.apache.cassandra.gms;
-import java.lang.management.ManagementFactory;
+import jakarta.json.JsonArray;
+import jakarta.json.JsonObject;
+import jakarta.json.JsonValue;
 import java.net.UnknownHostException;
-import java.util.*;
-import javax.management.MBeanServer;
-import javax.management.ObjectName;
+import java.util.HashMap;
+import java.util.Map;
+import javax.management.openmbean.CompositeData;
+import javax.management.openmbean.CompositeDataSupport;
+import javax.management.openmbean.CompositeType;
+import javax.management.openmbean.OpenDataException;
+import javax.management.openmbean.OpenType;
+import javax.management.openmbean.SimpleType;
+import javax.management.openmbean.TabularData;
+import javax.management.openmbean.TabularDataSupport;
+import javax.management.openmbean.TabularType;
-import com.cloudius.urchin.api.APIClient;
+import com.scylladb.jmx.api.APIClient;
+import com.scylladb.jmx.metrics.APIMBean;
-public class FailureDetector implements FailureDetectorMBean {
+public class FailureDetector extends APIMBean implements FailureDetectorMBean {
     public static final String MBEAN_NAME = "org.apache.cassandra.net:type=FailureDetector";
     private static final java.util.logging.Logger logger = java.util.logging.Logger
             .getLogger(FailureDetector.class.getName());
-    private APIClient c = new APIClient();
+    public FailureDetector(APIClient c) {
+        super(c);
+    }
     public void log(String str) {
-        logger.info(str);
+        logger.finest(str);
     }
-    private static final FailureDetector instance = new FailureDetector();
-    public static FailureDetector getInstance() {
-        return instance;
-    }
-    private FailureDetector() {
-        // Register this instance with JMX
-        try {
-            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
-            mbs.registerMBean(this, new ObjectName(MBEAN_NAME));
-        } catch (Exception e) {
-            throw new RuntimeException(e);
-        }
-    }
+    @Override
     public void dumpInterArrivalTimes() {
         log(" dumpInterArrivalTimes()");
     }
+    @Override
     public void setPhiConvictThreshold(double phi) {
         log(" setPhiConvictThreshold(double phi)");
     }
+    @Override
     public double getPhiConvictThreshold() {
         log(" getPhiConvictThreshold()");
-        return c.getDoubleValue("/failure_detector/phi");
+        return client.getDoubleValue("/failure_detector/phi");
     }
+    @Override
     public String getAllEndpointStates() {
         log(" getAllEndpointStates()");
-        return c.getStringValue("/failure_detector/endpoints");
+        StringBuilder sb = new StringBuilder();
+        for (Map.Entry<String, EndpointState> entry : getEndpointStateMap().entrySet()) {
+            sb.append('/').append(entry.getKey()).append("\n");
+            appendEndpointState(sb, entry.getValue());
+        }
+        return sb.toString();
     }
+    private void appendEndpointState(StringBuilder sb, EndpointState endpointState) {
+        sb.append(" generation:").append(endpointState.getHeartBeatState().getGeneration()).append("\n");
+        sb.append(" heartbeat:").append(endpointState.getHeartBeatState().getHeartBeatVersion()).append("\n");
+        for (Map.Entry<ApplicationState, String> state : endpointState.applicationState.entrySet()) {
+            if (state.getKey() == ApplicationState.TOKENS) {
+                continue;
+            }
+            sb.append(" ").append(state.getKey()).append(":").append(state.getValue()).append("\n");
+        }
+    }
+    public Map<String, EndpointState> getEndpointStateMap() {
+        Map<String, EndpointState> res = new HashMap<String, EndpointState>();
+        JsonArray arr = client.getJsonArray("/failure_detector/endpoints");
+        for (int i = 0; i < arr.size(); i++) {
+            JsonObject obj = arr.getJsonObject(i);
+            EndpointState ep = new EndpointState(new HeartBeatState(obj.getInt("generation"), obj.getInt("version")));
+            ep.setAlive(obj.getBoolean("is_alive"));
+            ep.setUpdateTimestamp(obj.getJsonNumber("update_time").longValue());
+            JsonArray states = obj.getJsonArray("application_state");
+            if (states != null) {
+                for (int j = 0; j < states.size(); j++) {
+                    JsonObject state = states.getJsonObject(j);
+                    ep.addApplicationState(state.getInt("application_state"), state.getString("value"));
+                }
+            }
+            res.put(obj.getString("addrs"), ep);
+        }
+        return res;
+    }
+    @Override
     public String getEndpointState(String address) throws UnknownHostException {
         log(" getEndpointState(String address) throws UnknownHostException");
-        return c.getStringValue("/failure_detector/endpoints/states/" + address);
+        return client.getStringValue("/failure_detector/endpoints/states/" + address);
     }
+    @Override
     public Map<String, String> getSimpleStates() {
         log(" getSimpleStates()");
-        return c.getMapStrValue("/failure_detector/simple_states");
+        return client.getMapStrValue("/failure_detector/simple_states");
     }
+    @Override
     public int getDownEndpointCount() {
         log(" getDownEndpointCount()");
-        return c.getIntValue("/failure_detector/count/endpoint/down");
+        return client.getIntValue("/failure_detector/count/endpoint/down");
     }
+    @Override
     public int getUpEndpointCount() {
         log(" getUpEndpointCount()");
-        return c.getIntValue("/failure_detector/count/endpoint/up");
+        return client.getIntValue("/failure_detector/count/endpoint/up");
     }
+    // From origin: this is useless except to provide backwards compatibility
+    // in phi_convict_threshold, because everyone seems pretty accustomed to
+    // the default of 8, and users who have already tuned their
+    // phi_convict_threshold for their own environments won't need to change.
+    private final double PHI_FACTOR = 1.0 / Math.log(10.0); // 0.434...
+    @Override
+    public TabularData getPhiValues() throws OpenDataException {
+        final CompositeType ct = new CompositeType("Node", "Node", new String[] { "Endpoint", "PHI" },
+                new String[] { "IP of the endpoint", "PHI value" },
+                new OpenType[] { SimpleType.STRING, SimpleType.DOUBLE });
+        final TabularDataSupport results = new TabularDataSupport(
+                new TabularType("PhiList", "PhiList", ct, new String[] { "Endpoint" }));
+        final JsonArray arr = client.getJsonArray("/failure_detector/endpoint_phi_values");
+        for (JsonValue v : arr) {
+            JsonObject o = (JsonObject) v;
+            String endpoint = o.getString("endpoint");
+            double phi = Double.parseDouble(o.getString("phi"));
+            if (phi != Double.MIN_VALUE) {
+                // returned values are scaled by PHI_FACTOR so that they are on
+                // the same scale as PhiConvictThreshold
+                final CompositeData data = new CompositeDataSupport(ct, new String[] { "Endpoint", "PHI" },
+                        new Object[] { endpoint, phi * PHI_FACTOR });
+                results.put(data);
+            }
+        }
+        return results;
+    }
 }
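A sketch of consuming the new PhiValues table from a client; rows are keyed by endpoint address and the PHI column is already scaled by PHI_FACTOR, as the implementation above notes:

    import javax.management.openmbean.CompositeData;
    import javax.management.openmbean.TabularData;
    import org.apache.cassandra.gms.FailureDetectorMBean;

    public class PhiDump {
        static void dump(FailureDetectorMBean fd) throws Exception {
            TabularData phis = fd.getPhiValues();
            for (Object row : phis.values()) {
                CompositeData data = (CompositeData) row;
                System.out.println(data.get("Endpoint") + " phi=" + data.get("PHI"));
            }
        }
    }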

View File: org/apache/cassandra/gms/FailureDetectorMBean.java

@@ -20,8 +20,10 @@ package org.apache.cassandra.gms;
 import java.net.UnknownHostException;
 import java.util.Map;
-public interface FailureDetectorMBean
-{
+import javax.management.openmbean.OpenDataException;
+import javax.management.openmbean.TabularData;
+public interface FailureDetectorMBean {
     public void dumpInterArrivalTimes();
     public void setPhiConvictThreshold(double phi);
@@ -37,4 +39,6 @@ public interface FailureDetectorMBean
     public int getDownEndpointCount();
     public int getUpEndpointCount();
+    public TabularData getPhiValues() throws OpenDataException;
 }

View File: org/apache/cassandra/gms/Gossiper.java

@@ -23,15 +23,13 @@
  */
 package org.apache.cassandra.gms;
-import java.lang.management.ManagementFactory;
+import jakarta.ws.rs.core.MultivaluedHashMap;
+import jakarta.ws.rs.core.MultivaluedMap;
 import java.net.UnknownHostException;
+import java.util.logging.Logger;
-import javax.management.MBeanServer;
-import javax.management.ObjectName;
-import javax.ws.rs.core.MultivaluedHashMap;
-import javax.ws.rs.core.MultivaluedMap;
-import com.cloudius.urchin.api.APIClient;
+import com.scylladb.jmx.api.APIClient;
+import com.scylladb.jmx.metrics.APIMBean;
 /**
  * This module is responsible for Gossiping information for the local endpoint.
@@ -48,57 +46,43 @@ import com.cloudius.urchin.api.APIClient;
  * node as down in the Failure Detector.
  */
-public class Gossiper implements GossiperMBean {
+public class Gossiper extends APIMBean implements GossiperMBean {
     public static final String MBEAN_NAME = "org.apache.cassandra.net:type=Gossiper";
-    private static final java.util.logging.Logger logger = java.util.logging.Logger
-            .getLogger(Gossiper.class.getName());
-    private APIClient c = new APIClient();
+    private static final Logger logger = Logger.getLogger(Gossiper.class.getName());
+    public Gossiper(APIClient c) {
+        super(c);
+    }
     public void log(String str) {
-        logger.info(str);
+        logger.finest(str);
     }
-    private static final Gossiper instance = new Gossiper();
-    public static Gossiper getInstance() {
-        return instance;
-    }
-    private Gossiper() {
-        // Register this instance with JMX
-        try {
-            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
-            mbs.registerMBean(this, new ObjectName(MBEAN_NAME));
-        } catch (Exception e) {
-            throw new RuntimeException(e);
-        }
-    }
+    @Override
     public long getEndpointDowntime(String address) throws UnknownHostException {
         log(" getEndpointDowntime(String address) throws UnknownHostException");
-        return c.getLongValue("gossiper/downtime/" + address);
+        return client.getLongValue("gossiper/downtime/" + address);
     }
-    public int getCurrentGenerationNumber(String address)
-            throws UnknownHostException {
+    @Override
+    public int getCurrentGenerationNumber(String address) throws UnknownHostException {
         log(" getCurrentGenerationNumber(String address) throws UnknownHostException");
-        return c.getIntValue("gossiper/generation_number/" + address);
+        return client.getIntValue("gossiper/generation_number/" + address);
     }
-    public void unsafeAssassinateEndpoint(String address)
-            throws UnknownHostException {
+    @Override
+    public void unsafeAssassinateEndpoint(String address) throws UnknownHostException {
         log(" unsafeAssassinateEndpoint(String address) throws UnknownHostException");
         MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
         queryParams.add("unsafe", "True");
-        c.post("gossiper/assassinate/" + address, queryParams);
+        client.post("gossiper/assassinate/" + address, queryParams);
     }
+    @Override
     public void assassinateEndpoint(String address) throws UnknownHostException {
         log(" assassinateEndpoint(String address) throws UnknownHostException");
-        c.post("gossiper/assassinate/" + address, null);
+        client.post("gossiper/assassinate/" + address, null);
     }
 }
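
The same migration pattern (drop the self-registering singleton, take an APIClient in the constructor, inherit from APIMBean) repeats in most files below. A minimal sketch of what APIMBean must provide for the code above to compile, inferred only from the calls visible in this diff (super(c) and client.getLongValue(...)); the real scylla-jmx class also takes care of MBean registration, which is omitted here:

import com.scylladb.jmx.api.APIClient;

public abstract class APIMBean {
    // Inferred: subclasses such as Gossiper issue their REST calls
    // through this field.
    protected final APIClient client;

    protected APIMBean(APIClient client) {
        this.client = client;
    }
}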

[file: org/apache/cassandra/gms/GossiperMBean.java]

@@ -19,12 +19,13 @@ package org.apache.cassandra.gms;
 import java.net.UnknownHostException;
-public interface GossiperMBean
-{
+public interface GossiperMBean {
     public long getEndpointDowntime(String address) throws UnknownHostException;
     public int getCurrentGenerationNumber(String address) throws UnknownHostException;
     public void unsafeAssassinateEndpoint(String address) throws UnknownHostException;
+    public void assassinateEndpoint(String address) throws UnknownHostException;
 }

[file: org/apache/cassandra/gms/HeartBeatState.java (new file)]

@@ -0,0 +1,65 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2015 ScyllaDB
*/
/*
* Modified by ScyllaDB
*/
package org.apache.cassandra.gms;

/**
 * HeartBeat State associated with any given endpoint.
 */
class HeartBeatState {
    private int generation;
    private int version;

    HeartBeatState(int gen) {
        this(gen, 0);
    }

    HeartBeatState(int gen, int ver) {
        generation = gen;
        version = ver;
    }

    int getGeneration() {
        return generation;
    }

    void updateHeartBeat() {
    }

    int getHeartBeatVersion() {
        return version;
    }

    void forceNewerGenerationUnsafe() {
        generation += 1;
    }

    void forceHighestPossibleVersionUnsafe() {
        version = Integer.MAX_VALUE;
    }

    @Override
    public String toString() {
        return String.format("HeartBeat: generation = %d, version = %d", generation, version);
    }
}
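
HeartBeatState is a plain value holder; a hypothetical usage sketch (not part of this changeset) showing how the generation/version pair behaves:

HeartBeatState state = new HeartBeatState(1);  // generation 1, version 0
state.forceNewerGenerationUnsafe();            // e.g. after a restart: generation 2
state.forceHighestPossibleVersionUnsafe();     // version = Integer.MAX_VALUE
System.out.println(state);                     // HeartBeat: generation = 2, version = 2147483647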

[file: org/apache/cassandra/locator/EndpointSnitchInfo.java]

@@ -17,41 +17,27 @@
  */
 package org.apache.cassandra.locator;
-import java.lang.management.ManagementFactory;
+import static java.util.Collections.singletonMap;
+import jakarta.ws.rs.core.MultivaluedHashMap;
+import jakarta.ws.rs.core.MultivaluedMap;
 import java.net.InetAddress;
 import java.net.UnknownHostException;
+import java.util.logging.Logger;
-import javax.management.MBeanServer;
-import javax.management.ObjectName;
-import javax.ws.rs.core.MultivaluedHashMap;
-import javax.ws.rs.core.MultivaluedMap;
-import com.cloudius.urchin.api.APIClient;
+import com.scylladb.jmx.api.APIClient;
+import com.scylladb.jmx.metrics.APIMBean;
-public class EndpointSnitchInfo implements EndpointSnitchInfoMBean {
-    private static final java.util.logging.Logger logger = java.util.logging.Logger
-            .getLogger(EndpointSnitchInfo.class.getName());
-    private APIClient c = new APIClient();
+public class EndpointSnitchInfo extends APIMBean implements EndpointSnitchInfoMBean {
+    public static final String MBEAN_NAME = "org.apache.cassandra.db:type=EndpointSnitchInfo";
+    private static final Logger logger = Logger.getLogger(EndpointSnitchInfo.class.getName());
+    public EndpointSnitchInfo(APIClient c) {
+        super(c);
+    }
     public void log(String str) {
-        logger.info(str);
+        logger.finest(str);
     }
-    private static final EndpointSnitchInfo instance = new EndpointSnitchInfo();
-    public static EndpointSnitchInfo getInstance() {
-        return instance;
-    }
-    private EndpointSnitchInfo() {
-        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
-        try {
-            mbs.registerMBean(this, new ObjectName(
-                    "org.apache.cassandra.db:type=EndpointSnitchInfo"));
-        } catch (Exception e) {
-            throw new RuntimeException(e);
-        }
-    }
     /**
@@ -64,12 +50,9 @@ public class EndpointSnitchInfo implements EndpointSnitchInfoMBean {
     @Override
     public String getRack(String host) throws UnknownHostException {
         log("getRack(String host) throws UnknownHostException");
-        MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
-        if (host == null) {
-            host = InetAddress.getLoopbackAddress().getHostAddress();
-        }
-        queryParams.add("host", host);
-        return c.getStringValue("/snitch/rack", queryParams, 10000);
+        MultivaluedMap<String, String> queryParams = host != null ? new MultivaluedHashMap<String, String>(
+                singletonMap("host", InetAddress.getByName(host).getHostAddress())) : null;
+        return client.getStringValue("/snitch/rack", queryParams, 10000);
     }
     /**
@@ -82,12 +65,9 @@ public class EndpointSnitchInfo implements EndpointSnitchInfoMBean {
     @Override
     public String getDatacenter(String host) throws UnknownHostException {
         log(" getDatacenter(String host) throws UnknownHostException");
-        MultivaluedMap<String, String> queryParams = new MultivaluedHashMap<String, String>();
-        if (host == null) {
-            host = InetAddress.getLoopbackAddress().getHostAddress();
-        }
-        queryParams.add("host", host);
-        return c.getStringValue("/snitch/datacenter", queryParams, 10000);
+        MultivaluedMap<String, String> queryParams = host != null ? new MultivaluedHashMap<String, String>(
+                singletonMap("host", InetAddress.getByName(host).getHostAddress())) : null;
+        return client.getStringValue("/snitch/datacenter", queryParams, 10000);
     }
     /**
@@ -98,7 +78,16 @@ public class EndpointSnitchInfo implements EndpointSnitchInfoMBean {
     @Override
     public String getSnitchName() {
         log(" getSnitchName()");
-        return c.getStringValue("/snitch/name");
+        return client.getStringValue("/snitch/name");
     }
+    @Override
+    public String getRack() {
+        return client.getStringValue("/snitch/rack", null, 10000);
+    }
+    @Override
+    public String getDatacenter() {
+        return client.getStringValue("/snitch/datacenter", null, 10000);
+    }
 }

[file: org/apache/cassandra/locator/EndpointSnitchInfoMBean.java]

@@ -22,25 +22,40 @@ import java.net.UnknownHostException;
 /**
  * MBean exposing standard Snitch info
  */
-public interface EndpointSnitchInfoMBean
-{
+public interface EndpointSnitchInfoMBean {
     /**
-     * Provides the Rack name depending on the respective snitch used, given the host name/ip
+     * Provides the Rack name depending on the respective snitch used, given the
+     * host name/ip
+     *
      * @param host
      * @throws UnknownHostException
      */
     public String getRack(String host) throws UnknownHostException;
     /**
-     * Provides the Datacenter name depending on the respective snitch used, given the hostname/ip
+     * Provides the Datacenter name depending on the respective snitch used,
+     * given the hostname/ip
+     *
      * @param host
      * @throws UnknownHostException
      */
     public String getDatacenter(String host) throws UnknownHostException;
+    /**
+     * Provides the Rack name depending on the respective snitch used for this
+     * node
+     */
+    public String getRack();
+    /**
+     * Provides the Datacenter name depending on the respective snitch used for
+     * this node
+     */
+    public String getDatacenter();
     /**
      * Provides the snitch name of the cluster
+     *
      * @return Snitch name
      */
     public String getSnitchName();
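
The two new no-argument methods let a JMX client query the local node without first resolving its address. A minimal sketch of a remote caller; the object name is the MBEAN_NAME from the diff above, while the port (7199) is the conventional Cassandra/Scylla JMX default and an assumption here:

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import org.apache.cassandra.locator.EndpointSnitchInfoMBean;

public class SnitchQuery {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = jmxc.getMBeanServerConnection();
            EndpointSnitchInfoMBean snitch = JMX.newMBeanProxy(conn,
                    new ObjectName("org.apache.cassandra.db:type=EndpointSnitchInfo"),
                    EndpointSnitchInfoMBean.class);
            // The no-argument variants target the local node:
            System.out.println(snitch.getDatacenter() + " / " + snitch.getRack());
        }
    }
}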

[file: org/apache/cassandra/metrics/CASClientRequestMetrics.java]

@@ -25,34 +25,20 @@
 package org.apache.cassandra.metrics;
-import com.cloudius.urchin.metrics.APIMetrics;
-import com.yammer.metrics.core.*;
+import javax.management.MalformedObjectNameException;
+// TODO: In StorageProxy
 public class CASClientRequestMetrics extends ClientRequestMetrics {
-    public final Histogram contention;
-    /* Used only for write */
-    public final Counter conditionNotMet;
-    public final Counter unfinishedCommit;
-    public CASClientRequestMetrics(String url, String scope) {
-        super(url, scope);
-        contention = APIMetrics.newHistogram(url + "contention",
-                factory.createMetricName("ContentionHistogram"), true);
-        conditionNotMet = APIMetrics.newCounter(url + "condition_not_met",
-                factory.createMetricName("ConditionNotMet"));
-        unfinishedCommit = APIMetrics.newCounter(url + "unfinished_commit",
-                factory.createMetricName("UnfinishedCommit"));
+    public CASClientRequestMetrics(String scope, String url) {
+        super(scope, url);
     }
-    public void release() {
-        super.release();
-        APIMetrics.defaultRegistry().removeMetric(
-                factory.createMetricName("ContentionHistogram"));
-        APIMetrics.defaultRegistry().removeMetric(
-                factory.createMetricName("ConditionNotMet"));
-        APIMetrics.defaultRegistry().removeMetric(
-                factory.createMetricName("UnfinishedCommit"));
+    @Override
+    public void register(MetricsRegistry registry) throws MalformedObjectNameException {
+        super.register(registry);
+        registry.register(() -> registry.histogram(uri + "/contention", true), names("ContentionHistogram"));
+        registry.register(() -> registry.counter(uri + "/condition_not_met"), names("ConditionNotMet"));
+        registry.register(() -> registry.counter(uri + "/unfinished_commit"), names("UnfinishedCommit"));
     }
 }

[file: org/apache/cassandra/metrics/CacheMetrics.java]

@@ -23,32 +23,19 @@
  */
 package org.apache.cassandra.metrics;
-import java.util.concurrent.TimeUnit;
-import com.cloudius.urchin.api.APIClient;
-import com.cloudius.urchin.metrics.APIMetrics;
-import com.cloudius.urchin.metrics.DefaultNameFactory;
-import com.cloudius.urchin.metrics.MetricNameFactory;
-import com.yammer.metrics.core.Gauge;
-import com.yammer.metrics.core.Meter;
+import javax.management.MalformedObjectNameException;
 /**
  * Metrics for {@code ICache}.
  */
-public class CacheMetrics {
-    /** Cache capacity in bytes */
-    public final Gauge<Long> capacity;
-    /** Total number of cache hits */
-    public final Meter hits;
-    /** Total number of cache requests */
-    public final Meter requests;
-    /** cache hit rate */
-    public final Gauge<Double> hitRate;
-    /** Total size of cache, in bytes */
-    public final Gauge<Long> size;
-    /** Total number of cache entries */
-    public final Gauge<Integer> entries;
-    private APIClient c = new APIClient();
+public class CacheMetrics implements Metrics {
+    private final String type;
+    private final String url;
+    private String compose(String value) {
+        return "/cache_service/metrics/" + url + "/" + value;
+    }
     /**
      * Create metrics for given cache.
@@ -59,48 +46,21 @@
      *            Cache to measure metrics
      */
     public CacheMetrics(String type, final String url) {
+        this.type = type;
+        this.url = url;
+    }
+    @Override
+    public void register(MetricsRegistry registry) throws MalformedObjectNameException {
         MetricNameFactory factory = new DefaultNameFactory("Cache", type);
-        capacity = APIMetrics.newGauge(factory.createMetricName("Capacity"),
-                new Gauge<Long>() {
-                    public Long value() {
-                        return c.getLongValue("/cache_service/metrics/" + url
-                                + "/capacity");
-                    }
-                });
-        hits = APIMetrics.newMeter("/cache_service/metrics/" + url
-                + "/hits", factory.createMetricName("Hits"), "hits",
-                TimeUnit.SECONDS);
-        requests = APIMetrics.newMeter("/cache_service/metrics/" + url
-                + "/requests", factory.createMetricName("Requests"),
-                "requests", TimeUnit.SECONDS);
-        hitRate = APIMetrics.newGauge(factory.createMetricName("HitRate"),
-                new Gauge<Double>() {
-                    @Override
-                    public Double value() {
-                        return c.getDoubleValue("/cache_service/metrics/" + url
-                                + "/hit_rate");
-                    }
-                });
-        size = APIMetrics.newGauge(factory.createMetricName("Size"),
-                new Gauge<Long>() {
-                    public Long value() {
-                        return c.getLongValue("/cache_service/metrics/" + url
-                                + "/size");
-                    }
-                });
-        entries = APIMetrics.newGauge(factory.createMetricName("Entries"),
-                new Gauge<Integer>() {
-                    public Integer value() {
-                        return c.getIntValue("/cache_service/metrics/" + url
-                                + "/entries");
-                    }
-                });
-    }
-    // for backward compatibility
-    @Deprecated
-    public double getRecentHitRate() {
-        return 0;
+        registry.register(() -> registry.gauge(compose("capacity")), factory.createMetricName("Capacity"));
+        registry.register(() -> registry.meter(compose("hits_moving_avrage")), factory.createMetricName("Hits"));
+        registry.register(() -> registry.meter(compose("requests_moving_avrage")),
+                factory.createMetricName("Requests"));
+        registry.register(() -> registry.gauge(Double.class, compose("hit_rate")), factory.createMetricName("HitRate"));
+        registry.register(() -> registry.gauge(compose("size")), factory.createMetricName("Size"));
+        registry.register(() -> registry.gauge(Integer.class, compose("entries")), factory.createMetricName("Entries"));
     }
 }
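
Under the rewrite a metrics object is inert until register() is called against a MetricsRegistry (defined at the end of this changeset). A wiring sketch; the "key_cache" path fragment is an assumption about the Scylla REST API, not taken from this diff:

import javax.management.MalformedObjectNameException;

public final class CacheMetricsExample {
    public static void registerKeyCache(MetricsRegistry registry) throws MalformedObjectNameException {
        CacheMetrics keyCache = new CacheMetrics("KeyCache", "key_cache");
        // Binds the Capacity, Hits, Requests, HitRate, Size and Entries metrics.
        keyCache.register(registry);
    }
}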

[file: org/apache/cassandra/metrics/ClientRequestMetrics.java]

@@ -27,51 +27,17 @@
 package org.apache.cassandra.metrics;
-import java.util.concurrent.TimeUnit;
-import com.cloudius.urchin.metrics.APIMetrics;
-import com.cloudius.urchin.metrics.DefaultNameFactory;
-import com.yammer.metrics.Metrics;
-import com.yammer.metrics.core.Counter;
-import com.yammer.metrics.core.Meter;
+import javax.management.MalformedObjectNameException;
 public class ClientRequestMetrics extends LatencyMetrics {
-    @Deprecated
-    public static final Counter readTimeouts = Metrics
-            .newCounter(DefaultNameFactory.createMetricName(
-                    "ClientRequestMetrics", "ReadTimeouts", null));
-    @Deprecated
-    public static final Counter writeTimeouts = Metrics
-            .newCounter(DefaultNameFactory.createMetricName(
-                    "ClientRequestMetrics", "WriteTimeouts", null));
-    @Deprecated
-    public static final Counter readUnavailables = Metrics
-            .newCounter(DefaultNameFactory.createMetricName(
-                    "ClientRequestMetrics", "ReadUnavailables", null));
-    @Deprecated
-    public static final Counter writeUnavailables = Metrics
-            .newCounter(DefaultNameFactory.createMetricName(
-                    "ClientRequestMetrics", "WriteUnavailables", null));
-    public final Meter timeouts;
-    public final Meter unavailables;
-    public ClientRequestMetrics(String url, String scope) {
-        super(url, "ClientRequest", scope);
-        timeouts = APIMetrics.newMeter(url + "/timeouts",
-                factory.createMetricName("Timeouts"), "timeouts",
-                TimeUnit.SECONDS);
-        unavailables = APIMetrics.newMeter(url + "/unavailables",
-                factory.createMetricName("Unavailables"), "unavailables",
-                TimeUnit.SECONDS);
+    public ClientRequestMetrics(String scope, String url) {
+        super("ClientRequest", scope, url);
     }
-    public void release() {
-        super.release();
-        APIMetrics.defaultRegistry().removeMetric(
-                factory.createMetricName("Timeouts"));
-        APIMetrics.defaultRegistry().removeMetric(
-                factory.createMetricName("Unavailables"));
+    @Override
+    public void register(MetricsRegistry registry) throws MalformedObjectNameException {
+        super.register(registry);
+        registry.register(() -> registry.meter(uri + "/timeouts_rates"), names("Timeouts"));
+        registry.register(() -> registry.meter(uri + "/unavailables_rates"), names("Unavailables"));
     }
 }

[file: org/apache/cassandra/metrics/ColumnFamilyMetrics.java (deleted file)]

@@ -1,577 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright 2015 Cloudius Systems
*
* Modified by Cloudius Systems
*/
package org.apache.cassandra.metrics;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;
import org.apache.cassandra.db.ColumnFamilyStore;
import com.cloudius.urchin.api.APIClient;
import com.cloudius.urchin.metrics.APIMetrics;
import com.cloudius.urchin.metrics.MetricNameFactory;
import com.cloudius.urchin.utils.EstimatedHistogram;
import com.cloudius.urchin.utils.RecentEstimatedHistogram;
import com.google.common.collect.Maps;
import com.google.common.collect.Sets;
import com.yammer.metrics.Metrics;
import com.yammer.metrics.core.*;
/**
* Metrics for {@link ColumnFamilyStore}.
*/
public class ColumnFamilyMetrics {
private APIClient c = new APIClient();
/**
* Total amount of data stored in the memtable that resides on-heap,
* including column related overhead and overwritten rows.
*/
public final Gauge<Long> memtableOnHeapSize;
/**
* Total amount of data stored in the memtable that resides off-heap,
* including column related overhead and overwritten rows.
*/
public final Gauge<Long> memtableOffHeapSize;
/**
* Total amount of live data stored in the memtable, excluding any data
* structure overhead
*/
public final Gauge<Long> memtableLiveDataSize;
/**
* Total amount of data stored in the memtables (2i and pending flush
* memtables included) that resides on-heap.
*/
public final Gauge<Long> allMemtablesOnHeapSize;
/**
* Total amount of data stored in the memtables (2i and pending flush
* memtables included) that resides off-heap.
*/
public final Gauge<Long> allMemtablesOffHeapSize;
/**
* Total amount of live data stored in the memtables (2i and pending flush
* memtables included) that resides off-heap, excluding any data structure
* overhead
*/
public final Gauge<Long> allMemtablesLiveDataSize;
/** Total number of columns present in the memtable. */
public final Gauge<Long> memtableColumnsCount;
/** Number of times flush has resulted in the memtable being switched out. */
public final Counter memtableSwitchCount;
/** Current compression ratio for all SSTables */
public final Gauge<Double> compressionRatio;
/** Histogram of estimated row size (in bytes). */
public final Gauge<long[]> estimatedRowSizeHistogram;
/** Approximate number of keys in table. */
public final Gauge<Long> estimatedRowCount;
/** Histogram of estimated number of columns. */
public final Gauge<long[]> estimatedColumnCountHistogram;
/** Histogram of the number of sstable data files accessed per read */
public final ColumnFamilyHistogram sstablesPerReadHistogram;
/** (Local) read metrics */
public final LatencyMetrics readLatency;
/** (Local) range slice metrics */
public final LatencyMetrics rangeLatency;
/** (Local) write metrics */
public final LatencyMetrics writeLatency;
/** Estimated number of tasks pending for this column family */
public final Counter pendingFlushes;
/** Estimate of number of pending compactions for this CF */
public final Gauge<Integer> pendingCompactions;
/** Number of SSTables on disk for this CF */
public final Gauge<Integer> liveSSTableCount;
/** Disk space used by SSTables belonging to this CF */
public final Counter liveDiskSpaceUsed;
/**
* Total disk space used by SSTables belonging to this CF, including
* obsolete ones waiting to be GC'd
*/
public final Counter totalDiskSpaceUsed;
/** Size of the smallest compacted row */
public final Gauge<Long> minRowSize;
/** Size of the largest compacted row */
public final Gauge<Long> maxRowSize;
/** Size of the smallest compacted row */
public final Gauge<Long> meanRowSize;
/** Number of false positives in bloom filter */
public final Gauge<Long> bloomFilterFalsePositives;
/** Number of false positives in bloom filter from last read */
public final Gauge<Long> recentBloomFilterFalsePositives;
/** False positive ratio of bloom filter */
public final Gauge<Double> bloomFilterFalseRatio;
/** False positive ratio of bloom filter from last read */
public final Gauge<Double> recentBloomFilterFalseRatio;
/** Disk space used by bloom filter */
public final Gauge<Long> bloomFilterDiskSpaceUsed;
/** Off heap memory used by bloom filter */
public final Gauge<Long> bloomFilterOffHeapMemoryUsed;
/** Off heap memory used by index summary */
public final Gauge<Long> indexSummaryOffHeapMemoryUsed;
/** Off heap memory used by compression meta data */
public final Gauge<Long> compressionMetadataOffHeapMemoryUsed;
/** Key cache hit rate for this CF */
public final Gauge<Double> keyCacheHitRate;
/** Tombstones scanned in queries on this CF */
public final ColumnFamilyHistogram tombstoneScannedHistogram;
/** Live cells scanned in queries on this CF */
public final ColumnFamilyHistogram liveScannedHistogram;
/** Column update time delta on this CF */
public final ColumnFamilyHistogram colUpdateTimeDeltaHistogram;
/** Disk space used by snapshot files which */
public final Gauge<Long> trueSnapshotsSize;
/** Row cache hits, but result out of range */
public final Counter rowCacheHitOutOfRange;
/** Number of row cache hits */
public final Counter rowCacheHit;
/** Number of row cache misses */
public final Counter rowCacheMiss;
/** CAS Prepare metrics */
public final LatencyMetrics casPrepare;
/** CAS Propose metrics */
public final LatencyMetrics casPropose;
/** CAS Commit metrics */
public final LatencyMetrics casCommit;
public final Timer coordinatorReadLatency;
public final Timer coordinatorScanLatency;
/** Time spent waiting for free memtable space, either on- or off-heap */
public final Timer waitingOnFreeMemtableSpace;
private final MetricNameFactory factory;
private static final MetricNameFactory globalNameFactory = new AllColumnFamilyMetricNameFactory();
public final Counter speculativeRetries;
// for backward compatibility
@Deprecated
public final EstimatedHistogramWrapper sstablesPerRead;
// it should not be called directly
@Deprecated
protected final RecentEstimatedHistogram recentSSTablesPerRead = new RecentEstimatedHistogram(35);
private String cfName;
public final static LatencyMetrics globalReadLatency = new LatencyMetrics(
"/column_family/metrics/read_latency", globalNameFactory, "Read");
public final static LatencyMetrics globalWriteLatency = new LatencyMetrics(
"/column_family/metrics/write_latency", globalNameFactory, "Write");
public final static LatencyMetrics globalRangeLatency = new LatencyMetrics(
"/column_family/metrics/range_latency", globalNameFactory, "Range");
/**
* stores metrics that will be rolled into a single global metric
*/
public final static ConcurrentMap<String, Set<Metric>> allColumnFamilyMetrics = Maps
.newConcurrentMap();
/**
* Stores all metric names created that can be used when unregistering
*/
public final static Set<String> all = Sets.newHashSet();
/**
* Creates metrics for given {@link ColumnFamilyStore}.
*
* @param cfs
* ColumnFamilyStore to measure metrics
*/
public ColumnFamilyMetrics(final ColumnFamilyStore cfs) {
factory = new ColumnFamilyMetricNameFactory(cfs);
cfName = cfs.getCFName();
memtableColumnsCount = createColumnFamilyGauge(
"/column_family/metrics/memtable_columns_count",
"MemtableColumnsCount");
memtableOnHeapSize = createColumnFamilyGauge(
"/column_family/metrics/memtable_on_heap_size",
"MemtableOnHeapSize");
memtableOffHeapSize = createColumnFamilyGauge(
"/column_family/metrics/memtable_off_heap_size",
"MemtableOffHeapSize");
memtableLiveDataSize = createColumnFamilyGauge(
"/column_family/metrics/memtable_live_data_size",
"MemtableLiveDataSize");
allMemtablesOnHeapSize = createColumnFamilyGauge(
"/column_family/metrics/all_memtables_on_heap_size",
"AllMemtablesHeapSize");
allMemtablesOffHeapSize = createColumnFamilyGauge(
"/column_family/metrics/all_memtables_off_heap_size",
"AllMemtablesOffHeapSize");
allMemtablesLiveDataSize = createColumnFamilyGauge(
"/column_family/metrics/all_memtables_live_data_size",
"AllMemtablesLiveDataSize");
memtableSwitchCount = createColumnFamilyCounter(
"/column_family/metrics/memtable_switch_count",
"MemtableSwitchCount");
estimatedRowSizeHistogram = Metrics.newGauge(
factory.createMetricName("EstimatedRowSizeHistogram"),
new Gauge<long[]>() {
public long[] value() {
return c.getEstimatedHistogramAsLongArrValue("/column_family/metrics/estimated_row_size_histogram/"
+ cfName);
}
});
estimatedRowCount= Metrics.newGauge(
factory.createMetricName("EstimatedRowCount"),
new Gauge<Long>() {
public Long value() {
return c.getLongValue("/column_family/metrics/estimated_row_count/"
+ cfName);
}
});
estimatedColumnCountHistogram = Metrics.newGauge(
factory.createMetricName("EstimatedColumnCountHistogram"),
new Gauge<long[]>() {
public long[] value() {
return c.getEstimatedHistogramAsLongArrValue("/column_family/metrics/estimated_column_count_histogram/"
+ cfName);
}
});
sstablesPerReadHistogram = createColumnFamilyHistogram(
"/column_family/metrics/sstables_per_read_histogram",
"SSTablesPerReadHistogram");
compressionRatio = createColumnFamilyGauge("CompressionRatio",
new Gauge<Double>() {
public Double value() {
return c.getDoubleValue("/column_family/metrics/compression_ratio/"
+ cfName);
}
}, new Gauge<Double>() // global gauge
{
public Double value() {
return c.getDoubleValue("/column_family/metrics/compression_ratio/");
}
});
readLatency = new LatencyMetrics("/column_family/metrics/read_latency",
cfName, factory, "Read");
writeLatency = new LatencyMetrics(
"/column_family/metrics/write_latency", cfName, factory,
"Write");
rangeLatency = new LatencyMetrics(
"/column_family/metrics/range_latency", cfName, factory,
"Range");
pendingFlushes = createColumnFamilyCounter(
"/column_family/metrics/pending_flushes", "PendingFlushes");
pendingCompactions = createColumnFamilyGaugeInt(
"/column_family/metrics/pending_compactions",
"PendingCompactions");
liveSSTableCount = createColumnFamilyGaugeInt(
"/column_family/metrics/live_ss_table_count",
"LiveSSTableCount");
liveDiskSpaceUsed = createColumnFamilyCounter(
"/column_family/metrics/live_disk_space_used",
"LiveDiskSpaceUsed");
totalDiskSpaceUsed = createColumnFamilyCounter(
"/column_family/metrics/total_disk_space_used",
"TotalDiskSpaceUsed");
minRowSize = createColumnFamilyGauge(
"/column_family/metrics/min_row_size", "MinRowSize");
maxRowSize = createColumnFamilyGauge(
"/column_family/metrics/max_row_size", "MaxRowSize");
meanRowSize = createColumnFamilyGauge(
"/column_family/metrics/mean_row_size", "MeanRowSize");
bloomFilterFalsePositives = createColumnFamilyGauge(
"/column_family/metrics/bloom_filter_false_positives",
"BloomFilterFalsePositives");
recentBloomFilterFalsePositives = createColumnFamilyGauge(
"/column_family/metrics/recent_bloom_filter_false_positives",
"RecentBloomFilterFalsePositives");
bloomFilterFalseRatio = createColumnFamilyGaugeDouble(
"/column_family/metrics/bloom_filter_false_ratio",
"BloomFilterFalseRatio");
recentBloomFilterFalseRatio = createColumnFamilyGaugeDouble(
"/column_family/metrics/recent_bloom_filter_false_ratio",
"RecentBloomFilterFalseRatio");
bloomFilterDiskSpaceUsed = createColumnFamilyGauge(
"/column_family/metrics/bloom_filter_disk_space_used",
"BloomFilterDiskSpaceUsed");
bloomFilterOffHeapMemoryUsed = createColumnFamilyGauge(
"/column_family/metrics/bloom_filter_off_heap_memory_used",
"BloomFilterOffHeapMemoryUsed");
indexSummaryOffHeapMemoryUsed = createColumnFamilyGauge(
"/column_family/metrics/index_summary_off_heap_memory_used",
"IndexSummaryOffHeapMemoryUsed");
compressionMetadataOffHeapMemoryUsed = createColumnFamilyGauge(
"/column_family/metrics/compression_metadata_off_heap_memory_used",
"CompressionMetadataOffHeapMemoryUsed");
speculativeRetries = createColumnFamilyCounter(
"/column_family/metrics/speculative_retries",
"SpeculativeRetries");
keyCacheHitRate = Metrics.newGauge(
factory.createMetricName("KeyCacheHitRate"),
new Gauge<Double>() {
@Override
public Double value() {
return c.getDoubleValue("/column_family/metrics/key_cache_hit_rate/"
+ cfName);
}
});
tombstoneScannedHistogram = createColumnFamilyHistogram(
"/column_family/metrics/tombstone_scanned_histogram",
"TombstoneScannedHistogram");
liveScannedHistogram = createColumnFamilyHistogram(
"/column_family/metrics/live_scanned_histogram",
"LiveScannedHistogram");
colUpdateTimeDeltaHistogram = createColumnFamilyHistogram(
"/column_family/metrics/col_update_time_delta_histogram",
"ColUpdateTimeDeltaHistogram");
coordinatorReadLatency = APIMetrics.newTimer("/column_family/metrics/coordinator/read/" + cfName,
factory.createMetricName("CoordinatorReadLatency"),
TimeUnit.MICROSECONDS, TimeUnit.SECONDS);
coordinatorScanLatency = APIMetrics.newTimer("/column_family/metrics/coordinator/scan/" + cfName,
factory.createMetricName("CoordinatorScanLatency"),
TimeUnit.MICROSECONDS, TimeUnit.SECONDS);
waitingOnFreeMemtableSpace = APIMetrics.newTimer("/column_family/metrics/waiting_on_free_memtable/" + cfName,
factory.createMetricName("WaitingOnFreeMemtableSpace"),
TimeUnit.MICROSECONDS, TimeUnit.SECONDS);
trueSnapshotsSize = createColumnFamilyGauge(
"/column_family/metrics/true_snapshots_size", "SnapshotsSize");
rowCacheHitOutOfRange = createColumnFamilyCounter(
"/column_family/metrics/row_cache_hit_out_of_range",
"RowCacheHitOutOfRange");
rowCacheHit = createColumnFamilyCounter(
"/column_family/metrics/row_cache_hit", "RowCacheHit");
rowCacheMiss = createColumnFamilyCounter(
"/column_family/metrics/row_cache_miss", "RowCacheMiss");
casPrepare = new LatencyMetrics("/column_family/metrics/cas_prepare/"
+ cfName, factory, "CasPrepare");
casPropose = new LatencyMetrics("/column_family/metrics/cas_propose/"
+ cfName, factory, "CasPropose");
casCommit = new LatencyMetrics("/column_family/metrics/cas_commit/"
+ cfName, factory, "CasCommit");
sstablesPerRead = new EstimatedHistogramWrapper("/column_family/metrics/sstables_per_read_histogram/" + cfName);
}
/**
* Release all associated metrics.
*/
public void release() {
for (String name : all) {
allColumnFamilyMetrics.get(name).remove(
Metrics.defaultRegistry().allMetrics()
.get(factory.createMetricName(name)));
Metrics.defaultRegistry().removeMetric(
factory.createMetricName(name));
}
readLatency.release();
writeLatency.release();
rangeLatency.release();
Metrics.defaultRegistry().removeMetric(
factory.createMetricName("EstimatedRowSizeHistogram"));
Metrics.defaultRegistry().removeMetric(
factory.createMetricName("EstimatedColumnCountHistogram"));
Metrics.defaultRegistry().removeMetric(
factory.createMetricName("KeyCacheHitRate"));
Metrics.defaultRegistry().removeMetric(
factory.createMetricName("CoordinatorReadLatency"));
Metrics.defaultRegistry().removeMetric(
factory.createMetricName("CoordinatorScanLatency"));
Metrics.defaultRegistry().removeMetric(
factory.createMetricName("WaitingOnFreeMemtableSpace"));
}
/**
* Create a gauge that will be part of a merged version of all column
* families. The global gauge will merge each CF gauge by adding their
* values
*/
protected Gauge<Double> createColumnFamilyGaugeDouble(final String url,
final String name) {
Gauge<Double> gauge = new Gauge<Double>() {
public Double value() {
return c.getDoubleValue(url + "/" + cfName);
}
};
return createColumnFamilyGauge(url, name, gauge);
}
/**
* Create a gauge that will be part of a merged version of all column
* families. The global gauge will merge each CF gauge by adding their
* values
*/
protected Gauge<Long> createColumnFamilyGauge(final String url, final String name) {
Gauge<Long> gauge = new Gauge<Long>() {
public Long value() {
return c.getLongValue(url + "/" + cfName);
}
};
return createColumnFamilyGauge(url, name, gauge);
}
/**
* Create a gauge that will be part of a merged version of all column
* families. The global gauge will merge each CF gauge by adding their
* values
*/
protected Gauge<Integer> createColumnFamilyGaugeInt(final String url,
final String name) {
Gauge<Integer> gauge = new Gauge<Integer>() {
public Integer value() {
return c.getIntValue(url + "/" + cfName);
}
};
return createColumnFamilyGauge(url, name, gauge);
}
/**
* Create a gauge that will be part of a merged version of all column
* families. The global gauge will merge each CF gauge by adding their
* values
*/
protected <T extends Number> Gauge<T> createColumnFamilyGauge(final String url,
final String name, Gauge<T> gauge) {
return createColumnFamilyGauge(name, gauge, new Gauge<Long>() {
public Long value() {
// This is an optimization, call once for all column families
// instead of iterating over all of them
return c.getLongValue(url);
}
});
}
/**
* Create a gauge that will be part of a merged version of all column
* families. The global gauge is defined as the globalGauge parameter
*/
protected <G, T> Gauge<T> createColumnFamilyGauge(String name,
Gauge<T> gauge, Gauge<G> globalGauge) {
Gauge<T> cfGauge = APIMetrics.newGauge(factory.createMetricName(name),
gauge);
if (register(name, cfGauge)) {
Metrics.newGauge(globalNameFactory.createMetricName(name),
globalGauge);
}
return cfGauge;
}
/**
* Creates a counter that will also have a global counter thats the sum of
* all counters across different column families
*/
protected Counter createColumnFamilyCounter(final String url, final String name) {
Counter cfCounter = APIMetrics.newCounter(url + "/" + cfName,
factory.createMetricName(name));
if (register(name, cfCounter)) {
Metrics.newGauge(globalNameFactory.createMetricName(name),
new Gauge<Long>() {
public Long value() {
// This is an optimization, call once for all column
// families instead of iterating over all of them
return c.getLongValue(url);
}
});
}
return cfCounter;
}
/**
* Create a histogram-like interface that will register both a CF, keyspace
* and global level histogram and forward any updates to both
*/
protected ColumnFamilyHistogram createColumnFamilyHistogram(String url,
String name) {
Histogram cfHistogram = APIMetrics.newHistogram(url + "/" + cfName,
factory.createMetricName(name), true);
register(name, cfHistogram);
// TBD add keyspace and global histograms
// keyspaceHistogram,
// Metrics.newHistogram(globalNameFactory.createMetricName(name),
// true));
return new ColumnFamilyHistogram(cfHistogram, null, null);
}
/**
* Registers a metric to be removed when unloading CF.
*
* @return true if first time metric with that name has been registered
*/
private boolean register(String name, Metric metric) {
boolean ret = allColumnFamilyMetrics.putIfAbsent(name,
new HashSet<Metric>()) == null;
allColumnFamilyMetrics.get(name).add(metric);
all.add(name);
return ret;
}
public long[] getRecentSSTablesPerRead() {
return recentSSTablesPerRead
.getBuckets(sstablesPerRead.getBuckets(false));
}
public class ColumnFamilyHistogram {
public final Histogram[] all;
public final Histogram cf;
private ColumnFamilyHistogram(Histogram cf, Histogram keyspace,
Histogram global) {
this.cf = cf;
this.all = new Histogram[] { cf, keyspace, global };
}
}
class ColumnFamilyMetricNameFactory implements MetricNameFactory {
private final String keyspaceName;
private final String columnFamilyName;
private final boolean isIndex;
ColumnFamilyMetricNameFactory(ColumnFamilyStore cfs) {
this.keyspaceName = cfs.getKeyspace();
this.columnFamilyName = cfs.getColumnFamilyName();
isIndex = cfs.isIndex();
}
public MetricName createMetricName(String metricName) {
String groupName = ColumnFamilyMetrics.class.getPackage().getName();
String type = isIndex ? "IndexColumnFamily" : "ColumnFamily";
StringBuilder mbeanName = new StringBuilder();
mbeanName.append(groupName).append(":");
mbeanName.append("type=").append(type);
mbeanName.append(",keyspace=").append(keyspaceName);
mbeanName.append(",scope=").append(columnFamilyName);
mbeanName.append(",name=").append(metricName);
return new MetricName(groupName, type, metricName, keyspaceName
+ "." + columnFamilyName, mbeanName.toString());
}
}
static class AllColumnFamilyMetricNameFactory implements MetricNameFactory {
public MetricName createMetricName(String metricName) {
String groupName = ColumnFamilyMetrics.class.getPackage().getName();
StringBuilder mbeanName = new StringBuilder();
mbeanName.append(groupName).append(":");
mbeanName.append("type=ColumnFamily");
mbeanName.append(",name=").append(metricName);
return new MetricName(groupName, "ColumnFamily", metricName, "all",
mbeanName.toString());
}
}
}

[file: org/apache/cassandra/metrics/CommitLogMetrics.java]

@@ -23,65 +23,38 @@
  */
 package org.apache.cassandra.metrics;
-import java.util.concurrent.TimeUnit;
-import com.cloudius.urchin.api.APIClient;
-import com.cloudius.urchin.metrics.APIMetrics;
-import com.cloudius.urchin.metrics.DefaultNameFactory;
-import com.cloudius.urchin.metrics.MetricNameFactory;
-import com.yammer.metrics.core.Gauge;
-import com.yammer.metrics.core.Timer;
+import javax.management.MalformedObjectNameException;
 /**
  * Metrics for commit log
  */
-public class CommitLogMetrics {
-    public static final MetricNameFactory factory = new DefaultNameFactory(
-            "CommitLog");
-    private APIClient c = new APIClient();
-    /** Number of completed tasks */
-    public final Gauge<Long> completedTasks;
-    /** Number of pending tasks */
-    public final Gauge<Long> pendingTasks;
-    /** Current size used by all the commit log segments */
-    public final Gauge<Long> totalCommitLogSize;
-    /**
-     * Time spent waiting for a CLS to be allocated - under normal conditions
-     * this should be zero
-     */
-    public final Timer waitingOnSegmentAllocation;
-    /**
-     * The time spent waiting on CL sync; for Periodic this only occurs when
-     * the sync is lagging its sync interval
-     */
-    public final Timer waitingOnCommit;
+public class CommitLogMetrics implements Metrics {
     public CommitLogMetrics() {
-        completedTasks = APIMetrics.newGauge(
-                factory.createMetricName("CompletedTasks"), new Gauge<Long>() {
-                    public Long value() {
-                        return c.getLongValue("/commitlog/metrics/completed_tasks");
-                    }
-                });
-        pendingTasks = APIMetrics.newGauge(
-                factory.createMetricName("PendingTasks"), new Gauge<Long>() {
-                    public Long value() {
-                        return c.getLongValue("/commitlog/metrics/pending_tasks");
-                    }
-                });
-        totalCommitLogSize = APIMetrics.newGauge(
-                factory.createMetricName("TotalCommitLogSize"),
-                new Gauge<Long>() {
-                    public Long value() {
-                        return c.getLongValue("/commitlog/metrics/total_commit_log_size");
-                    }
-                });
-        waitingOnSegmentAllocation = APIMetrics.newTimer("/commit_log/metrics/waiting_on_segment_allocation",
-                factory.createMetricName("WaitingOnSegmentAllocation"),
-                TimeUnit.MICROSECONDS, TimeUnit.SECONDS);
-        waitingOnCommit = APIMetrics.newTimer("/commit_log/metrics/waiting_on_commit",
-                factory.createMetricName("WaitingOnCommit"),
-                TimeUnit.MICROSECONDS, TimeUnit.SECONDS);
+    }
+    @Override
+    public void register(MetricsRegistry registry) throws MalformedObjectNameException {
+        MetricNameFactory factory = new DefaultNameFactory("CommitLog");
+        /** Number of completed tasks */
+        registry.register(() -> registry.gauge("/commitlog/metrics/completed_tasks"),
+                factory.createMetricName("CompletedTasks"));
+        /** Number of pending tasks */
+        registry.register(() -> registry.gauge("/commitlog/metrics/pending_tasks"),
+                factory.createMetricName("PendingTasks"));
+        /** Current size used by all the commit log segments */
+        registry.register(() -> registry.gauge("/commitlog/metrics/total_commit_log_size"),
+                factory.createMetricName("TotalCommitLogSize"));
+        /**
+         * Time spent waiting for a CLS to be allocated - under normal
+         * conditions this should be zero
+         */
+        registry.register(() -> registry.timer("/commitlog/metrics/waiting_on_segment_allocation"),
+                factory.createMetricName("WaitingOnSegmentAllocation"));
+        /**
+         * The time spent waiting on CL sync; for Periodic this only occurs
+         * when the sync is lagging its sync interval
+         */
+        registry.register(() -> registry.timer("/commitlog/metrics/waiting_on_commit"),
+                factory.createMetricName("WaitingOnCommit"));
     }
 }

[file: org/apache/cassandra/metrics/CompactionMetrics.java]

@@ -18,57 +18,59 @@
 /*
  * Copyright 2015 Cloudius Systems
  *
  * Modified by Cloudius Systems
  */
 package org.apache.cassandra.metrics;
-import java.util.concurrent.TimeUnit;
-import com.cloudius.urchin.api.APIClient;
-import com.cloudius.urchin.metrics.APIMetrics;
-import com.cloudius.urchin.metrics.DefaultNameFactory;
-import com.cloudius.urchin.metrics.MetricNameFactory;
-import com.yammer.metrics.core.Counter;
-import com.yammer.metrics.core.Gauge;
-import com.yammer.metrics.core.Meter;
+import jakarta.json.JsonArray;
+import jakarta.json.JsonObject;
+import java.util.HashMap;
+import java.util.Map;
+import javax.management.MalformedObjectNameException;
 /**
  * Metrics for compaction.
  */
-public class CompactionMetrics {
-    public static final MetricNameFactory factory = new DefaultNameFactory(
-            "Compaction");
-    private APIClient c = new APIClient();
-    /** Estimated number of compactions remaining to perform */
-    public final Gauge<Integer> pendingTasks;
-    /** Number of completed compactions since server [re]start */
-    public final Gauge<Long> completedTasks;
-    /** Total number of compactions since server [re]start */
-    public final Meter totalCompactionsCompleted;
-    /** Total number of bytes compacted since server [re]start */
-    public final Counter bytesCompacted;
+public class CompactionMetrics implements Metrics {
     public CompactionMetrics() {
-        pendingTasks = APIMetrics.newGauge(
-                factory.createMetricName("PendingTasks"), new Gauge<Integer>() {
-                    public Integer value() {
-                        return c.getIntValue("/compaction_manager/metrics/pending_tasks");
-                    }
-                });
-        completedTasks = APIMetrics.newGauge(
-                factory.createMetricName("CompletedTasks"), new Gauge<Long>() {
-                    public Long value() {
-                        return c.getLongValue("/compaction_manager/metrics/completed_tasks");
-                    }
-                });
-        totalCompactionsCompleted = APIMetrics.newMeter(
-                "/compaction_manager/metrics/total_compactions_completed",
-                factory.createMetricName("TotalCompactionsCompleted"),
-                "compaction completed", TimeUnit.SECONDS);
-        bytesCompacted = APIMetrics.newCounter(
-                "/compaction_manager/metrics/bytes_compacted",
-                factory.createMetricName("BytesCompacted"));
+    }
+    @Override
+    public void register(MetricsRegistry registry) throws MalformedObjectNameException {
+        MetricNameFactory factory = new DefaultNameFactory("Compaction");
+        /** Estimated number of compactions remaining to perform */
+        registry.register(() -> registry.gauge(Integer.class, "/compaction_manager/metrics/pending_tasks"),
+                factory.createMetricName("PendingTasks"));
+        /** Number of completed compactions since server [re]start */
+        registry.register(() -> registry.gauge("/compaction_manager/metrics/completed_tasks"),
+                factory.createMetricName("CompletedTasks"));
+        /** Total number of compactions since server [re]start */
+        registry.register(() -> registry.meter("/compaction_manager/metrics/total_compactions_completed"),
+                factory.createMetricName("TotalCompactionsCompleted"));
+        /** Total number of bytes compacted since server [re]start */
+        registry.register(() -> registry.meter("/compaction_manager/metrics/bytes_compacted"),
+                factory.createMetricName("BytesCompacted"));
+        registry.register(() -> registry.gauge((client) -> {
+            Map<String, Map<String, Integer>> result = new HashMap<>();
+            JsonArray compactions = client.getJsonArray("compaction_manager/metrics/pending_tasks_by_table");
+            for (int i = 0; i < compactions.size(); i++) {
+                JsonObject c = compactions.getJsonObject(i);
+                String ks = c.getString("ks");
+                String cf = c.getString("cf");
+                if (!result.containsKey(ks)) {
+                    result.put(ks, new HashMap<>());
+                }
+                Map<String, Integer> map = result.get(ks);
+                map.put(cf, (int) (c.getJsonNumber("task").longValue()));
+            }
+            return result;
+        }), factory.createMetricName("PendingTasksByTableName"));
     }
 }
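
For reference, the payload shape the PendingTasksByTableName lambda above expects, with made-up example values:

// GET compaction_manager/metrics/pending_tasks_by_table ->
//   [{"ks": "ks1", "cf": "t1", "task": 3}, {"ks": "ks1", "cf": "t2", "task": 1}]
// is folded into the nested map {ks1={t1=3, t2=1}} and exposed as a single gauge.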

[file: org/apache/cassandra/metrics/DefaultNameFactory.java (moved from com/cloudius/urchin/metrics)]

@@ -15,15 +15,10 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-/*
- * Copyright 2015 Cloudius Systems
- *
- * Modified by Cloudius Systems
- */
-package com.cloudius.urchin.metrics;
-import com.yammer.metrics.core.MetricName;
+package org.apache.cassandra.metrics;
+import javax.management.MalformedObjectNameException;
+import javax.management.ObjectName;
 /**
  * MetricNameFactory that generates default MetricName of metrics.
@@ -43,19 +38,14 @@ public class DefaultNameFactory implements MetricNameFactory {
         this.scope = scope;
     }
-    public MetricName createMetricName(String metricName) {
+    @Override
+    public ObjectName createMetricName(String metricName) throws MalformedObjectNameException {
         return createMetricName(type, metricName, scope);
     }
-    public static MetricName createMetricName(String type, String metricName,
-            String scope) {
-        return new MetricName(GROUP_NAME, type, metricName, scope,
-                createDefaultMBeanName(type, metricName, scope));
-    }
-    protected static String createDefaultMBeanName(String type, String name,
-            String scope) {
-        final StringBuilder nameBuilder = new StringBuilder();
+    public static ObjectName createMetricName(String type, String name, String scope)
+            throws MalformedObjectNameException {
+        StringBuilder nameBuilder = new StringBuilder();
         nameBuilder.append(GROUP_NAME);
         nameBuilder.append(":type=");
         nameBuilder.append(type);
@@ -67,6 +57,6 @@ public class DefaultNameFactory implements MetricNameFactory {
             nameBuilder.append(",name=");
             nameBuilder.append(name);
         }
-        return nameBuilder.toString();
+        return new ObjectName(nameBuilder.toString());
     }
 }
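
An example of the ObjectName the rewritten factory builds, assuming GROUP_NAME keeps its conventional Cassandra value org.apache.cassandra.metrics (the scope-handling lines between the two hunks are unchanged and not shown here):

ObjectName name = DefaultNameFactory.createMetricName("Cache", "Capacity", "KeyCache");
// -> org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Capacity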

[file: org/apache/cassandra/metrics/DroppedMessageMetrics.java (new file)]

@@ -0,0 +1,50 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2015 ScyllaDB
*/
/*
* Modified by ScyllaDB
*/
package org.apache.cassandra.metrics;

import javax.management.MalformedObjectNameException;

import org.apache.cassandra.net.MessagingService;

/**
 * Metrics for dropped messages by verb.
 */
public class DroppedMessageMetrics implements Metrics {
    private final MessagingService.Verb verb;

    public DroppedMessageMetrics(MessagingService.Verb verb) {
        this.verb = verb;
    }

    @Override
    public void register(MetricsRegistry registry) throws MalformedObjectNameException {
        MetricNameFactory factory = new DefaultNameFactory("DroppedMessage", verb.toString());
        /** Number of dropped messages */
        // TODO: this API url does not exist. Add meter calls for verbs.
        registry.register(() -> registry.meter("/messaging_service/messages/dropped/" + verb),
                factory.createMetricName("Dropped"));
    }
}

[file: org/apache/cassandra/metrics/EstimatedHistogramWrapper.java (deleted file)]

@@ -1,55 +0,0 @@
package org.apache.cassandra.metrics;
/*
* Copyright (C) 2015 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
import javax.ws.rs.core.MultivaluedMap;
import com.cloudius.urchin.api.APIClient;
import com.cloudius.urchin.utils.EstimatedHistogram;
public class EstimatedHistogramWrapper {
private APIClient c = new APIClient();
private String url;
private MultivaluedMap<String, String> queryParams;
private static final int DURATION = 50;
private int duration;
public EstimatedHistogramWrapper(String url, MultivaluedMap<String, String> queryParams, int duration) {
this.url = url;
this.queryParams = queryParams;
this.duration = duration;
}
public EstimatedHistogramWrapper(String url) {
this(url, null, DURATION);
}
public EstimatedHistogramWrapper(String url, int duration) {
this(url, null, duration);
}
public EstimatedHistogram get() {
return c.getEstimatedHistogram(url, queryParams, duration);
}
public long[] getBuckets(boolean reset) {
return get().getBuckets(reset);
}
}

[file: org/apache/cassandra/metrics/LatencyMetrics.java]

@@ -23,42 +23,19 @@
  */
 package org.apache.cassandra.metrics;
-import java.util.List;
-import java.util.concurrent.TimeUnit;
-import com.cloudius.urchin.metrics.APIMetrics;
-import com.cloudius.urchin.metrics.DefaultNameFactory;
-import com.cloudius.urchin.metrics.MetricNameFactory;
-import com.cloudius.urchin.utils.EstimatedHistogram;
-import com.cloudius.urchin.utils.RecentEstimatedHistogram;
-import com.google.common.collect.ImmutableList;
-import com.google.common.collect.Lists;
-import com.yammer.metrics.core.Counter;
-import com.yammer.metrics.core.Timer;
+import java.util.Arrays;
+import javax.management.MalformedObjectNameException;
+import javax.management.ObjectName;
 /**
  * Metrics about latencies
  */
-public class LatencyMetrics {
-    /** Latency */
-    public final Timer latency;
-    /** Total latency in micro sec */
-    public final Counter totalLatency;
-    /** parent metrics to replicate any updates to **/
-    private List<LatencyMetrics> parents = Lists.newArrayList();
-    protected final MetricNameFactory factory;
+public class LatencyMetrics implements Metrics {
+    protected final MetricNameFactory[] factories;
     protected final String namePrefix;
-    @Deprecated public EstimatedHistogramWrapper totalLatencyHistogram;
-    /*
-     * It should not be called directly, use the getRecentLatencyHistogram
-     */
-    @Deprecated protected final RecentEstimatedHistogram recentLatencyHistogram = new RecentEstimatedHistogram();
-    protected long lastLatency;
-    protected long lastOpCount;
+    protected final String uri;
+    protected final String param;
     /**
      * Create LatencyMetrics with given group, type, and scope. Name prefix for
@@ -69,8 +46,8 @@ public class LatencyMetrics {
      * @param scope
      *            Scope
      */
-    public LatencyMetrics(String url, String type, String scope) {
-        this(url, type, "", scope);
+    public LatencyMetrics(String type, String scope, String uri) {
+        this(type, "", scope, uri, null);
     }
     /**
@@ -84,88 +61,35 @@
      * @param scope
      *            Scope of metrics
      */
-    public LatencyMetrics(String url, String type, String namePrefix,
-            String scope) {
-        this(url, new DefaultNameFactory(type, scope), namePrefix);
+    public LatencyMetrics(String type, String namePrefix, String scope, String uri, String param) {
+        this(namePrefix, uri, param, new DefaultNameFactory(type, scope));
     }
-    /**
-     * Create LatencyMetrics with given group, type, prefix to append to each
-     * metric name, and scope.
-     *
-     * @param factory
-     *            MetricName factory to use
-     * @param namePrefix
-     *            Prefix to append to each metric name
-     */
-    public LatencyMetrics(String url, MetricNameFactory factory,
-            String namePrefix) {
-        this(url, null, factory, namePrefix);
+    public LatencyMetrics(String namePrefix, String uri, MetricNameFactory... factories) {
+        this(namePrefix, uri, null, factories);
     }
-    public LatencyMetrics(String url, String paramName,
-            MetricNameFactory factory, String namePrefix) {
-        this.factory = factory;
+    public LatencyMetrics(String namePrefix, String uri, String param, MetricNameFactory... factories) {
+        this.factories = factories;
         this.namePrefix = namePrefix;
-        paramName = (paramName == null)? "" : "/" + paramName;
-        latency = APIMetrics.newTimer(url + "/histogram" + paramName,
-                factory.createMetricName(namePrefix + "Latency"),
-                TimeUnit.MICROSECONDS, TimeUnit.SECONDS);
-        totalLatency = APIMetrics.newCounter(url + paramName,
-                factory.createMetricName(namePrefix + "TotalLatency"));
-        totalLatencyHistogram = new EstimatedHistogramWrapper(url + "/estimated_histogram" + paramName);
+        this.uri = uri;
+        this.param = param;
     }
-    /**
-     * Create LatencyMetrics with given group, type, prefix to append to each
-     * metric name, and scope. Any updates to this will also run on parent
-     *
-     * @param factory
-     *            MetricName factory to use
-     * @param namePrefix
-     *            Prefix to append to each metric name
-     * @param parents
-     *            any amount of parents to replicate updates to
-     */
-    public LatencyMetrics(String url, MetricNameFactory factory,
-            String namePrefix, LatencyMetrics... parents) {
-        this(url, factory, namePrefix);
-        this.parents.addAll(ImmutableList.copyOf(parents));
+    protected ObjectName[] names(String suffix) throws MalformedObjectNameException {
+        return Arrays.stream(factories).map(f -> {
+            try {
+                return f.createMetricName(namePrefix + suffix);
+            } catch (MalformedObjectNameException e) {
+                throw new RuntimeException(e); // dung...
+            }
+        }).toArray(size -> new ObjectName[size]);
     }
-    /** takes nanoseconds **/
-    public void addNano(long nanos) {
-        // convert to microseconds. 1 millionth
-        latency.update(nanos, TimeUnit.NANOSECONDS);
-        totalLatency.inc(nanos / 1000);
-        for (LatencyMetrics parent : parents) {
-            parent.addNano(nanos);
-        }
-    }
-    public void release() {
-        APIMetrics.defaultRegistry()
-                .removeMetric(factory.createMetricName(namePrefix + "Latency"));
-        APIMetrics.defaultRegistry().removeMetric(
-                factory.createMetricName(namePrefix + "TotalLatency"));
-    }
-    @Deprecated
-    public synchronized double getRecentLatency() {
-        long ops = latency.count();
-        long n = totalLatency.count();
-        if (ops == lastOpCount)
-            return 0;
-        try {
-            return ((double) n - lastLatency) / (ops - lastOpCount);
-        } finally {
-            lastLatency = n;
-            lastOpCount = ops;
-        }
-    }
-    public long[] getRecentLatencyHistogram() {
-        return recentLatencyHistogram.getBuckets(totalLatencyHistogram.getBuckets(false));
+    @Override
+    public void register(MetricsRegistry registry) throws MalformedObjectNameException {
+        String paramName = (param == null) ? "" : "/" + param;
+        registry.register(() -> registry.timer(uri + "/moving_average_histogram" + paramName), names("Latency"));
+        registry.register(() -> registry.counter(uri + paramName), names("TotalLatency"));
     }
 }

[file: org/apache/cassandra/metrics/MetricNameFactory.java (moved from com/cloudius/urchin/metrics)]

@ -15,23 +15,26 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-/*
- * Copyright 2015 Cloudius Systems
- *
- * Modified by Cloudius Systems
- */
-
-package com.cloudius.urchin.metrics;
-
-import com.yammer.metrics.core.MetricName;
+package org.apache.cassandra.metrics;
+
+import javax.management.MalformedObjectNameException;
+import javax.management.ObjectName;
 
-public interface MetricNameFactory
-{
+/**
+ * Simplified version of {@link Metrics} naming factory paradigm, simply
+ * generating {@link ObjectName} and nothing more.
+ *
+ * @author calle
+ */
+public interface MetricNameFactory {
     /**
      * Create a qualified name from given metric name.
      *
-     * @param metricName part of qualified name.
+     * @param metricName
+     *            part of qualified name.
      * @return new String with given metric name.
+     * @throws MalformedObjectNameException
      */
-    MetricName createMetricName(String metricName);
+    ObjectName createMetricName(String metricName) throws MalformedObjectNameException;
 }


@@ -0,0 +1,38 @@
package org.apache.cassandra.metrics;
import java.util.function.Function;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
/**
* Action interface for any type that encapsulates n metrics.
*
* @author calle
*
*/
public interface Metrics {
/**
* Implementors should issue
* {@link MetricsRegistry#register(java.util.function.Supplier, javax.management.ObjectName...)}
* for every {@link Metrics} they generate. This method is called in both
* bind (create) and unbind (remove) phase, so an appropriate use of
* {@link Function} binding is advisable.
*
* @param registry
* @throws MalformedObjectNameException
*/
void register(MetricsRegistry registry) throws MalformedObjectNameException;
/**
* Same as {@link #register(MetricsRegistry)}, but for {@link Metric}s that
* are "global" (i.e. static - not bound to an individual bean instance).
* This method is called whenever the first encapsulating MBean is
* added/removed from a {@link MBeanServer}.
*
* @param registry
* @throws MalformedObjectNameException
*/
default void registerGlobals(MetricsRegistry registry) throws MalformedObjectNameException {
}
}
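
To make the bind/unbind contract concrete, here is a hedged sketch of an implementor (ExampleMetrics, its REST paths and ObjectNames are invented; gauge, counter and register are the MetricsRegistry methods introduced later in this diff):

// Illustrative sketch only; not part of the diff.
class ExampleMetrics implements Metrics {
    @Override
    public void register(MetricsRegistry registry) throws MalformedObjectNameException {
        // Called on every bind and unbind pass, hence the Supplier indirection.
        registry.register(() -> registry.gauge("/example/metrics/value"),
                new ObjectName("com.example:type=Example,name=Value"));
    }

    @Override
    public void registerGlobals(MetricsRegistry registry) throws MalformedObjectNameException {
        // Shared across all bean instances; bound when the first MBean appears.
        registry.register(() -> registry.counter("/example/metrics/total"),
                new ObjectName("com.example:type=Example,name=Total"));
    }
}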


@@ -0,0 +1,813 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.cassandra.metrics;
import static com.scylladb.jmx.api.APIClient.getReader;
import static java.lang.Math.floor;
import static java.util.logging.Level.SEVERE;
import jakarta.json.JsonArray;
import jakarta.json.JsonNumber;
import jakarta.json.JsonObject;
import java.util.Arrays;
import java.util.Locale;
import java.util.concurrent.TimeUnit;
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.function.Supplier;
import java.util.logging.Logger;
import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanRegistrationException;
import javax.management.MBeanServer;
import javax.management.NotCompliantMBeanException;
import javax.management.ObjectName;
import com.scylladb.jmx.api.APIClient;
import com.sun.jmx.mbeanserver.JmxMBeanServer;
/**
* Helps integrate the 3.0 metrics API with 2.0.
* <p>
* The 3.0 API comes with poor JMX integration.
* </p>
*/
public class MetricsRegistry {
private static final long CACHE_DURATION = 1000;
private static final long UPDATE_INTERVAL = 50;
private static final Logger logger = Logger.getLogger(MetricsRegistry.class.getName());
private final APIClient client;
private final JmxMBeanServer mBeanServer;
public MetricsRegistry(APIClient client, JmxMBeanServer mBeanServer) {
this.client = client;
this.mBeanServer = mBeanServer;
}
public MetricsRegistry(MetricsRegistry other) {
this(other.client, other.mBeanServer);
}
public MetricMBean gauge(String url) {
return gauge(Long.class, url);
}
public <T> MetricMBean gauge(Class<T> type, final String url) {
return gauge(getReader(type), url);
}
public <T> MetricMBean gauge(final BiFunction<APIClient, String, T> function, final String url) {
return gauge(c -> function.apply(c, url));
}
public <T> MetricMBean gauge(final Function<APIClient, T> function) {
return gauge(() -> function.apply(client));
}
private class JmxGauge implements JmxGaugeMBean {
private final Supplier<?> function;
public JmxGauge(Supplier<?> function) {
this.function = function;
}
@Override
public Object getValue() {
return function.get();
}
}
public <T> MetricMBean gauge(final Supplier<T> function) {
return new JmxGauge(function);
}
/**
* Default approach to register is to actually register/add to
* {@link MBeanServer} For unbind phase, override here.
*
* @param bean
* @param objectNames
*/
public void register(Supplier<MetricMBean> f, ObjectName... objectNames) {
MetricMBean bean = f.get();
for (ObjectName name : objectNames) {
try {
mBeanServer.getMBeanServerInterceptor().registerMBean(bean, name);
} catch (InstanceAlreadyExistsException | MBeanRegistrationException | NotCompliantMBeanException e) {
logger.log(SEVERE, "Could not register mbean", e);
}
}
}
private class JmxCounter implements JmxCounterMBean {
private final String url;
public JmxCounter(String url) {
super();
this.url = url;
}
@Override
public long getCount() {
return client.getLongValue(url);
}
}
public MetricMBean counter(final String url) {
if (url != null) {
return new JmxCounter(url);
}
return new JmxCounter(url) {
@Override
public long getCount() {
return 0;
}
};
}
private abstract class IntermediatelyUpdated {
private final long interval;
private final Supplier<JsonObject> supplier;
private long lastUpdate;
public IntermediatelyUpdated(String url, long interval) {
this.supplier = () -> client.getJsonObj(url, null);
this.interval = interval;
}
public IntermediatelyUpdated(Supplier<JsonObject> supplier, long interval) {
this.supplier = supplier;
this.interval = interval;
}
public abstract void update(JsonObject obj);
public final void update() {
long now = System.currentTimeMillis();
if (now - lastUpdate < interval) {
return;
}
try {
JsonObject obj = supplier.get();
update(obj);
} finally {
lastUpdate = now;
}
}
}
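// Editorial note: IntermediatelyUpdated is a pull-through cache. Every
// accessor calls update(), which re-fetches from the REST API only when
// more than `interval` ms have passed since the last fetch, so e.g. a
// meter built with CACHE_DURATION (1000 ms) hits the server at most once
// per second regardless of how often JMX clients poll it.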
private static class Meter {
public final long count;
public final double oneMinuteRate;
public final double fiveMinuteRate;
public final double fifteenMinuteRate;
public final double meanRate;
public Meter(long count, double oneMinuteRate, double fiveMinuteRate, double fifteenMinuteRate,
double meanRate) {
this.count = count;
this.oneMinuteRate = oneMinuteRate;
this.fiveMinuteRate = fiveMinuteRate;
this.fifteenMinuteRate = fifteenMinuteRate;
this.meanRate = meanRate;
}
public Meter() {
this(0, 0, 0, 0, 0);
}
public Meter(JsonObject obj) {
JsonArray rates = obj.getJsonArray("rates");
oneMinuteRate = rates.getJsonNumber(0).doubleValue();
fiveMinuteRate = rates.getJsonNumber(1).doubleValue();
fifteenMinuteRate = rates.getJsonNumber(2).doubleValue();
meanRate = obj.getJsonNumber("mean_rate").doubleValue();
count = obj.getJsonNumber("count").longValue();
}
}
private static final TimeUnit RATE_UNIT = TimeUnit.SECONDS;
private static final TimeUnit DURATION_UNIT = TimeUnit.MICROSECONDS;
private static final TimeUnit API_DURATION_UNIT = TimeUnit.MICROSECONDS;
private static final double DURATION_FACTOR = 1.0 / API_DURATION_UNIT.convert(1, DURATION_UNIT);
private static double toDuration(double micro) {
return micro * DURATION_FACTOR;
}
private static String unitString(TimeUnit u) {
String s = u.toString().toLowerCase(Locale.US);
return s.substring(0, s.length() - 1);
}
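// Editorial example: unitString(TimeUnit.SECONDS) returns "second", so
// JmxMeterMBean.getRateUnit() below reports "event/second".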
private class JmxMeter extends IntermediatelyUpdated implements JmxMeterMBean {
private Meter meter = new Meter();
public JmxMeter(String url, long interval) {
super(url, interval);
}
public JmxMeter(Supplier<JsonObject> supplier, long interval) {
super(supplier, interval);
}
@Override
public void update(JsonObject obj) {
meter = new Meter(obj);
}
@Override
public long getCount() {
update();
return meter.count;
}
@Override
public double getMeanRate() {
update();
return meter.meanRate;
}
@Override
public double getOneMinuteRate() {
update();
return meter.oneMinuteRate;
}
@Override
public double getFiveMinuteRate() {
update();
return meter.fiveMinuteRate;
}
@Override
public double getFifteenMinuteRate() {
update();
return meter.fifteenMinuteRate;
}
@Override
public String getRateUnit() {
return "event/" + unitString(RATE_UNIT);
}
}
public MetricMBean meter(String url) {
return new JmxMeter(url, CACHE_DURATION);
}
private static long[] asLongArray(JsonArray a) {
return a.getValuesAs(JsonNumber.class).stream().mapToLong(n -> n.longValue()).toArray();
}
private static interface Samples {
default double getValue(double quantile) {
return 0;
}
default long[] getValues() {
return new long[0];
}
}
private static class BufferSamples implements Samples {
private final long[] samples;
public BufferSamples(long[] samples) {
this.samples = samples;
Arrays.sort(this.samples);
}
@Override
public long[] getValues() {
return samples;
}
@Override
public double getValue(double quantile) {
if (quantile < 0.0 || quantile > 1.0) {
throw new IllegalArgumentException(quantile + " is not in [0..1]");
}
if (samples.length == 0) {
return 0.0;
}
final double pos = quantile * (samples.length + 1);
if (pos < 1) {
return samples[0];
}
if (pos >= samples.length) {
return samples[samples.length - 1];
}
final double lower = samples[(int) pos - 1];
final double upper = samples[(int) pos];
return lower + (pos - floor(pos)) * (upper - lower);
}
}
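// Editorial worked example of the interpolation above: for sorted samples
// {10, 20, 30, 40} and quantile 0.5, pos = 0.5 * (4 + 1) = 2.5, so
// lower = samples[1] = 20, upper = samples[2] = 30, and getValue(0.5)
// returns 20 + 0.5 * (30 - 20) = 25.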
private static class Histogram {
private final long count;
private final long min;
private final long max;
private final double mean;
private final double stdDev;
private final Samples samples;
public Histogram(long count, long min, long max, double mean, double stdDev, Samples samples) {
this.count = count;
this.min = min;
this.max = max;
this.mean = mean;
this.stdDev = stdDev;
this.samples = samples;
}
public Histogram() {
this(0, 0, 0, 0, 0, new Samples() {
});
}
public Histogram(JsonObject obj) {
this(obj.getJsonNumber("count").longValue(), obj.getJsonNumber("min").longValue(),
obj.getJsonNumber("max").longValue(), obj.getJsonNumber("mean").doubleValue(),
obj.getJsonNumber("variance").doubleValue(), new BufferSamples(getValues(obj)));
}
public Histogram(EstimatedHistogram h) {
this(h.count(), h.min(), h.max(), h.mean(), 0, h);
}
private static long[] getValues(JsonObject obj) {
JsonArray arr = obj.getJsonArray("sample");
if (arr != null) {
return asLongArray(arr);
}
return new long[0];
}
public long[] getValues() {
return samples.getValues();
}
// Origin (and previous iterations of scylla-jmx)
// uses biased/ExponentiallyDecaying measurements
// for the history & quantile resolution.
// However, for our use that is just gobbledygook, since
// we, on occasions of being asked, and when a certain time
// has passed, ask the actual scylla server for a
// "values" buffer. A buffer with no information whatsoever
// on how said values correlate to actual sampling
// time.
// So, applying time weights at this level is just
// wrong. We can just as well treat this as a uniform
// distribution.
// Obvious improvement: send time/value tuples instead.
public double getValue(double quantile) {
return samples.getValue(quantile);
}
public long getCount() {
return count;
}
public long getMin() {
return min;
}
public long getMax() {
return max;
}
public double getMean() {
return mean;
}
public double getStdDev() {
return stdDev;
}
}
private static class EstimatedHistogram implements Samples {
/**
* The series of values to which the counts in `buckets` correspond: 1,
* 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 17, 20, etc. Thus, a `buckets` of
* [0, 0, 1, 10] would mean we had seen one value of 3 and 10 values of
* 4.
*
* The series starts at 1 and grows by 1.2 each time (rounding and
* removing duplicates). It goes from 1 to around 36M by default
* (creating 90+1 buckets), which will give us timing resolution from
* microseconds to 36 seconds, with less precision as the numbers get
* larger.
*
* Each bucket represents values from (previous bucket offset, current
* offset].
*/
private final long[] bucketOffsets;
// buckets is one element longer than bucketOffsets -- the last element
// is values greater than the last offset
private long[] buckets;
public EstimatedHistogram(JsonObject obj) {
this(asLongArray(obj.getJsonArray("bucket_offsets")), asLongArray(obj.getJsonArray("buckets")));
}
public EstimatedHistogram(long[] offsets, long[] bucketData) {
assert bucketData.length == offsets.length + 1;
bucketOffsets = offsets;
buckets = bucketData;
}
/**
* @return the smallest value that could have been added to this
* histogram
*/
public long min() {
for (int i = 0; i < buckets.length; i++) {
if (buckets[i] > 0) {
return i == 0 ? 0 : 1 + bucketOffsets[i - 1];
}
}
return 0;
}
/**
* @return the largest value that could have been added to this
* histogram. If the histogram overflowed, returns
* Long.MAX_VALUE.
*/
public long max() {
int lastBucket = buckets.length - 1;
if (buckets[lastBucket] > 0) {
return Long.MAX_VALUE;
}
for (int i = lastBucket - 1; i >= 0; i--) {
if (buckets[i] > 0) {
return bucketOffsets[i];
}
}
return 0;
}
@Override
public long[] getValues() {
return buckets;
}
/**
* @param percentile
* @return estimated value at given percentile
*/
@Override
public double getValue(double percentile) {
assert percentile >= 0 && percentile <= 1.0;
int lastBucket = buckets.length - 1;
if (buckets[lastBucket] > 0) {
throw new IllegalStateException("Unable to compute when histogram overflowed");
}
long pcount = (long) Math.floor(count() * percentile);
if (pcount == 0) {
return 0;
}
long elements = 0;
for (int i = 0; i < lastBucket; i++) {
elements += buckets[i];
if (elements >= pcount) {
return bucketOffsets[i];
}
}
return 0;
}
/**
* @return the mean histogram value (average of bucket offsets, weighted
* by count)
* @throws IllegalStateException
* if any values were greater than the largest bucket
* threshold
*/
public long mean() {
int lastBucket = buckets.length - 1;
if (buckets[lastBucket] > 0) {
throw new IllegalStateException("Unable to compute ceiling for max when histogram overflowed");
}
long elements = 0;
long sum = 0;
for (int i = 0; i < lastBucket; i++) {
long bCount = buckets[i];
elements += bCount;
sum += bCount * bucketOffsets[i];
}
return (long) Math.ceil((double) sum / elements);
}
/**
* @return the total number of non-zero values
*/
public long count() {
return Arrays.stream(buckets).sum();
}
/**
* @return true if this histogram has overflowed -- that is, a value
* larger than our largest bucket could bound was added
*/
@SuppressWarnings("unused")
public boolean isOverflowed() {
return buckets[buckets.length - 1] > 0;
}
}
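// Editorial sketch (not part of this diff): one way the bucket-offset
// series described above ("starts at 1 and grows by 1.2 each time,
// rounding and removing duplicates") could be generated.
@SuppressWarnings("unused")
private static long[] exampleBucketOffsets(int size) {
    long[] offsets = new long[size];
    long last = 1;
    offsets[0] = last;
    for (int i = 1; i < size; i++) {
        long next = Math.round(last * 1.2);
        if (next == last) {
            next = last + 1; // deduplicate by forcing progress
        }
        offsets[i] = next;
        last = next;
    }
    return offsets; // 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 17, 20, ...
}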
private class JmxHistogram extends IntermediatelyUpdated implements JmxHistogramMBean {
private Histogram histogram = new Histogram();
public JmxHistogram(String url, long interval) {
super(url, interval);
}
@Override
public void update(JsonObject obj) {
if (obj.containsKey("hist")) {
obj = obj.getJsonObject("hist");
}
if (obj.containsKey("buckets")) {
histogram = new Histogram(new EstimatedHistogram(obj));
} else {
histogram = new Histogram(obj);
}
}
@Override
public long getCount() {
update();
return histogram.getCount();
}
@Override
public long getMin() {
update();
return histogram.getMin();
}
@Override
public long getMax() {
update();
return histogram.getMax();
}
@Override
public double getMean() {
update();
return histogram.getMean();
}
@Override
public double getStdDev() {
update();
return histogram.getStdDev();
}
@Override
public double get50thPercentile() {
update();
return histogram.getValue(.5);
}
@Override
public double get75thPercentile() {
update();
return histogram.getValue(.75);
}
@Override
public double get95thPercentile() {
update();
return histogram.getValue(.95);
}
@Override
public double get98thPercentile() {
update();
return histogram.getValue(.98);
}
@Override
public double get99thPercentile() {
update();
return histogram.getValue(.99);
}
@Override
public double get999thPercentile() {
update();
return histogram.getValue(.999);
}
@Override
public long[] values() {
update();
return histogram.getValues();
}
}
public MetricMBean histogram(String url, boolean considerZeroes) {
return new JmxHistogram(url, UPDATE_INTERVAL);
}
private class JmxTimer extends JmxMeter implements JmxTimerMBean {
private Histogram histogram = new Histogram();
public JmxTimer(String url, long interval) {
super(url, interval);
}
@Override
public void update(JsonObject obj) {
// TODO: this is not atomic.
super.update(obj.getJsonObject("meter"));
histogram = new Histogram(obj.getJsonObject("hist"));
}
@Override
public double getMin() {
update();
return toDuration(histogram.getMin());
}
@Override
public double getMax() {
update();
return toDuration(histogram.getMax());
}
@Override
public double getMean() {
update();
return toDuration(histogram.getMean());
}
@Override
public double getStdDev() {
update();
return toDuration(histogram.getStdDev());
}
@Override
public double get50thPercentile() {
update();
return toDuration(histogram.getValue(.5));
}
@Override
public double get75thPercentile() {
update();
return toDuration(histogram.getValue(.75));
}
@Override
public double get95thPercentile() {
update();
return toDuration(histogram.getValue(.95));
}
@Override
public double get98thPercentile() {
update();
return toDuration(histogram.getValue(.98));
}
@Override
public double get99thPercentile() {
update();
return toDuration(histogram.getValue(.99));
}
@Override
public double get999thPercentile() {
update();
return toDuration(histogram.getValue(.999));
}
@Override
public long[] values() {
update();
return histogram.getValues();
}
@Override
public String getDurationUnit() {
update();
return DURATION_UNIT.toString().toLowerCase(Locale.US);
}
}
public MetricMBean timer(String url) {
return new JmxTimer(url, UPDATE_INTERVAL);
}
public interface MetricMBean {
}
public static interface JmxGaugeMBean extends MetricMBean {
Object getValue();
}
public interface JmxHistogramMBean extends MetricMBean {
long getCount();
long getMin();
long getMax();
double getMean();
double getStdDev();
double get50thPercentile();
double get75thPercentile();
double get95thPercentile();
double get98thPercentile();
double get99thPercentile();
double get999thPercentile();
long[] values();
}
public interface JmxCounterMBean extends MetricMBean {
long getCount();
}
public interface JmxMeterMBean extends MetricMBean {
long getCount();
double getMeanRate();
double getOneMinuteRate();
double getFiveMinuteRate();
double getFifteenMinuteRate();
String getRateUnit();
}
public interface JmxTimerMBean extends JmxMeterMBean {
double getMin();
double getMax();
double getMean();
double getStdDev();
double get50thPercentile();
double get75thPercentile();
double get95thPercentile();
double get98thPercentile();
double get99thPercentile();
double get999thPercentile();
long[] values();
String getDurationUnit();
}
}
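
A hedged usage sketch to tie the registry together (the method and the two ObjectNames are invented; client and server stand for the APIClient and JmxMBeanServer taken by the constructor above):

// Illustrative sketch only; not part of the diff.
void wireExample(APIClient client, JmxMBeanServer server) throws Exception {
    MetricsRegistry registry = new MetricsRegistry(client, server);
    registry.register(() -> registry.counter("/storage_service/metrics/load"),
            new ObjectName("org.apache.cassandra.metrics:type=Storage,name=Load"));
    registry.register(() -> registry.timer("/column_family/metrics/read_latency"),
            new ObjectName("org.apache.cassandra.metrics:type=Table,name=ReadLatency"));
    // Attribute reads on these beans now proxy to the REST API, throttled
    // by UPDATE_INTERVAL / CACHE_DURATION as implemented above.
}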


@@ -23,27 +23,21 @@
  */
 package org.apache.cassandra.metrics;
 
-import com.cloudius.urchin.metrics.APIMetrics;
-import com.cloudius.urchin.metrics.DefaultNameFactory;
-import com.cloudius.urchin.metrics.MetricNameFactory;
-
-import com.yammer.metrics.core.Counter;
+import javax.management.MalformedObjectNameException;
 
 /**
  * Metrics related to Storage.
  */
-public class StorageMetrics {
-    private static final MetricNameFactory factory = new DefaultNameFactory(
-            "Storage");
-
-    public static final Counter load = APIMetrics.newCounter(
-            "/storage_service/metrics/load", factory.createMetricName("Load"));
-    public static final Counter exceptions = APIMetrics.newCounter(
-            "/storage_service/metrics/exceptions",
-            factory.createMetricName("Exceptions"));
-    public static final Counter totalHintsInProgress = APIMetrics.newCounter(
-            "/storage_service/metrics/hints_in_progress",
-            factory.createMetricName("TotalHintsInProgress"));
-    public static final Counter totalHints = APIMetrics.newCounter(
-            "/storage_service/metrics/total_hints",
-            factory.createMetricName("TotalHints"));
+public class StorageMetrics implements Metrics {
+    @Override
+    public void register(MetricsRegistry registry) throws MalformedObjectNameException {
+        MetricNameFactory factory = new DefaultNameFactory("Storage");
+        registry.register(() -> registry.counter("/storage_service/metrics/load"), factory.createMetricName("Load"));
+        registry.register(() -> registry.counter("/storage_service/metrics/exceptions"),
+                factory.createMetricName("Exceptions"));
+        registry.register(() -> registry.counter("/storage_service/metrics/hints_in_progress"),
+                factory.createMetricName("TotalHintsInProgress"));
+        registry.register(() -> registry.counter("/storage_service/metrics/total_hints"),
+                factory.createMetricName("TotalHints"));
+    }
 }
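
The net effect of this diff: the Storage counters are no longer static fields created at class-load time but are registered on demand through the registry. A hedged sketch of the bind step (client and server stand for a live APIClient and JmxMBeanServer):

// Illustrative sketch only; not part of the diff.
Metrics storage = new StorageMetrics();
storage.register(new MetricsRegistry(client, server));
// A read of the "Load" attribute now performs a lazy GET of
// "/storage_service/metrics/load" instead of touching a static Counter.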


@@ -0,0 +1,111 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright 2015 ScyllaDB
*
* Modified by ScyllaDB
*/
package org.apache.cassandra.metrics;
import static java.util.Arrays.asList;
import static org.apache.cassandra.metrics.DefaultNameFactory.createMetricName;
import jakarta.json.JsonArray;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.EnumSet;
import java.util.HashSet;
import java.util.Set;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.OperationsException;
import com.scylladb.jmx.api.APIClient;
import com.scylladb.jmx.metrics.APIMBean;
import com.scylladb.jmx.metrics.RegistrationChecker;
import com.scylladb.jmx.metrics.RegistrationMode;
import com.sun.jmx.mbeanserver.JmxMBeanServer;
/**
* Metrics for streaming.
*/
public class StreamingMetrics {
public static final String TYPE_NAME = "Streaming";
private static final HashSet<ObjectName> globalNames;
static {
try {
globalNames = new HashSet<ObjectName>(asList(createMetricName(TYPE_NAME, "ActiveOutboundStreams", null),
createMetricName(TYPE_NAME, "TotalIncomingBytes", null),
createMetricName(TYPE_NAME, "TotalOutgoingBytes", null)));
} catch (MalformedObjectNameException e) {
throw new Error(e);
}
}
private StreamingMetrics() {
}
private static boolean isStreamingName(ObjectName n) {
return TYPE_NAME.equals(n.getKeyProperty("type"));
}
public static RegistrationChecker createRegistrationChecker() {
return new RegistrationChecker() {
@Override
protected void doCheck(APIClient client, JmxMBeanServer server, EnumSet<RegistrationMode> mode) throws OperationsException, UnknownHostException {
Set<ObjectName> all = new HashSet<ObjectName>(globalNames);
JsonArray streams = client.getJsonArray("/stream_manager/");
for (int i = 0; i < streams.size(); i++) {
JsonArray sessions = streams.getJsonObject(i).getJsonArray("sessions");
for (int j = 0; j < sessions.size(); j++) {
String peer = sessions.getJsonObject(j).getString("peer");
String scope = InetAddress.getByName(peer).getHostAddress().replaceAll(":", ".");
all.add(createMetricName(TYPE_NAME, "IncomingBytes", scope));
all.add(createMetricName(TYPE_NAME, "OutgoingBytes", scope));
}
}
MetricsRegistry registry = new MetricsRegistry(client, server);
APIMBean.checkRegistration(server, all, mode, StreamingMetrics::isStreamingName, n -> {
String scope = n.getKeyProperty("scope");
String name = n.getKeyProperty("name");
String url = null;
if ("ActiveOutboundStreams".equals(name)) {
url = "/stream_manager/metrics/outbound";
} else if ("IncomingBytes".equals(name) || "TotalIncomingBytes".equals(name)) {
url = "/stream_manager/metrics/incoming";
} else if ("OutgoingBytes".equals(name) || "TotalOutgoingBytes".equals(name)) {
url = "/stream_manager/metrics/outgoing";
}
if (url == null) {
throw new IllegalArgumentException();
}
if (scope != null) {
url = url + "/" + scope;
}
return registry.counter(url);
});
}
};
}
}
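
One detail worth spelling out: the per-peer metric scope is the peer address with ':' replaced by '.', which keeps IPv6 addresses legal inside an ObjectName. A hedged example (the peer value is invented):

// Illustrative sketch only; not part of the diff.
String peer = "fe80::1";
String scope = InetAddress.getByName(peer).getHostAddress().replaceAll(":", ".");
// scope is "fe80.0.0.0.0.0.0.1", yielding e.g.
// createMetricName(TYPE_NAME, "IncomingBytes", scope)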


@@ -0,0 +1,553 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.cassandra.metrics;
import static com.scylladb.jmx.api.APIClient.getReader;
import java.io.InvalidObjectException;
import java.io.ObjectStreamException;
import java.util.Hashtable;
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.function.Supplier;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import org.apache.cassandra.db.ColumnFamilyStore;
import com.scylladb.jmx.api.APIClient;
/**
* Metrics for {@link ColumnFamilyStore}.
*/
public class TableMetrics implements Metrics {
private final MetricNameFactory factory;
private final MetricNameFactory aliasFactory;
private static final MetricNameFactory globalFactory = new AllTableMetricNameFactory("Table");
private static final MetricNameFactory globalAliasFactory = new AllTableMetricNameFactory("ColumnFamily");
private static final LatencyMetrics globalLatency[] = new LatencyMetrics[] {
new LatencyMetrics("Read", compose("read_latency"), globalFactory, globalAliasFactory),
new LatencyMetrics("Write", compose("read_latency"), globalFactory, globalAliasFactory),
new LatencyMetrics("Range", compose("read_latency"), globalFactory, globalAliasFactory), };
private final String cfName;
private final LatencyMetrics latencyMetrics[];
public TableMetrics(String keyspace, String columnFamily, boolean isIndex) {
this.factory = new TableMetricNameFactory(keyspace, columnFamily, isIndex, "Table");
this.aliasFactory = new TableMetricNameFactory(keyspace, columnFamily, isIndex, "ColumnFamily");
this.cfName = keyspace + ":" + columnFamily;
latencyMetrics = new LatencyMetrics[] {
new LatencyMetrics("Read", compose("read_latency"), cfName, factory, aliasFactory),
new LatencyMetrics("Write", compose("write_latency"), cfName, factory, aliasFactory),
new LatencyMetrics("Range", compose("range_latency"), cfName, factory, aliasFactory),
new LatencyMetrics("CasPrepare", compose("cas_prepare"), cfName, factory, aliasFactory),
new LatencyMetrics("CasPropose", compose("cas_propose"), cfName, factory, aliasFactory),
new LatencyMetrics("CasCommit", compose("cas_commit"), cfName, factory, aliasFactory), };
}
@Override
public void register(MetricsRegistry registry) throws MalformedObjectNameException {
Registry r = new Registry(registry, factory, aliasFactory, cfName);
registerCommon(r);
registerLocal(r);
}
@Override
public void registerGlobals(MetricsRegistry registry) throws MalformedObjectNameException {
Registry r = new Registry(registry, globalFactory, globalAliasFactory, null);
registerCommon(r);
for (LatencyMetrics l : globalLatency) {
l.register(registry);
}
}
private static String compose(String base, String name) {
String s = "/column_family/metrics/" + base;
return name != null ? s + "/" + name : s;
}
private static String compose(String base) {
return compose(base, null);
}
/**
* Creates metrics for given {@link ColumnFamilyStore}.
*
* @param cfs
* ColumnFamilyStore to measure metrics
*/
static class Registry extends MetricsRegistry {
@SuppressWarnings("unused")
private Function<APIClient, Long> newGauge(final String url) {
return newGauge(Long.class, url);
}
public <T> Function<APIClient, T> newGauge(BiFunction<APIClient, String, T> function, String url) {
return c -> {
return function.apply(c, url);
};
}
private <T> Function<APIClient, T> newGauge(Class<T> type, final String url) {
return newGauge(getReader(type), url);
}
final MetricNameFactory factory;
final MetricNameFactory aliasFactory;
final String cfName;
final MetricsRegistry other;
public Registry(MetricsRegistry other, MetricNameFactory factory, MetricNameFactory aliasFactory,
String cfName) {
super(other);
this.other = other;
this.cfName = cfName;
this.factory = factory;
this.aliasFactory = aliasFactory;
}
@Override
public void register(Supplier<MetricMBean> f, ObjectName... objectNames) {
other.register(f, objectNames);
}
public void createTableGauge(String name, String uri) throws MalformedObjectNameException {
createTableGauge(name, name, uri);
}
public void createTableGauge(String name, String alias, String uri) throws MalformedObjectNameException {
createTableGauge(Long.class, name, alias, uri);
}
public <T> void createTableGauge(Class<T> c, String name, String uri) throws MalformedObjectNameException {
createTableGauge(c, c, name, name, uri);
}
public <T> void createTableGauge(Class<T> c, String name, String alias, String uri) throws MalformedObjectNameException {
createTableGauge(c, name, alias, uri, getReader(c));
}
public <T> void createTableGauge(Class<T> c, String name, String uri, BiFunction<APIClient, String, T> f)
throws MalformedObjectNameException {
createTableGauge(c, name, name, uri, f);
}
public <T> void createTableGauge(Class<T> c, String name, String alias, String uri,
BiFunction<APIClient, String, T> f) throws MalformedObjectNameException {
register(() -> gauge(newGauge(f, compose(uri, cfName))), factory.createMetricName(name),
aliasFactory.createMetricName(alias));
}
private static <T> BiFunction<APIClient, String, T> getDummy(Class<T> type) {
if (type == String.class) {
return (c, s) -> type.cast("");
} else if (type == Integer.class) {
return (c, s) -> type.cast(0);
} else if (type == Double.class) {
return (c, s) -> type.cast(0.0);
} else if (type == Long.class) {
return (c, s) -> type.cast(0L);
}
throw new IllegalArgumentException(type.getName());
}
public <T> void createDummyTableGauge(Class<T> c, String name) throws MalformedObjectNameException {
register(() -> gauge(newGauge(getDummy(c), null)), factory.createMetricName(name),
aliasFactory.createMetricName(name));
}
public <L, G> void createTableGauge(Class<L> c1, Class<G> c2, String name, String alias, String uri)
throws MalformedObjectNameException {
if (cfName != null) {
createTableGauge(c1, name, alias, uri, getReader(c1));
} else { // global case
createTableGauge(c2, name, alias, uri, getReader(c2));
}
}
public void createTableCounter(String name, String uri) throws MalformedObjectNameException {
createTableCounter(name, name, uri);
}
public void createTableCounter(String name, String alias, String uri) throws MalformedObjectNameException {
register(() -> counter(compose(uri, cfName)), factory.createMetricName(name),
aliasFactory.createMetricName(alias));
}
public void createDummyTableCounter(String name) throws MalformedObjectNameException {
register(() -> counter(null), factory.createMetricName(name),
aliasFactory.createMetricName(name));
}
public void createTableHistogram(String name, String uri, boolean considerZeros)
throws MalformedObjectNameException {
createTableHistogram(name, name, uri, considerZeros);
}
public void createTableHistogram(String name, String alias, String uri, boolean considerZeros)
throws MalformedObjectNameException {
register(() -> histogram(compose(uri, cfName), considerZeros), factory.createMetricName(name),
aliasFactory.createMetricName(alias));
}
public void createTimer(String name, String uri) throws MalformedObjectNameException {
register(() -> timer(compose(uri, cfName)), factory.createMetricName(name));
}
}
private void registerLocal(Registry registry) throws MalformedObjectNameException {
registry.createTableGauge(long[].class, "EstimatedPartitionSizeHistogram", "EstimatedRowSizeHistogram",
"estimated_row_size_histogram", APIClient::getEstimatedHistogramAsLongArrValue);
registry.createTableGauge("EstimatedPartitionCount", "EstimatedRowCount", "estimated_row_count");
registry.createTableGauge(long[].class, "EstimatedColumnCountHistogram", "estimated_column_count_histogram",
APIClient::getEstimatedHistogramAsLongArrValue);
registry.createTableGauge(Double.class, "KeyCacheHitRate", "key_cache_hit_rate");
registry.createTimer("CoordinatorReadLatency", "coordinator/read");
registry.createTimer("CoordinatorScanLatency", "coordinator/scan");
registry.createTimer("WaitingOnFreeMemtableSpace", "waiting_on_free_memtable");
for (LatencyMetrics l : latencyMetrics) {
l.register(registry);
}
// TODO: implement
registry.createDummyTableCounter("DroppedMutations");
}
private static void registerCommon(Registry registry) throws MalformedObjectNameException {
registry.createTableGauge("MemtableColumnsCount", "memtable_columns_count");
registry.createTableGauge("MemtableOnHeapSize", "memtable_on_heap_size");
registry.createTableGauge("MemtableOffHeapSize", "memtable_off_heap_size");
registry.createTableGauge("MemtableLiveDataSize", "memtable_live_data_size");
registry.createTableGauge("AllMemtablesHeapSize", "all_memtables_on_heap_size");
registry.createTableGauge("AllMemtablesOffHeapSize", "all_memtables_off_heap_size");
registry.createTableGauge("AllMemtablesLiveDataSize", "all_memtables_live_data_size");
registry.createTableCounter("MemtableSwitchCount", "memtable_switch_count");
registry.createTableHistogram("SSTablesPerReadHistogram", "sstables_per_read_histogram", true);
registry.createTableGauge(Double.class, "CompressionRatio", "compression_ratio");
registry.createTableCounter("PendingFlushes", "pending_flushes");
registry.createTableGauge(Integer.class, Long.class, "PendingCompactions", "PendingCompactions",
"pending_compactions");
registry.createTableGauge(Integer.class, Long.class, "LiveSSTableCount", "LiveSSTableCount",
"live_ss_table_count");
registry.createTableCounter("LiveDiskSpaceUsed", "live_disk_space_used");
registry.createTableCounter("TotalDiskSpaceUsed", "total_disk_space_used");
registry.createTableGauge("MinPartitionSize", "MinRowSize", "min_row_size");
registry.createTableGauge("MaxPartitionSize", "MaxRowSize", "max_row_size");
registry.createTableGauge("MeanPartitionSize", "MeanRowSize", "mean_row_size");
registry.createTableGauge("BloomFilterFalsePositives", "bloom_filter_false_positives");
registry.createTableGauge("RecentBloomFilterFalsePositives", "recent_bloom_filter_false_positives");
registry.createTableGauge(Double.class, "BloomFilterFalseRatio", "bloom_filter_false_ratio");
registry.createTableGauge(Double.class, "RecentBloomFilterFalseRatio", "recent_bloom_filter_false_ratio");
registry.createTableGauge("BloomFilterDiskSpaceUsed", "bloom_filter_disk_space_used");
registry.createTableGauge("BloomFilterOffHeapMemoryUsed", "bloom_filter_off_heap_memory_used");
registry.createTableGauge("IndexSummaryOffHeapMemoryUsed", "index_summary_off_heap_memory_used");
registry.createTableGauge("CompressionMetadataOffHeapMemoryUsed", "compression_metadata_off_heap_memory_used");
registry.createTableGauge("SpeculativeRetries", "speculative_retries");
registry.createTableHistogram("TombstoneScannedHistogram", "tombstone_scanned_histogram", false);
registry.createTableHistogram("LiveScannedHistogram", "live_scanned_histogram", false);
registry.createTableHistogram("ColUpdateTimeDeltaHistogram", "col_update_time_delta_histogram", false);
// We do not want to capture view mutation specific metrics for a view;
// they only make sense to capture on the base table
// TODO: views
// if (!cfs.metadata.isView())
// {
// viewLockAcquireTime = createTableTimer("ViewLockAcquireTime",
// cfs.keyspace.metric.viewLockAcquireTime);
// viewReadTime = createTableTimer("ViewReadTime",
// cfs.keyspace.metric.viewReadTime);
// }
registry.createTableGauge("SnapshotsSize", "snapshots_size");
registry.createTableCounter("RowCacheHitOutOfRange", "row_cache_hit_out_of_range");
registry.createTableCounter("RowCacheHit", "row_cache_hit");
registry.createTableCounter("RowCacheMiss", "row_cache_miss");
// TODO: implement
registry.createDummyTableGauge(Double.class, "PercentRepaired");
}
static class TableMetricObjectName extends javax.management.ObjectName {
private final TableMetricStringNameFactory factory;
private final String metricName;
public TableMetricObjectName(TableMetricStringNameFactory factory, String metricName) throws MalformedObjectNameException {
super("");
this.factory = factory;
this.metricName = metricName;
}
@Override
public boolean isPropertyValuePattern(String property) {
return false;
}
@Override
public String getCanonicalName() {
return factory.createMetricStringName(metricName);
}
@Override
public String getDomain() {
return factory.getDomain();
}
@Override
public String getKeyProperty(String property) {
if ("name".equals(property)) {
return metricName;
}
return factory.getKeyProperty(property);
}
@Override
public Hashtable<String,String> getKeyPropertyList() {
Hashtable<String, String> res = factory.getKeyPropertyList();
res.put("name", metricName);
return res;
}
@Override
public String getKeyPropertyListString() {
return factory.getKeyPropertyListString(metricName);
}
@Override
public String getCanonicalKeyPropertyListString() {
return getKeyPropertyListString();
}
@Override
public String toString() {
return getCanonicalName();
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (!(o instanceof ObjectName)) return false;
return getCanonicalName().equals(((ObjectName) o).getCanonicalName());
}
@Override
public int hashCode() {
return getCanonicalName().hashCode();
}
@Override
public boolean apply(ObjectName name) {
if (name.isDomainPattern() || name.isPropertyListPattern() || name.isPropertyValuePattern()) {
return false;
}
return getCanonicalName().equals(name.getCanonicalName());
}
@Override
public boolean isPattern() {
return false;
}
@Override
public boolean isDomainPattern() {
return false;
}
@Override
public boolean isPropertyPattern() {
return false;
}
@Override
public boolean isPropertyListPattern() {
return false;
}
@Override
public boolean isPropertyValuePattern() {
return false;
}
/**
* This type is not really serializable.
* Replace it with vanilla objectname.
*/
private Object writeReplace() throws ObjectStreamException {
try {
return new ObjectName(getDomain(), getKeyPropertyList());
} catch (MalformedObjectNameException e) {
throw new InvalidObjectException(toString());
}
}
}
static interface TableMetricStringNameFactory {
String createMetricStringName(String metricName);
String getDomain();
String getKeyProperty(String property);
Hashtable<String,String> getKeyPropertyList();
String getKeyPropertyListString(String metricName);
}
static class TableMetricNameFactory implements MetricNameFactory, TableMetricStringNameFactory {
private final String keyspaceName;
private final String tableName;
private final boolean isIndex;
private final String type;
public TableMetricNameFactory(String keyspaceName, String tableName, boolean isIndex, String type) {
this.keyspaceName = keyspaceName;
this.tableName = tableName;
this.isIndex = isIndex;
this.type = type;
}
private void appendKeyPropertyListString(final StringBuilder sb, final String metricName) {
String type = isIndex ? "Index" + this.type : this.type;
// Order matters here - keys have to be sorted
sb.append("keyspace=").append(keyspaceName);
sb.append(",name=").append(metricName);
sb.append(",scope=").append(tableName);
sb.append(",type=").append(type);
}
@Override
public String createMetricStringName(String metricName) {
String groupName = TableMetrics.class.getPackage().getName();
StringBuilder mbeanName = new StringBuilder();
mbeanName.append(groupName).append(":");
appendKeyPropertyListString(mbeanName, metricName);
return mbeanName.toString();
}
@Override
public String getDomain() {
return TableMetrics.class.getPackage().getName();
}
@Override
public String getKeyProperty(String property) {
switch (property) {
case "keyspace": return keyspaceName;
case "scope": return tableName;
case "type": return type;
default: return null;
}
}
@Override
public Hashtable<String,String> getKeyPropertyList() {
Hashtable<String, String> res = new Hashtable<>();
res.put("keyspace", keyspaceName);
res.put("scope", tableName);
res.put("type", type);
return res;
}
@Override
public String getKeyPropertyListString(String metricName) {
final StringBuilder sb = new StringBuilder();
appendKeyPropertyListString(sb, metricName);
return sb.toString();
}
@Override
public ObjectName createMetricName(String metricName) throws MalformedObjectNameException {
return new TableMetricObjectName(this, metricName);
}
}
static class AllTableMetricNameFactory implements MetricNameFactory, TableMetricStringNameFactory {
private final String type;
public AllTableMetricNameFactory(String type) {
this.type = type;
}
private void appendKeyPropertyListString(final StringBuilder sb, final String metricName) {
// Order matters here - keys have to be sorted
sb.append("name=").append(metricName);
sb.append(",type=" + type);
}
@Override
public String createMetricStringName(String metricName) {
String groupName = TableMetrics.class.getPackage().getName();
StringBuilder mbeanName = new StringBuilder();
mbeanName.append(groupName).append(":");
appendKeyPropertyListString(mbeanName, metricName);
return mbeanName.toString();
}
@Override
public String getDomain() {
return TableMetrics.class.getPackage().getName();
}
@Override
public String getKeyProperty(String property) {
switch (property) {
case "type": return type;
default: return null;
}
}
@Override
public Hashtable<String,String> getKeyPropertyList() {
Hashtable<String, String> res = new Hashtable<>();
res.put("type", type);
return res;
}
@Override
public String getKeyPropertyListString(String metricName) {
final StringBuilder sb = new StringBuilder();
appendKeyPropertyListString(sb, metricName);
return sb.toString();
}
@Override
public ObjectName createMetricName(String metricName) throws MalformedObjectNameException {
return new TableMetricObjectName(this, metricName);
}
}
public enum Sampler {
READS, WRITES
}
}
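
To see what the two string-name factories emit, a hedged worked example (keyspace and table names invented):

// Illustrative sketch only; not part of the diff.
TableMetricNameFactory f = new TableMetricNameFactory("ks1", "users", false, "Table");
f.createMetricName("ReadLatency").getCanonicalName();
// -> "org.apache.cassandra.metrics:keyspace=ks1,name=ReadLatency,scope=users,type=Table"
new AllTableMetricNameFactory("Table").createMetricName("ReadLatency").getCanonicalName();
// -> "org.apache.cassandra.metrics:name=ReadLatency,type=Table"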


@@ -22,140 +22,255 @@
  */
 package org.apache.cassandra.net;
 
-import java.lang.management.ManagementFactory;
-import java.net.*;
-import java.util.*;
-
-import javax.management.MBeanServer;
-import javax.management.ObjectName;
-
-import com.cloudius.urchin.api.APIClient;
+import static java.util.Collections.emptyMap;
+
+import jakarta.json.JsonArray;
+import jakarta.json.JsonObject;
+
+import java.net.UnknownHostException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.logging.Logger;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import org.apache.cassandra.metrics.DroppedMessageMetrics;
+
+import com.scylladb.jmx.api.APIClient;
+import com.scylladb.jmx.metrics.MetricsMBean;
 
-public final class MessagingService implements MessagingServiceMBean {
+public final class MessagingService extends MetricsMBean implements MessagingServiceMBean {
     public static final String MBEAN_NAME = "org.apache.cassandra.net:type=MessagingService";
-    private static final java.util.logging.Logger logger = java.util.logging.Logger
-            .getLogger(MessagingService.class.getName());
-
-    private APIClient c = new APIClient();
-    private final ObjectName jmxObjectName;
+    private static final Logger logger = Logger.getLogger(MessagingService.class.getName());
+
+    private Map<String, Long> resentTimeouts = new HashMap<String, Long>();
+    private long recentTimeoutCount;
+
+    /* All verb handler identifiers */
+    public enum Verb {
+        MUTATION, @Deprecated BINARY, READ_REPAIR, READ, REQUEST_RESPONSE, // client-initiated reads and writes
+        @Deprecated STREAM_INITIATE, @Deprecated STREAM_INITIATE_DONE, @Deprecated STREAM_REPLY, @Deprecated STREAM_REQUEST,
+        RANGE_SLICE, @Deprecated BOOTSTRAP_TOKEN, @Deprecated TREE_REQUEST, @Deprecated TREE_RESPONSE, @Deprecated JOIN,
+        GOSSIP_DIGEST_SYN, GOSSIP_DIGEST_ACK, GOSSIP_DIGEST_ACK2, @Deprecated DEFINITIONS_ANNOUNCE, DEFINITIONS_UPDATE,
+        TRUNCATE, SCHEMA_CHECK, @Deprecated INDEX_SCAN, REPLICATION_FINISHED, INTERNAL_RESPONSE, // responses to internal calls
+        COUNTER_MUTATION, @Deprecated STREAMING_REPAIR_REQUEST, @Deprecated STREAMING_REPAIR_RESPONSE, SNAPSHOT, // similar to nt snapshot
+        MIGRATION_REQUEST, GOSSIP_SHUTDOWN, _TRACE, // dummy verb so we can use MS.droppedMessages
+        ECHO, REPAIR_MESSAGE,
+        // use as padding for backwards compatibility where a previous version
+        // needs to validate a verb from the future.
+        PAXOS_PREPARE, PAXOS_PROPOSE, PAXOS_COMMIT, PAGED_RANGE,
+        // remember to add new verbs at the end, since we serialize by ordinal
+        UNUSED_1, UNUSED_2, UNUSED_3;
+    }
 
     public void log(String str) {
-        System.out.println(str);
-        logger.info(str);
+        logger.finest(str);
     }
 
-    public MessagingService() {
-        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
-        try {
-            jmxObjectName = new ObjectName(MBEAN_NAME);
-            mbs.registerMBean(this, jmxObjectName);
-            // mbs.registerMBean(StreamManager.instance, new ObjectName(
-            //         StreamManager.OBJECT_NAME));
-        } catch (Exception e) {
-            throw new RuntimeException(e);
-        }
-    }
-
-    static MessagingService instance = new MessagingService();
-
-    public static MessagingService getInstance() {
-        return instance;
+    public MessagingService(APIClient client) {
+        super(MBEAN_NAME, client,
+                Stream.of(Verb.values()).map(v -> new DroppedMessageMetrics(v)).collect(Collectors.toList()));
     }
 
     /**
      * Pending tasks for Command(Mutations, Read etc) TCP Connections
      */
+    @Override
     public Map<String, Integer> getCommandPendingTasks() {
         log(" getCommandPendingTasks()");
-        return c.getMapStringIntegerValue("/messaging_service/messages/pending");
+        return client.getMapStringIntegerValue("/messaging_service/messages/pending");
     }
 
     /**
      * Completed tasks for Command(Mutations, Read etc) TCP Connections
      */
+    @Override
     public Map<String, Long> getCommandCompletedTasks() {
-        System.out.println("getCommandCompletedTasks!");
-        Map<String, Long> res = c
-                .getListMapStringLongValue("/messaging_service/messages/sent");
+        log("getCommandCompletedTasks()");
+        Map<String, Long> res = client.getListMapStringLongValue("/messaging_service/messages/sent");
         return res;
     }
 
     /**
      * Dropped tasks for Command(Mutations, Read etc) TCP Connections
      */
+    @Override
     public Map<String, Long> getCommandDroppedTasks() {
         log(" getCommandDroppedTasks()");
-        return c.getMapStringLongValue("");
+        return client.getMapStringLongValue("/messaging_service/messages/dropped");
     }
 
     /**
      * Pending tasks for Response(GOSSIP & RESPONSE) TCP Connections
      */
+    @Override
     public Map<String, Integer> getResponsePendingTasks() {
         log(" getResponsePendingTasks()");
-        return c.getMapStringIntegerValue("");
+        return client.getMapStringIntegerValue("/messaging_service/messages/respond_pending");
     }
 
     /**
      * Completed tasks for Response(GOSSIP & RESPONSE) TCP Connections
      */
+    @Override
     public Map<String, Long> getResponseCompletedTasks() {
         log(" getResponseCompletedTasks()");
-        return c.getMapStringLongValue("");
+        return client.getMapStringLongValue("/messaging_service/messages/respond_completed");
     }
 
     /**
      * dropped message counts for server lifetime
      */
+    @Override
     public Map<String, Integer> getDroppedMessages() {
         log(" getDroppedMessages()");
-        return c.getMapStringIntegerValue("");
+        Map<String, Integer> res = new HashMap<String, Integer>();
+        JsonArray arr = client.getJsonArray("/messaging_service/messages/dropped_by_ver");
+        for (int i = 0; i < arr.size(); i++) {
+            JsonObject obj = arr.getJsonObject(i);
+            res.put(obj.getString("verb"), obj.getInt("count"));
+        }
+        return res;
     }
 
+    private Map<String, Integer> recent;
+
     /**
      * dropped message counts since last called
      */
+    @Override
     public Map<String, Integer> getRecentlyDroppedMessages() {
         log(" getRecentlyDroppedMessages()");
-        return c.getMapStringIntegerValue("");
+        Map<String, Integer> dropped = getDroppedMessages(), result = new HashMap<>(dropped), old = recent;
+        recent = dropped;
+        if (old != null) {
+            for (Map.Entry<String, Integer> e : old.entrySet()) {
+                result.put(e.getKey(), result.get(e.getKey()) - e.getValue());
+            }
+        }
+        return result;
     }
 
     /**
      * Total number of timeouts happened on this node
      */
+    @Override
     public long getTotalTimeouts() {
         log(" getTotalTimeouts()");
-        return c.getLongValue("");
+        Map<String, Long> timeouts = getTimeoutsPerHost();
+        long res = 0;
+        for (Entry<String, Long> t : timeouts.entrySet()) {
+            res += t.getValue();
+        }
+        return res;
     }
 
     /**
      * Number of timeouts per host
      */
+    @Override
     public Map<String, Long> getTimeoutsPerHost() {
         log(" getTimeoutsPerHost()");
-        return c.getMapStringLongValue("");
+        return client.getMapStringLongValue("/messaging_service/messages/timeout");
     }
 
     /**
      * Number of timeouts since last check.
      */
+    @Override
     public long getRecentTotalTimouts() {
         log(" getRecentTotalTimouts()");
-        return c.getLongValue("");
+        long timeoutCount = getTotalTimeouts();
+        long recent = timeoutCount - recentTimeoutCount;
+        recentTimeoutCount = timeoutCount;
+        return recent;
     }
 
     /**
      * Number of timeouts since last check per host.
      */
+    @Override
     public Map<String, Long> getRecentTimeoutsPerHost() {
         log(" getRecentTimeoutsPerHost()");
-        return c.getMapStringLongValue("");
+        Map<String, Long> timeouts = getTimeoutsPerHost();
+        Map<String, Long> result = new HashMap<String, Long>();
+        for (Entry<String, Long> e : timeouts.entrySet()) {
+            long res = e.getValue().longValue()
+                    - ((resentTimeouts.containsKey(e.getKey())) ? (resentTimeouts.get(e.getKey())).longValue() : 0);
+            resentTimeouts.put(e.getKey(), e.getValue());
+            result.put(e.getKey(), res);
+        }
+        return result;
     }
 
+    @Override
     public int getVersion(String address) throws UnknownHostException {
         log(" getVersion(String address) throws UnknownHostException");
-        return c.getIntValue("");
+        return client.getIntValue("");
     }
+
+    @Override
+    public Map<String, Integer> getLargeMessagePendingTasks() {
+        // TODO: implement for realsies
+        return getCommandPendingTasks();
+    }
+
+    @Override
+    public Map<String, Long> getLargeMessageCompletedTasks() {
+        // TODO: implement for realsies
+        return getCommandCompletedTasks();
+    }
+
+    @Override
+    public Map<String, Long> getLargeMessageDroppedTasks() {
+        // TODO: implement for realsies
+        return getCommandDroppedTasks();
+    }
+
+    @Override
+    public Map<String, Integer> getSmallMessagePendingTasks() {
+        // TODO: implement for realsies
+        return getResponsePendingTasks();
+    }
+
+    @Override
+    public Map<String, Long> getSmallMessageCompletedTasks() {
+        // TODO: implement for realsies
+        return getResponseCompletedTasks();
+    }
+
+    @Override
+    public Map<String, Long> getSmallMessageDroppedTasks() {
+        // TODO: implement for realsies
+        return emptyMap();
+    }
+
+    @Override
+    public Map<String, Integer> getGossipMessagePendingTasks() {
+        // TODO: implement for realsies
+        return emptyMap();
+    }
+
+    @Override
+    public Map<String, Long> getGossipMessageCompletedTasks() {
+        // TODO: implement for realsies
+        return emptyMap();
+    }
+
+    @Override
+    public Map<String, Long> getGossipMessageDroppedTasks() {
+        // TODO: implement for realsies
+        return emptyMap();
+    }
 }
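
All of the "recent" accessors above share one stateful delta pattern, distilled here as a hedged sketch:

// Illustrative sketch only; not part of the diff.
long last; // retained between JMX reads
long recentDelta(long total) { // total must be monotonically increasing
    long delta = total - last;
    last = total;
    return delta;
}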
