Merged
69 commits
4c73878
bigquery: update dependencies (#3017)
pongad Mar 9, 2018
8bfd62c
bigquery: declare GA (again) (#3014)
pongad Mar 9, 2018
a21cc36
translate: fix log messages (#3011)
pgbhagat Mar 11, 2018
848245e
Added a snippet to show how to read a newline-delimited-json file and…
happyhuman Mar 12, 2018
bc07a35
pubsub: make Publisher/Subscriber accept plain strings (#3018)
pongad Mar 12, 2018
c6f38e3
storage: speed up IT
pongad Mar 12, 2018
9958b84
firestore: removed extra 'a' in ArraySortedMap findkey documentation …
chemelnucfin Mar 13, 2018
3d81a15
Fixing order test to verify that we order by components (#3033)
schmidt-sebastian Mar 13, 2018
5d86ace
Mark entity bindings as unsupported operation and deprecated (#3024)
yihanzhen Mar 13, 2018
ac6a398
pr comment
pongad Mar 14, 2018
60638e2
break, not return
pongad Mar 14, 2018
12f78da
add vision_v1p2beta1 (#3034)
neozwu Mar 14, 2018
ab231c9
release 0.39.0 (#3042)
neozwu Mar 14, 2018
d70feec
pubsub: remove polling implementation (#3040)
pongad Mar 14, 2018
7f88505
pubsub: make Publisher use GAPIC client (#3008)
pongad Mar 14, 2018
b0918bc
storage: speed up IT (#3023)
pongad Mar 14, 2018
606bbc2
bump version for development (#3045)
neozwu Mar 14, 2018
4fbe331
dlp: add a utility conversion function (#3028)
jeanbza Mar 14, 2018
63e028b
storage: change batch endpoint from /batch to /batch/storage/v1 (#3046)
janlugt Mar 15, 2018
7bd2901
logging: fix page size not propagating (#3047)
pongad Mar 15, 2018
1847a85
Upgrade to gax 1.20.0 (and fix bigtable to use the updated apis) (#3037)
igorbernstein2 Mar 16, 2018
b0cf103
add dlp-v2 and video-intelligence v1p1beta1 (#3053)
neozwu Mar 16, 2018
a1d5b05
Release 0.40.0
neozwu Mar 16, 2018
4170d85
bump to snapshot version (#3055)
neozwu Mar 17, 2018
ccb39e8
Add interface for enhancing Logback events (#2734)
skjolber Mar 17, 2018
362ac3c
Bigtable: 08. Resource Prefixes (#3050)
igorbernstein2 Mar 17, 2018
f8c6613
fix video directory (#3061)
neozwu Mar 19, 2018
bfa6f07
pubsub: delete getSubscriptionName (#3060)
pongad Mar 20, 2018
7499e96
Bigtable: copy InstanceName to data.models. (#3063)
igorbernstein2 Mar 20, 2018
b7102bb
Bigtable: 19 - Implement integration tests (#2997)
igorbernstein2 Mar 20, 2018
d4a179e
Update auth snippets.
jmdobry Mar 20, 2018
4c7cb71
Merge pull request #3066 from GoogleCloudPlatform/auth-cleanup
lesv Mar 20, 2018
54670a8
Bigtable: Add tests for data resource headers (#3062)
igorbernstein2 Mar 20, 2018
bdcdf59
pubsub: update code examples (#3059)
Fokko Mar 21, 2018
3552df4
Refresh Google Cloud Java with latest toolkit (#3069)
michaelbausor Mar 21, 2018
3ec1c79
Add texttospeech module (#3075)
vam-google Mar 22, 2018
986e6c6
Release 0.41.0 (#3076)
vam-google Mar 22, 2018
03e216b
bigquery: document bigquery stable (#3077)
pongad Mar 23, 2018
681e5bc
Bigtable: surface - add syntactic sugar for fetching single rows (#3074)
igorbernstein2 Mar 23, 2018
4c7cdb4
Bump version to 0.41.1-SNAPSHOT for development (#3079)
vam-google Mar 23, 2018
058ee62
Adds support for commit timestamp columns in spanner (#3080)
vkedia Mar 23, 2018
a004f3a
Update README.md (#3089)
parkjam4 Mar 27, 2018
b961fb7
Bigtable: add README (#3090)
igorbernstein2 Mar 27, 2018
4cd36a0
Bump proto/grpc package versions to 0.7.0/1.6.0 (#3092)
andreamlin Mar 27, 2018
106cffd
fix javadoc link (#3094)
andreamlin Mar 27, 2018
267941f
Release 0.42.0 (#3095)
andreamlin Mar 27, 2018
4e84883
Bump to snapshot version for development (#3096)
andreamlin Mar 27, 2018
36e2c6b
Adding Query Tests for NaN and Null (#3091)
schmidt-sebastian Mar 28, 2018
89082b3
add a smoke-test mvn profile (#3085)
neozwu Mar 28, 2018
0283eb2
Update gRPC version to 1.10.1, same as used in gax-java (#3101)
andreamlin Mar 28, 2018
dd1c509
Update gax versions to 1.22.0/0.39.0 (#3102)
andreamlin Mar 29, 2018
e8ff586
Update gax dependencies (#3107)
andreamlin Mar 30, 2018
73f6c9a
Let surefire not run smoke tests (#3113)
neozwu Apr 2, 2018
0125a13
Bigtable: 21. Refactor Batching - Move retries behind batching (#3026)
igorbernstein2 Apr 2, 2018
f548a00
Bumping proto/grpc deps and regenerating clients (#3111)
garrettjonesgoogle Apr 2, 2018
3d9c15d
Release 0.42.1
garrettjonesgoogle Apr 2, 2018
8acb1b3
Javadoc fix
garrettjonesgoogle Apr 2, 2018
1c0725d
Bumping to next snapshot versions (#3117)
garrettjonesgoogle Apr 2, 2018
d0a2fc2
Bigtable: Minor javadoc fixes (#3104)
igorbernstein2 Apr 2, 2018
1e79087
Expose the publishAllOutstanding() method as public (#3093)
j256 Apr 3, 2018
c17e1a6
firestore: updating test for set({}, mergeAll:true) (#3115)
schmidt-sebastian Apr 3, 2018
6bccbce
Prefetcher, to read fast. (#3054)
jean-philippe-martin Apr 3, 2018
d458207
pubsub: use unary calls to send acks/modacks (#3123)
pongad Apr 4, 2018
c56520e
Tweaking release instructions to represent reality (#3118)
garrettjonesgoogle Apr 4, 2018
58d2851
bigquery: add location property (#3121)
pongad Apr 4, 2018
af4bfcb
Release 0.43.0/1.25.0 (#3127)
andreamlin Apr 5, 2018
f81d383
Bump to snapshot version (#3128)
andreamlin Apr 5, 2018
c8f3624
internal: update maven enforcer (#3120)
jeanbza Apr 5, 2018
50c6b8b
Merge branch 'master' into spanner-gapic-migration
yihanzhen Apr 5, 2018
17 changes: 17 additions & 0 deletions .circleci/config.yml
@@ -79,6 +79,19 @@ jobs:
- run:
name: Run integration tests for google-cloud-bigquery
command: ./utilities/verify_single_it.sh google-cloud-bigquery

bigtable_it:
working_directory: ~/googleapis
<<: *anchor_docker
<<: *anchor_auth_vars
steps:
- checkout
- run:
<<: *anchor_run_decrypt
- run:
name: Run integration tests for google-cloud-bigtable
command: ./utilities/verify_single_it.sh google-cloud-bigtable -Dbigtable.env=prod -Dbigtable.table=projects/gcloud-devel/instances/google-cloud-bigtable/tables/integration-tests

compute_it:
working_directory: ~/googleapis
<<: *anchor_docker
@@ -220,6 +233,10 @@ workflows:
filters:
branches:
only: master
- bigtable_it:
filters:
branches:
only: master
- compute_it:
filters:
branches:
17 changes: 12 additions & 5 deletions README.md
@@ -13,6 +13,7 @@ Java idiomatic client for [Google Cloud Platform][cloud-platform] services.
- [Client Library Documentation][client-lib-docs]

This library supports the following Google Cloud Platform services with clients at a [GA](#versioning) quality level:
- [BigQuery](google-cloud-bigquery) (GA)
- [Stackdriver Logging](google-cloud-logging) (GA)
- [Cloud Datastore](google-cloud-datastore) (GA)
- [Cloud Natural Language](google-cloud-language) (GA)
@@ -22,7 +23,6 @@ This library supports the following Google Cloud Platform services with clients

This library supports the following Google Cloud Platform services with clients at a [Beta](#versioning) quality level:

- [BigQuery](google-cloud-bigquery) (Beta)
- [Cloud Data Loss Prevention](google-cloud-dlp) (Beta)
- [Stackdriver Error Reporting](google-cloud-errorreporting) (Beta)
- [Cloud Firestore](google-cloud-firestore) (Beta)
@@ -31,6 +31,7 @@ This library supports the following Google Cloud Platform services with clients
- [Cloud Spanner](google-cloud-spanner) (Beta)
- [Cloud Video Intelligence](google-cloud-video-intelligence) (Beta)
- [Stackdriver Trace](google-cloud-trace) (Beta)
- [Text-to-Speech](google-cloud-texttospeech) (Beta)

This library supports the following Google Cloud Platform services with clients at an [Alpha](#versioning) quality level:

@@ -58,22 +59,28 @@ If you are using Maven, add this to your pom.xml file
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud</artifactId>
<version>0.38.0-alpha</version>
<version>0.43.0-alpha</version>
</dependency>
```
If you are using Gradle, add this to your dependencies
```Groovy
compile 'com.google.cloud:google-cloud:0.38.0-alpha'
compile 'com.google.cloud:google-cloud:0.43.0-alpha'
```
If you are using SBT, add this to your dependencies
```Scala
libraryDependencies += "com.google.cloud" % "google-cloud" % "0.38.0-alpha"
libraryDependencies += "com.google.cloud" % "google-cloud" % "0.43.0-alpha"
```
[//]: # ({x-version-update-end})

It also works just as well to declare a dependency only on the specific clients that you need. See the README of
each client for instructions.

If you're using IntelliJ or Eclipse, you can add client libraries to your project using these IDE plugins:
* [Cloud Tools for IntelliJ](https://cloud.google.com/tools/intellij/docs/client-libraries)
* [Cloud Tools for Eclipse](https://cloud.google.com/eclipse/docs/libraries)

Besides adding client libraries, the plugins provide additional functionality, such as service account key management. Refer to the documentation for each plugin for more details.

These client libraries can be used on the App Engine standard environment for Java 8 and on App Engine flexible (including the Compat runtime). Most of the libraries do not work on the App Engine standard environment for Java 7; however, Datastore, Storage, and BigQuery should work.

If you are running into problems with version conflicts, see [Version Management](#version-management).
@@ -285,7 +292,7 @@ The easiest way to solve version conflicts is to use google-cloud's BOM. In Maven
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-bom</artifactId>
<version>0.38.0-alpha</version>
<version>0.43.0-alpha</version>
<type>pom</type>
<scope>import</scope>
</dependency>
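The BOM import above is Maven-specific. For Gradle users, an equivalent import might look like the sketch below. This is an assumption on my part, not something documented in this repository: it relies on Gradle's `platform()` dependency support (Gradle 5+), and on the `google-cloud-bom` artifact published alongside the release shown in this diff.

```Groovy
dependencies {
    // Import the google-cloud BOM so individual client artifact versions stay aligned.
    implementation platform('com.google.cloud:google-cloud-bom:0.43.0-alpha')
    // Clients managed by the BOM can then be declared without explicit versions:
    implementation 'com.google.cloud:google-cloud-bigquery'
}
```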
6 changes: 2 additions & 4 deletions RELEASING.md
@@ -109,11 +109,9 @@ Go to the [releases page](https://github.com/GoogleCloudPlatform/google-cloud-ja

Ensure that the format is consistent with previous releases (for an example, see the [0.1.0 release](https://github.com/GoogleCloudPlatform/google-cloud-java/releases/tag/v0.1.0)). After adding any missing updates and reformatting as necessary, publish the draft.

11. Create a new draft for the next release. Note any commits not included in the release that have been submitted before the release commit, to ensure they are documented in the next release.
11. Run `python utilities/bump_versions.py next_snapshot patch` to include "-SNAPSHOT" in the current project version (Alternatively, update the versions in `versions.txt` to the correct versions for the next release.). Then, run `python utilities/replace_versions.py` to update the `pom.xml` files. (If you see updates in `README.md` files at this step, you probably did something wrong.)

12. Run `python utilities/bump_versions next_snapshot patch` to include "-SNAPSHOT" in the current project version (Alternatively, update the versions in `versions.txt` to the correct versions for the next release.). Then, run `python utilities/replace_versions.py` to update the `pom.xml` files. (If you see updates in `README.md` files at this step, you probably did something wrong.)

13. Create and merge in another PR to reflect the updated project version. For an example of what this PR should look like, see [#227](https://github.com/GoogleCloudPlatform/google-cloud-java/pull/227).
13. Create and merge in another PR to reflect the updated project version.

Improvements
============
24 changes: 24 additions & 0 deletions TESTING.md
@@ -3,6 +3,7 @@
This library provides tools to help write tests for code that uses the following google-cloud services:

- [BigQuery](#testing-code-that-uses-bigquery)
- [Bigtable](#testing-code-that-uses-bigtable)
- [Compute](#testing-code-that-uses-compute)
- [Datastore](#testing-code-that-uses-datastore)
- [DNS](#testing-code-that-uses-dns)
@@ -41,6 +42,29 @@ Here is an example that clears the dataset created in Step 3.
RemoteBigQueryHelper.forceDelete(bigquery, dataset);
```

### Testing code that uses Bigtable

Bigtable integration tests can either be run against an emulator or a real Bigtable table. The
target environment can be selected via the `bigtable.env` system property. By default it is set to
`emulator` and the other option is `prod`.

To use the `emulator` environment, please install the Google Cloud SDK and use it to install the
`cbtemulator` via `gcloud components install bigtable`.

To use the `prod` environment:
1. Set up the target table using `google-cloud-bigtable/scripts/setup-test-table.sh`
2. Download the [JSON service account credentials file][create-service-account] from the Google
Developer's Console.
3. Set the environment variable `GOOGLE_APPLICATION_CREDENTIALS` to the path of the credentials file
4. Set the system property `bigtable.env=prod` and `bigtable.table` to the full table name you
created earlier. Example:
```shell
mvn verify -am -pl google-cloud-bigtable \
-Dbigtable.env=prod \
-Dbigtable.table=projects/my-project/instances/my-instance/tables/my-table
```


### Testing code that uses Compute

Currently, there isn't an emulator for Google Compute, so an alternative is to create a test
9 changes: 3 additions & 6 deletions google-cloud-bigquery/README.md
@@ -12,9 +12,6 @@ Java idiomatic client for [Google Cloud BigQuery][cloud-bigquery].
- [Product Documentation][bigquery-product-docs]
- [Client Library Documentation][bigquery-client-lib-docs]

> Note: This client is a work-in-progress, and may occasionally
> make backwards-incompatible changes.

Quickstart
----------
[//]: # ({x-version-update-start:google-cloud-bigquery:released})
@@ -23,16 +20,16 @@ If you are using Maven, add this to your pom.xml file
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-bigquery</artifactId>
<version>0.38.0-beta</version>
<version>1.25.0</version>
</dependency>
```
If you are using Gradle, add this to your dependencies
```Groovy
compile 'com.google.cloud:google-cloud-bigquery:0.38.0-beta'
compile 'com.google.cloud:google-cloud-bigquery:1.25.0'
```
If you are using SBT, add this to your dependencies
```Scala
libraryDependencies += "com.google.cloud" % "google-cloud-bigquery" % "0.38.0-beta"
libraryDependencies += "com.google.cloud" % "google-cloud-bigquery" % "1.25.0"
```
[//]: # ({x-version-update-end})

4 changes: 2 additions & 2 deletions google-cloud-bigquery/pom.xml
@@ -2,7 +2,7 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>google-cloud-bigquery</artifactId>
<version>0.38.1-beta-SNAPSHOT</version><!-- {x-version-update:google-cloud-bigquery:current} -->
<version>1.25.1-SNAPSHOT</version><!-- {x-version-update:google-cloud-bigquery:current} -->
<packaging>jar</packaging>
<name>Google Cloud BigQuery</name>
<url>https://github.com/GoogleCloudPlatform/google-cloud-java/tree/master/google-cloud-bigquery</url>
@@ -12,7 +12,7 @@
<parent>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-pom</artifactId>
<version>0.38.1-alpha-SNAPSHOT</version><!-- {x-version-update:google-cloud-pom:current} -->
<version>0.43.1-alpha-SNAPSHOT</version><!-- {x-version-update:google-cloud-pom:current} -->
</parent>
<properties>
<site.installationModule>google-cloud-bigquery</site.installationModule>
@@ -522,7 +522,7 @@ public int hashCode() {
* } catch (BigQueryException e) {
* // the dataset was not created
* }
* } </pre>
* }</pre>
*
* @throws BigQueryException upon failure
*/
@@ -538,7 +538,7 @@ public int hashCode() {
* String fieldName = "string_field";
* TableId tableId = TableId.of(datasetName, tableName);
* // Table field definition
* Field field = Field.of(fieldName, Field.Type.string());
* Field field = Field.of(fieldName, LegacySQLTypeName.STRING);
* // Table schema definition
* Schema schema = Schema.of(field);
* TableDefinition tableDefinition = StandardTableDefinition.of(schema);
@@ -553,6 +553,32 @@ public int hashCode() {
/**
* Creates a new job.
*
* <p>Example of loading a newline-delimited-json file with textual fields from GCS to a table.
* <pre> {@code
* String datasetName = "my_dataset_name";
* String tableName = "my_table_name";
* String sourceUri = "gs://cloud-samples-data/bigquery/us-states/us-states.json";
* TableId tableId = TableId.of(datasetName, tableName);
* // Table field definition
* Field[] fields = new Field[] {
* Field.of("name", LegacySQLTypeName.STRING),
* Field.of("post_abbr", LegacySQLTypeName.STRING)
* };
* // Table schema definition
* Schema schema = Schema.of(fields);
* LoadJobConfiguration configuration = LoadJobConfiguration.builder(tableId, sourceUri)
* .setFormatOptions(FormatOptions.json())
* .setCreateDisposition(CreateDisposition.CREATE_IF_NEEDED)
* .setSchema(schema)
* .build();
* // Load the table
* Job remoteLoadJob = bigquery.create(JobInfo.of(configuration));
* remoteLoadJob = remoteLoadJob.waitFor();
* // Check the table
* System.out.println("State: " + remoteLoadJob.getStatus().getState());
* return ((StandardTableDefinition) bigquery.getTable(tableId).getDefinition()).getNumRows();
* }</pre>
*
* <p>Example of creating a query job.
* <pre> {@code
* String query = "SELECT field FROM my_dataset_name.my_table_name";
@@ -861,8 +887,7 @@ public int hashCode() {
* Lists the table's rows.
*
* <p>Example of listing table rows, specifying the page size.
*
* <pre>{@code
* <pre> {@code
* String datasetName = "my_dataset_name";
* String tableName = "my_table_name";
* // This example reads the result 100 rows per RPC call. If there's no need to limit the number,
@@ -882,16 +907,15 @@ public int hashCode() {
* Lists the table's rows.
*
* <p>Example of listing table rows, specifying the page size.
*
* <pre>{@code
* <pre> {@code
* String datasetName = "my_dataset_name";
* String tableName = "my_table_name";
* TableId tableIdObject = TableId.of(datasetName, tableName);
* // This example reads the result 100 rows per RPC call. If there's no need to limit the number,
* // simply omit the option.
* TableResult tableData =
* bigquery.listTableData(tableIdObject, TableDataListOption.pageSize(100));
* for (FieldValueList row : rowIterator.hasNext()) {
* for (FieldValueList row : tableData.iterateAll()) {
* // do something with the row
* }
* }</pre>
@@ -904,17 +928,16 @@ public int hashCode() {
* Lists the table's rows. If the {@code schema} is not {@code null}, it is available to the
* {@link FieldValueList} iterated over.
*
* <p>Example of listing table rows.
*
* <pre>{@code
* <p>Example of listing table rows with schema.
* <pre> {@code
* String datasetName = "my_dataset_name";
* String tableName = "my_table_name";
* Schema schema = ...;
* String field = "my_field";
* String field = "field";
* TableResult tableData =
* bigquery.listTableData(datasetName, tableName, schema);
* for (FieldValueList row : tableData.iterateAll()) {
* row.get(field)
* row.get(field);
* }
* }</pre>
*
@@ -927,9 +950,8 @@ TableResult listTableData(
* Lists the table's rows. If the {@code schema} is not {@code null}, it is available to the
* {@link FieldValueList} iterated over.
*
* <p>Example of listing table rows.
*
* <pre>{@code
* <p>Example of listing table rows with schema.
* <pre> {@code
* Schema schema =
* Schema.of(
* Field.of("word", LegacySQLTypeName.STRING),
@@ -1047,28 +1069,21 @@ TableResult listTableData(
* queries. Since dry-run queries are not actually executed, there's no way to retrieve results.
*
* <p>Example of running a query.
*
* <pre>{@code
* String query = "SELECT distinct(corpus) FROM `bigquery-public-data.samples.shakespeare`";
* QueryJobConfiguration queryConfig = QueryJobConfiguration.of(query);
*
* // To run the legacy syntax queries use the following code instead:
* // String query = "SELECT unique(corpus) FROM [bigquery-public-data:samples.shakespeare]"
* // QueryJobConfiguration queryConfig =
* // QueryJobConfiguration.newBuilder(query).setUseLegacySql(true).build();
*
* <pre> {@code
* String query = "SELECT unique(corpus) FROM [bigquery-public-data:samples.shakespeare]";
* QueryJobConfiguration queryConfig =
* QueryJobConfiguration.newBuilder(query).setUseLegacySql(true).build();
* for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
* // do something with the data
* }
* }</pre>
*
* <p>Example of running a query with query parameters.
*
* <pre>{@code
* String query =
* "SELECT distinct(corpus) FROM `bigquery-public-data.samples.shakespeare` where word_count > ?";
* <pre> {@code
* String query = "SELECT distinct(corpus) FROM `bigquery-public-data.samples.shakespeare` where word_count > @wordCount";
* // Note, standard SQL is required to use query parameters. Legacy SQL will not work.
* QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(query)
* .addPositionalParameter(QueryParameterValue.int64(5))
* .addNamedParameter("wordCount", QueryParameterValue.int64(5))
* .build();
* for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
* // do something with the data
Expand All @@ -1092,18 +1107,6 @@ TableResult query(QueryJobConfiguration configuration, JobOption... options)
* <p>See {@link #query(QueryJobConfiguration, JobOption...)} for examples on populating a {@link
* QueryJobConfiguration}.
*
* <p>The recommended way to create a randomly generated JobId is the following:
*
* <pre>{@code
* JobId jobId = JobId.of();
* }</pre>
*
* For a user specified job id with an optional prefix use the following:
*
* <pre>{@code
* JobId jobId = JobId.of("my_prefix-my_unique_job_id");
* }</pre>
*
* @throws BigQueryException upon failure
* @throws InterruptedException if the current thread gets interrupted while waiting for the query
* to complete