Is it possible to replace the Google Cloud Storage bucket with an alternative, locally hosted solution, so that for example Kubeflow Pipelines can run completely independently of Google Cloud Platform?

Best Answer

Yes, it is possible. You can use MinIO, which behaves just like S3/GS but runs on a persistent volume backed by local storage.
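If MinIO is not already bundled with your Kubeflow deployment, a minimal sketch of running it on a locally backed PersistentVolumeClaim could look like the following (the resource names, namespace, storage size and credentials here are illustrative assumptions, not part of the original answer):

# Sketch: single-node MinIO backed by a PVC (illustrative names and values)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc
  namespace: kubeflow
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
  namespace: kubeflow
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: minio/minio
        args: ["server", "/data"]     # serve objects from the mounted volume
        env:
        - name: MINIO_ACCESS_KEY      # API access key (user)
          value: minio
        - name: MINIO_SECRET_KEY      # API secret key (password)
          value: minio123
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: storage
          mountPath: /data
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: minio-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
  namespace: kubeflow
spec:
  selector:
    app: minio
  ports:
  - port: 9000
    targetPort: 9000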

Below are instructions on how to use it as storage for KFServing inference:

Verify that MinIO is running in your Kubeflow installation:

$ kubectl get svc -n kubeflow |grep minio
minio-service                                  ClusterIP   10.101.143.255   <none>        9000/TCP            81d

Enable port-forwarding for the minio service:
$ kubectl port-forward svc/minio-service -n kubeflow 9000:9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000

Browse to http://localhost:9000 to open the MinIO UI and create a bucket / upload your model. The credentials are minio/minio123. Alternatively, you can do this from the terminal with the mc client (a sketch of configuring mc and uploading follows the listing below):
$ mc ls minio/models/flowers/0001/
[2020-03-26 13:16:57 CET]  1.7MiB saved_model.pb
[2020-04-25 13:37:09 CEST]      0B variables/
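If you go the terminal route, a short sketch of pointing mc at the port-forwarded service and uploading a SavedModel (the alias name "minio" and the local ./flowers directory are illustrative; older mc releases use mc config host add instead of mc alias set):

$ mc alias set minio http://localhost:9000 minio minio123   # register endpoint + credentials
$ mc mb minio/models                                         # create the bucket
$ mc cp --recursive ./flowers minio/models/                  # upload ./flowers/0001/saved_model.pb etc.
$ mc ls minio/models/flowers/0001/                           # verify, should match the listing above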

Create a secret and a service account for MinIO access. Note that the serving.kubeflow.org/s3-endpoint annotation points at the MinIO service inside the cluster, and awsAccessKeyID & awsSecretAccessKey are the credentials encoded in base64 (an example of the encoding follows the manifest):
$ kubectl get secret mysecret -n homelab -o yaml
apiVersion: v1
data:
  awsAccessKeyID: bWluaW8=
  awsSecretAccessKey: bWluaW8xMjM=
kind: Secret
metadata:
  annotations:
    serving.kubeflow.org/s3-endpoint: minio-service.kubeflow:9000
    serving.kubeflow.org/s3-usehttps: "0"
  name: mysecret
  namespace: homelab
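The data values are simply the plain-text credentials base64-encoded, e.g.:

$ echo -n minio | base64
bWluaW8=
$ echo -n minio123 | base64
bWluaW8xMjM=

Alternatively (a sketch, assuming the same names as above), you can let kubectl do the encoding and add the annotations afterwards:

$ kubectl create secret generic mysecret -n homelab \
    --from-literal=awsAccessKeyID=minio \
    --from-literal=awsSecretAccessKey=minio123
$ kubectl annotate secret mysecret -n homelab \
    serving.kubeflow.org/s3-endpoint=minio-service.kubeflow:9000 \
    serving.kubeflow.org/s3-usehttps=0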

$ kubectl get serviceAccount -n homelab sa -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa
  namespace: homelab
secrets:
- name: mysecret
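If you prefer imperative commands, a sketch of creating the same ServiceAccount and attaching the secret to it:

$ kubectl create serviceaccount sa -n homelab
$ kubectl patch serviceaccount sa -n homelab \
    -p '{"secrets": [{"name": "mysecret"}]}'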

Finally, create your InferenceService as follows:
$ kubectl get inferenceservice tensorflow-flowers -n homelab -o yaml
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: tensorflow-flowers
  namespace: homelab
spec:
  default:
    predictor:
      serviceAccountName: sa
      tensorflow:
        storageUri: s3://models/flowers
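Once the InferenceService reports READY, it can be queried through the cluster's ingress gateway using TensorFlow Serving's v1 REST protocol. A sketch, assuming an Istio ingress gateway with a LoadBalancer IP, the status.url field set on the InferenceService, and an illustrative input file ./flowers-input.json:

$ MODEL=tensorflow-flowers
$ INGRESS_IP=$(kubectl -n istio-system get svc istio-ingressgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ SERVICE_HOST=$(kubectl -n homelab get inferenceservice $MODEL \
    -o jsonpath='{.status.url}' | sed 's|.*://||')     # strip the scheme
$ curl -H "Host: $SERVICE_HOST" \
    -d @./flowers-input.json \
    http://$INGRESS_IP/v1/models/$MODEL:predict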

Related question on Stack Overflow: kubernetes - Kubeflow without Google Cloud Storage: https://stackoverflow.com/questions/61066728/
