API Docs improvements (#46)
Parent: 3183d10dbe
Commit: 607dbd6ae8

100 changed files with 1306 additions and 9201 deletions
@@ -85,12 +85,18 @@ Sorry for some of them being in German, I'll translate them at some point.

* [ ] Build instructions in the readme/docs
* [x] Also add a "link" to the feature creep
+* [x] Redocs
+* [x] Improve the swagger docs
+* [x] Descriptions in structs
+* [x] Specify maxlength etc. (see the swaggo docs)
+* [x] Rights
+* [x] API
* [ ] A guide to the Makefile
* [ ] Explain the structure
-* [ ] Backups
* [ ] Deployment instructions in the docs
* [ ] Docker
* [ ] Native (systemd + nginx/apache)
+* [ ] Backups
* [ ] Set up the docs

### Tasks
@@ -128,6 +134,10 @@ Sorry for some of them being in German, I'll translate them at some point.

* [ ] Add an option to mask the email address (in the user settings, for when you don't want to broadcast your email to the whole world)
* [ ] Add an option to delete the account (done properly, with email verification; all private lists get deleted, and all shared lists either have to be transferred to someone else or set to private)
* [ ] /info endpoint which exposes e.g. the limits, the version etc.
+* [ ] Deprecate /namespaces/{id}/lists in favour of namespace.ReadOne() <-- should also return the lists
+* [ ] Description of web.HTTPError
+* [ ] Rights methods should return errors
+* [ ] Re-check all `{List|Namespace}{User|Team}` if really all parameters need to be exposed via json or are overwritten via param anyway.

### Linters
Makefile (1 changed line)

@@ -166,6 +166,7 @@ do-the-swag:
    # Fix the generated swagger file, currently a workaround until swaggo can properly use go mod
    sed -i '/"definitions": {/a "code.vikunja.io.web.HTTPError": {"type": "object","properties": {"code": {"type": "integer"},"message": {"type": "string"}}},' docs/docs.go;
    sed -i 's/code.vikunja.io\/web.HTTPError/code.vikunja.io.web.HTTPError/g' docs/docs.go;
+    sed -i 's/` + \\"`\\" + `/` + "`" + `/g' docs/docs.go; # Replace replacements

.PHONY: misspell-check
misspell-check:
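The `code.vikunja.io.web.HTTPError` schema injected by the first sed line corresponds to an error type in the `code.vikunja.io/web` handler package. A rough sketch of what that type presumably looks like, with the field set inferred purely from the injected JSON (`code` integer, `message` string); the real declaration may differ:

```go
package web

// HTTPError is the JSON error body the API returns. The sed workaround above injects
// an equivalent "code.vikunja.io.web.HTTPError" schema into the generated docs/docs.go
// because swaggo cannot yet resolve the type across module boundaries.
type HTTPError struct {
    // An internal error code.
    Code int `json:"code"`
    // A human-readable error message.
    Message string `json:"message"`
}
```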
docs/api.md (new file, 9 lines)

@@ -0,0 +1,9 @@
+# API Documentation
+
+You can find the API docs at `http://vikunja.tld/api/v1/docs` on your instance.
+A public instance is available on [try.vikunja.io](http://try.vikunja.io/api/v1/docs).
+
+These docs are autogenerated from annotations in the code with swagger.
+
+The specification is hosted at `http://vikunja.tld/api/v1/docs.json`.
+You can use this to embed it into other OpenAPI-compatible applications if you want.
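To illustrate that last point, a minimal Go sketch that downloads the published specification from an instance and lists the documented paths. It only assumes the standard Swagger 2.0 top-level layout (`info`, `paths`) of the generated file; the instance URL is the public demo mentioned above.

```go
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

func main() {
    // The public demo instance; point this at your own host instead.
    const specURL = "http://try.vikunja.io/api/v1/docs.json"

    resp, err := http.Get(specURL)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // Decode only the parts of the Swagger 2.0 document we care about.
    var spec struct {
        Info struct {
            Title   string `json:"title"`
            Version string `json:"version"`
        } `json:"info"`
        Paths map[string]json.RawMessage `json:"paths"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&spec); err != nil {
        log.Fatal(err)
    }

    fmt.Printf("%s %s documents %d paths:\n", spec.Info.Title, spec.Info.Version, len(spec.Paths))
    for path := range spec.Paths {
        fmt.Println(" ", path)
    }
}
```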
docs/docs.go (384 changed lines; diff suppressed because it is too large)
docs/rights.md (new file, 19 lines)

@@ -0,0 +1,19 @@
+# List and namespace rights for teams and users
+
+Whenever you share a list or namespace with a user or team, you can specify a `rights` parameter.
+This parameter controls the rights the team or user is going to have (or has, if you request the current sharing status).
+
+Rights are specified using integers.
+
+The following values are possible:
+
+| Right (int) | Meaning |
+|-------------|---------|
+| 0 (Default) | Read only. Anything which is shared with this right cannot be edited. |
+| 1 | Read and write. Namespaces or lists shared with this right can be read and written to by the team or user. |
+| 2 | Admin. Can do everything the read-and-write right allows and can additionally manage sharing options. |
+
+### Team admins
+
+When adding or querying a team, every member has an additional boolean value stating whether they are an admin or not.
+A team admin can also add and remove team members and change whether a user in the team is an admin.
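As a rough illustration of the `rights` parameter, the sketch below shares a list with a user at right 1 (read and write) by sending a `ListUser`-style JSON body. The endpoint path, IDs and token are placeholders; check the generated swagger docs for the exact route.

```go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

// shareRequest mirrors the JSON fields of the ListUser model: which user to add and which right to grant.
type shareRequest struct {
    UserID int64 `json:"user_id"`
    Right  int   `json:"right"` // 0 = read only, 1 = read & write, 2 = admin
}

func main() {
    body, err := json.Marshal(shareRequest{UserID: 42, Right: 1})
    if err != nil {
        log.Fatal(err)
    }

    // Placeholder instance, list id and JWT, purely for illustration.
    req, err := http.NewRequest(http.MethodPut, "http://vikunja.tld/api/v1/lists/1/users", bytes.NewReader(body))
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Authorization", "Bearer <jwt token>")
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
}
```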
(further file diffs suppressed because they are too large)
go.mod (5 changed lines)

@@ -26,7 +26,7 @@ require (
    github.com/dgrijalva/jwt-go v3.2.0+incompatible
    github.com/fzipp/gocyclo v0.0.0-20150627053110-6acd4345c835
    github.com/garyburd/redigo v1.6.0 // indirect
-   github.com/ghodss/yaml v1.0.0 // indirect
+   github.com/ghodss/yaml v1.0.0
    github.com/go-openapi/spec v0.17.2 // indirect
    github.com/go-openapi/swag v0.17.2 // indirect
    github.com/go-redis/redis v6.14.2+incompatible

@@ -54,9 +54,6 @@ require (
    github.com/prometheus/client_golang v0.9.2
    github.com/spf13/viper v1.2.0
    github.com/stretchr/testify v1.2.2
-   github.com/swaggo/echo-swagger v0.0.0-20180315045949-97f46bb9e5a5
-   github.com/swaggo/files v0.0.0-20180215091130-49c8a91ea3fa // indirect
-   github.com/swaggo/gin-swagger v1.0.0 // indirect
    github.com/swaggo/swag v1.4.1-0.20181210033626-0e12fd5eb026
    github.com/urfave/cli v1.20.0 // indirect
    github.com/ziutek/mymysql v1.5.4 // indirect
go.sum (6 changed lines)

@@ -138,12 +138,6 @@ github.com/spf13/viper v1.2.0 h1:M4Rzxlu+RgU4pyBRKhKaVN1VeYOm8h2jgyXnAseDgCc=
github.com/spf13/viper v1.2.0/go.mod h1:P4AexN0a+C9tGAnUFNwDMYYZv3pjFuvmeiMyKRaNVlI=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
-github.com/swaggo/echo-swagger v0.0.0-20180315045949-97f46bb9e5a5 h1:yU0aDQpp0Dq4BAu8rrHnVdC6SZS0LceJVLCUCbGasbE=
-github.com/swaggo/echo-swagger v0.0.0-20180315045949-97f46bb9e5a5/go.mod h1:mGVJdredle61MBSrJEnaLjKYU0qXJ5V5aNsBgypcUCY=
-github.com/swaggo/files v0.0.0-20180215091130-49c8a91ea3fa h1:194s4modF+3X3POBfGHFCl9LHGjqzWhB/aUyfRiruZU=
-github.com/swaggo/files v0.0.0-20180215091130-49c8a91ea3fa/go.mod h1:gxQT6pBGRuIGunNf/+tSOB5OHvguWi8Tbt82WOkf35E=
-github.com/swaggo/gin-swagger v1.0.0 h1:k6Nn1jV49u+SNIWt7kejQS/iENZKZVMCNQrKOYatNF8=
-github.com/swaggo/gin-swagger v1.0.0/go.mod h1:Mt37wE46iUaTAOv+HSnHbJYssKGqbS25X19lNF4YpBo=
github.com/swaggo/swag v1.4.1-0.20181210033626-0e12fd5eb026 h1:XAOjF3QgjDUkVrPMO4rYvNptSHQgUlHwQsEdJOTxHQ8=
github.com/swaggo/swag v1.4.1-0.20181210033626-0e12fd5eb026/go.mod h1:hog2WgeMOrQ/LvQ+o1YGTeT+vWVrbi0SiIslBtxKTyM=
github.com/urfave/cli v1.20.0 h1:fDqGv3UG/4jbVl/QkFwEdddtEDjh/5Ov6X+0B/3bPaw=
@@ -24,6 +24,7 @@ import (

// BulkTask is the definition of a bulk update task
type BulkTask struct {
+    // A list of task ids to update
    IDs []int64 `json:"task_ids"`
    Tasks []*ListTask `json:"-"`
    ListTask

@@ -73,7 +74,7 @@ func (bt *BulkTask) CanUpdate(a web.Auth) bool {
// @tags task
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param task body models.BulkTask true "The task object. Looks like a normal task, the only difference is it uses an array of list_ids to update."
// @Success 200 {object} models.ListTask "The updated task object."
// @Failure 400 {object} code.vikunja.io/web.HTTPError "Invalid task object provided."
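The `JWTKeyAuth` scheme these `@Security` annotations now reference has to be declared once in the top-level swag annotations. A minimal sketch of such a declaration with swaggo, shown here only as an illustration; the title and exact placement in Vikunja's main package may differ:

```go
package main

// @title Vikunja API
// @securityDefinitions.apikey JWTKeyAuth
// @in header
// @name Authorization
func main() {
    // `swag init` (wrapped by `make do-the-swag`) parses this comment block together
    // with the per-handler annotations into docs/docs.go.
}
```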
@@ -22,15 +22,22 @@ import (

// Label represents a label
type Label struct {
+    // The unique, numeric id of this label.
    ID int64 `xorm:"int(11) autoincr not null unique pk" json:"id" param:"label"`
-    Title string `xorm:"varchar(250) not null" json:"title" valid:"runelength(3|250)"`
-    Description string `xorm:"varchar(250)" json:"description" valid:"runelength(0|250)"`
-    HexColor string `xorm:"varchar(6)" json:"hex_color" valid:"runelength(0|6)"`
+    // The title of the label. You'll see this one on tasks associated with it.
+    Title string `xorm:"varchar(250) not null" json:"title" valid:"runelength(3|250)" minLength:"3" maxLength:"250"`
+    // The label description.
+    Description string `xorm:"varchar(250)" json:"description" valid:"runelength(0|250)" maxLength:"250"`
+    // The color this label has
+    HexColor string `xorm:"varchar(6)" json:"hex_color" valid:"runelength(0|6)" maxLength:"6"`

    CreatedByID int64 `xorm:"int(11) not null" json:"-"`
+    // The user who created this label
    CreatedBy *User `xorm:"-" json:"created_by"`

+    // A unix timestamp when this label was created. You cannot change this value.
    Created int64 `xorm:"created" json:"created"`
+    // A unix timestamp when this label was last updated. You cannot change this value.
    Updated int64 `xorm:"updated" json:"updated"`

    web.CRUDable `xorm:"-" json:"-"`

@@ -44,9 +51,12 @@ func (Label) TableName() string {

// LabelTask represents a relation between a label and a task
type LabelTask struct {
+    // The unique, numeric id of this label.
    ID int64 `xorm:"int(11) autoincr not null unique pk" json:"id"`
    TaskID int64 `xorm:"int(11) INDEX not null" json:"-" param:"listtask"`
+    // The label id you want to associate with a task.
    LabelID int64 `xorm:"int(11) INDEX not null" json:"label_id" param:"label"`
+    // A unix timestamp when this task was created. You cannot change this value.
    Created int64 `xorm:"created" json:"created"`

    web.CRUDable `xorm:"-" json:"-"`
@@ -24,7 +24,7 @@ import "code.vikunja.io/web"
// @tags labels
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param label body models.Label true "The label object"
// @Success 200 {object} models.Label "The created label object."
// @Failure 400 {object} code.vikunja.io/web.HTTPError "Invalid label object provided."

@@ -49,7 +49,7 @@ func (l *Label) Create(a web.Auth) (err error) {
// @tags labels
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param id path int true "Label ID"
// @Param label body models.Label true "The label object"
// @Success 200 {object} models.Label "The created label object."

@@ -74,7 +74,7 @@ func (l *Label) Update() (err error) {
// @tags labels
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param id path int true "Label ID"
// @Success 200 {object} models.Label "The label was successfully deleted."
// @Failure 403 {object} code.vikunja.io/web.HTTPError "Not allowed to delete the label."

@@ -29,7 +29,7 @@ import (
// @Produce json
// @Param p query int false "The page number. Used for pagination. If not provided, the first page of results is returned."
// @Param s query string false "Search labels by label text."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Success 200 {array} models.Label "The labels"
// @Failure 500 {object} models.Message "Internal error"
// @Router /labels [get]
@@ -55,7 +55,7 @@ func (l *Label) ReadAll(search string, a web.Auth, page int) (ls interface{}, er
// @Accept json
// @Produce json
// @Param id path int true "Label ID"
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Success 200 {object} models.Label "The label"
// @Failure 403 {object} code.vikunja.io/web.HTTPError "The user does not have access to the label"
// @Failure 404 {object} code.vikunja.io/web.HTTPError "Label not found"

@@ -27,7 +27,7 @@ import (
// @tags labels
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param task path int true "Task ID"
// @Param label path int true "Label ID"
// @Success 200 {object} models.Label "The label was successfully removed."

@@ -46,7 +46,7 @@ func (l *LabelTask) Delete() (err error) {
// @tags labels
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param task path int true "Task ID"
// @Param label body models.Label true "The label object"
// @Success 200 {object} models.Label "The created label relation object."

@@ -79,7 +79,7 @@ func (l *LabelTask) Create(a web.Auth) (err error) {
// @Param task path int true "Task ID"
// @Param p query int false "The page number. Used for pagination. If not provided, the first page of results is returned."
// @Param s query string false "Search labels by label text."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Success 200 {array} models.Label "The labels"
// @Failure 500 {object} models.Message "Internal error"
// @Router /tasks/{task}/labels [get]
@@ -22,16 +22,23 @@ import (

// List represents a list of tasks
type List struct {
+    // The unique, numeric id of this list.
    ID int64 `xorm:"int(11) autoincr not null unique pk" json:"id" param:"list"`
-    Title string `xorm:"varchar(250)" json:"title" valid:"required,runelength(3|250)"`
-    Description string `xorm:"varchar(1000)" json:"description" valid:"runelength(0|1000)"`
+    // The title of the list. You'll see this in the namespace overview.
+    Title string `xorm:"varchar(250)" json:"title" valid:"required,runelength(3|250)" minLength:"3" maxLength:"250"`
+    // The description of the list.
+    Description string `xorm:"varchar(1000)" json:"description" valid:"runelength(0|1000)" maxLength:"1000"`
    OwnerID int64 `xorm:"int(11) INDEX" json:"-"`
    NamespaceID int64 `xorm:"int(11) INDEX" json:"-" param:"namespace"`

+    // The user who created this list.
    Owner User `xorm:"-" json:"owner" valid:"-"`
+    // An array of tasks which belong to the list.
    Tasks []*ListTask `xorm:"-" json:"tasks"`

+    // A unix timestamp when this list was created. You cannot change this value.
    Created int64 `xorm:"created" json:"created"`
+    // A unix timestamp when this list was last updated. You cannot change this value.
    Updated int64 `xorm:"updated" json:"updated"`

    web.CRUDable `xorm:"-" json:"-"`
@@ -71,7 +78,7 @@ func GetListsByNamespaceID(nID int64, doer *User) (lists []*List, err error) {
// @Produce json
// @Param p query int false "The page number. Used for pagination. If not provided, the first page of results is returned."
// @Param s query string false "Search lists by title."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Success 200 {array} models.List "The lists"
// @Failure 403 {object} code.vikunja.io/web.HTTPError "The user does not have access to the list"
// @Failure 500 {object} models.Message "Internal error"

@@ -99,7 +106,7 @@ func (l *List) ReadAll(search string, a web.Auth, page int) (interface{}, error)
// @tags list
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param id path int true "List ID"
// @Success 200 {object} models.List "The list"
// @Failure 403 {object} code.vikunja.io/web.HTTPError "The user does not have access to the list"

@@ -59,7 +59,7 @@ func CreateOrUpdateList(list *List) (err error) {
// @tags list
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param id path int true "List ID"
// @Param list body models.List true "The list with updated values you want to update."
// @Success 200 {object} models.List "The updated list."

@@ -82,7 +82,7 @@ func (l *List) Update() (err error) {
// @tags list
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param namespaceID path int true "Namespace ID"
// @Param list body models.List true "The list you want to create."
// @Success 200 {object} models.List "The created list."
@@ -26,7 +26,7 @@ import (
// @Description Deletes a list
// @tags list
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param id path int true "List ID"
// @Success 200 {object} models.Message "The list was successfully deleted."
// @Failure 400 {object} code.vikunja.io/web.HTTPError "Invalid list object provided."
@@ -32,7 +32,7 @@ const (
// @Param p query int false "The page number. Used for pagination. If not provided, the first page of results is returned."
// @Param s query string false "Search tasks by task text."
// @Param sortby path string true "The sorting parameter. Possible values to sort by are priority, prioritydesc, priorityasc, dueadate, dueadatedesc, dueadateasc."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Success 200 {array} models.List "The tasks"
// @Failure 500 {object} models.Message "Internal error"
// @Router /tasks/all/{sortby} [get]

@@ -51,7 +51,7 @@ func dummy() {
// @Param sortby path string true "The sorting parameter. Possible values to sort by are priority, prioritydesc, priorityasc, dueadate, dueadatedesc, dueadateasc."
// @Param startdate path string true "The start date parameter. Expects a unix timestamp."
// @Param enddate path string true "The end date parameter. Expects a unix timestamp."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Success 200 {array} models.List "The tasks"
// @Failure 500 {object} models.Message "Internal error"
// @Router /tasks/all/{sortby}/{startdate}/{enddate} [get]

@@ -67,7 +67,7 @@ func dummy2() {
// @Produce json
// @Param p query int false "The page number. Used for pagination. If not provided, the first page of results is returned."
// @Param s query string false "Search tasks by task text."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Success 200 {array} models.List "The tasks"
// @Failure 500 {object} models.Message "Internal error"
// @Router /tasks/all [get]
@@ -23,31 +23,48 @@ import (

// ListTask represents a task in a todolist
type ListTask struct {
+    // The unique, numeric id of this task.
    ID int64 `xorm:"int(11) autoincr not null unique pk" json:"id" param:"listtask"`
-    Text string `xorm:"varchar(250)" json:"text" valid:"runelength(3|250)"`
-    Description string `xorm:"varchar(250)" json:"description" valid:"runelength(0|250)"`
+    // The task text. This is what you'll see in the list.
+    Text string `xorm:"varchar(250)" json:"text" valid:"runelength(3|250)" minLength:"3" maxLength:"250"`
+    // The task description.
+    Description string `xorm:"varchar(250)" json:"description" valid:"runelength(0|250)" maxLength:"250"`
    Done bool `xorm:"INDEX" json:"done"`
+    // A unix timestamp when the task is due.
    DueDateUnix int64 `xorm:"int(11) INDEX" json:"dueDate"`
+    // An array of unix timestamps when the user wants to be reminded of the task.
    RemindersUnix []int64 `xorm:"JSON TEXT" json:"reminderDates"`
    CreatedByID int64 `xorm:"int(11)" json:"-"` // ID of the user who put that task on the list
+    // The list this task belongs to.
    ListID int64 `xorm:"int(11) INDEX" json:"listID" param:"list"`
+    // An amount in seconds this task repeats itself. If this is set, when marking the task as done, it will mark itself as "undone" and then increase all reminders and the due date by its amount.
    RepeatAfter int64 `xorm:"int(11) INDEX" json:"repeatAfter"`
+    // If the task is a subtask, this is the id of its parent.
    ParentTaskID int64 `xorm:"int(11) INDEX" json:"parentTaskID"`
+    // The task priority. Can be anything you want, it is possible to sort by this later.
    Priority int64 `xorm:"int(11)" json:"priority"`
+    // When this task starts.
    StartDateUnix int64 `xorm:"int(11) INDEX" json:"startDate"`
+    // When this task ends.
    EndDateUnix int64 `xorm:"int(11) INDEX" json:"endDate"`
+    // An array of users who are assigned to this task
    Assignees []*User `xorm:"-" json:"assignees"`
+    // An array of labels which are associated with this task.
    Labels []*Label `xorm:"-" json:"labels"`

    Sorting string `xorm:"-" json:"-" param:"sort"` // Parameter to sort by
    StartDateSortUnix int64 `xorm:"-" json:"-" param:"startdatefilter"`
    EndDateSortUnix int64 `xorm:"-" json:"-" param:"enddatefilter"`

+    // An array of subtasks.
    Subtasks []*ListTask `xorm:"-" json:"subtasks"`

+    // A unix timestamp when this task was created. You cannot change this value.
    Created int64 `xorm:"created" json:"created"`
+    // A unix timestamp when this task was last updated. You cannot change this value.
    Updated int64 `xorm:"updated" json:"updated"`

+    // The user who initially created the task.
    CreatedBy User `xorm:"-" json:"createdBy" valid:"-"`

    web.CRUDable `xorm:"-" json:"-"`
@@ -28,7 +28,7 @@ import (
// @tags task
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param id path int true "List ID"
// @Param task body models.ListTask true "The task object"
// @Success 200 {object} models.ListTask "The created task object."

@@ -81,7 +81,7 @@ func (t *ListTask) Create(a web.Auth) (err error) {
// @tags task
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param id path int true "Task ID"
// @Param task body models.ListTask true "The task object"
// @Success 200 {object} models.ListTask "The updated task object."

@@ -26,7 +26,7 @@ import (
// @Description Deletes a task from a list. This does not mean "mark it done".
// @tags task
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param id path int true "Task ID"
// @Success 200 {object} models.Message "The created task object."
// @Failure 400 {object} code.vikunja.io/web.HTTPError "Invalid task ID provided."
@@ -20,12 +20,18 @@ import "code.vikunja.io/web"

// ListUser represents a list <-> user relation
type ListUser struct {
+    // The unique, numeric id of this list <-> user relation.
    ID int64 `xorm:"int(11) autoincr not null unique pk" json:"id" param:"namespace"`
+    // The user id.
    UserID int64 `xorm:"int(11) not null INDEX" json:"user_id" param:"user"`
+    // The list id.
    ListID int64 `xorm:"int(11) not null INDEX" json:"list_id" param:"list"`
-    Right UserRight `xorm:"int(11) INDEX" json:"right" valid:"length(0|2)"`
+    // The right this user has. 0 = Read only, 1 = Read & Write, 2 = Admin. See the docs for more details.
+    Right UserRight `xorm:"int(11) INDEX" json:"right" valid:"length(0|2)" maximum:"2" default:"0"`

+    // A unix timestamp when this relation was created. You cannot change this value.
    Created int64 `xorm:"created" json:"created"`
+    // A unix timestamp when this relation was last updated. You cannot change this value.
    Updated int64 `xorm:"updated" json:"updated"`

    web.CRUDable `xorm:"-" json:"-"`
@@ -24,7 +24,7 @@ import "code.vikunja.io/web"
// @tags sharing
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param id path int true "List ID"
// @Param list body models.ListUser true "The user you want to add to the list."
// @Success 200 {object} models.ListUser "The created user<->list relation."

@@ -23,7 +23,7 @@ import _ "code.vikunja.io/web" // For swaggerdocs generation
// @Description Deletes a user from a list. The user won't have access to the list anymore.
// @tags sharing
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param listID path int true "List ID"
// @Param userID path int true "User ID"
// @Success 200 {object} models.Message "The user was successfully removed from the list."
@@ -27,7 +27,7 @@ import "code.vikunja.io/web"
// @Param id path int true "List ID"
// @Param p query int false "The page number. Used for pagination. If not provided, the first page of results is returned."
// @Param s query string false "Search users by their name."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Success 200 {array} models.UserWithRight "The users with the right they have."
// @Failure 403 {object} code.vikunja.io/web.HTTPError "No right to see the list."
// @Failure 500 {object} models.Message "Internal error"

@@ -27,7 +27,7 @@ import _ "code.vikunja.io/web" // For swaggerdocs generation
// @Param listID path int true "List ID"
// @Param userID path int true "User ID"
// @Param list body models.ListUser true "The user you want to update."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Success 200 {object} models.ListUser "The updated user <-> list relation."
// @Failure 403 {object} code.vikunja.io/web.HTTPError "The user does not have admin-access to the list"
// @Failure 404 {object} code.vikunja.io/web.HTTPError "User or list does not exist."
@@ -18,5 +18,6 @@ package models

// Message is a standard message
type Message struct {
+    // A standard message.
    Message string `json:"message"`
}
@@ -23,14 +23,20 @@ import (

// Namespace holds information about a namespace
type Namespace struct {
+    // The unique, numeric id of this namespace.
    ID int64 `xorm:"int(11) autoincr not null unique pk" json:"id" param:"namespace"`
-    Name string `xorm:"varchar(250)" json:"name" valid:"required,runelength(5|250)"`
-    Description string `xorm:"varchar(1000)" json:"description" valid:"runelength(0|250)"`
+    // The name of this namespace.
+    Name string `xorm:"varchar(250)" json:"name" valid:"required,runelength(5|250)" minLength:"5" maxLength:"250"`
+    // The description of the namespace
+    Description string `xorm:"varchar(1000)" json:"description" valid:"runelength(0|250)" maxLength:"250"`
    OwnerID int64 `xorm:"int(11) not null INDEX" json:"-"`

+    // The user who owns this namespace
    Owner User `xorm:"-" json:"owner" valid:"-"`

+    // A unix timestamp when this namespace was created. You cannot change this value.
    Created int64 `xorm:"created" json:"created"`
+    // A unix timestamp when this namespace was last updated. You cannot change this value.
    Updated int64 `xorm:"updated" json:"updated"`

    web.CRUDable `xorm:"-" json:"-"`
@@ -88,7 +94,7 @@ func GetNamespaceByID(id int64) (namespace Namespace, err error) {
// @tags namespace
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param id path int true "Namespace ID"
// @Success 200 {object} models.Namespace "The Namespace"
// @Failure 403 {object} code.vikunja.io/web.HTTPError "The user does not have access to that namespace."

@@ -113,7 +119,7 @@ type NamespaceWithLists struct {
// @Produce json
// @Param p query int false "The page number. Used for pagination. If not provided, the first page of results is returned."
// @Param s query string false "Search namespaces by name."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Success 200 {array} models.NamespaceWithLists "The Namespaces."
// @Failure 500 {object} models.Message "Internal error"
// @Router /namespaces [get]
@@ -27,7 +27,7 @@ import (
// @tags namespace
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param namespace body models.Namespace true "The namespace you want to create."
// @Success 200 {object} models.Namespace "The created namespace."
// @Failure 400 {object} code.vikunja.io/web.HTTPError "Invalid namespace object provided."

@@ -26,7 +26,7 @@ import (
// @Description Deletes a namespace
// @tags namespace
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param id path int true "Namespace ID"
// @Success 200 {object} models.Message "The namespace was successfully deleted."
// @Failure 400 {object} code.vikunja.io/web.HTTPError "Invalid namespace object provided."

@@ -24,7 +24,7 @@ import _ "code.vikunja.io/web" // For swaggerdocs generation
// @tags namespace
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param id path int true "Namespace ID"
// @Param namespace body models.Namespace true "The namespace with updated values you want to update."
// @Success 200 {object} models.Namespace "The updated namespace."
@@ -20,12 +20,18 @@ import "code.vikunja.io/web"

// NamespaceUser represents a namespace <-> user relation
type NamespaceUser struct {
+    // The unique, numeric id of this namespace <-> user relation.
    ID int64 `xorm:"int(11) autoincr not null unique pk" json:"id" param:"namespace"`
+    // The user id.
    UserID int64 `xorm:"int(11) not null INDEX" json:"user_id" param:"user"`
+    // The namespace id
    NamespaceID int64 `xorm:"int(11) not null INDEX" json:"namespace_id" param:"namespace"`
-    Right UserRight `xorm:"int(11) INDEX" json:"right" valid:"length(0|2)"`
+    // The right this user has. 0 = Read only, 1 = Read & Write, 2 = Admin. See the docs for more details.
+    Right UserRight `xorm:"int(11) INDEX" json:"right" valid:"length(0|2)" maximum:"2" default:"0"`

+    // A unix timestamp when this relation was created. You cannot change this value.
    Created int64 `xorm:"created" json:"created"`
+    // A unix timestamp when this relation was last updated. You cannot change this value.
    Updated int64 `xorm:"updated" json:"updated"`

    web.CRUDable `xorm:"-" json:"-"`
@@ -24,7 +24,7 @@ import "code.vikunja.io/web"
// @tags sharing
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param id path int true "Namespace ID"
// @Param namespace body models.NamespaceUser true "The user you want to add to the namespace."
// @Success 200 {object} models.NamespaceUser "The created user<->namespace relation."

@@ -23,7 +23,7 @@ import _ "code.vikunja.io/web" // For swaggerdocs generation
// @Description Deletes a user from a namespace. The user won't have access to the namespace anymore.
// @tags sharing
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param namespaceID path int true "Namespace ID"
// @Param userID path int true "user ID"
// @Success 200 {object} models.Message "The user was successfully deleted."

@@ -27,7 +27,7 @@ import "code.vikunja.io/web"
// @Param id path int true "Namespace ID"
// @Param p query int false "The page number. Used for pagination. If not provided, the first page of results is returned."
// @Param s query string false "Search users by their name."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Success 200 {array} models.UserWithRight "The users with the right they have."
// @Failure 403 {object} code.vikunja.io/web.HTTPError "No right to see the namespace."
// @Failure 500 {object} models.Message "Internal error"
@@ -27,7 +27,7 @@ import _ "code.vikunja.io/web" // For swaggerdocs generation
// @Param namespaceID path int true "Namespace ID"
// @Param userID path int true "User ID"
// @Param namespace body models.NamespaceUser true "The user you want to update."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Success 200 {object} models.NamespaceUser "The updated user <-> namespace relation."
// @Failure 403 {object} code.vikunja.io/web.HTTPError "The user does not have admin-access to the namespace"
// @Failure 404 {object} code.vikunja.io/web.HTTPError "User or namespace does not exist."
@@ -20,12 +20,18 @@ import "code.vikunja.io/web"

// TeamList defines the relation between a team and a list
type TeamList struct {
+    // The unique, numeric id of this list <-> team relation.
    ID int64 `xorm:"int(11) autoincr not null unique pk" json:"id"`
+    // The team id.
    TeamID int64 `xorm:"int(11) not null INDEX" json:"team_id" param:"team"`
+    // The list id.
    ListID int64 `xorm:"int(11) not null INDEX" json:"list_id" param:"list"`
-    Right TeamRight `xorm:"int(11) INDEX" json:"right" valid:"length(0|2)"`
+    // The right this team has. 0 = Read only, 1 = Read & Write, 2 = Admin. See the docs for more details.
+    Right TeamRight `xorm:"int(11) INDEX" json:"right" valid:"length(0|2)" maximum:"2" default:"0"`

+    // A unix timestamp when this relation was created. You cannot change this value.
    Created int64 `xorm:"created" json:"created"`
+    // A unix timestamp when this relation was last updated. You cannot change this value.
    Updated int64 `xorm:"updated" json:"updated"`

    web.CRUDable `xorm:"-" json:"-"`
@@ -24,7 +24,7 @@ import "code.vikunja.io/web"
// @tags sharing
// @Accept json
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param id path int true "List ID"
// @Param list body models.TeamList true "The team you want to add to the list."
// @Success 200 {object} models.TeamList "The created team<->list relation."

@@ -23,7 +23,7 @@ import _ "code.vikunja.io/web" // For swaggerdocs generation
// @Description Deletes a team from a list. The team won't have access to the list anymore.
// @tags sharing
// @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Param listID path int true "List ID"
// @Param teamID path int true "Team ID"
// @Success 200 {object} models.Message "The team was successfully deleted."

@@ -27,7 +27,7 @@ import "code.vikunja.io/web"
// @Param id path int true "List ID"
// @Param p query int false "The page number. Used for pagination. If not provided, the first page of results is returned."
// @Param s query string false "Search teams by their name."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Success 200 {array} models.TeamWithRight "The teams with their right."
// @Failure 403 {object} code.vikunja.io/web.HTTPError "No right to see the list."
// @Failure 500 {object} models.Message "Internal error"
@@ -27,7 +27,7 @@ import _ "code.vikunja.io/web" // For swaggerdocs generation
// @Param listID path int true "List ID"
// @Param teamID path int true "Team ID"
// @Param list body models.TeamList true "The team you want to update."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
// @Success 200 {object} models.TeamList "The updated team <-> list relation."
// @Failure 403 {object} code.vikunja.io/web.HTTPError "The user does not have admin-access to the list"
// @Failure 404 {object} code.vikunja.io/web.HTTPError "Team or list does not exist."
@ -24,7 +24,7 @@ import "code.vikunja.io/web"
|
||||||
// @tags team
|
// @tags team
|
||||||
// @Accept json
|
// @Accept json
|
||||||
// @Produce json
|
// @Produce json
|
||||||
// @Security ApiKeyAuth
|
// @Security JWTKeyAuth
|
||||||
// @Param id path int true "Team ID"
|
// @Param id path int true "Team ID"
|
||||||
// @Param team body models.TeamMember true "The user to be added to a team."
|
// @Param team body models.TeamMember true "The user to be added to a team."
|
||||||
// @Success 200 {object} models.TeamMember "The newly created member object"
|
// @Success 200 {object} models.TeamMember "The newly created member object"
|
||||||
|
|
|
@@ -21,7 +21,7 @@ package models
 // @Description Remove a user from a team. This will also revoke any access this user might have via that team.
 // @tags team
 // @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
 // @Param id path int true "Team ID"
 // @Param userID path int true "User ID"
 // @Success 200 {object} models.Message "The user was successfully removed from the team."
@@ -20,12 +20,18 @@ import "code.vikunja.io/web"
 
 // TeamNamespace defines the relationship between a Team and a Namespace
 type TeamNamespace struct {
+	// The unique, numeric id of this namespace <-> team relation.
 	ID int64 `xorm:"int(11) autoincr not null unique pk" json:"id"`
+	// The team id.
 	TeamID int64 `xorm:"int(11) not null INDEX" json:"team_id" param:"team"`
+	// The namespace id.
 	NamespaceID int64 `xorm:"int(11) not null INDEX" json:"namespace_id" param:"namespace"`
-	Right TeamRight `xorm:"int(11) INDEX" json:"right" valid:"length(0|2)"`
+	// The right this team has. 0 = Read only, 1 = Read & Write, 2 = Admin. See the docs for more details.
+	Right TeamRight `xorm:"int(11) INDEX" json:"right" valid:"length(0|2)" maximum:"2" default:"0"`
 
+	// A unix timestamp when this relation was created. You cannot change this value.
 	Created int64 `xorm:"created" json:"created"`
+	// A unix timestamp when this relation was last updated. You cannot change this value.
 	Updated int64 `xorm:"updated" json:"updated"`
 
 	web.CRUDable `xorm:"-" json:"-"`
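The new field comments and the extra `maximum`/`default` tag hints are exactly what swag picks up when generating the spec: the comment directly above an exported field becomes that property's description, and the tag hints become schema constraints. A minimal sketch of the pattern on a made-up struct (the type and field names here are illustrative, not part of Vikunja):

```go
package models

// Relation is a hypothetical example struct. swag turns the comment directly
// above each exported field into the property description of the generated
// swagger schema, while tags such as maximum, default, minLength and
// maxLength become the matching JSON-schema constraints.
type Relation struct {
	// The unique, numeric id of this relation.
	ID int64 `xorm:"int(11) autoincr not null unique pk" json:"id"`

	// The right this relation grants. 0 = Read only, 1 = Read & Write, 2 = Admin.
	Right int `xorm:"int(11) INDEX" json:"right" valid:"length(0|2)" maximum:"2" default:"0"`
}
```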
@ -24,7 +24,7 @@ import "code.vikunja.io/web"
|
||||||
// @tags sharing
|
// @tags sharing
|
||||||
// @Accept json
|
// @Accept json
|
||||||
// @Produce json
|
// @Produce json
|
||||||
// @Security ApiKeyAuth
|
// @Security JWTKeyAuth
|
||||||
// @Param id path int true "Namespace ID"
|
// @Param id path int true "Namespace ID"
|
||||||
// @Param namespace body models.TeamNamespace true "The team you want to add to the namespace."
|
// @Param namespace body models.TeamNamespace true "The team you want to add to the namespace."
|
||||||
// @Success 200 {object} models.TeamNamespace "The created team<->namespace relation."
|
// @Success 200 {object} models.TeamNamespace "The created team<->namespace relation."
|
||||||
|
|
|
@@ -23,7 +23,7 @@ import _ "code.vikunja.io/web" // For swaggerdocs generation
 // @Description Deletes a team from a namespace. The team won't have access to the namespace anymore.
 // @tags sharing
 // @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
 // @Param namespaceID path int true "Namespace ID"
 // @Param teamID path int true "team ID"
 // @Success 200 {object} models.Message "The team was successfully deleted."
@ -27,7 +27,7 @@ import "code.vikunja.io/web"
|
||||||
// @Param id path int true "Namespace ID"
|
// @Param id path int true "Namespace ID"
|
||||||
// @Param p query int false "The page number. Used for pagination. If not provided, the first page of results is returned."
|
// @Param p query int false "The page number. Used for pagination. If not provided, the first page of results is returned."
|
||||||
// @Param s query string false "Search teams by its name."
|
// @Param s query string false "Search teams by its name."
|
||||||
// @Security ApiKeyAuth
|
// @Security JWTKeyAuth
|
||||||
// @Success 200 {array} models.TeamWithRight "The teams with the right they have."
|
// @Success 200 {array} models.TeamWithRight "The teams with the right they have."
|
||||||
// @Failure 403 {object} code.vikunja.io/web.HTTPError "No right to see the namespace."
|
// @Failure 403 {object} code.vikunja.io/web.HTTPError "No right to see the namespace."
|
||||||
// @Failure 500 {object} models.Message "Internal error"
|
// @Failure 500 {object} models.Message "Internal error"
|
||||||
|
|
|
@@ -27,7 +27,7 @@ import _ "code.vikunja.io/web" // For swaggerdocs generation
 // @Param namespaceID path int true "Namespace ID"
 // @Param teamID path int true "Team ID"
 // @Param namespace body models.TeamNamespace true "The team you want to update."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
 // @Success 200 {object} models.TeamNamespace "The updated team <-> namespace relation."
 // @Failure 403 {object} code.vikunja.io/web.HTTPError "The team does not have admin-access to the namespace"
 // @Failure 404 {object} code.vikunja.io/web.HTTPError "Team or namespace does not exist."
@@ -20,15 +20,22 @@ import "code.vikunja.io/web"
 
 // Team holds a team object
 type Team struct {
+	// The unique, numeric id of this team.
 	ID int64 `xorm:"int(11) autoincr not null unique pk" json:"id" param:"team"`
-	Name string `xorm:"varchar(250) not null" json:"name" valid:"required,runelength(5|250)"`
-	Description string `xorm:"varchar(250)" json:"description" valid:"runelength(0|250)"`
+	// The name of this team.
+	Name string `xorm:"varchar(250) not null" json:"name" valid:"required,runelength(5|250)" minLength:"5" maxLength:"250"`
+	// The team's description.
+	Description string `xorm:"varchar(250)" json:"description" valid:"runelength(0|250)" minLength:"0" maxLength:"250"`
 	CreatedByID int64 `xorm:"int(11) not null INDEX" json:"-"`
 
+	// The user who created this team.
 	CreatedBy User `xorm:"-" json:"created_by"`
+	// An array of all members in this team.
 	Members []*TeamUser `xorm:"-" json:"members"`
 
+	// A unix timestamp when this relation was created. You cannot change this value.
 	Created int64 `xorm:"created" json:"created"`
+	// A unix timestamp when this relation was last updated. You cannot change this value.
 	Updated int64 `xorm:"updated" json:"updated"`
 
 	web.CRUDable `xorm:"-" json:"-"`
@@ -55,13 +62,17 @@ func (t *Team) AfterLoad() {
 
 // TeamMember defines the relationship between a user and a team
 type TeamMember struct {
+	// The unique, numeric id of this team member relation.
 	ID int64 `xorm:"int(11) autoincr not null unique pk" json:"id"`
+	// The team id.
 	TeamID int64 `xorm:"int(11) not null INDEX" json:"team_id" param:"team"`
+	// The id of the member.
 	UserID int64 `xorm:"int(11) not null INDEX" json:"user_id" param:"user"`
+	// Whether or not the member is an admin of the team. See the docs for more about what a team admin can do
 	Admin bool `xorm:"tinyint(1) INDEX" json:"admin"`
 
+	// A unix timestamp when this relation was created. You cannot change this value.
 	Created int64 `xorm:"created" json:"created"`
-	Updated int64 `xorm:"updated" json:"updated"`
 
 	web.CRUDable `xorm:"-" json:"-"`
 	web.Rights `xorm:"-" json:"-"`
@@ -75,6 +86,7 @@ func (TeamMember) TableName() string {
 // TeamUser is the team member type
 type TeamUser struct {
 	User `xorm:"extends"`
+	// Whether or not the member is an admin of the team. See the docs for more about what a team admin can do
 	Admin bool `json:"admin"`
 }
 
@@ -101,7 +113,7 @@ func GetTeamByID(id int64) (team Team, err error) {
 // @tags team
 // @Accept json
 // @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
 // @Param id path int true "Team ID"
 // @Success 200 {object} models.Team "The team"
 // @Failure 403 {object} code.vikunja.io/web.HTTPError "The user does not have access to the team"
@@ -120,7 +132,7 @@ func (t *Team) ReadOne() (err error) {
 // @Produce json
 // @Param p query int false "The page number. Used for pagination. If not provided, the first page of results is returned."
 // @Param s query string false "Search teams by its name."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
 // @Success 200 {array} models.Team "The teams."
 // @Failure 500 {object} models.Message "Internal error"
 // @Router /teams [get]
@@ -27,7 +27,7 @@ import (
 // @tags team
 // @Accept json
 // @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
 // @Param team body models.Team true "The team you want to create."
 // @Success 200 {object} models.Team "The created team."
 // @Failure 400 {object} code.vikunja.io/web.HTTPError "Invalid team object provided."
@@ -26,7 +26,7 @@ import (
 // @Description Deletes a team. This will also remove the access for all users in that team.
 // @tags team
 // @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
 // @Param id path int true "Team ID"
 // @Success 200 {object} models.Message "The team was successfully deleted."
 // @Failure 400 {object} code.vikunja.io/web.HTTPError "Invalid team object provided."
@@ -24,7 +24,7 @@ import _ "code.vikunja.io/web" // For swaggerdocs generation
 // @tags team
 // @Accept json
 // @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
 // @Param id path int true "Team ID"
 // @Param team body models.Team true "The team with updated values you want to update."
 // @Success 200 {object} models.Team "The updated team."
@@ -30,22 +30,29 @@ import (
 
 // UserLogin Object to receive user credentials in JSON format
 type UserLogin struct {
+	// The username used to log in.
 	Username string `json:"username" form:"username"`
+	// The password for the user.
 	Password string `json:"password" form:"password"`
 }
 
 // User holds information about a user
 type User struct {
+	// The unique, numeric id of this user.
 	ID int64 `xorm:"int(11) autoincr not null unique pk" json:"id"`
-	Username string `xorm:"varchar(250) not null unique" json:"username" valid:"length(3|250)"`
+	// The username of the user. Is always unique.
+	Username string `xorm:"varchar(250) not null unique" json:"username" valid:"length(3|250)" minLength:"3" maxLength:"250"`
 	Password string `xorm:"varchar(250) not null" json:"-"`
-	Email string `xorm:"varchar(250)" json:"email" valid:"email,length(0|250)"`
+	// The user's email address
+	Email string `xorm:"varchar(250)" json:"email" valid:"email,length(0|250)" maxLength:"250"`
 	IsActive bool `json:"-"`
 
 	PasswordResetToken string `xorm:"varchar(450)" json:"-"`
 	EmailConfirmToken string `xorm:"varchar(450)" json:"-"`
 
+	// A unix timestamp when this user was created. You cannot change this value.
 	Created int64 `xorm:"created" json:"created"`
+	// A unix timestamp when this user was last updated. You cannot change this value.
 	Updated int64 `xorm:"updated" json:"updated"`
 
 	web.Auth `xorm:"-" json:"-"`
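Worth noting: the `valid:"..."` parts of these tags are enforced at runtime by govalidator (wired into echo via the CustomValidator further down), while `minLength`/`maxLength` only feed the generated documentation. A rough, self-contained illustration of the runtime side, assuming github.com/asaskevich/govalidator:

```go
package main

import (
	"fmt"

	"github.com/asaskevich/govalidator"
)

// credentials mirrors the kind of constraints used on the models above.
// Only the valid tag is checked at runtime; it does not show up in the
// generated swagger documentation.
type credentials struct {
	Username string `valid:"required,length(3|250)"`
	Email    string `valid:"email,length(0|250)"`
}

func main() {
	// A too-short username and a malformed email both fail validation.
	ok, err := govalidator.ValidateStruct(credentials{Username: "ab", Email: "not-an-email"})
	fmt.Println(ok, err)
}
```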
@@ -77,10 +84,14 @@ func getUserWithError(a web.Auth) (*User, error) {
 
 // APIUserPassword represents a user object without timestamps and a json password field.
 type APIUserPassword struct {
+	// The unique, numeric id of this user.
 	ID int64 `json:"id"`
-	Username string `json:"username" valid:"length(3|250)"`
-	Password string `json:"password" valid:"length(8|250)"`
-	Email string `json:"email" valid:"email,length(0|250)"`
+	// The username of the user. Is always unique.
+	Username string `json:"username" valid:"length(3|250)" minLength:"3" maxLength:"250"`
+	// The user's password in clear text. Only used when registering the user.
+	Password string `json:"password" valid:"length(8|250)" minLength:"8" maxLength:"250"`
+	// The user's email address
+	Email string `json:"email" valid:"email,length(0|250)" maxLength:"250"`
 }
 
 // APIFormat formats an API User into a normal user struct
@@ -18,6 +18,7 @@ package models
 
 // EmailConfirm holds the token to confirm a mail address
 type EmailConfirm struct {
+	// The email confirm token sent via email.
 	Token string `json:"token"`
 }
 
@@ -24,7 +24,9 @@ import (
 
 // PasswordReset holds the data to reset a password
 type PasswordReset struct {
+	// The previously issued reset token.
 	Token string `json:"token"`
+	// The new password for this user.
 	NewPassword string `json:"new_password"`
 }
 
@@ -76,7 +78,7 @@ func UserPasswordReset(reset *PasswordReset) (err error) {
 
 // PasswordTokenRequest defines the request format for a password reset request
 type PasswordTokenRequest struct {
-	Email string `json:"email" valid:"email,length(0|250)"`
+	Email string `json:"email" valid:"email,length(0|250)" maxLength:"250"`
 }
 
 // RequestUserPasswordResetToken inserts a random token to reset a user's password into the database
176 pkg/routes/api/v1/docs.go Normal file
File diff suppressed because one or more lines are too long
@@ -31,7 +31,7 @@ import (
 // @Accept json
 // @Produce json
 // @Param id path int true "Namespace ID"
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
 // @Success 200 {array} models.List "The lists."
 // @Failure 403 {object} models.Message "No access to that namespace."
 // @Failure 404 {object} models.Message "The namespace does not exist."
@@ -30,7 +30,7 @@ import (
 // @Accept json
 // @Produce json
 // @Param s query string false "Search for a user by its name."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
 // @Success 200 {array} models.User "All (found) users."
 // @Failure 400 {object} code.vikunja.io/web.HTTPError "Something's invalid."
 // @Failure 500 {object} models.Message "Internal server error."
@@ -29,7 +29,7 @@ import (
 // @tags user
 // @Accept json
 // @Produce json
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
 // @Success 200 {object} models.User
 // @Failure 404 {object} code.vikunja.io/web.HTTPError "User does not exist."
 // @Failure 500 {object} models.Message "Internal server error."
@@ -36,7 +36,7 @@ type UserPassword struct {
 // @Accept json
 // @Produce json
 // @Param userPassword body v1.UserPassword true "The current and new password."
-// @Security ApiKeyAuth
+// @Security JWTKeyAuth
 // @Success 200 {object} models.Message
 // @Failure 400 {object} code.vikunja.io/web.HTTPError "Something's invalid."
 // @Failure 404 {object} code.vikunja.io/web.HTTPError "User does not exist."
@@ -15,12 +15,24 @@
 // along with this program. If not, see <https://www.gnu.org/licenses/>.
 
 // @title Vikunja API
-// @license.name GPLv3
+// @description This is the documentation for the [Vikunja](http://vikunja.io) API. Vikunja is a cross-platform Todo-application with a lot of features, such as sharing lists with users or teams. <!-- ReDoc-Inject: <security-definitions> -->
+// @description # Authorization
+// @description **JWT-Auth:** Main authorization method, used for most of the requests. Needs ` + "`" + `Authorization: Bearer <jwt-token>` + "`" + `-header to authenticate successfully.
+// @description
+// @description **BasicAuth:** Only used when requesting tasks via caldav.
+// @description <!-- ReDoc-Inject: <security-definitions> -->
 // @BasePath /api/v1
 
+// @license.url http://code.vikunja.io/api/src/branch/master/LICENSE
+// @license.name GPLv3
+
+// @contact.url http://vikunja.io/en/contact/
+// @contact.name General Vikunja contact
+// @contact.email hello@vikunja.io
+
 // @securityDefinitions.basic BasicAuth
 
-// @securityDefinitions.apikey ApiKeyAuth
+// @securityDefinitions.apikey JWTKeyAuth
 // @in header
 // @name Authorization
 
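Renaming the security definition from ApiKeyAuth to JWTKeyAuth is what all of the `// @Security JWTKeyAuth` changes above refer back to; the definition still maps to the `Authorization` header. For illustration only, a hypothetical annotated echo handler that declares the renamed scheme (handler name and route are made up):

```go
package v1

import (
	"net/http"

	"github.com/labstack/echo"
)

// Example is a hypothetical handler, shown only to illustrate how a route's
// swagger comment references the JWTKeyAuth security definition.
// @Summary Example protected route
// @Description Returns a static message and requires a valid JWT.
// @tags example
// @Produce json
// @Security JWTKeyAuth
// @Success 200 {object} models.Message
// @Router /example [get]
func Example(c echo.Context) error {
	return c.JSON(http.StatusOK, map[string]string{"message": "ok"})
}
```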
@@ -39,7 +51,6 @@ import (
 	"github.com/labstack/echo/middleware"
 	"github.com/prometheus/client_golang/prometheus/promhttp"
 	"github.com/spf13/viper"
-	"github.com/swaggo/echo-swagger"
 )
 
 // CustomValidator is a dummy struct to use govalidator with echo
@@ -95,8 +106,9 @@ func RegisterRoutes(e *echo.Echo) {
 	// API Routes
 	a := e.Group("/api/v1")
 
-	// Swagger UI
-	a.GET("/swagger/*", echoSwagger.WrapHandler)
+	// Docs
+	a.GET("/docs.json", apiv1.DocsJSON)
+	a.GET("/docs", apiv1.RedocUI)
 
 	// Prometheus endpoint
 	if viper.GetBool("service.enablemetrics") {
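The bundled Swagger UI route is replaced by two endpoints backed by the new pkg/routes/api/v1/docs.go, whose diff is suppressed above: /docs.json serves the generated spec and /docs serves a Redoc page that loads it. Since the real handler bodies are not visible here, the following is only a rough sketch of the idea, reading the spec through swag.ReadDoc the same way the removed echo-swagger wrapper did:

```go
package v1

import (
	"net/http"

	"github.com/labstack/echo"
	"github.com/swaggo/swag"
)

// DocsJSON could return the swagger spec that swag generated at build time.
// This is a guess at the shape of the suppressed handler, not the committed code.
func DocsJSON(c echo.Context) error {
	doc, err := swag.ReadDoc()
	if err != nil {
		return err
	}
	return c.JSONBlob(http.StatusOK, []byte(doc))
}

// RedocUI could render a minimal Redoc page pointing at the JSON endpoint.
func RedocUI(c echo.Context) error {
	const page = `<!DOCTYPE html>
<html>
  <head><title>Vikunja API</title></head>
  <body>
    <redoc spec-url="/api/v1/docs.json"></redoc>
    <script src="https://cdn.jsdelivr.net/npm/redoc/bundles/redoc.standalone.js"></script>
  </body>
</html>`
	return c.HTML(http.StatusOK, page)
}
```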
1 tools.go
@@ -26,7 +26,6 @@ import (
 	_ "github.com/fzipp/gocyclo"
 	_ "github.com/gordonklaus/ineffassign"
 	_ "github.com/karalabe/xgo"
-	_ "github.com/swaggo/echo-swagger"
 	_ "github.com/swaggo/swag/cmd/swag"
 	_ "golang.org/x/lint/golint"
 
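tools.go exists only to pin build tooling in go.mod through blank imports, so dropping the echo-swagger line here removes it from dependency tracking as well. The file follows the usual pattern, roughly like this (build tag and package name are the conventional ones, not necessarily Vikunja's exact file):

```go
// +build tools

package tools

// Blank imports keep the listed tools tracked in go.mod so they can be
// installed at pinned versions; none of them are used by application code.
import (
	_ "github.com/swaggo/swag/cmd/swag"
	_ "golang.org/x/lint/golint"
)
```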
The following generated, vendored dependency files were removed in this commit:

15  vendor/github.com/swaggo/echo-swagger/.travis.yml
21  vendor/github.com/swaggo/echo-swagger/LICENSE
72  vendor/github.com/swaggo/echo-swagger/README.md
151 vendor/github.com/swaggo/echo-swagger/swagger.go
1   vendor/github.com/swaggo/files/README.md
131 vendor/github.com/swaggo/files/ab0x.go
29  vendor/github.com/swaggo/files/b0xfile__favicon-16x16.png.go
29  vendor/github.com/swaggo/files/b0xfile__favicon-32x32.png.go
29  vendor/github.com/swaggo/files/b0xfile__index.html.go (diff suppressed because one or more lines are too long)
29  vendor/github.com/swaggo/files/b0xfile__oauth2-redirect.html.go (diff suppressed because one or more lines are too long)
29  vendor/github.com/swaggo/files/b0xfile__swagger-ui-bundle.js.go (diff suppressed because one or more lines are too long)
29  vendor/github.com/swaggo/files/b0xfile__swagger-ui-bundle.js.map.go (diff suppressed because one or more lines are too long)
29  vendor/github.com/swaggo/files/b0xfile__swagger-ui-standalone-preset.js.go (diff suppressed because one or more lines are too long)
29  vendor/github.com/swaggo/files/b0xfile__swagger-ui-standalone-preset.js.map.go (diff suppressed because one or more lines are too long)
29  vendor/github.com/swaggo/files/b0xfile__swagger-ui.css.go (diff suppressed because one or more lines are too long)
29  vendor/github.com/swaggo/files/b0xfile__swagger-ui.css.map.go
29  vendor/github.com/swaggo/files/b0xfile__swagger-ui.js.go (diff suppressed because one or more lines are too long)
29  vendor/github.com/swaggo/files/b0xfile__swagger-ui.js.map.go (diff suppressed because one or more lines are too long)
2 vendor/github.com/swaggo/swag/operation.go (generated, vendored)
@@ -53,7 +53,7 @@ func (operation *Operation) ParseComment(comment string, astFile *ast.File) erro
 		if operation.Description == "" {
 			operation.Description = lineRemainder
 		} else {
-			operation.Description += "<br>" + lineRemainder
+			operation.Description += "\n" + lineRemainder
 		}
 	case "@summary":
 		operation.Summary = lineRemainder
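Switching the separator from `<br>` to a newline means consecutive `// @description` lines on an operation are joined into a real multi-line string, so Redoc renders them as markdown instead of printing literal `<br>` tags. The effect of the join itself, in isolation:

```go
package main

import (
	"fmt"
	"strings"
)

// Joining description lines with "\n" (as the patched ParseComment now does)
// keeps markdown such as headings and bold text intact when rendered.
func main() {
	lines := []string{
		"# Authorization",
		"**JWT-Auth:** Main authorization method, used for most of the requests.",
	}
	fmt.Println(strings.Join(lines, "\n"))
}
```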
4 vendor/github.com/swaggo/swag/parser.go (generated, vendored)
@@ -130,7 +130,11 @@ func (parser *Parser) ParseGeneralAPIInfo(mainAPIFile string) error {
 		case "@title":
 			parser.swagger.Info.Title = strings.TrimSpace(commentLine[len(attribute):])
 		case "@description":
+			if parser.swagger.Info.Description == "{{.Description}}" {
 			parser.swagger.Info.Description = strings.TrimSpace(commentLine[len(attribute):])
+			} else {
+				parser.swagger.Info.Description += "\n" + strings.TrimSpace(commentLine[len(attribute):])
+			}
 		case "@termsofservice":
 			parser.swagger.Info.TermsOfService = strings.TrimSpace(commentLine[len(attribute):])
 		case "@contact.name":
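The extra branch is needed because the docs file swag generates starts from a template whose description field is the literal placeholder `{{.Description}}`; only the first `@description` line may replace it, every later line is appended. The placeholder substitution itself is plain text/template behaviour, sketched here in simplified form (this is not swag's actual code):

```go
package main

import (
	"os"
	"text/template"
)

// swaggerInfo mimics the kind of data swag executes its doc template with;
// the names are illustrative, not swag's real internals.
type swaggerInfo struct {
	Title       string
	Description string
}

func main() {
	const doc = `{"info": {"title": "{{.Title}}", "description": "{{.Description}}"}}`
	t := template.Must(template.New("swagger_info").Parse(doc))
	_ = t.Execute(os.Stdout, swaggerInfo{
		Title:       "Vikunja API",
		Description: "This is the documentation for the Vikunja API.",
	})
}
```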
56 vendor/golang.org/x/net/context/context.go (generated, vendored, file removed)
72 vendor/golang.org/x/net/context/go17.go (generated, vendored, file removed)
|
|
||||||
// call cancel as soon as the operations running in this Context complete.
|
|
||||||
func WithDeadline(parent Context, deadline time.Time) (Context, CancelFunc) {
|
|
||||||
ctx, f := context.WithDeadline(parent, deadline)
|
|
||||||
return ctx, CancelFunc(f)
|
|
||||||
}
|
|
||||||
|
|
||||||
// WithTimeout returns WithDeadline(parent, time.Now().Add(timeout)).
|
|
||||||
//
|
|
||||||
// Canceling this context releases resources associated with it, so code should
|
|
||||||
// call cancel as soon as the operations running in this Context complete:
|
|
||||||
//
|
|
||||||
// func slowOperationWithTimeout(ctx context.Context) (Result, error) {
|
|
||||||
// ctx, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
|
|
||||||
// defer cancel() // releases resources if slowOperation completes before timeout elapses
|
|
||||||
// return slowOperation(ctx)
|
|
||||||
// }
|
|
||||||
func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) {
|
|
||||||
return WithDeadline(parent, time.Now().Add(timeout))
|
|
||||||
}
|
|
||||||
|
|
||||||
// WithValue returns a copy of parent in which the value associated with key is
|
|
||||||
// val.
|
|
||||||
//
|
|
||||||
// Use context Values only for request-scoped data that transits processes and
|
|
||||||
// APIs, not for passing optional parameters to functions.
|
|
||||||
func WithValue(parent Context, key interface{}, val interface{}) Context {
|
|
||||||
return context.WithValue(parent, key, val)
|
|
||||||
}
|
|
20
vendor/golang.org/x/net/context/go19.go
generated
vendored
20
vendor/golang.org/x/net/context/go19.go
generated
vendored
|
@ -1,20 +0,0 @@
|
||||||
// Copyright 2017 The Go Authors. All rights reserved.
|
|
||||||
// Use of this source code is governed by a BSD-style
|
|
||||||
// license that can be found in the LICENSE file.
|
|
||||||
|
|
||||||
// +build go1.9
|
|
||||||
|
|
||||||
package context
|
|
||||||
|
|
||||||
import "context" // standard library's context, as of Go 1.7
|
|
||||||
|
|
||||||
// A Context carries a deadline, a cancelation signal, and other values across
|
|
||||||
// API boundaries.
|
|
||||||
//
|
|
||||||
// Context's methods may be called by multiple goroutines simultaneously.
|
|
||||||
type Context = context.Context
|
|
||||||
|
|
||||||
// A CancelFunc tells an operation to abandon its work.
|
|
||||||
// A CancelFunc does not wait for the work to stop.
|
|
||||||
// After the first call, subsequent calls to a CancelFunc do nothing.
|
|
||||||
type CancelFunc = context.CancelFunc
|
|
300
vendor/golang.org/x/net/context/pre_go17.go
generated
vendored
300
vendor/golang.org/x/net/context/pre_go17.go
generated
vendored
|
@ -1,300 +0,0 @@
|
||||||
// Copyright 2014 The Go Authors. All rights reserved.
|
|
||||||
// Use of this source code is governed by a BSD-style
|
|
||||||
// license that can be found in the LICENSE file.
|
|
||||||
|
|
||||||
// +build !go1.7
|
|
||||||
|
|
||||||
package context
|
|
||||||
|
|
||||||
import (
|
|
||||||
"errors"
|
|
||||||
"fmt"
|
|
||||||
"sync"
|
|
||||||
"time"
|
|
||||||
)
|
|
||||||
|
|
||||||
// An emptyCtx is never canceled, has no values, and has no deadline. It is not
|
|
||||||
// struct{}, since vars of this type must have distinct addresses.
|
|
||||||
type emptyCtx int
|
|
||||||
|
|
||||||
func (*emptyCtx) Deadline() (deadline time.Time, ok bool) {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
func (*emptyCtx) Done() <-chan struct{} {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (*emptyCtx) Err() error {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (*emptyCtx) Value(key interface{}) interface{} {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (e *emptyCtx) String() string {
|
|
||||||
switch e {
|
|
||||||
case background:
|
|
||||||
return "context.Background"
|
|
||||||
case todo:
|
|
||||||
return "context.TODO"
|
|
||||||
}
|
|
||||||
return "unknown empty Context"
|
|
||||||
}
|
|
||||||
|
|
||||||
var (
|
|
||||||
background = new(emptyCtx)
|
|
||||||
todo = new(emptyCtx)
|
|
||||||
)
|
|
||||||
|
|
||||||
// Canceled is the error returned by Context.Err when the context is canceled.
|
|
||||||
var Canceled = errors.New("context canceled")
|
|
||||||
|
|
||||||
// DeadlineExceeded is the error returned by Context.Err when the context's
|
|
||||||
// deadline passes.
|
|
||||||
var DeadlineExceeded = errors.New("context deadline exceeded")
|
|
||||||
|
|
||||||
// WithCancel returns a copy of parent with a new Done channel. The returned
|
|
||||||
// context's Done channel is closed when the returned cancel function is called
|
|
||||||
// or when the parent context's Done channel is closed, whichever happens first.
|
|
||||||
//
|
|
||||||
// Canceling this context releases resources associated with it, so code should
|
|
||||||
// call cancel as soon as the operations running in this Context complete.
|
|
||||||
func WithCancel(parent Context) (ctx Context, cancel CancelFunc) {
|
|
||||||
c := newCancelCtx(parent)
|
|
||||||
propagateCancel(parent, c)
|
|
||||||
return c, func() { c.cancel(true, Canceled) }
|
|
||||||
}
|
|
||||||
|
|
||||||
// newCancelCtx returns an initialized cancelCtx.
|
|
||||||
func newCancelCtx(parent Context) *cancelCtx {
|
|
||||||
return &cancelCtx{
|
|
||||||
Context: parent,
|
|
||||||
done: make(chan struct{}),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// propagateCancel arranges for child to be canceled when parent is.
|
|
||||||
func propagateCancel(parent Context, child canceler) {
|
|
||||||
if parent.Done() == nil {
|
|
||||||
return // parent is never canceled
|
|
||||||
}
|
|
||||||
if p, ok := parentCancelCtx(parent); ok {
|
|
||||||
p.mu.Lock()
|
|
||||||
if p.err != nil {
|
|
||||||
// parent has already been canceled
|
|
||||||
child.cancel(false, p.err)
|
|
||||||
} else {
|
|
||||||
if p.children == nil {
|
|
||||||
p.children = make(map[canceler]bool)
|
|
||||||
}
|
|
||||||
p.children[child] = true
|
|
||||||
}
|
|
||||||
p.mu.Unlock()
|
|
||||||
} else {
|
|
||||||
go func() {
|
|
||||||
select {
|
|
||||||
case <-parent.Done():
|
|
||||||
child.cancel(false, parent.Err())
|
|
||||||
case <-child.Done():
|
|
||||||
}
|
|
||||||
}()
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// parentCancelCtx follows a chain of parent references until it finds a
|
|
||||||
// *cancelCtx. This function understands how each of the concrete types in this
|
|
||||||
// package represents its parent.
|
|
||||||
func parentCancelCtx(parent Context) (*cancelCtx, bool) {
|
|
||||||
for {
|
|
||||||
switch c := parent.(type) {
|
|
||||||
case *cancelCtx:
|
|
||||||
return c, true
|
|
||||||
case *timerCtx:
|
|
||||||
return c.cancelCtx, true
|
|
||||||
case *valueCtx:
|
|
||||||
parent = c.Context
|
|
||||||
default:
|
|
||||||
return nil, false
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// removeChild removes a context from its parent.
|
|
||||||
func removeChild(parent Context, child canceler) {
|
|
||||||
p, ok := parentCancelCtx(parent)
|
|
||||||
if !ok {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
p.mu.Lock()
|
|
||||||
if p.children != nil {
|
|
||||||
delete(p.children, child)
|
|
||||||
}
|
|
||||||
p.mu.Unlock()
|
|
||||||
}
|
|
||||||
|
|
||||||
// A canceler is a context type that can be canceled directly. The
|
|
||||||
// implementations are *cancelCtx and *timerCtx.
|
|
||||||
type canceler interface {
|
|
||||||
cancel(removeFromParent bool, err error)
|
|
||||||
Done() <-chan struct{}
|
|
||||||
}
|
|
||||||
|
|
||||||
// A cancelCtx can be canceled. When canceled, it also cancels any children
|
|
||||||
// that implement canceler.
|
|
||||||
type cancelCtx struct {
|
|
||||||
Context
|
|
||||||
|
|
||||||
done chan struct{} // closed by the first cancel call.
|
|
||||||
|
|
||||||
mu sync.Mutex
|
|
||||||
children map[canceler]bool // set to nil by the first cancel call
|
|
||||||
err error // set to non-nil by the first cancel call
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *cancelCtx) Done() <-chan struct{} {
|
|
||||||
return c.done
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *cancelCtx) Err() error {
|
|
||||||
c.mu.Lock()
|
|
||||||
defer c.mu.Unlock()
|
|
||||||
return c.err
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *cancelCtx) String() string {
|
|
||||||
return fmt.Sprintf("%v.WithCancel", c.Context)
|
|
||||||
}
|
|
||||||
|
|
||||||
// cancel closes c.done, cancels each of c's children, and, if
|
|
||||||
// removeFromParent is true, removes c from its parent's children.
|
|
||||||
func (c *cancelCtx) cancel(removeFromParent bool, err error) {
|
|
||||||
if err == nil {
|
|
||||||
panic("context: internal error: missing cancel error")
|
|
||||||
}
|
|
||||||
c.mu.Lock()
|
|
||||||
if c.err != nil {
|
|
||||||
c.mu.Unlock()
|
|
||||||
return // already canceled
|
|
||||||
}
|
|
||||||
c.err = err
|
|
||||||
close(c.done)
|
|
||||||
for child := range c.children {
|
|
||||||
// NOTE: acquiring the child's lock while holding parent's lock.
|
|
||||||
child.cancel(false, err)
|
|
||||||
}
|
|
||||||
c.children = nil
|
|
||||||
c.mu.Unlock()
|
|
||||||
|
|
||||||
if removeFromParent {
|
|
||||||
removeChild(c.Context, c)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// WithDeadline returns a copy of the parent context with the deadline adjusted
|
|
||||||
// to be no later than d. If the parent's deadline is already earlier than d,
|
|
||||||
// WithDeadline(parent, d) is semantically equivalent to parent. The returned
|
|
||||||
// context's Done channel is closed when the deadline expires, when the returned
|
|
||||||
// cancel function is called, or when the parent context's Done channel is
|
|
||||||
// closed, whichever happens first.
|
|
||||||
//
|
|
||||||
// Canceling this context releases resources associated with it, so code should
|
|
||||||
// call cancel as soon as the operations running in this Context complete.
|
|
||||||
func WithDeadline(parent Context, deadline time.Time) (Context, CancelFunc) {
|
|
||||||
if cur, ok := parent.Deadline(); ok && cur.Before(deadline) {
|
|
||||||
// The current deadline is already sooner than the new one.
|
|
||||||
return WithCancel(parent)
|
|
||||||
}
|
|
||||||
c := &timerCtx{
|
|
||||||
cancelCtx: newCancelCtx(parent),
|
|
||||||
deadline: deadline,
|
|
||||||
}
|
|
||||||
propagateCancel(parent, c)
|
|
||||||
d := deadline.Sub(time.Now())
|
|
||||||
if d <= 0 {
|
|
||||||
c.cancel(true, DeadlineExceeded) // deadline has already passed
|
|
||||||
return c, func() { c.cancel(true, Canceled) }
|
|
||||||
}
|
|
||||||
c.mu.Lock()
|
|
||||||
defer c.mu.Unlock()
|
|
||||||
if c.err == nil {
|
|
||||||
c.timer = time.AfterFunc(d, func() {
|
|
||||||
c.cancel(true, DeadlineExceeded)
|
|
||||||
})
|
|
||||||
}
|
|
||||||
return c, func() { c.cancel(true, Canceled) }
|
|
||||||
}
|
|
||||||
|
|
||||||
// A timerCtx carries a timer and a deadline. It embeds a cancelCtx to
|
|
||||||
// implement Done and Err. It implements cancel by stopping its timer then
|
|
||||||
// delegating to cancelCtx.cancel.
|
|
||||||
type timerCtx struct {
|
|
||||||
*cancelCtx
|
|
||||||
timer *time.Timer // Under cancelCtx.mu.
|
|
||||||
|
|
||||||
deadline time.Time
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *timerCtx) Deadline() (deadline time.Time, ok bool) {
|
|
||||||
return c.deadline, true
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *timerCtx) String() string {
|
|
||||||
return fmt.Sprintf("%v.WithDeadline(%s [%s])", c.cancelCtx.Context, c.deadline, c.deadline.Sub(time.Now()))
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *timerCtx) cancel(removeFromParent bool, err error) {
|
|
||||||
c.cancelCtx.cancel(false, err)
|
|
||||||
if removeFromParent {
|
|
||||||
// Remove this timerCtx from its parent cancelCtx's children.
|
|
||||||
removeChild(c.cancelCtx.Context, c)
|
|
||||||
}
|
|
||||||
c.mu.Lock()
|
|
||||||
if c.timer != nil {
|
|
||||||
c.timer.Stop()
|
|
||||||
c.timer = nil
|
|
||||||
}
|
|
||||||
c.mu.Unlock()
|
|
||||||
}
|
|
||||||
|
|
||||||
// WithTimeout returns WithDeadline(parent, time.Now().Add(timeout)).
|
|
||||||
//
|
|
||||||
// Canceling this context releases resources associated with it, so code should
|
|
||||||
// call cancel as soon as the operations running in this Context complete:
|
|
||||||
//
|
|
||||||
// func slowOperationWithTimeout(ctx context.Context) (Result, error) {
|
|
||||||
// ctx, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
|
|
||||||
// defer cancel() // releases resources if slowOperation completes before timeout elapses
|
|
||||||
// return slowOperation(ctx)
|
|
||||||
// }
|
|
||||||
func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) {
|
|
||||||
return WithDeadline(parent, time.Now().Add(timeout))
|
|
||||||
}
|
|
||||||
|
|
||||||
// WithValue returns a copy of parent in which the value associated with key is
|
|
||||||
// val.
|
|
||||||
//
|
|
||||||
// Use context Values only for request-scoped data that transits processes and
|
|
||||||
// APIs, not for passing optional parameters to functions.
|
|
||||||
func WithValue(parent Context, key interface{}, val interface{}) Context {
|
|
||||||
return &valueCtx{parent, key, val}
|
|
||||||
}
|
|
||||||
|
|
||||||
// A valueCtx carries a key-value pair. It implements Value for that key and
|
|
||||||
// delegates all other calls to the embedded Context.
|
|
||||||
type valueCtx struct {
|
|
||||||
Context
|
|
||||||
key, val interface{}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *valueCtx) String() string {
|
|
||||||
return fmt.Sprintf("%v.WithValue(%#v, %#v)", c.Context, c.key, c.val)
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *valueCtx) Value(key interface{}) interface{} {
|
|
||||||
if c.key == key {
|
|
||||||
return c.val
|
|
||||||
}
|
|
||||||
return c.Context.Value(key)
|
|
||||||
}
|
|
109
vendor/golang.org/x/net/context/pre_go19.go
generated
vendored
109
vendor/golang.org/x/net/context/pre_go19.go
generated
vendored
|
@ -1,109 +0,0 @@
|
||||||
// Copyright 2014 The Go Authors. All rights reserved.
|
|
||||||
// Use of this source code is governed by a BSD-style
|
|
||||||
// license that can be found in the LICENSE file.
|
|
||||||
|
|
||||||
// +build !go1.9
|
|
||||||
|
|
||||||
package context
|
|
||||||
|
|
||||||
import "time"
|
|
||||||
|
|
||||||
// A Context carries a deadline, a cancelation signal, and other values across
|
|
||||||
// API boundaries.
|
|
||||||
//
|
|
||||||
// Context's methods may be called by multiple goroutines simultaneously.
|
|
||||||
type Context interface {
|
|
||||||
// Deadline returns the time when work done on behalf of this context
|
|
||||||
// should be canceled. Deadline returns ok==false when no deadline is
|
|
||||||
// set. Successive calls to Deadline return the same results.
|
|
||||||
Deadline() (deadline time.Time, ok bool)
|
|
||||||
|
|
||||||
// Done returns a channel that's closed when work done on behalf of this
|
|
||||||
// context should be canceled. Done may return nil if this context can
|
|
||||||
// never be canceled. Successive calls to Done return the same value.
|
|
||||||
//
|
|
||||||
// WithCancel arranges for Done to be closed when cancel is called;
|
|
||||||
// WithDeadline arranges for Done to be closed when the deadline
|
|
||||||
// expires; WithTimeout arranges for Done to be closed when the timeout
|
|
||||||
// elapses.
|
|
||||||
//
|
|
||||||
// Done is provided for use in select statements:
|
|
||||||
//
|
|
||||||
// // Stream generates values with DoSomething and sends them to out
|
|
||||||
// // until DoSomething returns an error or ctx.Done is closed.
|
|
||||||
// func Stream(ctx context.Context, out chan<- Value) error {
|
|
||||||
// for {
|
|
||||||
// v, err := DoSomething(ctx)
|
|
||||||
// if err != nil {
|
|
||||||
// return err
|
|
||||||
// }
|
|
||||||
// select {
|
|
||||||
// case <-ctx.Done():
|
|
||||||
// return ctx.Err()
|
|
||||||
// case out <- v:
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
//
|
|
||||||
// See http://blog.golang.org/pipelines for more examples of how to use
|
|
||||||
// a Done channel for cancelation.
|
|
||||||
Done() <-chan struct{}
|
|
||||||
|
|
||||||
// Err returns a non-nil error value after Done is closed. Err returns
|
|
||||||
// Canceled if the context was canceled or DeadlineExceeded if the
|
|
||||||
// context's deadline passed. No other values for Err are defined.
|
|
||||||
// After Done is closed, successive calls to Err return the same value.
|
|
||||||
Err() error
|
|
||||||
|
|
||||||
// Value returns the value associated with this context for key, or nil
|
|
||||||
// if no value is associated with key. Successive calls to Value with
|
|
||||||
// the same key returns the same result.
|
|
||||||
//
|
|
||||||
// Use context values only for request-scoped data that transits
|
|
||||||
// processes and API boundaries, not for passing optional parameters to
|
|
||||||
// functions.
|
|
||||||
//
|
|
||||||
// A key identifies a specific value in a Context. Functions that wish
|
|
||||||
// to store values in Context typically allocate a key in a global
|
|
||||||
// variable then use that key as the argument to context.WithValue and
|
|
||||||
// Context.Value. A key can be any type that supports equality;
|
|
||||||
// packages should define keys as an unexported type to avoid
|
|
||||||
// collisions.
|
|
||||||
//
|
|
||||||
// Packages that define a Context key should provide type-safe accessors
|
|
||||||
// for the values stores using that key:
|
|
||||||
//
|
|
||||||
// // Package user defines a User type that's stored in Contexts.
|
|
||||||
// package user
|
|
||||||
//
|
|
||||||
// import "golang.org/x/net/context"
|
|
||||||
//
|
|
||||||
// // User is the type of value stored in the Contexts.
|
|
||||||
// type User struct {...}
|
|
||||||
//
|
|
||||||
// // key is an unexported type for keys defined in this package.
|
|
||||||
// // This prevents collisions with keys defined in other packages.
|
|
||||||
// type key int
|
|
||||||
//
|
|
||||||
// // userKey is the key for user.User values in Contexts. It is
|
|
||||||
// // unexported; clients use user.NewContext and user.FromContext
|
|
||||||
// // instead of using this key directly.
|
|
||||||
// var userKey key = 0
|
|
||||||
//
|
|
||||||
// // NewContext returns a new Context that carries value u.
|
|
||||||
// func NewContext(ctx context.Context, u *User) context.Context {
|
|
||||||
// return context.WithValue(ctx, userKey, u)
|
|
||||||
// }
|
|
||||||
//
|
|
||||||
// // FromContext returns the User value stored in ctx, if any.
|
|
||||||
// func FromContext(ctx context.Context) (*User, bool) {
|
|
||||||
// u, ok := ctx.Value(userKey).(*User)
|
|
||||||
// return u, ok
|
|
||||||
// }
|
|
||||||
Value(key interface{}) interface{}
|
|
||||||
}
|
|
||||||
|
|
||||||
// A CancelFunc tells an operation to abandon its work.
|
|
||||||
// A CancelFunc does not wait for the work to stop.
|
|
||||||
// After the first call, subsequent calls to a CancelFunc do nothing.
|
|
||||||
type CancelFunc func()
|
|
795
vendor/golang.org/x/net/webdav/file.go
generated
vendored
795
vendor/golang.org/x/net/webdav/file.go
generated
vendored
|
@ -1,795 +0,0 @@
|
||||||
// Copyright 2014 The Go Authors. All rights reserved.
|
|
||||||
// Use of this source code is governed by a BSD-style
|
|
||||||
// license that can be found in the LICENSE file.
|
|
||||||
|
|
||||||
package webdav
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"encoding/xml"
|
|
||||||
"io"
|
|
||||||
"net/http"
|
|
||||||
"os"
|
|
||||||
"path"
|
|
||||||
"path/filepath"
|
|
||||||
"strings"
|
|
||||||
"sync"
|
|
||||||
"time"
|
|
||||||
)
|
|
||||||
|
|
||||||
// slashClean is equivalent to but slightly more efficient than
|
|
||||||
// path.Clean("/" + name).
|
|
||||||
func slashClean(name string) string {
|
|
||||||
if name == "" || name[0] != '/' {
|
|
||||||
name = "/" + name
|
|
||||||
}
|
|
||||||
return path.Clean(name)
|
|
||||||
}
|
|
||||||
|
|
||||||
// A FileSystem implements access to a collection of named files. The elements
|
|
||||||
// in a file path are separated by slash ('/', U+002F) characters, regardless
|
|
||||||
// of host operating system convention.
|
|
||||||
//
|
|
||||||
// Each method has the same semantics as the os package's function of the same
|
|
||||||
// name.
|
|
||||||
//
|
|
||||||
// Note that the os.Rename documentation says that "OS-specific restrictions
|
|
||||||
// might apply". In particular, whether or not renaming a file or directory
|
|
||||||
// overwriting another existing file or directory is an error is OS-dependent.
|
|
||||||
type FileSystem interface {
|
|
||||||
Mkdir(ctx context.Context, name string, perm os.FileMode) error
|
|
||||||
OpenFile(ctx context.Context, name string, flag int, perm os.FileMode) (File, error)
|
|
||||||
RemoveAll(ctx context.Context, name string) error
|
|
||||||
Rename(ctx context.Context, oldName, newName string) error
|
|
||||||
Stat(ctx context.Context, name string) (os.FileInfo, error)
|
|
||||||
}
|
|
||||||
|
|
||||||
// A File is returned by a FileSystem's OpenFile method and can be served by a
|
|
||||||
// Handler.
|
|
||||||
//
|
|
||||||
// A File may optionally implement the DeadPropsHolder interface, if it can
|
|
||||||
// load and save dead properties.
|
|
||||||
type File interface {
|
|
||||||
http.File
|
|
||||||
io.Writer
|
|
||||||
}
|
|
||||||
|
|
||||||
// A Dir implements FileSystem using the native file system restricted to a
|
|
||||||
// specific directory tree.
|
|
||||||
//
|
|
||||||
// While the FileSystem.OpenFile method takes '/'-separated paths, a Dir's
|
|
||||||
// string value is a filename on the native file system, not a URL, so it is
|
|
||||||
// separated by filepath.Separator, which isn't necessarily '/'.
|
|
||||||
//
|
|
||||||
// An empty Dir is treated as ".".
|
|
||||||
type Dir string
|
|
||||||
|
|
||||||
func (d Dir) resolve(name string) string {
|
|
||||||
// This implementation is based on Dir.Open's code in the standard net/http package.
|
|
||||||
if filepath.Separator != '/' && strings.IndexRune(name, filepath.Separator) >= 0 ||
|
|
||||||
strings.Contains(name, "\x00") {
|
|
||||||
return ""
|
|
||||||
}
|
|
||||||
dir := string(d)
|
|
||||||
if dir == "" {
|
|
||||||
dir = "."
|
|
||||||
}
|
|
||||||
return filepath.Join(dir, filepath.FromSlash(slashClean(name)))
|
|
||||||
}
|
|
||||||
|
|
||||||
func (d Dir) Mkdir(ctx context.Context, name string, perm os.FileMode) error {
|
|
||||||
if name = d.resolve(name); name == "" {
|
|
||||||
return os.ErrNotExist
|
|
||||||
}
|
|
||||||
return os.Mkdir(name, perm)
|
|
||||||
}
|
|
||||||
|
|
||||||
func (d Dir) OpenFile(ctx context.Context, name string, flag int, perm os.FileMode) (File, error) {
|
|
||||||
if name = d.resolve(name); name == "" {
|
|
||||||
return nil, os.ErrNotExist
|
|
||||||
}
|
|
||||||
f, err := os.OpenFile(name, flag, perm)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
return f, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (d Dir) RemoveAll(ctx context.Context, name string) error {
|
|
||||||
if name = d.resolve(name); name == "" {
|
|
||||||
return os.ErrNotExist
|
|
||||||
}
|
|
||||||
if name == filepath.Clean(string(d)) {
|
|
||||||
// Prohibit removing the virtual root directory.
|
|
||||||
return os.ErrInvalid
|
|
||||||
}
|
|
||||||
return os.RemoveAll(name)
|
|
||||||
}
|
|
||||||
|
|
||||||
func (d Dir) Rename(ctx context.Context, oldName, newName string) error {
|
|
||||||
if oldName = d.resolve(oldName); oldName == "" {
|
|
||||||
return os.ErrNotExist
|
|
||||||
}
|
|
||||||
if newName = d.resolve(newName); newName == "" {
|
|
||||||
return os.ErrNotExist
|
|
||||||
}
|
|
||||||
if root := filepath.Clean(string(d)); root == oldName || root == newName {
|
|
||||||
// Prohibit renaming from or to the virtual root directory.
|
|
||||||
return os.ErrInvalid
|
|
||||||
}
|
|
||||||
return os.Rename(oldName, newName)
|
|
||||||
}
|
|
||||||
|
|
||||||
func (d Dir) Stat(ctx context.Context, name string) (os.FileInfo, error) {
|
|
||||||
if name = d.resolve(name); name == "" {
|
|
||||||
return nil, os.ErrNotExist
|
|
||||||
}
|
|
||||||
return os.Stat(name)
|
|
||||||
}
|
|
||||||
|
|
||||||
// NewMemFS returns a new in-memory FileSystem implementation.
|
|
||||||
func NewMemFS() FileSystem {
|
|
||||||
return &memFS{
|
|
||||||
root: memFSNode{
|
|
||||||
children: make(map[string]*memFSNode),
|
|
||||||
mode: 0660 | os.ModeDir,
|
|
||||||
modTime: time.Now(),
|
|
||||||
},
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// A memFS implements FileSystem, storing all metadata and actual file data
|
|
||||||
// in-memory. No limits on filesystem size are used, so it is not recommended
|
|
||||||
// this be used where the clients are untrusted.
|
|
||||||
//
|
|
||||||
// Concurrent access is permitted. The tree structure is protected by a mutex,
|
|
||||||
// and each node's contents and metadata are protected by a per-node mutex.
|
|
||||||
//
|
|
||||||
// TODO: Enforce file permissions.
|
|
||||||
type memFS struct {
|
|
||||||
mu sync.Mutex
|
|
||||||
root memFSNode
|
|
||||||
}
|
|
||||||
|
|
||||||
// TODO: clean up and rationalize the walk/find code.
|
|
||||||
|
|
||||||
// walk walks the directory tree for the fullname, calling f at each step. If f
|
|
||||||
// returns an error, the walk will be aborted and return that same error.
|
|
||||||
//
|
|
||||||
// dir is the directory at that step, frag is the name fragment, and final is
|
|
||||||
// whether it is the final step. For example, walking "/foo/bar/x" will result
|
|
||||||
// in 3 calls to f:
|
|
||||||
// - "/", "foo", false
|
|
||||||
// - "/foo/", "bar", false
|
|
||||||
// - "/foo/bar/", "x", true
|
|
||||||
// The frag argument will be empty only if dir is the root node and the walk
|
|
||||||
// ends at that root node.
|
|
||||||
func (fs *memFS) walk(op, fullname string, f func(dir *memFSNode, frag string, final bool) error) error {
|
|
||||||
original := fullname
|
|
||||||
fullname = slashClean(fullname)
|
|
||||||
|
|
||||||
// Strip any leading "/"s to make fullname a relative path, as the walk
|
|
||||||
// starts at fs.root.
|
|
||||||
if fullname[0] == '/' {
|
|
||||||
fullname = fullname[1:]
|
|
||||||
}
|
|
||||||
dir := &fs.root
|
|
||||||
|
|
||||||
for {
|
|
||||||
frag, remaining := fullname, ""
|
|
||||||
i := strings.IndexRune(fullname, '/')
|
|
||||||
final := i < 0
|
|
||||||
if !final {
|
|
||||||
frag, remaining = fullname[:i], fullname[i+1:]
|
|
||||||
}
|
|
||||||
if frag == "" && dir != &fs.root {
|
|
||||||
panic("webdav: empty path fragment for a clean path")
|
|
||||||
}
|
|
||||||
if err := f(dir, frag, final); err != nil {
|
|
||||||
return &os.PathError{
|
|
||||||
Op: op,
|
|
||||||
Path: original,
|
|
||||||
Err: err,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if final {
|
|
||||||
break
|
|
||||||
}
|
|
||||||
child := dir.children[frag]
|
|
||||||
if child == nil {
|
|
||||||
return &os.PathError{
|
|
||||||
Op: op,
|
|
||||||
Path: original,
|
|
||||||
Err: os.ErrNotExist,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if !child.mode.IsDir() {
|
|
||||||
return &os.PathError{
|
|
||||||
Op: op,
|
|
||||||
Path: original,
|
|
||||||
Err: os.ErrInvalid,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
dir, fullname = child, remaining
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// find returns the parent of the named node and the relative name fragment
|
|
||||||
// from the parent to the child. For example, if finding "/foo/bar/baz" then
|
|
||||||
// parent will be the node for "/foo/bar" and frag will be "baz".
|
|
||||||
//
|
|
||||||
// If the fullname names the root node, then parent, frag and err will be zero.
|
|
||||||
//
|
|
||||||
// find returns an error if the parent does not already exist or the parent
|
|
||||||
// isn't a directory, but it will not return an error per se if the child does
|
|
||||||
// not already exist. The error returned is either nil or an *os.PathError
|
|
||||||
// whose Op is op.
|
|
||||||
func (fs *memFS) find(op, fullname string) (parent *memFSNode, frag string, err error) {
|
|
||||||
err = fs.walk(op, fullname, func(parent0 *memFSNode, frag0 string, final bool) error {
|
|
||||||
if !final {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
if frag0 != "" {
|
|
||||||
parent, frag = parent0, frag0
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
})
|
|
||||||
return parent, frag, err
|
|
||||||
}
|
|
||||||
|
|
||||||
func (fs *memFS) Mkdir(ctx context.Context, name string, perm os.FileMode) error {
|
|
||||||
fs.mu.Lock()
|
|
||||||
defer fs.mu.Unlock()
|
|
||||||
|
|
||||||
dir, frag, err := fs.find("mkdir", name)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
if dir == nil {
|
|
||||||
// We can't create the root.
|
|
||||||
return os.ErrInvalid
|
|
||||||
}
|
|
||||||
if _, ok := dir.children[frag]; ok {
|
|
||||||
return os.ErrExist
|
|
||||||
}
|
|
||||||
dir.children[frag] = &memFSNode{
|
|
||||||
children: make(map[string]*memFSNode),
|
|
||||||
mode: perm.Perm() | os.ModeDir,
|
|
||||||
modTime: time.Now(),
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (fs *memFS) OpenFile(ctx context.Context, name string, flag int, perm os.FileMode) (File, error) {
|
|
||||||
fs.mu.Lock()
|
|
||||||
defer fs.mu.Unlock()
|
|
||||||
|
|
||||||
dir, frag, err := fs.find("open", name)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
var n *memFSNode
|
|
||||||
if dir == nil {
|
|
||||||
// We're opening the root.
|
|
||||||
if flag&(os.O_WRONLY|os.O_RDWR) != 0 {
|
|
||||||
return nil, os.ErrPermission
|
|
||||||
}
|
|
||||||
n, frag = &fs.root, "/"
|
|
||||||
|
|
||||||
} else {
|
|
||||||
n = dir.children[frag]
|
|
||||||
if flag&(os.O_SYNC|os.O_APPEND) != 0 {
|
|
||||||
// memFile doesn't support these flags yet.
|
|
||||||
return nil, os.ErrInvalid
|
|
||||||
}
|
|
||||||
if flag&os.O_CREATE != 0 {
|
|
||||||
if flag&os.O_EXCL != 0 && n != nil {
|
|
||||||
return nil, os.ErrExist
|
|
||||||
}
|
|
||||||
if n == nil {
|
|
||||||
n = &memFSNode{
|
|
||||||
mode: perm.Perm(),
|
|
||||||
}
|
|
||||||
dir.children[frag] = n
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if n == nil {
|
|
||||||
return nil, os.ErrNotExist
|
|
||||||
}
|
|
||||||
if flag&(os.O_WRONLY|os.O_RDWR) != 0 && flag&os.O_TRUNC != 0 {
|
|
||||||
n.mu.Lock()
|
|
||||||
n.data = nil
|
|
||||||
n.mu.Unlock()
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
children := make([]os.FileInfo, 0, len(n.children))
|
|
||||||
for cName, c := range n.children {
|
|
||||||
children = append(children, c.stat(cName))
|
|
||||||
}
|
|
||||||
return &memFile{
|
|
||||||
n: n,
|
|
||||||
nameSnapshot: frag,
|
|
||||||
childrenSnapshot: children,
|
|
||||||
}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (fs *memFS) RemoveAll(ctx context.Context, name string) error {
|
|
||||||
fs.mu.Lock()
|
|
||||||
defer fs.mu.Unlock()
|
|
||||||
|
|
||||||
dir, frag, err := fs.find("remove", name)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
if dir == nil {
|
|
||||||
// We can't remove the root.
|
|
||||||
return os.ErrInvalid
|
|
||||||
}
|
|
||||||
delete(dir.children, frag)
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (fs *memFS) Rename(ctx context.Context, oldName, newName string) error {
|
|
||||||
fs.mu.Lock()
|
|
||||||
defer fs.mu.Unlock()
|
|
||||||
|
|
||||||
oldName = slashClean(oldName)
|
|
||||||
newName = slashClean(newName)
|
|
||||||
if oldName == newName {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
if strings.HasPrefix(newName, oldName+"/") {
|
|
||||||
// We can't rename oldName to be a sub-directory of itself.
|
|
||||||
return os.ErrInvalid
|
|
||||||
}
|
|
||||||
|
|
||||||
oDir, oFrag, err := fs.find("rename", oldName)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
if oDir == nil {
|
|
||||||
// We can't rename from the root.
|
|
||||||
return os.ErrInvalid
|
|
||||||
}
|
|
||||||
|
|
||||||
nDir, nFrag, err := fs.find("rename", newName)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
if nDir == nil {
|
|
||||||
// We can't rename to the root.
|
|
||||||
return os.ErrInvalid
|
|
||||||
}
|
|
||||||
|
|
||||||
oNode, ok := oDir.children[oFrag]
|
|
||||||
if !ok {
|
|
||||||
return os.ErrNotExist
|
|
||||||
}
|
|
||||||
if oNode.children != nil {
|
|
||||||
if nNode, ok := nDir.children[nFrag]; ok {
|
|
||||||
if nNode.children == nil {
|
|
||||||
return errNotADirectory
|
|
||||||
}
|
|
||||||
if len(nNode.children) != 0 {
|
|
||||||
return errDirectoryNotEmpty
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
delete(oDir.children, oFrag)
|
|
||||||
nDir.children[nFrag] = oNode
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (fs *memFS) Stat(ctx context.Context, name string) (os.FileInfo, error) {
|
|
||||||
fs.mu.Lock()
|
|
||||||
defer fs.mu.Unlock()
|
|
||||||
|
|
||||||
dir, frag, err := fs.find("stat", name)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
if dir == nil {
|
|
||||||
// We're stat'ting the root.
|
|
||||||
return fs.root.stat("/"), nil
|
|
||||||
}
|
|
||||||
if n, ok := dir.children[frag]; ok {
|
|
||||||
return n.stat(path.Base(name)), nil
|
|
||||||
}
|
|
||||||
return nil, os.ErrNotExist
|
|
||||||
}
|
|
||||||
|
|
||||||
// A memFSNode represents a single entry in the in-memory filesystem and also
|
|
||||||
// implements os.FileInfo.
|
|
||||||
type memFSNode struct {
|
|
||||||
// children is protected by memFS.mu.
|
|
||||||
children map[string]*memFSNode
|
|
||||||
|
|
||||||
mu sync.Mutex
|
|
||||||
data []byte
|
|
||||||
mode os.FileMode
|
|
||||||
modTime time.Time
|
|
||||||
deadProps map[xml.Name]Property
|
|
||||||
}
|
|
||||||
|
|
||||||
func (n *memFSNode) stat(name string) *memFileInfo {
|
|
||||||
n.mu.Lock()
|
|
||||||
defer n.mu.Unlock()
|
|
||||||
return &memFileInfo{
|
|
||||||
name: name,
|
|
||||||
size: int64(len(n.data)),
|
|
||||||
mode: n.mode,
|
|
||||||
modTime: n.modTime,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (n *memFSNode) DeadProps() (map[xml.Name]Property, error) {
|
|
||||||
n.mu.Lock()
|
|
||||||
defer n.mu.Unlock()
|
|
||||||
if len(n.deadProps) == 0 {
|
|
||||||
return nil, nil
|
|
||||||
}
|
|
||||||
ret := make(map[xml.Name]Property, len(n.deadProps))
|
|
||||||
for k, v := range n.deadProps {
|
|
||||||
ret[k] = v
|
|
||||||
}
|
|
||||||
return ret, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (n *memFSNode) Patch(patches []Proppatch) ([]Propstat, error) {
|
|
||||||
n.mu.Lock()
|
|
||||||
defer n.mu.Unlock()
|
|
||||||
pstat := Propstat{Status: http.StatusOK}
|
|
||||||
for _, patch := range patches {
|
|
||||||
for _, p := range patch.Props {
|
|
||||||
pstat.Props = append(pstat.Props, Property{XMLName: p.XMLName})
|
|
||||||
if patch.Remove {
|
|
||||||
delete(n.deadProps, p.XMLName)
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
if n.deadProps == nil {
|
|
||||||
n.deadProps = map[xml.Name]Property{}
|
|
||||||
}
|
|
||||||
n.deadProps[p.XMLName] = p
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return []Propstat{pstat}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
type memFileInfo struct {
|
|
||||||
name string
|
|
||||||
size int64
|
|
||||||
mode os.FileMode
|
|
||||||
modTime time.Time
|
|
||||||
}
|
|
||||||
|
|
||||||
func (f *memFileInfo) Name() string { return f.name }
|
|
||||||
func (f *memFileInfo) Size() int64 { return f.size }
|
|
||||||
func (f *memFileInfo) Mode() os.FileMode { return f.mode }
|
|
||||||
func (f *memFileInfo) ModTime() time.Time { return f.modTime }
|
|
||||||
func (f *memFileInfo) IsDir() bool { return f.mode.IsDir() }
|
|
||||||
func (f *memFileInfo) Sys() interface{} { return nil }
|
|
||||||
|
|
||||||
// A memFile is a File implementation for a memFSNode. It is a per-file (not
|
|
||||||
// per-node) read/write position, and a snapshot of the memFS' tree structure
|
|
||||||
// (a node's name and children) for that node.
|
|
||||||
type memFile struct {
|
|
||||||
n *memFSNode
|
|
||||||
nameSnapshot string
|
|
||||||
childrenSnapshot []os.FileInfo
|
|
||||||
// pos is protected by n.mu.
|
|
||||||
pos int
|
|
||||||
}
|
|
||||||
|
|
||||||
// A *memFile implements the optional DeadPropsHolder interface.
|
|
||||||
var _ DeadPropsHolder = (*memFile)(nil)
|
|
||||||
|
|
||||||
func (f *memFile) DeadProps() (map[xml.Name]Property, error) { return f.n.DeadProps() }
|
|
||||||
func (f *memFile) Patch(patches []Proppatch) ([]Propstat, error) { return f.n.Patch(patches) }
|
|
||||||
|
|
||||||
func (f *memFile) Close() error {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (f *memFile) Read(p []byte) (int, error) {
|
|
||||||
f.n.mu.Lock()
|
|
||||||
defer f.n.mu.Unlock()
|
|
||||||
if f.n.mode.IsDir() {
|
|
||||||
return 0, os.ErrInvalid
|
|
||||||
}
|
|
||||||
if f.pos >= len(f.n.data) {
|
|
||||||
return 0, io.EOF
|
|
||||||
}
|
|
||||||
n := copy(p, f.n.data[f.pos:])
|
|
||||||
f.pos += n
|
|
||||||
return n, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (f *memFile) Readdir(count int) ([]os.FileInfo, error) {
|
|
||||||
f.n.mu.Lock()
|
|
||||||
defer f.n.mu.Unlock()
|
|
||||||
if !f.n.mode.IsDir() {
|
|
||||||
return nil, os.ErrInvalid
|
|
||||||
}
|
|
||||||
old := f.pos
|
|
||||||
if old >= len(f.childrenSnapshot) {
|
|
||||||
// The os.File Readdir docs say that at the end of a directory,
|
|
||||||
// the error is io.EOF if count > 0 and nil if count <= 0.
|
|
||||||
if count > 0 {
|
|
||||||
return nil, io.EOF
|
|
||||||
}
|
|
||||||
return nil, nil
|
|
||||||
}
|
|
||||||
if count > 0 {
|
|
||||||
f.pos += count
|
|
||||||
if f.pos > len(f.childrenSnapshot) {
|
|
||||||
f.pos = len(f.childrenSnapshot)
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
f.pos = len(f.childrenSnapshot)
|
|
||||||
old = 0
|
|
||||||
}
|
|
||||||
return f.childrenSnapshot[old:f.pos], nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (f *memFile) Seek(offset int64, whence int) (int64, error) {
|
|
||||||
f.n.mu.Lock()
|
|
||||||
defer f.n.mu.Unlock()
|
|
||||||
npos := f.pos
|
|
||||||
// TODO: How to handle offsets greater than the size of system int?
|
|
||||||
switch whence {
|
|
||||||
case os.SEEK_SET:
|
|
||||||
npos = int(offset)
|
|
||||||
case os.SEEK_CUR:
|
|
||||||
npos += int(offset)
|
|
||||||
case os.SEEK_END:
|
|
||||||
npos = len(f.n.data) + int(offset)
|
|
||||||
default:
|
|
||||||
npos = -1
|
|
||||||
}
|
|
||||||
if npos < 0 {
|
|
||||||
return 0, os.ErrInvalid
|
|
||||||
}
|
|
||||||
f.pos = npos
|
|
||||||
return int64(f.pos), nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (f *memFile) Stat() (os.FileInfo, error) {
|
|
||||||
return f.n.stat(f.nameSnapshot), nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (f *memFile) Write(p []byte) (int, error) {
|
|
||||||
lenp := len(p)
|
|
||||||
f.n.mu.Lock()
|
|
||||||
defer f.n.mu.Unlock()
|
|
||||||
|
|
||||||
if f.n.mode.IsDir() {
|
|
||||||
return 0, os.ErrInvalid
|
|
||||||
}
|
|
||||||
if f.pos < len(f.n.data) {
|
|
||||||
n := copy(f.n.data[f.pos:], p)
|
|
||||||
f.pos += n
|
|
||||||
p = p[n:]
|
|
||||||
} else if f.pos > len(f.n.data) {
|
|
||||||
// Write permits the creation of holes, if we've seek'ed past the
|
|
||||||
// existing end of file.
|
|
||||||
if f.pos <= cap(f.n.data) {
|
|
||||||
oldLen := len(f.n.data)
|
|
||||||
f.n.data = f.n.data[:f.pos]
|
|
||||||
hole := f.n.data[oldLen:]
|
|
||||||
for i := range hole {
|
|
||||||
hole[i] = 0
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
d := make([]byte, f.pos, f.pos+len(p))
|
|
||||||
copy(d, f.n.data)
|
|
||||||
f.n.data = d
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if len(p) > 0 {
|
|
||||||
// We should only get here if f.pos == len(f.n.data).
|
|
||||||
f.n.data = append(f.n.data, p...)
|
|
||||||
f.pos = len(f.n.data)
|
|
||||||
}
|
|
||||||
f.n.modTime = time.Now()
|
|
||||||
return lenp, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// moveFiles moves files and/or directories from src to dst.
|
|
||||||
//
|
|
||||||
// See section 9.9.4 for when various HTTP status codes apply.
|
|
||||||
func moveFiles(ctx context.Context, fs FileSystem, src, dst string, overwrite bool) (status int, err error) {
|
|
||||||
created := false
|
|
||||||
if _, err := fs.Stat(ctx, dst); err != nil {
|
|
||||||
if !os.IsNotExist(err) {
|
|
||||||
return http.StatusForbidden, err
|
|
||||||
}
|
|
||||||
created = true
|
|
||||||
} else if overwrite {
|
|
||||||
// Section 9.9.3 says that "If a resource exists at the destination
|
|
||||||
// and the Overwrite header is "T", then prior to performing the move,
|
|
||||||
// the server must perform a DELETE with "Depth: infinity" on the
|
|
||||||
// destination resource.
|
|
||||||
if err := fs.RemoveAll(ctx, dst); err != nil {
|
|
||||||
return http.StatusForbidden, err
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
return http.StatusPreconditionFailed, os.ErrExist
|
|
||||||
}
|
|
||||||
if err := fs.Rename(ctx, src, dst); err != nil {
|
|
||||||
return http.StatusForbidden, err
|
|
||||||
}
|
|
||||||
if created {
|
|
||||||
return http.StatusCreated, nil
|
|
||||||
}
|
|
||||||
return http.StatusNoContent, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func copyProps(dst, src File) error {
|
|
||||||
d, ok := dst.(DeadPropsHolder)
|
|
||||||
if !ok {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
s, ok := src.(DeadPropsHolder)
|
|
||||||
if !ok {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
m, err := s.DeadProps()
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
props := make([]Property, 0, len(m))
|
|
||||||
for _, prop := range m {
|
|
||||||
props = append(props, prop)
|
|
||||||
}
|
|
||||||
_, err = d.Patch([]Proppatch{{Props: props}})
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
// copyFiles copies files and/or directories from src to dst.
|
|
||||||
//
|
|
||||||
// See section 9.8.5 for when various HTTP status codes apply.
|
|
||||||
func copyFiles(ctx context.Context, fs FileSystem, src, dst string, overwrite bool, depth int, recursion int) (status int, err error) {
|
|
||||||
if recursion == 1000 {
|
|
||||||
return http.StatusInternalServerError, errRecursionTooDeep
|
|
||||||
}
|
|
||||||
recursion++
|
|
||||||
|
|
||||||
// TODO: section 9.8.3 says that "Note that an infinite-depth COPY of /A/
|
|
||||||
// into /A/B/ could lead to infinite recursion if not handled correctly."
|
|
||||||
|
|
||||||
srcFile, err := fs.OpenFile(ctx, src, os.O_RDONLY, 0)
|
|
||||||
if err != nil {
|
|
||||||
if os.IsNotExist(err) {
|
|
||||||
return http.StatusNotFound, err
|
|
||||||
}
|
|
||||||
return http.StatusInternalServerError, err
|
|
||||||
}
|
|
||||||
defer srcFile.Close()
|
|
||||||
srcStat, err := srcFile.Stat()
|
|
||||||
if err != nil {
|
|
||||||
if os.IsNotExist(err) {
|
|
||||||
return http.StatusNotFound, err
|
|
||||||
}
|
|
||||||
return http.StatusInternalServerError, err
|
|
||||||
}
|
|
||||||
srcPerm := srcStat.Mode() & os.ModePerm
|
|
||||||
|
|
||||||
created := false
|
|
||||||
if _, err := fs.Stat(ctx, dst); err != nil {
|
|
||||||
if os.IsNotExist(err) {
|
|
||||||
created = true
|
|
||||||
} else {
|
|
||||||
return http.StatusForbidden, err
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
if !overwrite {
|
|
||||||
return http.StatusPreconditionFailed, os.ErrExist
|
|
||||||
}
|
|
||||||
if err := fs.RemoveAll(ctx, dst); err != nil && !os.IsNotExist(err) {
|
|
||||||
return http.StatusForbidden, err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if srcStat.IsDir() {
|
|
||||||
if err := fs.Mkdir(ctx, dst, srcPerm); err != nil {
|
|
||||||
return http.StatusForbidden, err
|
|
||||||
}
|
|
||||||
if depth == infiniteDepth {
|
|
||||||
children, err := srcFile.Readdir(-1)
|
|
||||||
if err != nil {
|
|
||||||
return http.StatusForbidden, err
|
|
||||||
}
|
|
||||||
for _, c := range children {
|
|
||||||
name := c.Name()
|
|
||||||
s := path.Join(src, name)
|
|
||||||
d := path.Join(dst, name)
|
|
||||||
cStatus, cErr := copyFiles(ctx, fs, s, d, overwrite, depth, recursion)
|
|
||||||
if cErr != nil {
|
|
||||||
// TODO: MultiStatus.
|
|
||||||
return cStatus, cErr
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
} else {
|
|
||||||
dstFile, err := fs.OpenFile(ctx, dst, os.O_RDWR|os.O_CREATE|os.O_TRUNC, srcPerm)
|
|
||||||
if err != nil {
|
|
||||||
if os.IsNotExist(err) {
|
|
||||||
return http.StatusConflict, err
|
|
||||||
}
|
|
||||||
return http.StatusForbidden, err
|
|
||||||
|
|
||||||
}
|
|
||||||
_, copyErr := io.Copy(dstFile, srcFile)
|
|
||||||
propsErr := copyProps(dstFile, srcFile)
|
|
||||||
closeErr := dstFile.Close()
|
|
||||||
if copyErr != nil {
|
|
||||||
return http.StatusInternalServerError, copyErr
|
|
||||||
}
|
|
||||||
if propsErr != nil {
|
|
||||||
return http.StatusInternalServerError, propsErr
|
|
||||||
}
|
|
||||||
if closeErr != nil {
|
|
||||||
return http.StatusInternalServerError, closeErr
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if created {
|
|
||||||
return http.StatusCreated, nil
|
|
||||||
}
|
|
||||||
return http.StatusNoContent, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// walkFS traverses filesystem fs starting at name up to depth levels.
|
|
||||||
//
|
|
||||||
// Allowed values for depth are 0, 1 or infiniteDepth. For each visited node,
|
|
||||||
// walkFS calls walkFn. If a visited file system node is a directory and
|
|
||||||
// walkFn returns filepath.SkipDir, walkFS will skip traversal of this node.
|
|
||||||
func walkFS(ctx context.Context, fs FileSystem, depth int, name string, info os.FileInfo, walkFn filepath.WalkFunc) error {
|
|
||||||
// This implementation is based on Walk's code in the standard path/filepath package.
|
|
||||||
err := walkFn(name, info, nil)
|
|
||||||
if err != nil {
|
|
||||||
if info.IsDir() && err == filepath.SkipDir {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
if !info.IsDir() || depth == 0 {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
if depth == 1 {
|
|
||||||
depth = 0
|
|
||||||
}
|
|
||||||
|
|
||||||
// Read directory names.
|
|
||||||
f, err := fs.OpenFile(ctx, name, os.O_RDONLY, 0)
|
|
||||||
if err != nil {
|
|
||||||
return walkFn(name, info, err)
|
|
||||||
}
|
|
||||||
fileInfos, err := f.Readdir(0)
|
|
||||||
f.Close()
|
|
||||||
if err != nil {
|
|
||||||
return walkFn(name, info, err)
|
|
||||||
}
|
|
||||||
|
|
||||||
for _, fileInfo := range fileInfos {
|
|
||||||
filename := path.Join(name, fileInfo.Name())
|
|
||||||
fileInfo, err := fs.Stat(ctx, filename)
|
|
||||||
if err != nil {
|
|
||||||
if err := walkFn(filename, fileInfo, err); err != nil && err != filepath.SkipDir {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
err = walkFS(ctx, fs, depth, filename, fileInfo, walkFn)
|
|
||||||
if err != nil {
|
|
||||||
if !fileInfo.IsDir() || err != filepath.SkipDir {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
173
vendor/golang.org/x/net/webdav/if.go
generated
vendored
173
vendor/golang.org/x/net/webdav/if.go
generated
vendored
|
@ -1,173 +0,0 @@
|
||||||
// Copyright 2014 The Go Authors. All rights reserved.
|
|
||||||
// Use of this source code is governed by a BSD-style
|
|
||||||
// license that can be found in the LICENSE file.
|
|
||||||
|
|
||||||
package webdav
|
|
||||||
|
|
||||||
// The If header is covered by Section 10.4.
|
|
||||||
// http://www.webdav.org/specs/rfc4918.html#HEADER_If
|
|
||||||
|
|
||||||
import (
|
|
||||||
"strings"
|
|
||||||
)
|
|
||||||
|
|
||||||
// ifHeader is a disjunction (OR) of ifLists.
|
|
||||||
type ifHeader struct {
|
|
||||||
lists []ifList
|
|
||||||
}
|
|
||||||
|
|
||||||
// ifList is a conjunction (AND) of Conditions, and an optional resource tag.
|
|
||||||
type ifList struct {
|
|
||||||
resourceTag string
|
|
||||||
conditions []Condition
|
|
||||||
}
|
|
||||||
|
|
||||||
// parseIfHeader parses the "If: foo bar" HTTP header. The httpHeader string
|
|
||||||
// should omit the "If:" prefix and have any "\r\n"s collapsed to a " ", as is
|
|
||||||
// returned by req.Header.Get("If") for a http.Request req.
|
|
||||||
func parseIfHeader(httpHeader string) (h ifHeader, ok bool) {
|
|
||||||
s := strings.TrimSpace(httpHeader)
|
|
||||||
switch tokenType, _, _ := lex(s); tokenType {
|
|
||||||
case '(':
|
|
||||||
return parseNoTagLists(s)
|
|
||||||
case angleTokenType:
|
|
||||||
return parseTaggedLists(s)
|
|
||||||
default:
|
|
||||||
return ifHeader{}, false
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func parseNoTagLists(s string) (h ifHeader, ok bool) {
|
|
||||||
for {
|
|
||||||
l, remaining, ok := parseList(s)
|
|
||||||
if !ok {
|
|
||||||
return ifHeader{}, false
|
|
||||||
}
|
|
||||||
h.lists = append(h.lists, l)
|
|
||||||
if remaining == "" {
|
|
||||||
return h, true
|
|
||||||
}
|
|
||||||
s = remaining
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func parseTaggedLists(s string) (h ifHeader, ok bool) {
|
|
||||||
resourceTag, n := "", 0
|
|
||||||
for first := true; ; first = false {
|
|
||||||
tokenType, tokenStr, remaining := lex(s)
|
|
||||||
switch tokenType {
|
|
||||||
case angleTokenType:
|
|
||||||
if !first && n == 0 {
|
|
||||||
return ifHeader{}, false
|
|
||||||
}
|
|
||||||
resourceTag, n = tokenStr, 0
|
|
||||||
s = remaining
|
|
||||||
case '(':
|
|
||||||
n++
|
|
||||||
l, remaining, ok := parseList(s)
|
|
||||||
if !ok {
|
|
||||||
return ifHeader{}, false
|
|
||||||
}
|
|
||||||
l.resourceTag = resourceTag
|
|
||||||
h.lists = append(h.lists, l)
|
|
||||||
if remaining == "" {
|
|
||||||
return h, true
|
|
||||||
}
|
|
||||||
s = remaining
|
|
||||||
default:
|
|
||||||
return ifHeader{}, false
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func parseList(s string) (l ifList, remaining string, ok bool) {
|
|
||||||
tokenType, _, s := lex(s)
|
|
||||||
if tokenType != '(' {
|
|
||||||
return ifList{}, "", false
|
|
||||||
}
|
|
||||||
for {
|
|
||||||
tokenType, _, remaining = lex(s)
|
|
||||||
if tokenType == ')' {
|
|
||||||
if len(l.conditions) == 0 {
|
|
||||||
return ifList{}, "", false
|
|
||||||
}
|
|
||||||
return l, remaining, true
|
|
||||||
}
|
|
||||||
c, remaining, ok := parseCondition(s)
|
|
||||||
if !ok {
|
|
||||||
return ifList{}, "", false
|
|
||||||
}
|
|
||||||
l.conditions = append(l.conditions, c)
|
|
||||||
s = remaining
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func parseCondition(s string) (c Condition, remaining string, ok bool) {
|
|
||||||
tokenType, tokenStr, s := lex(s)
|
|
||||||
if tokenType == notTokenType {
|
|
||||||
c.Not = true
|
|
||||||
tokenType, tokenStr, s = lex(s)
|
|
||||||
}
|
|
||||||
switch tokenType {
|
|
||||||
case strTokenType, angleTokenType:
|
|
||||||
c.Token = tokenStr
|
|
||||||
case squareTokenType:
|
|
||||||
c.ETag = tokenStr
|
|
||||||
default:
|
|
||||||
return Condition{}, "", false
|
|
||||||
}
|
|
||||||
return c, s, true
|
|
||||||
}
|
|
||||||
|
|
||||||
// Single-rune tokens like '(' or ')' have a token type equal to their rune.
|
|
||||||
// All other tokens have a negative token type.
|
|
||||||
const (
|
|
||||||
errTokenType = rune(-1)
|
|
||||||
eofTokenType = rune(-2)
|
|
||||||
strTokenType = rune(-3)
|
|
||||||
notTokenType = rune(-4)
|
|
||||||
angleTokenType = rune(-5)
|
|
||||||
squareTokenType = rune(-6)
|
|
||||||
)
|
|
||||||
|
|
||||||
func lex(s string) (tokenType rune, tokenStr string, remaining string) {
|
|
||||||
// The net/textproto Reader that parses the HTTP header will collapse
|
|
||||||
// Linear White Space that spans multiple "\r\n" lines to a single " ",
|
|
||||||
// so we don't need to look for '\r' or '\n'.
|
|
||||||
for len(s) > 0 && (s[0] == '\t' || s[0] == ' ') {
|
|
||||||
s = s[1:]
|
|
||||||
}
|
|
||||||
if len(s) == 0 {
|
|
||||||
return eofTokenType, "", ""
|
|
||||||
}
|
|
||||||
i := 0
|
|
||||||
loop:
|
|
||||||
for ; i < len(s); i++ {
|
|
||||||
switch s[i] {
|
|
||||||
case '\t', ' ', '(', ')', '<', '>', '[', ']':
|
|
||||||
break loop
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if i != 0 {
|
|
||||||
tokenStr, remaining = s[:i], s[i:]
|
|
||||||
if tokenStr == "Not" {
|
|
||||||
return notTokenType, "", remaining
|
|
||||||
}
|
|
||||||
return strTokenType, tokenStr, remaining
|
|
||||||
}
|
|
||||||
|
|
||||||
j := 0
|
|
||||||
switch s[0] {
|
|
||||||
case '<':
|
|
||||||
j, tokenType = strings.IndexByte(s, '>'), angleTokenType
|
|
||||||
case '[':
|
|
||||||
j, tokenType = strings.IndexByte(s, ']'), squareTokenType
|
|
||||||
default:
|
|
||||||
return rune(s[0]), "", s[1:]
|
|
||||||
}
|
|
||||||
if j < 0 {
|
|
||||||
return errTokenType, "", ""
|
|
||||||
}
|
|
||||||
return tokenType, s[1:j], s[j+1:]
|
|
||||||
}
|
|
11
vendor/golang.org/x/net/webdav/internal/xml/README
generated
vendored
11
vendor/golang.org/x/net/webdav/internal/xml/README
generated
vendored
|
@ -1,11 +0,0 @@
|
||||||
This is a fork of the encoding/xml package at ca1d6c4, the last commit before
https://go.googlesource.com/go/+/c0d6d33 "encoding/xml: restore Go 1.4 name
space behavior" made late in the lead-up to the Go 1.5 release.

The list of encoding/xml changes is at
https://go.googlesource.com/go/+log/master/src/encoding/xml

This fork is temporary, and I (nigeltao) expect to revert it after Go 1.6 is
released.

See http://golang.org/issue/11841
1223
vendor/golang.org/x/net/webdav/internal/xml/marshal.go
generated
vendored
File diff suppressed because it is too large
Load diff
692
vendor/golang.org/x/net/webdav/internal/xml/read.go
generated
vendored
@@ -1,692 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
|
|
||||||
// Use of this source code is governed by a BSD-style
|
|
||||||
// license that can be found in the LICENSE file.
|
|
||||||
|
|
||||||
package xml
|
|
||||||
|
|
||||||
import (
|
|
||||||
"bytes"
|
|
||||||
"encoding"
|
|
||||||
"errors"
|
|
||||||
"fmt"
|
|
||||||
"reflect"
|
|
||||||
"strconv"
|
|
||||||
"strings"
|
|
||||||
)
|
|
||||||
|
|
||||||
// BUG(rsc): Mapping between XML elements and data structures is inherently flawed:
|
|
||||||
// an XML element is an order-dependent collection of anonymous
|
|
||||||
// values, while a data structure is an order-independent collection
|
|
||||||
// of named values.
|
|
||||||
// See package json for a textual representation more suitable
|
|
||||||
// to data structures.
|
|
||||||
|
|
||||||
// Unmarshal parses the XML-encoded data and stores the result in
|
|
||||||
// the value pointed to by v, which must be an arbitrary struct,
|
|
||||||
// slice, or string. Well-formed data that does not fit into v is
|
|
||||||
// discarded.
|
|
||||||
//
|
|
||||||
// Because Unmarshal uses the reflect package, it can only assign
|
|
||||||
// to exported (upper case) fields. Unmarshal uses a case-sensitive
|
|
||||||
// comparison to match XML element names to tag values and struct
|
|
||||||
// field names.
|
|
||||||
//
|
|
||||||
// Unmarshal maps an XML element to a struct using the following rules.
|
|
||||||
// In the rules, the tag of a field refers to the value associated with the
|
|
||||||
// key 'xml' in the struct field's tag (see the example above).
|
|
||||||
//
|
|
||||||
// * If the struct has a field of type []byte or string with tag
|
|
||||||
// ",innerxml", Unmarshal accumulates the raw XML nested inside the
|
|
||||||
// element in that field. The rest of the rules still apply.
|
|
||||||
//
|
|
||||||
// * If the struct has a field named XMLName of type xml.Name,
|
|
||||||
// Unmarshal records the element name in that field.
|
|
||||||
//
|
|
||||||
// * If the XMLName field has an associated tag of the form
|
|
||||||
// "name" or "namespace-URL name", the XML element must have
|
|
||||||
// the given name (and, optionally, name space) or else Unmarshal
|
|
||||||
// returns an error.
|
|
||||||
//
|
|
||||||
// * If the XML element has an attribute whose name matches a
|
|
||||||
// struct field name with an associated tag containing ",attr" or
|
|
||||||
// the explicit name in a struct field tag of the form "name,attr",
|
|
||||||
// Unmarshal records the attribute value in that field.
|
|
||||||
//
|
|
||||||
// * If the XML element contains character data, that data is
|
|
||||||
// accumulated in the first struct field that has tag ",chardata".
|
|
||||||
// The struct field may have type []byte or string.
|
|
||||||
// If there is no such field, the character data is discarded.
|
|
||||||
//
|
|
||||||
// * If the XML element contains comments, they are accumulated in
|
|
||||||
// the first struct field that has tag ",comment". The struct
|
|
||||||
// field may have type []byte or string. If there is no such
|
|
||||||
// field, the comments are discarded.
|
|
||||||
//
|
|
||||||
// * If the XML element contains a sub-element whose name matches
|
|
||||||
// the prefix of a tag formatted as "a" or "a>b>c", unmarshal
|
|
||||||
// will descend into the XML structure looking for elements with the
|
|
||||||
// given names, and will map the innermost elements to that struct
|
|
||||||
// field. A tag starting with ">" is equivalent to one starting
|
|
||||||
// with the field name followed by ">".
|
|
||||||
//
|
|
||||||
// * If the XML element contains a sub-element whose name matches
|
|
||||||
// a struct field's XMLName tag and the struct field has no
|
|
||||||
// explicit name tag as per the previous rule, unmarshal maps
|
|
||||||
// the sub-element to that struct field.
|
|
||||||
//
|
|
||||||
// * If the XML element contains a sub-element whose name matches a
|
|
||||||
// field without any mode flags (",attr", ",chardata", etc), Unmarshal
|
|
||||||
// maps the sub-element to that struct field.
|
|
||||||
//
|
|
||||||
// * If the XML element contains a sub-element that hasn't matched any
|
|
||||||
// of the above rules and the struct has a field with tag ",any",
|
|
||||||
// unmarshal maps the sub-element to that struct field.
|
|
||||||
//
|
|
||||||
// * An anonymous struct field is handled as if the fields of its
|
|
||||||
// value were part of the outer struct.
|
|
||||||
//
|
|
||||||
// * A struct field with tag "-" is never unmarshalled into.
|
|
||||||
//
|
|
||||||
// Unmarshal maps an XML element to a string or []byte by saving the
|
|
||||||
// concatenation of that element's character data in the string or
|
|
||||||
// []byte. The saved []byte is never nil.
|
|
||||||
//
|
|
||||||
// Unmarshal maps an attribute value to a string or []byte by saving
|
|
||||||
// the value in the string or slice.
|
|
||||||
//
|
|
||||||
// Unmarshal maps an XML element to a slice by extending the length of
|
|
||||||
// the slice and mapping the element to the newly created value.
|
|
||||||
//
|
|
||||||
// Unmarshal maps an XML element or attribute value to a bool by
|
|
||||||
// setting it to the boolean value represented by the string.
|
|
||||||
//
|
|
||||||
// Unmarshal maps an XML element or attribute value to an integer or
|
|
||||||
// floating-point field by setting the field to the result of
|
|
||||||
// interpreting the string value in decimal. There is no check for
|
|
||||||
// overflow.
|
|
||||||
//
|
|
||||||
// Unmarshal maps an XML element to an xml.Name by recording the
|
|
||||||
// element name.
|
|
||||||
//
|
|
||||||
// Unmarshal maps an XML element to a pointer by setting the pointer
|
|
||||||
// to a freshly allocated value and then mapping the element to that value.
|
|
||||||
//
|
|
||||||
func Unmarshal(data []byte, v interface{}) error {
|
|
||||||
return NewDecoder(bytes.NewReader(data)).Decode(v)
|
|
||||||
}
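The mapping rules described in the comment above match the standard library's encoding/xml, from which this internal fork was taken. A minimal sketch of a few of those rules using the standard package (the Book type and the sample document are made up for illustration):

package main

import (
	"encoding/xml"
	"fmt"
)

type Book struct {
	XMLName xml.Name `xml:"book"`        // element name must be <book>
	ID      string   `xml:"id,attr"`     // taken from the id="" attribute
	Title   string   `xml:"title"`       // child element <title>
	Author  string   `xml:"meta>author"` // nested path <meta><author>
	Notes   string   `xml:",innerxml"`   // raw XML nested inside <book>
}

func main() {
	data := []byte(`<book id="42"><title>WebDAV</title><meta><author>rsc</author></meta></book>`)
	var b Book
	if err := xml.Unmarshal(data, &b); err != nil {
		panic(err)
	}
	fmt.Printf("%s by %s (id %s)\n", b.Title, b.Author, b.ID)
}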
|
|
||||||
|
|
||||||
// Decode works like xml.Unmarshal, except it reads the decoder
|
|
||||||
// stream to find the start element.
|
|
||||||
func (d *Decoder) Decode(v interface{}) error {
|
|
||||||
return d.DecodeElement(v, nil)
|
|
||||||
}
|
|
||||||
|
|
||||||
// DecodeElement works like xml.Unmarshal except that it takes
|
|
||||||
// a pointer to the start XML element to decode into v.
|
|
||||||
// It is useful when a client reads some raw XML tokens itself
|
|
||||||
// but also wants to defer to Unmarshal for some elements.
|
|
||||||
func (d *Decoder) DecodeElement(v interface{}, start *StartElement) error {
|
|
||||||
val := reflect.ValueOf(v)
|
|
||||||
if val.Kind() != reflect.Ptr {
|
|
||||||
return errors.New("non-pointer passed to Unmarshal")
|
|
||||||
}
|
|
||||||
return d.unmarshal(val.Elem(), start)
|
|
||||||
}
|
|
||||||
|
|
||||||
// An UnmarshalError represents an error in the unmarshalling process.
|
|
||||||
type UnmarshalError string
|
|
||||||
|
|
||||||
func (e UnmarshalError) Error() string { return string(e) }
|
|
||||||
|
|
||||||
// Unmarshaler is the interface implemented by objects that can unmarshal
|
|
||||||
// an XML element description of themselves.
|
|
||||||
//
|
|
||||||
// UnmarshalXML decodes a single XML element
|
|
||||||
// beginning with the given start element.
|
|
||||||
// If it returns an error, the outer call to Unmarshal stops and
|
|
||||||
// returns that error.
|
|
||||||
// UnmarshalXML must consume exactly one XML element.
|
|
||||||
// One common implementation strategy is to unmarshal into
|
|
||||||
// a separate value with a layout matching the expected XML
|
|
||||||
// using d.DecodeElement, and then to copy the data from
|
|
||||||
// that value into the receiver.
|
|
||||||
// Another common strategy is to use d.Token to process the
|
|
||||||
// XML object one token at a time.
|
|
||||||
// UnmarshalXML may not use d.RawToken.
|
|
||||||
type Unmarshaler interface {
|
|
||||||
UnmarshalXML(d *Decoder, start StartElement) error
|
|
||||||
}
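One of the implementation strategies mentioned above, decoding into a separate value with d.DecodeElement and copying the result into the receiver, looks roughly like this with the standard encoding/xml package (the Timestamp type and date layout are illustrative assumptions):

package main

import (
	"encoding/xml"
	"fmt"
	"time"
)

// Timestamp parses its element content with a custom date layout.
type Timestamp struct {
	time.Time
}

func (t *Timestamp) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
	// Decode the element's character data into a plain string first,
	// then convert it; DecodeElement consumes exactly this one element.
	var s string
	if err := d.DecodeElement(&s, &start); err != nil {
		return err
	}
	parsed, err := time.Parse("2006-01-02", s)
	if err != nil {
		return err
	}
	t.Time = parsed
	return nil
}

func main() {
	var v struct {
		Created Timestamp `xml:"created"`
	}
	err := xml.Unmarshal([]byte(`<doc><created>2019-01-02</created></doc>`), &v)
	fmt.Println(v.Created.Time, err)
}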
|
|
||||||
|
|
||||||
// UnmarshalerAttr is the interface implemented by objects that can unmarshal
|
|
||||||
// an XML attribute description of themselves.
|
|
||||||
//
|
|
||||||
// UnmarshalXMLAttr decodes a single XML attribute.
|
|
||||||
// If it returns an error, the outer call to Unmarshal stops and
|
|
||||||
// returns that error.
|
|
||||||
// UnmarshalXMLAttr is used only for struct fields with the
|
|
||||||
// "attr" option in the field tag.
|
|
||||||
type UnmarshalerAttr interface {
|
|
||||||
UnmarshalXMLAttr(attr Attr) error
|
|
||||||
}
|
|
||||||
|
|
||||||
// receiverType returns the receiver type to use in an expression like "%s.MethodName".
|
|
||||||
func receiverType(val interface{}) string {
|
|
||||||
t := reflect.TypeOf(val)
|
|
||||||
if t.Name() != "" {
|
|
||||||
return t.String()
|
|
||||||
}
|
|
||||||
return "(" + t.String() + ")"
|
|
||||||
}
|
|
||||||
|
|
||||||
// unmarshalInterface unmarshals a single XML element into val.
|
|
||||||
// start is the opening tag of the element.
|
|
||||||
func (p *Decoder) unmarshalInterface(val Unmarshaler, start *StartElement) error {
|
|
||||||
// Record that decoder must stop at end tag corresponding to start.
|
|
||||||
p.pushEOF()
|
|
||||||
|
|
||||||
p.unmarshalDepth++
|
|
||||||
err := val.UnmarshalXML(p, *start)
|
|
||||||
p.unmarshalDepth--
|
|
||||||
if err != nil {
|
|
||||||
p.popEOF()
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
if !p.popEOF() {
|
|
||||||
return fmt.Errorf("xml: %s.UnmarshalXML did not consume entire <%s> element", receiverType(val), start.Name.Local)
|
|
||||||
}
|
|
||||||
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// unmarshalTextInterface unmarshals a single XML element into val.
|
|
||||||
// The chardata contained in the element (but not its children)
|
|
||||||
// is passed to the text unmarshaler.
|
|
||||||
func (p *Decoder) unmarshalTextInterface(val encoding.TextUnmarshaler, start *StartElement) error {
|
|
||||||
var buf []byte
|
|
||||||
depth := 1
|
|
||||||
for depth > 0 {
|
|
||||||
t, err := p.Token()
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
switch t := t.(type) {
|
|
||||||
case CharData:
|
|
||||||
if depth == 1 {
|
|
||||||
buf = append(buf, t...)
|
|
||||||
}
|
|
||||||
case StartElement:
|
|
||||||
depth++
|
|
||||||
case EndElement:
|
|
||||||
depth--
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return val.UnmarshalText(buf)
|
|
||||||
}
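This chardata handling is what lets encoding.TextUnmarshaler types be filled from an element's text. A small sketch with the standard encoding/xml package (the Tags type and sample document are made up):

package main

import (
	"encoding/xml"
	"fmt"
	"strings"
)

// Tags implements encoding.TextUnmarshaler: the element's character data
// (children excluded) is handed to UnmarshalText as a single byte slice.
type Tags []string

func (t *Tags) UnmarshalText(b []byte) error {
	*t = strings.Split(string(b), ",")
	return nil
}

func main() {
	var v struct {
		Tags Tags `xml:"tags"`
	}
	err := xml.Unmarshal([]byte(`<item><tags>go,webdav,xml</tags></item>`), &v)
	fmt.Println(v.Tags, err)
}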
|
|
||||||
|
|
||||||
// unmarshalAttr unmarshals a single XML attribute into val.
|
|
||||||
func (p *Decoder) unmarshalAttr(val reflect.Value, attr Attr) error {
|
|
||||||
if val.Kind() == reflect.Ptr {
|
|
||||||
if val.IsNil() {
|
|
||||||
val.Set(reflect.New(val.Type().Elem()))
|
|
||||||
}
|
|
||||||
val = val.Elem()
|
|
||||||
}
|
|
||||||
|
|
||||||
if val.CanInterface() && val.Type().Implements(unmarshalerAttrType) {
|
|
||||||
// This is an unmarshaler with a non-pointer receiver,
|
|
||||||
// so it's likely to be incorrect, but we do what we're told.
|
|
||||||
return val.Interface().(UnmarshalerAttr).UnmarshalXMLAttr(attr)
|
|
||||||
}
|
|
||||||
if val.CanAddr() {
|
|
||||||
pv := val.Addr()
|
|
||||||
if pv.CanInterface() && pv.Type().Implements(unmarshalerAttrType) {
|
|
||||||
return pv.Interface().(UnmarshalerAttr).UnmarshalXMLAttr(attr)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Not an UnmarshalerAttr; try encoding.TextUnmarshaler.
|
|
||||||
if val.CanInterface() && val.Type().Implements(textUnmarshalerType) {
|
|
||||||
// This is an unmarshaler with a non-pointer receiver,
|
|
||||||
// so it's likely to be incorrect, but we do what we're told.
|
|
||||||
return val.Interface().(encoding.TextUnmarshaler).UnmarshalText([]byte(attr.Value))
|
|
||||||
}
|
|
||||||
if val.CanAddr() {
|
|
||||||
pv := val.Addr()
|
|
||||||
if pv.CanInterface() && pv.Type().Implements(textUnmarshalerType) {
|
|
||||||
return pv.Interface().(encoding.TextUnmarshaler).UnmarshalText([]byte(attr.Value))
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
copyValue(val, []byte(attr.Value))
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
var (
|
|
||||||
unmarshalerType = reflect.TypeOf((*Unmarshaler)(nil)).Elem()
|
|
||||||
unmarshalerAttrType = reflect.TypeOf((*UnmarshalerAttr)(nil)).Elem()
|
|
||||||
textUnmarshalerType = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()
|
|
||||||
)
|
|
||||||
|
|
||||||
// Unmarshal a single XML element into val.
|
|
||||||
func (p *Decoder) unmarshal(val reflect.Value, start *StartElement) error {
|
|
||||||
// Find start element if we need it.
|
|
||||||
if start == nil {
|
|
||||||
for {
|
|
||||||
tok, err := p.Token()
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
if t, ok := tok.(StartElement); ok {
|
|
||||||
start = &t
|
|
||||||
break
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Load value from interface, but only if the result will be
|
|
||||||
// usefully addressable.
|
|
||||||
if val.Kind() == reflect.Interface && !val.IsNil() {
|
|
||||||
e := val.Elem()
|
|
||||||
if e.Kind() == reflect.Ptr && !e.IsNil() {
|
|
||||||
val = e
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if val.Kind() == reflect.Ptr {
|
|
||||||
if val.IsNil() {
|
|
||||||
val.Set(reflect.New(val.Type().Elem()))
|
|
||||||
}
|
|
||||||
val = val.Elem()
|
|
||||||
}
|
|
||||||
|
|
||||||
if val.CanInterface() && val.Type().Implements(unmarshalerType) {
|
|
||||||
// This is an unmarshaler with a non-pointer receiver,
|
|
||||||
// so it's likely to be incorrect, but we do what we're told.
|
|
||||||
return p.unmarshalInterface(val.Interface().(Unmarshaler), start)
|
|
||||||
}
|
|
||||||
|
|
||||||
if val.CanAddr() {
|
|
||||||
pv := val.Addr()
|
|
||||||
if pv.CanInterface() && pv.Type().Implements(unmarshalerType) {
|
|
||||||
return p.unmarshalInterface(pv.Interface().(Unmarshaler), start)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if val.CanInterface() && val.Type().Implements(textUnmarshalerType) {
|
|
||||||
return p.unmarshalTextInterface(val.Interface().(encoding.TextUnmarshaler), start)
|
|
||||||
}
|
|
||||||
|
|
||||||
if val.CanAddr() {
|
|
||||||
pv := val.Addr()
|
|
||||||
if pv.CanInterface() && pv.Type().Implements(textUnmarshalerType) {
|
|
||||||
return p.unmarshalTextInterface(pv.Interface().(encoding.TextUnmarshaler), start)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
var (
|
|
||||||
data []byte
|
|
||||||
saveData reflect.Value
|
|
||||||
comment []byte
|
|
||||||
saveComment reflect.Value
|
|
||||||
saveXML reflect.Value
|
|
||||||
saveXMLIndex int
|
|
||||||
saveXMLData []byte
|
|
||||||
saveAny reflect.Value
|
|
||||||
sv reflect.Value
|
|
||||||
tinfo *typeInfo
|
|
||||||
err error
|
|
||||||
)
|
|
||||||
|
|
||||||
switch v := val; v.Kind() {
|
|
||||||
default:
|
|
||||||
return errors.New("unknown type " + v.Type().String())
|
|
||||||
|
|
||||||
case reflect.Interface:
|
|
||||||
// TODO: For now, simply ignore the field. In the near
|
|
||||||
// future we may choose to unmarshal the start
|
|
||||||
// element on it, if not nil.
|
|
||||||
return p.Skip()
|
|
||||||
|
|
||||||
case reflect.Slice:
|
|
||||||
typ := v.Type()
|
|
||||||
if typ.Elem().Kind() == reflect.Uint8 {
|
|
||||||
// []byte
|
|
||||||
saveData = v
|
|
||||||
break
|
|
||||||
}
|
|
||||||
|
|
||||||
// Slice of element values.
|
|
||||||
// Grow slice.
|
|
||||||
n := v.Len()
|
|
||||||
if n >= v.Cap() {
|
|
||||||
ncap := 2 * n
|
|
||||||
if ncap < 4 {
|
|
||||||
ncap = 4
|
|
||||||
}
|
|
||||||
new := reflect.MakeSlice(typ, n, ncap)
|
|
||||||
reflect.Copy(new, v)
|
|
||||||
v.Set(new)
|
|
||||||
}
|
|
||||||
v.SetLen(n + 1)
|
|
||||||
|
|
||||||
// Recur to read element into slice.
|
|
||||||
if err := p.unmarshal(v.Index(n), start); err != nil {
|
|
||||||
v.SetLen(n)
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
|
|
||||||
case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr, reflect.String:
|
|
||||||
saveData = v
|
|
||||||
|
|
||||||
case reflect.Struct:
|
|
||||||
typ := v.Type()
|
|
||||||
if typ == nameType {
|
|
||||||
v.Set(reflect.ValueOf(start.Name))
|
|
||||||
break
|
|
||||||
}
|
|
||||||
|
|
||||||
sv = v
|
|
||||||
tinfo, err = getTypeInfo(typ)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
// Validate and assign element name.
|
|
||||||
if tinfo.xmlname != nil {
|
|
||||||
finfo := tinfo.xmlname
|
|
||||||
if finfo.name != "" && finfo.name != start.Name.Local {
|
|
||||||
return UnmarshalError("expected element type <" + finfo.name + "> but have <" + start.Name.Local + ">")
|
|
||||||
}
|
|
||||||
if finfo.xmlns != "" && finfo.xmlns != start.Name.Space {
|
|
||||||
e := "expected element <" + finfo.name + "> in name space " + finfo.xmlns + " but have "
|
|
||||||
if start.Name.Space == "" {
|
|
||||||
e += "no name space"
|
|
||||||
} else {
|
|
||||||
e += start.Name.Space
|
|
||||||
}
|
|
||||||
return UnmarshalError(e)
|
|
||||||
}
|
|
||||||
fv := finfo.value(sv)
|
|
||||||
if _, ok := fv.Interface().(Name); ok {
|
|
||||||
fv.Set(reflect.ValueOf(start.Name))
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Assign attributes.
|
|
||||||
// Also, determine whether we need to save character data or comments.
|
|
||||||
for i := range tinfo.fields {
|
|
||||||
finfo := &tinfo.fields[i]
|
|
||||||
switch finfo.flags & fMode {
|
|
||||||
case fAttr:
|
|
||||||
strv := finfo.value(sv)
|
|
||||||
// Look for attribute.
|
|
||||||
for _, a := range start.Attr {
|
|
||||||
if a.Name.Local == finfo.name && (finfo.xmlns == "" || finfo.xmlns == a.Name.Space) {
|
|
||||||
if err := p.unmarshalAttr(strv, a); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
break
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
case fCharData:
|
|
||||||
if !saveData.IsValid() {
|
|
||||||
saveData = finfo.value(sv)
|
|
||||||
}
|
|
||||||
|
|
||||||
case fComment:
|
|
||||||
if !saveComment.IsValid() {
|
|
||||||
saveComment = finfo.value(sv)
|
|
||||||
}
|
|
||||||
|
|
||||||
case fAny, fAny | fElement:
|
|
||||||
if !saveAny.IsValid() {
|
|
||||||
saveAny = finfo.value(sv)
|
|
||||||
}
|
|
||||||
|
|
||||||
case fInnerXml:
|
|
||||||
if !saveXML.IsValid() {
|
|
||||||
saveXML = finfo.value(sv)
|
|
||||||
if p.saved == nil {
|
|
||||||
saveXMLIndex = 0
|
|
||||||
p.saved = new(bytes.Buffer)
|
|
||||||
} else {
|
|
||||||
saveXMLIndex = p.savedOffset()
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Find end element.
|
|
||||||
// Process sub-elements along the way.
|
|
||||||
Loop:
|
|
||||||
for {
|
|
||||||
var savedOffset int
|
|
||||||
if saveXML.IsValid() {
|
|
||||||
savedOffset = p.savedOffset()
|
|
||||||
}
|
|
||||||
tok, err := p.Token()
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
switch t := tok.(type) {
|
|
||||||
case StartElement:
|
|
||||||
consumed := false
|
|
||||||
if sv.IsValid() {
|
|
||||||
consumed, err = p.unmarshalPath(tinfo, sv, nil, &t)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
if !consumed && saveAny.IsValid() {
|
|
||||||
consumed = true
|
|
||||||
if err := p.unmarshal(saveAny, &t); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if !consumed {
|
|
||||||
if err := p.Skip(); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
case EndElement:
|
|
||||||
if saveXML.IsValid() {
|
|
||||||
saveXMLData = p.saved.Bytes()[saveXMLIndex:savedOffset]
|
|
||||||
if saveXMLIndex == 0 {
|
|
||||||
p.saved = nil
|
|
||||||
}
|
|
||||||
}
|
|
||||||
break Loop
|
|
||||||
|
|
||||||
case CharData:
|
|
||||||
if saveData.IsValid() {
|
|
||||||
data = append(data, t...)
|
|
||||||
}
|
|
||||||
|
|
||||||
case Comment:
|
|
||||||
if saveComment.IsValid() {
|
|
||||||
comment = append(comment, t...)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if saveData.IsValid() && saveData.CanInterface() && saveData.Type().Implements(textUnmarshalerType) {
|
|
||||||
if err := saveData.Interface().(encoding.TextUnmarshaler).UnmarshalText(data); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
saveData = reflect.Value{}
|
|
||||||
}
|
|
||||||
|
|
||||||
if saveData.IsValid() && saveData.CanAddr() {
|
|
||||||
pv := saveData.Addr()
|
|
||||||
if pv.CanInterface() && pv.Type().Implements(textUnmarshalerType) {
|
|
||||||
if err := pv.Interface().(encoding.TextUnmarshaler).UnmarshalText(data); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
saveData = reflect.Value{}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if err := copyValue(saveData, data); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
switch t := saveComment; t.Kind() {
|
|
||||||
case reflect.String:
|
|
||||||
t.SetString(string(comment))
|
|
||||||
case reflect.Slice:
|
|
||||||
t.Set(reflect.ValueOf(comment))
|
|
||||||
}
|
|
||||||
|
|
||||||
switch t := saveXML; t.Kind() {
|
|
||||||
case reflect.String:
|
|
||||||
t.SetString(string(saveXMLData))
|
|
||||||
case reflect.Slice:
|
|
||||||
t.Set(reflect.ValueOf(saveXMLData))
|
|
||||||
}
|
|
||||||
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func copyValue(dst reflect.Value, src []byte) (err error) {
|
|
||||||
dst0 := dst
|
|
||||||
|
|
||||||
if dst.Kind() == reflect.Ptr {
|
|
||||||
if dst.IsNil() {
|
|
||||||
dst.Set(reflect.New(dst.Type().Elem()))
|
|
||||||
}
|
|
||||||
dst = dst.Elem()
|
|
||||||
}
|
|
||||||
|
|
||||||
// Save accumulated data.
|
|
||||||
switch dst.Kind() {
|
|
||||||
case reflect.Invalid:
|
|
||||||
// Probably a comment.
|
|
||||||
default:
|
|
||||||
return errors.New("cannot unmarshal into " + dst0.Type().String())
|
|
||||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
|
||||||
itmp, err := strconv.ParseInt(string(src), 10, dst.Type().Bits())
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
dst.SetInt(itmp)
|
|
||||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
|
|
||||||
utmp, err := strconv.ParseUint(string(src), 10, dst.Type().Bits())
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
dst.SetUint(utmp)
|
|
||||||
case reflect.Float32, reflect.Float64:
|
|
||||||
ftmp, err := strconv.ParseFloat(string(src), dst.Type().Bits())
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
dst.SetFloat(ftmp)
|
|
||||||
case reflect.Bool:
|
|
||||||
value, err := strconv.ParseBool(strings.TrimSpace(string(src)))
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
dst.SetBool(value)
|
|
||||||
case reflect.String:
|
|
||||||
dst.SetString(string(src))
|
|
||||||
case reflect.Slice:
|
|
||||||
if len(src) == 0 {
|
|
||||||
// non-nil to flag presence
|
|
||||||
src = []byte{}
|
|
||||||
}
|
|
||||||
dst.SetBytes(src)
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// unmarshalPath walks down an XML structure looking for wanted
|
|
||||||
// paths, and calls unmarshal on them.
|
|
||||||
// The consumed result tells whether XML elements have been consumed
|
|
||||||
// from the Decoder until start's matching end element, or if it's
|
|
||||||
// still untouched because start is uninteresting for sv's fields.
|
|
||||||
func (p *Decoder) unmarshalPath(tinfo *typeInfo, sv reflect.Value, parents []string, start *StartElement) (consumed bool, err error) {
|
|
||||||
recurse := false
|
|
||||||
Loop:
|
|
||||||
for i := range tinfo.fields {
|
|
||||||
finfo := &tinfo.fields[i]
|
|
||||||
if finfo.flags&fElement == 0 || len(finfo.parents) < len(parents) || finfo.xmlns != "" && finfo.xmlns != start.Name.Space {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
for j := range parents {
|
|
||||||
if parents[j] != finfo.parents[j] {
|
|
||||||
continue Loop
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if len(finfo.parents) == len(parents) && finfo.name == start.Name.Local {
|
|
||||||
// It's a perfect match, unmarshal the field.
|
|
||||||
return true, p.unmarshal(finfo.value(sv), start)
|
|
||||||
}
|
|
||||||
if len(finfo.parents) > len(parents) && finfo.parents[len(parents)] == start.Name.Local {
|
|
||||||
// It's a prefix for the field. Break and recurse
|
|
||||||
// since it's not ok for one field path to be itself
|
|
||||||
// the prefix for another field path.
|
|
||||||
recurse = true
|
|
||||||
|
|
||||||
// We can reuse the same slice as long as we
|
|
||||||
// don't try to append to it.
|
|
||||||
parents = finfo.parents[:len(parents)+1]
|
|
||||||
break
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if !recurse {
|
|
||||||
// We have no business with this element.
|
|
||||||
return false, nil
|
|
||||||
}
|
|
||||||
// The element is not a perfect match for any field, but one
|
|
||||||
// or more fields have the path to this element as a parent
|
|
||||||
// prefix. Recurse and attempt to match these.
|
|
||||||
for {
|
|
||||||
var tok Token
|
|
||||||
tok, err = p.Token()
|
|
||||||
if err != nil {
|
|
||||||
return true, err
|
|
||||||
}
|
|
||||||
switch t := tok.(type) {
|
|
||||||
case StartElement:
|
|
||||||
consumed2, err := p.unmarshalPath(tinfo, sv, parents, &t)
|
|
||||||
if err != nil {
|
|
||||||
return true, err
|
|
||||||
}
|
|
||||||
if !consumed2 {
|
|
||||||
if err := p.Skip(); err != nil {
|
|
||||||
return true, err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
case EndElement:
|
|
||||||
return true, nil
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Skip reads tokens until it has consumed the end element
|
|
||||||
// matching the most recent start element already consumed.
|
|
||||||
// It recurs if it encounters a start element, so it can be used to
|
|
||||||
// skip nested structures.
|
|
||||||
// It returns nil if it finds an end element matching the start
|
|
||||||
// element; otherwise it returns an error describing the problem.
|
|
||||||
func (d *Decoder) Skip() error {
|
|
||||||
for {
|
|
||||||
tok, err := d.Token()
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
switch tok.(type) {
|
|
||||||
case StartElement:
|
|
||||||
if err := d.Skip(); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
case EndElement:
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
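Skip is most useful in hand-written token loops: uninteresting elements are consumed wholesale so the decoder stays positioned at the right depth. A short sketch with the standard encoding/xml decoder (the document and element names are made up):

package main

import (
	"encoding/xml"
	"fmt"
	"strings"
)

func main() {
	doc := `<root><keep>a</keep><ignore><deep>x</deep></ignore><keep>b</keep></root>`
	d := xml.NewDecoder(strings.NewReader(doc))
	for {
		tok, err := d.Token()
		if err != nil {
			break // io.EOF ends the stream
		}
		start, ok := tok.(xml.StartElement)
		if !ok {
			continue
		}
		switch start.Name.Local {
		case "keep":
			var s string
			d.DecodeElement(&s, &start) // consume and read the element; error ignored in this sketch
			fmt.Println("kept:", s)
		case "ignore":
			d.Skip() // consume the whole subtree, children included
		}
	}
}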
|
|
371
vendor/golang.org/x/net/webdav/internal/xml/typeinfo.go
generated
vendored
@@ -1,371 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
|
|
||||||
// Use of this source code is governed by a BSD-style
|
|
||||||
// license that can be found in the LICENSE file.
|
|
||||||
|
|
||||||
package xml
|
|
||||||
|
|
||||||
import (
|
|
||||||
"fmt"
|
|
||||||
"reflect"
|
|
||||||
"strings"
|
|
||||||
"sync"
|
|
||||||
)
|
|
||||||
|
|
||||||
// typeInfo holds details for the xml representation of a type.
|
|
||||||
type typeInfo struct {
|
|
||||||
xmlname *fieldInfo
|
|
||||||
fields []fieldInfo
|
|
||||||
}
|
|
||||||
|
|
||||||
// fieldInfo holds details for the xml representation of a single field.
|
|
||||||
type fieldInfo struct {
|
|
||||||
idx []int
|
|
||||||
name string
|
|
||||||
xmlns string
|
|
||||||
flags fieldFlags
|
|
||||||
parents []string
|
|
||||||
}
|
|
||||||
|
|
||||||
type fieldFlags int
|
|
||||||
|
|
||||||
const (
|
|
||||||
fElement fieldFlags = 1 << iota
|
|
||||||
fAttr
|
|
||||||
fCharData
|
|
||||||
fInnerXml
|
|
||||||
fComment
|
|
||||||
fAny
|
|
||||||
|
|
||||||
fOmitEmpty
|
|
||||||
|
|
||||||
fMode = fElement | fAttr | fCharData | fInnerXml | fComment | fAny
|
|
||||||
)
|
|
||||||
|
|
||||||
var tinfoMap = make(map[reflect.Type]*typeInfo)
|
|
||||||
var tinfoLock sync.RWMutex
|
|
||||||
|
|
||||||
var nameType = reflect.TypeOf(Name{})
|
|
||||||
|
|
||||||
// getTypeInfo returns the typeInfo structure with details necessary
|
|
||||||
// for marshalling and unmarshalling typ.
|
|
||||||
func getTypeInfo(typ reflect.Type) (*typeInfo, error) {
|
|
||||||
tinfoLock.RLock()
|
|
||||||
tinfo, ok := tinfoMap[typ]
|
|
||||||
tinfoLock.RUnlock()
|
|
||||||
if ok {
|
|
||||||
return tinfo, nil
|
|
||||||
}
|
|
||||||
tinfo = &typeInfo{}
|
|
||||||
if typ.Kind() == reflect.Struct && typ != nameType {
|
|
||||||
n := typ.NumField()
|
|
||||||
for i := 0; i < n; i++ {
|
|
||||||
f := typ.Field(i)
|
|
||||||
if f.PkgPath != "" || f.Tag.Get("xml") == "-" {
|
|
||||||
continue // Private field
|
|
||||||
}
|
|
||||||
|
|
||||||
// For embedded structs, embed its fields.
|
|
||||||
if f.Anonymous {
|
|
||||||
t := f.Type
|
|
||||||
if t.Kind() == reflect.Ptr {
|
|
||||||
t = t.Elem()
|
|
||||||
}
|
|
||||||
if t.Kind() == reflect.Struct {
|
|
||||||
inner, err := getTypeInfo(t)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
if tinfo.xmlname == nil {
|
|
||||||
tinfo.xmlname = inner.xmlname
|
|
||||||
}
|
|
||||||
for _, finfo := range inner.fields {
|
|
||||||
finfo.idx = append([]int{i}, finfo.idx...)
|
|
||||||
if err := addFieldInfo(typ, tinfo, &finfo); err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
finfo, err := structFieldInfo(typ, &f)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
|
|
||||||
if f.Name == "XMLName" {
|
|
||||||
tinfo.xmlname = finfo
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
|
|
||||||
// Add the field if it doesn't conflict with other fields.
|
|
||||||
if err := addFieldInfo(typ, tinfo, finfo); err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
tinfoLock.Lock()
|
|
||||||
tinfoMap[typ] = tinfo
|
|
||||||
tinfoLock.Unlock()
|
|
||||||
return tinfo, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// structFieldInfo builds and returns a fieldInfo for f.
|
|
||||||
func structFieldInfo(typ reflect.Type, f *reflect.StructField) (*fieldInfo, error) {
|
|
||||||
finfo := &fieldInfo{idx: f.Index}
|
|
||||||
|
|
||||||
// Split the tag from the xml namespace if necessary.
|
|
||||||
tag := f.Tag.Get("xml")
|
|
||||||
if i := strings.Index(tag, " "); i >= 0 {
|
|
||||||
finfo.xmlns, tag = tag[:i], tag[i+1:]
|
|
||||||
}
|
|
||||||
|
|
||||||
// Parse flags.
|
|
||||||
tokens := strings.Split(tag, ",")
|
|
||||||
if len(tokens) == 1 {
|
|
||||||
finfo.flags = fElement
|
|
||||||
} else {
|
|
||||||
tag = tokens[0]
|
|
||||||
for _, flag := range tokens[1:] {
|
|
||||||
switch flag {
|
|
||||||
case "attr":
|
|
||||||
finfo.flags |= fAttr
|
|
||||||
case "chardata":
|
|
||||||
finfo.flags |= fCharData
|
|
||||||
case "innerxml":
|
|
||||||
finfo.flags |= fInnerXml
|
|
||||||
case "comment":
|
|
||||||
finfo.flags |= fComment
|
|
||||||
case "any":
|
|
||||||
finfo.flags |= fAny
|
|
||||||
case "omitempty":
|
|
||||||
finfo.flags |= fOmitEmpty
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Validate the flags used.
|
|
||||||
valid := true
|
|
||||||
switch mode := finfo.flags & fMode; mode {
|
|
||||||
case 0:
|
|
||||||
finfo.flags |= fElement
|
|
||||||
case fAttr, fCharData, fInnerXml, fComment, fAny:
|
|
||||||
if f.Name == "XMLName" || tag != "" && mode != fAttr {
|
|
||||||
valid = false
|
|
||||||
}
|
|
||||||
default:
|
|
||||||
// This will also catch multiple modes in a single field.
|
|
||||||
valid = false
|
|
||||||
}
|
|
||||||
if finfo.flags&fMode == fAny {
|
|
||||||
finfo.flags |= fElement
|
|
||||||
}
|
|
||||||
if finfo.flags&fOmitEmpty != 0 && finfo.flags&(fElement|fAttr) == 0 {
|
|
||||||
valid = false
|
|
||||||
}
|
|
||||||
if !valid {
|
|
||||||
return nil, fmt.Errorf("xml: invalid tag in field %s of type %s: %q",
|
|
||||||
f.Name, typ, f.Tag.Get("xml"))
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Use of xmlns without a name is not allowed.
|
|
||||||
if finfo.xmlns != "" && tag == "" {
|
|
||||||
return nil, fmt.Errorf("xml: namespace without name in field %s of type %s: %q",
|
|
||||||
f.Name, typ, f.Tag.Get("xml"))
|
|
||||||
}
|
|
||||||
|
|
||||||
if f.Name == "XMLName" {
|
|
||||||
// The XMLName field records the XML element name. Don't
|
|
||||||
// process it as usual because its name should default to
|
|
||||||
// empty rather than to the field name.
|
|
||||||
finfo.name = tag
|
|
||||||
return finfo, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
if tag == "" {
|
|
||||||
// If the name part of the tag is completely empty, get
|
|
||||||
// default from XMLName of underlying struct if feasible,
|
|
||||||
// or field name otherwise.
|
|
||||||
if xmlname := lookupXMLName(f.Type); xmlname != nil {
|
|
||||||
finfo.xmlns, finfo.name = xmlname.xmlns, xmlname.name
|
|
||||||
} else {
|
|
||||||
finfo.name = f.Name
|
|
||||||
}
|
|
||||||
return finfo, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
if finfo.xmlns == "" && finfo.flags&fAttr == 0 {
|
|
||||||
// If it's an element no namespace specified, get the default
|
|
||||||
// from the XMLName of enclosing struct if possible.
|
|
||||||
if xmlname := lookupXMLName(typ); xmlname != nil {
|
|
||||||
finfo.xmlns = xmlname.xmlns
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Prepare field name and parents.
|
|
||||||
parents := strings.Split(tag, ">")
|
|
||||||
if parents[0] == "" {
|
|
||||||
parents[0] = f.Name
|
|
||||||
}
|
|
||||||
if parents[len(parents)-1] == "" {
|
|
||||||
return nil, fmt.Errorf("xml: trailing '>' in field %s of type %s", f.Name, typ)
|
|
||||||
}
|
|
||||||
finfo.name = parents[len(parents)-1]
|
|
||||||
if len(parents) > 1 {
|
|
||||||
if (finfo.flags & fElement) == 0 {
|
|
||||||
return nil, fmt.Errorf("xml: %s chain not valid with %s flag", tag, strings.Join(tokens[1:], ","))
|
|
||||||
}
|
|
||||||
finfo.parents = parents[:len(parents)-1]
|
|
||||||
}
|
|
||||||
|
|
||||||
// If the field type has an XMLName field, the names must match
|
|
||||||
// so that the behavior of both marshalling and unmarshalling
|
|
||||||
// is straightforward and unambiguous.
|
|
||||||
if finfo.flags&fElement != 0 {
|
|
||||||
ftyp := f.Type
|
|
||||||
xmlname := lookupXMLName(ftyp)
|
|
||||||
if xmlname != nil && xmlname.name != finfo.name {
|
|
||||||
return nil, fmt.Errorf("xml: name %q in tag of %s.%s conflicts with name %q in %s.XMLName",
|
|
||||||
finfo.name, typ, f.Name, xmlname.name, ftyp)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return finfo, nil
|
|
||||||
}
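structFieldInfo above is where the `xml:"..."` struct tag syntax is interpreted: an optional namespace before a space, a name that may be an a>b>c path, and comma-separated mode flags. A brief illustration of the accepted forms using the standard encoding/xml package (the Entry type and its use of the DAV: namespace are made up for illustration):

package main

import (
	"encoding/xml"
	"fmt"
)

type Entry struct {
	XMLName xml.Name `xml:"DAV: response"`   // "namespace-URL name" form
	Href    string   `xml:"href"`            // plain element name
	Status  string   `xml:"propstat>status"` // nested element path
	Depth   int      `xml:"depth,attr"`      // attribute on <response>
	Comment string   `xml:",comment"`        // emitted as an XML comment
	Skip    string   `xml:"-"`               // never (un)marshalled
}

func main() {
	out, _ := xml.MarshalIndent(Entry{
		Href:    "/files/a.txt",
		Status:  "HTTP/1.1 200 OK",
		Comment: "generated",
	}, "", "  ")
	fmt.Println(string(out))
}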
|
|
||||||
|
|
||||||
// lookupXMLName returns the fieldInfo for typ's XMLName field
|
|
||||||
// in case it exists and has a valid xml field tag, otherwise
|
|
||||||
// it returns nil.
|
|
||||||
func lookupXMLName(typ reflect.Type) (xmlname *fieldInfo) {
|
|
||||||
for typ.Kind() == reflect.Ptr {
|
|
||||||
typ = typ.Elem()
|
|
||||||
}
|
|
||||||
if typ.Kind() != reflect.Struct {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
for i, n := 0, typ.NumField(); i < n; i++ {
|
|
||||||
f := typ.Field(i)
|
|
||||||
if f.Name != "XMLName" {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
finfo, err := structFieldInfo(typ, &f)
|
|
||||||
if finfo.name != "" && err == nil {
|
|
||||||
return finfo
|
|
||||||
}
|
|
||||||
// Also consider errors as a non-existent field tag
|
|
||||||
// and let getTypeInfo itself report the error.
|
|
||||||
break
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func min(a, b int) int {
|
|
||||||
if a <= b {
|
|
||||||
return a
|
|
||||||
}
|
|
||||||
return b
|
|
||||||
}
|
|
||||||
|
|
||||||
// addFieldInfo adds finfo to tinfo.fields if there are no
|
|
||||||
// conflicts, or if conflicts arise from previous fields that were
|
|
||||||
// obtained from deeper embedded structures than finfo. In the latter
|
|
||||||
// case, the conflicting entries are dropped.
|
|
||||||
// A conflict occurs when the path (parent + name) to a field is
|
|
||||||
// itself a prefix of another path, or when two paths match exactly.
|
|
||||||
// It is okay for field paths to share a common, shorter prefix.
|
|
||||||
func addFieldInfo(typ reflect.Type, tinfo *typeInfo, newf *fieldInfo) error {
|
|
||||||
var conflicts []int
|
|
||||||
Loop:
|
|
||||||
// First, figure all conflicts. Most working code will have none.
|
|
||||||
for i := range tinfo.fields {
|
|
||||||
oldf := &tinfo.fields[i]
|
|
||||||
if oldf.flags&fMode != newf.flags&fMode {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
if oldf.xmlns != "" && newf.xmlns != "" && oldf.xmlns != newf.xmlns {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
minl := min(len(newf.parents), len(oldf.parents))
|
|
||||||
for p := 0; p < minl; p++ {
|
|
||||||
if oldf.parents[p] != newf.parents[p] {
|
|
||||||
continue Loop
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if len(oldf.parents) > len(newf.parents) {
|
|
||||||
if oldf.parents[len(newf.parents)] == newf.name {
|
|
||||||
conflicts = append(conflicts, i)
|
|
||||||
}
|
|
||||||
} else if len(oldf.parents) < len(newf.parents) {
|
|
||||||
if newf.parents[len(oldf.parents)] == oldf.name {
|
|
||||||
conflicts = append(conflicts, i)
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
if newf.name == oldf.name {
|
|
||||||
conflicts = append(conflicts, i)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
// Without conflicts, add the new field and return.
|
|
||||||
if conflicts == nil {
|
|
||||||
tinfo.fields = append(tinfo.fields, *newf)
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// If any conflict is shallower, ignore the new field.
|
|
||||||
// This matches the Go field resolution on embedding.
|
|
||||||
for _, i := range conflicts {
|
|
||||||
if len(tinfo.fields[i].idx) < len(newf.idx) {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Otherwise, if any of them is at the same depth level, it's an error.
|
|
||||||
for _, i := range conflicts {
|
|
||||||
oldf := &tinfo.fields[i]
|
|
||||||
if len(oldf.idx) == len(newf.idx) {
|
|
||||||
f1 := typ.FieldByIndex(oldf.idx)
|
|
||||||
f2 := typ.FieldByIndex(newf.idx)
|
|
||||||
return &TagPathError{typ, f1.Name, f1.Tag.Get("xml"), f2.Name, f2.Tag.Get("xml")}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Otherwise, the new field is shallower, and thus takes precedence,
|
|
||||||
// so drop the conflicting fields from tinfo and append the new one.
|
|
||||||
for c := len(conflicts) - 1; c >= 0; c-- {
|
|
||||||
i := conflicts[c]
|
|
||||||
copy(tinfo.fields[i:], tinfo.fields[i+1:])
|
|
||||||
tinfo.fields = tinfo.fields[:len(tinfo.fields)-1]
|
|
||||||
}
|
|
||||||
tinfo.fields = append(tinfo.fields, *newf)
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// A TagPathError represents an error in the unmarshalling process
|
|
||||||
// caused by the use of field tags with conflicting paths.
|
|
||||||
type TagPathError struct {
|
|
||||||
Struct reflect.Type
|
|
||||||
Field1, Tag1 string
|
|
||||||
Field2, Tag2 string
|
|
||||||
}
|
|
||||||
|
|
||||||
func (e *TagPathError) Error() string {
|
|
||||||
return fmt.Sprintf("%s field %q with tag %q conflicts with field %q with tag %q", e.Struct, e.Field1, e.Tag1, e.Field2, e.Tag2)
|
|
||||||
}
|
|
||||||
|
|
||||||
// value returns v's field value corresponding to finfo.
|
|
||||||
// It's equivalent to v.FieldByIndex(finfo.idx), but initializes
|
|
||||||
// and dereferences pointers as necessary.
|
|
||||||
func (finfo *fieldInfo) value(v reflect.Value) reflect.Value {
|
|
||||||
for i, x := range finfo.idx {
|
|
||||||
if i > 0 {
|
|
||||||
t := v.Type()
|
|
||||||
if t.Kind() == reflect.Ptr && t.Elem().Kind() == reflect.Struct {
|
|
||||||
if v.IsNil() {
|
|
||||||
v.Set(reflect.New(v.Type().Elem()))
|
|
||||||
}
|
|
||||||
v = v.Elem()
|
|
||||||
}
|
|
||||||
}
|
|
||||||
v = v.Field(x)
|
|
||||||
}
|
|
||||||
return v
|
|
||||||
}
|
|
1998
vendor/golang.org/x/net/webdav/internal/xml/xml.go
generated
vendored
File diff suppressed because it is too large
Load diff
94
vendor/golang.org/x/net/webdav/litmus_test_server.go
generated
vendored
@@ -1,94 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
|
|
||||||
// Use of this source code is governed by a BSD-style
|
|
||||||
// license that can be found in the LICENSE file.
|
|
||||||
|
|
||||||
// +build ignore
|
|
||||||
|
|
||||||
/*
|
|
||||||
This program is a server for the WebDAV 'litmus' compliance test at
|
|
||||||
http://www.webdav.org/neon/litmus/
|
|
||||||
To run the test:
|
|
||||||
|
|
||||||
go run litmus_test_server.go
|
|
||||||
|
|
||||||
and separately, from the downloaded litmus-xxx directory:
|
|
||||||
|
|
||||||
make URL=http://localhost:9999/ check
|
|
||||||
*/
|
|
||||||
package main
|
|
||||||
|
|
||||||
import (
|
|
||||||
"flag"
|
|
||||||
"fmt"
|
|
||||||
"log"
|
|
||||||
"net/http"
|
|
||||||
"net/url"
|
|
||||||
|
|
||||||
"golang.org/x/net/webdav"
|
|
||||||
)
|
|
||||||
|
|
||||||
var port = flag.Int("port", 9999, "server port")
|
|
||||||
|
|
||||||
func main() {
|
|
||||||
flag.Parse()
|
|
||||||
log.SetFlags(0)
|
|
||||||
h := &webdav.Handler{
|
|
||||||
FileSystem: webdav.NewMemFS(),
|
|
||||||
LockSystem: webdav.NewMemLS(),
|
|
||||||
Logger: func(r *http.Request, err error) {
|
|
||||||
litmus := r.Header.Get("X-Litmus")
|
|
||||||
if len(litmus) > 19 {
|
|
||||||
litmus = litmus[:16] + "..."
|
|
||||||
}
|
|
||||||
|
|
||||||
switch r.Method {
|
|
||||||
case "COPY", "MOVE":
|
|
||||||
dst := ""
|
|
||||||
if u, err := url.Parse(r.Header.Get("Destination")); err == nil {
|
|
||||||
dst = u.Path
|
|
||||||
}
|
|
||||||
o := r.Header.Get("Overwrite")
|
|
||||||
log.Printf("%-20s%-10s%-30s%-30so=%-2s%v", litmus, r.Method, r.URL.Path, dst, o, err)
|
|
||||||
default:
|
|
||||||
log.Printf("%-20s%-10s%-30s%v", litmus, r.Method, r.URL.Path, err)
|
|
||||||
}
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
// The next line would normally be:
|
|
||||||
// http.Handle("/", h)
|
|
||||||
// but we wrap that HTTP handler h to cater for a special case.
|
|
||||||
//
|
|
||||||
// The propfind_invalid2 litmus test case expects an empty namespace prefix
|
|
||||||
// declaration to be an error. The FAQ in the webdav litmus test says:
|
|
||||||
//
|
|
||||||
// "What does the "propfind_invalid2" test check for?...
|
|
||||||
//
|
|
||||||
// If a request was sent with an XML body which included an empty namespace
|
|
||||||
// prefix declaration (xmlns:ns1=""), then the server must reject that with
|
|
||||||
// a "400 Bad Request" response, as it is invalid according to the XML
|
|
||||||
// Namespace specification."
|
|
||||||
//
|
|
||||||
// On the other hand, the Go standard library's encoding/xml package
|
|
||||||
// accepts an empty xmlns namespace, as per the discussion at
|
|
||||||
// https://github.com/golang/go/issues/8068
|
|
||||||
//
|
|
||||||
// Empty namespaces seem disallowed in the second (2006) edition of the XML
|
|
||||||
// standard, but allowed in a later edition. The grammar differs between
|
|
||||||
// http://www.w3.org/TR/2006/REC-xml-names-20060816/#ns-decl and
|
|
||||||
// http://www.w3.org/TR/REC-xml-names/#dt-prefix
|
|
||||||
//
|
|
||||||
// Thus, we assume that the propfind_invalid2 test is obsolete, and
|
|
||||||
// hard-code the 400 Bad Request response that the test expects.
|
|
||||||
http.Handle("/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
|
||||||
if r.Header.Get("X-Litmus") == "props: 3 (propfind_invalid2)" {
|
|
||||||
http.Error(w, "400 Bad Request", http.StatusBadRequest)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
h.ServeHTTP(w, r)
|
|
||||||
}))
|
|
||||||
|
|
||||||
addr := fmt.Sprintf(":%d", *port)
|
|
||||||
log.Printf("Serving %v", addr)
|
|
||||||
log.Fatal(http.ListenAndServe(addr, nil))
|
|
||||||
}
|
|
445
vendor/golang.org/x/net/webdav/lock.go
generated
vendored
@@ -1,445 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
|
|
||||||
// Use of this source code is governed by a BSD-style
|
|
||||||
// license that can be found in the LICENSE file.
|
|
||||||
|
|
||||||
package webdav
|
|
||||||
|
|
||||||
import (
|
|
||||||
"container/heap"
|
|
||||||
"errors"
|
|
||||||
"strconv"
|
|
||||||
"strings"
|
|
||||||
"sync"
|
|
||||||
"time"
|
|
||||||
)
|
|
||||||
|
|
||||||
var (
|
|
||||||
// ErrConfirmationFailed is returned by a LockSystem's Confirm method.
|
|
||||||
ErrConfirmationFailed = errors.New("webdav: confirmation failed")
|
|
||||||
// ErrForbidden is returned by a LockSystem's Unlock method.
|
|
||||||
ErrForbidden = errors.New("webdav: forbidden")
|
|
||||||
// ErrLocked is returned by a LockSystem's Create, Refresh and Unlock methods.
|
|
||||||
ErrLocked = errors.New("webdav: locked")
|
|
||||||
// ErrNoSuchLock is returned by a LockSystem's Refresh and Unlock methods.
|
|
||||||
ErrNoSuchLock = errors.New("webdav: no such lock")
|
|
||||||
)
|
|
||||||
|
|
||||||
// Condition can match a WebDAV resource, based on a token or ETag.
|
|
||||||
// Exactly one of Token and ETag should be non-empty.
|
|
||||||
type Condition struct {
|
|
||||||
Not bool
|
|
||||||
Token string
|
|
||||||
ETag string
|
|
||||||
}
|
|
||||||
|
|
||||||
// LockSystem manages access to a collection of named resources. The elements
|
|
||||||
// in a lock name are separated by slash ('/', U+002F) characters, regardless
|
|
||||||
// of host operating system convention.
|
|
||||||
type LockSystem interface {
|
|
||||||
// Confirm confirms that the caller can claim all of the locks specified by
|
|
||||||
// the given conditions, and that holding the union of all of those locks
|
|
||||||
// gives exclusive access to all of the named resources. Up to two resources
|
|
||||||
// can be named. Empty names are ignored.
|
|
||||||
//
|
|
||||||
// Exactly one of release and err will be non-nil. If release is non-nil,
|
|
||||||
// all of the requested locks are held until release is called. Calling
|
|
||||||
// release does not unlock the lock, in the WebDAV UNLOCK sense, but once
|
|
||||||
// Confirm has confirmed that a lock claim is valid, that lock cannot be
|
|
||||||
// Confirmed again until it has been released.
|
|
||||||
//
|
|
||||||
// If Confirm returns ErrConfirmationFailed then the Handler will continue
|
|
||||||
// to try any other set of locks presented (a WebDAV HTTP request can
|
|
||||||
// present more than one set of locks). If it returns any other non-nil
|
|
||||||
// error, the Handler will write a "500 Internal Server Error" HTTP status.
|
|
||||||
Confirm(now time.Time, name0, name1 string, conditions ...Condition) (release func(), err error)
|
|
||||||
|
|
||||||
// Create creates a lock with the given depth, duration, owner and root
|
|
||||||
// (name). The depth will either be negative (meaning infinite) or zero.
|
|
||||||
//
|
|
||||||
// If Create returns ErrLocked then the Handler will write a "423 Locked"
|
|
||||||
// HTTP status. If it returns any other non-nil error, the Handler will
|
|
||||||
// write a "500 Internal Server Error" HTTP status.
|
|
||||||
//
|
|
||||||
// See http://www.webdav.org/specs/rfc4918.html#rfc.section.9.10.6 for
|
|
||||||
// when to use each error.
|
|
||||||
//
|
|
||||||
// The token returned identifies the created lock. It should be an absolute
|
|
||||||
// URI as defined by RFC 3986, Section 4.3. In particular, it should not
|
|
||||||
// contain whitespace.
|
|
||||||
Create(now time.Time, details LockDetails) (token string, err error)
|
|
||||||
|
|
||||||
// Refresh refreshes the lock with the given token.
|
|
||||||
//
|
|
||||||
// If Refresh returns ErrLocked then the Handler will write a "423 Locked"
|
|
||||||
// HTTP Status. If Refresh returns ErrNoSuchLock then the Handler will write
|
|
||||||
// a "412 Precondition Failed" HTTP Status. If it returns any other non-nil
|
|
||||||
// error, the Handler will write a "500 Internal Server Error" HTTP status.
|
|
||||||
//
|
|
||||||
// See http://www.webdav.org/specs/rfc4918.html#rfc.section.9.10.6 for
|
|
||||||
// when to use each error.
|
|
||||||
Refresh(now time.Time, token string, duration time.Duration) (LockDetails, error)
|
|
||||||
|
|
||||||
// Unlock unlocks the lock with the given token.
|
|
||||||
//
|
|
||||||
// If Unlock returns ErrForbidden then the Handler will write a "403
|
|
||||||
// Forbidden" HTTP Status. If Unlock returns ErrLocked then the Handler
|
|
||||||
// will write a "423 Locked" HTTP status. If Unlock returns ErrNoSuchLock
|
|
||||||
// then the Handler will write a "409 Conflict" HTTP Status. If it returns
|
|
||||||
// any other non-nil error, the Handler will write a "500 Internal Server
|
|
||||||
// Error" HTTP status.
|
|
||||||
//
|
|
||||||
// See http://www.webdav.org/specs/rfc4918.html#rfc.section.9.11.1 for
|
|
||||||
// when to use each error.
|
|
||||||
Unlock(now time.Time, token string) error
|
|
||||||
}
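Taken together, a typical lock lifecycle against the in-memory implementation defined further below looks roughly like this (the resource name and duration are arbitrary):

package main

import (
	"fmt"
	"time"

	"golang.org/x/net/webdav"
)

func main() {
	ls := webdav.NewMemLS()
	now := time.Now()

	// LOCK: create an exclusive, zero-depth lock on a resource.
	token, err := ls.Create(now, webdav.LockDetails{
		Root:      "/reports/2019.txt",
		Duration:  5 * time.Minute,
		ZeroDepth: true,
	})
	if err != nil {
		panic(err)
	}

	// A request presenting the token (as it would via an If header) is confirmed;
	// release must be called once the request is done with the resource.
	release, err := ls.Confirm(now, "/reports/2019.txt", "", webdav.Condition{Token: token})
	if err != nil {
		panic(err)
	}
	release()

	// UNLOCK: drop the lock again.
	fmt.Println(ls.Unlock(now, token)) // <nil>
}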
|
|
||||||
|
|
||||||
// LockDetails are a lock's metadata.
|
|
||||||
type LockDetails struct {
|
|
||||||
// Root is the root resource name being locked. For a zero-depth lock, the
|
|
||||||
// root is the only resource being locked.
|
|
||||||
Root string
|
|
||||||
// Duration is the lock timeout. A negative duration means infinite.
|
|
||||||
Duration time.Duration
|
|
||||||
// OwnerXML is the verbatim <owner> XML given in a LOCK HTTP request.
|
|
||||||
//
|
|
||||||
// TODO: does the "verbatim" nature play well with XML namespaces?
|
|
||||||
// Does the OwnerXML field need to have more structure? See
|
|
||||||
// https://codereview.appspot.com/175140043/#msg2
|
|
||||||
OwnerXML string
|
|
||||||
// ZeroDepth is whether the lock has zero depth. If it does not have zero
|
|
||||||
// depth, it has infinite depth.
|
|
||||||
ZeroDepth bool
|
|
||||||
}
|
|
||||||
|
|
||||||
// NewMemLS returns a new in-memory LockSystem.
|
|
||||||
func NewMemLS() LockSystem {
|
|
||||||
return &memLS{
|
|
||||||
byName: make(map[string]*memLSNode),
|
|
||||||
byToken: make(map[string]*memLSNode),
|
|
||||||
gen: uint64(time.Now().Unix()),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
type memLS struct {
|
|
||||||
mu sync.Mutex
|
|
||||||
byName map[string]*memLSNode
|
|
||||||
byToken map[string]*memLSNode
|
|
||||||
gen uint64
|
|
||||||
// byExpiry only contains those nodes whose LockDetails have a finite
|
|
||||||
// Duration and are yet to expire.
|
|
||||||
byExpiry byExpiry
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *memLS) nextToken() string {
|
|
||||||
m.gen++
|
|
||||||
return strconv.FormatUint(m.gen, 10)
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *memLS) collectExpiredNodes(now time.Time) {
|
|
||||||
for len(m.byExpiry) > 0 {
|
|
||||||
if now.Before(m.byExpiry[0].expiry) {
|
|
||||||
break
|
|
||||||
}
|
|
||||||
m.remove(m.byExpiry[0])
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *memLS) Confirm(now time.Time, name0, name1 string, conditions ...Condition) (func(), error) {
|
|
||||||
m.mu.Lock()
|
|
||||||
defer m.mu.Unlock()
|
|
||||||
m.collectExpiredNodes(now)
|
|
||||||
|
|
||||||
var n0, n1 *memLSNode
|
|
||||||
if name0 != "" {
|
|
||||||
if n0 = m.lookup(slashClean(name0), conditions...); n0 == nil {
|
|
||||||
return nil, ErrConfirmationFailed
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if name1 != "" {
|
|
||||||
if n1 = m.lookup(slashClean(name1), conditions...); n1 == nil {
|
|
||||||
return nil, ErrConfirmationFailed
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Don't hold the same node twice.
|
|
||||||
if n1 == n0 {
|
|
||||||
n1 = nil
|
|
||||||
}
|
|
||||||
|
|
||||||
if n0 != nil {
|
|
||||||
m.hold(n0)
|
|
||||||
}
|
|
||||||
if n1 != nil {
|
|
||||||
m.hold(n1)
|
|
||||||
}
|
|
||||||
return func() {
|
|
||||||
m.mu.Lock()
|
|
||||||
defer m.mu.Unlock()
|
|
||||||
if n1 != nil {
|
|
||||||
m.unhold(n1)
|
|
||||||
}
|
|
||||||
if n0 != nil {
|
|
||||||
m.unhold(n0)
|
|
||||||
}
|
|
||||||
}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// lookup returns the node n that locks the named resource, provided that n
|
|
||||||
// matches at least one of the given conditions and that lock isn't held by
|
|
||||||
// another party. Otherwise, it returns nil.
|
|
||||||
//
|
|
||||||
// n may be a parent of the named resource, if n is an infinite depth lock.
|
|
||||||
func (m *memLS) lookup(name string, conditions ...Condition) (n *memLSNode) {
|
|
||||||
// TODO: support Condition.Not and Condition.ETag.
|
|
||||||
for _, c := range conditions {
|
|
||||||
n = m.byToken[c.Token]
|
|
||||||
if n == nil || n.held {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
if name == n.details.Root {
|
|
||||||
return n
|
|
||||||
}
|
|
||||||
if n.details.ZeroDepth {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
if n.details.Root == "/" || strings.HasPrefix(name, n.details.Root+"/") {
|
|
||||||
return n
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
[The remainder of the deleted lock.go: the memLS hold/unhold bookkeeping, the Create, Refresh and Unlock LockSystem methods, the reference-counted canCreate/create/remove tree maintenance, walkToRoot, the memLSNode type, the byExpiry heap that orders nodes by lock expiry, and parseTimeout, which maps the Timeout request header (RFC 4918, section 10.7) to a time.Duration, treating an absent or "Infinite" value as infiniteTimeout and rejecting anything not of the form Second-N.]
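Taken together, Create, Refresh and Unlock above, plus the Confirm method used by webdav.go further down, form the LockSystem contract. A minimal usage sketch, assuming the NewMemLS constructor and the LockDetails and Condition types defined elsewhere in the deleted lock.go; the paths and durations are placeholders and error handling is kept deliberately blunt:

package main

import (
	"fmt"
	"time"

	"golang.org/x/net/webdav"
)

func main() {
	ls := webdav.NewMemLS()
	now := time.Now()

	// Take an exclusive, zero-depth lock on /a/b for one hour.
	token, err := ls.Create(now, webdav.LockDetails{
		Root:      "/a/b",
		Duration:  time.Hour,
		ZeroDepth: true,
	})
	if err != nil {
		panic(err)
	}

	// A second lock on the same root is refused with ErrLocked.
	if _, err := ls.Create(now, webdav.LockDetails{Root: "/a/b", ZeroDepth: true}); err == webdav.ErrLocked {
		fmt.Println("already locked:", err)
	}

	// Confirm holds the lock for the duration of a request; the returned
	// release func must be called once the request is done.
	release, err := ls.Confirm(now, "/a/b", "", webdav.Condition{Token: token})
	if err != nil {
		panic(err)
	}
	release()

	// Refresh and Unlock mirror a LOCK refresh and the UNLOCK method.
	if _, err := ls.Refresh(now, token, 30*time.Minute); err != nil {
		panic(err)
	}
	if err := ls.Unlock(now, token); err != nil {
		panic(err)
	}
}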
469
vendor/golang.org/x/net/webdav/prop.go
generated
vendored
@ -1,469 +0,0 @@
[prop.go, deleted in full: the Proppatch and Propstat types, makePropstats, the DeadPropsHolder interface for dead (explicitly set) properties, the liveProps table of protected DAV: properties, the props, propnames, allprop and patch helpers behind PROPFIND and PROPPATCH, escapeXML, the per-property find functions (resourcetype, displayname, getcontentlength, getlastmodified, getcontenttype, getetag, supportedlock), and the optional ContentTyper and ETager interfaces plus ErrNotImplemented, which let the os.FileInfo values returned by a FileSystem override content-type sniffing and the mtime/size ETag heuristic.]
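Of the deleted prop.go, the ContentTyper and ETager extension points are the parts an embedding application is most likely to reach for, since they bypass the content sniffing and the mtime/size ETag fallback. A hedged sketch of an os.FileInfo wrapper that satisfies both; the fileInfo type and its fields are invented for illustration:

package main

import (
	"context"
	"fmt"
	"os"

	"golang.org/x/net/webdav"
)

// fileInfo is a hypothetical wrapper carrying precomputed metadata, for
// example from a database row, next to the usual os.FileInfo.
type fileInfo struct {
	os.FileInfo
	contentType string
	etag        string
}

// ContentType satisfies ContentTyper. Returning ErrNotImplemented makes the
// handler fall back to extension- and sniffing-based detection.
func (fi fileInfo) ContentType(ctx context.Context) (string, error) {
	if fi.contentType == "" {
		return "", webdav.ErrNotImplemented
	}
	return fi.contentType, nil
}

// ETag satisfies ETager; the value should be of the form "value" or W/"value".
func (fi fileInfo) ETag(ctx context.Context) (string, error) {
	if fi.etag == "" {
		return "", webdav.ErrNotImplemented
	}
	return fi.etag, nil
}

// Compile-time checks that fileInfo is an os.FileInfo implementing both
// optional interfaces looked up on the values returned by Stat.
var (
	_ os.FileInfo         = fileInfo{}
	_ webdav.ContentTyper = fileInfo{}
	_ webdav.ETager       = fileInfo{}
)

func main() {
	fmt.Println("fileInfo implements ContentTyper and ETager")
}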
702
vendor/golang.org/x/net/webdav/webdav.go
generated
vendored
@ -1,702 +0,0 @@
[webdav.go, deleted in full: the Handler type (Prefix, FileSystem, LockSystem and an optional Logger), stripPrefix, ServeHTTP's dispatch to handleOptions, handleGetHeadPost, handleDelete, handlePut, handleMkcol, handleCopyMove, handleLock, handleUnlock, handlePropfind and handleProppatch, the lock/confirmLocks helpers that evaluate the If header, makePropstatResponse, parseDepth, the WebDAV status-code extensions (207 Multi-Status, 422 Unprocessable Entity, 423 Locked, 424 Failed Dependency, 507 Insufficient Storage) with StatusText, and the package's error variables.]
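For orientation, this is roughly how the deleted Handler is mounted on net/http. A minimal sketch, assuming webdav.Dir and webdav.NewMemLS from the same deleted package; the /dav prefix, the directory and the listen address are placeholders:

package main

import (
	"log"
	"net/http"

	"golang.org/x/net/webdav"
)

func main() {
	h := &webdav.Handler{
		// Strip /dav from incoming paths before they reach the FileSystem.
		Prefix:     "/dav",
		FileSystem: webdav.Dir("/srv/dav"), // any FileSystem implementation works here
		LockSystem: webdav.NewMemLS(),
		Logger: func(r *http.Request, err error) {
			if err != nil {
				log.Printf("webdav %s %s: %v", r.Method, r.URL.Path, err)
			}
		},
	}
	http.Handle("/dav/", h)
	log.Fatal(http.ListenAndServe(":8080", nil))
}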
519
vendor/golang.org/x/net/webdav/xml.go
generated
vendored
@ -1,519 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
|
|
||||||
// Use of this source code is governed by a BSD-style
|
|
||||||
// license that can be found in the LICENSE file.
|
|
||||||
|
|
||||||
package webdav
|
|
||||||
|
|
||||||
// The XML encoding is covered by Section 14.
|
|
||||||
// http://www.webdav.org/specs/rfc4918.html#xml.element.definitions
|
|
||||||
|
|
||||||
import (
|
|
||||||
"bytes"
|
|
||||||
"encoding/xml"
|
|
||||||
"fmt"
|
|
||||||
"io"
|
|
||||||
"net/http"
|
|
||||||
"time"
|
|
||||||
|
|
||||||
// As of https://go-review.googlesource.com/#/c/12772/ which was submitted
|
|
||||||
// in July 2015, this package uses an internal fork of the standard
|
|
||||||
// library's encoding/xml package, due to changes in the way namespaces
|
|
||||||
// were encoded. Such changes were introduced in the Go 1.5 cycle, but were
|
|
||||||
// rolled back in response to https://github.com/golang/go/issues/11841
|
|
||||||
//
|
|
||||||
// However, this package's exported API, specifically the Property and
|
|
||||||
// DeadPropsHolder types, need to refer to the standard library's version
|
|
||||||
// of the xml.Name type, as code that imports this package cannot refer to
|
|
||||||
// the internal version.
|
|
||||||
//
|
|
||||||
// This file therefore imports both the internal and external versions, as
|
|
||||||
// ixml and xml, and converts between them.
|
|
||||||
//
|
|
||||||
// In the long term, this package should use the standard library's version
|
|
||||||
// only, and the internal fork deleted, once
|
|
||||||
// https://github.com/golang/go/issues/13400 is resolved.
|
|
||||||
ixml "golang.org/x/net/webdav/internal/xml"
|
|
||||||
)
|
|
||||||
|
|
||||||
// http://www.webdav.org/specs/rfc4918.html#ELEMENT_lockinfo
|
|
||||||
type lockInfo struct {
|
|
||||||
XMLName ixml.Name `xml:"lockinfo"`
|
|
||||||
Exclusive *struct{} `xml:"lockscope>exclusive"`
|
|
||||||
Shared *struct{} `xml:"lockscope>shared"`
|
|
||||||
Write *struct{} `xml:"locktype>write"`
|
|
||||||
Owner owner `xml:"owner"`
|
|
||||||
}
|
|
||||||
|
|
||||||
// http://www.webdav.org/specs/rfc4918.html#ELEMENT_owner
|
|
||||||
type owner struct {
|
|
||||||
InnerXML string `xml:",innerxml"`
|
|
||||||
}
|
|
||||||
|
|
||||||
func readLockInfo(r io.Reader) (li lockInfo, status int, err error) {
|
|
||||||
c := &countingReader{r: r}
|
|
||||||
if err = ixml.NewDecoder(c).Decode(&li); err != nil {
|
|
||||||
if err == io.EOF {
|
|
||||||
if c.n == 0 {
|
|
||||||
// An empty body means to refresh the lock.
|
|
||||||
// http://www.webdav.org/specs/rfc4918.html#refreshing-locks
|
|
||||||
return lockInfo{}, 0, nil
|
|
||||||
}
|
|
||||||
err = errInvalidLockInfo
|
|
||||||
}
|
|
||||||
return lockInfo{}, http.StatusBadRequest, err
|
|
||||||
}
|
|
||||||
// We only support exclusive (non-shared) write locks. In practice, these are
|
|
||||||
// the only types of locks that seem to matter.
|
|
||||||
if li.Exclusive == nil || li.Shared != nil || li.Write == nil {
|
|
||||||
return lockInfo{}, http.StatusNotImplemented, errUnsupportedLockInfo
|
|
||||||
}
|
|
||||||
return li, 0, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
type countingReader struct {
|
|
||||||
n int
|
|
||||||
r io.Reader
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *countingReader) Read(p []byte) (int, error) {
|
|
||||||
n, err := c.r.Read(p)
|
|
||||||
c.n += n
|
|
||||||
return n, err
|
|
||||||
}
|
|
||||||
|
|
||||||
func writeLockInfo(w io.Writer, token string, ld LockDetails) (int, error) {
|
|
||||||
depth := "infinity"
|
|
||||||
if ld.ZeroDepth {
|
|
||||||
depth = "0"
|
|
||||||
}
|
|
||||||
timeout := ld.Duration / time.Second
|
|
||||||
return fmt.Fprintf(w, "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n"+
|
|
||||||
"<D:prop xmlns:D=\"DAV:\"><D:lockdiscovery><D:activelock>\n"+
|
|
||||||
" <D:locktype><D:write/></D:locktype>\n"+
|
|
||||||
" <D:lockscope><D:exclusive/></D:lockscope>\n"+
|
|
||||||
" <D:depth>%s</D:depth>\n"+
|
|
||||||
" <D:owner>%s</D:owner>\n"+
|
|
||||||
" <D:timeout>Second-%d</D:timeout>\n"+
|
|
||||||
" <D:locktoken><D:href>%s</D:href></D:locktoken>\n"+
|
|
||||||
" <D:lockroot><D:href>%s</D:href></D:lockroot>\n"+
|
|
||||||
"</D:activelock></D:lockdiscovery></D:prop>",
|
|
||||||
depth, ld.OwnerXML, timeout, escape(token), escape(ld.Root),
|
|
||||||
)
|
|
||||||
}
|
|
||||||
|
|
||||||
func escape(s string) string {
	for i := 0; i < len(s); i++ {
		switch s[i] {
		case '"', '&', '\'', '<', '>':
			b := bytes.NewBuffer(nil)
			ixml.EscapeText(b, []byte(s))
			return b.String()
		}
	}
	return s
}

// Next returns the next token, if any, in the XML stream of d.
// RFC 4918 requires to ignore comments, processing instructions
// and directives.
// http://www.webdav.org/specs/rfc4918.html#property_values
// http://www.webdav.org/specs/rfc4918.html#xml-extensibility
func next(d *ixml.Decoder) (ixml.Token, error) {
	for {
		t, err := d.Token()
		if err != nil {
			return t, err
		}
		switch t.(type) {
		case ixml.Comment, ixml.Directive, ixml.ProcInst:
			continue
		default:
			return t, nil
		}
	}
}

// http://www.webdav.org/specs/rfc4918.html#ELEMENT_prop (for propfind)
type propfindProps []xml.Name

// UnmarshalXML appends the property names enclosed within start to pn.
//
// It returns an error if start does not contain any properties or if
// properties contain values. Character data between properties is ignored.
func (pn *propfindProps) UnmarshalXML(d *ixml.Decoder, start ixml.StartElement) error {
	for {
		t, err := next(d)
		if err != nil {
			return err
		}
		switch t.(type) {
		case ixml.EndElement:
			if len(*pn) == 0 {
				return fmt.Errorf("%s must not be empty", start.Name.Local)
			}
			return nil
		case ixml.StartElement:
			name := t.(ixml.StartElement).Name
			t, err = next(d)
			if err != nil {
				return err
			}
			if _, ok := t.(ixml.EndElement); !ok {
				return fmt.Errorf("unexpected token %T", t)
			}
			*pn = append(*pn, xml.Name(name))
		}
	}
}

// http://www.webdav.org/specs/rfc4918.html#ELEMENT_propfind
type propfind struct {
	XMLName  ixml.Name     `xml:"DAV: propfind"`
	Allprop  *struct{}     `xml:"DAV: allprop"`
	Propname *struct{}     `xml:"DAV: propname"`
	Prop     propfindProps `xml:"DAV: prop"`
	Include  propfindProps `xml:"DAV: include"`
}

func readPropfind(r io.Reader) (pf propfind, status int, err error) {
	c := countingReader{r: r}
	if err = ixml.NewDecoder(&c).Decode(&pf); err != nil {
		if err == io.EOF {
			if c.n == 0 {
				// An empty body means to propfind allprop.
				// http://www.webdav.org/specs/rfc4918.html#METHOD_PROPFIND
				return propfind{Allprop: new(struct{})}, 0, nil
			}
			err = errInvalidPropfind
		}
		return propfind{}, http.StatusBadRequest, err
	}

	if pf.Allprop == nil && pf.Include != nil {
		return propfind{}, http.StatusBadRequest, errInvalidPropfind
	}
	if pf.Allprop != nil && (pf.Prop != nil || pf.Propname != nil) {
		return propfind{}, http.StatusBadRequest, errInvalidPropfind
	}
	if pf.Prop != nil && pf.Propname != nil {
		return propfind{}, http.StatusBadRequest, errInvalidPropfind
	}
	if pf.Propname == nil && pf.Allprop == nil && pf.Prop == nil {
		return propfind{}, http.StatusBadRequest, errInvalidPropfind
	}
	return pf, 0, nil
}

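// Illustrative sketch, not part of the original golang.org/x/net/webdav source:
// what readPropfind yields for a typical PROPFIND body that names two DAV:
// properties. The XML literal is invented example data.
func examplePropfind() error {
	body := bytes.NewBufferString(`<?xml version="1.0" encoding="utf-8"?>
		<D:propfind xmlns:D="DAV:">
			<D:prop><D:displayname/><D:getcontentlength/></D:prop>
		</D:propfind>`)
	pf, status, err := readPropfind(body)
	if err != nil {
		return fmt.Errorf("PROPFIND body rejected with status %d: %v", status, err)
	}
	// pf.Prop holds the two requested names; pf.Allprop and pf.Propname stay nil.
	for _, name := range pf.Prop {
		fmt.Printf("requested property %s in namespace %s\n", name.Local, name.Space)
	}
	return nil
}
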
// Property represents a single DAV resource property as defined in RFC 4918.
// See http://www.webdav.org/specs/rfc4918.html#data.model.for.resource.properties
type Property struct {
	// XMLName is the fully qualified name that identifies this property.
	XMLName xml.Name

	// Lang is an optional xml:lang attribute.
	Lang string `xml:"xml:lang,attr,omitempty"`

	// InnerXML contains the XML representation of the property value.
	// See http://www.webdav.org/specs/rfc4918.html#property_values
	//
	// Property values of complex type or mixed-content must have fully
	// expanded XML namespaces or be self-contained with according
	// XML namespace declarations. They must not rely on any XML
	// namespace declarations within the scope of the XML document,
	// even including the DAV: namespace.
	InnerXML []byte `xml:",innerxml"`
}

// ixmlProperty is the same as the Property type except it holds an ixml.Name
// instead of an xml.Name.
type ixmlProperty struct {
	XMLName  ixml.Name
	Lang     string `xml:"xml:lang,attr,omitempty"`
	InnerXML []byte `xml:",innerxml"`
}

// http://www.webdav.org/specs/rfc4918.html#ELEMENT_error
// See multistatusWriter for the "D:" namespace prefix.
type xmlError struct {
	XMLName  ixml.Name `xml:"D:error"`
	InnerXML []byte    `xml:",innerxml"`
}

// http://www.webdav.org/specs/rfc4918.html#ELEMENT_propstat
// See multistatusWriter for the "D:" namespace prefix.
type propstat struct {
	Prop                []Property `xml:"D:prop>_ignored_"`
	Status              string     `xml:"D:status"`
	Error               *xmlError  `xml:"D:error"`
	ResponseDescription string     `xml:"D:responsedescription,omitempty"`
}

// ixmlPropstat is the same as the propstat type except it holds an ixml.Name
// instead of an xml.Name.
type ixmlPropstat struct {
	Prop                []ixmlProperty `xml:"D:prop>_ignored_"`
	Status              string         `xml:"D:status"`
	Error               *xmlError      `xml:"D:error"`
	ResponseDescription string         `xml:"D:responsedescription,omitempty"`
}

// MarshalXML prepends the "D:" namespace prefix on properties in the DAV: namespace
// before encoding. See multistatusWriter.
func (ps propstat) MarshalXML(e *ixml.Encoder, start ixml.StartElement) error {
	// Convert from a propstat to an ixmlPropstat.
	ixmlPs := ixmlPropstat{
		Prop:                make([]ixmlProperty, len(ps.Prop)),
		Status:              ps.Status,
		Error:               ps.Error,
		ResponseDescription: ps.ResponseDescription,
	}
	for k, prop := range ps.Prop {
		ixmlPs.Prop[k] = ixmlProperty{
			XMLName:  ixml.Name(prop.XMLName),
			Lang:     prop.Lang,
			InnerXML: prop.InnerXML,
		}
	}

	for k, prop := range ixmlPs.Prop {
		if prop.XMLName.Space == "DAV:" {
			prop.XMLName = ixml.Name{Space: "", Local: "D:" + prop.XMLName.Local}
			ixmlPs.Prop[k] = prop
		}
	}
	// Distinct type to avoid infinite recursion of MarshalXML.
	type newpropstat ixmlPropstat
	return e.EncodeElement(newpropstat(ixmlPs), start)
}

// http://www.webdav.org/specs/rfc4918.html#ELEMENT_response
// See multistatusWriter for the "D:" namespace prefix.
type response struct {
	XMLName             ixml.Name  `xml:"D:response"`
	Href                []string   `xml:"D:href"`
	Propstat            []propstat `xml:"D:propstat"`
	Status              string     `xml:"D:status,omitempty"`
	Error               *xmlError  `xml:"D:error"`
	ResponseDescription string     `xml:"D:responsedescription,omitempty"`
}

// MultistatusWriter marshals one or more Responses into a XML
// multistatus response.
// See http://www.webdav.org/specs/rfc4918.html#ELEMENT_multistatus
// TODO(rsto, mpl): As a workaround, the "D:" namespace prefix, defined as
// "DAV:" on this element, is prepended on the nested response, as well as on all
// its nested elements. All property names in the DAV: namespace are prefixed as
// well. This is because some versions of Mini-Redirector (on windows 7) ignore
// elements with a default namespace (no prefixed namespace). A less intrusive fix
// should be possible after golang.org/cl/11074. See https://golang.org/issue/11177
type multistatusWriter struct {
	// ResponseDescription contains the optional responsedescription
	// of the multistatus XML element. Only the latest content before
	// close will be emitted. Empty response descriptions are not
	// written.
	responseDescription string

	w   http.ResponseWriter
	enc *ixml.Encoder
}

// Write validates and emits a DAV response as part of a multistatus response
// element.
//
// It sets the HTTP status code of its underlying http.ResponseWriter to 207
// (Multi-Status) and populates the Content-Type header. If r is the
// first, valid response to be written, Write prepends the XML representation
// of r with a multistatus tag. Callers must call close after the last response
// has been written.
func (w *multistatusWriter) write(r *response) error {
	switch len(r.Href) {
	case 0:
		return errInvalidResponse
	case 1:
		if len(r.Propstat) > 0 != (r.Status == "") {
			return errInvalidResponse
		}
	default:
		if len(r.Propstat) > 0 || r.Status == "" {
			return errInvalidResponse
		}
	}
	err := w.writeHeader()
	if err != nil {
		return err
	}
	return w.enc.Encode(r)
}

// writeHeader writes a XML multistatus start element on w's underlying
// http.ResponseWriter and returns the result of the write operation.
// After the first write attempt, writeHeader becomes a no-op.
func (w *multistatusWriter) writeHeader() error {
	if w.enc != nil {
		return nil
	}
	w.w.Header().Add("Content-Type", "text/xml; charset=utf-8")
	w.w.WriteHeader(StatusMulti)
	_, err := fmt.Fprintf(w.w, `<?xml version="1.0" encoding="UTF-8"?>`)
	if err != nil {
		return err
	}
	w.enc = ixml.NewEncoder(w.w)
	return w.enc.EncodeToken(ixml.StartElement{
		Name: ixml.Name{
			Space: "DAV:",
			Local: "multistatus",
		},
		Attr: []ixml.Attr{{
			Name:  ixml.Name{Space: "xmlns", Local: "D"},
			Value: "DAV:",
		}},
	})
}

// Close completes the marshalling of the multistatus response. It returns
// an error if the multistatus response could not be completed. If both the
// return value and field enc of w are nil, then no multistatus response has
// been written.
func (w *multistatusWriter) close() error {
	if w.enc == nil {
		return nil
	}
	var end []ixml.Token
	if w.responseDescription != "" {
		name := ixml.Name{Space: "DAV:", Local: "responsedescription"}
		end = append(end,
			ixml.StartElement{Name: name},
			ixml.CharData(w.responseDescription),
			ixml.EndElement{Name: name},
		)
	}
	end = append(end, ixml.EndElement{
		Name: ixml.Name{Space: "DAV:", Local: "multistatus"},
	})
	for _, t := range end {
		err := w.enc.EncodeToken(t)
		if err != nil {
			return err
		}
	}
	return w.enc.Flush()
}

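// Illustrative sketch, not part of the original golang.org/x/net/webdav source:
// a hypothetical handler streaming a per-resource result as one 207 Multi-Status
// document through multistatusWriter. The href, property and status line are
// example values only.
func exampleMultistatus(rw http.ResponseWriter) error {
	mw := multistatusWriter{w: rw}
	r := response{
		Href: []string{"/files/report.txt"},
		Propstat: []propstat{{
			Prop: []Property{{
				XMLName:  xml.Name{Space: "DAV:", Local: "displayname"},
				InnerXML: []byte("report.txt"),
			}},
			Status: "HTTP/1.1 200 OK",
		}},
	}
	if err := mw.write(&r); err != nil {
		return err
	}
	// close emits the closing </D:multistatus> tag; nothing is final before it.
	return mw.close()
}
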
var xmlLangName = ixml.Name{Space: "http://www.w3.org/XML/1998/namespace", Local: "lang"}

func xmlLang(s ixml.StartElement, d string) string {
	for _, attr := range s.Attr {
		if attr.Name == xmlLangName {
			return attr.Value
		}
	}
	return d
}

type xmlValue []byte

func (v *xmlValue) UnmarshalXML(d *ixml.Decoder, start ixml.StartElement) error {
	// The XML value of a property can be arbitrary, mixed-content XML.
	// To make sure that the unmarshalled value contains all required
	// namespaces, we encode all the property value XML tokens into a
	// buffer. This forces the encoder to redeclare any used namespaces.
	var b bytes.Buffer
	e := ixml.NewEncoder(&b)
	for {
		t, err := next(d)
		if err != nil {
			return err
		}
		if e, ok := t.(ixml.EndElement); ok && e.Name == start.Name {
			break
		}
		if err = e.EncodeToken(t); err != nil {
			return err
		}
	}
	err := e.Flush()
	if err != nil {
		return err
	}
	*v = b.Bytes()
	return nil
}

// http://www.webdav.org/specs/rfc4918.html#ELEMENT_prop (for proppatch)
type proppatchProps []Property

// UnmarshalXML appends the property names and values enclosed within start
// to ps.
//
// An xml:lang attribute that is defined either on the DAV:prop or property
// name XML element is propagated to the property's Lang field.
//
// UnmarshalXML returns an error if start does not contain any properties or if
// property values contain syntactically incorrect XML.
func (ps *proppatchProps) UnmarshalXML(d *ixml.Decoder, start ixml.StartElement) error {
	lang := xmlLang(start, "")
	for {
		t, err := next(d)
		if err != nil {
			return err
		}
		switch elem := t.(type) {
		case ixml.EndElement:
			if len(*ps) == 0 {
				return fmt.Errorf("%s must not be empty", start.Name.Local)
			}
			return nil
		case ixml.StartElement:
			p := Property{
				XMLName: xml.Name(t.(ixml.StartElement).Name),
				Lang:    xmlLang(t.(ixml.StartElement), lang),
			}
			err = d.DecodeElement(((*xmlValue)(&p.InnerXML)), &elem)
			if err != nil {
				return err
			}
			*ps = append(*ps, p)
		}
	}
}

// http://www.webdav.org/specs/rfc4918.html#ELEMENT_set
// http://www.webdav.org/specs/rfc4918.html#ELEMENT_remove
type setRemove struct {
	XMLName ixml.Name
	Lang    string         `xml:"xml:lang,attr,omitempty"`
	Prop    proppatchProps `xml:"DAV: prop"`
}

// http://www.webdav.org/specs/rfc4918.html#ELEMENT_propertyupdate
type propertyupdate struct {
	XMLName   ixml.Name   `xml:"DAV: propertyupdate"`
	Lang      string      `xml:"xml:lang,attr,omitempty"`
	SetRemove []setRemove `xml:",any"`
}

func readProppatch(r io.Reader) (patches []Proppatch, status int, err error) {
	var pu propertyupdate
	if err = ixml.NewDecoder(r).Decode(&pu); err != nil {
		return nil, http.StatusBadRequest, err
	}
	for _, op := range pu.SetRemove {
		remove := false
		switch op.XMLName {
		case ixml.Name{Space: "DAV:", Local: "set"}:
			// No-op.
		case ixml.Name{Space: "DAV:", Local: "remove"}:
			for _, p := range op.Prop {
				if len(p.InnerXML) > 0 {
					return nil, http.StatusBadRequest, errInvalidProppatch
				}
			}
			remove = true
		default:
			return nil, http.StatusBadRequest, errInvalidProppatch
		}
		patches = append(patches, Proppatch{Remove: remove, Props: op.Prop})
	}
	return patches, 0, nil
}

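// Illustrative sketch, not part of the original golang.org/x/net/webdav source:
// how a PROPPATCH body with one set and one remove operation comes out of
// readProppatch. The XML literal and the Z: namespace are invented example data.
func exampleProppatch() error {
	body := bytes.NewBufferString(`<?xml version="1.0" encoding="utf-8"?>
		<D:propertyupdate xmlns:D="DAV:" xmlns:Z="http://example.com/ns">
			<D:set><D:prop><Z:color>blue</Z:color></D:prop></D:set>
			<D:remove><D:prop><Z:size/></D:prop></D:remove>
		</D:propertyupdate>`)
	patches, status, err := readProppatch(body)
	if err != nil {
		return fmt.Errorf("PROPPATCH body rejected with status %d: %v", status, err)
	}
	// Yields two Proppatch entries: {Remove: false, Props: [Z:color]} and
	// {Remove: true, Props: [Z:size]}.
	for _, p := range patches {
		fmt.Printf("remove=%v with %d properties\n", p.Remove, len(p.Props))
	}
	return nil
}
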
7 vendor/modules.txt vendored

@@ -141,10 +141,6 @@ github.com/spf13/pflag
 github.com/spf13/viper
 # github.com/stretchr/testify v1.2.2
 github.com/stretchr/testify/assert
-# github.com/swaggo/echo-swagger v0.0.0-20180315045949-97f46bb9e5a5
-github.com/swaggo/echo-swagger
-# github.com/swaggo/files v0.0.0-20180215091130-49c8a91ea3fa
-github.com/swaggo/files
 # github.com/swaggo/swag v1.4.1-0.20181210033626-0e12fd5eb026
 github.com/swaggo/swag/cmd/swag
 github.com/swaggo/swag
@@ -164,9 +160,6 @@ golang.org/x/crypto/acme
 golang.org/x/lint/golint
 golang.org/x/lint
 # golang.org/x/net v0.0.0-20181217023233-e147a9138326
-golang.org/x/net/webdav
-golang.org/x/net/context
-golang.org/x/net/webdav/internal/xml
 golang.org/x/net/idna
 # golang.org/x/sys v0.0.0-20181128092732-4ed8d59d0b35
 golang.org/x/sys/unix