1. Information
I plan to study a motion planning algorithm.
I will refer to the well-known course from USC.
The course information is as follows:
Instructor: Professor Nora Ayanian
Course: Coordinated Mobile Robotics
3.1.1. Discrete Planning
- All models are completely known and predictable.
- Problem solving and planning are used as synonyms.
3.1.1.2. Problem Formulation
- State Space Model
- State = Distinct Situation for the world (x)
- Set of all possible states = State space (X) -> Countable
- State Transition Equation
x' = f(x, u)
- x : current state
- x': new state
- u : each action
- Set U of all possible actions over all states
U = ⋃ U(x) over all x ∈ X (the union of the action spaces)
- U(x): action space for each state x
- For distinct x, x' ∈ X, U(x) and U(x') are not necessarily disjoint
- Xg: a set of goal states
- Formulation 2.1 = Discrete Feasible Planning
- A nonempty state space X, which is a finite or countably infinite set of states.
- For each state x ∈ X, a finite action space U(x).
- A state transition function f that produces a state f(x, u) ∈ X for every x ∈ X and u ∈ U(x). The state transition equation is derived from f as x′ = f(x, u).
- An initial state xI ∈ X.
- A goal set Xg ⊂ X.
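To make Formulation 2.1 concrete, here is a minimal sketch in Python (my own toy example, not from the course): a 1-D grid world stands in for X, and the actions, transition function, initial state, and goal set are all hypothetical choices for illustration.

{{{#!python
# Minimal sketch of Formulation 2.1 (discrete feasible planning).
# The 1-D grid world, the actions {-1, +1}, and every name below are
# hypothetical; any countable X, finite U(x), and transition f fit the formulation.

X = set(range(5))            # state space X = {0, 1, 2, 3, 4}

def U(x):
    """Finite action space U(x): step left or right while staying inside X."""
    return {u for u in (-1, +1) if x + u in X}

def f(x, u):
    """State transition function: returns the new state x' = f(x, u)."""
    assert u in U(x), "u must belong to U(x)"
    return x + u

x_I = 0                      # initial state
X_g = {4}                    # goal set Xg ⊂ X
}}}

With these pieces the union U = ⋃ U(x) and the state transition equation from the bullets above can be evaluated directly, e.g. f(0, +1) gives 1.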
=> Express as a "Directed State Transition Graph"
- set of vertices = state space X
- directed edge from x ∈ X to x′ ∈ X exists <=> exists an action u ∈ U(x) such that x′ = f(x,u)
- initial state and goal set are designated as special vertices in the graph
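The directed state transition graph can likewise be sketched in a few lines. The snippet below (again a hypothetical toy that redefines the same 1-D grid so it runs on its own) builds the edge set x -> x′ = f(x, u) and runs a plain breadth-first search to check whether any goal state is reachable from the initial state; BFS is just one possible search over this graph, not something the notes prescribe.

{{{#!python
from collections import deque

# Same toy 1-D grid as the previous sketch, redefined so this snippet runs alone.
X = set(range(5))
def U(x): return {u for u in (-1, +1) if x + u in X}
def f(x, u): return x + u
x_I, X_g = 0, {4}

# Directed state transition graph: vertices = X; a directed edge x -> x'
# exists iff there is an action u in U(x) with x' = f(x, u).
edges = {x: {f(x, u) for u in U(x)} for x in X}

def is_feasible(start, goals):
    """Breadth-first search from the initial state; True iff some goal state is reachable."""
    visited, frontier = {start}, deque([start])
    while frontier:
        x = frontier.popleft()
        if x in goals:
            return True
        for x_next in edges[x]:
            if x_next not in visited:
                visited.add(x_next)
                frontier.append(x_next)
    return False

print(is_feasible(x_I, X_g))   # True: 0 -> 1 -> 2 -> 3 -> 4 reaches the goal
}}}

Here the initial state and the goal set are simply distinguished vertices of the graph, as the last bullet says.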