I need help with two questions: 3 and 4, NOT 5. Subject: Robotics.

3. Reinforcement Learning

Reinforcement learning (RL) agents learn by taking state-dependent actions and experiencing rewards arising from interaction with their environment. One method is to use a table-based Q-learning algorithm.

Figure 1: The inverted pendulum problem

Q-learning tables are discrete, but most real-world tasks involve systems that have continuous states and are controlled using continuous actions. With this in mind, consider how a table-based Q-learning algorithm could learn to balance an inverted pendulum (as shown in Fig. 1). To achieve this:

(a) Describe a suitable reward function.

[3 marks]

(b) Describe a suitable choice of states and explain why they are appropriate.

[3 marks]

(c) Describe a suitable choice of actions, explain why they are appropriate, and explain how they relate to the states discussed in part (b).

[3 marks]

(d) Discuss how an inverted pendulum task could be either an MDP or a POMDP. [2 marks]
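The discretization issue the question raises (a discrete Q-table for a continuous system) can be sketched as follows. The bin counts, variable ranges, action set, and learning constants below are all illustrative assumptions, not part of the question:

```python
import numpy as np

# Discretize the continuous pendulum state (angle theta, angular velocity omega)
# into a finite grid so a tabular Q-function can be used.
N_THETA, N_OMEGA = 9, 9          # assumed bin counts per state variable
N_ACTIONS = 3                    # assumed torque actions: left, none, right
ALPHA, GAMMA = 0.1, 0.99         # assumed learning rate and discount factor

Q = np.zeros((N_THETA, N_OMEGA, N_ACTIONS))

def discretize(theta, omega):
    """Map a continuous (theta, omega) pair onto Q-table indices.
    Assumes theta in [-pi, pi] and omega in [-8, 8] rad/s."""
    i = int(np.clip((theta + np.pi) / (2 * np.pi) * N_THETA, 0, N_THETA - 1))
    j = int(np.clip((omega + 8.0) / 16.0 * N_OMEGA, 0, N_OMEGA - 1))
    return i, j

def q_update(s, a, r, s_next):
    """One tabular Q-learning step: move Q(s,a) toward the TD target."""
    td_target = r + GAMMA * Q[s_next].max()
    Q[s + (a,)] += ALPHA * (td_target - Q[s + (a,)])
```

For example, the upright state (theta = 0, omega = 0) lands in the central bin, so a reward function that pays out near that bin encourages balancing.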

(e) Discuss how simulated experience generated from a model within an RL agent can increase the speed with which the RL algorithm converges. How can this assist in finding a solution to the inverted pendulum task?

[4 marks]

(f) The Dyna-Q algorithm is one such model-based approach to RL. Using high-level pseudocode of no more than 12 lines, describe the operation of the Dyna-Q algorithm and explain all its key terms.

[5 marks]
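As a study aid for part (f), one Dyna-Q iteration can be sketched in Python. The dictionary-based Q-table, the `env` callback, and the planning-step count `n_planning` are assumptions made for the sketch; the exam answer itself should be pseudocode:

```python
import random

def dyna_q_step(Q, model, s, env, alpha=0.1, gamma=0.99, eps=0.1, n_planning=5):
    """One Dyna-Q iteration: act, learn from real experience,
    then learn again from n_planning simulated (model-generated) experiences."""
    # (1) epsilon-greedy action selection from the current Q-table
    if random.random() < eps:
        a = random.choice(list(Q[s].keys()))
    else:
        a = max(Q[s], key=Q[s].get)
    # (2) real experience and a direct Q-learning update
    s_next, r = env(s, a)
    Q[s][a] += alpha * (r + gamma * max(Q[s_next].values()) - Q[s][a])
    # (3) model learning: remember the observed transition
    model[(s, a)] = (s_next, r)
    # (4) planning: replay randomly chosen remembered transitions
    for _ in range(n_planning):
        (ps, pa), (pn, pr) = random.choice(list(model.items()))
        Q[ps][pa] += alpha * (pr + gamma * max(Q[pn].values()) - Q[ps][pa])
    return s_next
```

The planning loop is what speeds convergence (part (e)): each real pendulum swing is reused many times through the learned model, so fewer real trials are needed.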

4. State estimation

(a) When building a full state feedback controller, why is it often necessary to use some form of state estimator?

[3 marks]

(b) The Luenberger observer is a deterministic state estimator. Draw its signal flow graph to illustrate its operation and explain the design and function of the Luenberger gain L.

[3 marks]

(c) The Kalman filter is a stochastic state estimator. Draw and compare a signal flow graph of the Kalman estimator with that of the Luenberger observer, illustrating all the Kalman estimator’s important components, including its noise sources.

[4 marks]

(d) The Kalman filter iteratively computes five variables, as illustrated below.

Write a short paragraph on each of the terms 1 – 5 to explain their meaning and function.

[10 marks]
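The figure referenced in part (d) is not reproduced in this post. On the usual reading, the five iterated quantities are the predicted state, predicted covariance, Kalman gain, updated state, and updated covariance; a scalar sketch (the system and noise parameters `A`, `H`, `Qn`, `R` are assumed values) labels each one:

```python
def kalman_step(x_est, P, z, A=1.0, H=1.0, Qn=1e-3, R=1e-2):
    """One scalar Kalman filter iteration; (1)-(5) label the five
    quantities the filter recomputes each step."""
    x_pred = A * x_est                        # (1) a-priori state estimate
    P_pred = A * P * A + Qn                   # (2) a-priori error covariance
    K = P_pred * H / (H * P_pred * H + R)     # (3) Kalman gain
    x_new = x_pred + K * (z - H * x_pred)     # (4) a-posteriori state estimate
    P_new = (1.0 - K * H) * P_pred            # (5) a-posteriori error covariance
    return x_new, P_new
```

The gain K plays the same corrective role as the Luenberger gain L in part (b), except that K is recomputed each step from the noise covariances rather than fixed at design time.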

5. Gaussian processes

Describe the main difference between using Gaussian Processes and Support Vector Machines in approximating linear functions.

[20 marks]
