Publications of Daniel Schneegaß
- Daniel Schneegaß, Steffen Udluft, Thomas Martinetz: Kernel Rewards Regression: An Information Efficient Batch Policy Iteration Approach. Artificial Intelligence and Applications, 2006, pp. 428-433.
- Daniel Schneegaß, Thomas Martinetz, Michael Clausohm: OnlineDoubleMaxMinOver: A Simple Approximate Time and Information Efficient Online Support Vector Classification Method. ESANN, 2006, pp. 575-580.
- Thomas Martinetz, Kai Labusch, Daniel Schneegaß: SoftDoubleMinOver: A Simple Procedure for Maximum Margin Classification. ICANN (2), 2005, pp. 301-306.
- Daniel Schneegaß, Kai Labusch, Thomas Martinetz: MaxMinOver Regression: A Simple Incremental Approach for Support Vector Function Approximation. ICANN (1), 2006, pp. 150-158.
- Daniel Schneegaß, Steffen Udluft, Thomas Martinetz: Improving Optimality of Neural Rewards Regression for Data-Efficient Batch Near-Optimal Policy Identification. ICANN (1), 2007, pp. 109-118.
- Neural Rewards Regression for near-optimal policy identification in Markovian and partial observable environments.
- Explicit Kernel Rewards Regression for data-efficient near-optimal policy identification.
- The Intrinsic Recurrent Support Vector Machine.
- Safe exploration for reinforcement learning.
- A Neural Reinforcement Learning Approach to Gas Turbine Control.
- Uncertainty propagation for quality assurance in Reinforcement Learning.