GSoC Project Ideas
Related Topics: NTF:GSoC/GSoCProjectIdeas
Testing
This project will appeal to anyone interested in learning advanced techniques in software testing.
It will cover testing low-level protocols and operational testing of ntpd.
Existing knowledge of software testing is not required; however, a successful candidate must have a firm grasp of the C language.
We also need network-level testing, including adding and removing interfaces.
Please be aware this project is more about software testing than it is about the NTP software itself.
You will learn a lot about the NTP protocol and protocols in general.
Learning how to test software is an extremely desirable skill in the software development world, and any candidate should come in with an eagerness to learn.
We usually have several students participate in writing tests. Students do not work together, although they will review each other's code.
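For a flavor of the work, here is a minimal sketch of a self-checking unit test in C. The helper parse_ntp_mode() and the bare assert() harness are illustrative only, not NTP's actual test framework; they simply show the shape of a test for a low-level protocol detail (the mode field in the first byte of an NTP packet).

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper under test: extract the 3-bit mode field
     * from the first byte of an NTP packet header (LI | VN | Mode). */
    static int
    parse_ntp_mode(const uint8_t *pkt)
    {
        return pkt[0] & 0x07;
    }

    static void
    test_parse_ntp_mode(void)
    {
        uint8_t client_pkt[48] = { 0x23 };  /* LI=0, VN=4, Mode=3 (client) */
        uint8_t server_pkt[48] = { 0x24 };  /* LI=0, VN=4, Mode=4 (server) */

        assert(parse_ntp_mode(client_pkt) == 3);
        assert(parse_ntp_mode(server_pkt) == 4);
    }

    int
    main(void)
    {
        test_parse_ntp_mode();
        printf("all tests passed\n");
        return 0;
    }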
Improving NTP's logging/debugging system
- Redesign/Change NTP's logging/debugging system.
- Write a common debugging/logging interface for NTP.
This topic has been worked on three times before; the API is designed and ready to go, but we have not yet converted the entire codebase to use it. There is a reasonable chance that during this project we will find problems with the new API that must be corrected, and we may also learn ways to make the new API more lightweight or otherwise better.
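As a rough illustration only (the names here are hypothetical, not the API that was actually designed), a unified interface tends to look like a single entry point with a severity level:

    #include <stdarg.h>
    #include <stdio.h>

    /* Hypothetical severity levels for a unified logging interface. */
    typedef enum {
        NLOG_DEBUG,
        NLOG_INFO,
        NLOG_WARNING,
        NLOG_ERROR
    } nlog_level;

    /* Single entry point: every subsystem would log through here, and a
     * real implementation would route to syslog, a logfile, or stderr
     * based on run-time configuration.  This sketch writes to stderr. */
    void
    nlog(nlog_level level, const char *fmt, ...)
    {
        static const char *tags[] = { "debug", "info", "warning", "error" };
        va_list ap;

        fprintf(stderr, "ntpd[%s]: ", tags[level]);
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fputc('\n', stderr);
    }

    int
    main(void)
    {
        nlog(NLOG_INFO, "synchronized to %s, offset %.6f s", "192.0.2.1", 0.000123);
        return 0;
    }

Converting the codebase would then largely be a matter of replacing the existing msyslog() and debug-printf call sites with the new interface and confirming that no information is lost along the way.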
Related Topics: GSoC2013LoggingDebugging,
GSoC2010LoggingAndDebugging,
GSoC2009LogDebug,
GSoC2009LoggingAndDebugging,
Bug #1408,
Bug #2160
Virtual Refclock Engine/Refclock Definition Language
Analyze the existing refclock drivers. Come up with a language that can be converted into some sort of bytecode or threaded code able to fully implement all of the refclocks we currently support. Harlan thinks this language would need basic math and expression support, execution-flow support (loops, conditionals, exceptions, etc.), and subroutine support (calls to pre-defined subroutines for I/O, logging, etc.). The implemented language should be demonstrably "safe" (bounds checking, etc.), perhaps implemented as a small "virtual refclock machine", either inside NTP or as a separate program that communicates with NTP using, e.g., the SHM refclock.
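As a hedged sketch of what the core of such a "virtual refclock machine" might look like (all names and opcodes here are hypothetical), consider an interpreter loop over a small, bounds-checked instruction set:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical opcodes for a tiny, safe refclock bytecode. */
    typedef enum {
        OP_READ_LINE,   /* read one line/sentence from the device        */
        OP_MATCH,       /* match a field pattern, e.g. an NMEA prefix    */
        OP_PARSE_TIME,  /* convert matched fields to a timestamp         */
        OP_OFFER,       /* hand the sample to ntpd's clock filter        */
        OP_LOG,         /* call a pre-defined logging subroutine         */
        OP_HALT
    } vrc_op;

    typedef struct {
        vrc_op  op;
        int32_t arg;
    } vrc_insn;

    /* Interpreter skeleton: every fetch and jump is range-checked, which
     * is what makes the "virtual refclock machine" demonstrably safe. */
    int
    vrc_run(const vrc_insn *prog, size_t nprog)
    {
        size_t pc = 0;

        while (pc < nprog) {
            const vrc_insn *in = &prog[pc++];

            switch (in->op) {
            case OP_READ_LINE:
            case OP_MATCH:
            case OP_PARSE_TIME:
            case OP_OFFER:
            case OP_LOG:
                /* calls into pre-defined, trusted subroutines go here */
                break;
            case OP_HALT:
                return 0;
            default:
                return -1;      /* reject unknown opcodes */
            }
        }
        return 0;
    }

    int
    main(void)
    {
        /* A hypothetical "driver" for an NMEA-style device. */
        const vrc_insn nmea_prog[] = {
            { OP_READ_LINE, 0 },
            { OP_MATCH, 0 },
            { OP_PARSE_TIME, 0 },
            { OP_OFFER, 0 },
            { OP_HALT, 0 }
        };

        return vrc_run(nmea_prog, sizeof(nmea_prog) / sizeof(nmea_prog[0]));
    }

The real design work is in choosing the instruction set and the pre-defined subroutines so that every existing driver can be expressed in it.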
Related Topics: LoadableRefclockDrivers
Study the usefulness of different clock models for NTP
This project is best suited to students in doctoral or post-doctoral programs who have a good understanding of clock statistics.
Other graduate students may also be considered, as will the rare undergraduate who can demonstrate proficiency.
There are two issues that could be considered for study. The first is a parameter that provides an explicit connection between the polling interval and the accuracy of the synchronization process. This is useful because the polling interval can be increased by a large factor if the client system does not require the full accuracy that NTP can typically support. The second issue is a more sophisticated model for the performance of the clock on the local system and for the network delay. The Kalman filter formalism is promising in this respect, because it provides a much more general method for specifying the noise characteristics of a process.
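For illustration only, one standard two-state clock model that a Kalman-filter treatment might start from, with offset \theta, frequency error \gamma, and polling interval \tau, is:

    \theta_{k+1} = \theta_k + \tau\,\gamma_k + w^{\theta}_k, \qquad
    \gamma_{k+1} = \gamma_k + w^{\gamma}_k, \qquad
    z_k = \theta_k + v_k

where w models the local oscillator's noise (white and random-walk frequency components) and v the measurement noise, dominated by network delay variation. The polling interval enters both directly through \tau and through the growth of the process-noise covariance with \tau, which is one way to make the connection between polling interval and achievable accuracy explicit.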
Convert the distribution from a recursive to a non-recursive Makefile framework
The codebase uses a traditional recursive Makefile framework. If the package could be converted to a non-recursive Makefile framework (hopefully one that continued to use AutoMake), then after running configure one could run "cd ntpd && make" and all prerequisite libraries would be built before ntpd itself was produced, as efficiently as possible. A make from the top level of the build tree would build the entire package, very efficiently. Note that typically a non-recursive project does not allow make anywhere but the top level, yet we would like to preserve that capability of recursive make builds.
The solution should handle "sub-packages": for example, the sntp/ directory is moving toward being a full tear-off, and the solution should work both for a stand-alone sntp package and for the complete NTP distribution (which includes sntp). Further, sntp carries a sub-package, libevent, which uses a recursive Autoconf/Automake build, and there is probably not enough benefit from a non-recursive libevent build to justify maintaining build infrastructure separate from upstream. One can install libevent systemwide and avoid building the NTP-bundled copy entirely. This implies a mostly non-recursive framework that nonetheless recurses when building the bundled libevent. Similarly, to preserve the intended ability of sntp to be built from a tear-off tarball containing only the sntp subtree, without duplicating logic between the top-level NTP Makefile and sntp/Makefile, it may make sense to retain recursion from the top-level Makefile to build sntp.
Update ntpq
- Provide real decoding of Authentication Status information and other status bytes/flags. DaveHart agrees decoded Authentication Status is worthwhile, but believes strongly that such decoding belongs in ntpd, not ntpq: ntpq should not be required to exactly match the version of ntpd for management convenience, and the interpretation of these values has changed in the past and likely will again, if nothing else by adding more defined bits. One way to do this in ntpd is to change the response for the peer variable authstatus from a simple numeric value to a string containing the decoded flag values, with or without the numeric value; the sketch below illustrates the idea. DaveHart notes, though, that this seems far too little work to constitute a GSoC project on its own.
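A hedged sketch of that ntpd-side decoding (the flag names and bit values below are placeholders; the real authentication-status layout differs and has changed across versions, which is exactly why decoding it centrally helps):

    #include <stddef.h>
    #include <stdio.h>

    /* Placeholder flag bits, for illustration only. */
    #define AUTH_ENABLED    0x01
    #define AUTH_OK         0x02
    #define AUTH_CRYPTO_NAK 0x04

    /* Render a status word as "0xNN (flag flag ...)" for an ntpq variable. */
    static void
    format_authstatus(unsigned status, char *buf, size_t len)
    {
        snprintf(buf, len, "0x%02x (%s%s%s)", status,
            (status & AUTH_ENABLED)    ? "enabled "    : "",
            (status & AUTH_OK)         ? "ok "         : "",
            (status & AUTH_CRYPTO_NAK) ? "crypto-nak " : "");
    }

    int
    main(void)
    {
        char buf[64];

        format_authstatus(0x03, buf, sizeof(buf));
        puts(buf);      /* prints, e.g., "0x03 (enabled ok )" */
        return 0;
    }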
Related Topics: NtpVariablesAndNtpq,
UpdatingTheRefidFormat,
Bug #820
Access and Authorization levels
Currently, ntpd supports two levels of access to its information: public and trusted. Finer-grained access would be better. All read-only data is currently considered "public", and some of it should be considered "private". All writable data is currently protected by a single level of access control, and some of this data is safer to modify than other data. For this project, you would classify these data types and implement additional authorization levels. If you have worked with parsers and lexers before, you can also add the new configuration keywords; if you have not, an NTP developer familiar with those areas of the code will take care of that for you.
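A minimal sketch of the kind of classification involved, with entirely hypothetical level names and variable names:

    #include <stdio.h>

    /* Hypothetical authorization levels, from least to most privileged. */
    typedef enum {
        ACC_PUBLIC,     /* readable by anyone                          */
        ACC_PRIVATE,    /* readable only by trusted/authenticated peers */
        ACC_OPERATOR,   /* may change benign runtime settings          */
        ACC_ADMIN       /* may change anything                         */
    } access_level;

    /* Each exposed variable would carry the level required to read and
     * to write it; requests are checked against the requester's level. */
    struct var_acl {
        const char      *name;
        access_level    read_level;
        access_level    write_level;
    };

    static const struct var_acl acl_table[] = {
        { "sysinfo",      ACC_PUBLIC,  ACC_ADMIN },
        { "peerlist",     ACC_PRIVATE, ACC_ADMIN },
        { "restrictlist", ACC_PRIVATE, ACC_OPERATOR },
    };

    static int
    access_allowed(const struct var_acl *v, access_level requester, int writing)
    {
        return requester >= (writing ? v->write_level : v->read_level);
    }

    int
    main(void)
    {
        /* e.g.: may an unauthenticated (public-level) query read "peerlist"? */
        printf("%s\n", access_allowed(&acl_table[1], ACC_PUBLIC, 0) ? "yes" : "no");
        return 0;
    }

The real work is deciding which of ntpd's many readable and writable variables belongs at which level, and wiring the checks into the request handling.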
Related Topics: ConfigurationAndAuthorizationLevelsForNtpd
Monitoring / Management front end
- cross-platform TUI/GUI front end for ntpq and the content of the scripts directory
- ncurses, perl/tk, WxWindows
- provide log access tools, graphing, etc.
Update the SHM refclock
- Clean up the protocol
- Possibly offer a "client library"
- Include some unit and stress tests to demonstrate that the protocol works and does not have race conditions or mutex problems (a sketch of the writer-side handshake appears below)
- Also see RefclockShmV2
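A hedged, abridged sketch of the writer side of the current handshake (field names here are illustrative; see the SHM driver documentation for the authoritative segment layout):

    #include <time.h>

    /* Abridged, illustrative view of the shared segment. */
    struct shm_time {
        int             mode;           /* 0: plain write, 1: count/valid handshake */
        volatile int    count;          /* bumped before and after each update      */
        time_t          clock_sec;      /* reference clock timestamp                */
        long            clock_nsec;
        time_t          recv_sec;       /* local receive timestamp                  */
        long            recv_nsec;
        int             leap;
        int             precision;
        volatile int    valid;
    };

    /* Writer side of the mode-1 handshake: clear valid, bump count, write the
     * sample, bump count again, then set valid, so a reader can detect a torn
     * update by comparing the count before and after its copy. */
    void
    shm_publish(struct shm_time *shm, const struct timespec *refclock,
                const struct timespec *local)
    {
        shm->valid = 0;
        shm->count++;
        shm->clock_sec  = refclock->tv_sec;
        shm->clock_nsec = refclock->tv_nsec;
        shm->recv_sec   = local->tv_sec;
        shm->recv_nsec  = local->tv_nsec;
        shm->count++;
        shm->valid = 1;
    }

Note that a sketch like this contains no memory barriers, so on weakly ordered CPUs the stores may be observed out of order; that is exactly the kind of race-condition question the unit and stress tests should exercise.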
Finish the autogen tag translators
We use GNU AutoGen to handle our options processing and to produce manual pages in man and mdoc formats, and also in .texi, .info, and .html formats. The "source" format for these pages uses man, mdoc, or .texi tags (probably html as well), and AutoGen needs translation scripts to convert from one tag format to another. Right now, the mdoc2man and mdoc2texi scripts are fairly complete, but we need:
- all of the other combinations finished
- a test suite (this will actually be pretty easy to do)
Related Topics: Bug #2311
Create a web-based ntp.conf file generator
Different versions of NTP support different options in the configuration file. It would be good to have a web-based service that would ask the user for the version of NTP they are using, ask them some basic questions, and generate an ntp.conf file. The generated file should contain the version number of the script that generated it, and would ideally include a URL that would re-generate the configuration file with any newer version of the generation script.
On a related topic, it might be interesting to write a web-based ntp.conf analyzer that takes the version of NTP as a parameter along with an ntp.conf file, and produces a report about extraneous or useless lines, as well as a report of "missing" important items.
We had a successful GSoC 2015 project on this topic, and we learned a lot about what needs to happen to finish it. It should be possible to get a working initial prototype with one more GSoC session.
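As a hedged example, the generator might emit something like the following (the generator name, URL, servers, and paths are all illustrative):

    # Generated by ntpconf-web version 0.3 (hypothetical)
    # Re-generate: https://example.org/ntpconf?version=4.2.8&profile=client
    driftfile /var/lib/ntp/ntp.drift

    # Time sources suggested from the user's answers
    pool 0.pool.ntp.org iburst
    pool 1.pool.ntp.org iburst

    # Conservative default access restrictions
    restrict default kod limited nomodify notrap nopeer noquery
    restrict 127.0.0.1
    restrict ::1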
Upgrade NTP's monitoring mechanisms
Traditionally, ntpd has used ntpq (mode 6) and ntpdc (mode 7) UDP packets to monitor and control ntpd.
With the recent awareness by black-hats that mis-configured older ntpd instances can be used for significant reflection attacks, the monitoring capabilities of NTP should be re-examined.
This may include TCP options.
Related Topics: ConfigurationAndAuthorizationLevelsForNtpd
Audit libisc
Many years ago we imported libisc into the NTP codebase. It has been a while since anybody has compared what libisc looks like today with what we are using. We have made some bugfixes that may not have been "accepted" by libisc, and there are likely some changes in libisc that we don't know about. It would be useful to reconcile the two codebases, accepting improvements into NTP as needed and offering any of our patches back to ISC.
Implement a tzdist information distribution service and client package
Martin Burnicki suggests:
The server package would pull updates for time zone rules and the leap second table from a tzdist server and save the updates in formats expected by existing applications.
For example, it could update the TZ files used by glibc to convert UTC to local time, and it could generate a leap second file in NIST format which could be read by ntpd. Eventually it could even update the time zone rules in the Windows registry.
IMO this would be very helpful for providing systems with automatic updates, e.g. when countries such as Morocco or Egypt determine the date of DST changes only very late, which usually causes huge problems for maintainers of the IT infrastructure in those countries.
Terje Mathisen adds:
This seems like a very good idea. The tzdist client would be a small project, while the full server, including all the hacks needed to pull in legal information from all over the place, could be as large as you want it to be. You would need an option for manual input as well.
Personally I would probably make the client use a simple HTTPS request with parameters specifying the target area/country/GPS coordinates, and the return would be in JSON or (shudder) XML.
I would definitely make encryption & secure authentication the default, possibly with plain http as a configurable fallback for a local server.
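As a minimal sketch of such a client, assuming libcurl and an entirely hypothetical tzdist server URL (parsing the JSON response and writing out the TZ and leap second files is the actual project work):

    #include <curl/curl.h>
    #include <stdio.h>

    /* Write callback: append the HTTPS response body to a file. */
    static size_t
    write_body(void *buf, size_t size, size_t nmemb, void *userdata)
    {
        return fwrite(buf, size, nmemb, (FILE *)userdata);
    }

    int
    main(void)
    {
        /* Hypothetical tzdist service and query; a real client would build
         * this from configuration (area/country) and verify the TLS peer. */
        const char *url = "https://tzdist.example.org/zones/Europe%2FOslo";
        FILE *out = fopen("zone-update.json", "w");
        CURL *curl;
        CURLcode rc;

        if (out == NULL)
            return 1;
        curl_global_init(CURL_GLOBAL_DEFAULT);
        curl = curl_easy_init();
        if (curl == NULL) {
            fclose(out);
            return 1;
        }
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_body);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
        rc = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        fclose(out);
        return rc == CURLE_OK ? 0 : 1;
    }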
Implement a "configuration summary" display
When configure runs there is a lot of output, and it can be pretty hard to go through all of it and learn anything useful.
If we had a "summary page" of the configuration options, emitted after the configure run, it would help a lot of folks.
For example:
- whether or not OpenSSL was being used
- a complete list of which refclocks were being built, and which ones were not being built
- whether or not any given features were being enabled, and if not, why (missing libraries, missing headers)
- directory paths
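Concretely, the tail of the configure output might look something like this (contents entirely illustrative):

    NTP configuration summary
      OpenSSL:            yes (crypto enabled)
      Refclocks built:    NMEA SHM PPS
      Refclocks skipped:  ONCORE (no PPS headers found)
      Debugging:          no
      sysconfdir:         /etc
      bindir:             /usr/local/bin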
Other ideas
Comments
Under "Monitoring / Management front end", what about adding SNMP support for Windows?
-- DavidTaylor - 2012-03-17
Probably no longer interesting ideas:
System clock startup analysis
When NTP starts up there is value in quickly learning the offset to the correct time and the frequency adjustment that is needed to keep the clock synchronized. This can be thought of as y = mx + b, where y is the actual time, m is the needed frequency adjustment, x is the correct time, and b is the offset between the system time and the correct time.
There are at least two interesting aspects of this perspective.
One is that since the frequency adjustment is usually measured in "parts per million", there is something to be said for calculating y = (m + 1)x + b, so that the slope-correction value is "normalized" relative to 0 instead of relative to 1 and the double value of m provides the largest number of significant bits.
The other has to do with how we get the time - we've heard that if we are using a wired LAN we should be able to determine the offset and frequency adjustment in 30-45 seconds' time. It may take a minute or two to get this same level of accuracy using a wireless LAN.
One obvious way to get this is through a least-squares analysis of the time returned from some number of time servers, external or possibly local refclocks. A good starting point for this is https://www.dragonflybsd.org/cvsweb/src/usr.sbin/dntpd/ .
If one wanted to use a local GPS refclock for getting time signals, gpsd might be a good way to go.
One should be aware of the rate at which one can send packets to a remote ntpd, and one should study a range of "collection times" to see how well the offset and drift can be determined over a useful range of measurement periods.
Identity Scheme Configuration Tool
- TUI/GUI front end for Identity Scheme Deployment / Management
How is this project different from the "Autokey configuration wizard"?
Autokey configuration wizard
- Lead user through a series of comprehensible questions.
- Invoke ntp-keygen repeatedly to generate per-trusted-host, per-group, and per-client keysdir files
- Generate ntp.conf snippets for each host.
- Verify every supported scheme works as intended by testing all paths through wizard.
How is this project different from the "Identity Scheme Configuration Tool"?