A Citadel of Science in the Name of Galileo
17/01/2014 | 14:04

It will rise at the Vecchi Macelli. Laura Montanari writes about it in "La Repubblica".

In the year of the 450th anniversary of the great scientist's birth, Pisa is working to set up a veritable museum-park of science.

It will be called the Cittadella Galileiana and will be located in the Vecchi Macelli area, between piazza dei Miracoli and the Cittadella park. It will combine the existing Museo del Calcolo with a play-and-learning centre (ludoteca), a science park, and research laboratories.


Comments
12/09/2014 | 04:19
Osla wrote:
Hi Frederick,

I think the problem with 'proper knowledge of data and its quality' is that, with advances in data-sharing technology, combined with the diminishing federal role in monitoring and the expanding role of other government agencies and the private sector, data search and discovery will become increasingly fragmented. It would become onerous to have 'proper knowledge' of your data if, for any given project, there are several small data providers. We need to be prepared to exploit the information content of all datasets that are relevant to the problem without disinformation getting in the way.

The problem needs to be looked at as the sum of risks:
1. What is the risk of erroneous data contaminating information and leading to bad decisions?
2. What is the risk of good data being disregarded for fear it might be disinformative?
3. What is the risk of good data being hoarded or made inaccessible because the data provider fears inappropriate use of the data?
4. What is the risk of further erosion of federal networks because decision makers can't distinguish their quality and 'fitness for purpose' from that of lesser-quality networks?
5. What is the risk of 'survival of the cheapest' as technology deployed by agencies without adequate hydrometric training, standards, or quality assurance becomes available through the internet?

I don't know what these risks are, but they do scare me a bit. The question is: what can we do to reduce them?

10/09/2014 | 07:02
Omnya wrote:
Hi Gerald,

It is absolutely true that data providers have to work within the resource envelope given to them, hence decisions about fitness for purpose are abstracted to a budgetary process where the decision-makers are largely ignorant of the impacts of their decisions. Within this resource envelope further compromises have to be made: do I use the money to run more gauges at a lower quality, or fewer gauges at a higher quality?

There is a circular logic that stymies progress in the field of hydrology. We lack the predictive skill to fill data voids in the hydroscape, so we need more gauges; but by diluting our resources across more gauges (hence less technology and fewer site visits per gauge) we reduce our ability to improve our predictive skill.

We are entrenched in the notion that our data only need to be as good as they used to be. I would argue that they need to be much better. The continuity equation, inputs equal outputs plus change in storage (Q_i = Q_o + dS/dt), is well known and is the basis for almost everything that we know, or think we know, about hydrology. Arguably, we have learned all there is to know from traditional data with unknown uncertainty. Almost any hydrological model can explain 80% of the variability in outputs based on the inputs. Resolving the last 20% of the variability will require much better data and metadata than are currently available. This means that the research community needs to learn how to collect good data, and the hydrometric community needs to learn how to communicate the limitations of their data for precise work.

The only way forward that I can think of is to keep the conversation going.

Stu
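[Editor's note] For readers less used to the shorthand, the relation quoted above is simply the textbook continuity (water-balance) statement for a control volume such as a catchment or reach. Written out in standard notation (a sketch, nothing specific to the commenter's data):

```latex
% Water balance (continuity) for a control volume: inflow equals outflow
% plus the rate of change of storage, or, integrated over an interval,
% the change in storage equals the accumulated difference of the fluxes.
\[
  Q_i(t) = Q_o(t) + \frac{dS}{dt}
  \qquad\Longleftrightarrow\qquad
  \Delta S = \int_{t_1}^{t_2} \bigl( Q_i - Q_o \bigr)\, dt ,
\]
% where $Q_i$ is inflow, $Q_o$ is outflow, and $S$ is storage.
```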

09/09/2014 | 14:29
Evans wrote:
Hi Ferdinand,

Somehow, we need to find a way to move beyond where we are, to a place where the information in the data can be fully exploited without contamination from the disinformation that is inherent in all data. To follow up on your reference to the USGS data standards: I fully trust USGS data for the 90% use case; it is the 10% use case that is troublesome. At some scale the data are highly reliable, yet within every dataset there is some scale at which the data are unreliable. Used for flood frequency analysis, no problem (mostly); but to infer that a given peak is different from another given peak because of process hydrology, how would you disentangle that from the effect of modification and updating of rating curves?

I may be wrong, but I think it is up to people like you and me to find some way of explaining hydrometry to the hydrology community. If we are successful in that, then maybe formal training in the principles and practices of hydrometry will become a mandatory requirement for accreditation as a hydrologist. The problem is finding a soap box to stand on to get our message across. I have tried publishing commentaries and articles in hydrological journals (e.g. the citations below) without too much success. I sometimes wonder if people read what I write as if I am saying that hydrometric data are untrustworthy. Quite the opposite: almost every data provider I have ever met has an almost religious zeal to achieve high quality. The only bad thing is ignorant people taking that data and using it inappropriately.

Let's keep the conversation going until we come up with a plan for how to solve the problem.

Hamilton, A.S. and R.D. Moore. 2012. "Quantifying uncertainty in hydrometric records." Canadian Water Resources Journal, 37(1): 1-19.
Hamilton, S. 2008. "Sources of uncertainty in Canadian low-flow hydrometric data." Canadian Water Resources Journal, 33(2): 125-136.
Hamilton, S. 2007. "Invited Commentary: Completing the loop from data to decisions and back to data." Hydrological Processes, 21: 3105-3106. DOI: 10.1002/hyp.6860

Stu
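[Editor's note] To make the rating-curve point concrete, here is a toy sketch, not taken from the cited papers: the power-law form Q = C*(h - h0)**b is a common convention, and every coefficient below is invented. It shows how the same recorded peak stage yields different published discharges under two versions of a rating curve, so an apparent difference between peaks can be an artefact of re-rating rather than of process hydrology.

```python
# Toy sketch: the same recorded stage converted to discharge with two
# hypothetical versions of a power-law rating curve, e.g. before and after a
# post-flood re-rating. All coefficients are invented for illustration.

def rating_discharge(stage_m, C, h0, b):
    """Convert stage (m) to discharge (m^3/s) with Q = C * (stage - h0)**b."""
    return C * (stage_m - h0) ** b

peak_stage = 4.20  # the same annual-peak stage reading (m) in both versions

# Hypothetical curve in force when the peak was first published
q_old = rating_discharge(peak_stage, C=18.0, h0=0.35, b=1.60)

# Hypothetical curve after the control section shifted and was re-rated
q_new = rating_discharge(peak_stage, C=16.5, h0=0.30, b=1.70)

print(f"Peak discharge with old rating: {q_old:6.1f} m^3/s")
print(f"Peak discharge with new rating: {q_new:6.1f} m^3/s")
print(f"Apparent change from re-rating alone: {100 * (q_new - q_old) / q_old:+.1f} %")
```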

09/09/2014 | 12:00
Marco wrote:
Hi Bob,

The topic of streamflow reconstruction for gap filling is an important issue. Data gaps are inevitable and, arguably, the hydrographer responsible for the dataset is the one best informed to do the streamflow estimation. However, there is relatively little rigour in the process of transferring information from nearby gauges or climate stations to produce these estimates, compared with primary data production processes. Metadata indicating that a gap has been filled may be available, but there is almost never enough information from which an end-user can judge the quality of the estimates.

On the topic of uncertainty, one of the things I would challenge is the notion that an estimate of aggregate uncertainty (e.g. 5%) is even useful. It may be that most of the time the data have low error, some of the time high error, and occasionally very high error. I would argue that it is almost irrelevant if all of that averages out to an aggregate of plus or minus 5%. Frequently it is the extreme data that have the most influence on decision making, and it is precisely these data that are most likely to have very high error.

I also think that, as well as uncertainty being asymmetrical among data values, it can be asymmetrical within data values. With environmental data it is sometimes 'less wrong' to leave a bias uncorrected than to correct for a bias which is inadequately defined. An example might be a systematic backwater effect that is marginally smaller than the magnitude that would trigger a correction under the protocols for the use of shift corrections.

There is obviously a lot of work ahead in developing methods for communicating hydrometric uncertainty in a way that unambiguously leads to better information and decision-making. I like that you are thinking of ways to assist people doing analysis of hydrometric data, and I would like to hear your ideas for identifying 'fitness for purpose'. Data that are good enough to trigger a warning may not be good enough to detect a climate-change signal with high confidence. How can someone without good knowledge of the data provider tell the difference?
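[Editor's note] A small simulation can make the point about aggregate uncertainty concrete. The sketch below is purely hypothetical: synthetic flows and an invented error model in which relative error grows with discharge. Under those assumptions it shows how a record whose errors average out to a few percent overall can still be badly wrong on exactly the extreme values that drive decisions.

```python
# Purely hypothetical simulation: synthetic flows with an assumed error model
# in which relative error grows with discharge. Not a statement about any
# real gauge or dataset.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "true" daily discharge: mostly low flows, occasional large events
true_q = rng.lognormal(mean=2.0, sigma=0.9, size=3650)

# Assumed relative error: roughly 3% at low flow, growing toward ~25% for the
# largest events
rel_sigma = 0.03 + 0.22 * (true_q / true_q.max())
observed_q = true_q * (1.0 + rng.normal(0.0, rel_sigma))

rel_err = np.abs(observed_q - true_q) / true_q
extreme = true_q >= np.quantile(true_q, 0.99)  # the largest 1% of flows

print(f"Mean absolute relative error, all days: {100 * rel_err.mean():.1f} %")
print(f"Mean absolute relative error, top 1%:   {100 * rel_err[extreme].mean():.1f} %")
```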

07/09/2014 | 18:14
Nedzad wrote:
I really enjoyed browsing through the correspondence related to the Plan for the next decade, and thought I would drop in my two pence worth. I would like to strongly support the point made by Salvatore Grimaldi, which is also echoed in some other postings (though I am still not sure it is getting the required attention): let's take a break from endlessly trying to improve models with the very limited observed data, and focus the next decade more on improving the availability of observed data. The problem of observations at different scales, for various hydrological processes, will always be a limiting factor in our ability to understand hydrological systems if it is not resolved. We all know this. Almost every paper published in hydrology and water resources explicitly or implicitly blames the lack of data. The first Hydrological Decade, from the mid-1960s, focused on data (off the top of my head; correct me if I am wrong), and good progress was achieved. But later the effort became diffuse, and it is no surprise that the situation globally has only deteriorated, particularly in the last two decades. We are still guessing, not assessing, water resources, as John Rodda once (in 1995) put it. There is no doubt in my mind that the last PUB decade produced a lot of superb tools for prediction in ungauged basins. But shouldn't we also be trying to solve the problem of actually reducing the number of such basins?

Some think this is not a scientific but rather a political and funding problem. That is partially true, but research showing the economic value of water data, improved design of monitoring networks to capture increasing variability, and so on (i.e. a changing world) can and should influence politics and funding. There is a great need to continue improving ground observational networks, but an alternative research focus should be remote data acquisition techniques. There is plenty of room for improvement in those, no doubt, but there is good progress too. I hear the counter-argument that we will never be able to measure fluxes, storages and flows by remote techniques as reliably as by ground-based methods. Well, the future will show. But dismissing the potential of remote methods to become an alternative to ground-based ones, because the remote ones have issues today, sounds like a dead end. Funnily enough, we are so much inside our "box" sometimes that we forget that, for example, long-term discharge series at a flow gauge are not, strictly speaking, measured. They are calculated from a rating curve that comes from only a few concurrent stage and discharge measurements. So it is another model, and yet we treat these data as "observed" (see the sketch after this comment).

Hydrology and Change is a good focus, don't get me wrong. All the research questions that have been formulated are pertinent and relevant. But it would be good to emphasise the observed-data aspect as much as possible and to redirect much more of the effort of the global hydrological community, in this changing world, towards solving the acute data problem rather than dissolving it.

One more concern comes to mind, a different one: the issue of communicating PUB science. I think this is another major bottleneck in hydrology at present. How many of the fine PUB tools and ideas have actually become known, let alone implemented, in the developing world, where most of the water problems are? I see a lot of individuals and groups modelling and remodelling the same basins again and again with, very often, just the same one or two common models, without paying much attention to recent scientific developments.

And finally, it would be good to think of some measurable, scientifically and socially relevant targets for the next decade, something like MDGs in hydrology and water resources. Every really good plan needs some.
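[Editor's note] Picking up the remark above that long discharge records are modelled rather than measured, here is a minimal sketch under stated assumptions: a power-law rating curve Q = a*(h - h0)**b (a common convention; the gaugings are invented and scipy's curve_fit stands in for whatever procedure an agency actually uses) is fitted to five stage/discharge measurements and then applied to a stage record, including a stage higher than anything ever gauged.

```python
# Minimal sketch: a power-law rating curve fitted to a handful of invented
# stage/discharge gaugings, then applied to a stage record. The "observed"
# discharge series is the output of this model, not a direct measurement.
import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, h0, b):
    """Power-law rating curve Q = a * (h - h0)**b, with h in m and Q in m^3/s."""
    return a * np.maximum(h - h0, 1e-6) ** b

# Five hypothetical field gaugings: stage (m) and measured discharge (m^3/s)
h_gauged = np.array([0.8, 1.1, 1.6, 2.3, 3.1])
q_gauged = np.array([4.3, 9.1, 20.0, 41.0, 71.0])

params, _ = curve_fit(rating, h_gauged, q_gauged, p0=[10.0, 0.3, 1.5],
                      bounds=([0.1, 0.0, 0.5], [100.0, 0.75, 3.0]))
a, h0, b = params

# The published series comes from applying the fitted curve to every stage
# reading, including a 4.0 m stage well above anything that was ever gauged.
stage_record = np.array([0.9, 1.4, 2.0, 2.9, 4.0])
q_published = rating(stage_record, a, h0, b)

print(f"Fitted rating: Q = {a:.2f} * (h - {h0:.2f})**{b:.2f}")
print("Published discharges (m^3/s):", np.round(q_published, 1))
```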