Archive | December 2014

Downloads of the Year 2014

The top 10 Windows and Mac downloads for 2014.

via CNET Blogs http://ift.tt/1rBy8mL

New Language from MIT Streamlines Building SQL-Backed Web Applications

There are countless developers and administrators who are creating and deploying online applications backed by SQL databases.

The problem is that creating and deploying them is not the easiest nut to crack, because it means marrying HTML, JavaScript, SQL, and server-side code into one coherent application.

That’s exactly the problem that Adam Chlipala, an Assistant Professor of Electrical Engineering and Computer Science at MIT, is trying to solve with Ur/Web, a domain-specific functional programming language for modern Web applications. The language encapsulates many key components needed for robust applications into just one language, and can help ensure the security of the applications. 

According to Chlipala:

"My research applies formal logic to improve the software development process. I spend a lot of time proving programs correct with the Coq computer proof assistant, with a focus on reducing the human cost of program verification so that we can imagine that it could one day become a standard part of software development (at least for systems software). I’m also interested in the design and implementation of programming languages, especially functional or otherwise declarative languages, especially when expressive type systems (particularly dependent type systems) are involved. I usually stick to very low-level or very high-level languages; I believe that most ‘general-purpose languages’ of today fail to hit the mark by being, for any particular software project, either too low-level or too high-level."

Chlipala also emphasizes that Ur/Web is not only a research prototype. It has a growing programmer community and some commercial application development underway. As an explanatory page notes:

"Ur/Web supports construction of dynamic web applications backed by SQL databases. The signature of the standard library is such that well-typed Ur/Web programs 'don't go wrong' in a very broad sense. Not only do they not crash during particular page generations, but they also may not:

• Suffer from any kinds of code-injection attacks
• Return invalid HTML
• Contain dead intra-application links
• Have mismatches between HTML forms and the fields expected by their handlers
• Attempt invalid SQL queries
• Use improper marshaling or unmarshaling in communication with SQL databases or between browsers and web servers

"This type safety is just the foundation of the Ur/Web methodology. It is also possible to use metaprogramming to build significant application pieces by analysis of type structure. For instance, the demo includes an ML-style functor for building an admin interface for an arbitrary SQL table. The type system guarantees that the admin interface sub-application that comes out will always be free of the above-listed bugs, no matter which well-typed table description is given as input."

"The Ur/Web compiler also produces very efficient object code that does not use garbage collection. These compiled programs will often be even more efficient than what most programmers would bother to write in C. For example, the standalone web server generated for the demo uses less RAM than the bash shell. The compiler also generates JavaScript versions of client-side code, with no need to write those parts of applications in a different language."

"The implementation of all this is open source."
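To see what the first of those guarantees buys you, consider the classic bug it rules out. Ur/Web's type system makes SQL injection impossible at compile time; in a dynamic language, the equivalent discipline must be applied by hand. This Python sketch (not Ur/Web code, just an illustration of the bug class) contrasts string-built queries with parameterized ones:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # String interpolation: attacker-controlled input becomes SQL syntax.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: input is always treated as data, never syntax.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks every row: [('admin',)]
print(find_user_safe(payload))    # returns nothing: []
```

Ur/Web's point is that the unsafe variant simply cannot be written in a well-typed program, so the whole category of bug disappears rather than depending on programmer vigilance.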

You can get the latest distribution of Ur/Web here.



via OStatic blogs http://ift.tt/1rA8oXU

Apache Marks Year’s End By Graduating Two Big Data Projects

As this year draws to a close, it's worth taking note of two important projects from the Apache Software Foundation (ASF) that have graduated to top-tier project status, which ensures them ongoing development resources and oversight. Apache MetaModel went from the Apache Incubator to become a Top Level Project. It provides a model for interacting with data based on metadata, and developers can use it to go beyond physical data layers to work with almost any form of data.

Meanwhile, we’ve also covered the news of Apache Drill graduating to Top Level Project status. Drill is billed as the world’s first schema-free SQL query engine that delivers real-time insights by removing the constraint of building and maintaining schemas before data can be analyzed.

We ran an interview with Tomer Shiran (shown above), a member of the Drill Project Management Committee, to get his thoughts. He said:

"Analysts and developers can use Drill to interactively explore data in Hadoop and other NoSQL databases, such as HBase and MongoDB. There’s no need to explicitly define and maintain schemas, as Drill can automatically leverage the structure that’s embedded in the data."

"This enables self-service data exploration, which is not possible with traditional data warehouses or SQL-on-Hadoop solutions like Hive and Impala, in which DBAs must manage schemas and transform the data before it can be analyzed."

"Drill is the ideal interactive SQL engine for Hadoop. One of the main reasons organizations choose Hadoop is due to its flexibility and agility. Unlike traditional databases, getting data into Hadoop is easy, and users can load data in any shape or size on their own. Early attempts at SQL on Hadoop (eg, Hive, Impala) force schemas to be created and maintained even for self-describing data like JSON, Parquet and HBase tables."

"These systems also require data to be transformed before it can be queried. Drill is the only SQL engine for Hadoop that doesn’t force schemas to be defined before data can be queried."
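The key idea behind "no schemas defined before data can be queried" is that the engine discovers structure from the records themselves. This toy Python sketch (not Drill's implementation) infers a per-field type union from heterogeneous JSON records, the kind of discovery a schema-free engine must do before it can answer SQL over them:

```python
import json

def infer_schema(records):
    """Build a field -> sorted list of observed type names mapping."""
    schema = {}
    for rec in records:
        for field, value in rec.items():
            schema.setdefault(field, set()).add(type(value).__name__)
    return {f: sorted(t) for f, t in schema.items()}

rows = [json.loads(line) for line in [
    '{"name": "alice", "age": 34}',
    '{"name": "bob", "city": "nyc"}',      # new field appears mid-stream
    '{"name": "eve", "age": "unknown"}',   # same field, different type
]]
print(infer_schema(rows))
# {'name': ['str'], 'age': ['int', 'str'], 'city': ['str']}
```

Note how new fields and conflicting types are simply absorbed as they appear; a DBA-managed warehouse schema would instead require a migration or an ETL step before either record could be loaded.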

According to eWeek, regarding MetaModel:

"Apache MetaModel is a data access framework that provides a common interface for the discovery, exploration, and querying of different types of data sources. Unlike traditional mapping frameworks, MetaModel emphasizes metadata of the data source itself and the ability to add more data sources at runtime. MetaModel's schema model and SQL-like query API is applicable to databases, CSV files, Excel spreadsheets, NoSQL databases, cloud-based business applications, and even regular Java objects. This level of abstraction makes MetaModel great for dynamic data processing applications, less so for applications modeled strictly around a particular domain, ASF officials said."

"MetaModel enables you to consolidate code and consolidate data a lot quicker than any other library out there," said Kasper Sorensen, vice president of Apache MetaModel, in a statement. "In these 'big data days' there's a lot of focus on performance and scalability, and surely these topics also surround Apache MetaModel. The big data challenge is not always about massive loads of data, but instead massive variation and feeding a lot of different sources into a single application. Now to make such an application you both need a lot of connectivity capabilities and a lot of modeling flexibility. Those are the two aspects where Apache MetaModel shines. We make it possible for you to build applications that retain the complexity of your data – even if that complexity may change over time. The trick to achieve this is to model on the metadata and not on your assumptions."
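The "common interface over many source types" idea is easy to sketch. The Python below uses hypothetical class names (MetaModel itself is a Java library with a different API) to show the shape of the abstraction: each backend exposes its own metadata, and one query method works against all of them:

```python
import csv
import io

class TableSource:
    """Common query surface; backends supply metadata and rows."""
    def columns(self):
        raise NotImplementedError
    def rows(self):
        raise NotImplementedError
    def select(self, column, where=lambda row: True):
        idx = self.columns().index(column)
        return [row[idx] for row in self.rows() if where(row)]

class CsvSource(TableSource):
    def __init__(self, text):
        data = list(csv.reader(io.StringIO(text)))
        self._cols, self._rows = data[0], data[1:]
    def columns(self):
        return self._cols   # metadata discovered from the header row
    def rows(self):
        return self._rows

class ListSource(TableSource):
    def __init__(self, cols, rows):
        self._cols, self._rows = cols, rows
    def columns(self):
        return self._cols
    def rows(self):
        return self._rows

# The same select() call works against either backend.
csv_src = CsvSource("name,lang\nDrill,Java\nMetaModel,Java")
mem_src = ListSource(["name", "lang"], [["Ur/Web", "ML-like"]])
print(csv_src.select("name"))  # ['Drill', 'MetaModel']
print(mem_src.select("name"))  # ['Ur/Web']
```

Because queries are written against the metadata (column names) rather than a hard-coded domain model, a new source type only has to describe itself; callers don't change, which is the flexibility Sorensen describes.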

On the topic of what graduation to Top Level Project status means at Apache, Tomer Shiran said:

"Graduation is a decision made by the Apache Software Foundation (ASF) board, and it provides confidence to potential users and contributors that the project has a strong foundation. From a governance standpoint, a top-level project has its own board (also known as PMC). The PMC Chair (Jacques Nadeau) is a VP at Apache."


via OStatic blogs http://ift.tt/1zQR0ko

Introduction To Python *args and **kwargs For Beginners – Part 2

In the first part I explained the purpose of *args in the Python programming language through some simple practical examples. In this second part I will give you some other examples, but with…
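For readers who don't click through to the full series, here is a compact recap of both forms the title mentions. This is a generic illustration, not code from the linked post:

```python
def describe(*args, **kwargs):
    # *args collects extra positional arguments into a tuple;
    # **kwargs collects extra keyword arguments into a dict.
    return f"args={args}, kwargs={kwargs}"

print(describe(1, 2, color="red"))
# args=(1, 2), kwargs={'color': 'red'}

# The same stars also unpack on the calling side:
def area(width, height):
    return width * height

dims = {"width": 3, "height": 4}
print(area(**dims))  # 12
```

The collecting form lets a function accept any signature (useful for wrappers and decorators), while the unpacking form lets you build argument lists as data and pass them along.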

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]

via Unixmen http://feedproxy.google.com/~r/unixmenhowtos/~3/ZB_WlcJuzOc/

Pear Returning, In the Movies, and More Highlights

Today in Linux news, Softpedia.com is reporting that Pear OS is showing signs of a comeback. In other news, Debian is spotted in a new movie, and Phil Shapiro shares a cheap laptop story. We have 2014 highlights on Ubuntu, GNOME, and FOSS in general, as well as Jack Wallen's wishes for the new year.

Linux was spotted on the big screen again, this time in the sci-fi thriller Lucy. Softpedia.com is reporting that you can clearly see a Linux distribution running Xfce in an important scene in the movie. They say it looks most like Debian, and they've posted a video with those seconds of interest.

Speaking of Softpedia.com, today they also relayed the rumor that Pear OS may be making a comeback. Pear OS was a Linux distribution that looked disturbingly like Mac OS X and disappeared about a year ago. Well, someone has spotted a new screenshot that seems to tease a new Pear OS release. On this, Softpedia said, "From what little it can be discerned from the image, it could just be the real deal. The quality of the desktop matches what we would expect from Pear OS, but all those watermarks are strange."

OMG!Ubuntu! today looked back at the year in Ubuntu with the big developments each month. Christine Hall at Foss Force looks at the five biggest FOSS stories of the year. Systemd and Devuan made her list. The GNOMEs posted their highlights of the year including the releases of 3.12 and 3.14. And finally, Jack Wallen shares his wish list for the new year including hopes that the Ubuntu Phone actually gets released.


via OStatic blogs http://ostatic.com/blog/pear-returning-in-the-movies-and-more-highlights


Docker Reigned in 2014, But Competition is Coming

Container technology was without a doubt one of the biggest stories of 2014, and if you mention the container arena to most people, Docker is what they think of. As impressive as Docker is, as recently as June of last year, OStatic highlighted some of its instabilities.

As 2014 ends, we are about to see the container space get a whole lot more complicated and competitive. Some big fish are swimming right next to Docker. Google has set its sights squarely on Docker by transforming its Kubernetes platform into a full-fledged part of Google Cloud Platform with Google Container Engine. Meanwhile, Canonical is leaping into the virtualization arena with a new hypervisor called LXD that uses the same Linux container tools that have allowed Docker to isolate instances from one another. And I've reported on how Joyent has announced that it is open sourcing its core technology, which can compete with OpenStack and other cloud offerings and facilitates efficient use of container technologies like Docker.

A few months ago, I covered the news that Google had released Kubernetes, essentially an open-source version of its internal Borg system, designed to harness computing power from data centers into a powerful virtual machine. It can make a difference for many cloud computing deployments, and it optimizes usage of container technology. You can find the source code for Kubernetes on GitHub.

Following my initial report, news arrived that some very big contributors to the Kubernetes project, including IBM, Microsoft, Red Hat, Docker, CoreOS, Mesosphere, and SaltStack, are working in tandem on open source tools and container technologies that can run on multiple computers and networks. Now, Google has transformed Kubernetes into a full-fledged part of Google Cloud Platform with Google Container Engine.

In a blog post, Brian Stevens, VP of Product Management, characterizes Google Container Engine as a managed way to move from individual virtual machines to container-based applications:

"Google Container Engine lets you move from managing application components running on individual virtual machines to launching portable Docker containers that are scheduled into a managed compute cluster for you. Create and wire together container-based services, and gain common capabilities like logging, monitoring and health management with no additional effort. Based on the open source Kubernetes project and running on Google Compute Engine VMs, Container Engine is an optimized and efficient way to build your container-based applications."
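The phrase "scheduled into a managed compute cluster for you" describes Kubernetes' core mechanism: a reconciliation loop that compares desired state against observed state and corrects the drift. This is an illustrative Python sketch of that control-loop idea, not Kubernetes code (which is written in Go, with hypothetical pod names here):

```python
def reconcile(desired_replicas, running):
    """Return the actions that move the observed set toward the desired count."""
    actions = []
    running = list(running)  # don't mutate the caller's view of the cluster
    while len(running) < desired_replicas:
        name = f"pod-{len(running)}"
        actions.append(("start", name))   # schedule a replacement container
        running.append(name)
    while len(running) > desired_replicas:
        actions.append(("stop", running.pop()))  # scale down surplus containers
    return actions

print(reconcile(3, ["pod-0"]))           # [('start', 'pod-1'), ('start', 'pod-2')]
print(reconcile(1, ["pod-0", "pod-1"]))  # [('stop', 'pod-1')]
```

Running such a loop continuously is what gives users the "no additional effort" health management Stevens mentions: when a container dies, the observed count drops below the desired count and the system restarts it without operator intervention.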

While Google is a big fish, lots of people are talking about Canonical's LXD project as well. As noted by Silicon Angle:

"Canonical Ltd. dropped a bombshell last week after revealing that its following fellow operating system vendors Red Hat Inc. and Microsoft Corp. into the virtualization market with a new hypervisor that promises to deliver the same experience as the competition faster and more efficiently. Dubbed LXD, the software relies on the same Linux containerization feature that provided the foundation for Docker to isolate instances from one another but adds integration with popular security utilities along with management and monitoring functionality."

Canonical has also recently launched a new "snappy" version of Ubuntu Core. This minimalist take on Ubuntu can especially serve Docker deployments and platform-as-a-service environments.

Also on the Linux competition front, we reported on how the CoreOS team is developing a Docker competitor dubbed Rocket. Rocket is a new container runtime designed for composability, security, and speed, according to the CoreOS team. The group has released a prototype version on GitHub to begin getting community feedback.

According to a post on Rocket:

“When Docker was first introduced to us in early 2013, the idea of a “standard container” was striking and immediately attractive: a simple component, a composable unit, that could be used in a variety of systems. The Docker repository included a manifesto of what a standard container should be. This was a rally cry to the industry, and we quickly followed. We thought Docker would become a simple unit that we can all agree on.”

“Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server. The standard container manifesto was removed. We should stop talking about Docker containers, and start talking about the Docker Platform.”

“We still believe in the original premise of containers that Docker introduced, so we are doing something about it. Rocket is a command line tool, rkt, for running App Containers. An ‘App Container’ is the specification of an image format, container runtime, and a discovery mechanism.”

Joyent has also announced two new open source initiatives and the general availability of a container service in the Joyent Public Cloud to accelerate the adoption of application containers in the enterprise. Docker application containers are grabbing headlines everywhere and overhauling how data centers operate. Joyent maintains, though, that there remain limitations in the areas of security, virtual networking, and persistence that present challenges for enterprises looking to deploy Docker in support of production applications. The open source initiatives Joyent is announcing, Linux Branded Zones (LXz) and the extension of Docker Engine to SmartDataCenter, are targeted to "deliver proven, multi-tenant security and bare metal performance to Linux applications running in Docker application containers."

Joyent maintains that with LXz, you can run Linux applications, including those running in Docker Containers, natively on secure OS virtualization without an intervening hardware hypervisor layer.

"Running Docker containers on legacy hardware hypervisor hosts, like VMware or Amazon EC2, means you give up the workload density and performance benefits associated with infrastructure containers," said Bill Fine, VP Products, Joyent. "LXz and Docker Engine for SmartDataCenter provide an infrastructure container runtime environment capable of delivering secure, bare metal performance to Docker-based applications in a multi-tenant environment." 

Docker containers will remain a big story in 2015, but Docker will also have to deal with competition. Many major public and private cloud providers advise enterprises to run Docker containers on top of legacy hardware hypervisors because of security concerns related to the default Linux infrastructure containers. They will look closely at technology that competes with Docker, and that will be a story to watch in 2015.


via OStatic blogs http://ostatic.com/blog/docker-reigned-in-2014-but-competition-is-coming