
Tech News - Servers

57 Articles

Unity Editor will now officially support Linux

Vincy Davis
31 May 2019
2 min read
Yesterday, Martin Best, Senior Technical Product Manager at Unity, announced that the Unity Editor will now officially support Linux. The Editor is currently available only in preview for Ubuntu and CentOS, but Best stated that it will be fully supported as of Unity 2019.3. Unity also notes that before opening projects in the Linux Editor, you should make sure any third-party tools you rely on support it as well.

Unity has offered an unofficial, experimental Unity Editor for Linux since 2015. With the 2019.1 release in April this year, the Linux editor moved from experimental status into preview mode; now the status has been made official. Best writes in the blog post, “growing number of developers using the experimental version, combined with the increasing demand of Unity users in the Film and Automotive, Transportation, and Manufacturing (ATM) industries means that we now plan to officially support the Unity Editor for Linux.”

The Unity Editor for Linux will be accessible to all Personal (free), Plus, and Pro license users, starting with Unity 2019.1. It will be officially supported on the following configurations:

• Ubuntu 16.04 and 18.04
• CentOS 7
• x86-64 architecture
• Gnome desktop environment running on top of the X11 windowing system
• Nvidia official proprietary graphics driver and AMD Mesa graphics driver
• Desktop form factors, running on device/hardware without emulation or compatibility layer

Users are quite happy that the Unity Editor will now officially support Linux. A user on Reddit comments, “Better late than never.” Another user added, “Great news! I just used the editor recently. The older versions were quite buggy but the latest release feels totally on par with Windows. Excellent work Unity Linux team!”

https://twitter.com/FourthWoods/status/1134196011235237888
https://twitter.com/limatangoalpha/status/1134159970973470720

For the latest builds, check out the Unity Hub.
To give feedback on the Unity Editor for Linux, head over to the Unity Forum page.

Obstacle Tower Environment 2.0: Unity announces Round 2 of its ‘Obstacle Tower Challenge’ to test AI game players
Unity has launched the ‘Obstacle Tower Challenge’ to test AI game players
Unity updates its TOS, developers can now use any third party service that integrate into Unity


Bash 5.0 is here with new features and improvements

Natasha Mathur
08 Jan 2019
2 min read
The GNU project made version 5.0 of its popular POSIX shell Bash (Bourne Again Shell) available yesterday. Bash 5.0 introduces new features and improvements such as BASH_ARGV0, EPOCHSECONDS, and EPOCHREALTIME, among others.

Bash was first released in 1989 and was created for the GNU project as a replacement for the Bourne shell. It is capable of performing functions such as interactive command line editing and job control on architectures that support it, and it is a complete implementation of the IEEE POSIX shell and tools specification.

Key updates

New features

• Bash 5.0 adds an EPOCHSECONDS variable, which expands to the time in seconds since the Unix Epoch.
• A related new variable, EPOCHREALTIME, also expands to the number of seconds since the Unix Epoch, the only difference being that it is a floating-point value with microsecond granularity.
• BASH_ARGV0 is a new variable that expands to $0 and sets $0 on assignment.
• A newly defined config-top.h allows the shell to use a static value for $PATH.
• A new shell option can enable and disable sending history to syslog at runtime.

Other changes

• The `globasciiranges' option is now enabled by default and can be set to off by default at configuration time.
• POSIX mode is now capable of enabling the `shift_verbose' option.
• The `history' builtin can now delete ranges of history entries using `-d start-end'.
• A change that caused strings containing backslashes to be flagged as glob patterns has been reverted.

For complete information on Bash 5.0, check out its official release notes.

GNU ed 1.15 released!
GNU Bison 3.2 got rolled out
GNU Guile 2.9.1 beta released with JIT native code generation to speed up all Guile programs
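If you have Bash 5.0 installed, the new variables are easy to see in action. The sketch below is guarded so it also runs harmlessly on older shells, where the variables are simply unset; the printed values will of course differ on your machine.

```shell
# Integer seconds since the Unix Epoch (new in Bash 5.0):
echo "EPOCHSECONDS:  ${EPOCHSECONDS:-unset}"

# Same clock, but floating point with microsecond granularity:
echo "EPOCHREALTIME: ${EPOCHREALTIME:-unset}"

# BASH_ARGV0 expands to $0, and assigning to it changes $0:
echo "\$0 before: $0"
BASH_ARGV0="renamed-shell"
echo "\$0 after:  $0"
```

In an interactive 5.0 shell you can also try the new history range deletion, e.g. `history -d 100-110` to drop entries 100 through 110.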


Why did Slack suffer an outage on Friday?

Fatema Patrawala
01 Jul 2019
4 min read
On Friday, Slack, an instant messaging platform for workspaces, confirmed news of a global outage. Millions of users reported disruption in services due to the outage, which occurred early Friday afternoon. Slack experienced a performance degradation issue impacting users from all over the world, with multiple services being down. Yesterday the Slack team posted a detailed incident summary report of the service restoration. The Slack status page read:

“On June 28, 2019 at 4:30 a.m. PDT some of our servers became unavailable, causing degraded performance in our job processing system. This resulted in delays or errors with features such as notifications, unfurls, and message posting. At 1:05 p.m. PDT, a separate issue increased server load and dropped a large number of user connections. Reconnection attempts further increased the server load, slowing down customer reconnection. Server capacity was freed up eventually, enabling all customers to reconnect by 1:36 p.m. PDT. Full service restoration was completed by 7:20 p.m. PDT. During this period, customers faced delays or failure with a number of features including file uploads, notifications, search indexing, link unfurls, and reminders. Now that service has been restored, the response team is continuing their investigation and working to calculate service interruption time as soon as possible. We’re also working on preventive measures to ensure that this doesn’t happen again in the future. If you’re still running into any issues, please reach out to us at [email protected].”

https://twitter.com/SlackStatus/status/1145541218044121089

These were the services affected by the outage:

• Notifications
• Calls
• Connections
• Search
• Messaging
• Apps/Integrations/APIs
• Link Previews
• Workspace/Org Administration
• Posts/Files

Timeline of Friday’s Slack outage

According to user reports, some Slack messages were not delivered, with users receiving an error message. On Friday, at 2:54 PM GMT+3, Slack’s status page gave the first sign of the issue: “Some people may be having an issue with Slack. We’re currently investigating and will have more information shortly. Thank you for your patience.”

https://twitter.com/SlackStatus/status/1144577107759996928

According to Down Detector, Slack users noted that message editing also appeared to be impacted by the latest bug. Comments indicated it was down around the world, including in Sweden, Russia, Argentina, Italy, the Czech Republic, Ukraine, and Croatia. The Slack team continued to give updates on the issue, and on Friday evening they reported that services were getting back to normal.

https://twitter.com/SlackStatus/status/1144806594435117056

The news gained much attention on Twitter, with many joking that Slack was simply getting ready for the weekend.

https://twitter.com/RobertCastley/status/1144575285980999682
https://twitter.com/Octane/status/1144575950815932422
https://twitter.com/woutlaban/status/1144577117788790785

Users on Hacker News compared Slack with other messaging platforms like Mattermost, Zulip, and Rocket.Chat. One of the comments read, “Just yesterday I was musing that if I were King of the (World|Company) I'd want an open-source Slack-alike that I could just drop into the Cloud of my choice and operate entirely within my private network, subject to my own access control just like other internal services, and with full access to all message histories in whatever database-like thing it uses in its Cloud. Sure, I'd still have a SPOF but it's game over anyway if my Cloud goes dark. Is there such a project, and if so does it have any traction in the real world?”

To this, another user responded, “We use this at my company - perfectly reasonable UI, don't know about the APIs/integrations, which I assume are way behind Slack…” Another user also responded, “Zulip, Rocket.Chat, and Mattermost are probably the best options.”

Slack stocks surges 49% on the first trading day on the NYSE after direct public offering
Dropbox gets a major overhaul with updated desktop app, new Slack and Zoom integration
Slack launches Enterprise Key Management (EKM) to provide complete control over encryption keys


Microsoft releases ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool

Savia Lobo
05 Nov 2018
2 min read
Microsoft developer David Fowler revealed ‘ProcDump for Linux’, a Linux version of the ProcDump Sysinternals tool, over the weekend on November 3.

ProcDump is a Linux reimagining of the classic ProcDump tool from the Sysinternals suite of tools for Windows. It provides a convenient way for Linux developers to create core dumps of their applications based on performance triggers.

Requirements for ProcDump

The tool currently supports Red Hat Enterprise Linux / CentOS 7, Fedora 26, Mageia 6, and Ubuntu 14.04 LTS, with other versions being tested. It also requires gdb >= 7.6.1 and zlib (build-time only).

Limitations of ProcDump

• Runs on Linux kernel versions 3.5+
• Does not have full feature parity with the Windows version of ProcDump; specifically, it lacks the stay-alive functionality and custom performance counters

Installing ProcDump

ProcDump can be installed using two methods: via a package manager, which is the preferred method, or via a .deb package. To know more about ProcDump in detail, visit its GitHub page.

Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
‘We are not going to withdraw from the future’ says Microsoft’s Brad Smith on the ongoing JEDI bid, Amazon concurs
Microsoft bring an open-source model of Component Firmware Update (CFU) for peripheral developers
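As a sketch of what the trigger-based workflow looks like (the `-C`, `-M`, `-n`, and `-p` flags are taken from the project's README at the time; `myapp` is a hypothetical process name, and the commands are guarded so they only run where procdump is actually installed):

```shell
if command -v procdump >/dev/null 2>&1; then
    # Write a core dump of 'myapp' once its CPU usage crosses 90%:
    sudo -n procdump -C 90 -p "$(pidof myapp)" || echo "procdump run failed"

    # Dump when resident memory exceeds 500 MB, collecting up to 3 dumps:
    sudo -n procdump -M 500 -n 3 -p "$(pidof myapp)" || echo "procdump run failed"
else
    echo "procdump not installed; see the GitHub page for install instructions."
fi
```

The resulting core dumps can then be loaded into gdb for post-mortem debugging, which is why gdb is a runtime requirement.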


Windows Server 2019 comes with security, storage and other changes

Prasad Ramesh
21 Dec 2018
5 min read
Today, Microsoft unveiled new features of Windows Server 2019. The new features are based on four themes: hybrid, security, application platform, and Hyper-Converged Infrastructure (HCI).

General changes

Windows Server 2019, being a Long-Term Servicing Channel (LTSC) release, includes Desktop Experience. During setup, there are two options to choose from: Server Core installation or Server with Desktop Experience installation. A new feature called System Insights brings local predictive analytics capabilities to Windows Server 2019. This feature is powered by machine learning and aims to help users reduce the operational expenses associated with managing issues in Windows Server deployments.

Hybrid cloud in Windows Server 2019

A feature called Server Core App Compatibility feature on demand (FOD) greatly improves app compatibility in the Windows Server Core installation option. It does so by including a subset of binaries and components from Windows Server with the Desktop Experience, without adding the Windows Server Desktop Experience graphical environment itself. The purpose is to increase the functionality of Windows Server while keeping a small footprint. This feature is optional and is available as a separate ISO to be added to a Windows Server Core installation.

New measures for security

The changes add a new protection program as well as changes in virtual machines, networking, and the web.

Windows Defender Advanced Threat Protection (ATP)

There is now a Windows Defender program called Advanced Threat Protection (ATP). ATP has deep platform sensors and response actions to expose memory and kernel level attacks. ATP can respond by suppressing malicious files and terminating malicious processes. There is also a new set of host-intrusion prevention capabilities called the Windows Defender ATP Exploit Guard. The components of ATP Exploit Guard are designed to lock down and protect a machine against a wide variety of attacks and to block behaviors common in malware attacks.

Software Defined Networking (SDN)

SDN delivers many security features which increase customer confidence in running workloads, be it on-premises or as a cloud service provider. These enhancements are integrated into the comprehensive SDN platform first introduced in Windows Server 2016.

Improvements to shielded virtual machines

Users can now run shielded virtual machines on machines which are intermittently connected to the Host Guardian Service, leveraging the fallback HGS and offline mode features. There are troubleshooting improvements for shielded virtual machines, enabled by support for VMConnect Enhanced Session Mode and PowerShell Direct. Windows Server 2019 now supports Ubuntu, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server inside shielded virtual machines.

Changes for faster and safer web

Connections are coalesced to deliver uninterrupted and encrypted browsing. For automatic connection failure mitigation and ease of deployment, HTTP/2’s server-side cipher suite negotiation is upgraded.

Storage

Three storage changes are made in Windows Server 2019.

Storage Migration Service

This is a new technology that simplifies migrating servers to a newer Windows Server version. It has a graphical tool that inventories data on servers and transfers the data and configuration to newer servers. Users can optionally move the identities of the old servers to the new ones so that apps and users don’t have to make changes.

Storage Spaces Direct

New features in Storage Spaces Direct:

• Deduplication and compression capabilities for ReFS volumes
• Native support for persistent memory
• Nested resiliency for two-node hyper-converged infrastructure at the edge
• Two-server clusters which use a USB flash drive as a witness
• Support for Windows Admin Center
• Display of performance history
• Scale up to 4 petabytes per cluster
• Mirror-accelerated parity is two times faster
• Drive latency outlier detection
• Increased fault tolerance by manually delimiting the allocation of volumes

Storage Replica

Storage Replica is now also available in the Windows Server 2019 standard edition. A new feature called test failover allows mounting of destination storage to validate replication or backup data. Performance improvements have been made, and Windows Admin Center support is added.

Failover clustering

New features in failover clustering include:

• Cluster sets and Azure-aware clusters
• Cross-domain cluster migration
• USB witness
• Cluster infrastructure improvements
• Cluster Aware Updating support for Storage Spaces Direct
• File share witness enhancements
• Cluster hardening
• Failover Cluster no longer uses NTLM authentication

Application platform changes in Windows Server 2019

Users can now run Windows and Linux-based containers on the same container host by using the same Docker daemon. Changes are continually being made to improve support for Kubernetes. A number of improvements have been made to containers, such as changes to identity, compatibility, reduced size, and higher performance. Virtual network encryption now allows virtual network traffic to be encrypted between virtual machines that communicate within subnets marked as Encryption Enabled. There are also improvements to network performance for virtual workloads, time service, SDN gateways, a new deployment UI, and persistent memory support for Hyper-V VMs.

For more details, visit the Microsoft website.

OpenSSH, now a part of the Windows Server 2019
Microsoft announces Windows DNS Server Heap Overflow Vulnerability, users dissatisfied with patch details
Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019


Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Amrata Joshi
28 Jun 2019
3 min read
Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing, and deploying machine learning applications. In March this year, Amazon launched a similar offering, AWS Deep Learning Containers, with Docker image support for easy deployment of custom machine learning (ML) environments. The major advantage of Deep Learning Containers is that machine learning applications can be tested on-premises and then quickly moved to the cloud.

Support for PyTorch, TensorFlow, scikit-learn, and R

Deep Learning Containers, launched by Google Cloud Platform (GCP), can be run both in the cloud and on-premises. They support machine learning frameworks such as PyTorch, TensorFlow 2.0, and TensorFlow 1.13. AWS Deep Learning Containers support the TensorFlow and Apache MXNet frameworks; Google’s ML containers don’t support Apache MXNet, but they come with PyTorch, TensorFlow, scikit-learn, and R pre-installed.

Features various tools and packages

GCP Deep Learning Containers consist of several performance-optimized Docker containers that come along with various tools used for running deep learning algorithms. These include preconfigured Jupyter Notebooks, interactive tools used to work with and share code, visualizations, equations, and text. Google Kubernetes Engine clusters are also among the tools, used for orchestrating multiple container deployments. The containers also come with access to packages and tools such as Nvidia’s CUDA, cuDNN, and NCCL.

Docker images work on cloud and on-premises

The Docker images work on cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm.

Mike Cheng, software engineer at Google Cloud, said in a blog post, “If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime.” He further added, “Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE).”

For more information, visit the AI Platform Deep Learning Containers documentation.

Do Google Ads secretly track Stack Overflow users?
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
Curl’s lead developer announces Google’s “plan to reimplement curl in Libcrurl”
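To poke at the containers yourself you need the gcloud CLI and Docker installed; the repository `gcr.io/deeplearning-platform-release` and the `tf2-cpu` image family are the ones Google's documentation pointed to at the time, but check the current docs before relying on them. The commands below are guarded so the sketch degrades gracefully where the tools are missing:

```shell
if command -v gcloud >/dev/null 2>&1; then
    # List the published Deep Learning Container images:
    gcloud container images list \
        --repository="gcr.io/deeplearning-platform-release" || echo "gcloud call failed"
else
    echo "gcloud not installed"
fi

if command -v docker >/dev/null 2>&1; then
    # Pull a TensorFlow 2 CPU image and serve its bundled JupyterLab on port 8080:
    docker run -d -p 8080:8080 \
        gcr.io/deeplearning-platform-release/tf2-cpu || echo "docker run failed"
else
    echo "docker not installed"
fi
```

Because the same image runs locally and on GKE, Compute Engine, or AI Platform, the environment you prototype in is the one you deploy.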

389 Directory Server set to replace OpenLDAP as Red Hat and SUSE withdraw support for OpenLDAP in their Enterprise Linux offerings

Bhagyashree R
29 Aug 2018
2 min read
Red Hat and SUSE have withdrawn their support for OpenLDAP in their Enterprise Linux offerings; it will be replaced by Red Hat’s own 389 Directory Server. The openldap-server packages were deprecated starting from Red Hat Enterprise Linux (RHEL) 7.4 and will not be included in any future major release of RHEL. SUSE, in their release notes, have mentioned that the OpenLDAP server is still available on the Legacy Module for migration purposes, but it will not be maintained for the entire SUSE Linux Enterprise Server (SLE) 15 lifecycle.

What is OpenLDAP?

OpenLDAP is an open source implementation of the Lightweight Directory Access Protocol (LDAP) developed by the OpenLDAP Project, a collective effort to develop a robust, commercial-grade, open source LDAP suite of applications and development tools.

What is 389 Directory Server?

The 389 Directory Server is an LDAP server developed by Red Hat as a part of Red Hat’s community-supported Fedora Project. The name “389” comes from the port number used by LDAP. It supports many operating systems, including Fedora, Red Hat Enterprise Linux 3 and above, Debian, and Solaris 8 and above. The 389 Directory Server packages provide the core directory services components for Identity Management (IdM) in Red Hat Enterprise Linux and the Red Hat Directory Server (RHDS). The package is not supported as a stand-alone solution to provide LDAP services.

Why did Red Hat and SUSE withdraw their support?

According to Red Hat, customers prefer the Identity Management (IdM) in Red Hat Enterprise Linux solution over the OpenLDAP server for enterprise use cases. This is why they decided to focus on the technologies that Red Hat historically has had deep understanding of, expertise in, and has been investing in for more than a decade. By focusing on the Red Hat Directory Server and IdM offerings, Red Hat will be able to better serve the customers of those solutions and increase the value of their subscription.

To know more about Red Hat and SUSE withdrawing their support for OpenLDAP, check out Red Hat’s announcement and SUSE’s release notes.

Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices


Zeit releases Serverless Docker in beta

Richard Gall
15 Aug 2018
3 min read
Zeit, the organization behind the cloud deployment software Now, yesterday launched Serverless Docker in beta. The concept was first discussed by the Zeit team at Zeit Day 2018 back in April, but it's now available to use and promises to radically speed up deployments for engineers. In a post published on the Zeit website yesterday, the team listed some of the key features of this new capability, including:

• An impressive 10x-20x improvement in cold boot performance (in practice this means cold boots can happen in less than a second)
• A new slot configuration property that defines resource allocation in terms of CPU and memory, allowing you to fit an application within the set of constraints that are most appropriate for it
• Support for HTTP/2.0 and WebSocket connections to deployments, which means you no longer need to rewrite applications as functions

The key point to remember with this release, according to Zeit, is that "Serverless can be a very general computing model. One that does not require new protocols, new APIs and can support every programming language and framework without large rewrites."

Read next: Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1

What's so great about Serverless Docker?

Clearly, speed is one of the most exciting things about Serverless Docker. But there's more to it than that: it also offers a great developer experience. Johannes Schickling, co-founder and CEO of Prisma (a GraphQL data abstraction layer), said that with Serverless Docker, Zeit "is making compute more accessible. Serverless Docker is exactly the abstraction I want for applications."

https://twitter.com/schickling/status/1029372602178039810

Others on Twitter were also complimentary about Serverless Docker's developer experience, with one person comparing it favourably with AWS: "their developer experience just makes me SO MAD at AWS in comparison."

https://twitter.com/simonw/status/1029452011236777985

Combining serverless and containers

One of the reasons people are excited about Zeit's release is that it provides the next step in serverless. But it also brings containers into the picture. Typically, much of the conversation around software infrastructure over the last year or so has viewed serverless and containers as two options to choose from, rather than two things that can be used together.

It's worth remembering that Zeit's product has largely been developed alongside its customers that use Now: "This beta contains the lessons and the experiences of a massively distributed and diverse user base, that has completed millions of deployments, over the past two years." Eager to demonstrate how Serverless Docker works for a wide range of use cases, Zeit has put together a long list of examples of Serverless Docker in action on GitHub. You can find them here.

Read next:
A serverless online store on AWS could save you money. Build one.
Serverless computing wars: AWS Lambdas vs Azure Functions


Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32

Bhagyashree R
01 Apr 2019
2 min read
Last week, the team behind Ubuntu announced the release of the Ubuntu 19.04 Disco Dingo Beta, which comes with Linux 5.0 support, GNOME 3.32, and more. Its stable version is expected to be released on April 18th, 2019. Following are some of the updates in Ubuntu 19.04 Disco Dingo:

Updates in the Linux kernel

Ubuntu 19.04 is based on Linux 5.0, which was released last month. It comes with support for the AMD Radeon RX Vega M graphics processor, complete support for the Raspberry Pi 3B and 3B+, the Qualcomm Snapdragon 845, and much more.

Toolchain Upgrades

The tools are upgraded to their latest releases. The upgraded toolchain includes glibc 2.29, OpenJDK 11, Boost 1.67, Rustc 1.31, updated GCC 8.3, Python 3.7.2 as default, Ruby 2.5.3, PHP 7.2.15, and more.

Updates in Ubuntu Desktop

This release ships with the latest GNOME 3.32, giving it a refreshed visual design. It also brings a few performance improvements and new features:

• GNOME Disks now supports VeraCrypt, a utility used for on-the-fly encryption.
• A panel is added to the Settings menu to help users manage Thunderbolt devices.
• More shell components are cached in GPU RAM, which reduces load and increases the FPS count.
• Desktop zoom works much more smoothly.
• An option is added to automatically submit error reports via the error reporting dialog window.

Other updates include new Yaru icon sets, Mesa 19.0, QEMU 13.1, and libvirt 14.0. This release will be supported for 9 months, until January 2020. Users who require long-term support are recommended to use Ubuntu 18.04 LTS instead.

To read the full list of updates, visit Ubuntu’s official website.

Chromium blacklists nouveau graphics device driver for Linux and Ubuntu users
Ubuntu releases Mir 1.0.0
Ubuntu free Linux Mint Project, LMDE 3 ‘Cindy’ Cinnamon, released


Linus Torvalds is sorry for his ‘hurtful behavior’, is taking ‘a break (from the Linux community) to get help’

Natasha Mathur
17 Sep 2018
4 min read
Linux is one of the most popular operating systems, built around the Linux kernel by Linus Torvalds. Because it is free and open source, it quickly gained a huge audience among developers. Torvalds welcomed other developers’ contributions to the kernel on the condition that they keep their contributions free. Thanks to this, thousands of developers have been working to improve Linux over the years, leading to its huge popularity today.

Yesterday, Linus, who has been working on the kernel for almost three decades, caught the Linux community by surprise as he apologized and opened up about taking a break over his ‘hurtful’ behavior that ‘contributed to an unprofessional environment’. In a long email to the Linux kernel mailing list, Torvalds announced the Linux 4.19 release candidate and then talked about his ‘look yourself in the mirror’ moment.

“This week people in our community confronted me about my lifetime of not understanding emotions. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry,” admitted Torvalds.

The confession came about after Torvalds admitted to messing up the schedule of the Maintainer's Summit, a meeting of Linux's top 40 or so developers, by planning a family vacation. “Yes, I was somewhat embarrassed about having screwed up my calendar, but honestly, I was mostly hopeful that I wouldn't have to go to the kernel summit that I have gone to every year for just about the last two decades. That whole situation then started a whole different kind of discussion -- I realized that I had completely mis-read some of the people involved,” confessed Torvalds.

Torvalds has been notorious for his outspoken nature and outbursts towards others, especially developers in the Linux community. Sarah Sharp, a Linux maintainer, quit the Linux community in 2015 over Torvalds’ offensive behavior and called it ‘toxic’. Torvalds exploded at Intel earlier this year for spinning the Spectre fix as a security feature. Last year, Torvalds responded with profanity about different approaches to security during a discussion about whitelisting proposed features for Linux version 4.15.

“Maybe I can get an email filter in place so that when I send email with curse-words, they just won't go out. I really had been ignoring some fairly deep-seated feelings in the Community...I am not an emotionally empathetic kind of person...I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely,” writes Torvalds.

Torvalds then went on to talk about taking a break from the Linux community: “This is not some kind of "I'm burnt out, I need to just go away" break. I'm not feeling like I don't want to continue maintaining Linux. I very much want to continue to do this project that I've been working on for almost three decades. I need to take a break to get help on how to behave differently and fix some issues in my tooling and workflow.”

A discussion with over 500 comments has already started on Reddit regarding Torvalds’ decision. While some people are supporting Torvalds by accepting his apology, others feel that the apology was long overdue and will believe him only once he puts his words into action.

https://twitter.com/TejasKumar_/status/1041527028271312897
https://twitter.com/coreytabaka/status/1041468174397399041

Python founder resigns – Guido van Rossum goes ‘on a permanent vacation from being BDFL’
Facebook and Arm join Yocto Project as platinum members for embedded Linux development
NSA researchers present security improvements for Zephyr and Fuchsia at Linux Security Summit 201
Bhagyashree R
29 Nov 2018
3 min read

The Linux and RISC-V foundations team up to drive open source development and adoption of RISC-V instruction set architecture (ISA)

Yesterday, the Linux Foundation announced that it is joining hands with the RISC-V Foundation to drive open source development and adoption of the RISC-V instruction set architecture (ISA).

https://twitter.com/risc_v/status/1067553703685750785

The RISC-V Foundation is a non-profit corporation responsible for directing the future development of the RISC-V ISA. Since its formation, it has grown quickly and now includes more than 100 member organizations. With this collaboration, the two foundations aim to further grow the RISC-V ecosystem and provide improved support for the development of new applications and architectures across all computing platforms.

Rick O'Connor, executive director of the RISC-V Foundation, said, "With the rapid international adoption of the RISC-V ISA, we need increased scale and resources to support the explosive growth of the RISC-V ecosystem. The Linux Foundation is an ideal partner given the open source nature of both organizations. This joint collaboration with the Linux Foundation will enable the RISC-V Foundation to offer more robust support and educational tools for the active RISC-V community, and enable operating systems, hardware implementations and development tools to scale faster."

The Linux Foundation will provide governance, best practices for open source development, and resources such as training programs and infrastructure tools. It will also help RISC-V with community outreach, marketing, and legal expertise.

Jim Zemlin, executive director of the Linux Foundation, believes that RISC-V has great potential, given its popularity in areas like AI, machine learning, and IoT. He said, "RISC-V has great traction in a number of markets with applications for AI, machine learning, IoT, augmented reality, cloud, data centers, semiconductors, networking and more. RISC-V is a technology that has the potential to greatly advance open hardware architecture. We look forward to collaborating with the RISC-V Foundation to advance RISC-V ISA adoption and build a strong ecosystem globally."

The two foundations have already started working on a pair of getting-started guides for running Zephyr, a small, scalable open source real-time operating system (RTOS) optimized for resource-constrained devices. They are also hosting the RISC-V Summit, a four-day event running December 3-6 in Santa Clara. The summit will include sessions on the RISC-V ISA architecture, commercial and open source implementations, software and silicon, vectors and security, applications and accelerators, and much more.

Read the complete announcement on the Linux Foundation's official website.

Uber becomes a Gold member of the Linux Foundation
The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project
Google becomes new platinum member of the Linux foundation
Savia Lobo
18 Oct 2018
2 min read

KUnit: A new unit testing framework for Linux Kernel

On Tuesday, Google engineer Brendan Higgins announced an experimental set of 31 patches introducing KUnit, a new Linux kernel unit testing framework intended to help preserve and improve the quality of the kernel's code.

KUnit is a lightweight unit testing and mocking framework designed for the Linux kernel. Because unit tests have finer granularity, they can exercise all code paths easily, solving the classic problem of error-handling code being difficult to test. KUnit is heavily inspired by JUnit, Python's unittest.mock, and Googletest/Googlemock for C++. It provides facilities for defining unit test cases, grouping related test cases into test suites, providing common infrastructure for running tests, mocking, spying, and much more.

Brendan writes, "It does not require installing the kernel on a test machine or in a VM and does not require tests to be written in userspace running on a host kernel. Additionally, KUnit is fast: From invocation to completion KUnit can run several dozen tests in under a second. Currently, the entire KUnit test suite for KUnit runs in under a second from the initial invocation (build time excluded)."

When asked whether KUnit will replace the other testing frameworks for the Linux kernel, Brendan said it would not: "Most existing tests for the Linux kernel are end-to-end tests, which have their place. A well tested system has lots of unit tests, a reasonable number of integration tests, and some end-to-end tests. KUnit is just trying to address the unit test space which is currently not being addressed."

To know more about KUnit in detail, read Brendan Higgins' email threads.

What role does Linux play in securing Android devices?
bpftrace, a DTrace like tool for Linux now open source
Linux drops Code of Conflict and adopts new Code of Conduct
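KUnit itself is written in C for use inside the kernel, but since it is modelled on frameworks like JUnit and Python's unittest.mock, the pattern it borrows (test cases grouped into a suite, with mocks that make error paths easy to exercise) can be sketched in Python. Everything below (the read_sensor function, the register address, and the test names) is an illustrative stand-in, not KUnit code:

```python
import unittest
from unittest import mock

# A trivial "driver" function standing in for the code under test.
def read_sensor(bus):
    raw = bus.read(0x10)
    if raw is None:
        raise IOError("bus read failed")
    return raw * 2

class ReadSensorTest(unittest.TestCase):
    def test_happy_path(self):
        # Mock the hardware bus rather than touching real hardware.
        bus = mock.Mock()
        bus.read.return_value = 21
        self.assertEqual(read_sensor(bus), 42)
        bus.read.assert_called_once_with(0x10)

    def test_error_path(self):
        # Unit tests make the error-handling path easy to exercise.
        bus = mock.Mock()
        bus.read.return_value = None
        with self.assertRaises(IOError):
            read_sensor(bus)

# Group related cases into a suite, as KUnit groups cases into test suites.
suite = unittest.TestLoader().loadTestsFromTestCase(ReadSensorTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The kernel-side C equivalent uses KUnit's own case and suite definitions, but the shape of the tests is the same: each case sets up a fake dependency, calls the unit, and asserts on the outcome.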
Fatema Patrawala
12 Jul 2019
4 min read

Twitter experienced major outage yesterday due to an internal configuration issue

Yesterday, Twitter went down across major parts of the world, including the US and the UK. Twitter users reported being unable to access the platform on web and mobile devices. The outage lasted approximately an hour.

According to DownDetector.com, the site began experiencing major issues at 2:46pm EST, with problems reported by users attempting to access Twitter through its website, its iPhone or iPad app, and Android devices. The majority of reported problems were website issues (51%), while nearly 30% came from iPhone and iPad app usage and another 18% from Android users.

Twitter acknowledged on its status page that the platform was experiencing issues shortly after the first outages were reported online. The company listed the status as "investigating" and noted that a service disruption was causing the seemingly global issue. "We are currently investigating issues people are having accessing Twitter," the statement read. "We will keep you updated on what's happening."

This month has seen several high-profile outages among social networks. Facebook and Instagram experienced a day-long outage affecting large parts of the world on July 3rd. LinkedIn went down for several hours on Wednesday. Cloudflare suffered two major outages in the span of two weeks this month: one was due to an internal software glitch, and another was caused when Verizon accidentally rerouted IP packets after it wrongly accepted a network misconfiguration from a small ISP in Pennsylvania, USA. Reddit was experiencing outages on its website and app earlier in the day, but appeared to be back up and running for most users an hour before Twitter went down, according to DownDetector.com. In March, Facebook and its family of apps experienced a 14-hour-long outage, attributed to a server configuration change.

The Twitter site began operating normally nearly an hour later, at approximately 3:45pm EST. Users joked that they had "all been censored for the last hour" when the site eventually came back up. On the status page of the outage report, Twitter said the outage was caused by "an internal configuration change, which we're now fixing." "Some people may be able to access Twitter again and we're working to make sure Twitter is available to everyone as quickly as possible," the company said in a follow-up statement.

https://twitter.com/TwitterSupport/status/1149412158121267200

On Hacker News, users discussed the number of outages at major tech companies and why this keeps happening. One comment reads:

"Ok, this is too many high-profile, apparently unrelated outages in the last month to be completely a coincidence. Hypotheses: 1) software complexity is escalating over time, and logically will continue to until something makes it stop. It has now reached the point where even large companies cannot maintain high reliability. 2) internet volume is continually increasing over time, and periodically we hit a point where there are just too many pieces required to make it work (until some change the infrastructure solves that). We had such a point when dialup was no longer enough, and we solved that with fiber. Now we have a chokepoint somewhere else in the system, and it will require a different infrastructure change 3) Russia or China or Iran or somebody is f*(#ing with us, to see what they are able to break if they needed to, if they need to apply leverage to, for example, get sanctions lifted 4) Just a series of unconnected errors at big companies 5) Other possibilities?"

To this, another user adds, "I work at Facebook. I worked at Twitter. I worked at CloudFlare. The answer is nothing other than #4. #1 has the right premise but the wrong conclusion. Software complexity will continue escalating until it drops by either commoditization or redefining problems. Companies at the scale of FAANG(+T) continually accumulate tech debt in pockets and they eventually become the biggest threats to availability. Not the new shiny things. The sinusoidal pattern of exposure will continue."

Facebook, Instagram and WhatsApp suffered a major outage yesterday; people had trouble uploading and sending media files
Facebook family of apps hits 14 hours outage, longest in its history
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
Melisha Dsouza
17 Aug 2018
5 min read

What’s new in Google Cloud Functions serverless platform

The Google Cloud Next conference in San Francisco in July 2018 saw some exciting developments in serverless technology. The company is giving development teams the ability to build apps without worrying about managing servers. Bringing together the best of both worlds, serverless and containers, Google announced that Cloud Functions is now generally available and ready for production use. Here is a list of the new features that developers can watch out for.

#1 Write Cloud Functions using Node 8, Python 3.7

With support for async/await and a new function signature, you can now write Cloud Functions using Node 8. Dealing with multiple asynchronous operations is easier because Cloud Functions provides data and context, and you can use the await keyword to await the results of asynchronous operations.

Python 3.7 can also be used to write Cloud Functions. As with Node, you get data and context for background functions, and a request for HTTP functions. Python HTTP functions are based on the popular Flask microframework, which lets you get set up quickly: requests are based on flask.Request, and responses just need to be compatible with flask.make_response. With Python background functions, you get data (a dict) and context (google.cloud.functions.Context). To signal completion, you just return from your function, or raise an exception and Stackdriver error handling will kick in. And, similarly to Node (package.json), Cloud Functions will automatically install all of your Python dependencies (requirements.txt) and build in the cloud. You can have a look at the code differences between Node 6 and Node 8 behavior, and at a Flask request, on the Google Cloud website.

#2 Cloud Functions is now out for Firebase

Cloud Functions for Firebase is also generally available, with full support for Node 8, including ECMAScript 2017 and async/await. Additional granular controls include support for runtime configuration options, including region, memory, and timeout, allowing you to refine the behavior of your applications. You can find more details in the Firebase documentation. Flexibility for the application stack is also improved: Firebase events (Analytics, Firestore, Realtime Database, Authentication) are directly available in the Cloud Functions console on GCP, so you can now trigger your functions in response to Firebase events directly from your GCP project.

#3 Run headless Chrome by accessing system libraries

Google Cloud Functions has also broadened the scope of available libraries by rebasing the underlying Cloud Functions operating system onto Ubuntu 18.04 LTS. Access to system libraries such as ffmpeg and libcairo2 is now available, in addition to imagemagick, as well as everything required to run headless Chrome. For example, you can now process videos and take web page screenshots in Chrome from within Cloud Functions.

#4 Set environment variables

You can now pass configuration to your functions by specifying key-value pairs that are bound to a function; notably, these pairs don't have to exist in your source code. Environment variables are set at deploy time using the --set-env-vars argument and are then injected into the environment at execution time. You can find more details on the Google Cloud webpage.

#5 Cloud SQL direct connect

You can now connect Cloud Functions to Cloud SQL instances through a fully managed, secure direct connection. Explore more in the official documentation.

What to expect next in Google Cloud Functions?

Apart from these, Google also promises a range of features to be released in the future. These include:

1. Scaling controls: These will let you limit the number of instances on a per-function basis, limiting traffic. Scenarios where a sudden traffic surge makes Cloud Functions rapidly scale up and overload a database will therefore come under control, as will general prioritization based on the importance of various parts of your system.

2. Serverless scheduling: You'll be able to schedule Cloud Functions down to one-minute intervals, invoked via HTTP(S) or Pub/Sub. This allows you to execute Cloud Functions on a repeating schedule; tasks like daily report generation or regularly processing dead-letter queues will now pick up speed.

3. Compute Engine VM access: Connect to Compute Engine VMs running on a private network using the --connected-vpc option. This provides a direct connection to compute resources on an internal IP address range.

4. IAM security control: The new Cloud Functions Invoker IAM role allows you to add IAM security to a function's URL. You can control who can invoke the function using the same security controls as used elsewhere in Cloud Platform.

5. Serverless containers: With serverless containers, Google provides the same infrastructure that powers Cloud Functions, but users will be able to simply provide a Docker image as input. This will allow them to deploy arbitrary runtimes and arbitrary system libraries on arbitrary Linux distributions, while still retaining the same serverless characteristics as Cloud Functions.

You can find detailed information about the updated services on Google Cloud's official page.

Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
Google Cloud Launches Blockchain Toolkit to help developers build apps easily
Zeit releases Serverless Docker in beta
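The Python 3.7 background-function shape described in #1 above can be sketched as follows. This is a hypothetical illustration, not code from Google's documentation: the function name, the payload field, and the FakeContext stand-in are all made up for the example; in production the platform supplies the real google.cloud.functions.Context object.

```python
# Hypothetical sketch of a Python 3.7 background function: the platform
# passes event data as a dict plus a context object. Returning signals
# successful completion; raising an exception hands the error to
# Stackdriver error handling.
def handle_event(data, context):
    name = data.get("name")
    if name is None:
        # Raising marks this invocation as failed.
        raise ValueError("event payload missing 'name'")
    return f"hello, {name} (event {context.event_id})"

# Local stand-in for google.cloud.functions.Context, for testing only.
class FakeContext:
    event_id = "1234"
    event_type = "google.pubsub.topic.publish"

print(handle_event({"name": "world"}, FakeContext()))
# prints: hello, world (event 1234)
```

Because the handler is a plain function of a dict and a context object, it can be unit tested locally with a fake context like this before being deployed.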
Packt
05 Mar 2018
2 min read

Learn Azure serverless computing for free - Download a free eBook from Microsoft

There has been a lot of noise around serverless computing over the last couple of years. Some have argued that it's going to put the container revolution to bed, and while that's highly unlikely (containers and serverless are simply different solutions that are appropriate in different contexts), it's significant that a trend like serverless could emerge so quickly to capture the attention of engineers and architects. It says a lot about the rapidly changing nature of software infrastructures and the increased demands for agility, scalability, and power.

Azure is a cloud solution that's only going to drive serverless adoption further. But we know there's always some trepidation among tech decision makers when choosing to implement something new or adopt a new platform. That's why we're delighted to be partnering with Microsoft Azure to give the world free access to the Azure Serverless Computing Cookbook. Packed with more than 50 Azure serverless tutorials and recipes that help solve common and not-so-common challenges, this 325-page eBook is both a useful introduction to Azure's serverless capabilities and a handy resource for anyone already acquainted with them.

Simply click here to go to Microsoft Azure to download the eBook for free.