Friday, April 18, 2025

NASA to Launch First Space-Based Quantum Gravity Gradiometer

Quantum Gravity Gradiometer Pathfinder (QGGPf)

NASA’s Jet Propulsion Laboratory, in collaboration with academic and small business partners, is preparing to launch the first space-based quantum gravity gradiometer. The Quantum Gravity Gradiometer Pathfinder (QGGPf), backed by NASA’s Earth Science Technology Office, will test a suite of quantum sensing technologies for mapping Earth’s gravity field from orbit.

According to a NASA news release, the project aims to show how quantum sensors, particularly those based on ultra-cold atom interferometry, can detect gravitational anomalies with great precision. These variations in gravity, caused by mass redistribution beneath Earth’s surface, matter for applications ranging from subsurface geology and national security to water resource management.

How It Operates: Gravity Gradients and Ultra-Cold Atoms

QGGPf will use clouds of rubidium atoms cooled to near absolute zero as test masses. At these temperatures the atoms behave as matter waves, making it possible to compare the gravitational acceleration at two points in space with high accuracy. Gravity gradiometers measure the gravity gradient: the difference in the rate at which two test masses fall over a short baseline. Because a stronger gravitational field produces a greater acceleration, scientists can pick up even the smallest changes in mass distribution.
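The basic idea, comparing the free-fall acceleration of two test masses over a short baseline, can be sketched with a back-of-the-envelope calculation. This is an illustrative toy model using Newtonian point-mass gravity, not mission software, and the orbital altitude and baseline below are round-figure assumptions:

```python
# Illustrative sketch (not flight code): a gravity gradiometer compares
# the free-fall acceleration of two test masses separated by a short
# baseline. Altitude and baseline values are round-figure assumptions.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg

def gravity(r_m: float) -> float:
    """Gravitational acceleration (m/s^2) at distance r_m from Earth's center."""
    return G * M_EARTH / r_m**2

def gravity_gradient(r_m: float, baseline_m: float) -> float:
    """Difference in acceleration between two test masses separated
    vertically by baseline_m, divided by the baseline (units: 1/s^2)."""
    return (gravity(r_m) - gravity(r_m + baseline_m)) / baseline_m

# Two atom clouds 0.5 m apart at a ~500 km orbital altitude
r_orbit = 6.371e6 + 500e3
print(gravity_gradient(r_orbit, 0.5))  # roughly 2.5e-6 s^-2 (about 2500 Eötvös)
```

Detecting the tiny *changes* in this gradient caused by shifting mass below the surface is what demands quantum-level sensitivity.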

According to the press release, using ultra-cold atoms in space allows for longer-duration and more accurate measurements than traditional mechanical test masses. “With atoms, I can guarantee that every measurement will be the same,” said Sheng-wey Chiow, an experimental physicist at JPL. Atom-based sensors are less susceptible to drift and thermal noise, which improves measurement stability over time.

At about 0.25 cubic meters and roughly 125 kilograms, QGGPf’s core sensor will be substantially smaller than conventional spaceborne gravity instruments. Despite its small size, the quantum device is predicted to be up to ten times more sensitive than current classical sensors. The mission’s main objective is to validate the technology in orbit, but the findings may also inform future planetary exploration and Earth science missions.

Towards Advancing Quantum Technologies in Space

This initiative reflects NASA’s broader effort to incorporate quantum technologies into its science missions. “In order to determine how well it will function, we need to fly it,” JPL postdoctoral researcher Ben Stray stated. “That will enable us to develop the quantum gravity gradiometer, as well as quantum technology in general.”

The instrument will be built from subsystems developed through public-private collaboration. JPL is working with AOSense and Infleqtion to advance the atom-interferometry sensor head, while NASA’s Goddard Space Flight Center is working with Vector Atomic to develop the laser systems that control and measure the atomic clouds.

Atom Interferometry and Measurement Precision

Atom interferometry, the method used in QGGPf, splits and recombines matter waves and measures the phase shifts induced by gravitational forces. These phase shifts are directly related to the acceleration experienced by the atomic clouds. By comparing two such clouds in free fall, the gradiometer maps gravity gradients with high spatial precision.
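The phase-to-acceleration relationship above can be illustrated with the standard textbook formula for a Mach-Zehnder atom interferometer, phi = k_eff · g · T². This is a generic pedagogical sketch; the pulse timing and baseline are invented values, not QGGPf parameters:

```python
# Illustrative sketch: in a standard Mach-Zehnder atom interferometer, the
# accumulated phase is phi = k_eff * g * T^2, where k_eff is the effective
# two-photon wavevector and T the time between laser pulses. Values below
# are textbook-style assumptions for rubidium, not QGGPf parameters.
import math

WAVELENGTH = 780e-9                      # Rb D2 line, m
K_EFF = 2 * (2 * math.pi / WAVELENGTH)   # two-photon effective wavevector, 1/m

def interferometer_phase(g: float, T: float) -> float:
    """Phase (radians) accumulated by a cloud falling at acceleration g
    during an interferometer with pulse separation T (seconds)."""
    return K_EFF * g * T**2

def gradient_from_phases(phi_a: float, phi_b: float, T: float, baseline: float) -> float:
    """Recover the gravity gradient (1/s^2) from the differential phase of
    two clouds separated vertically by `baseline` metres."""
    delta_g = (phi_a - phi_b) / (K_EFF * T**2)
    return delta_g / baseline

# Two clouds 0.5 m apart where g differs by 1.2e-6 m/s^2
T = 1.0
phi_top = interferometer_phase(9.81, T)
phi_bottom = interferometer_phase(9.81 + 1.2e-6, T)
print(gradient_from_phases(phi_bottom, phi_top, T, 0.5))  # about 2.4e-6 s^-2
```

Because the phase scales with T², longer free-fall times in orbit translate directly into finer acceleration resolution, which is one reason a space platform is attractive.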

In a recent paper in EPJ Quantum Technology, Jason Hyon, Chief Technologist for Earth Science at JPL and director of the Quantum Space Innovation Center, highlighted the promise of atom-based sensing. Hyon noted that instruments like QGGPf might eventually measure features such as mineral deposits and subterranean water reservoirs from orbit.

Applications and Future Potential

Tectonic movement, glacier melting, groundwater extraction, and other geophysical processes cause Earth’s gravitational field to change over time. Precisely monitoring these shifts is important for improving climate models and informing environmental policy. NASA has historically relied on missions like GRACE and GRACE-FO for gravity measurements, but QGGPf offers a fundamentally new sensing technology that could someday supplement, or even replace, conventional gravimetry missions.

QGGPf will serve primarily as a technology pathfinder, with launch targeted for near the end of the decade. Success of this mission could pave the way for compact, high-precision quantum instruments for planetary science and Earth observation, as well as operational quantum gravity gradiometers.

News source

Codex CLI Grant: Building The Code With OpenAI Models


Enhance your coding workflow with Codex CLI, a local terminal tool that brings the reasoning power of advanced AI models (including future GPT-4.1 support) to your fingertips.

OpenAI is releasing the newest models in its o-series, OpenAI o3 and o4-mini. These models are trained to think for longer before responding. They are the most intelligent models OpenAI has released to date, and they mark a significant advancement in ChatGPT’s capabilities for both novice and expert users. For the first time, OpenAI’s reasoning models can agentically use and combine every tool within ChatGPT: searching the web, analysing uploaded files and other data with Python, reasoning deeply about visual inputs, and even generating images.

To tackle more complicated problems, these models are trained to reason about when and how to use tools, producing thorough, considered answers in the appropriate output formats, typically in under a minute. As a result, they are better equipped to handle complex queries, a step towards a more agentic ChatGPT that can carry out tasks on your behalf. The combination of state-of-the-art reasoning with full tool access delivers noticeably better performance on real-world tasks and academic benchmarks, setting a new standard for intelligence and utility.

What’s changed

OpenAI o3, the company’s most powerful reasoning model, pushes the frontier across coding, math, science, visual perception, and more. It sets a new state of the art on benchmarks such as Codeforces, SWE-bench, and MMMU. It is ideal for complex questions that call for multifaceted analysis and whose answers may not be immediately obvious, and it excels at visual tasks such as analysing charts, graphics, and photographs.

In evaluations by external experts, o3 outperforms OpenAI o1 by 20 percent on difficult, real-world tasks, especially in programming, business/consulting, and creative ideation. Early testers praised its analytical rigour as a thinking partner and its ability to generate and critically evaluate novel hypotheses, particularly in biology, mathematics, and engineering.

OpenAI o4-mini is a smaller model optimised for fast, cost-efficient reasoning. It performs remarkably well for its size and price, particularly in math, coding, and visual tasks, and it is the best-performing benchmarked model on AIME 2024 and 2025. In expert evaluations it also outperforms its predecessor, OpenAI o3-mini, in areas such as data science and non-STEM domains. Its efficiency supports significantly higher usage limits than o3, making o4-mini a strong option for high-volume, high-throughput queries that benefit from reasoning.

Thanks to improved intelligence and the inclusion of web sources, external expert evaluators rated both models as showing better instruction following and more useful, verifiable responses than their predecessors. The two models should also feel more natural and conversational than earlier reasoning models, especially as they use memory and past conversations to personalise and contextualise responses.

Continuing to scale reinforcement learning

Throughout the development of OpenAI o3, the company has observed that large-scale reinforcement learning exhibits the same “more compute = better performance” trend seen in GPT-series pretraining. By retracing the scaling path, this time in RL, OpenAI pushed training compute and inference-time reasoning up by another order of magnitude and still saw clear performance gains, confirming that the models perform better the more they are allowed to think. At the same latency and cost as OpenAI o1, o3 delivers higher performance in ChatGPT, and OpenAI has verified that its performance keeps improving with longer thinking time.

OpenAI also used reinforcement learning to teach both models how to use tools and when to use them. Because they deploy tools based on desired outcomes, they are more capable in open-ended situations, especially those involving visual reasoning and multi-step workflows. Early testers report that this improvement shows up in both academic benchmarks and real-world tasks.

Thinking with images

For the first time, these models can integrate images directly into their chain of thought. They don’t merely see an image; they think with it. Their state-of-the-art performance across multimodal benchmarks reflects a new kind of problem solving that blends visual and textual reasoning.

The models can interpret images that are blurry, low quality, or inverted, such as hand-drawn sketches, textbook diagrams, or photos of whiteboards. As part of their reasoning process, they can use tools to rotate, zoom in on, or otherwise transform images on the fly.

These models address previously unsolvable problems by achieving best-in-class accuracy on visual perception challenges.

Limitations

Currently, thinking with visuals has the following drawbacks:

  • Overly long reasoning chains: Models may make redundant or unnecessary tool calls and image manipulation steps, leading to excessively long chains of thought.
  • Perception errors: Models can still make basic perceptual mistakes. Even when tool calls correctly advance the reasoning process, visual misinterpretations can lead to incorrect final answers.
  • Reliability: Across multiple attempts at a problem, models may try different visual reasoning techniques, some of which can produce incorrect results.

Toward agentic tool use

OpenAI o3 and o4-mini have full access to ChatGPT’s tools, as well as your own custom tools via function calling in the API. The models are trained to reason about how to solve problems, deciding when and how to use tools to produce detailed, thoughtful answers in the right output formats, typically in under a minute.

By chaining multiple tool calls, the model might search the web for public utility data, write Python code to build a forecast, generate a graph or image, and explain the key factors behind the prediction. The models’ reasoning lets them react to new information and change course as needed: they can run several web searches, inspect the results, and issue new searches when they need more information.
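The general loop described here (a model emits tool calls, a harness executes them and feeds results back until the model answers) can be sketched in miniature. This is a hypothetical, offline illustration of the pattern, not OpenAI’s API or agent implementation; the tool names and the scripted “model” are invented for the example:

```python
# Hypothetical sketch of an agentic tool-use loop. The "model" here is a
# scripted stand-in that picks the next tool based on what it has seen so
# far; tool names and logic are invented for illustration only.

def search_web(query: str) -> str:
    """Stand-in for a web-search tool."""
    return f"results for '{query}': utility demand grew 3% last year"

def run_python(code: str) -> str:
    """Stand-in for a code-execution tool."""
    return str(eval(code))  # illustration only; never eval untrusted input

TOOLS = {"search_web": search_web, "run_python": run_python}

def scripted_model(history: list) -> dict:
    """Pretend model: search first, then compute, then answer."""
    tool_turns = [m for m in history if m["role"] == "tool"]
    if not tool_turns:
        return {"tool": "search_web", "args": "utility demand"}
    if len(tool_turns) == 1:
        return {"tool": "run_python", "args": "round(100 * 1.03, 1)"}
    return {"answer": f"Forecast: {history[-1]['content']} (demand up 3%)"}

def agent_loop(question: str) -> str:
    history = [{"role": "user", "content": question}]
    while True:
        step = scripted_model(history)
        if "answer" in step:                        # model is done reasoning
            return step["answer"]
        result = TOOLS[step["tool"]](step["args"])  # execute the tool call
        history.append({"role": "tool", "content": result})

print(agent_loop("Forecast next year's utility demand"))
# prints: Forecast: 103.0 (demand up 3%)
```

In a real system the scripted stand-in would be a model call, and the loop would also enforce output formats, timeouts, and tool-call budgets.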

The models can handle tasks that call for access to current information outside their own knowledge, extended reasoning, synthesis, and output production across modalities with this adaptable, strategic approach.

OpenAI o3 and o4-mini are the most intelligent models the company has released, and they are also frequently more efficient than their predecessors, OpenAI o1 and o3-mini. For example, on the 2025 AIME math competition, the cost-performance frontier for o3 strictly improves over o1, and the frontier for o4-mini strictly improves over o3-mini. In general, OpenAI expects o3 and o4-mini to be both smarter and cheaper than o1 and o3-mini, respectively, for most real-world uses.

Security

Every increase in model capability calls for a corresponding increase in safety. For OpenAI o3 and o4-mini, OpenAI rebuilt its safety training data, adding new refusal prompts in areas such as malware generation, jailbreaks, and biological threats (biorisk). This updated data has led o3 and o4-mini to perform strongly on internal refusal benchmarks (such as instruction hierarchy and jailbreaks).

Beyond strong refusal performance, OpenAI has developed system-level mitigations to flag dangerous prompts in frontier risk areas. Similar to earlier work in image generation, it trained a reasoning LLM monitor that works from human-written, interpretable safety specifications. Applied to biorisk, this monitor flagged roughly 99 percent of risky conversations in human red-teaming exercises.

OpenAI stress-tested both models with its most rigorous safety programme to date. It evaluated o3 and o4-mini across the three tracked capability areas covered by its updated Preparedness Framework: biological and chemical, cybersecurity, and AI self-improvement. Based on these evaluations, OpenAI concluded that both models remain below the Framework’s “High” threshold in all three areas. Detailed results are available in the accompanying system card.

Codex CLI: frontier reasoning in the terminal

OpenAI is also sharing a new experiment: Codex CLI, a lightweight coding agent you can run from your terminal. It works directly on your computer and is designed to maximise the reasoning power of models like o3 and o4-mini, with support for additional API models such as GPT‑4.1 coming soon.

From the command line, you can get the benefits of multimodal reasoning by passing the model screenshots or low-fidelity sketches, combined with local access to your code. OpenAI views Codex CLI as a minimal interface for connecting models to users and their computers.

In addition, OpenAI is launching a $1 million initiative to support projects that use Codex CLI and OpenAI models. Grant applications will be evaluated and awarded in increments of $25,000 USD in the form of API credits.

Access

Starting today, ChatGPT Plus, Pro, and Team users will see o3, o4-mini, and o4-mini-high in the model selector, replacing o1, o3-mini, and o3-mini-high. ChatGPT Enterprise and ChatGPT Edu users will gain access within a week. Free users can try o4-mini by selecting ‘Think’ in the composer before submitting a query. Rate limits across all plans remain unchanged from the prior set of models.

OpenAI expects to release OpenAI o3-pro with full tool support in a few weeks. For now, Pro users can still access o1-pro.

Both o3 and o4-mini are available to developers today via the Chat Completions API and the Responses API; some developers will need to verify their organisations to use these models. The Responses API will soon support built-in tools such as web search, file search, and the code interpreter within the model’s reasoning.

The Responses API also supports reasoning summaries and can preserve reasoning tokens around function calls for better performance. Explore the documentation to get started, and check back for future updates.

OpenAI o3 and o4-mini Now Available in Microsoft Azure OpenAI Service

According to Microsoft, the latest o-series models, OpenAI o3 and o4-mini, are now available in Microsoft Azure OpenAI Service in Azure AI Foundry and GitHub.

What’s next

These releases reflect the direction OpenAI is heading: merging the specialised reasoning capabilities of the o-series with more of the natural conversational ability and tool use of the GPT-series. Future models will combine these strengths to support smooth, natural conversations alongside proactive tool use and advanced problem solving.

HRL Laboratories and Boeing Validate Key Quantum Space Mission Hardware

Q4S satellite mission

Boeing and HRL Laboratories have reported a major accomplishment in their collaboration on the Q4S satellite mission. A critical subassembly for quantum communication that demonstrates four-photon quantum entanglement swapping in space has been successfully constructed and technically validated.

This achievement is a significant step towards the development of distributed quantum networks and secure quantum communication links outside of Earth. The verified subassembly is now prepared for flight after meeting important performance goals, such as high-fidelity entanglement and a photon pair detection rate of more than 2,500 per second.

Quantum Communication

Development of Space-Based Quantum Communication: The main focus is the development of operational quantum communication systems in space. The Q4S mission aims to be a “first-of-its-kind effort to demonstrate four-photon quantum entanglement swapping in space.” According to the announcement, this “powerful capability” is “essential to enabling future secure communications and distributed quantum networks.”

Effective Hardware Development and Validation: HRL Laboratories‘ effective creation and thorough testing of the quantum communication subassembly are crucial components. Thus, “the optical board, control electronics, and final thermo-mechanical packaging are combined into a single, space-ready system.” Additionally, the subassembly “passed initial end-to-end software verification.”

Reaching Performance Goals: The validation tests have shown that the subassembly satisfies the mission’s performance requirements. With a detection rate of “over 2,500 matching photon pairs per second,” the two photon sources in the subassembly demonstrated “strong signal quality (fidelity between 0.8 and 0.9)”—”enough to meet the project’s requirements for accurate quantum measurements.”

Boeing is using a “ground twin” approach for mission assurance: the verified subassembly will serve as a duplicate of the on-orbit payload, which is currently in production. The ground twin will “mirror the on-orbit payload” and be used for testing, troubleshooting, and operational support on the ground.

“Demonstrating entanglement swapping between these two entangled photon pairs will enable us to entangle previously unconnected nodes, a foundational breakthrough for building secure, scalable quantum computing and sensing networks in space.” This demonstrates how this approach could support quantum technology developments in the future that go beyond secure communication.

Cooperation and Trailblazing Effort: The release highlights the collaboration between Boeing and HRL Laboratories in this “pioneering demonstration.” “Proud to partner with Boeing on this pioneering demonstration and lay the groundwork for secure communications in space.” This demonstrates how working together is essential to advancing quantum technologies.

You can also read What Is Quantum Teleportation And Why Is It Important?

HRL Laboratories Validation

  • Goal of the Q4S satellite mission: “Four-photon quantum entanglement swapping in space.”
  • HRL Laboratories has finished “construction of the fully integrated, space-grade subassembly.”
  • Space-Readiness: Boeing’s El Segundo Space Simulation Laboratory has validated the subassembly, proving its “space qualification.”
  • The validated subassembly will be the “ground twin to mirror the on-orbit payload which is currently in production.”
  • Validation tests were successful in demonstrating “quantum entanglement for each of the two sources in this subassembly.”
  • Its demonstration “will enable us to entangle previously unconnected nodes, a foundational breakthrough for building secure, scalable quantum computing and sensing networks in space.”
  • The performance of each photon source was demonstrated by its “fidelity between 0.8 and 0.9” and its ability to detect “over 2,500 matching photon pairs per second.”
  • The subassembly combines “an optical lab’s worth of capability in a compact, 15kg integrated space-capable assembly.”
  • “Validation and environmental testing are critical milestones on the path to a successful space mission.” That is why testing is so important.

Implications

This accomplishment marks a major advancement in the creation of quantum technology based in space. If four-photon entanglement swapping in space is successfully demonstrated, it may lead to:

  • Ultra-secure satellite communication networks: Quantum key distribution (QKD) over satellite could improve the security of commercial, military, and governmental communications by providing potentially unbreakable encryption across great distances.
  • Global quantum networks: One important technique for expanding the scope of quantum networks and maybe opening the door to the development of a global quantum internet is entanglement swapping.
  • Distributed quantum computing and sensing: By entangling quantum sensors and processors over long distances, new possibilities in domains such as Earth observation, fundamental physics research, and precision measurement may become possible.
  • Enhancement of preparedness for quantum technologies: This mission will contribute to the general maturity of quantum technologies by offering useful information and expertise in deploying and running intricate quantum systems in the hostile environment of space.

Considerations

A better comprehension of the accomplishment would be possible with additional information regarding the precise architectural and technological requirements of the quantum communication subassembly.

The ultimate success of the mission will depend critically on the system’s long-term operating stability and performance in space.

You can also read Rigetti Computing News: Quantum Computing’s Future Unfolds

Conclusion

An important step forward in the quest for space-based quantum capabilities has been made with HRL Laboratories‘ successful construction and validation of the quantum communication subassembly for the Boeing Q4S mission. A strong approach to this innovative undertaking is demonstrated by the achievement of key performance objectives and the application of a ground twin strategy. Four-photon entanglement swapping in space has the potential to transform secure communications and pave the way for the creation of dispersed quantum applications and future global quantum networks.

Entrust Cryptographic Security Platform Combats Cyberattacks

Entrust Cryptographic Security Platform

To safeguard the foundations of data security, Entrust has announced the industry’s first unified cryptographic security platform.

The Entrust Cryptographic Security Platform, the first comprehensive, end-to-end cryptographic security management solution for keys, secrets, and certificates in the industry, was unveiled today by Entrust, the world leader in identity-centric security solutions.

The scope and sophistication of cyberattacks on identity and data security systems are rapidly increasing. In digital-first environments, every connected device, application, and system is vulnerable without a strong cryptographic foundation, and traditional methods of protecting data and identities are falling short. Furthermore, the dispersed tools used to manage cryptographic sprawl (encryption keys, secrets, and certificates) have made it practically impossible to manage cryptography at enterprise scale with confidence.

In order to overcome this difficulty, the Entrust Cryptographic Security Platform offers complete visibility and control over the whole cryptographic estate, which includes networks, endpoints, apps, and both public and private cloud environments. Now, security, IT, and DevOps can have the centralised inventory and visibility to manage more complex operations and get ready for the transition to post-quantum cryptography, as well as the control and agility they need to streamline the deployment of cryptographic solutions.

For the first time, development organisations, IT departments, and security professionals can control every facet of cryptographic security from a single platform. Secured with Entrust nShield and third-party hardware security modules (HSMs), the Entrust Cryptographic Security Platform combines industry-leading capabilities to deliver unified compliance management, PKI deployment and operation, and lifecycle management for keys, secrets, and certificates. It is also interoperable with leading security, identity, and IT management systems through extensive integrations, offering unparalleled protection.

“Siloed cybersecurity tools are no longer sufficient in a world where AI-enhanced attacks increasingly target keys, secrets, and certificates. We are in the middle of a multi-year transition to quantum-secure cryptography, and we are witnessing a boom in the amount of data and devices that require cryptographic security. As the cornerstone of data and identity security, it is evident that every organisation needs to give cryptographic estate management more attention,” stated Bhagwat Swaroop, President of Digital Security at Entrust. “With the new Cryptographic Security Platform, Entrust and its partners are helping organisations safeguard their cryptographic foundations.”

“Cryptographic management must keep up with the impending ‘Q-Day,’ when quantum computers will be able to swiftly crack conventional encryption,” said Jennifer Glenn, IDC Research Director for Information and Data Security. “Companies must have full cryptographic estate monitoring and observability while also maintaining flexibility to ensure they’re keeping up with the technology landscape. Organisations are seeking a comprehensive, long-term solution that will adapt to the future of security.”

Customers may take charge and lessen the chance of disruption throughout these extensive and intricate transformations with the help of the Entrust Cryptographic Security Platform, which offers:

  • Enterprise-Wide Visibility: From a single dashboard, keep an eye on cryptographic assets, audit modifications, and get notifications for improved security supervision.
  • Cryptographic Risk Management: Secure keys, secrets, and certificates across dispersed departments, teams, and divisions; enforce policy; and automatically evaluate the cryptographic risk posture.
  • Scalable Architecture: Provide on-premises and managed service choices for high-performance, future-proof cryptographic solutions that adhere to the most recent standards.
  • Interoperable: Allow for customisation via open APIs and broad integrations with leading identity, security, and IT management systems.

The Entrust Cryptographic Security Platform will be available in May 2025.

About Entrust

Entrust is a pioneer in identity-centric security solutions, offering a comprehensive platform of scalable, AI-enabled security products. Entrust enables organisations to defend their operations, evolve without compromise, and safeguard their interactions in a connected world, so they can adapt their businesses with confidence. Entrust collaborates with a global network of partners and serves customers in more than 150 countries.

News Source

What Are Martech Solutions and Generative AI in Marketing?


What are martech solutions?

Martech refers to the software marketers use to maximize their efforts and accomplish their goals. It applies technology to create, implement, and evaluate campaigns and other marketing activities; in short, it makes marketers’ work easier. A collection of such marketing technologies is called a martech stack, and these products are frequently used in omnichannel, multi-touchpoint environments to streamline marketing processes.

Marketing is not just about big ideas; it is about the code that brings them to life. As demand for customized advertising and customer experiences grows, developers are essential in turning creative concepts into scalable, measurable solutions. To help close the gap between engineering and marketing, Google has launched a variety of open-source martech solutions powered by generative AI, which can be used for Google campaigns and beyond.

These three cutting-edge tools help developers create and manage marketing materials more efficiently, whether they are converting video into new formats, producing and managing photos in large quantities, or producing high-quality written copy for advertisements.

ViGenAiR

Use Gen AI to rework video commercials for a broader audience

Video ads on YouTube and social media are among the best ways to reach consumers and raise awareness. However, creating variants for different audiences and platforms is expensive and time-consuming.

Using multimodal generative AI models on Google Cloud, ViGenAiR automatically converts long-form video ads into shorter, format-specific versions while extracting key data to target different demographics. Create video, image, and text assets to support Demand Gen and YouTube video campaigns by selecting from the AI’s suggested variants or by taking complete creative control through manual editing.

ViGenAiR will provide you with:

  • Variety of content: Add more vertical and square videos to your collection, along with Demand Gen text and image components.
  • Customization: Connect with and engage the right audience using personalized videos and storylines.
  • Quality: Produce videos that follow YouTube’s ABCDs (Attention, Branding, Connection, Direction) and automatically adjust alignment for square and vertical screens.
  • Efficiency: Quickly create fresh variants and cut video production time and cost.

How ViGenAiR uses gen AI video editing for ads

ViGenAiR uses Gemini on Vertex AI to understand a video’s storyline before splitting it into distinct audio and video segments. The video won’t be cut mid-scene or mid-dialogue: ViGenAiR combines semantically and contextually linked segments using the information it extracts from spoken dialogue, visual shots, on-screen text, and background music. These coherent A/V segments are the foundation for both gen AI and user-driven recombination.

ViGenAiR uses Gemini on Vertex AI to understand a video's plot before separating it into audio and video
Image credit to Google for developers
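The segment-merging idea can be illustrated with a small sketch: contiguous shots that share a scene, or whose dialogue continues across the cut, are merged so no cut lands mid-scene. The data structure and fields below are hypothetical, not ViGenAiR’s actual implementation:

```python
# Hypothetical sketch: merge contiguous video shots into coherent A/V
# segments so that no cut falls mid-scene or mid-dialogue. The shot
# fields below are invented for illustration.

def merge_shots(shots):
    """Merge adjacent shots that belong to the same scene or whose
    dialogue continues across the cut boundary."""
    segments = []
    for shot in shots:
        if segments and (
            shot["scene"] == segments[-1]["scene"]   # same visual scene
            or segments[-1]["dialogue_continues"]     # speech spans the cut
        ):
            seg = segments[-1]
            seg["end"] = shot["end"]                  # extend the segment
            seg["scene"] = shot["scene"]
            seg["dialogue_continues"] = shot["dialogue_continues"]
        else:
            segments.append(dict(shot))               # start a new segment
    return segments

shots = [
    {"scene": 1, "start": 0.0, "end": 2.5, "dialogue_continues": True},
    {"scene": 2, "start": 2.5, "end": 4.0, "dialogue_continues": False},  # dialogue spans cut
    {"scene": 2, "start": 4.0, "end": 7.0, "dialogue_continues": False},  # same scene
    {"scene": 3, "start": 7.0, "end": 9.0, "dialogue_continues": False},
]
print(len(merge_shots(shots)))  # 2 coherent segments
```

Shorter renditions can then be recombined from whole segments, which is why the resulting cuts feel intentional rather than arbitrary.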

Adios

Manage and generate personalized advertising with AI

Choosing the right visuals for each ad group is crucial, but managing thousands or even millions of images can quickly create operational bottlenecks for marketers.

Adios, an open-source program powered by Gemini, makes it simple for marketers to upload and manage image assets for thousands of ad groups. No image? No problem. Adios uses the Imagen model on Google Cloud’s Vertex AI platform to generate customized, high-quality images suited to the context of each ad group, improving the look and effectiveness of your campaigns.

Adios benefits your marketing departments:

  • Generate at scale: Integrate nearly any gen AI API with minimal code changes and produce millions of images customized to your ad groups.
  • Automate asset management: Upload and manage image assets in Google Ads, whether or not they were created with Adios.
  • Review generated images: Manually verify generated images before they go live to ensure the best possible quality.
  • Run A/B experiments: Create Google Ads experiments to compare the performance of new and existing image assets.

Adios for AI-driven content creation

Adios’s most recent version provides more configuration flexibility, allowing you to quickly change the GCP region, AI models, and other settings without touching the code. Recent enhancements also improve the stability and reliability of gen AI API interactions, with automatic retries of failed requests for a smoother, more dependable experience. The tool uses version 17 of the Google Ads API and Gemini 1.5 Flash for text-to-image prompt generation.
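Automatic retrying of failed requests typically follows an exponential-backoff pattern. A minimal, generic sketch of that pattern (not Adios’s actual code; the function names and limits here are illustrative):

```python
import time

def with_retries(call, max_attempts=3, base_delay=1.0):
    """Retry a flaky API call with exponential backoff.

    `call` is any zero-argument function; transient failures are
    assumed to surface as exceptions (an illustrative assumption).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...

# Example: a hypothetical generation call that fails twice, then succeeds.
attempts = {"n": 0}

def flaky_generate():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient API error")
    return "image-asset-id-123"

result = with_retries(flaky_generate, max_attempts=3, base_delay=0.01)
```

The backoff doubles the wait between attempts so a briefly overloaded endpoint is not hammered with immediate retries.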

With Adios' latest configuration flexibility, you may change AI models, GCP region, and other options without modifying code
Image credit to Google for developers

Copycat

Generate on-brand ad copy for Google Search campaigns

Search engine marketing puts your brand in front of customers at the moment they are searching for a product or service. Writing Search ads, however, takes time, and current techniques often produce generic ad copy that lacks a business’s distinctive tone and style.

A Python program called Copycat uses Gemini models to analyze your best-performing ads and brand guidelines. After learning your distinctive voice and writing style, it generates high-quality, consistent ad copy for fresh keywords. Whether you need to create new responsive search ads and text ads or modify existing ones, Copycat provides:

  • Efficiency: Produce effective ad copy quickly across a range of campaigns, saving time and money.
  • Quality: Generate ads that reflect the distinctive style of your brand.
  • Scalability: Reach a wider audience with Google Ads without sacrificing brand coherence or quality.

How Copycat uses AI for ad copywriting

Copycat is trained on high-performing Search ads from your Google Ads account. To ensure variety and reduce redundancy, it condenses the training ads into a smaller set of “exemplar ads” using Affinity Propagation clustering. Gemini then uses the exemplar ads to create a style guide, to which you can add your own rules. Copycat combines your keywords, your instructions, and the style guide into a prompt for Gemini that generates the new ad copy. Copycat can also fill in the gaps in existing ads when some headlines or descriptions already exist.
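Affinity Propagation is a natural fit here because it chooses the number of clusters automatically and returns actual data points (“exemplars”) as cluster centers. A from-scratch sketch of the algorithm on toy 2-D vectors standing in for ad-copy embeddings (this is an illustration of the technique, not Copycat’s implementation; the ads and values are invented):

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Minimal Affinity Propagation; returns indices of exemplars.

    S is an n x n similarity matrix whose diagonal holds the
    'preference' of each point to be an exemplar.
    """
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i, k)
    A = np.zeros((n, n))  # availabilities a(i, k)
    rows = np.arange(n)
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        AS = A + S
        idx = AS.argmax(axis=1)
        first = AS[rows, idx].copy()
        AS[rows, idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[rows, idx] = S[rows, idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        Rp[rows, rows] = R[rows, rows]
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = Anew[rows, rows].copy()
        Anew = np.minimum(Anew, 0)
        Anew[rows, rows] = diag  # a(k,k) = sum_{i' != k} max(0, r(i',k))
        A = damping * A + (1 - damping) * Anew
    return np.where(np.diag(R + A) > 0)[0]

# Toy embeddings: two obvious topic groups (shoes vs. flights).
ads = ["Buy running shoes", "Shop running shoes", "Cheap flight deals",
       "Discount flights today", "Running shoes sale"]
X = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9], [0.95, 0.15]])

# Similarity = negative squared distance; preference = median similarity.
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
np.fill_diagonal(S, np.median(S[~np.eye(len(X), dtype=bool)]))

exemplars = [ads[i] for i in affinity_propagation(S)]
```

The exemplars are real ads from the input, which is exactly what makes them usable as few-shot examples for the style guide.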

Copycat learns from high-quality Search ads in your Google Ads account.
Image credit to Google for developers

CNSA 2.0 Algorithms: OpenSSL 3.5’s Q-Safe Group Selection

CNSA 2.0 Algorithms

In accordance with the NSA’s CNSA 2.0 recommendations, OpenSSL version 3.5 adds enhancements to TLS (Transport Layer Security) 1.3 that prioritize quantum-safe cryptographic algorithms. With these additions, clients can express a preference for Q-safe algorithms, and servers can select them preferentially during the TLS handshake.

OpenSSL accomplishes this with new configuration conventions rather than changes to the TLS standard: a delimiter that lets servers rank algorithms by preference, and a prefix character that lets clients indicate which algorithms to generate key shares for.

These modifications aim to enable a seamless transition to post-quantum cryptography while upholding the “prefer” requirement for Q-safe algorithms, guaranteeing backward compatibility, and avoiding unnecessary network round trips. OpenSSL is the first major TLS library to fully implement the CNSA 2.0 preference, and its long-term support status means this version is expected to be widely adopted.

Q Safe

The Need for Quantum-Safe Cryptography and the Imminent Danger of Quantum Computers

The primary force behind this work is the potential for quantum computers to crack asymmetric cryptography schemes that are currently in use.


  • “Future quantum computers will be able to break the asymmetric cryptographic algorithms widely in use online today.”
  • To preserve the security of online communication, this calls for a switch to quantum-safe (or Q-safe) cryptographic techniques.

The CNSA 2.0 mandate of the NSA as a Major Initiator

The Commercial National Security Algorithm Suite 2.0 (CNSA 2.0), released by the US National Security Agency (NSA), lists the approved quantum-safe algorithms along with a schedule for their adoption. For TLS, the approved methods are ML-KEM (FIPS-203) for key agreement and ML-DSA (FIPS-204) or SPHINCS+ (FIPS-205) for certificates.

According to the CNSA 2.0 mandate, systems must be set up to “prefer CNSA 2.0 algorithms” during the initial transition period and to “accept only CNSA 2.0 algorithms” as products advance. The goal of this two-phase strategy is a smooth and gradual transition.

The TLS Standard’s “Preference” Implementation Challenge

The TLS standard (RFC 8446) does not mandate a particular preference mechanism: clients and servers are free to choose among post-quantum cryptographic algorithms. As the developers note: “Such a choice is not required by the TLS standard. The TLS standard gives both the client and the server a great deal of flexibility in selecting their preferred encryption methods.”

Therefore, a method is needed to configure TLS connections to prefer CNSA 2.0 algorithms, which means finding a way to implement the idea of favoring Q-safe algorithms without changing the TLS protocol itself.

The Solution: OpenSSL v3.5 Improving Configuration Features

Since changing the TLS standard itself was not an option, the developers concentrated on improving OpenSSL’s configuration capabilities. The goal was to let applications that use OpenSSL (such as cURL, HAProxy, and Nginx) take advantage of the new preference options without modifying their code.


Client-Side Solution: Using a Prefix Character to Indicate Preference

With OpenSSL v3.5, clients can express their preference for Q-safe algorithms by placing a special prefix character (‘*’) before an algorithm name in the colon-separated list of acceptable algorithms. For instance, “*ML-KEM-1024:ML-KEM-512:*x25519” instructs the client to generate and send key shares for ML-KEM-1024 and x25519 in the ClientHello message, while advertising support for all three listed algorithms.

To avoid network overload from the larger size of Q-safe key shares, a client sends at most four key shares by default (a limit that can be changed with a build option). This budget is intended to cover a fully Q-safe algorithm, a hybrid algorithm, a legacy algorithm, and a spare.

If no ‘*’ prefix is specified, a single key share is automatically added for the first algorithm in the list to preserve backward compatibility.
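The client-side rule above can be sketched as a small parser (an illustration of the documented behavior, not OpenSSL source code; the function name is ours):

```python
def client_key_share_plan(spec, max_shares=4):
    """Interpret a colon-separated groups string with '*' prefixes.

    Returns (supported, key_shares): all advertised algorithms, and the
    subset for which key shares are generated. Illustrative sketch only.
    """
    entries = spec.split(":")
    names = [e.lstrip("*") for e in entries]
    starred = [e.lstrip("*") for e in entries if e.startswith("*")]
    if not starred:                        # backward-compatible default:
        starred = names[:1]                # one share for the first algorithm
    return names, starred[:max_shares]     # cap at the default limit of four

supported, shares = client_key_share_plan("*ML-KEM-1024:ML-KEM-512:*x25519")
# supported -> ['ML-KEM-1024', 'ML-KEM-512', 'x25519']
# shares    -> ['ML-KEM-1024', 'x25519']
```

With no ‘*’ anywhere in the string, the parser falls back to a single share for the first algorithm, mirroring the backward-compatibility rule.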

Server-Side Solution: Establishing Preference Hierarchy Algorithm Tuples

The server-side approach compensates for TLS’s lack of a native “preference” mechanism by introducing a new way to define the server’s preferred algorithm order: tuples, delimited by the ‘/’ character, within the colon-separated list of algorithms.

This enables the server to choose algorithms using a three-level priority system:

  • First priority: tuples are processed from left to right.
  • Second priority: within a tuple, overlap with the key shares provided by the client.
  • Third priority: within a tuple, overlap with other client-supported algorithms (those without key shares).

For example, “ML-KEM-768 / X25519MLKEM768:x25519 / SecP256r1MLKEM768” defines three tuples. The server first prioritizes algorithms in earlier tuples, then, within a tuple, considers the availability of client key shares, and finally general client support.

Even when a legacy algorithm arrives with a readily available key share, this technique guarantees that the server favors Q-safe algorithms, in spite of the possibility of a HelloRetryRequest (HRR) penalty: “However, the prefer mandate of CNSA 2.0 enforces higher priority to the use of Q-safe algorithms, even if that comes at the cost of a round-trip penalty which is completely achieved using the new specification syntax.”
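The three-level selection can be sketched as a short function (again an illustration of the documented rules, not OpenSSL source code):

```python
def server_select(server_spec, client_supported, client_key_shares):
    """Pick a group per the tuple syntax: '/' separates tuples,
    ':' separates members within a tuple. Illustrative sketch only."""
    tuples = [t.strip().split(":") for t in server_spec.split("/")]
    for tup in tuples:                      # 1st: earlier tuples win outright
        members = [m.strip() for m in tup]
        for m in members:                   # 2nd: prefer a client key share
            if m in client_key_shares:
                return m, "no HelloRetryRequest needed"
        for m in members:                   # 3rd: any client-supported group
            if m in client_supported:
                return m, "HelloRetryRequest for a key share"
    return None, "no common group"

# The client supports all three groups but only sent a legacy x25519 share:
group, note = server_select(
    "ML-KEM-768 / X25519MLKEM768:x25519 / SecP256r1MLKEM768",
    client_supported={"ML-KEM-768", "X25519MLKEM768", "x25519"},
    client_key_shares={"x25519"},
)
# group -> 'ML-KEM-768': the Q-safe group wins even though it costs an
# extra round trip (HRR) to fetch a key share for it.
```

Swapping the tuple order would instead let the ready x25519 key share win, which is exactly the trade-off the CNSA 2.0 “prefer” mandate rules out.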

Preserving Backward Compatibility and Reducing the Effect on Current Systems

Ensuring complete backward compatibility to facilitate a seamless transition was a key design element. Existing applications can use the new configuration syntax without needing to change their code. To prevent affecting other features, the OpenSSL codebase modifications were meticulously applied in “a few pinpointed locations” throughout the vast codebase.

Extra Considerations for Implementation

Additional refinements include a ‘?’ prefix for ignoring unknown algorithm names, handling of pseudo-algorithm names such as “DEFAULT,” and support for using the same specification string on both the client and server sides (the client ignores server-specific delimiters, and the server ignores client-specific prefixes).

OpenSSL v3.5’s Collaborative Development and Significance

The development process involved extensive consultation and collaboration with the OpenSSL maintainer team and other specialists, interactions the developers describe as “excellent.”

OpenSSL v3.5 is a noteworthy accomplishment, described as “the first TLS library to fully adhere to the CNSA 2.0 mandate to prefer Q-safe algorithms.” Because of its Long-Term Support (LTS) status, it is expected to be broadly adopted by Linux distributions, making these new quantum-safe communication features widely available.

Conclusion

In order to protect online communication from the potential threat of quantum computers, OpenSSL v3.5’s incorporation of the Q-safe algorithm preference is essential. The developers have met the NSA’s CNSA 2.0 mandate by ingeniously expanding OpenSSL’s current configuration facilities without necessitating major code changes in OpenSSL-reliant apps or changes to the TLS standard itself.


A progressive shift to a more secure digital future is made possible by the client-side prefix and server-side tuple-based preference systems, which offer a workable and backward-compatible method of giving quantum-resistant cryptography priority. OpenSSL v3.5’s LTS status guarantees its broad use, increasing accessibility to quantum-safe communication on a variety of systems.

FAQs

What is Quantum Safe?

“Quantum safe” describes security procedures and encryption that are made to resist attacks from both classical and quantum computers. It entails creating and implementing cryptographic algorithms that are impervious to the possible dangers presented by potent quantum computers.

Bringing Amuse 3.0 to Life with AMD Ryzen AI + Radeon GPUs


Amuse 3.0 with AMD

Generative AI is now faster and better on AMD Ryzen AI CPUs and AMD Radeon graphics cards. Through an engineering partnership with Stability AI, AMD is offering AMD-optimized models that accelerate inference by up to 4.3x on suitable hardware.

TensorStack.AI’s recently released Amuse 3.0 Beta lets users experience these models along with a number of other new capabilities.

  • Amuse 3.0 and the AMD-optimized models require the Adrenalin 24.30.31.05 AMD Optimized and Amuse 3.0 Preview Driver or the upcoming Adrenalin 24.5.1 mainline release.

Amuse 3.0’s new features include:

  • Introducing AMD-optimized models (developed in partnership with Stability AI)
  • Introducing high-quality Photo AI Filters
  • Introducing draft-quality Video Diffusion
  • Introducing draft-quality Video Restyle
  • Over 100 new image models and fine-tunes
Generative AI with AMD Optimized Models
Image credit to AMD
Generative AI with AMD Optimized Models

Amuse 3.0 enables AMD users with appropriate hardware to run cutting-edge models such as Black Forest Labs’ FLUX.1 models and Stability AI’s Stable Diffusion 3.5 family. This release also includes 100 of the most popular custom image models and fine-tunes, giving users a one-stop shop for their creative work.

With the AMD XDNA Super Resolution capability, users of AMD Ryzen AI systems with an AMD XDNA NPU can quickly and easily produce 4MP print-quality images in minutes.

Suggested system specifications

Models | AMD Radeon Graphics Cards | AMD Ryzen AI Processors
SD 1.5 and LCM | Most AMD Radeon graphics cards with at least 6GB of VRAM | Most AMD laptops with 16GB RAM
SDXL, SDXL Lightning, SD3 Medium, SD 3.5 Medium | AMD Radeon RX 9070 XT, Radeon 7900 XTX, Radeon 7900 XT, Radeon 7900 GRE, Radeon 7800 XT | AMD Ryzen AI 300 series with 32GB RAM, Ryzen AI MAX+ 395 series with 32GB RAM (VGM = 16GB)
FLUX.1 Schnell, SD 3.5 Large Turbo | AMD Radeon RX 9070 XT, Radeon 7900 XTX, Radeon 7900 XT, Radeon 7900 GRE, Radeon 7800 XT | AMD Ryzen AI MAX+ 395 series with 64GB or 128GB RAM (VGM = 48GB)
SD 3.5 Large, FLUX.1 Dev | AMD Radeon PRO W7900 48GB, Radeon PRO W7800 48GB |

Thanks to Amuse’s inclusion of AMD-optimized models, users can see inference speedups of up to 4.3x, depending on hardware. On the AMD Radeon RX 9070 XT, users can expect up to 3.1x faster performance in SDXL 1.0 and up to 4.3x faster image generation in Stable Diffusion 1.5. With the AMD-optimized versions of the SD 3.5 family, SD 3.5 Large can run up to 3.3x faster on the Radeon RX 9070 XT.

AMD Radeon RX 9070 XT: Generate images up to 4.3X faster
Image credit to AMD

Depending on the model selected, users with the AMD Ryzen AI 9 HX 370 will get speedups of 5% to 50% and be able to run larger models than ever before; SD 3.5 Large Turbo, for instance, can only be run with the AMD-optimized models. With AMD Ryzen AI MAX+ 395 (codename: Strix Halo) series CPUs, AI enthusiasts will get up to a 3.3x speedup in the SD 3.5 family of models and up to 70% and 40% faster performance in Stable Diffusion 1.5 and SDXL 1.0, respectively.

AMD Ryzen AI: Generate images up to 3.3X faster
Image credit to AMD

On most contemporary AMD graphics cards and processors, AMD-optimized models will of course provide inference speedups, though results will vary with the precise configuration, architecture, family, and model selected.

However, Amuse 3.0 isn’t only about performance. We’ve heard you enjoy high-end AI photo filters. The latest version of Amuse delivers beautiful Photo AI effects that run entirely locally by combining top-notch SDXL Lightning-based models with ControlNets. Don’t want to upload your photo to a cloud service but still want to follow the latest AI image trend? Apply Photo AI filters on your device, in a private, local environment.

This time, it isn’t just images getting all the attention: video diffusion capability is now officially available in Amuse 3.0. You can restyle your videos entirely locally using the draft-quality video-to-video model, or generate draft-quality clips from text.

QuantX Labs to Launch Optical Atomic Clock into Space

QuantX Labs

QuantX Labs will send innovative optical atomic clock technology into orbit.

QuantX Labs, a global leader in quantum sensing and precision timing technologies, is preparing to launch its cutting-edge equipment into space, a significant milestone for Australian space technology. In collaboration with the French space logistics firm Exotrail, QuantX plans to fly TEMPO, a crucial element of its atomic clock technology, on the spacevan vehicle, which will set out on a SpaceX mission as early as December 2025.

With the help of a $3.7 million grant from the Moon to Mars program of the Australian Space Agency, QuantX Labs will introduce a crucial component of their next-generation optical atomic clock. The Agency’s strong focus, vision, and faith in the Australian space industry are demonstrated by this investment, which will develop sovereign capabilities and establish Australia as a leader in space-based precise timing and navigation.

Beyond high-performance timing, this crucial subsystem, known as an optical frequency comb, is a state-of-the-art instrument that enables a wide range of space applications, including synchronized Earth observation, navigation, deep-space communications, and precise positioning. Since their invention at the turn of the century, optical frequency combs have attracted widespread attention, and their development was recognized with the 2005 Nobel Prize in Physics.
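The timing power of a frequency comb rests on a simple relation: every comb line’s optical frequency is fixed by two microwave-domain quantities, the repetition rate f_rep and the carrier-envelope offset f_ceo, via f_n = f_ceo + n·f_rep. A small numeric sketch (the values are illustrative, not TEMPO’s specifications):

```python
def comb_line_frequency(n, f_rep, f_ceo):
    """Optical frequency of comb line n: f_n = f_ceo + n * f_rep."""
    return f_ceo + n * f_rep

# Illustrative values: 250 MHz repetition rate, 20 MHz offset frequency.
f_rep = 250e6
f_ceo = 20e6
n = 1_200_000                      # comb line index
f_n = comb_line_frequency(n, f_rep, f_ceo)
# f_n -> 3.0000002e14 Hz, i.e. about 300 THz (near-infrared light),
# pinned to radio frequencies that electronics can count directly.
```

This is why a comb acts as a gear train between an optical clock’s terahertz-scale “tick” and the megahertz signals ordinary electronics can measure.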

QuantX’s demonstration will put an optical frequency comb into orbit for the first time. Its comb has already passed stringent environmental testing demonstrating it can withstand harsh launch conditions: intense acceleration and vibration, the extreme temperature fluctuations typical of space, vacuum exposure, and radiation levels higher than those anticipated over the course of the mission.

High-precision timing in space is already an everyday resource: satellite navigation systems (GPS and similar networks) rely on it in our cars and phones daily. This first deployment of TEMPO’s highly accurate timing technology aims to lay the groundwork for a sovereign Australian alternative to GPS and comparable networks.

Professor Andre Luiten, Managing Director of QuantX Labs, emphasised the importance of the impending launch: “This launch is the result of innumerable hours of labour by our engineers and physicists, and it also marks a breakthrough for our TEMPO technology. We have achieved this result at a significantly lower cost and in a significantly shorter amount of time than is customary. We are immensely proud of our accomplishments and thrilled to see Australia become a leader in precision timing based in space.”

To further test and integrate the module onboard spacevan, a team from QuantX Labs will visit France this month to work with Exotrail. These procedures will be carried out at Exotrail’s headquarters just south of Paris before the vehicle carrying the QuantX payload is shipped to the launch site in the United States. This one-year in-space mission will be Exotrail’s second, following a successful demonstration flight at the end of 2023 that continues to serve customers in orbit.


Dr. Sebastian Ng, PNT Program Lead at QuantX Labs, highlighted the significance of the launch: “The Frequency Comb launch is a critical milestone for the KAIROS mission, as the enabling technology for LEO optical clocks.” If it deploys successfully, it will yield important information on the path to the complete clock payload. This technology enables next-generation positioning, navigation, and timing capabilities, and the team will continue to develop and integrate the entire TEMPO system to ensure it is ready for upcoming space missions.


For QuantX Labs and Australia’s expanding space industry, the Frequency Comb’s successful launch will be a historic accomplishment. As the KAIROS mission approaches launch, the partnership with Exotrail demonstrates the power of global collaboration in developing cutting-edge technologies. With Exotrail’s track record of successful missions and demonstrated proficiency in in-space mobility solutions, this launch will not only showcase TEMPO’s capabilities but also firmly establish Australia as a pioneer in space-based precision timing.


NVIDIA Llama Nemotron Ultra Reinvents Open AI Performance


AI is no longer only about producing text or images; it now encompasses deep reasoning, intricate problem-solving, and powerful adaptability for practical applications in business, finance, customer service, and healthcare.

The most recent Llama Nemotron Ultra reasoning model from NVIDIA is now available. It delivers the highest accuracy among open-source models across intelligence and coding benchmarks while increasing compute efficiency. The model, weights, and training data are available on Hugging Face, ready for AI workflow automation, research assistants, and coding copilots.

NVIDIA Llama Nemotron Ultra excels at math and science coding

Llama Nemotron Ultra is redefining what AI can achieve on scientific reasoning, coding, and math benchmarks. Post-trained for complex reasoning, human-aligned chat, RAG, and tool use, the model is built for real-world industry demands, from copilots and knowledge assistants to automated workflows, with the depth and adaptability needed for high-impact AI.

Llama Nemotron Ultra improves upon Llama 3.1 by utilizing both synthetic and commercial data, along with sophisticated training methods. Developed for agentic processes, it provides affordable, high-performance AI with robust reasoning capabilities. NVIDIA has made two excellent training datasets used in post-training publicly available to facilitate the wider development of reasoning models.

These resources give the community a head start in creating models that are both cost-effective and high-performing. Their efficacy was demonstrated by the NVIDIA team, which recently won first place in the Kaggle AI Mathematical Olympiad, a competitive reasoning benchmark. The same data, techniques, and insights were then applied to Llama Nemotron Ultra. The following sections examine three of these benchmarks in detail.

GPQA Diamond benchmark

As seen in Figures 1, 2, and 3, the Llama Nemotron Ultra reasoning model outperforms other open models on a scientific reasoning benchmark. The GPQA Diamond benchmark consists of 198 carefully constructed questions in biology, physics, and chemistry, written by PhD-level experts.

These graduate-level questions require deep understanding and multistep reasoning that goes far beyond simple recall or superficial inference. With an accuracy of 76%, Llama Nemotron Ultra has set a new benchmark and established itself as the top open model in scientific reasoning; humans with PhDs typically achieve about 65% accuracy on this difficult subset. The result appears on both the Vellum and Artificial Analysis leaderboards.

The Artificial Analysis – GPQA benchmark for evaluating scientific reasoning
Image credit to NVIDIA
Leading models' accuracy scores on the Artificial Analysis – GPQA standard for scientific reasoning
Image credit to NVIDIA
Leading models' Vellum – GPQA scientific reasoning accuracy scores
Image credit to NVIDIA

LiveCodeBench benchmark

As shown in Figures 4 and 5, Llama Nemotron Ultra has also delivered exceptional performance on LiveCodeBench, a reliable benchmark designed to evaluate real-world coding skills, in addition to excelling on advanced science benchmarks. LiveCodeBench covers broad coding activities including code generation, debugging, self-repair, test output prediction, and execution.

In LiveCodeBench, every issue is date-stamped to guarantee impartial, out-of-distribution assessment. It checks true generalization by prioritizing real problem-solving over code output. The leaderboards for GitHub LiveCodeBench and Artificial Analysis both display this outcome.

Leading open-weight models' accuracy results on the Artificial Analysis – LiveCodeBench benchmark for coding skills
Image credit to NVIDIA
Leading model accuracy results on the Artificial Analysis – LiveCodeBench benchmark for coding abilities
Image credit to NVIDIA

AIME benchmark

In the AIME benchmark, which is frequently used to assess mathematical reasoning skills, Llama Nemotron Ultra outperforms other open models. View the LLM leaderboard in real time.

Leading models' Vellum – AIME math accuracy scores
Image credit to NVIDIA

Open datasets and tools 

Llama Nemotron’s open design philosophy is among its most important accomplishments. NVIDIA AI released the model itself along with two key, commercially viable datasets that shaped its reasoning abilities, and those datasets are now trending at the top of Hugging Face Datasets.

Over 735K Python samples from 28K distinct problems from well-known competitive programming platforms make up the OpenCodeReasoning Dataset. This dataset, which was created especially for supervised fine-tuning (SFT), allows enterprise developers to include sophisticated reasoning skills into their models. Organizations may improve the ability of AI systems to solve problems by utilizing OpenCodeReasoning, which will result in more intelligent and resilient code solutions.
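Supervised fine-tuning on a dataset like this boils down to rendering each (problem, reasoned solution) pair into a single training text. A minimal sketch of that preprocessing step, using a toy record and a generic chat-style template (the field names and template markers are illustrative, not the dataset’s actual schema):

```python
def to_sft_text(problem, solution, system="You are a careful Python programmer."):
    """Render one (problem, solution) pair into a single SFT training string
    using a generic chat-style template (illustrative, not a specific model's)."""
    return (
        f"<|system|>\n{system}\n"
        f"<|user|>\n{problem}\n"
        f"<|assistant|>\n{solution}"
    )

# Toy record standing in for a competitive-programming-style sample.
record = {
    "problem": "Given a list of ints, return the sum of the even ones.",
    "solution": "def sum_even(xs):\n    return sum(x for x in xs if x % 2 == 0)",
}
text = to_sft_text(record["problem"], record["solution"])
```

During SFT, the loss is typically computed only on the assistant portion, so the model learns to produce the reasoning and code rather than to parrot the prompt.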

The Llama-Nemotron-Post-Training Dataset was generated synthetically using open and publicly accessible models, including the DeepSeek-R1 models, the Nemotron family, the Qwen family, and Llama. Designed to improve a model’s performance on key reasoning tasks, it is ideal for enhancing general reasoning, math, coding, and instruction-following skills. It is a useful resource for optimizing models to understand and respond to intricate, multi-step instructions, helping developers create AI systems that are more capable and coherent.

By making these datasets freely available on Hugging Face, NVIDIA hopes to democratize the training of reasoning models. With startups, research labs, and enterprises able to access the same resources as NVIDIA’s internal teams, the wider adoption of agentic AI that can reason, plan, and act on its own within complex workflows is accelerated.

Enterprise-ready features: Speed, accuracy, and flexibility

A commercially successful model, Llama Nemotron Ultra can be applied to a range of agentic AI use cases, such as task-oriented assistants, autonomous research agents, chatbots for customer support, and coding copilots. It is a great basis for real-world applications that require accuracy, flexibility, and multistep problem solving due to its outstanding performance in scientific reasoning and coding benchmarks.

In the open-reasoning model class, Llama Nemotron Ultra provides the highest throughput and the best model accuracy, and throughput (efficiency) translates directly into cost savings. A Neural Architecture Search (NAS) technique significantly lowered the model’s memory footprint while maintaining performance, enabling it to run in data center settings with larger workloads on fewer GPUs.

The model then went through a thorough post-training pipeline that included supervised fine-tuning and reinforcement learning (RL) to enhance its capabilities at both reasoning and non-reasoning tasks. With the model’s reasoning “on” and “off” toggle, businesses can enable reasoning only when necessary, lowering overhead for simpler, non-agentic tasks.
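In practice, the toggle is exposed through the system prompt. A sketch of building chat requests both ways; the exact control strings (“detailed thinking on”/“detailed thinking off”) follow NVIDIA’s Nemotron model cards, but treat them as an assumption to verify against the card for your model version:

```python
def build_messages(user_prompt, reasoning=True):
    """Assemble a chat request with the Nemotron reasoning toggle in the
    system turn. The control strings are assumptions from the model card."""
    system = "detailed thinking on" if reasoning else "detailed thinking off"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

# Reasoning enabled for a hard math question...
hard = build_messages("Prove that sqrt(2) is irrational.", reasoning=True)
# ...and disabled for a simple lookup, saving inference overhead.
easy = build_messages("What is the capital of France?", reasoning=False)
```

Routing easy queries through the “off” path is how the overhead savings described above are realized in a deployment.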