EU Digital Services Act
As of today, 25.08.2023, the European DSA is legally in force for the designated very large online platforms and search engines; for other hosting services and online businesses (with the exception of the small and micro enterprises excluded under Art. 29 of the Regulation) it applies from 17 February 2024.

You can access the full text of the Regulation via the link below. It is available in a multitude of languages; this is the EN version:

https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32022R2065

It is impossible to explain all aspects of the DSA here. Accordingly, I will limit myself to the most relevant points regarding the very large online platforms such as X, FB, Instagram... and the very large online search engines such as Google Search and Bing.

The EU has deliberately chosen to define the term "illegal content" very broadly.

The definition of the term in the text of the Regulation itself does not hint at the EU's real intentions.

Art. 3(h) defines the term as follows:
"illegal content" means any information that, in itself or in relation to an activity, including the sale of products or the provision of services, is not in compliance with Union law or the law of any Member State which is in compliance with Union law, irrespective of the precise subject matter or nature of that law
Art. 3(k) in turn defines "dissemination to the public" as making information available, at the request of the recipient of the service who provided the information, to a potentially unlimited number of third parties.

In point 12 of the preamble, the EU says the following in this regard:
In order to achieve the objective of ensuring a safe, predictable and trustworthy online environment, the concept of "illegal content" should, for the purposes of this Regulation, broadly reflect the existing rules in the offline environment. In particular, the concept of "illegal content" should be defined broadly to cover information relating to illegal content, products, services and activities.
The real intentions of the EU are hidden in points 83, 91 and 108 of the Regulation's preamble:

Point 83:
A fourth category of risks arises from similar concerns about the design, operation or use, including through manipulation, of very large online platforms and of very large online search engines with an actual or foreseeable negative impact on the protection of public health and of minors, with serious adverse effects on a person's physical and mental well-being, or on gender-based violence. Such risks may also arise from coordinated disinformation campaigns related to public health, or from online interface design that may stimulate behavioural addictions in service recipients.
Point 91:
In times of crisis, providers of very large online platforms may need to take certain specific measures as a matter of urgency, in addition to the measures they take in light of their other obligations under this Regulation. In this context, a crisis should be considered to occur in exceptional circumstances that may lead to a serious threat to public security or public health in the Union or significant parts of the Union. Such crises may result from armed conflicts or acts of terrorism, including emerging conflicts or acts of terrorism, natural disasters such as earthquakes and hurricanes, as well as pandemics and other serious cross-border threats to public health.

The Commission, upon recommendation of the European Board for Digital Services ("the Board"), should be able to require providers of very large online platforms and providers of very large online search engines to initiate an urgent crisis response. Measures that those providers may identify and consider applying can include, for example: adapting content moderation processes and increasing the resources dedicated to content moderation, adapting terms and conditions, relevant algorithmic systems and advertising systems, further intensifying cooperation with trusted flaggers, taking awareness-raising measures, promoting trusted information and adapting the design of their online interfaces.

The necessary requirements should be provided to ensure that such measures are taken within a very short time frame, that the crisis response mechanism is used only when and to the extent strictly necessary, and that all measures taken under this mechanism are effective and proportionate, taking due account of the rights and legitimate interests of all parties concerned.
The EU may impose crisis protocols that must be complied with even without, or before, the occurrence of a crisis:

Point 108:
In addition to the crisis response mechanism for very large online platforms and very large online search engines, the Commission may initiate the drawing up of voluntary crisis protocols to coordinate a rapid, collective and cross-border response in the online environment. This may be the case, for example, when online platforms are misused for the rapid dissemination of illegal content or disinformation, or when the need arises for the rapid dissemination of reliable information. In light of the important role of very large online platforms in disseminating information in our societies and across borders, providers of such platforms should be encouraged to draw up and apply specific crisis protocols. Such crisis protocols should be activated only for a limited period of time and, in addition, the measures adopted should be limited to what is strictly necessary to address the extraordinary circumstance.
What does this mean in practice?

That for any matter the EU qualifies as a crisis (a pandemic, the climate, a war...), and even without there being any crisis, it can and will order the very large online platforms and search engines to ban any information it qualifies as illegal (read: any information that goes against a chosen narrative), to delete the posts or make them invisible, and to ban the profiles spreading such information, if they do not do so spontaneously (like Meta, for example, which has been applying the EU rules spontaneously since October 2022, even though they were not yet enforceable at the time, and which permanently shadow-bans and blocks millions of profiles every other day).

Shitposting and blabla posts that publicly ridicule certain individuals or situations remain allowed, as long as they do not touch on a specific narrative. Spreading correct information that refutes a narrative or demonstrates its mendacious nature is not allowed, because it will be labelled "illegal".

The very large online platforms and search engines must comply immediately with orders from the authorities and take all appropriate measures, under penalty of heavy fines:
Art. 9: Orders to act against illegal content

1. Upon the receipt of an order to act against one or more specific items of illegal content, issued by the relevant national judicial or administrative authorities on the basis of the applicable Union law or national law in compliance with Union law, providers of intermediary services shall inform the authority issuing the order, or any other authority specified in the order, of any effect given to the order without undue delay, specifying if and when effect was given to the order.
Moreover, if the very large online platforms and search engines become aware of, or have knowledge of, persons committing offences or publishing illegal content, they must immediately notify the competent authorities so that immediate action can be taken (blacklisting, banning on all platforms...):
Article 18: Notification of suspicions of criminal offences

1. Where a provider of hosting services becomes aware of any information giving rise to a suspicion that a criminal offence involving a threat to the life or safety of a person or persons has taken place, is taking place or is likely to take place, it shall promptly inform the law enforcement or judicial authorities of the Member State or Member States concerned of its suspicion and provide all relevant information available.
The very large online platforms and online search engines MUST suspend users' profiles for a reasonable period. What counts as a "reasonable period" is not defined.
Article 23

Measures and protection against abuse

1. Providers of online platforms shall suspend, for a reasonable period and after a prior warning, the provision of their services to service recipients who frequently provide manifestly illegal content.

2. Providers of online platforms shall suspend, for a reasonable period and after a prior warning, the processing of notices and complaints submitted through the notice and action mechanisms and internal complaint-handling systems referred to in Articles 16 and 20 by individuals or entities that frequently submit notices or complaints that are manifestly unfounded.
There is no general monitoring or active fact-finding obligation under the Regulation but, to be exempted from liability, providers must also remove illegal content as soon as they become aware of it:
To qualify for the liability exemption for hosting services, a provider that obtains actual knowledge or awareness of illegal activity or illegal content must act promptly to remove that content or make it inaccessible.
The EU provides for very strong control of very large online platforms and search engines.
Given the need to ensure verification by independent experts, providers of very large online platforms and of very large online search engines should be accountable, through independent audits, for their compliance with the obligations set out in this Regulation and, where applicable, any additional commitments they have made under codes of conduct and crisis protocols. In order to ensure that audits are carried out in a timely, effective and efficient manner, providers of very large online platforms and of very large online search engines should provide the necessary cooperation and assistance to the organisations conducting the audits, including by giving the auditor access to all relevant data and premises needed to conduct the audit properly, including, where applicable, to data related to algorithmic systems, and by answering oral or written questions. Auditors should also be able to use other objective sources of information, such as studies by vetted researchers. Providers of very large online platforms and of very large online search engines should not impede the conduct of audits. Audits should be carried out according to industry best practices and with high professional ethics and objectivity, taking due account, where applicable, of auditing standards and codes of conduct.
The EU has also provided for a snitching system.
Point 118:

For the effective enforcement of the obligations set out in this Regulation, individuals or representative organisations should be able to lodge a complaint about compliance with those obligations with the Digital Services Coordinator in the territory where they received the service, without prejudice to the provisions of this Regulation on the allocation of competences and without prejudice to the applicable rules for handling complaints in accordance with national principles of good administration.
The EU draws all powers and competences to itself, which at least has the advantage that the very large online platforms and search engines know who their point of contact is.
Point 124:

Therefore, the Commission should have exclusive powers to supervise and enforce the additional obligations to manage systemic risks imposed by this Regulation on providers of very large online platforms and of very large online search engines.
This Regulation marks the end of freedom of expression. No one will be able to freely express their opinion any longer on matters that have been, or will be, classified by the EU as illegal or harmful with regard to public health, the climate, a war or any other societal crisis. Even if there is no crisis. Contrary debate is henceforth excluded. Criticism of a policy is no longer possible. The EU decides what we can and cannot say. X carries out the orders.

Elon Musk had already told Politico on 18 July 2023 that he would comply with the DSA, and this was confirmed today by X News Daily in a single sentence. Earlier in the day, the EU spokesperson had posted a video here on X saying that the DSA has been in force since today. Musk was quick to confirm under that post: "we are working hard on it".

With that, everything is said.