Portland News

ChatGPT leaves positive impression on real estate industry

Image source: Search Engine Journal

ChatGPT: For years, artificial intelligence (AI) has been heralded as the breakthrough that would revolutionize work.

As more people turn to ChatGPT for a variety of tasks in both work and education, its writing abilities have drawn notice.

When it comes to online listings, AI has proven to be a valuable tool in the real estate industry.

The convenience

JJ Johannes, a real estate agent from Cedar Rapids, Iowa, used ChatGPT to paint a stunning picture of a property.

He typed in a few key terms, and the tool produced the description in less than five seconds.

Had Johannes written it himself, it would have taken him more than an hour.

“It saved me so much time,” he added. “It’s not perfect but it was a great starting point.”

“My background is in technology and writing something eloquent takes time. This made it so much easier.”

Since ChatGPT's release in late November, several realtors have started using it, including JJ Johannes.

Residential and commercial agents hailed the experience as transformative for their jobs and said it helped them generate listings, social media postings, and legal paperwork.

Additionally, ChatGPT has been used to automate tasks like doing intricate calculations and responding to frequently asked questions.
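The article doesn't show the agents' exact inputs, but the workflow it describes (a few key terms in, a finished description out) amounts to assembling those terms into a single prompt for a chat model. A minimal sketch in Python, with a hypothetical `listing_prompt` helper and made-up property details:

```python
def listing_prompt(features: list[str], tone: str = "warm and professional") -> str:
    """Assemble a few key terms into one instruction for an AI assistant."""
    bullet_list = "\n".join(f"- {f}" for f in features)
    return (
        "Write a short real estate listing description "
        f"in a {tone} tone, based on these features:\n{bullet_list}"
    )

# Hypothetical key terms, like those an agent might jot down
prompt = listing_prompt(["4 bed / 3 bath", "fenced backyard", "renovated kitchen"])
print(prompt)
```

The resulting string would then be sent to whichever chat model the agent uses, and the model's reply becomes the draft listing to edit.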

The AI

ChatGPT is a tool trained on vast amounts of online data, which lets it respond quickly to user queries.

So far, it has been used to produce:

  • Original essays
  • Stories
  • Song lyrics
  • Research paper abstracts

Additionally, ChatGPT creates emails and handles accounting tasks.

Questions about the AI's reliability and the potential for exam cheating arose after a user employed it to pass an exam at an Ivy League university.


ChatGPT was unveiled in November, and since then, numerous sectors have raised concerns about its possible drawbacks.

However, it has already had a substantial influence on real estate agents’ regular business operations across the country.

Many more people have said they couldn’t imagine working without ChatGPT.

Broker Andres Asion of the Miami Real Estate Group said:

“I’ve been using it for more than a month, and I can’t remember the last time something has wowed me this much.”

Asion received a message from a customer who had recently moved into a newly built property and was having trouble opening the windows.

For months, her attempts to reach the developer had gone unanswered.

Asion then ran one of her emails through the AI, asking it to rewrite the message with liability concerns in mind.

“ChatGPT wrote it as a legal issue and all of a sudden, the developer showed up at her house,” said Asion.

He has also used it to draft legal documents such as addenda, which he then sends to attorneys for review.

“I fine-tune all kinds of drafts with ChatGPT,” said the broker.

“Sometimes I’ll tell it to make it shorter or funner, and it gives you so many samples to pick and edit from.”

Read also: BuzzFeed enters partnership with OpenAI


Even though ChatGPT is free to use, its parent company OpenAI is reportedly considering a $42 monthly fee.

Andres Asion asserted that he was prepared to pay for the assistance.

“I would easily pay $100 or $200 a year for something like this,” he said. “I’d be crazy not to.”

Frank Trelles of State Street Realty in Miami said the tool has changed how he works, and he too is prepared to pay for it.

“As soon as I tried it out, I was sold,” said Trelles.

“I went to sign up for a package, thinking it would be at least $100 a month, and was blown away that it was free.”

“Nothing in this world is free though – and that made me a bit nervous.”

Trelles says he uses the AI to look up the permitted uses for specific plots of land and zoning designations in Miami-Dade County.

He also uses it to run mortgage calculations and estimate what monthly payments or return on investment would look like for a client.

“I can be in a car with a client when they ask me what their mortgage payments might be,” said Trelles.

“I can ask ChatGPT what a mortgage payment would be on a $14 million purchase at a 7.2% interest rate amortized over 25 years with two origination points at closing, and in two seconds, it gives me that information.”

“It also explains how it got the answer. It’s amazing.”
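Trelles's example can be sanity-checked with the standard fixed-rate amortization formula, M = P * r * (1+r)^n / ((1+r)^n - 1), where P is the principal, r the monthly rate, and n the number of payments. A quick sketch using the figures he quotes (note that the two origination points are an upfront closing fee and don't change the monthly payment):

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed-rate amortization: M = P * r * (1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate / 12        # monthly interest rate
    n = years * 12              # total number of monthly payments
    growth = (1 + r) ** n
    return principal * r * growth / (growth - 1)

# The numbers Trelles quotes: $14 million at 7.2%, amortized over 25 years
payment = monthly_payment(14_000_000, 0.072, 25)

# Two origination points = 2% of the principal, due at closing
points_fee = 0.02 * 14_000_000

print(f"Monthly payment: ${payment:,.2f}")
print(f"Origination points at closing: ${points_fee:,.2f}")
```

With these inputs the formula works out to roughly $100,700 a month, plus a $280,000 points fee at closing.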

Potentials and limitations

Even though ChatGPT is among the most capable technologies now in use, it still has several drawbacks.

For instance, the tool has struggled with simple math in the past.

Trelles says it is reliable only for short, rudimentary computations, and that its answers to anything more complex need to be verified.

According to Serge Reda of the Fordham Real Estate Institute, some people may be better equipped to use ChatGPT than others.

Additionally, the usage of AI has made it possible for brokers to create listings and responses more quickly.

Because lead generation and transaction closing usually necessitate a tailored approach, automation of client responses is rarely the ideal method.

“It’s accessible to everyone right now because it’s free and they can get a taste of how this powerful tool can work,” said Reda.

“But there are definitely significant limitations.”

AI uses in other forms

Realtors are focusing on ChatGPT now, although other real estate-related applications have already employed AI.

For instance, Zillow has employed AI to generate floor plans and do 3D mapping.

To confirm that price estimates reflect market conditions, Zillow's Zestimate tool has used AI to analyze photos and determine whether homes feature stainless steel appliances or hardwood flooring.

Zillow has unveiled an AI capability that enables potential buyers to conduct research using natural language.

Zillow’s Matt Kreamer claims that the real estate sector adjusts more slowly than other sectors.

But he sees a lot more changes happening.

According to Kreamer, Zillow is not overly concerned with agents adopting ChatGPT to automate processes and cut down on time.

“We aren’t promoting or wary of ChatGPT but are interested in how it’s being used and watching it,” said Kreamer.

JJ Johannes thinks AI has the ability to drastically transform many other industries, yet it’s too early to say whether the technology will become a norm in real estate.

“It may not be with ChatGPT, but I believe some form of artificial intelligence like this will become a big part of how we work and live our lives,” he said.

BuzzFeed enters partnership with OpenAI

Image source: Fortune

BuzzFeed: Most people would be happy to finally take a holiday break and get away from their jobs, but some decide to put in more time at the office.

During the holidays, Jonah Peretti, co-founder and CEO of BuzzFeed, got the chance to study more about artificial intelligence.

Peretti is one of those people who is ready to experiment with the most recent technology.

Over time, he learned how AI works and developed insights into how it may impact the media industry.

That is hardly surprising, given how regularly BuzzFeed has used AI.

Late last month, Peretti took some time to sit down in his California home and consider how the fundamental tenets of BuzzFeed may be applied to the developing field of AI writing technology.

The news

In an interview on Thursday, Jonah Peretti mentioned that he and a few coworkers had prototyped ways to leverage the technology to improve interactive articles and the site's well-liked quizzes, among other formats.

Peretti claimed to have enjoyed himself, saying, “It started to feel like we were all playing.”

The playful sessions produced many Google Docs exploring how the technology might be merged into BuzzFeed's platform.

The docs suggest Peretti may be able to extend the approach to additional formats.


Jonah Peretti officially announced on Thursday that, as a result of the experience, BuzzFeed will work with OpenAI, the company behind ChatGPT.

Through the partnership, BuzzFeed will use OpenAI's technology to help create content for its audience and integrate AI into its “core business.”

Peretti acknowledged that readers of the report might assume BuzzFeed was replacing people with robots.

He said that wasn't the purpose of the technology, though he believed other companies would adopt that strategy.

“I think there are two paths for AI in digital media,” said Peretti.

Read also: Copyright issue catches up to Midjourney as it faces lawsuit

“One path is the obvious path that a lot of people will do – but it's a depressing path – using the technology for cost savings and spamming out a bunch of SEO articles that are lower quality than what a journalist can do, but a tenth of the cost.”

“That’s one vision, but to me, that’s a depressing vision and shortsighted vision because in the long run, it’s not going to work.”

He continued, saying:

“The other path, which is the one that gets me really excited, is the new model for digital media that is more personalized, more creative, more dynamic – where really talented people who work at our company are able to use AI together and entertain and personalize more than you could ever do without AI.”

Jonah Peretti views artificial intelligence (AI) as a tool to help his staff operate more effectively, not as a replacement.

BuzzFeed quiz

The company pointed to the BuzzFeed quiz to illustrate Peretti's goal.

Today, humans write both the quiz questions and the canned responses users receive.

With AI, staff would instead write the prompts while the program generates highly personalized responses to each user's answers.

For instance, a user might complete a brief questionnaire, and the AI would use the answers to produce a short piece of rom-com writing.

“We don’t have to train the AI to be as good as the BuzzFeed writers because we have the Buzzfeed writers,” Peretti explained.

“So they can inject language, ideas, cultural currency and write them into prompts and the format.”

“And then the AI pulls it together and creates a new piece of content.”

A different path

Jonah Peretti has no interest in replacing human journalists with artificial intelligence to generate news articles.

He highlighted how disastrous the effects were for the tech publication CNET.

“There’s the CNET path, and then there is the path that BuzzFeed is focused on,” said Peretti.

“One is about costs and volume of content, and one is about ability.”

“Even if there are a lot of bad actors who try to use AI to make content farms, it won’t win in the long run,” he continued.

“I think the content farm model of AI will feel very depressing and dystopian.”

MacBook set for significant upgrades with new chips

Image source: Apple

MacBook: It was major news when Apple announced on Tuesday that buyers can now choose the M2 Pro or M2 Max chip in its new Macs.

The two chips are Apple's most powerful processors yet for the Mac and MacBooks.

The M2 chips

Performance is improved by Apple’s next-generation systems on chips (SoCs), the M2 Pro and M2 Max.

The M2 Pro builds on the M2 architecture with up to 32GB of fast unified memory, a 12-core CPU, and a 19-core GPU.

The M2 Max extends those features further with a 38-core GPU, greater unified memory bandwidth, and up to 96GB of unified memory.

The chips also include upgraded custom technology, such as a faster 16-core Neural Engine and Apple's advanced media engine.

The M2 Pro and M2 Max bring major performance and capability gains to the 14-inch and 16-inch MacBook Pro, while the Mac mini gains an M2 Pro option.

“Only Apple is building SoCs like M2 Pro and M2 Max,” said Johny Srouji, the senior vice president of Apple’s Hardware Technologies.

“They deliver incredible pro performance along with industry-leading power efficiency.”

“With an even more powerful CPU and GPU, support for a larger unified memory system, and an advanced media engine, M2 Pro and M2 Max represent astonishing advancements in Apple silicon.”

The Mac mini

The new Mac mini is more powerful and capable at an accessible price, offering faster performance, more unified memory, and advanced networking.

The M2 model supports two displays, while the M2 Pro version supports up to three.

The Mac mini pairs with the Studio Display and Magic accessories and runs macOS Ventura, which is efficient and user-friendly.

Apple’s Senior Vice President of Worldwide Marketing, Greg Joswiak, said:

“With incredible capabilities and a wide array of connectivity in its compact design, Mac mini is used in so many places, in so many different ways.”

“Bringing even more performance and a lower starting price, Mac mini with M2 is a tremendous value.”

“And for users who need powerful pro performance, Mac mini with M2 Pro is unlike any other desktop in its class.”

Additionally, Apple no longer manufactures or markets Mac minis with Intel processors.

The Mac Pro is the final device with an Intel CPU installed.

Read also: Copyright issue catches up to Midjourney as it faces lawsuit


The M2 Pro and M2 Max processors, along with the new Mac mini, were released after Luca Maestri's warning in October.

Apple's chief financial officer forecast a year-over-year revenue decline for the December quarter.

The decline may partly stem from Apple's failure to ship the new MacBooks in time for the holiday shopping season.

In October, Apple released its fiscal fourth-quarter results, with earnings per share and revenue beating Wall Street forecasts.

Even so, the tech giant fell short of sales expectations in several product categories, including the iPhone business and services.

The new Macs will be introduced in a fragile economic environment where many customers are still struggling with inflation.

The vast majority of consumers have also learned to be more careful with spending in light of the lingering recessionary concerns.

Earnings report

The end of this month will see the release of Apple’s earnings report.

A year earlier, the company had announced December quarter Mac sales of $10.85 billion, up 25% from the same period the year before.

Despite the pandemic’s impacts and issues with the supply chain, the business at the time recorded its highest quarterly sales ever.


Pre-orders for Apple's new products opened on Tuesday, ahead of the January 24 release.

The M2 chip-equipped Mac mini will cost $599, which is $100 cheaper than the prior versions.

However, the M2 Pro model starts at about $1,299.

The new 14-inch MacBook Pro will also be priced at $1,999.

Lastly, the price of the 16-inch MacBook Pro variant is around $2,499.



Copyright issue catches up to Midjourney as it faces lawsuit

Image source: Engadget

Copyright: Intellectual property rights are legal protections for creative works and innovations.

Patents, copyrights, trademarks, and trade secrets are a few examples of these rights.

Patents shield inventions and discoveries.

Copyrights protect creative works like music, art, and literature.

Trademarks protect brand names and logos, while trade secrets protect confidential company information.

These rights allow inventors and creators to control how their creations are utilized and can stop others from using them without their authorization.

Despite the significance of these intellectual property rights, Midjourney allegedly disregarded them, prompting a new lawsuit.

The news

A group of artists has filed a lawsuit in the United States District Court for the Northern District of California.

The complaint accuses DeviantArt, Midjourney, and Stability AI of violating artists' copyrights in creating AI art.

The artists are represented by Matthew Butterick, Lockridge Grindal Nauen PLLP, and the Joseph Saveri Law Firm LLP.

“As burgeoning technology continues to change every aspect of the modern world, it’s critical that we recognize and protect the rights of artists against unlawful theft and fraud,” said Joseph Saveri.

“This case represents a larger fight for preserving ownership rights for all artists and other creators.”

Read also: Lensa AI has terms and conditions that needs reading

The lawsuit

The complaint states that each of the artists involved is seeking damages for the harm that DeviantArt, Midjourney, and Stability AI have caused.

The lawsuit seeks to shield victims from further suffering.

In a post describing the situation, Matthew Butterick said that authors, artists, programmers, and other creatives targeted the three corporations.

Most plaintiffs are worried about AI systems being trained on numerous copyrighted works without their knowledge, permission, or compensation.

In addition to seeking monetary compensation for the infractions, the complaint also asks for an injunction to stop such issues.

On Twitter, artist Karla Ortiz expressed her enthusiasm at the lawsuit, writing:

“I am proud to be one of the plaintiffs for this class action suit. I am proud to do this with fellow peers, that we’ll give a voice to potentially thousands of affected artists.”

“I’m proud that now we fight for our rights not just in the public sphere but in the courts.”

The connection between the companies

Although DeviantArt, Midjourney, and Stability AI are separate businesses, all of them rely on Stable Diffusion.

Stable Diffusion is an AI product trained on the billions of images in the LAION-5B dataset.

The complaint claims that its creation involved using billions of pictures protected by copyright.

Additionally, it asserts that the use of the images gives rise to the following claims:

  • Direct copyright infringement
  • Vicarious copyright infringement related to forgeries
  • Violations of the Digital Millennium Copyright Act
  • Violation of class members’ rights to publicity
  • Breach of contract pertaining to the DeviantArt Terms of Service

Importance of copyright in art

Intellectual property rights, especially copyright, are essential for visual artists who produce paintings and digital images.

Copyright grants artists the exclusive right to control how their works are used, including the ability to reproduce, distribute, exhibit, and develop derivative works.

This implies that others cannot copy the artist’s creations, market them as copies, or incorporate them into other works without the artist’s permission.

These rights are crucial for visual artists because they enable them to safeguard and profit from their creations.

Without copyright protection, third parties might easily duplicate and sell an artist’s work without paying them anything.

This would make it challenging for visual artists to support themselves via their work.

Visual artists have control over how their work is used and displayed thanks to copyright.

They can set conditions for how their work is used and choose to license it for certain purposes, such as commercial usage in television or cinema.

This enables artists to guarantee that the usage of their work is consistent with their goals and beliefs.

For visual artists using digital graphics and paintings, copyrights are even more crucial in the modern day.

Given how simple it is to reproduce and share digital files, it is even more important for visual artists to control how their work is used, reproduced, and profited from.

In conclusion, intellectual property rights, especially copyright, are crucial for visual artists because they enable them to safeguard, profit from, and regulate how their work is utilized.

It is crucial for artists to support themselves financially and guarantee that their creations’ usage is consistent with their purpose and core principles.



Afeela introduced by Sony and Honda in presentation

Image source: Engadget

Afeela: Sony and Honda worked to develop an innovative electric vehicle (EV) prototype.

Two titanic companies, each with a distinct field of expertise, are brought together by the collaboration.

At the CES in Las Vegas on Wednesday, Sony introduced the Afeela electric vehicle prototype.

The EV

Sony and Honda collaborated to develop the joint mobility electric vehicle (EV) prototype known as the Afeela.

It is a compact, lightweight vehicle specifically designed for urban use to provide city dwellers with a convenient, sustainable mobility option.

The Afeela features cutting-edge technology, a sleek, modern style, and potent driver assistance systems developed by Sony.

More than 40 sensors, including lidar, radar, ultrasonic, and cameras, will be added to the vehicle.

They will support autonomous driving and object detection.

More than just a vehicle

A slew of questions about the vehicle remain unanswered.

Yasuhide Mizuno, CEO of Sony Honda Mobility, claims that the company developed a one-of-a-kind EV using its expertise in artificial intelligence, augmented reality, entertainment, and virtual reality.

“Afeela represents our concept of an interactive relationship where people feel the sensation of interactive mobility and where mobility can detect and understand people and society by utilizing sensing and AI technologies,” said Mizuno.

According to the CEO, the car will try to embody three concepts: autonomy, augmentation, and affinity.


When it was first presented on Wednesday, the Afeela had a completely different appearance from the initial design Sony unveiled at CES three years earlier.

A sedan with a light bar across the front, a high-gloss black roof, and a closed-off grille was on exhibit during a recent presentation.

The wheels themselves had a distinctive appearance, with bright accents above them and black hubcaps.

The prototype was described as looking like a hybrid of a Porsche 911 and a Lucid Air.


A premium electric vehicle, the Sony Honda EV will be priced to compete with Mercedes-Benz, Volvo, Audi, and BMW.

Sony expects its software to include subscription services, for which car owners will likely need to pay regularly to access specialized functions.

Read also: Robots in restaurant industry 2022 progress

Initial concept

It was widely anticipated when Sony unveiled their Vision-S automobile idea at the Consumer Electronics Show (CES) in Las Vegas in January 2020.

It was a sample car, not a final product, and showcased Sony’s concepts and innovations.

In addition to its transit features, the Vision-S offered its passengers a range of entertainment and music alternatives.

It included numerous built-in speakers and a sizable touchscreen display that could be used to access music, movies, and other forms of entertainment.

Initially, the Vision-S was only meant to serve as a showcase for what Sony was capable of doing with a vehicle.

Despite Sony's claims that it had no production plans, reports of a partnership with Honda first surfaced in early 2022.

Production and release

The production of Sony Honda vehicles will start at a Honda plant in the US.

They did not, however, go into detail about how many cars they wanted to make.

The Afeela will first become available in the US in 2026, followed by rollouts in Japan and Europe.

The vehicles will be available for preorder in 2025.

Other ideas

The Afeela was supposed to have some entertainment features, including an integrated PS5.

According to Yasuhide Mizuno, the concept was to develop hardware akin to that found in vehicles and customized for the network and entertainment Sony Honda Mobility aims to offer.

Sony and vehicles

The automotive industry is essential to Sony’s technological developments.

The company has studied and invested in electric vehicles for a long time.

Sony also believes EVs have the potential to reduce reliance on fossil fuels, improve air quality, and contribute to the development of a more sustainable future.

The company views vehicles as a powerful medium for delivering its technology and services to customers.

It is always seeking new ways to use its expertise to create exceptional, valuable experiences and products for drivers and passengers.

In addition to its work on electric vehicles, Sony is interested in developing additional technologies and products for the automotive industry, such as advanced driver support systems and in-car entertainment systems.

Honda EVs

Honda is developing a broader lineup of electric vehicles in addition to the Afeela.

General Motors is collaborating on the production of the Prologue.

In North America, the Prologue, Honda’s first long-range EV, will go on sale in 2024.

By the end of the decade, the company will also roll out 30 battery-electric and fuel-cell vehicles, with this model part of the initial wave.

In addition to its first electric vehicle, Honda will use GM's Ultium platform to power an unannounced 2024 Acura model based on the Precision concept.



Elon Musk’s promise to end child exploitation on Twitter falls short due to scarce workforce

Image source: The Hill

Elon Musk previously said he could halt child exploitation on Twitter, the social media company he acquired for $44 billion.

Less than a month ago, he spoke about the problem and declared it his top priority.

However, there has not been any proof that Twitter has been acting aggressively under his administration since he took over.

According to interviews with four former employees, one current employee, internal company records, and people working toward the same goal, Musk hasn't invested much in removing child exploitation content from Twitter.


Elon Musk has turned the conversation about online safety into a broad attempt to discredit Twitter's former executives.

He is also exercising his ownership as part of a movement against far-left to center-left values, which he calls “the woke mind virus.”

The change came after he embraced far-right online rhetoric, which frequently makes exaggerated allegations of child sexual abuse.

“It is a crime that they refused to take action on child exploitation for years,” Musk tweeted on Friday.

His remark was in response to a member of Twitter’s Trust and Safety Council who focused on child abuse concerns resigning via letter.

Jack Dorsey, the previous CEO, countered, “This is false.”

New management

Under Musk’s new leadership, Twitter announced that the number of account suspensions for child sex abuse content in November was greater than in other months.

Twitter attributed the suspensions to new partnerships with unnamed organizations and new “detection and enforcement measures.”

However, the corporation has limited resources to combat online child sex abuse because of the following:

  • Layoffs
  • Mass-firings
  • Resignations

Read also: Twitter to require $3 more for iPhone users for Twitter Blue

Internal records

Although Twitter's headcount is still in flux, internal documents obtained by NBC News and CNBC show that only 25 of the roughly 1,600 people still working for the firm hold titles connected to “Trust and Safety.”

More than 100 people who Musk approved to work for Twitter, Tesla, SpaceX, The Boring Company, and various investors and advisers are included in the total.

A former worker who focused on child safety claimed to know a small Twitter team still working on the problem.

However, most of the team’s product managers and engineers have left.

The worker requested anonymity out of concern for reprisals.

Twitter’s workforce grew to more than 7,500 by the end of 2021.

Former workers claim that layoffs would have been possible even if Musk hadn’t bought the business.

Child safety groups

Under the current management, Twitter has scaled back its commitments to external organizations that support child safety.

The social media company’s Trust and Safety Council, which consisted of 12 organizations and advised Twitter on its initiatives to raise awareness of child sexual exploitation, was disbanded on Monday.

Little has changed under Musk’s leadership, according to the National Center for Missing & Exploited Children (NCMEC), an agency the US government has charged with monitoring allegations of child sexual abuse content online.

Gavin Portnoy, a spokesperson for NCMEC, commented on the organization's unified CSAM reporting system:

“Despite the rhetoric and some of what we’ve seen people posting online, their CyberTipline numbers are almost identical to what they were prior to Musk coming on board.”

The absence of Twitter from the organization’s annual social media roundtable was another shift noted by Portnoy.

“The previous person was one of the folks who resigned,” he said.

When asked if they wanted to send a proxy, Twitter allegedly declined, according to Portnoy.


Twitter reported 86,666 instances of CSAM discovered on the service in 2021; however, Portnoy thinks the actual number may be higher.

“We’ve always felt that there should have been more reports coming out of Twitter, no matter how you cut it, and just given the sheer number of users that are there,” he said.

Child sexual exploitation content continues to plague Twitter, though it affects most social media sites as well.

Earlier this year, advertisers left Twitter after discovering that their ads frequently appeared next to offensive content.

Last year, a child sex abuse victim and their mother sued the company, alleging it failed to act quickly after being informed that a video of the minor was circulating on the platform.

Read also: Mortgage application soars as interest rates dip

Content moderation

Finding and removing child abuse content requires automated detection technology, internal specialist teams, and outside contractors.

According to Twitter's rules, the content is:

“Imagery and videos, referred to as child pornography, but also written solicitation and other material that promotes child sexual exploitation.”
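The article doesn't describe Twitter's detection tooling in any detail. One widely used industry approach is matching uploaded files against databases of fingerprints of known abusive material; production systems use perceptual hashes such as PhotoDNA, but the idea can be sketched with a plain cryptographic hash (the fingerprint values below are made up):

```python
import hashlib

# Toy illustration only: a cryptographic hash catches exact copies, while real
# systems use perceptual hashes (e.g. PhotoDNA) that survive re-encoding.
KNOWN_BAD_HASHES = {
    # hypothetical fingerprints supplied by a clearinghouse such as NCMEC
    hashlib.sha256(b"example-known-bad-file").hexdigest(),
}

def is_flagged(file_bytes: bytes) -> bool:
    """Check an uploaded file's fingerprint against the known-bad list."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

print(is_flagged(b"example-known-bad-file"))  # True
print(is_flagged(b"harmless upload"))         # False
```

Because a cryptographic hash only matches byte-for-byte copies, real moderation pipelines rely on perceptual hashing that tolerates resizing and re-encoding.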

According to people with knowledge of the situation and internal records, Twitter’s engineering staff—which included numerous employees and leaders who worked on trust and safety features as well as enhancements to the current platforms—was reduced by more than half as a result of layoffs, firings, and resignations.

Ella Irwin, the current head of Trust and Safety at Twitter, claims that Musk also laid off contractors as the firm turned to high-tech automation for its moderation requirements.

“You tend to think more bodies equals more safety,” said Portnoy.

“So, I mean, that is disheartening.”

It’s still being determined how many Twitter employees are still working on child safety problems.



TikTok shown to introduce harmful content to teens, study finds

Image source: Psy Post

TikTok encountered a lot of problems in 2022, with security concerns topping the list.

However, a recent study suggests it might be harmful to young users.

Once young users register an account, the video-sharing app may begin promoting hazardous content about eating disorders and suicide.

The results are anticipated to fuel the flames as TikTok’s problems become more serious, particularly in light of how it affects young users.

The study

On Wednesday, a report from the nonprofit Center for Countering Digital Hate (CCDH) was made public.

The researchers found that body image and suicide-related content surfaced on TikTok less than three minutes after signing up.

Within another five minutes, users could find a community on the app promoting eating disorder content.

The researchers said they set up eight new accounts in the US, the UK, Canada, and Australia.

In the US, UK, Canada, and Australia, new TikTok users must be at least 13 years old.

The accounts briefly paused on and liked videos about body image and mental health.

According to the CCDH, TikTok suggested videos about mental health and body image every 39 seconds throughout a 30-minute period.

TikTok woes

The study is being made public as regional, state, and federal authorities examine new measures to punish TikTok, especially concerning privacy and security issues.

They are also assessing how safe the app is for teenagers.

The analysis was made public more than a year after lawmakers interrogated executives from social media companies at a series of congressional hearings.

Lawmakers were concerned that the platforms could expose younger users, especially adolescent girls, to damaging content that might undermine their mental well-being and self-esteem.

After the hearings and revelations from Facebook whistleblower Frances Haugen, the companies agreed to strengthen protections for teens.

However, the CCDH study suggests that additional work has to be done.

“The results are every parent’s nightmare,” said Imran Ahmed, the CEO of the CCDH.

“Young people’s feeds are bombarded with harmful, harrowing content that can have a significant cumulative impact on their understanding of the world around them and their physical and mental health.”

Read also: NetChoice sues California over law, claims it violates First Amendment


A TikTok spokesman responded to the study’s release by asserting that it was inaccurate for many reasons, including:

  • Small sample size
  • The limited 30-minute window for testing
  • How the accounts scrolled past unrelated topics to find other content

“This activity and resulting experience does not reflect genuine behavior or viewing experiences of real people,” said the spokesperson.

“We regularly consult with health experts, remove violations of our policies, and provide access to supportive resources for anyone in need.”

“We’re mindful that triggering content is unique to each individual and remain focused on fostering a safe and comfortable space for everyone, including people who choose to share their recovery journeys or educate others on these important topics.”

The spokesperson asserts that the CCDH doesn’t distinguish between positive and negative videos on specific topics, pointing out that people routinely share motivational accounts of overcoming eating disorders.


TikTok claims to be constantly adding more user protections.

For example, the app now offers filters to screen out explicit or “potentially hazardous” videos.

TikTok created a “maturity score” in July to identify videos with potentially mature or advanced content.

Users may also pick how long they want to spend viewing TikTok videos, set up regular screen time breaks, and access a dashboard that displays data such as how frequently they opened the app.

TikTok also has several parental controls available.


Last year, the office of US Senator Richard Blumenthal created a phony Instagram account of a 13-year-old girl.

The account followed posts about eating disorders and dieting (content that is supposed to be banned).

According to Blumenthal, the account soon began receiving recommendations for accounts promoting more extreme dieting.

Instagram later deactivated the accounts because they violated its rules against encouraging eating disorders.

Read also: Meta to pull out news content if JCPA is approved

Policy violations

TikTok states that it does not allow anything that depicts, promotes, normalizes, or glorifies activities that could lead to suicide or self-harm.

The data below represents videos that were removed for violating the prohibitions against suicide and self-harm from April through June 2022:

  • 93.4% were removed at zero views
  • 91.5% were removed 24 hours after being posted
  • 97.1% were removed before anyone reported them

According to the spokesman, anybody searching for banned phrases like “#selfharm” will find no results.

Instead, local assistance programs will be suggested to them.

Despite the guarantees, the CCDH insists that more measures are necessary to restrict some content and improve protection for underage users.

“This report underscores the urgent need for reform of online space,” said Ahmed.

“Without oversight, TikTok’s opaque platform will continue to profit by serving its users – children as young as 13, remember – increasingly intense and distressing content without checks, resources or support.”


TikTok may push potentially harmful content to teens within minutes, study finds

Lensa AI has terms and conditions that need reading

Image source: Product Hunt

Lensa AI: As the year draws to a close, the topic of AI in art has become a hot topic.

People have also used AI applications to transform themselves.

Lensa AI is one of the new AI selfie apps making the rounds online.

However, people are warning users about it, pointing out a scary detail in the terms and conditions.

The news

Most people register online without properly reading the terms and conditions.

Lensa AI, an app that generates selfies from 10 to 20 uploaded images, is no different.

Users who upload their photos receive over 50 AI-generated selfies in various styles.

However, users on social media highlighted the terms and conditions, warning that the agreement allows Lensa AI to distribute and use their pictures without additional compensation.

Also, uploading user content means accepting the Company License.

As a result, the app can use their photos.

Read also: Apple improves security approach for users

Terms and conditions

Lensa AI’s terms and conditions are as follows:

“Lensa may allow you to upload, edit, store and share content, including photos and videos.”

“We do not claim ownership over your User Content,” it continues.

“However, in order to provide you with the services (e.g., to make our services accessible to you), we seek a legal permission from you to use your User content.”

“You give us such a permission by means of a Company License.”

Users are then prompted to grant Lensa AI a long-term license allowing the company to perform the following actions:

  • Use
  • Reproduce
  • Modify
  • Distribute
  • Create derivative works

If they agree, users will receive no further compensation.

In addition, where required by applicable law, users give additional explicit consent for their content to be used as described in the privacy policy (Company License).

Read also: Jason M. Allen’s art competition win with AI-generated image sparks controversy in the art community

Company License

The Lensa AI company license is intended to allow Lensa to operate and improve its existing and future products.

Users who accept allow Lensa’s AI to be trained on their uploads through the Magic Avatars feature.

“The Company License terminates when you delete the User Content from Lensa’s Library or by terminating your account.”

However, Lensa AI’s terms also state that users can revoke the license by contacting the company’s email (contact@lensa-ai.com).


People are warning others about using new AI selfie app due to scary detail

NetChoice sues California over law, claims it violates First Amendment

Image source: Knight Columbia

Amazon, Google, Meta, TikTok, and Twitter are just a few of the tech juggernauts that make up the large industry group NetChoice.

On Wednesday, the group announced that it is suing California.

The suit seeks to block the state’s recently passed Age-Appropriate Design Code Act, which NetChoice contends violates the First Amendment.

The Age-Appropriate Design Code Act

California’s law was modeled on similar legislation in the UK.

It aims to set guidelines that make the internet safer for minors.

According to the Age-Appropriate Design Code Act, minors must always have the highest level of privacy enabled.

Additionally, it states that websites likely to be accessed by users under 18 must evaluate the risk of abuse or exploitation of those users.

The lawsuit

The NetChoice lawsuit is part of a developing legal battle over online free expression.

Legislators frequently seek to narrow the broad liability protections that online platforms enjoy for user posts and content moderation.

Issues with privacy and content moderation affect all political parties.

Republicans and Democrats still disagree on the approaches that should be taken to address the problems, though.

NetChoice filed lawsuits against Texas and Florida over social media laws imposed by Republican-majority legislatures, even though a Democratic-majority legislature supported the California statute.

Those laws aim to hold tech companies liable for removing posts with political overtones.


According to NetChoice, the new law in California would hurt adolescents rather than protect them, going against what it was intended to do.

Additionally, they claim that it violates businesses’ First Amendment rights to free speech by forcing them to divine the meaning of “inherently subjective terms.”

According to NetChoice, the state can inflict crippling financial penalties if the corporations make the wrong assumption.

“The State can also impose such penalties if companies fail to enforce their content moderation standards to the Attorney General’s satisfaction,” said the group.

The Age-Appropriate Design Code Act is set to take effect in July 2024.

According to NetChoice, the bill will pressure content providers to over-restrict their output to avoid fines for producing what California deems harmful.

“The over-moderation will stifle important resources, particularly for vulnerable youth who rely on the Internet for life-saving information,” said NetChoice.

Read also: Mortgage application soars as interest rates dip

Defense of the law

Despite the allegations, a spokesman for California Attorney General Rob Bonta defended the legislation.

According to the statement, the policy offers vital new safeguards against collecting and using children’s data.

Additionally, it attempts to address some of the legitimate and substantiated adverse effects of social networking and other online goods and services.

“We are reviewing the complaint and look forward to defending this important children’s safety law in court.”

Prior concerns

The lawsuit’s arguments parallel objections that civil society organizations have raised against a bipartisan federal bill seeking to establish online protections for children.

Those organizations warned that the bill would put children and teenagers in greater danger.

The following groups were among those that opposed the bill:

  • The American Civil Liberties Union
  • Center for Democracy & Technology
  • Electronic Frontier Foundation
  • Fight for the Future
  • GLAAD
  • Wikimedia Foundation

The organizations warned of the bill’s possible adverse effects, particularly on the rights of the LGBTQ community.

People in the community already worry about how political biases might influence content filters’ criteria.

The bipartisan bill

The legislation would impose obligations on websites that minors under 16 are likely to access.

Those sites would be responsible for reducing the risk that young users suffer physical or emotional harm, particularly from the following:

  • Self-harm or suicide
  • Encouragement of addictive behavior
  • Enabling online bullying
  • Predatory marketing

“KOSA would require online services to ‘prevent’ a set of harms to minors, which is effectively an instruction to employ broad content filtering to limit minors’ access to certain online content,” wrote the groups.

“Online service would face substantial pressure to over-moderate, including from state Attorneys General seeking to make political points about what kind of information is appropriate for young people.”

“At a time when books with LGBTQ+ themes are being banned from school libraries, and people providing healthcare to trans children are being falsely accused of ‘grooming,’ KOSA would cut off another vital avenue to access to information for vulnerable youth.”

Revamping the federal bipartisan bill

In a revised version of the measure, the legislators in charge tried to solve the issues.

Updates that addressed concerns from the LGBTQ community and influential lawmakers were included in the release on Tuesday night.

A modified “duty of care” clause was added to the updated version to address concerns that attorneys general with anti-LGBTQ beliefs would abuse the legislation.

It also revised a clause stating that businesses are not obligated to gather extra user information to determine users’ ages.

Groups continued to object to the law, notwithstanding the amendments.

Read also: Elon Musk’s promise to end child exploitation on Twitter falls short due to scarce workforce

Content moderation

The measures in Florida and Texas that NetChoice opposes aim to weaken Section 230 of the Communications Decency Act, which provides liability protection for the tech sector.

The Act also protects platforms’ right to moderate content.

Republicans, on the other hand, have been trying to impose further controls on social media because of what they regard as the suppression of conservative viewpoints on well-known websites.

While this has happened, popular platforms have denied applying their community guidelines unfairly.

Independent research has found that conservative ideas typically dominate online debates.

The Supreme Court blocked a Texas version from going into force in May, though it did not rule on the case’s merits.

Florida’s version has thus far been rejected by lower courts.


Tech industry group sues to block California law designed to protect kids online over free speech concerns

Kids Online Safety Act may harm minors, civil society groups warn lawmakers

Revamped kids’ online privacy bill emerges in year-end push

Meta to pull out news content if JCPA is approved

Image source: Search Engine Journal

Meta, the parent company of Facebook, on Monday threatened to remove news content from its platforms.

The threat follows reports that US lawmakers are adding controversial pro-media legislation to the annual defense licensing bill.

Meta’s warning highlights the danger it sees to its business model from the proposed bill, the Journalism Competition and Preservation Act (JCPA).

The bill

Senator Amy Klobuchar introduced the legislation with the support of more than a dozen lawmakers from both sides.

It would create a four-year exemption under US antitrust law that allows news organizations to collectively bargain against social media platforms for a larger share of advertising revenue in exchange for news content.

Additionally, the legislation is one of several technology-focused antitrust bills awaiting action on Capitol Hill.

Read also: Cannabis shops launch in New York, problems may rise


Andy Stone, a Meta spokesman, said in a statement:

“If Congress passes an ill-considered journalism bill as part of national security legislation, we will be forced to consider removing news from our platform altogether rather than submit to government-mandated negotiations that unfairly disregard any value we provide to news outlets through increased traffic and subscriptions.”

Previous actions

Meta has already shown that it is serious about such threats.

Similar legislation was proposed in Australia last year.

In response, Meta temporarily removed the ability for users to view and share news links on its platforms.

However, the company reversed course after Australia amended and passed the law.

Read also: Meta tipped to make a significant layoff

The tech industry

Digital rights organization Fight for the Future said multiple sources had confirmed that efforts to include the JCPA in the annual defense bill were successful.

The JCPA’s language has reportedly been incorporated into the National Defense Authorization Act.

Meanwhile, the tech industry is strongly opposed to the JCPA.

Additionally, the bill has been criticized by more than two dozen civil society groups, which often clash with Big Tech on political issues.

The groups wrote a letter to congressional leaders on Monday saying the JCPA could aggravate misinformation and disinformation.

The bill could allow news sites to sue tech platforms for limiting a story’s reach, and could discourage platforms from moderating offensive or misleading content.

Additionally, the letter says the JCPA may favor large media companies over smaller local and independent outlets, which have been hurt by declining digital ad revenue.

The groups that signed the letter include:

  • The American Civil Liberties Union
  • The Electronic Frontier Foundation
  • The Wikimedia Foundation
  • Public Knowledge


Meta threatens to remove news content over US journalism bargaining bill