Extract Job Details from Remote

Remote is a platform that connects professionals with remote job opportunities across various industries and regions. It offers a wide range of job listings from companies looking to hire remote workers.

Scraping job listing details from Remote keeps users up to date with the latest remote job opportunities and helps job seekers find positions that match their skills and preferences. Extract details such as job titles, companies, and locations to maintain a current view of the remote job market. This can be particularly valuable for those looking to work from home or seeking flexible work arrangements.

Use Cases:

  • Job Market Analysis: Use our robot to aggregate data on remote job listings, including job titles, companies, and locations. This information can help career advisors, job seekers, and analysts understand current trends in the remote job market, enabling them to identify high-demand roles and required skills (see the sketch after this list).
  • Job Board Management: Integrate the scraped data into your job board to provide users with up-to-date remote job listings, enhancing the value of your platform.
  • Recruitment Agencies: Recruitment agencies can use this robot to monitor remote job listings and identify potential clients or candidates, streamlining their recruitment process.
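
For the Job Market Analysis use case, here is a minimal sketch of how rows exported from this robot could be aggregated with pandas. The CSV file name is an illustrative assumption; the column names mirror the Captured Text fields shown in the Sample Output below.

    # Minimal sketch: aggregate rows exported from this robot to a CSV file.
    # "remote_jobs.csv" is an assumed file name; columns mirror the Captured Text fields.
    import pandas as pd

    jobs = pd.read_csv("remote_jobs.csv")

    # Most frequently posted job titles
    top_titles = jobs["Job Title"].value_counts().head(10)

    # Companies with the most open remote roles
    top_companies = jobs["Company"].value_counts().head(10)

    # Share of listings by job type (Full-time, Contract, ...)
    job_type_share = jobs["Job Type"].value_counts(normalize=True).round(2)

    print(top_titles, top_companies, job_type_share, sep="\n\n")

Grouping by the Location or Workplace columns in the same way can reveal which regions post the most remote-friendly roles.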

Integrate this robot with Google Sheets, Airtable, or Zapier to automate the data collection process and seamlessly update your databases or workflows.
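
As a rough illustration of such an automation, the sketch below triggers a run of this robot through Browse AI's REST API and prints the captured text fields, ready to be pushed into a spreadsheet or workflow tool. The endpoint paths, payload keys, and response fields are assumptions based on the v2 API, and the API key, robot ID, and input parameter name are placeholders; check the API documentation before relying on them.

    # Hypothetical sketch: run this robot via Browse AI's REST API and read the
    # captured text. Endpoints, payload keys, and response fields are assumptions
    # based on the v2 API; the API key, robot ID, and "originUrl" are placeholders.
    import time
    import requests

    API_KEY = "YOUR_BROWSE_AI_API_KEY"
    ROBOT_ID = "YOUR_ROBOT_ID"
    BASE = "https://api.browse.ai/v2"
    HEADERS = {"Authorization": f"Bearer {API_KEY}"}

    # Start a task (one robot run); inputParameters usually carry the page URL to scrape.
    task = requests.post(
        f"{BASE}/robots/{ROBOT_ID}/tasks",
        headers=HEADERS,
        json={"inputParameters": {"originUrl": "https://remote.com/jobs"}},
        timeout=30,
    ).json()["result"]

    # Poll until the run finishes, then print the captured fields.
    while task["status"] in ("queued", "in-progress"):
        time.sleep(10)
        task = requests.get(
            f"{BASE}/robots/{ROBOT_ID}/tasks/{task['id']}", headers=HEADERS, timeout=30
        ).json()["result"]

    print(task.get("capturedTexts", {}))  # e.g. Job Title, Company, Location, ...

From there, the same dictionary can be appended to a Google Sheet or Airtable base with their client libraries, or handed off to a Zapier webhook.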

By using this prebuilt robot to scrape job listings from Remote, users can efficiently gather and analyze remote job opportunities, keeping their job search or recruitment efforts organized and up-to-date.

Sample Output
Captured Text

  • Job Title: Senior DevOps Engineer
  • Company: Bloomreach
  • Workplace: Remote
  • Location: Europe only
  • Job Type: Full-time
  • Salary: 4k - 4k EUR/month
  • Date Posted: Jul 16, 2024

Description:

Are you looking for a cutting-edge tech stack to work with on a daily basis? We are currently expanding our Infrastructure team and are looking for a new colleague to join as a Senior DevOps / Infrastructure Engineer. The salary starts from €3,500 based on your experience level, and you can work from home or one of our Central Europe offices on a full-time basis. Are you ready to grow with us?

What tech stack do we have for you?

  • Python, Golang
  • Kubernetes, Terraform, GitLab
  • Google Cloud, GCP Bigtable, GCP BigQuery, gRPC
  • MongoDB, Redis, Elasticsearch, InfluxDB, Etcd, Kafka
  • Victoria Metrics, Grafana, Sentry

Minimum requirements: At least 3 years of production experience with:

  • Kubernetes - we are looking for an engineer who has not only deployed applications to a cluster, but who also understands what is happening behind the scenes and can operate 24x7 production.
  • GCP (preferred)/AWS/Azure - our solution is built on top of the GCP platform. The candidate should be comfortable working with public cloud, understand the risks and benefits associated with running applications in the public cloud, be familiar with the infrastructure-as-code principle, and be able to make design choices between using cloud-managed solutions versus self-hosted alternatives.
  • Python/Go - you should be a solid programmer capable of developing custom tooling.

If you don't meet these requirements, don't worry: we are also looking for Junior DevOps engineers.

How to know if you are a good fit: The qualifications outlined below serve as a guide to determine if your skills and experience align with the requirements of this position:

  • Continuous Learning: You have a keen interest in Kubernetes and related technologies, demonstrated by your active engagement in reading and staying updated about them.
  • Conference Participation: You have participated in DevOps-related conferences, showcasing your commitment to continuous learning and networking in the field.
  • Configuration Proficiency: You have hands-on experience configuring pod/container security context, network policies, roles and role bindings, pod affinity, host path, pod disruption budgets, priority classes, and node taints, to name a few.
  • Resource Optimization: You have analyzed resource usage of applications hosted on a cluster and implemented or suggested changes to resource requests/limits, Horizontal Pod Autoscalers (HPAs), or Vertical Pod Autoscalers (VPAs).
  • Cluster Management: You have a deep understanding of the clusters you manage, including the types of machines used in node pools, the reasons for their selection, the enabled or disabled cluster features, the cluster version, and the node autoscaling setup. You have successfully upgraded Kubernetes cluster versions without causing interruptions to live applications hosted on the cluster.
  • Terraform Proficiency: You have written a Terraform module with multiple interconnected resources.
  • Monitoring and Alerting: You have experience setting up monitoring systems and configuring alerts. On-call experience is preferred, along with experience with Grafana and Prometheus.
  • DevOps and CI/CD Experience: You have experience with DevOps, Orchestration/Configuration Management, and Continuous Integration technologies such as Terraform, GitLab, Ansible, Docker, etc.
  • Team Onboarding and Training: You have experience with onboarding and training new team members, demonstrating your leadership skills and commitment to team growth.

About your team: The Infrastructure team operates and maintains the Bloomreach Engagement core infrastructure built on Google Cloud with security, high availability, costs, and scalability in mind. Our vision is to identify and implement opportunities to achieve a robust, reliable, and efficient infrastructure and development platform. We strongly support DevOps culture: each team is responsible for releasing, operating, and monitoring their own applications. The role of the Infrastructure team is to provide a strong foundation upon which all teams can build, for example by managing big infrastructure components like Kubernetes, databases, and cloud components in Google Cloud. An important role of the team is also providing support for developers, reviewing design proposals, validating the performance and availability of applications, and sometimes even developing new core application components like logging or authorization.

Tasks and responsibilities: In the position of DevOps Engineer, you'd be expected to work with other Engineering teams to design sustainable infrastructure, microservice solutions, and an efficient and robust production environment. Additionally, you'll be working on a variety of tasks and projects, including automating tools and infrastructure to reduce manual work, monitoring applications, and participating in an on-call rotation as required. The ideal candidate will be passionate about learning new things, creative, willing to take the initiative, and able to think outside the box to solve problems strategically. #LI-DU1

Link to Apply: https://boards.greenhouse.io/bloomreach/jobs/6109929?source=remote.com&utm_source=remote.com&ref=remote.com
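
If the captured fields feed a database or spreadsheet, free-text values such as Salary and Date Posted usually need normalizing first. The sketch below parses the exact formats shown in the sample row above ("4k - 4k EUR/month", "Jul 16, 2024"); other listings may use different formats, so treat the regular expression as an assumption to adapt.

    # Hypothetical normalization of two captured fields from the sample row above.
    # The regex assumes salaries look like "4k - 4k EUR/month"; adapt as needed.
    import re
    from datetime import datetime

    def parse_salary(text: str):
        """Return (min, max, currency, period) from strings like '4k - 4k EUR/month'."""
        m = re.match(r"(\d+(?:\.\d+)?)k\s*-\s*(\d+(?:\.\d+)?)k\s+(\w+)/(\w+)", text)
        if not m:
            return None
        low, high, currency, period = m.groups()
        return int(float(low) * 1000), int(float(high) * 1000), currency, period

    def parse_date(text: str):
        """Parse dates like 'Jul 16, 2024' into a date object."""
        return datetime.strptime(text, "%b %d, %Y").date()

    print(parse_salary("4k - 4k EUR/month"))  # (4000, 4000, 'EUR', 'month')
    print(parse_date("Jul 16, 2024"))         # 2024-07-16

Salary strings that don't match the pattern come back as None and can be filtered out downstream.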
