Work With Webner

We are always looking for the right people to join our team. On this page, you will find all of our current open positions. To apply, send your detailed resume to companyhr@webners.com. We work with well-established companies from the USA, Europe, the UK, Australia, and New Zealand. Work with us in Mohali to achieve the career growth you deserve. At Webner, we work as a family. Our employees are given opportunities to grow their knowledge and skills by working on challenging projects, and our compensation model makes them partners in our success.

Benefits of Working with Webner

  • 5-day work week

  • 20 casual leaves a year

  • Excellent work-life balance

  • Competitive salary for your profile

  • Performance bonus

  • New Year bonus

  • Additional Royal Club Bonus for excellent performance

  • No micromanagement

  • Very high job stability

To apply for a job, send an email to companyhr@webners.com. Do not forget to attach your detailed resume.

Hadoop Ecosystem Support Engineers

We are looking for skilled Hadoop Ecosystem Support Engineers to provide operational support and ensure the stability, performance, and availability of big data platforms. The ideal candidate will have hands-on experience managing and troubleshooting the Hadoop ecosystem, including HDFS, Hive, Spark, YARN, and related components.

This role focuses on providing support, performing maintenance, and resolving issues.

Key Responsibilities

  • Provide L3 production support for Hadoop ecosystem components (HDFS, Hive, Spark, YARN, Oozie, etc.).
  • Monitor cluster health, performance, and resource utilization using tools such as Ambari, Cloudera Manager, or Grafana (see the illustrative sketch after this list).
  • Troubleshoot and resolve HDFS, Hive, and Spark job failures and performance issues.
  • Perform root cause analysis (RCA) for recurring incidents and work with engineering teams to implement fixes.
  • Manage user access, quotas, and security policies in Hadoop clusters.
  • Conduct routine maintenance tasks such as service restarts, cluster upgrades, and patch management.
  • Collaborate with data engineers and platform teams to ensure optimal cluster performance and reliability.
  • Document support procedures, incident reports, and configuration changes.
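
The cluster-health bullet above is the kind of task that is often scripted rather than checked by hand. Below is a minimal, illustrative Python sketch of such a check; it assumes a NameNode JMX endpoint reachable on port 9870 that exposes the FSNamesystemState bean. The host name, port, and attribute names are placeholders and vary by cluster and Hadoop version, so treat this as a sketch of the level of scripting comfort expected, not a prescribed tool.

    # Illustrative sketch only: a quick NameNode health probe of the kind an
    # L3 support engineer might run before digging into a ticket. The host is
    # a placeholder; JMX attribute names can differ between Hadoop versions.
    import json
    import urllib.request

    NAMENODE_JMX = "http://namenode.example.com:9870/jmx"  # hypothetical host
    BEAN_QUERY = "?qry=Hadoop:service=NameNode,name=FSNamesystemState"

    def fetch_fsnamesystem_state(url: str = NAMENODE_JMX + BEAN_QUERY) -> dict:
        """Return the first matching JMX bean as a dict (empty if absent)."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            beans = json.load(resp).get("beans", [])
        return beans[0] if beans else {}

    def summarize(state: dict) -> None:
        """Print a few headline metrics checked before deeper troubleshooting."""
        total = state.get("CapacityTotal", 0)
        used = state.get("CapacityUsed", 0)
        pct_used = (100.0 * used / total) if total else 0.0
        print(f"Live DataNodes : {state.get('NumLiveDataNodes', 'n/a')}")
        print(f"Dead DataNodes : {state.get('NumDeadDataNodes', 'n/a')}")
        print(f"HDFS used      : {pct_used:.1f}%")

    if __name__ == "__main__":
        summarize(fetch_fsnamesystem_state())

In practice, checks like this are usually wired into Ambari or Grafana alerting rather than run interactively; the sketch only conveys the style of lightweight automation the role involves.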

Required Skills & Experience

  • 3–8 years of experience supporting or administering Hadoop ecosystems in production.
  • Strong hands-on knowledge of:
    • HDFS (file system management, data balancing, recovery)
    • Hive (query execution, metastore management, troubleshooting)
    • Spark (job monitoring, debugging, performance tuning)
    • YARN, Oozie, and Zookeeper
  • Experience with cluster management tools like Ambari, Cloudera Manager, or similar.
  • Proficiency in Linux/Unix system administration and shell scripting.
  • Strong analytical and problem-solving skills with a focus on incident management and RCA.
  • Familiarity with Kerberos, Ranger, or other security frameworks within Hadoop.

Education

  • Regular MCA or Bachelor’s degree in Computer Science, Information Technology, Engineering, or equivalent experience.

Nice to Have

  • Exposure to cloud-based big data platforms (AWS EMR, Azure HDInsight, GCP Dataproc).
  • Basic understanding of Python or Scala for log analysis and automation.
  • Experience with Kafka, Airflow, or other data orchestration tools.
  • Knowledge of ticketing systems (ServiceNow, JIRA) and ITIL processes.

Hadoop Administrator / Big Data Platform Engineer

We are looking for an experienced Hadoop Administrator to design, build, and maintain our Hadoop ecosystem based on the Open Data Platform (ODP) framework. The ideal candidate will have hands-on experience in the installation, configuration, tuning, and administration of key Hadoop components, including HDFS, YARN, Hive, Spark, Ranger, and Oozie.

This role involves end-to-end platform setup, ongoing maintenance, performance optimization, and support for enterprise big data workloads.

Key Responsibilities

  • Install, configure, and manage Hadoop clusters and ecosystem components (HDFS, YARN, Hive, Spark, Zookeeper, Oozie, etc.) on ODP-compliant distributions.
  • Build and deploy Hadoop stacks from scratch, including hardware sizing, capacity planning, and architecture design.
  • Implement cluster high availability (HA), backup/recovery, and disaster recovery strategies.
  • Manage user access, security policies, and Kerberos/Ranger configurations.
  • Perform cluster performance tuning, troubleshooting, and log analysis to ensure system stability.
  • Monitor system health and optimize resource utilization using Ambari, Cloudera Manager, or other monitoring tools.
  • Automate cluster operations using shell scripts or Python for deployment, maintenance, and patching (see the illustrative sketch after this list).
  • Collaborate with data engineering and infrastructure teams for upgrades, migrations, and platform integrations.
  • Maintain detailed documentation for architecture, configurations, and operational runbooks.
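
The automation bullet above is easiest to picture with a concrete example. The sketch below polls service state through the Ambari REST API (v1) ahead of a maintenance window. The server address, cluster name, and credentials are placeholders, and endpoint details should be verified against the Ambari version actually deployed.

    # Illustrative sketch only: list Ambari-managed services and their states
    # before starting maintenance. Host, cluster name, and credentials are
    # placeholders; verify paths against your Ambari version before use.
    import json
    import urllib.request

    AMBARI = "http://ambari.example.com:8080"  # hypothetical Ambari server
    CLUSTER = "prod_hadoop"                    # hypothetical cluster name
    AUTH = "Basic YWRtaW46YWRtaW4="            # base64("admin:admin"), demo only

    def list_service_states() -> dict:
        """Map each service name to its current state (STARTED, INSTALLED, ...)."""
        url = f"{AMBARI}/api/v1/clusters/{CLUSTER}/services?fields=ServiceInfo/state"
        req = urllib.request.Request(url, headers={"Authorization": AUTH})
        with urllib.request.urlopen(req, timeout=10) as resp:
            payload = json.load(resp)
        return {
            item["ServiceInfo"]["service_name"]: item["ServiceInfo"]["state"]
            for item in payload.get("items", [])
        }

    if __name__ == "__main__":
        for name, state in sorted(list_service_states().items()):
            note = "" if state == "STARTED" else "  <-- needs attention"
            print(f"{name:15s} {state}{note}")

A real deployment would pull credentials from a secrets store and drive restarts or maintenance mode through the same API, but the structure above is representative of the day-to-day automation this role covers.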

Required Skills & Experience

  • 5–10 years of experience in Hadoop ecosystem administration.
  • Proven experience building Hadoop clusters from scratch using ODP distributions (Hortonworks, Cloudera, or similar).
  • Strong expertise in:
    • HDFS, YARN, Hive, Spark, Zookeeper, Oozie
    • Ambari or Cloudera Manager (installation, service management, and monitoring)
    • Kerberos, Ranger, or Sentry for security and authorization
  • Proficiency in Linux system administration, shell scripting, and configuration management.
  • Experience with performance tuning, capacity planning, and troubleshooting in production environments.
  • Familiarity with HA configurations, NameNode failover, and cluster scaling.

Education

  • Regular MCA or Bachelor’s degree in Computer Science, Information Technology, Engineering, or equivalent experience.

Nice to Have

  • Experience with cloud-based Hadoop environments (AWS EMR, Azure HDInsight, GCP Dataproc).
  • Exposure to containerized big data platforms (Kubernetes, Docker).
  • Knowledge of automation tools (Ansible, Terraform, Puppet).
  • Experience with Kafka, Airflow, or NiFi for data pipeline integration.
  • Understanding of data governance, auditing, and monitoring best practices.