As a Human Resources Intern at Arus, a software services organization based in HSR Layout, Bangalore, you will gain hands-on experience in various aspects of HR. Arus specializes in Application Integration, Automation, ERP Implementations, Data Engineering, Analytics, and Data Science, with a focus on creating value for customers through well-crafted software engineering solutions.

Your responsibilities will include assisting with the recruitment process, from job postings to coordinating interviews, as well as supporting onboarding activities and maintaining employee records. You will also have the chance to contribute to HR projects and initiatives, such as employee engagement programs and training sessions, and assist in preparing HR-related documents like employment contracts and policies.

To qualify for this role, you should be currently pursuing a degree in Human Resources, Business Administration, or a related field, or be a recent graduate in one of these areas. A strong interest in HR, excellent communication skills, proficiency in the Microsoft Office Suite, and strong organizational abilities are essential. You should also demonstrate the ability to handle confidential information with discretion and be a team player with a positive attitude.

As an intern at Arus, you can expect mentorship and guidance from experienced HR professionals, the opportunity to work in a dynamic and supportive team environment, and the potential for future career growth within the company. This internship offers a stipend of up to INR 18,000 per month for a duration of 6 months, with employment confirmation based on performance. Join Arus as a Human Resources Intern to kickstart your career in HR and gain valuable experience in a fast-paced and innovative software services organization.
Key Responsibilities:
* Design, build, and optimize data pipelines using PySpark and Data Pipelines in Microsoft Fabric.
* Ensure data quality, reliability, and scalability in all pipeline developments.
* Contribute to the deployment and performance tuning of pipelines within Microsoft Fabric.

Experience and Skills:
* 5+ years of experience in the Azure data engineering stack (MS Fabric or ADF, plus any PySpark experience, e.g. Synapse, Databricks).
* At least 1+ years of experience in Microsoft Fabric.
* Basic Power BI skills for maintenance activities.
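The data-quality responsibility above would normally run inside a PySpark notebook or a Fabric Data Pipeline; as a minimal plain-Python illustration (the column names and threshold are hypothetical, not from the posting), a batch-level null-rate check might look like:

```python
def null_rate(rows, column):
    """Fraction of rows in which `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for row in rows if row.get(column) is None)
    return missing / len(rows)

def failing_columns(rows, required_columns, max_null_rate=0.01):
    """Return (column, rate) pairs whose null rate exceeds the threshold."""
    return [
        (col, null_rate(rows, col))
        for col in required_columns
        if null_rate(rows, col) > max_null_rate
    ]

# Example batch: one of four rows is missing `customer_id`
batch = [
    {"customer_id": 1, "amount": 10.0},
    {"customer_id": 2, "amount": 5.5},
    {"customer_id": None, "amount": 7.2},
    {"customer_id": 4, "amount": 3.1},
]
print(failing_columns(batch, ["customer_id", "amount"]))  # → [('customer_id', 0.25)]
```

In PySpark the same check reduces to counting `df.filter(col("customer_id").isNull())` against `df.count()`; the structure of the gate (measure, compare to threshold, fail the batch) is the same either way.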
Responsibilities:
* Collaborate with cross-functional teams on CI/CD pipelines
* Manage infrastructure using Jenkins, Ansible & Terraform
* Optimize resource utilization through Kubernetes & Docker
Role Overview:
We are seeking a proactive PostgreSQL Database Administrator who can manage, optimize, and secure large-scale PostgreSQL environments while also bringing strong DevOps automation and cloud deployment expertise. You will be responsible for the reliability, performance, security, and operability of mission-critical PostgreSQL systems, design and automate runbooks, and work closely with development, cloud, and operations teams to deliver scalable, highly available database platforms.

Key Responsibilities:
* Administer, monitor, and tune PostgreSQL databases (backup, recovery, performance tuning, security).
* Automate database provisioning, scaling, and deployments using tools like Ansible, Terraform, or similar.
* Implement CI/CD pipelines and integrate database changes into DevOps workflows.
* Manage database operations in cloud platforms (AWS/Azure/GCP), including high availability and disaster recovery.
* Troubleshoot production incidents; perform root-cause analysis and post-mortems.
* Work closely with developers, cloud engineers, and security teams on performance and scalability improvements.
* Coordinate with clients/stakeholders on approvals, change windows, and reporting.
* Lead cross-functional teams during high-severity incidents and drive quick resolution.

Required Skills & Qualifications:
* 3–6 years of hands-on experience as a PostgreSQL DBA.
* Solid understanding of database internals, query optimization, data modelling, RBAC, and replication.
* Solid Linux administration and shell scripting (bash); comfortable debugging OS-level issues impacting the database.
* Scripting & automation: Python and/or advanced shell scripting.
* Experience with DevOps tools (Git, Jenkins, Docker/Kubernetes, Ansible/Terraform, Packer, etc.).
* Hands-on exposure to at least one major cloud provider (AWS RDS/Aurora, Azure Database for PostgreSQL, or GCP Cloud SQL).
* Containerization basics: Docker; awareness of running databases in container/Kubernetes environments.
* Monitoring & logging: Prometheus, Dynatrace, Grafana, ELK/Fluentd, or other observability stacks.
* Strong troubleshooting and incident-management skills, with the ability to handle challenges and deliver timely resolutions.
* Willingness to work in rotational shifts, including weekends.

Nice-to-Have:
* Experience with database migration and upgrade projects, logical replication, BDR, or Patroni.
* Knowledge of monitoring tools such as Prometheus, Grafana, or ELK.
* Familiarity with Yugabyte or distributed SQL systems (helpful given the project context).
* Prior experience with banking-sector clients.
* Certifications: PostgreSQL, AWS/Azure cloud certifications.
* Experience with multi-tenant database architectures and large-scale OLTP systems.

Soft Skills & Other Expectations:
* Good communicator; able to explain technical issues to non-technical stakeholders.
* Client-facing experience and comfort with shift-based on-call rotations.
* Proactive mindset, documentation-oriented, with team-mentorship capability.

Criteria:
* Should not have any accepted offer on hand.
* Should be willing to sign an employee mutual agreement to pay 2x monthly salary in the event of an offer drop; the company will compensate 4x monthly salary if we are unable to honour our offer.
* Notice period should not be more than 45 days.

Please do not apply if you cannot meet any of the above criteria.