3.0 - 5.0 years
3 Lacs
Bengaluru
Work from Office
Job Title: DB2 Database Administrator
Location: Bangalore
Job Type: Full-Time
Experience: 3 to 5 Years

Job Summary:
We are seeking a skilled and proactive DB2 Database Administrator to manage and support our enterprise database environment. The ideal candidate will be responsible for day-to-day operations (BAU monitoring), database deployments, performance tuning, maintenance, and disaster recovery (DR) preparedness. This role requires a strong understanding of IBM DB2, IBM CDC, and associated technologies, along with the flexibility to support 24/7 operations on a rotational basis.

Key Responsibilities:
- Perform daily monitoring and routine health checks of DB2 databases across AIX, Linux, and Windows platforms.
- Plan and execute scheduled maintenance activities, database upgrades, and performance tuning.
- Manage and deploy database objects and scripts across environments, including production promotions.
- Implement and maintain database backup and recovery strategies, including DR drills and documentation.
- Support database replication using IBM CDC and troubleshoot replication issues.
- Generate and deliver periodic reports (daily, monthly, quarterly) covering system health, capacity, and performance.
- Coordinate and support vulnerability assessments (VAPT) and apply necessary remediation.
- Provide incident management support, root cause analysis, and timely resolution of database issues.
- Collaborate with application, infrastructure, and security teams on integrated solutions and deployments.
- Participate in the on-call rotation and provide 24/7 support for critical issues and planned activities.

Required Skills & Technologies:
- Strong experience with IBM DB2 (LUW and/or z/OS) database administration.
- Hands-on experience with IBM InfoSphere CDC (Change Data Capture).
- Proficiency in AIX, Linux, and Windows environments.
- Experience with database monitoring tools and incident management systems.
- Familiarity with JDBC connectivity, application integration, and troubleshooting.
- Exposure to Kafka for real-time data integration is a plus.
- Scripting knowledge (Shell, Python, or SQL) for automation and reporting.

Additional Information:
- The role involves rotational shifts and on-call support as part of 24/7 operations.
- Excellent analytical, problem-solving, and communication skills are required.
- Must be able to handle multiple tasks and priorities in a dynamic environment.
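To make the "scripting for automation and reporting" requirement concrete, here is a hedged Python sketch of a capacity check for the daily health report. The thresholds, tablespace names, and input shape are invented for illustration; a real check would query DB2's MON_GET_TABLESPACE or parse `db2 list tablespaces show detail`.

```python
# Illustrative only: flag DB2 tablespaces nearing capacity for a daily report.
WARN_PCT = 80.0
CRIT_PCT = 90.0

def classify_tablespaces(usage):
    """usage: list of (tablespace_name, used_pages, total_pages) tuples."""
    report = []
    for name, used, total in usage:
        pct = 100.0 * used / total if total else 0.0
        if pct >= CRIT_PCT:
            level = "CRITICAL"
        elif pct >= WARN_PCT:
            level = "WARNING"
        else:
            level = "OK"
        report.append((name, round(pct, 1), level))
    return report

sample = [("USERSPACE1", 9500, 10000),
          ("TEMPSPACE1", 820, 1000),
          ("SYSCATSPACE", 300, 1000)]
print(classify_tablespaces(sample))
# [('USERSPACE1', 95.0, 'CRITICAL'), ('TEMPSPACE1', 82.0, 'WARNING'), ('SYSCATSPACE', 30.0, 'OK')]
```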
Posted 1 week ago
3.0 - 5.0 years
4 Lacs
Bengaluru
Work from Office
Job Title: Middleware Administrator
Location: Bangalore
Employment Type: Full-Time
Level: 2
Years of Experience: 3 to 5

Position Overview:
We are looking for a Middleware Administrator to join our enterprise infrastructure team. The role focuses on ensuring the stability, performance, and security of critical middleware platforms supporting business applications. The ideal candidate will have strong hands-on experience with a wide range of middleware technologies and the ability to perform in a 24x7 support environment with rotational shifts and on-call responsibilities.

Role Objectives:
- Conduct regular health checks across middleware components hosted on AIX, Linux, and Windows servers.
- Monitor application and system logs to proactively identify issues and anomalies.
- Respond to and manage incidents, ensuring timely resolution and root cause analysis.
- Coordinate and execute patching and vulnerability remediation, including VAPT fixes.
- Perform SSL certificate management: renewal, deployment, and validation.
- Engage in capacity planning, performance tuning, and environment optimization.
- Execute scheduled and ad-hoc maintenance during approved windows, including deployments and system restarts.
- Support DR drills and maintain replication and backup configurations across environments.
- Collaborate closely with application, security, and infrastructure teams to support integrations and troubleshoot cross-platform issues.
- Create and enhance automation scripts using shell scripting for operational efficiency.

Technical Environment (Middleware Platforms):
- IBM WebSphere Application Server
- IBM WebSphere Portal Server
- IBM HTTP Server (IHS)
- Apache Tomcat
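As an illustration of the SSL certificate-management duty, here is a minimal Python sketch that ranks certificates by days until expiry so renewals can be scheduled ahead of the approved maintenance windows. Hostnames and dates are made up; in practice, expiry dates would come from the certificate inventory or `openssl x509 -enddate`.

```python
# Illustrative only: build a renewal queue of certificates expiring soon.
from datetime import date

def renewal_queue(certs, today, lead_days=30):
    """certs: list of (hostname, expiry_date). Returns certs due for renewal
    within lead_days, soonest first, as (hostname, days_remaining)."""
    due = [(host, (expiry - today).days) for host, expiry in certs
           if (expiry - today).days <= lead_days]
    return sorted(due, key=lambda item: item[1])

certs = [("portal.example.com", date(2025, 2, 10)),
         ("api.example.com", date(2025, 1, 20)),
         ("www.example.com", date(2025, 6, 1))]
print(renewal_queue(certs, today=date(2025, 1, 15)))
# [('api.example.com', 5), ('portal.example.com', 26)]
```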
Posted 1 week ago
2.0 - 4.0 years
2 - 6 Lacs
Pune
Work from Office
Location: Pune
Experience: 2-4 Years
Job Type: Full-Time
Work Model: On-site

Job Overview:
We are looking for a hands-on DevOps/System Administrator with strong experience in firewalls, networking, on-premise and cloud infrastructure, and automation tools. The ideal candidate has practical expertise in hardware firewalls, endpoint security, Linux/Windows server management, security practices, monitoring tools, and AWS, along with a proactive problem-solving attitude and experience working on enterprise applications.

Key Responsibilities:

Firewall & Networking:
- Manage and configure hardware firewalls (Sophos, Fortinet, SonicWall).
- Implement and troubleshoot IPsec VPNs, web/application filtering, NAT, and routing policies.
- Monitor and analyse firewall logs for suspicious activity.
- Configure LAN/WAN segmentation and network peering for hybrid infrastructure.

Server Administration (On-Prem & Cloud):
- Provision, configure, and maintain Linux (Ubuntu, CentOS) and Windows servers.
- Manage VMware/vCenter and other hypervisor virtualization platforms.
- Perform patch management, backups, and disaster recovery planning.
- Troubleshoot performance issues (CPU, memory, I/O latency) using tools such as top, iotop, and journalctl.

Automation & Monitoring:
- Write and manage Ansible playbooks and inventory files for bulk server updates.
- Deploy and manage monitoring solutions (Grafana, Prometheus, Nagios, CloudWatch).
- Understand PromQL for custom monitoring and alerting.
- Use ManageEngine, SCCM, or similar tools for patch management and compliance.

Security & Compliance:
- Implement endpoint security solutions (e.g., Trend Micro, Kaspersky, CrowdStrike, Cortex, NetProtect).
- Handle incident response: isolate infected systems, analyze malware, and enforce USB and device control policies.
- Maintain access control using IAM policies and access/secret keys.
- Conduct root cause analysis (RCA) for incidents and document preventive measures.

Cloud Infrastructure (AWS Preferred):
- Configure and manage AWS EC2, S3, IAM, VPC, route tables, and CloudWatch.
- Implement secure communication between private EC2 instances across multiple VPCs.
- Automate infrastructure provisioning using Ansible/Terraform and manage state files securely.
- Implement CloudWatch alarms, monitoring dashboards, and log analysis.

Required Skills:
- Solid knowledge of hardware firewall configuration, VPN setup, and network troubleshooting.
- Endpoint security (central administration of antivirus, disk encryption, etc.).
- Hands-on Linux/Windows server troubleshooting, patching, and performance tuning.
- Good experience with AWS services, IAM roles, CLI tools, and S3-EC2 integrations.
- Proficiency in Terraform, Ansible, or similar IaC and orchestration tools.
- Familiarity with monitoring tools and writing queries (PromQL or equivalent).
- Understanding of endpoint protection tools and incident management workflows.

Good to Have:
- Exposure to Azure cloud services.
- Experience deploying web applications (Apache, Node.js, ReactJS, MongoDB).
- Familiarity with Docker, Kubernetes, and CI/CD pipelines.
- Knowledge of disaster recovery (DR) strategies in hybrid environments.

What Kind of Person Fits This Role:
- Someone who likes solving tech problems and can handle pressure when things go wrong.
- Willing to work in shifts between 6 AM and 12 AM.
- Comfortable working with both physical devices and cloud systems.
- Knows how to automate tasks to save time.
- Good at explaining technical concepts and writing documentation.
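As a hedged sketch of the firewall-log-monitoring duty above, the following Python snippet counts denied connections per source IP and flags repeat offenders. The log format and threshold are invented for the example; real firewall logs vary by vendor (Sophos, Fortinet, SonicWall each differ).

```python
# Illustrative only: flag source IPs with repeated denied connections.
from collections import Counter

def suspicious_sources(log_lines, threshold=3):
    denied = Counter()
    for line in log_lines:
        fields = line.split()
        # assumed line shape: "<timestamp> <action> src=<ip> dst=<ip>"
        if len(fields) >= 3 and fields[1] == "DENY":
            denied[fields[2].removeprefix("src=")] += 1
    return sorted(ip for ip, n in denied.items() if n >= threshold)

logs = [
    "t1 DENY src=10.0.0.5 dst=10.0.1.1",
    "t2 DENY src=10.0.0.5 dst=10.0.1.2",
    "t3 ALLOW src=10.0.0.9 dst=10.0.1.1",
    "t4 DENY src=10.0.0.5 dst=10.0.1.3",
]
print(suspicious_sources(logs))  # ['10.0.0.5']
```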
Posted 1 week ago
4.0 - 12.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Cloud Developer

This role has been designed as Onsite, with the expectation that you will primarily work from an HPE office.

Who We Are:
Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.

In HPE Hybrid Cloud, we lead the innovation agenda and technology roadmap for all of HPE. This includes managing the design, development, and product portfolio of our next-generation cloud platform, GreenLake. Working with customers, we help them reimagine their information technology needs to deliver a simple, consumable solution that helps them drive their business results. Join us to redefine what's next for you.

What you'll do:
As a DevOps team member, you will be responsible for enhancing the reliability and performance of applications through automation and continuous integration and delivery. You will also work with other teams on operational tasks to deliver reliable service to customers. We want motivated high performers who can turn brilliant ideas into automated configuration and deployment while troubleshooting and resolving any issues.

Management Level Definition:
Applies developed subject matter knowledge to solve common and complex business issues and recommends appropriate alternatives. Works on problems of diverse complexity and scope. Exercises independent judgment to identify and select a solution. Able to handle most unique situations. May seek advice in order to make decisions on complex business issues.

Responsibilities:
- Think through hard problems in a production environment and drive solutions to reality.
- Work in a dynamic, collaborative environment.
- Build automated solutions using the latest cloud technologies and tools.
- Engage often with customers, partners, and product teams.
- Participate in development projects.
- Provide production engineering support and handle on-call duties on a rotational basis.
- Drive public cloud cost optimization and tool selection.

What you need to bring:
Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent. Typically 4-12 years of experience.

Knowledge and Skills:
- Good experience as a Python/Go developer with an infrastructure automation/development background.
- Strong knowledge of core enterprise Linux (Red Hat/CentOS) with a focus on automating, building, maintaining, securing, and performance-tuning systems.
- Experience with VMware VMs, ESX management, and troubleshooting.
- Understanding of storage and SAN solutions.
- Hands-on experience with a cloud service such as AWS or Azure.
- Experience with container management and microservices architectures in Kubernetes, Helm, Docker, and other virtual infrastructure platforms.
- Hands-on experience with CI/CD tooling (GitHub, Jenkins/Spinnaker; ArgoCD is highly preferred).
- Expertise with monitoring, alerting, and incident management tools such as Grafana, Prometheus, Alertmanager, Kibana, PagerDuty, and Opsgenie.
- Experience with SQL/NoSQL systems such as PostgreSQL, Cassandra, and Redis.
- Experience in the development of operational procedures, processes, and scripts.
- Proven experience in capacity planning, performance tuning, and infrastructure architecture.
- Excellent communication and strong analytical and problem-solving skills.

Additional Skills:
Cloud Architectures, Cross-Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Release Management, Security-First Mindset, User Experience (UX)

What We Can Offer You:
Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial, and emotional wellbeing.
Personal & Professional Development: We also invest in your career, because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have, whether you want to become a knowledge expert in your field or apply your skills to another division.
Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good.

Let's Stay Connected:
Follow @HPECareers on Instagram to see the latest on people, culture, and tech at HPE.
#india #hybridcloud
Job: Engineering
Job Level: TCP_03

HPE is an Equal Employment Opportunity / Veterans / Disabled / LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran / Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
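Roles like this lean on automation that tolerates transient cloud failures. As a small, hedged example (not HPE code, and production systems would usually reach for a library such as tenacity or SDK-built-in retries), here is the retry-with-exponential-backoff pattern in Python:

```python
# Illustrative only: retry a flaky operation with exponential backoff.
import time

def retry(fn, attempts=4, base_delay=0.01, sleep=time.sleep):
    """Call fn(), retrying on exception with exponentially growing delays."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky, sleep=lambda s: None))  # 'ok' after two failed attempts
```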
Posted 1 week ago
3.0 - 8.0 years
8 - 12 Lacs
Noida, Bengaluru
Work from Office
We are looking for passionate engineers who design, develop, code, and customize software applications from product conception to end-user interface. The person should be able to analyze and understand customer requirements and preferences and incorporate them into the design and development process.

About You (experience, education, skills, and accomplishments):
- Bachelor's degree or higher in a related field, such as Computer Engineering or Computer Science, plus at least 3 years of software development experience, or an equivalent combination of education and experience.
- At least 3 years of experience working with E-Business Suite, specifically the Financials, Order Management, Service Contracts, Inventory, Accounts Receivable, and Advanced Pricing modules.
- At least 3 years of experience with performance tuning in E-Business Suite.
- Experience developing custom components using OAF and ADF workflow, and developing solutions using Oracle APEX.
- Experience integrating data from Oracle eBS to Salesforce, working with AIM, and formulating strategies for implementation.
- Expert knowledge of Oracle Applications interfaces, tables, and APIs.
- Expertise in RICE (developing new Reports, Interfaces, Customizations, and Extensions, plus form personalization).

It would be great if you also have:
- Experience in web technologies such as HTML, JavaScript, CSS, and jQuery.
- Proficiency in Java, with the ability to write clean, efficient, and maintainable code.
- Experience designing, developing, and maintaining Java applications.
- Sound knowledge of Object-Oriented Programming (OOP) concepts.
- (Optionally) Experience in AngularJS and Angular.

What will you be doing in this role:
- Write clean, efficient, and maintainable code in accordance with coding standards.
- Review others' code to ensure it is clean, efficient, and maintainable.
- Define the architecture of software solutions.
- Suggest alternative methodologies or techniques for achieving desired results.
- Develop and maintain an understanding of the software development lifecycle and delivery methodology.
- Review and revise new procedures as needed for the continuing development of high-quality systems.
- Maintain knowledge of technical advances and evaluate new hardware/software for company use.
- Follow departmental policies, procedures, and work instructions.
- Work closely with higher-level engineers to increase functional knowledge.
- Automate tests and unit-test all assigned applications.
- Participate as a team member on various engineering projects.
- Write application technical documentation.

About the team:
The position is for the Finance team within the Enterprise Services organization, a dynamic and collaborative group focused on supporting the company's key finance applications, including order-to-cash functions, invoice delivery, cash collections, service contracts, third-party integrations, and the general ledger. This team ensures seamless and efficient financial processes, maintaining healthy cash flow and accurate financial reporting. The team is committed to continuous improvement, leveraging the latest technologies and best practices. Join a team that values collaboration, innovation, and excellence in supporting the company's financial operations and strategic goals.
Posted 1 week ago
5.0 - 10.0 years
9 - 13 Lacs
Mohali
Work from Office
Key Responsibilities:
- Lead Tableau Cloud implementation and architecture design, including site structure, user provisioning, data connections, and security models.
- Develop and enforce Tableau governance policies, including naming conventions, folder structure, usage monitoring, and content lifecycle management.
- Guide and mentor Tableau developers and analysts in best practices for visualization design, performance tuning, and metadata management.
- Collaborate with Data Engineering and BI teams to design scalable, secure, and high-performing data models.
- Monitor Tableau Cloud usage, site health, and performance, proactively resolving issues and managing upgrades or configuration changes.
- Work closely with stakeholders to understand business requirements and translate them into high-impact Tableau dashboards and reports.
- Develop technical documentation, training materials, and user guides for Tableau Cloud users and developers.
- Lead or support migration efforts from Tableau Server or on-prem environments to Tableau Cloud.
Qualifications

Required:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 5+ years of experience with Tableau, with at least 1-2 years in Tableau Cloud (SaaS) specifically.
- Strong experience with dashboard design, Tableau calculations (LOD, table calcs), parameters, and actions.
- Solid understanding of data visualization best practices and UX principles.
- Hands-on experience managing Tableau Cloud environments, including content governance, access control, and site administration.

Preferred:
- Tableau Certified Associate or Tableau Certified Professional certification.
- Experience with Tableau Prep, the REST API, or other automation tools for Tableau administration.
- Familiarity with DevOps, CI/CD, or version control for BI assets (e.g., Git).
- Experience migrating from Tableau Server to Tableau Cloud.
- Knowledge of data security, privacy, and compliance standards in cloud environments.
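For readers unfamiliar with the LOD calculations mentioned above: a Tableau FIXED level-of-detail expression such as { FIXED [Customer] : SUM([Sales]) } computes an aggregate at one level and attaches it to every row, regardless of the view's level of detail. A rough Python analogue (row shape and column names invented for the example):

```python
# Illustrative analogue of a Tableau FIXED LOD: per-customer total on each row.
def fixed_lod_sum(rows, key, measure):
    totals = {}
    for row in rows:
        totals[row[key]] = totals.get(row[key], 0) + row[measure]
    return [dict(row, customer_total=totals[row[key]]) for row in rows]

orders = [
    {"customer": "Acme", "order": 1, "sales": 100},
    {"customer": "Acme", "order": 2, "sales": 50},
    {"customer": "Birch", "order": 3, "sales": 70},
]
for row in fixed_lod_sum(orders, "customer", "sales"):
    print(row["customer"], row["sales"], row["customer_total"])
```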
Posted 1 week ago
6.0 - 11.0 years
22 - 25 Lacs
Hyderabad
Work from Office
- 6+ years of experience in data engineering or a related field.
- Strong expertise in Snowflake, including schema design, performance tuning, and security.
- Data QA experience.
- Proficiency in Python for data manipulation and automation.
- Solid understanding of data modeling concepts (star/snowflake schema, normalization, etc.).
- Experience with dbt for data transformation and documentation.
- Hands-on experience with ETL/ELT tools and orchestration frameworks (e.g., Airflow, Prefect).
- Strong SQL skills and experience with large-scale data sets.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and data services.
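To ground the star-schema modeling requirement, here is a hedged Python sketch of the basic pattern: assign surrogate keys to a dimension and rewrite fact rows to reference them. Column names are invented for the example; in a real warehouse this is done in SQL or dbt.

```python
# Illustrative only: surrogate-key assignment for a star-schema dimension.
def build_dim_and_facts(raw_rows, dim_col):
    dim = {}     # natural key -> surrogate key
    facts = []
    for row in raw_rows:
        # setdefault evaluates len(dim) + 1 before inserting, so the first
        # unseen natural key gets 1, the next gets 2, and so on.
        sk = dim.setdefault(row[dim_col], len(dim) + 1)
        fact = {k: v for k, v in row.items() if k != dim_col}
        fact[dim_col + "_sk"] = sk
        facts.append(fact)
    return dim, facts

sales = [{"product": "widget", "amount": 10},
         {"product": "gadget", "amount": 20},
         {"product": "widget", "amount": 5}]
dim, facts = build_dim_and_facts(sales, "product")
print(dim)    # {'widget': 1, 'gadget': 2}
print(facts)
```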
Posted 1 week ago
2.0 - 7.0 years
50 - 60 Lacs
Bengaluru
Work from Office
What We're All About:
Q2 is proud to deliver our mobile banking platform and technology solutions, globally, to more than 22 million end users across our 1,300 financial institution and fintech clients. At Q2, our mission is simple: build strong, diverse communities by strengthening their financial institutions. We accomplish that by investing in the communities where both our customers and employees serve and live.

What Makes Q2 Special?
Being as passionate about our people as we are about our mission. We celebrate our employees in many ways, including our Circle of Awesomeness award ceremony and day of employee celebration, among others! We invest in the growth and development of our team members through ongoing learning opportunities, mentorship programs, internal mobility, and meaningful leadership relationships. We also know that nothing builds trust and collaboration like having fun. We hold an annual Dodgeball for Charity event at our Q2 Stadium in Austin, inviting other local companies to play and community organizations we support to raise money and awareness together.

Job Description:
We're seeking an Observability Systems Engineer focused on creating, implementing, and managing monitoring, alerting, and remediation tools and processes to join the Q2 Observability Automation Tools team. Q2 Software is focused on empowering returns on relationships for community-centered financial institutions and their retail and commercial account holders. We do this with the most comprehensive, secure, and adaptable smart banking platform of its kind, designed to deliver a compelling, consistent user experience on any device and enable customers to deliver secure, innovative services and, increasingly, to generate new sources of revenue.
Named one of Austin's fastest-growing companies and one of the best places to work, Q2 offers our employees a culture fueled by engaged, motivated, and dedicated team members.

Summary:
As part of the Q2 Observability Automation Tools team, you will work across multiple technologies, vendors, and proprietary in-house tooling to build a framework of integrated systems and processes that make the management and support of the Q2 platform faster and easier, by providing operational and incident response teams with the tools and information needed to effectively understand the health of a complex environment and its components. You help Q2 proactively get ahead of issues before clients can feel the impact. You are a force multiplier, empowering users and clients to do more, more effectively!

As an Observability Systems Engineer, you will help implement and manage best-in-class solutions with key vendor partnerships and internal development teams. Observability is more than just triggering alerts: it is understanding a system end to end, providing data-driven, actionable insights into that system, and helping to ensure that systems are constantly undergoing improvements against root-cause solutions. We are looking for a proactive and highly self-motivated individual who is as excited to go looking for new problems to solve as they are to solve them. In this role, you will contribute to designing and implementing observability solutions in conjunction with architects, software engineers, data engineers, project management, and other SMEs.
RESPONSIBILITIES:
- Ensure that all security, availability, confidentiality, and privacy policies and controls are adhered to.
- Plan, design, acquire, implement, integrate, and test observability solutions of moderate complexity, comprised of Windows, Linux, and SaaS-based front-end and back-end components that support the company infrastructure, business processes and operations, and/or network-based (cloud) product systems.
- Work individually and collaboratively to deliver solutions in live production systems.
- Support, maintain, and resolve problems for the tools and services owned by the Observability Automation Tools team in live production systems, with occasional on-call availability.
- Actively contribute to the configuration, layout, and performance tuning of the production infrastructure.
- Support upgrade and go-live activities related to new customer onboarding projects.
- Attend and actively participate in weekly meetings.
- Participate in projects while working with the workgroup to achieve defined goals within set timelines.
- Perform IT functions such as application and infrastructure installation, patching, upgrades, and management related to the tools and services owned by the Observability Automation Tools team in live production systems.

HELPFUL EXPERIENCE AND KNOWLEDGE:
Typically requires a Bachelor's degree in a relevant field and a minimum of 2 years of related experience; or an advanced degree without experience; or equivalent work experience.
- Proficient in modern observability technologies and methods, including OpenTelemetry and correlation of logs, metrics, and traces across multiple sources.
- Able to work with RESTful API web services, integrations, and webhooks.
- Proficient in working with various database and data storage back-ends, including SQL and NoSQL.
- Experience with vendor-sourced tools and technologies such as Splunk, LogicMonitor, Datadog, PagerDuty, AppDynamics, Rundeck, Apica, Selenium, and similar.
- Experience with cloud provider tools and technologies such as AWS, Azure, CloudWatch, Cloudflare, and similar.
- Experience with Linux and Windows scripting, including bash/Perl/Python and PowerShell.
- Experience with graphing and analytics tools (Graphite/Grafana/Splunk).
- Experience with CI/CD deployment.
- Recent experience evaluating and solving business problems with data-driven analysis of systems and incidents.
- Knowledge and experience in the fintech industry preferred.
- Excellent communication skills, with the ability to remain patient with non-technical contacts.
- Knowledge of multiple infrastructure technologies, including storage, network, backups, and virtualization.
- Configuration management using Terraform and an understanding of container technologies such as Docker.
- Experienced with Agile sprint methodologies.
- Ability to think multi-dimensionally about problems and proposed solutions, and how a single change can impact entire environments.
- Comfortable working in a complex, collaborative group of teams.
- Desire to build solutions, drive adoption for them, and integrate them into larger ecosystems.
- Highly customer-oriented (internal and external); enjoys both teaching and learning from customers.
- Ability to work on problems of diverse scope, where analysis of the situation requires a review of identifiable factors.
- Must be able to exercise judgment within defined procedures to determine appropriate action.
- Must have strong organizational and multi-tasking skills to prioritize workload in a fast-paced environment.
- Moderate-to-advanced knowledge and troubleshooting of Windows Server, Linux, and cloud-based infrastructure and services.
- Experience with systems administration in public cloud datacenters such as AWS, Azure, or Google Cloud.
- Experience with configuration and system management tools in support of Linux systems.
Ability to work on problems of diverse scope, where analysis of the situation requires a review of identifiable factors
Must be able to exercise judgment within defined procedures to determine appropriate action
Must have strong organizational and multi-tasking skills to prioritize workload in a fast-paced environment
Moderate-to-advanced knowledge and troubleshooting of Windows Server, Linux, and cloud-based infrastructure and services
Experience with systems administration in public cloud datacenters such as AWS, Azure, or Google Cloud
Experience with configuration and system management tools in support of Linux systems
This position requires fluent written and oral communication in English.
Health & Wellness:
Hybrid Work Opportunities
Flexible Time Off
Career Development & Mentoring Programs
Health & Wellness Benefits, including competitive health insurance offerings and generous paid parental leave for eligible new parents
Community Volunteering & Company Philanthropy Programs
Employee Peer Recognition Programs - You Earned It
Posted 1 week ago
8.0 - 10.0 years
16 - 20 Lacs
Hyderabad
Work from Office
Data: integrate data in a flexible, open, scalable platform to power healthcare's digital transformation
Analytics: deliver analytic applications and services that generate insight on how to measurably improve
Expertise: provide clinical, financial, and operational experts who enable and accelerate improvement
Engagement: attract, develop, and retain world-class team members by being a best place to work
Job Title: Principal Snowflake Data Engineer / Data Engineering Lead
Experience: 8-10 Years
Employment Type: Full-Time
About the Role:
We are seeking a Principal Snowflake Data Engineer with 8-10 years of experience in data engineering, including deep specialization in the Snowflake Data Cloud, and a proven track record of technical leadership and team management. This role goes beyond individual contribution: you will also lead and mentor cross-functional teams across data synchronization, Data Operations, and ETL domains, driving best practices and architectural direction while ensuring the delivery of scalable, efficient, and secure data solutions across the organization.
Key Responsibilities
Technical Leadership
Own the architectural vision and implementation strategy for Snowflake-based data platforms.
Lead the design, optimization, and maintenance of ELT pipelines and data lake integrations with Snowflake.
Drive Snowflake performance tuning, warehouse sizing, clustering design, and cost governance.
Leverage Snowflake-native features like Streams, Tasks, Time Travel, Snowpipe, and Materialized Views for real-time and batch workloads.
Establish robust data governance, security policies (RBAC, data masking, row-level access), and regulatory compliance within Snowflake.
Ensure best practices in schema design, data modeling, and version-controlled pipeline development using tools like dbt, Airflow, and Git.
Team & People Management
Lead and mentor the data synchronization, Data Operations, and ETL engineering teams, ensuring alignment with business and data strategies.
Drive sprint planning, project prioritization, and performance management within the team.
Foster a culture of accountability, technical excellence, collaboration, and continuous learning.
Partner with product managers, business analysts, and senior leadership to translate business requirements into technical roadmaps.
Operational Excellence
Oversee end-to-end data ingestion and transformation pipelines using Spark, AWS Glue, and other cloud-native tools.
Implement CI/CD pipelines and observability for data operations.
Establish data quality monitoring, lineage tracking, and system reliability processes.
Champion automation and Infrastructure-as-Code practices across the Snowflake and data engineering stack.
Required Skills
8-10 years of data engineering experience, with at least 4-5 years of hands-on Snowflake expertise.
Proven leadership of cross-functional data teams (ETL, Data Operations, data synchronization).
Deep expertise in:
- Snowflake internals (clustering, caching, performance tuning)
- Streams, Tasks, Snowpipe, Materialized Views, UDFs
- Data governance (RBAC, secure views, masking policies)
Strong SQL and data modeling (dimensional and normalized)
Hands-on experience with:
- Apache Spark, PySpark, AWS Glue
- Orchestration frameworks (Airflow, dbt, Dagster, or AWS Step Functions)
- CI/CD and Git-based workflows
Strong understanding of data lakes, especially Delta Lake on S3 or similar
Nice to Have
Snowflake certifications (SnowPro Advanced Architect preferred)
Experience with Data Operations tools (e.g., Datadog, CloudWatch, Prometheus)
Familiarity with Terraform, CloudFormation, and serverless technologies (AWS Lambda, Docker)
Exposure to Databricks and distributed compute environments
Why Join Us
Lead and shape the future of data architecture and engineering in a high-impact, cloud-native environment.
Be the go-to Snowflake expert and technical mentor across the company.
Enjoy the opportunity to manage teams, drive innovation, and influence strategy at scale.
Flexible remote work options, high autonomy, and strong support for career development.
The above statements describe the general nature and level of work being performed in this job function. They are not intended to be an exhaustive list of all duties, and additional responsibilities may be assigned by Health Catalyst.
Studies show that candidates from underrepresented groups are less likely to apply for roles if they don't have 100% of the qualifications shown in the job posting. While each of our roles has core requirements, please thoughtfully consider your skills and experience and decide if you are interested in the position. If you feel you may be a good fit for the role, even if you don't meet all of the qualifications, we hope you will apply. If you feel you are lacking the core requirements for this position, we encourage you to continue exploring our careers page for other roles for which you may be a better fit.
At Health Catalyst, we appreciate the opportunity to benefit from the diverse backgrounds and experiences of others. Because of our deep commitment to respect every individual, Health Catalyst is an equal opportunity employer.
Posted 1 week ago
1.0 - 5.0 years
3 - 4 Lacs
Chennai
Work from Office
We are seeking a skilled and passionate Android Developer to join our dynamic team. In this role, you will be responsible for designing, developing, and maintaining high-quality Android applications that enhance our healthcare solutions. You will collaborate closely with cross-functional teams to deliver seamless and intuitive user experiences, contributing to the advancement of our mobile applications.
Key Responsibilities
Application Development: Design and develop advanced applications for the Android platform using Java and Kotlin.
UI/UX Collaboration: Work closely with UX/UI designers to translate wireframes and mockups into functional, user-friendly applications.
API Integration: Integrate third-party APIs and services to enhance application functionality and performance.
Code Quality: Write clean, maintainable, and efficient code adhering to best practices and coding standards.
Testing & Debugging: Conduct thorough testing and debugging to ensure the performance, quality, and responsiveness of applications.
Performance Optimization: Identify and address performance bottlenecks to ensure optimal application performance.
Continuous Learning: Stay updated with the latest industry trends, technologies, and best practices in Android development.
Collaboration: Participate in code reviews, team meetings, and collaborative efforts to achieve project goals.
Qualifications
Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
Experience: 1-5 years of experience in Android application development.
Technical Skills:
Proficiency in Java and Kotlin.
Strong knowledge of the Android SDK and Android Studio.
Experience with RESTful APIs and JSON.
Familiarity with version control systems such as Git.
Understanding of Android UI design principles and best practices.
Experience with offline storage, threading, and performance tuning.
Soft Skills:
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Ability to work independently and as part of a team. Attention to detail and commitment to quality.
Posted 1 week ago
10.0 - 15.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Job Title: Principal Data Engineer
About Skyhigh Security:
Skyhigh Security is a dynamic, fast-paced cloud company that is a leader in the security industry. Our mission is to protect the world's data, and because of this, we live and breathe security. We value learning at our core, underpinned by openness and transparency. Skyhigh Security is more than a company; here, when you invest your career with us, we commit to investing in you. We embrace a hybrid work model, creating the flexibility and freedom you need from your work environment to reach your potential. From our employee recognition program to our Blast Talks learning series and team celebrations (we love to have fun!), we strive to be an interactive and engaging place where you can be your authentic self. Follow us on LinkedIn and Twitter @SkyhighSecurity.
Role Overview:
Our Engineering team is driving the future of cloud security, developing one of the world's largest, most resilient cloud-native data platforms. At Skyhigh Security, we're enabling enterprises to protect their data with deep intelligence and dynamic enforcement across hybrid and multi-cloud environments.
As we continue to grow, we're looking for a Principal Data Engineer to help us scale our platform, integrate advanced AI/ML workflows, and lead the evolution of our secure data infrastructure.
Responsibilities:
As a Principal Data Engineer, you will be responsible for:
Leading the design and implementation of high-scale, cloud-native data pipelines for real-time and batch workloads.
Collaborating with product managers, architects, and backend teams to translate business needs into secure and scalable data solutions.
Integrating big data frameworks (like Spark, Kafka, Flink) with cloud-native services (AWS/GCP/Azure) to support security analytics use cases.
Driving CI/CD best practices, infrastructure automation, and performance tuning across distributed environments.
Evaluating and piloting the use of AI/LLM technologies in data pipelines (e.g., anomaly detection, metadata enrichment, automation).
Evaluating and integrating LLM-based automation and AI-enhanced observability into engineering workflows.
Ensuring data security and privacy compliance.
Mentoring engineers, ensuring high engineering standards, and promoting technical excellence across teams.
What We're Looking For (Minimum Qualifications)
10+ years of experience in big data architecture and engineering, including deep proficiency with the AWS cloud platform.
Expertise in distributed systems and frameworks such as Apache Spark, Scala, Kafka, Flink, and Elasticsearch, with experience building production-grade data pipelines.
Strong programming skills in Java for building scalable data applications.
Hands-on experience with ETL tools and orchestration systems.
Solid understanding of data modeling across both relational (PostgreSQL, MySQL) and NoSQL (HBase) databases, and performance tuning.
What Will Make You Stand Out (Preferred Qualifications)
Experience integrating AI/ML or LLM frameworks (e.g., LangChain, LlamaIndex) into data workflows.
Experience implementing CI/CD pipelines with Kubernetes, Docker, and Terraform.
Knowledge of modern data warehousing (e.g., BigQuery, Snowflake) and data governance principles (GDPR, HIPAA).
Strong ability to translate business goals into technical architecture and mentor teams through delivery.
Familiarity with visualization tools (Tableau, Power BI) to communicate data insights, even if not a primary responsibility.
Company Benefits and Perks:
We believe that the best solutions are developed by teams who embrace each other's unique experiences, skills, and abilities. We work hard to create a dynamic workforce where we encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours, and family-friendly benefits to all of our employees.
Retirement Plans
Medical, Dental and Vision Coverage
Paid Time Off
Paid Parental Leave
Support for Community Involvement
We're serious about our commitment to a workplace where everyone can thrive and contribute to our industry-leading products and customer support, which is why we prohibit discrimination and harassment based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation, or any other legally protected status.
Posted 1 week ago
8.0 - 13.0 years
9 Lacs
Bengaluru
Work from Office
Job Title: Database Consultant-I (PostgreSQL)
Company: Mydbops
About Us:
Mydbops is a trusted leader with 8+ years of excellence in open-source database management. We deliver best-in-class services across MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more. Our focus is on building scalable, secure, and high-performance database solutions for global clients. As a PCI DSS-certified and ISO-certified organisation, we are committed to operational excellence and data security.
Role Overview:
As a Database Consultant-I (PostgreSQL), you will take ownership of PostgreSQL database environments, offering expert-level support to our clients. This role involves proactive monitoring, performance tuning, troubleshooting, high availability setup, and guiding junior team members. You will play a key role in customer-facing technical delivery, solution design, and implementation.
Key Responsibilities:
Manage PostgreSQL production environments for performance, stability, and scalability.
Handle complex troubleshooting, performance analysis, and query optimisation.
Implement backup strategies, recovery solutions, replication, and failover techniques.
Set up and manage high availability architectures (Streaming Replication, Patroni, etc.).
Work with DevOps/cloud teams for deployment and automation.
Support upgrades, patching, and migration projects across environments.
Use monitoring tools to proactively detect and resolve issues.
Mentor junior engineers and guide troubleshooting efforts.
Interact with clients to understand requirements and deliver solutions.
Requirements:
3-5 years of hands-on experience in PostgreSQL database administration.
Strong Linux OS knowledge and scripting skills (Bash/Python).
Proficiency in SQL tuning, performance diagnostics, and explain plans.
Experience with tools like pgBackRest and Barman for backup and recovery.
Familiarity with high availability, failover, replication, and clustering.
Good understanding of AWS RDS, Aurora PostgreSQL, and GCP Cloud SQL.
Experience with monitoring tools like pg_stat_statements, PMM, Nagios, or custom dashboards.
Knowledge of automation/configuration tools like Ansible or Terraform is a plus.
Strong communication and problem-solving skills.
Preferred Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or equivalent.
PostgreSQL certification (EDB/cloud certifications preferred).
Past experience in a consulting, customer support, or managed services role.
Exposure to multi-cloud environments and database-as-a-service platforms.
Prior experience with database migrations or modernisation projects.
Why Join Us:
Opportunity to work in a dynamic and growing industry.
Learning and development opportunities to enhance your career.
A collaborative work environment with a supportive team.
Job Details:
Job Type: Full-time
Work Days: 5 days
Work Mode: Work From Home
Experience Required: 3-5 years
Posted 1 week ago
3.0 - 8.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Summary:
We are looking for an experienced QA Engineer with a strong foundation in manual testing, database validation, SQL queries, and Sybase to SQL migration. The ideal candidate should be detail-oriented, analytical, and capable of ensuring high-quality software through structured testing methodologies.
Key Responsibilities:
Design, develop, and execute manual test cases for functional, regression, and integration testing.
Conduct database testing to ensure integrity, consistency, and accuracy in data migrations.
Write and optimize SQL queries to validate data transformations and troubleshoot migration issues.
Lead and support Sybase to SQL migration by identifying risks, validating data mapping, and ensuring seamless transitions.
Collaborate with developers, DBAs, and stakeholders to define and implement robust testing strategies.
Identify, document, and track defects using QA tools such as JIRA or TestRail.
Ensure test environments are maintained for accurate validation of software and database updates.
Required Skills & Qualifications:
3+ years of experience in QA manual testing with exposure to database validation.
Proficiency in SQL queries for data validation and performance tuning.
Hands-on experience with Sybase to SQL migration projects.
Strong understanding of testing methodologies, bug reporting, and defect tracking.
Experience working in Agile environments with cross-functional teams.
Excellent problem-solving skills and attention to detail.
Preferred Skills:
Knowledge of test automation tools (Selenium, TestNG, etc.) for future scalability.
Familiarity with CI/CD pipelines and testing integration processes.
Ability to optimize database queries for efficiency.
Posted 1 week ago
6.0 - 8.0 years
6 - 10 Lacs
Pune, Bengaluru
Work from Office
Position: Database Administrator II
Job Description:
Database Administrators are on the frontline of EBS application support. They are focused on helping to resolve technical issues end users encounter, and are responsible for upgrades, migrations, and other project-related activities.
What You'll Be Doing:
Will be part of the DBA organization
Will work with the Application Development teams to review code and recommend tuning for bad code
Will be responsible for coordinating performance-related issues and providing a fix or redirecting to the appropriate teams
Will be responsible for providing tuning recommendations for new projects undertaken by the Application Development team and DBA team
What We Are Looking For:
Oracle DBA with at least 6-8 years of experience
At least 2 years of experience in tuning code using tools like SQL Profiler, SQLT Explain, or other custom performance tools
Working knowledge of tools like AWR, ASH, and SQL Tuning Advisor
Good knowledge of Oracle performance tuning hints
Should be able to look at bad SQL code and recommend tuning opportunities
Knowledge of Oracle eBusiness Suite is a plus
About Arrow
Arrow Electronics, Inc. (NYSE: ARW) is an award-winning Fortune 110 company and one of Fortune Magazine's Most Admired Companies. Arrow guides innovation forward for over 175,000 leading technology manufacturers and service providers. With 2019 sales of USD $29 billion, Arrow develops technology solutions that improve business and daily life. Our broad portfolio, which spans the entire technology landscape, helps customers create, make, and manage forward-thinking products that make the benefits of technology accessible to as many people as possible. Learn more at www.arrow.com. For more job opportunities, please visit https://careers.arrow.com/us/en.
Location: IN-KA-Bangalore, India (SKAV Seethalakshmi) GESC
Time Type: Full time
Job Category: Information Technology
Posted 1 week ago
8.0 - 13.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Role Overview:
Ema is on the lookout for a highly skilled and experienced Staff Machine Learning Engineer to spearhead the development of next-generation AI technologies. This role is pivotal in driving the evolution of EmaFusion and other proprietary models to empower enterprises globally by enhancing productivity through AI-driven automation. As a Staff Machine Learning Engineer, you will lead complex projects that blend deep learning with practical applications, ensuring that Ema's AI systems are at the forefront of innovation.
Roles and Responsibilities:
Lead the design, development, and deployment of state-of-the-art machine learning models, with a strong emphasis on scalability and reliability across various enterprise use cases.
Collaborate with cross-functional teams including data scientists, software engineers, and product managers to translate business requirements into actionable ML solutions.
Drive the innovation and enhancement of EmaFusion and the Generative Workflow Engine (GWE) by integrating cutting-edge research into real-world applications.
Mentor and guide junior engineers and data scientists, fostering a culture of knowledge sharing and continuous learning within the team.
Ensure the robustness and security of machine learning models by implementing best practices in data protection and compliance with international standards.
Contribute to strategic planning by identifying new opportunities for machine learning applications within the enterprise ecosystem, and aligning them with the company's long-term vision.
Communicate complex technical concepts and project updates to both technical and non-technical stakeholders through presentations, reports, and documentation.
Qualifications:
PhD in Computer Science, Machine Learning, or a related field; or a Master's degree with 8+ years of relevant experience.
10+ years of hands-on experience in designing and deploying machine learning models in production environments, with a proven track record of delivering scalable solutions.
Proficiency in programming languages such as Python, Java, or C++, and extensive experience with deep learning frameworks like TensorFlow, PyTorch, or similar.
Expertise in neural networks, reinforcement learning, and generative models, with a solid understanding of their practical applications in large-scale systems.
Experience working with large datasets and distributed computing systems, with a focus on optimization and performance tuning.
Strong problem-solving skills, with the ability to navigate complex challenges and devise innovative solutions that drive the company's AI capabilities forward.
Excellent communication skills, with the ability to convey technical information clearly and effectively to diverse audiences.
A strong publication record in top-tier conferences or journals, and experience in patenting innovative solutions, is highly desirable.
Why Join Ema:
At Ema, we are building the future of work by leveraging AI to empower enterprises worldwide. You will be part of a dynamic and diverse team that values creativity, innovation, and a deep commitment to excellence. Join us in shaping the future of AI technology, where your work will directly impact the productivity and success of global enterprises. We offer competitive compensation, a collaborative work environment, and the opportunity to work alongside some of the brightest minds in the industry.
Posted 1 week ago
2.0 - 5.0 years
6 Lacs
Ahmedabad
Work from Office
Company Overview:
At Webito Infotech, we are a young and enthusiastic team with a passion for technology. We embrace innovation and think big, unafraid to stand out from the crowd. We believe that every aspect of web pages and app UI can create a unique and special experience, transforming businesses into exceptional ones. Our mission is to provide businesses with exceptional web and app experiences that reflect their brand identity and create lasting impressions through user-centric design.
Job Summary:
We are looking for a skilled .NET Core Developer with 2-5 years of experience in .NET technology, particularly in handling projects under .NET Core, and a proven track record of delivering high-quality software solutions.
Key Responsibilities:
Design, develop, and maintain microservices using .NET Core.
Implement RESTful APIs and ensure integration with front-end components and other microservices.
Collaborate with the team to define, design, and ship new features.
Optimize applications for maximum speed, scalability, and security.
Write clean, scalable code following best practices, including SOLID principles and design patterns.
Troubleshoot, debug, and upgrade existing software.
Perform code reviews to maintain code quality and ensure adherence to coding standards.
Utilize GitHub for version control, collaboration, and continuous integration.
Work with SQL Server for database design, optimization, and query performance tuning. Write stored procedures and queries when required.
Integrate and work with MongoDB when required, leveraging its advantages in specific scenarios. Write MongoDB queries when required.
Participate in Agile ceremonies and contribute to the continuous improvement of development processes.
Required Skills & Qualifications:
Technical Skills
Bachelor's degree in Computer Science, Software Engineering, or a related field.
2-5 years of experience in software development, specifically with .NET Core and microservices architecture.
Proficient in C#, .NET Core, and building RESTful APIs.
Strong knowledge of SQL Server and experience with database design and optimization.
Familiarity with GitHub for source control and collaboration.
Excellent problem-solving skills and logical thinking.
Understanding of microservices patterns like service discovery, circuit breakers, and distributed tracing.
Experience with MongoDB is a plus.
Soft Skills
Strong communication skills and the ability to work effectively in a team environment.
Self-motivated with a passion for learning new technologies and methodologies.
Attention to detail and a commitment to delivering high-quality software.
Why Join Us
Be part of a forward-thinking and innovative team.
Work on challenging projects using cutting-edge technologies.
Enjoy a supportive culture that encourages growth and creativity.
If you are a skilled .NET Core Developer with the required expertise and a passion for problem-solving, we'd love to hear from you! Apply now to join Webito Infotech and make an impact with your talent.
Posted 1 week ago
4.0 - 10.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Job Overview:
We are looking for a skilled and experienced Snowflake Data Engineer to join our team. The ideal candidate will have 4-10 years of experience in data engineering, with at least 3 years of hands-on expertise with Snowflake. You will be responsible for designing, building, and optimizing data pipelines and workflows, ensuring data integrity and performance within our Snowflake environment. Strong proficiency in SQL, advanced knowledge of Snowflake's features, and experience with SQL Server to Snowflake migrations are crucial for this role. Familiarity with Microsoft technologies such as SSIS, SSAS, and SSMS, as well as a proactive problem-solving approach, are also essential.
Key Responsibilities:
Data Pipeline Design: Design, build, and optimize data pipelines and workflows using Snowflake, Azure, and dbt to support data ingestion, transformation, and analytics.
Performance Optimization: Optimize query performance and data storage within the Snowflake environment to ensure efficient data processing and retrieval.
ETL/ELT Processes: Design and implement ETL/ELT processes for effective data ingestion and transformation, leveraging Snowflake's capabilities.
Cross-Functional Collaboration: Work with cross-functional teams to understand data requirements and deliver high-quality, scalable data solutions.
Data Governance: Ensure adherence to data security, compliance, and governance standards, including data privacy and regulatory requirements.
Required Qualifications:
Education: Bachelor's degree in Computer Science, Information Systems, or a related field.
Experience: 4-10 years of experience in data engineering or related roles, with a minimum of 3 years of hands-on experience with Snowflake.
SQL Proficiency: Expert-level proficiency in SQL, including advanced data querying, performance tuning, and data modeling techniques.
Snowflake Expertise: In-depth understanding of Snowflake architecture, features, and best practices for optimizing performance and managing data.
Data Warehousing: Strong knowledge of data warehousing concepts and dimensional modeling.
Microsoft Technologies: Familiarity with the Microsoft suite of technologies, including SSIS, SSAS, and SSMS.
Infrastructure as Code: Knowledge of Infrastructure as Code (IaC) tools like Terraform for automating infrastructure management.
CI/CD: Experience with GitHub Actions and an understanding of Continuous Integration/Continuous Deployment (CI/CD) practices.
Workflow Orchestration: Experience with workflow orchestration tools such as Airflow for managing and automating data workflows.
Communication: Excellent problem-solving abilities with a proactive and positive attitude. Strong written and verbal communication skills are essential.
Preferred Qualifications:
Migration Experience: Experience with SQL Server to Snowflake migration using Fivetran, including data integration and transformation.
DBT Proficiency: Proficiency in using the data build tool (dbt) for data transformation and modeling.
Fivetran Experience: Experience with Fivetran for data integration and ELT processes.
Cloud Platforms: Knowledge of cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
Certifications: Relevant certifications, such as Snowflake SnowPro, AWS Certified Data Analytics, or Azure Data Engineer.
Posted 1 week ago
10.0 - 15.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Job Summary:
We are looking for a highly skilled and experienced Data Architect to join our team. The ideal candidate will have a deep understanding of big data technologies and experience working with Hadoop, Python, Snowflake, and Databricks. As a Data Architect, you will be responsible for designing, implementing, and managing complex data architectures that support our business needs and objectives.
Key Responsibilities:
Design and Architecture:
Design scalable and efficient data architecture solutions to meet the business's current and future data needs.
Lead the development of data models, schemas, and databases that align with business requirements.
Architect and implement solutions on cloud platforms such as AWS, Azure, or GCP.
Data Management:
Develop and maintain data pipelines and ETL processes using Hadoop, Databricks, and other tools.
Oversee data integration and data quality efforts to ensure data consistency and reliability across the organization.
Implement data governance and best practices for data security, privacy, and compliance.
Collaboration and Leadership:
Work closely with data engineers, data scientists, and business stakeholders to understand data requirements and translate them into technical solutions.
Provide technical leadership and mentorship to junior data engineers and architects.
Collaborate with cross-functional teams to ensure data solutions align with business goals.
Optimization and Performance:
Optimize existing data architectures for performance, scalability, and cost-efficiency.
Monitor and troubleshoot data systems to ensure high availability and reliability.
Continuously evaluate and recommend new tools and technologies to improve data architecture.
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field. A Master's degree is preferred.
10+ years of experience in data architecture, data engineering, or a related field.
Proven experience with Hadoop ecosystems (HDFS, MapReduce, Hive, HBase). Strong programming skills in Python for data processing and automation. Hands-on experience with Snowflake and Databricks for data warehousing and analytics. Experience with cloud platforms (AWS, Azure, GCP) and their data services. Familiarity with data modeling tools and methodologies. Skills: Deep understanding of big data technologies and distributed computing. Strong problem-solving skills and the ability to design solutions to complex data challenges. Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders. Knowledge of SQL and database performance tuning. Experience with CI/CD pipelines and automation in data environments. Preferred Qualifications: Certification in cloud platforms such as AWS Certified Data Analytics, Google Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate. Experience with additional programming languages like Java or Scala. Knowledge of machine learning frameworks and their integration with data pipelines
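As an illustration of the extract-transform-load work this role describes, here is a minimal sketch in pure Python. The stage names, sample records, and quality rule are all invented for the example; in practice the same shape would be implemented at scale on Hadoop, Databricks, or Snowflake.

```python
# Minimal ETL sketch: the extract -> transform -> load shape that big data
# pipelines implement at scale. All names and records are illustrative.

def extract(rows):
    """Yield raw source records (stand-in for reading from HDFS or a lake)."""
    yield from rows

def transform(records):
    """Normalize fields and drop rows that fail a basic quality check."""
    for r in records:
        if r.get("amount") is None:
            continue  # data-quality rule: skip incomplete rows
        yield {"region": r["region"].strip().upper(),
               "amount": round(float(r["amount"]), 2)}

def load(records):
    """Aggregate into the 'warehouse' (here, just a dict keyed by region)."""
    out = {}
    for r in records:
        out[r["region"]] = out.get(r["region"], 0.0) + r["amount"]
    return out

raw = [{"region": " south ", "amount": "10.5"},
       {"region": "NORTH", "amount": None},
       {"region": "South", "amount": "4.5"}]
warehouse = load(transform(extract(raw)))
print(warehouse)  # {'SOUTH': 15.0}
```

The generator-based stages keep each step independently testable, which is the same separation of concerns a production pipeline framework enforces.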
Posted 1 week ago
10.0 - 15.0 years
14 - 19 Lacs
Hyderabad, Bengaluru
Work from Office
Job Overview: GrowthArc is seeking a dynamic and strategic Senior Integration Architect to lead solution design, pre-sales support, and integration architecture across diverse enterprise initiatives. Positioned as a middle management role, this individual will not only architect and guide technical delivery but also respond to RFPs, build reusable assets, and establish Centers of Excellence (CoEs) to scale integration capabilities across the organization. This role bridges business needs and technical execution, ensuring GrowthArc delivers future-ready, scalable, and secure integration solutions.
Key Responsibilities of Senior Integration Architect:
Own the architecture and design of integration solutions across cloud and hybrid environments, ensuring alignment with business needs and technical standards.
Lead pre-sales and RFP responses by crafting solution approaches, technical estimates, and architectural presentations.
Collaborate with senior stakeholders, account leads, and delivery teams to define integration strategy and roadmap.
Drive standardization and reusability by developing templates, accelerators, and best practices as part of a growing Integration CoE.
Provide technical leadership and mentorship to engineering teams, enabling high-quality solution delivery.
Evaluate, select, and recommend integration tools and technologies; drive platform upgrades or migrations where required.
Oversee architecture reviews, ensure performance tuning, and support security/compliance efforts.
Collaborate cross-functionally with ERP, SaaS, cloud, and data teams to deliver holistic automation solutions.
Required Qualifications & Experience:
Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.
10+ years of experience in integration architecture or enterprise solution design, with a focus on cloud and hybrid ecosystems.
Experience working with enterprise clients across multiple domains (Finance, HR, Procurement, etc.).
Skills:
Proven expertise in at least two of the following iPaaS platforms: Workato, MuleSoft, Dell Boomi.
Strong understanding of cloud platforms (AWS, Azure, or GCP) and their integration services.
Hands-on experience with API design (REST/SOAP), data transformation (JSON, XML), and orchestration.
Demonstrated experience in guiding and owning the establishment of Integration CoEs, including development of governance frameworks, reusable assets, and best practices.
Proven ability to design, build, and scale accelerators, templates, and integration utilities that enable faster and more consistent solution delivery.
Experience crafting and presenting RFP and pre-sales solutions, including level-of-effort and pricing guidance.
Excellent communication and stakeholder engagement skills.
Preferred Skills:
Exposure to Tines, Camunda, or similar automation/BPM platforms.
Familiarity with DevOps, CI/CD pipelines, and observability for integration performance and issue resolution.
Understanding of data privacy, security, and compliance regulations in integration contexts.
Preferred Qualifications:
iPaaS certifications (e.g., Workato Automation Pro, MuleSoft Certified Architect).
Cloud certifications (e.g., AWS Solutions Architect, Azure Architect Expert).
Prior experience working in Agile and customer-facing roles such as solution consulting or technical pre-sales.
Strong organizational skills with the ability to manage multiple priorities and stakeholders.
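The JSON-to-XML data transformation skill listed above can be sketched with nothing but the standard library. The payload fields and tag names below are invented for illustration; real iPaaS platforms such as MuleSoft or Boomi provide this mapping as a configurable step.

```python
# Sketch of a flat JSON -> XML payload transformation using only the
# standard library. Tag names and the sample payload are illustrative.
import json
import xml.etree.ElementTree as ET

def json_to_xml(payload: str, root_tag: str = "employee") -> str:
    """Map a flat JSON object onto a simple XML element tree."""
    data = json.loads(payload)
    root = ET.Element(root_tag)
    for key, value in data.items():
        child = ET.SubElement(root, key)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

payload = '{"id": 42, "dept": "Procurement"}'
print(json_to_xml(payload))
# <employee><id>42</id><dept>Procurement</dept></employee>
```

Production integrations would also need schema validation, namespaces, and nested-structure handling, but the core mapping idea is the same.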
Posted 1 week ago
6.0 - 8.0 years
5 - 9 Lacs
Pune, Bengaluru
Work from Office
Position: Database Administrator II
Job Description: Database Administrators are on the frontline of EBS application support. They focus on resolving technical issues that end users encounter and are responsible for upgrades, migrations, and other project-related activities.
What You'll Be Doing:
Be part of the DBA organization.
Work with the Application Development teams to review code and recommend tuning for inefficient code.
Coordinate performance-related issues and provide a fix or redirect to the appropriate teams.
Provide tuning recommendations for new projects undertaken by the Application Development and DBA teams.
What We Are Looking For:
Oracle DBA with at least 6-8 years of experience.
At least 2 years of experience tuning code using tools such as SQL Profiler, SQLT, Explain Plan, or custom performance tools.
Working knowledge of tools such as AWR, ASH, and SQL Tuning Advisor.
Good knowledge of Oracle performance tuning hints.
Able to review inefficient SQL code and recommend tuning opportunities.
Knowledge of Oracle E-Business Suite is a plus.
About Arrow: Arrow Electronics, Inc. (NYSE: ARW) is an award-winning Fortune 110 company and one of Fortune Magazine's Most Admired Companies. Arrow guides innovation forward for over 175,000 leading technology manufacturers and service providers. With 2019 sales of $29 billion USD, Arrow develops technology solutions that improve business and daily life. Our broad portfolio spans the entire technology landscape, helping customers create, make, and manage forward-thinking products that make the benefits of technology accessible to as many people as possible. Learn more at www.arrow.com. Our strategic direction of guiding innovation forward is expressed as "Five Years Out," a way of thinking about the tangible future to bridge the gap between what's possible and the practical technologies to make it happen. Learn more at https://www.fiveyearsout.com/.
For more job opportunities, please visit https://careers.arrow.com/us/en.
Location: IN-KA-Bangalore, India (SKAV Seethalakshmi) GESC
Time Type: Full time
Job Category: Information Technology
Posted 1 week ago
9.0 - 14.0 years
14 - 20 Lacs
Pune
Remote
Are you an experienced Oracle DBA looking for a challenging role? We're seeking a highly skilled and proactive Senior Database Engineer to join our team and provide critical Oracle database support. In this role, you'll be responsible for the health, performance, and stability of our Oracle database environments, including RAC, ASM, Clusterware, Multitenant architectures, and OEM administration. If you're passionate about database optimization and proactive monitoring and thrive in a dynamic environment, we encourage you to apply!
Primary Responsibilities: The Offshore Senior Database Engineer has the primary focus on:
Monitoring database health and performance (RAC, non-RAC, and Data Guard instances).
Reviewing quarterly database patching requirements/steps and executing the patch process.
Reviewing quarterly OEM patching requirements/steps and executing the patch process.
Proactively monitoring and acting on nightly jobs, e.g., RMAN backups and data purging.
Providing overall technical support for the Oracle database: database shutdown/startup, issue troubleshooting, incident and alert handling, and responding to requests from DevOps and other IT teams.
Performing database/query monitoring and ongoing performance tuning and optimization activities.
Position Requirements:
Must have 7+ years of experience focused on Oracle DBA administration in medium to large corporate environments, including 3+ years of Oracle 19c administration (Multitenant PDB/CDB, Oracle ASM, Oracle Clusterware, Data Guard, Oracle single instance, and the patching process).
Installation and configuration of OEM 13c for database monitoring and management.
At least 5 years of experience in performance tuning at both the instance and SQL statement level, with a good understanding of SQL Profiles and query optimization; able to identify the culprit both with tools (e.g., OEM) and without them (querying the Oracle data dictionary).
At least 5 years of experience with backup and recovery procedures using RMAN.
System knowledge and experience with Unix/Linux operating systems.
Working knowledge of scripting/programming (Shell, Python), GitHub, and automation tools (AWX or Ansible) is a plus but not required.
Knowledge, Skills & Abilities:
Must be a creative problem-solver, flexible, and proactive.
Very good people skills, plus written and oral communication skills.
Takes responsibility; plans and structures all assigned tasks (technical or non-technical) to deliver a business-efficient service to customers.
Key Performance Indicators:
Ability to manage multiple assignments (e.g., database/cluster patching, OEM management, and troubleshooting) comfortably.
Attentive to operational process (e.g., incident management; picking up, resolving, and closing tickets on time; using the SN monitoring tool to proactively monitor the database platform; change management).
Attentive to the Agile process (e.g., story management; timely pickup, development, implementation, and closure).
Onboarding Time: Immediate to 15 days only.
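The Shell/Python scripting skill this posting mentions is often applied to proactive monitoring like the checks described above. Here is a hedged, self-contained sketch of a tablespace-usage alert; the sample numbers stand in for what a real script would fetch from views such as DBA_DATA_FILES and DBA_FREE_SPACE.

```python
# Illustrative monitoring sketch: flag tablespaces above a usage threshold.
# The sample data is invented; a real script would query the Oracle data
# dictionary (DBA_DATA_FILES / DBA_FREE_SPACE) for these figures.

def over_threshold(tablespaces, threshold_pct=85.0):
    """Return (name, used%) for tablespaces exceeding the threshold."""
    alerts = []
    for name, used_mb, total_mb in tablespaces:
        pct = 100.0 * used_mb / total_mb
        if pct > threshold_pct:
            alerts.append((name, round(pct, 1)))
    return alerts

sample = [("SYSTEM", 900, 1000), ("USERS", 400, 1000), ("UNDOTBS1", 880, 1000)]
print(over_threshold(sample))  # [('SYSTEM', 90.0), ('UNDOTBS1', 88.0)]
```

In practice the alert list would feed an incident tool rather than stdout, but the threshold logic is the core of most capacity checks.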
Posted 1 week ago
8.0 - 13.0 years
15 - 30 Lacs
Noida, Gurugram, Delhi / NCR
Hybrid
Hiring: Sr. SQL DBA
Location: Gurugram (Hybrid), Full time
Education: Bachelor's or higher degree.
10+ years of experience working in an IT services organization.
At least 8+ years of SQL DBA experience covering implementation, production support, and project work.
At least 5+ years of exposure in an L3/Lead/SME support role.
Knowledge and understanding of the ITIL process.
In client-facing roles globally for at least 5 years.
Excellent communication skills.
On-call support for at least 5 years.
Excellent documentation skills in MS SharePoint 2010/2013.
Mandatory skills - SQL Server & cloud administration:
Senior-level Microsoft SQL Server DBA with experience in large and critical environments.
Excellent knowledge of performance tuning at both the server level and query level; must have knowledge of Perfmon and SQL Server dynamic management views.
Knowledge of SQL Server internals.
Knowledge of SQL partitioning and compression.
Hands-on experience with mirroring, replication, and Always On.
Must have automation experience and good knowledge of scripting; able to write T-SQL scripts, PowerShell, or Python.
Excellent knowledge of SSIS packages.
Hands-on experience with SQL Server clustering.
Hands-on experience with Integration Services, Reporting Services, and Analysis Services.
Good knowledge of SQL Server permission/security policies.
At least 6 months of experience with AWS Cloud RDS.
Good-to-have skills:
MySQL knowledge/experience.
Linux knowledge.
Experience with Snowflake.
Certifications Desired:
MS certifications on the latest MSSQL versions.
AWS certifications.
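The automation and scripting requirement above often covers maintenance decisions like index defragmentation. As a hedged sketch, the function below encodes Microsoft's commonly cited rule of thumb (reorganize between roughly 5% and 30% fragmentation, rebuild above 30%); the index names are invented, and the fragmentation figures would normally come from `sys.dm_db_index_physical_stats`.

```python
# Automation sketch of a common SQL Server maintenance decision.
# Thresholds follow Microsoft's general guidance: REORGANIZE for ~5-30%
# fragmentation, REBUILD above 30%. Inputs are illustrative; a real script
# would query sys.dm_db_index_physical_stats for these percentages.

def maintenance_action(frag_pct: float) -> str:
    """Pick an index maintenance action from a fragmentation percentage."""
    if frag_pct > 30.0:
        return "REBUILD"
    if frag_pct >= 5.0:
        return "REORGANIZE"
    return "NONE"

for index, frag in [("IX_Orders_Date", 42.0),
                    ("IX_Cust_Name", 12.5),
                    ("PK_Orders", 1.0)]:
    print(index, maintenance_action(frag))
```

A real automation job would emit the corresponding `ALTER INDEX ... REBUILD/REORGANIZE` statements instead of printing labels.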
Posted 1 week ago
5.0 - 9.0 years
12 - 22 Lacs
Hyderabad, Bengaluru
Hybrid
Position: PySpark Data Engineer
Location: Bangalore / Hyderabad
Experience: 5 to 9 years
Job Type: On Role
Job Description - PySpark Data Engineer:
1. API Development: Design, develop, and maintain robust APIs using FastAPI and RESTful principles for scalable backend systems.
2. Big Data Processing: Leverage PySpark to process and analyze large datasets efficiently, ensuring optimal performance in big data environments.
3. Full-Stack Integration: Develop seamless backend-to-frontend feature integrations, collaborating with front-end developers for cohesive user experiences.
4. CI/CD Pipelines: Implement and manage CI/CD pipelines using GitHub Actions and Azure DevOps to streamline deployments and ensure system reliability.
5. Containerization: Utilize Docker for building and deploying containerized applications in development and production environments.
6. Team Leadership: Lead and mentor a team of developers, providing guidance, code reviews, and support to junior team members to ensure high-quality deliverables.
7. Code Optimization: Write clean, maintainable, and efficient Python code, with a focus on scalability, reusability, and performance.
8. Cloud Deployment: Deploy and manage applications on cloud platforms like Azure, ensuring high availability and fault tolerance.
9. Collaboration: Work closely with cross-functional teams, including product managers and designers, to translate business requirements into technical solutions.
10. Documentation: Maintain thorough documentation for APIs, processes, and systems to ensure transparency and ease of maintenance.
Highlighted Skillset:
Big Data: Strong PySpark skills for processing large datasets.
DevOps: Proficiency in GitHub Actions, CI/CD pipelines, Azure DevOps, and Docker.
Integration: Experience in backend-to-frontend feature connectivity.
Leadership: Proven ability to lead and mentor development teams.
Cloud: Knowledge of deploying and managing applications in Azure or other cloud environments.
Team Collaboration: Strong interpersonal and communication skills for working in cross-functional teams.
Best Practices: Emphasis on clean code, performance optimization, and robust documentation.
Interested candidates, kindly share your CV and the details below to usha.sundar@adecco.com:
1) Present CTC (Fixed + VP)
2) Expected CTC
3) No. of years of experience
4) Notice period
5) Offer in hand
6) Reason for change
7) Present location
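Since the posting centres on PySpark, here is a sketch of the classic word-count shape. Because real PySpark needs a Spark runtime, the example below mimics the `flatMap -> map -> reduceByKey` stages in pure Python; in actual PySpark this would be roughly `sc.textFile(...).flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(add)`.

```python
# Pure-Python sketch mirroring the flatMap -> map -> reduceByKey shape of a
# typical PySpark word-count job. The input lines are invented sample data.
from collections import defaultdict

lines = ["big data big pipelines", "data engineers build pipelines"]

# flatMap + map: split each line into words, pair each word with a count of 1
pairs = [(w, 1) for line in lines for w in line.split()]

# reduceByKey: sum the counts per word
counts = defaultdict(int)
for word, n in pairs:
    counts[word] += n

print(dict(counts))
# {'big': 2, 'data': 2, 'pipelines': 2, 'engineers': 1, 'build': 1}
```

The point of the Spark version of this shape is that each stage is distributed across partitions, with `reduceByKey` combining locally before shuffling, which is what makes it scale to the large datasets the role describes.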
Posted 1 week ago
6.0 - 10.0 years
27 - 42 Lacs
Chennai
Work from Office
Job Summary: We are seeking a highly skilled Sr. Developer with 3 to 10 years of experience specializing in Reltio MDM. The ideal candidate will work in a hybrid model with day shifts. This role does not require travel. The candidate will contribute to the company's mission by developing and maintaining high-quality MDM solutions that drive business success and societal impact.
Responsibilities:
Develop and maintain Reltio MDM solutions to ensure data quality and integrity.
Collaborate with cross-functional teams to gather and analyze business requirements.
Design and implement data models and workflows in Reltio MDM.
Provide technical expertise and support for Reltio MDM configurations and customizations.
Conduct performance tuning and optimization of Reltio MDM applications.
Ensure compliance with data governance and security policies.
Troubleshoot and resolve issues related to Reltio MDM.
Create and maintain technical documentation for Reltio MDM solutions.
Participate in code reviews and provide constructive feedback to team members.
Stay updated with the latest trends and best practices in MDM and data management.
Contribute to the continuous improvement of development processes and methodologies.
Mentor junior developers and provide guidance on best practices.
Collaborate with stakeholders to ensure successful project delivery.
Qualifications:
Strong expertise in Reltio MDM and data management.
Solid understanding of data modeling and data integration techniques.
Proficiency in performance tuning and optimization.
Experience in troubleshooting and resolving technical issues.
Excellent communication and collaboration skills.
Strong attention to detail and a commitment to quality.
Ability to work independently and as part of a team.
Proactive approach to learning and staying current with industry trends.
Bachelor's degree in Computer Science or a related field.
Experience with Agile development methodologies.
Ability to mentor and guide junior team members.
Strong problem-solving skills.
Commitment to delivering high-quality solutions that meet business needs.
Certifications Required: N
Posted 1 week ago
3.0 - 8.0 years
8 - 14 Lacs
Mumbai
Work from Office
Key Responsibilities:
Develop, enhance, and maintain SAP ABAP programs for the SD, CO, and FI modules.
Design and implement technical solutions using Reports, BAPIs, BADIs, BDCs, SmartForms, SAP Scripts, and IDocs.
Work on performance tuning, debugging, and troubleshooting of ABAP code.
Develop and optimize CDS Views, AMDP, and OData services for SAP S/4HANA applications.
Integrate SAP systems with third-party applications using ALE, RFC, and Web Services.
Posted 1 week ago