3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
We are looking for a highly skilled and motivated Python, AWS, Big Data Engineer to join our data engineering team. The ideal candidate should have hands-on experience with the Hadoop ecosystem and Apache Spark, along with programming expertise in Python (PySpark), Scala, and Java. Your responsibilities will include designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.

Virtusa is a company that values teamwork, quality of life, and professional and personal development. We are proud to have a team of 27,000 people globally who care about your growth and seek to provide you with exciting projects, opportunities, and state-of-the-art technologies throughout your career with us. At Virtusa, we believe in the potential of great minds coming together. We emphasize collaboration and a team environment, providing a dynamic place for talented individuals to nurture new ideas and strive for excellence.
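The core of a role like this is writing pipeline transformations. As a minimal sketch, in plain Python standing in for what a PySpark job would express with DataFrame operations (all record fields and function names here are illustrative, not from the posting):

```python
from collections import defaultdict

def aggregate_revenue(records):
    """Group raw event records by region and sum revenue, skipping
    malformed rows -- the shape of a typical filter -> group-by ->
    aggregate pipeline stage."""
    totals = defaultdict(float)
    for rec in records:
        region = rec.get("region")
        revenue = rec.get("revenue")
        if region is None or not isinstance(revenue, (int, float)):
            continue  # drop malformed rows, as a Spark filter() would
        totals[region] += revenue
    return dict(totals)

events = [
    {"region": "south", "revenue": 120.0},
    {"region": "south", "revenue": 80.0},
    {"region": "north", "revenue": 50.0},
    {"region": None, "revenue": 10.0},   # malformed: filtered out
]
totals = aggregate_revenue(events)
```

In PySpark the same shape would typically be a `filter` followed by `groupBy(...).agg(...)` on a DataFrame, with Spark handling distribution across the cluster.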
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
chennai, tamil nadu
On-site
The Applications Development Technology Lead Analyst is a senior role responsible for implementing new or updated application systems and programs in collaboration with the Technology team. Your main objective will be to lead applications systems analysis and programming activities. Responsibilities include:

- Partnering with management teams to ensure functions are integrated towards shared goals
- Identifying necessary system enhancements for new products and process improvements
- Resolving high-impact problems and projects by evaluating complex business processes
- Providing expertise in applications programming and ensuring application designs align with the architecture blueprint
- Developing standards for coding, testing, debugging, and implementation
- Gaining comprehensive knowledge of how business areas integrate, and analyzing issues to develop innovative solutions
- Advising mid-level developers and analysts, and assessing risk in business decisions
- Adapting to changing priorities as a team player

Required skills include strong knowledge of Spark using Java/Scala and the Hadoop ecosystem, with hands-on experience in Spark Streaming; proficiency in Java programming and the Spring Boot framework; and familiarity with database technologies such as Oracle and the Starburst and Impala query engines. Knowledge of bank reconciliation tools such as SmartStream TLM Recs Premium, Exceptor, or Quickrec is an added advantage.
To qualify for this position, you should have 10+ years of relevant experience in an applications development or systems analysis role, extensive experience in system analysis and programming of software applications, and experience managing and implementing successful projects. You should be a Subject Matter Expert (SME) in at least one area of applications development, able to adjust priorities quickly, with demonstrated leadership and project management skills and clear, concise communication. Experience building or implementing reporting platforms is expected, along with a Bachelor's degree or equivalent experience (Master's degree preferred). This job description is a summary of the work performed, and other job-related duties may be assigned as needed.
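The posting mentions bank reconciliation tools (SmartStream TLM and similar). As a toy sketch of the core idea behind such tools, assuming a simplified model where ledger and statement entries match on reference and amount (all field names are illustrative):

```python
def reconcile(ledger, statement):
    """Match ledger entries to statement entries on (reference, amount).
    Returns (matched_pairs, unmatched_ledger, unmatched_statement)."""
    remaining = list(statement)
    matched, unmatched_ledger = [], []
    for entry in ledger:
        key = (entry["ref"], entry["amount"])
        for cand in remaining:
            if (cand["ref"], cand["amount"]) == key:
                matched.append((entry, cand))
                remaining.remove(cand)  # each statement entry matches once
                break
        else:
            unmatched_ledger.append(entry)  # no statement entry found
    return matched, unmatched_ledger, remaining

ledger = [{"ref": "TX1", "amount": 100}, {"ref": "TX2", "amount": 50}]
stmt = [{"ref": "TX1", "amount": 100}, {"ref": "TX3", "amount": 75}]
matched, breaks_ledger, breaks_stmt = reconcile(ledger, stmt)
```

Real reconciliation engines add fuzzy matching, tolerance windows, and many-to-one matching rules; the unmatched items ("breaks") are what analysts then investigate.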
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at Cisco ThousandEyes, you will be responsible for ensuring the reliability, scalability, and security of the cloud and big data platforms. Your role will involve representing the NADP SRE team, contributing to the technical roadmap, and collaborating with cross-functional teams to design, build, and maintain SaaS systems operating at multi-region scale. Your efforts will be crucial in supporting machine learning (ML) and AI initiatives by ensuring the platform infrastructure is robust, efficient, and aligned with operational excellence.

You will design, build, and optimize cloud and data infrastructure to guarantee the high availability, reliability, and scalability of big-data and ML/AI systems, implementing SRE principles such as monitoring, alerting, error budgets, and fault analysis. You will also collaborate with various teams to create secure and scalable solutions, troubleshoot technical problems, lead the architectural vision, and shape the technical strategy and roadmap. The role further encompasses mentoring and guiding teams, fostering a culture of engineering and operational excellence, engaging with customers and stakeholders to understand use cases and feedback, and applying strong programming skills that integrate software and systems engineering. You will develop strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at enterprise scale while enforcing engineering best practices.

To be successful in this role, you should have 8-12 years of relevant experience and a bachelor's degree in computer science or its equivalent. You should be able to design and implement scalable solutions, and have hands-on experience in the cloud (preferably AWS), Infrastructure as Code skills, experience with observability tools, proficiency in programming languages such as Python or Go, and a good understanding of Unix/Linux systems and client-server protocols. Experience building cloud, big data, and/or ML/AI infrastructure is essential, along with a sense of ownership and accountability in architecting software and infrastructure at scale. Experience with the Hadoop ecosystem, certifications in cloud and security domains, and experience building or managing a cloud-based data platform would be advantageous.

Cisco encourages individuals from diverse backgrounds to apply, as the company values the perspectives and skills that emerge from employees with varied experiences. Cisco believes in unlocking potential and creating diverse teams that are better equipped to solve problems, innovate, and make a positive impact.
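The posting names error budgets among the SRE principles to apply. The underlying arithmetic is standard and worth making concrete; a minimal sketch (the 99.9% SLO and 30-day window are illustrative numbers, not from the posting):

```python
def error_budget_minutes(slo, period_days=30):
    """Allowed downtime per period for a given availability SLO.
    A 99.9% SLO over 30 days leaves 43200 * 0.001 = 43.2 minutes."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - slo)

def budget_remaining(budget_min, downtime_min):
    """Fraction of the error budget still unspent (clamped at zero)."""
    return max(0.0, 1 - downtime_min / budget_min)

monthly_budget = error_budget_minutes(0.999)  # ~43.2 minutes
```

Teams typically gate risky rollouts on `budget_remaining`: with most of the budget unspent, releases proceed; once it approaches zero, focus shifts to reliability work.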
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at ThousandEyes, you will be responsible for ensuring the reliability, scalability, and security of cloud and big data platforms. Your role will involve representing the NADP SRE team, working in a dynamic environment, and providing technical leadership in defining and executing the team's technical roadmap. Collaborating with cross-functional teams, including software development, product management, customers, and security teams, is essential. Your contributions will directly impact the success of machine learning (ML) and AI initiatives by ensuring a robust and efficient platform infrastructure aligned with operational excellence.

In this role, you will design, build, and optimize cloud and data infrastructure to ensure the high availability, reliability, and scalability of big-data and ML/AI systems. Collaboration with cross-functional teams will be crucial in creating secure, scalable solutions that support ML/AI workloads and enhance operational efficiency through automation. Troubleshooting complex technical problems, conducting root cause analyses, and contributing to continuous improvement efforts are key responsibilities. You will lead the architectural vision, shape the team's technical strategy and roadmap, and act as a mentor and technical leader to foster a culture of engineering and operational excellence. Engaging with customers and stakeholders to understand use cases and feedback, translating them into actionable insights, and effectively influencing stakeholders at all levels are essential aspects of the role. Strong programming skills are required to integrate software and systems engineering, building core data platform capabilities and automation that meet enterprise customer needs. Developing strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at enterprise scale, while enforcing engineering best practices, is also part of the role.

Qualifications include 8-12 years of relevant experience and a bachelor's degree in computer science or its equivalent. Candidates should be able to design and implement scalable solutions with a focus on streamlining operations. Strong hands-on experience in the cloud, preferably AWS, is required, along with Infrastructure as Code skills, ideally with Terraform and EKS or Kubernetes. Proficiency with observability tools such as Prometheus, Grafana, Thanos, CloudWatch, OpenTelemetry, and the ELK stack is necessary, as is the ability to write high-quality code in Python, Go, or an equivalent language, and a good understanding of Unix/Linux systems, system libraries, file systems, and client-server protocols. Experience building cloud, big data, and/or ML/AI infrastructure, architecting software and infrastructure at scale, and certifications in cloud and security domains are beneficial.

Cisco emphasizes diversity and encourages candidates to apply even if they do not meet every single qualification. Diverse perspectives and skills are valued, and Cisco believes that diverse teams are better equipped to solve problems, innovate, and create a positive impact.
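Alerting with the observability tools listed above is commonly built on burn rates rather than raw error counts. As a sketch of the standard calculation (the 14.4x threshold is the conventional fast-burn figure from SRE practice, not something stated in the posting):

```python
def burn_rate(error_ratio, slo):
    """How fast the error budget is being consumed: 1.0 means
    consuming exactly on budget, >1.0 means burning too fast."""
    allowed = 1 - slo
    return error_ratio / allowed

def page_on_fast_burn(error_ratio, slo=0.999, threshold=14.4):
    """Fast-burn check: a sustained 14.4x burn rate exhausts a
    30-day error budget in roughly two days, so it should page."""
    return burn_rate(error_ratio, slo) >= threshold
```

In Prometheus this same ratio is usually expressed as a recording rule over two windows (e.g. 5m and 1h) so that short spikes and sustained burns are distinguished.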
Posted 2 months ago
8.0 - 10.0 years
10 - 12 Lacs
Bengaluru
Work from Office
Senior Data Engineer (Databricks, PySpark, SQL, Cloud Data Platforms, Data Pipelines)

Job Summary
Synechron is seeking a highly skilled and experienced Data Engineer to join our innovative analytics team in Bangalore. The primary purpose of this role is to design, develop, and maintain scalable data pipelines and architectures that empower data-driven decision making and advanced analytics initiatives. As a critical contributor within our data ecosystem, you will enable the organization to harness large, complex datasets efficiently, supporting strategic business objectives and ensuring high standards of data quality, security, and performance. Your expertise will directly contribute to building robust, efficient, and secure data solutions that drive business value across multiple domains.

Required Software & Tools:
- Databricks Platform (hands-on experience with Databricks notebooks, clusters, and workflows)
- PySpark (proficient in developing and optimizing Spark jobs)
- SQL (advanced proficiency in writing and optimizing complex queries)
- Data orchestration tools such as Apache Airflow or similar (experience scheduling and managing data workflows)
- Cloud data platforms (experience with AWS, Azure, or Google Cloud)
- Data warehousing solutions (Snowflake highly preferred)

Preferred Software & Tools:
- Kafka or other streaming frameworks (e.g., Confluent, MQTT)
- CI/CD tools for data pipelines (e.g., Jenkins, GitLab CI)
- DevOps practices for data workflows
- Programming languages: Python (expert level); familiarity with Java or Scala is advantageous

Overall Responsibilities
- Architect, develop, and maintain scalable, resilient data pipelines and architectures supporting business analytics, reporting, and data science use cases.
- Collaborate closely with data scientists, analysts, and cross-functional teams to gather requirements and deliver optimized data solutions aligned with organizational goals.
- Ensure data quality, consistency, and security across all data workflows, adhering to best practices and compliance standards.
- Optimize data processes for enhanced performance, reliability, and cost efficiency.
- Integrate data from multiple sources, including cloud data services and streaming platforms, ensuring seamless data flow and transformation.
- Lead efforts in performance tuning and troubleshooting data pipelines to resolve bottlenecks and improve throughput.
- Stay up-to-date with emerging data engineering technologies and contribute to continuous improvement initiatives within the team.

Technical Skills (By Category)
- Programming languages: Essential: Python, SQL. Preferred: Scala, Java.
- Databases/data management: Essential: data modeling, ETL/ELT processes, data warehousing (Snowflake experience highly preferred). Preferred: NoSQL databases, Hadoop ecosystem.
- Cloud technologies: Essential: experience with cloud data services (AWS, Azure, GCP) and deployment of data pipelines in cloud environments. Preferred: cloud-native data tools and architecture design.
- Frameworks and libraries: Essential: PySpark, Spark SQL, Kafka, Airflow. Preferred: streaming frameworks, TensorFlow (for data prep).
- Development tools and methodologies: Essential: version control (Git), CI/CD pipelines, Agile methodologies. Preferred: DevOps practices in data engineering, containerization (Docker, Kubernetes).
- Security protocols: familiarity with data security, encryption standards, and compliance best practices.

Experience
- Minimum of 8 years of professional experience in data engineering or related roles
- Proven track record of designing and deploying large-scale data pipelines using Databricks, PySpark, and SQL
- Practical experience in data modeling, data warehousing, and ETL/ELT workflows
- Experience working with cloud data platforms and streaming data frameworks such as Kafka or equivalent
- Demonstrated ability to work with cross-functional teams, translating business needs into technical solutions
- Experience with data orchestration and automation tools is highly valued
- Prior experience implementing CI/CD pipelines or DevOps practices for data workflows (preferred)

Day-to-Day Activities
- Design, develop, and troubleshoot data pipelines for ingestion, transformation, and storage of large datasets
- Collaborate with data scientists and analysts to understand data requirements and optimize existing pipelines
- Automate data workflows and improve pipeline efficiency through performance tuning and best practices
- Conduct data quality audits and ensure data security protocols are followed
- Manage and monitor data workflows, troubleshoot failures, and implement fixes proactively
- Contribute to documentation, code reviews, and knowledge sharing within the team
- Stay informed of evolving data engineering tools, techniques, and industry best practices, incorporating them into daily work

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Relevant certifications such as Databricks Certified Data Engineer, AWS Certified Data Analytics, or equivalent (preferred)
- Continuous learning through courses, workshops, or industry conferences on data engineering and cloud technologies

Professional Competencies
- Strong analytical and problem-solving skills with a focus on scalable solutions
- Excellent communication skills to collaborate effectively with technical and non-technical stakeholders
- Ability to prioritize tasks, manage time effectively, and deliver within tight deadlines
- Demonstrated leadership in guiding team members and driving project success
- Adaptability to evolving technological landscapes and innovative thinking
- Commitment to data privacy, security, and ethical handling of information
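The day-to-day activities above include data quality audits run against pipeline output. A minimal sketch of what such a gate can look like, in plain Python with illustrative column names and an assumed null-ratio threshold:

```python
def audit_nulls(rows, required_columns, max_null_ratio=0.05):
    """Report per-column null ratios and flag columns whose ratio
    breaches the threshold -- a basic data-quality gate run before
    a pipeline publishes its output."""
    n = len(rows)
    report = {}
    for col in required_columns:
        nulls = sum(1 for r in rows if r.get(col) is None)
        ratio = nulls / n if n else 1.0  # an empty batch always fails
        report[col] = {"null_ratio": ratio, "ok": ratio <= max_null_ratio}
    return report

rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None},
        {"id": 3, "amount": 5.0}, {"id": 4, "amount": 2.5}]
report = audit_nulls(rows, ["id", "amount"], max_null_ratio=0.1)
```

In a Databricks setting the same checks are typically expressed as Spark aggregations (or via expectations in Delta Live Tables) so they scale past what fits in memory.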
Posted 2 months ago
6.0 - 8.0 years
25 - 30 Lacs
Bengaluru
Work from Office
6+ years of experience in information technology, with a minimum of 3-5 years managing and administering Hadoop/Cloudera environments.
- Cloudera CDP (Cloudera Data Platform), Cloudera Manager, and related tools
- Hadoop ecosystem components (HDFS, YARN, Hive, HBase, Spark, Impala, etc.)
- Linux system administration, with experience in scripting languages (Python, Bash, etc.) and configuration management tools (Ansible, Puppet, etc.)
- Security and platform tooling (Kerberos, Ranger, Sentry), plus Docker, Kubernetes, and Jenkins
- Cloudera Certified Administrator for Apache Hadoop (CCAH) or similar certification
- Cluster management, optimization, best-practice implementation, collaboration, and support
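A recurring task in Hadoop cluster administration is capacity planning: HDFS stores each block replication-factor times, so usable space is far below raw disk. A rough back-of-the-envelope sketch (the default replication factor of 3 is standard HDFS; the 25% headroom figure is an illustrative assumption):

```python
def usable_hdfs_capacity_tb(nodes, disk_tb_per_node, replication=3,
                            overhead=0.25):
    """Rough usable HDFS capacity: raw capacity divided by the
    replication factor, minus headroom for temp space, logs, and
    non-HDFS use of the disks."""
    raw = nodes * disk_tb_per_node
    return raw / replication * (1 - overhead)

# 10 DataNodes with 48 TB each: 480 TB raw -> 160 TB after 3x
# replication -> 120 TB after 25% headroom.
capacity = usable_hdfs_capacity_tb(10, 48)
```

The actual figures on a live cluster come from `hdfs dfsadmin -report` or the Cloudera Manager dashboards; this arithmetic is just the planning-side estimate.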
Posted 2 months ago
3.0 - 8.0 years
5 - 10 Lacs
Chennai
Hybrid
Duration: 8 months
Work Type: Onsite

Position Description:
Looking for qualified Data Scientists who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, and Optimization. Potential candidates should have hands-on experience applying first-principles methods, machine learning, data mining, and text mining techniques to build analytics prototypes that work on massive datasets. Candidates should have experience manipulating both structured and unstructured data in various formats, sizes, and storage mechanisms; excellent problem-solving skills with an inquisitive mind that challenges existing practices; and exposure to multiple programming languages and analytical tools, with the flexibility to use the requisite tools and languages for the problem at hand.

Skills Required: Machine Learning, GenAI, LLM
Skills Preferred: Python, Google Cloud Platform, BigQuery

Experience Required: 3+ years of hands-on experience using machine learning and text mining tools and techniques such as clustering, classification, decision trees, random forests, support vector machines, deep learning, neural networks, reinforcement learning, and other numerical algorithms
Experience Preferred: 3+ years of experience in at least one of the following languages: Python, R, MATLAB, SAS; experience with Google Cloud Platform (GCP), including Vertex AI, BigQuery, DBT, NoSQL databases, and the Hadoop ecosystem

Education Required: Bachelor's Degree
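Among the classification techniques this posting asks for, the simplest from-first-principles baseline is a nearest-centroid classifier: average the training points per class, then label a new point by its closest class mean. A minimal stdlib sketch (data and label names are illustrative):

```python
from math import dist

def fit_centroids(samples, labels):
    """Nearest-centroid classifier: compute the per-class mean vector."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Label a point by its nearest class centroid (Euclidean)."""
    return min(centroids, key=lambda y: dist(centroids[y], x))

X = [(1.0, 1.0), (1.2, 0.8), (8.0, 8.0), (7.8, 8.2)]
y = ["low", "low", "high", "high"]
centroids = fit_centroids(X, y)
```

In practice a candidate would reach for scikit-learn or a GCP Vertex AI pipeline rather than hand-rolling this, but the same fit/predict shape underlies clustering (k-means is this with iteratively re-estimated, unlabeled centroids).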
Posted 3 months ago