0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
As a BIE - Intern on the team, you will enable effective decision making by retrieving and aggregating data from multiple sources and triangulating those data points and metrics into reports and actionable formats. Your ownership of the scoping and design of new metrics, and enhancement of existing ones, will help support the future state of business processes and ensure sustainability. You will communicate analysis and insights to stakeholders and business leaders, both verbally and in writing. These analytics and metrics will help ensure we are focused on what's important, enable clarity and focus, and delight our customers.

Job Locations: By applying to this position, your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi and Pune.

Please note that Amazon internships require full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absence during those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire internship, duly signed by a competent authority at their university; the internship offer is subject to successful submission of this declaration.

Key job responsibilities
- Own the design, development, and maintenance of ongoing metrics, reports, analyses, and dashboards to drive key business decisions.
- Ensure data accuracy by validating data for new and existing tools.
- Retrieve and analyze data using SQL, Excel, and other data management systems, and develop reporting and data visualization solutions using tools like Tableau, AWS QuickSight, and Looker.
- Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
- Model data and metadata to support reporting pipelines, and automate manual reporting solutions using programming languages such as Python, Java, or Scala.
- Understand trends related to business metrics and recommend strategies to stakeholders to help drive business growth.
- Report key insight trends with statistical rigor (hypothesis testing, measuring experiment success, defining statistical significance, developing basic regression and forecasting models) to simplify and inform the larger team of noteworthy storylines.

Basic Qualifications
- Bachelor's degree in Computer Science, Mathematics, Statistics, or Operations Research.
- Proficiency in SQL and in one or more programming languages: Python, Scala, Ruby, or Java.
- Knowledge of data management fundamentals, data storage principles, ETL, data modeling, and data architecture.
- Knowledge of business intelligence reporting tools (Tableau, QuickSight, Looker).
- Familiarity with the theory and practice of information retrieval, relevance, statistics, and data mining; skilled at data visualization and presentation.
- Excellent problem-solving skills, combined with the ability to present findings and insights clearly and compellingly, in both verbal and written form.
Preferred Qualifications
- Pursuing a Bachelor's/Master's degree in Computer Science, Mathematics, Statistics, Operations Research, or Analytics.
- Experience using statistical packages in R, or Python for data analysis and predictive modelling, or exposure to tools like SAS or SPSS.
- Understanding of big data tools and technologies (Hadoop, Spark, EMR) and skills sufficient to extract, transform, and clean large (multi-TB) data sets in a Unix/Linux environment using programming languages such as Java, Scala, Ruby, or Python.

Basic Qualifications
- Enrolled in or have completed a Bachelor's degree.

Preferred Qualifications
- Knowledge of data modeling and data pipeline design.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company: ADCI HYD 13 SEZ
Job ID: A2802219
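For context, here is a minimal sketch of the kind of experiment-significance check this role describes (hypothesis testing and measuring experiment success): a Welch two-sample t-test in Python on synthetic data. The metric values, sample sizes, and alpha threshold are illustrative assumptions, not anything specified in the posting.

```python
# A minimal sketch of an experiment-significance check: comparing a
# control and treatment metric with a Welch two-sample t-test.
# All data here is synthetic and purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
control = rng.normal(loc=10.0, scale=2.0, size=500)    # baseline metric
treatment = rng.normal(loc=10.4, scale=2.0, size=500)  # variant metric

# equal_var=False gives Welch's t-test, which does not assume equal variances
t_stat, p_value = stats.ttest_ind(control, treatment, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("significant at alpha=0.05" if p_value < 0.05 else "not significant")
```

The same reporting pattern extends to the basic regression and forecasting models the posting mentions.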
Posted 1 week ago
7.0 - 12.0 years
20 - 35 Lacs
Chennai
Work from Office
Back-End Developer - Python
Experience Range: 05 - 12 years
Location of Requirement: Chennai

Job Description
Build and maintain the server-side logic, APIs, and databases that power web and mobile applications. Focus on performance, scalability, and data integrity (a minimal endpoint sketch follows this listing).

Desired Candidate Profile
- Must-Have Skills: Python, SQL, Docker, Kubernetes
- Good-to-Have Skills: Scala, Clojure, Node.js, TypeScript, REST, GraphQL, Git, Terraform, Helm, GitLab, OpenAPI, OAuth 2.0
- Soft Skills: Communication, English, Documentation
- Tools: GitLab, GitHub, Datadog, JIRA, Prometheus, Grafana (nice-to-have)
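As an illustration of the server-side pattern this posting describes, here is a minimal sketch of a REST endpoint in Python. FastAPI and the in-memory store are assumptions chosen for brevity; the posting itself only names Python, SQL, and REST among the skills.

```python
# A minimal REST endpoint sketch. FastAPI and the in-memory dict are
# illustrative assumptions; a real service would use a proper database.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    id: int
    name: str

_items: dict[int, Item] = {}  # stand-in for a real database table

@app.post("/items")
def create_item(item: Item) -> Item:
    _items[item.id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _items:
        raise HTTPException(status_code=404, detail="Item not found")
    return _items[item_id]
```

Run locally with, for example, `uvicorn main:app` and exercise the endpoints with any HTTP client.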
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to deliver high-quality applications that meet user expectations and business goals.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Collaborate with cross-functional teams to gather requirements and provide technical insights.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-Have Skills: Experience with cloud computing platforms.
- Strong understanding of application development methodologies.
- Familiarity with data integration and ETL processes.
- Experience in programming languages such as Python or Scala.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.
Posted 1 week ago
4.0 - 9.0 years
20 - 30 Lacs
Hyderabad
Hybrid
Position: Big Data Developers (mid to senior level)
Location: Hyderabad (Hybrid Mode)
A face-to-face interview for the final round is mandatory.
Looking for candidates with an immediate to short notice period.

Must-Have Skills
- Big Data (PySpark + Java/Scala)

Preferred Skills
- AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar)
- CI/CD (Jenkins or another tool)
- Relational database experience (any)
- NoSQL database experience (any)
- Microservices, domain services, API gateways, or similar
- Containers (Docker, K8s, or similar)
Posted 1 week ago
5.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Back-End Developer - Python
Experience Range: 05 - 12 years
Location of Requirement: Chennai

Job Description
Build and maintain the server-side logic, APIs, and databases that power web and mobile applications. Focus on performance, scalability, and data integrity.

Desired Candidate Profile
- Must-Have Skills: Python, SQL, Docker, Kubernetes
- Good-to-Have Skills: Scala, Clojure, Node.js, TypeScript, REST, GraphQL, Git, Terraform, Helm, GitLab, OpenAPI, OAuth 2.0
- Soft Skills: Communication, English, Documentation
- Tools: GitLab, GitHub, Datadog, JIRA, Prometheus, Grafana (nice-to-have)
Posted 1 week ago
0 years
6 - 10 Lacs
Bhubaneswar, Odisha, India
Remote
Job Summary
We are seeking a passionate and experienced Technical Trainer to deliver in-depth training programs aligned with our Data Engineering curriculum. The ideal candidate will have expertise in programming (Python, Java), cloud technologies (AWS), big data frameworks (Apache Spark), and software engineering principles. The trainer will be responsible for effectively delivering instructor-led sessions, hands-on labs, and assessments to upskill learners for real-world deployment.

Key Responsibilities
Deliver high-quality technical training sessions based on the provided curriculum, covering:
- Programming Logic and Techniques
- HTML, Web Technologies, and Web Services
- Python (Core + Data Structures)
- AWS Fundamentals (Compute, Storage, Networking)
- Big Data Processing using Apache Spark (RDD, DataFrames, Spark SQL)
- Data Streaming using Spark Streaming and Kafka
- Software Engineering Practices (OOP, SDLC, Testing)
- Data Integration Tools (Glue, NiFi)
- CI/CD and Deployment Practices

Additional responsibilities:
- Facilitate real-time demonstrations and hands-on labs to reinforce learning (see the lab-style sketch after this listing).
- Evaluate learners' progress through assignments, quizzes, and project assessments.
- Customize training delivery to suit varied learning paces and styles.
- Collaborate with the content and curriculum team to refine and enhance course materials.
- Provide feedback and mentoring to learners as needed.

Required Skills & Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven expertise in:
  - Python programming (data structures, OOP, exception handling)
  - Apache Spark (RDD, Spark SQL, Spark Streaming)
  - AWS cloud services (EC2, S3, IAM, Lambda, Glue, CloudWatch, etc.)
  - Kafka, NiFi, and Airflow (preferred)
  - Software development fundamentals (version control, testing, debugging)
- Strong understanding of web technologies and cloud architecture.
- Excellent presentation, communication, and interpersonal skills.
- Experience in technical mentoring or prior training delivery is a must.

Preferred
- AWS certification (e.g., AWS Solutions Architect – Associate)
- Experience in SDET or full-stack development roles
- Familiarity with LMS platforms and virtual training tools (Zoom, MS Teams)

Soft Skills
- Passion for teaching and knowledge sharing
- Empathy and patience for learners with diverse backgrounds
- Adaptability and strong organizational skills

What We Offer
- Flexible working environment (remote/hybrid).
- Opportunity to shape the next generation of Java developers.
- Competitive compensation based on expertise and delivery.
- Access to internal tech resources and continued learning opportunities.

Work Schedule
Flexible to accommodate training sessions during evenings and weekends as needed; flexibility to travel across India.

Location
Remote/on-site (based on organizational requirements).

Skills: python, apache spark, aws, big data, kafka, nifi, data visualization, data warehouse, hadoop, hive, scala
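As a flavour of the hands-on labs this curriculum implies, here is a minimal PySpark sketch showing the same aggregation expressed with the DataFrame API and with Spark SQL, two of the curriculum items above. The sample rows and column names are invented for illustration.

```python
# A minimal lab-style sketch for the "Big Data Processing using Apache
# Spark" curriculum item: one aggregation, written two ways.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curriculum-demo").getOrCreate()

df = spark.createDataFrame(
    [("alice", "books", 12.0), ("bob", "books", 7.5), ("alice", "music", 3.0)],
    ["user", "category", "amount"],
)

# DataFrame API
df.groupBy("category").agg(F.sum("amount").alias("total")).show()

# Equivalent Spark SQL over a temporary view
df.createOrReplaceTempView("purchases")
spark.sql("SELECT category, SUM(amount) AS total FROM purchases GROUP BY category").show()

spark.stop()
```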
Posted 1 week ago
0 years
6 - 10 Lacs
Mumbai Metropolitan Region
Remote
Job Summary
We are seeking a passionate and experienced Technical Trainer to deliver in-depth training programs aligned with our Data Engineering curriculum. The ideal candidate will have expertise in programming (Python, Java), cloud technologies (AWS), big data frameworks (Apache Spark), and software engineering principles. The trainer will be responsible for effectively delivering instructor-led sessions, hands-on labs, and assessments to upskill learners for real-world deployment.

Key Responsibilities
Deliver high-quality technical training sessions based on the provided curriculum, covering:
- Programming Logic and Techniques
- HTML, Web Technologies, and Web Services
- Python (Core + Data Structures)
- AWS Fundamentals (Compute, Storage, Networking)
- Big Data Processing using Apache Spark (RDD, DataFrames, Spark SQL)
- Data Streaming using Spark Streaming and Kafka
- Software Engineering Practices (OOP, SDLC, Testing)
- Data Integration Tools (Glue, NiFi)
- CI/CD and Deployment Practices

Additional responsibilities:
- Facilitate real-time demonstrations and hands-on labs to reinforce learning.
- Evaluate learners' progress through assignments, quizzes, and project assessments.
- Customize training delivery to suit varied learning paces and styles.
- Collaborate with the content and curriculum team to refine and enhance course materials.
- Provide feedback and mentoring to learners as needed.

Required Skills & Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven expertise in:
  - Python programming (data structures, OOP, exception handling)
  - Apache Spark (RDD, Spark SQL, Spark Streaming)
  - AWS cloud services (EC2, S3, IAM, Lambda, Glue, CloudWatch, etc.)
  - Kafka, NiFi, and Airflow (preferred)
  - Software development fundamentals (version control, testing, debugging)
- Strong understanding of web technologies and cloud architecture.
- Excellent presentation, communication, and interpersonal skills.
- Experience in technical mentoring or prior training delivery is a must.

Preferred
- AWS certification (e.g., AWS Solutions Architect – Associate)
- Experience in SDET or full-stack development roles
- Familiarity with LMS platforms and virtual training tools (Zoom, MS Teams)

Soft Skills
- Passion for teaching and knowledge sharing
- Empathy and patience for learners with diverse backgrounds
- Adaptability and strong organizational skills

What We Offer
- Flexible working environment (remote/hybrid).
- Opportunity to shape the next generation of Java developers.
- Competitive compensation based on expertise and delivery.
- Access to internal tech resources and continued learning opportunities.

Work Schedule
Flexible to accommodate training sessions during evenings and weekends as needed; flexibility to travel across India.

Location
Remote/on-site (based on organizational requirements).

Skills: python, apache spark, aws, big data, kafka, nifi, data visualization, data warehouse, hadoop, hive, scala
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Saarthee:
Saarthee is a global strategy, analytics, technology, and AI consulting company, where our passion for helping others fuels our approach and our products and solutions. We are a one-stop shop for all things data and analytics. Unlike other analytics consulting firms that are technology- or platform-specific, Saarthee's holistic and tool-agnostic approach is unique in the marketplace. Our Consulting Value Chain framework meets our customers where they are in their data journey. Our diverse and global team works with one objective in mind: our customers' success. At Saarthee, we are passionate about guiding organizations towards insights-fueled success. That's why we call ourselves Saarthee, inspired by the Sanskrit word 'Saarthi', which means charioteer, trusted guide, or companion. Co-founded in 2015 by Mrinal Prasad and Shikha Miglani, Saarthee already encompasses all the components of data analytics consulting. Saarthee is based in Philadelphia, USA, with offices in the UK and India.

We are seeking a talented Talent Acquisition Executive/Lead. The ideal candidate will be responsible for driving talent acquisition strategies to support our company's growth objectives. You will work closely with the HR department, business leaders, and hiring managers to identify, attract, and hire top talent in the industry. If you are passionate about building high-performing teams and have a proven track record of sourcing, hiring, and retaining top talent in the data analytics industry and related fields, we encourage you to apply for this exciting opportunity.

Key Responsibilities:
- Technical Talent Acquisition: Lead the end-to-end recruitment process for roles in Data Engineering, Data Science, and Data Analytics, including software engineers, data scientists, machine learning engineers, and data architects. Utilize your technical expertise to assess candidates' proficiency in programming languages (Python, Java, Scala), data pipelines (ETL, Kafka), cloud platforms (AWS, Azure, GCP), and big data technologies (Hadoop, Spark).
- Technical Screening & Assessment: Design and implement rigorous technical assessment processes, including coding tests, algorithm challenges, and system design interviews, to ensure candidates meet the high technical standards required for our projects.
- Stakeholder Collaboration: Partner with the CTO, engineering leads, and data science teams to understand the specific technical requirements of each role. Translate these needs into effective job descriptions, recruitment strategies, and candidate evaluation criteria.
- Pipeline Development: Build and maintain a robust pipeline of highly qualified candidates by leveraging networks, industry events, online platforms (GitHub, Stack Overflow), and advanced sourcing techniques such as Boolean search, AI-driven talent matching, and targeted outreach.
- Industry Expertise: Stay current with trends in Data Engineering, Data Science, and Analytics, including advancements in AI/ML, data warehousing (Snowflake, Redshift), real-time analytics, and DevOps practices. Use this knowledge to proactively identify and engage with potential candidates who are at the forefront of these fields.
- Diversity & Inclusion in Tech: Implement strategies to ensure diverse and inclusive hiring practices, focusing on underrepresented groups in technology. Develop partnerships with organizations and communities that support diversity in tech.
- Talent Development & Retention: Work with technical leadership to create clear career pathways for data and engineering professionals within the company. Support ongoing training and development initiatives to keep teams updated with the latest technologies and methodologies.

Qualifications:
- Experience: 3+ years in talent acquisition, with significant experience recruiting for Data Engineering, Data Science, Data Analytics, and technology roles in high-growth or technically complex environments.
- Technical Knowledge: Strong background in the technologies and tools used in Data Engineering, Data Science, and Data Analytics, including but not limited to:
  - AI/ML
  - Programming languages: Python, R, Java, Scala
  - Big data technologies: Hadoop, Spark, Kafka
  - Cloud platforms: AWS, Azure, GCP
  - Data processing: ETL, data pipelines, real-time streaming
  - Analytics and BI tools: Tableau, Power BI, Looker
- Leadership: Proven experience in building teams with a focus on technical roles, driving strategies that result in successful, high-impact hires.
- Analytical & Data-Driven: Expertise in using data to guide recruitment decisions and strategies, including metrics on sourcing, pipeline health, and hiring efficiency.
- Communication: Excellent ability to communicate complex technical requirements to both technical and non-technical stakeholders.
- Commitment to Excellence: A relentless focus on quality, with a keen eye for identifying top technical talent who can thrive in challenging, innovative environments.

Soft Skills:
- Problem-Solving: Strong analytical and troubleshooting skills.
- Collaboration: Excellent teamwork and communication skills to work effectively with cross-functional teams.
- Adaptability: Ability to manage multiple tasks and projects in a fast-paced environment.
- Attention to Detail: Precision in diagnosing and fixing issues.
- Continuous Learning: A proactive attitude towards learning new technologies and improving existing skills.
- Excellent verbal and writing skills.
Posted 1 week ago
2.0 - 5.0 years
4 - 7 Lacs
Noida
Work from Office
Roles and Responsibilities
Adobe RTCDP is looking for an exceptional developer who can thrive in a fast-paced, customer-focused environment. The ideal candidate is adaptable to an agile environment, passionate about new opportunities, and has a demonstrable track record of success in delivering new features and products.

Responsibilities
- Own development for features of medium to large complexity and apply in-depth knowledge to turn requirements into feature specs.
- Contribute extensively to the analysis, design, prototyping, and implementation of new features, and to improving existing ones.
- Collaborate with product management and engineering leads to evaluate and determine new features to be added.
- Be a proactive self-starter and fast learner who can develop methods, techniques, and evaluation criteria for getting results.
- Ensure high-quality code and related documentation.

What you need to succeed
- B.Tech/M.Tech from an outstanding institute with 2 to 5 years of hands-on design/development experience in software development, preferably in a product development organization.
- Strong programming skills in Java/Scala.
- Hands-on experience with REST APIs.
- Proven understanding of frameworks like Spring Boot, Apache Spark, Kafka, etc.
- Knowledge of software fundamentals, including design and analysis of algorithms, data structure design and implementation, documentation, and unit testing.
- Good understanding of object-oriented design and knowledge of product life cycles and associated issues.
- Excellent computer science fundamentals and a good understanding of architecture, design, and performance.
- Ability to work proactively and independently with minimal direction.
- An excellent teammate with good written and oral communication skills.
Posted 1 week ago
8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
FEQ426R101

As a Senior Solutions Architect, you will shape the future of the Data & AI landscape by working with the most sophisticated data engineering and data science teams in the world. You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the field. You will help our customers achieve tangible data-driven outcomes through the use of our Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise ecosystem. You'll grow as a leader in your field while finding solutions to our customers' biggest challenges in big data, analytics, data engineering, and data science.

Reporting to the Field Engineering Manager, you will collaborate with our most strategic prospects and customers, work directly with product and engineering to drive the Databricks roadmap forward, and work with the broader customer-facing team to develop architectures and solutions using our platform. You will guide customers through the competitive landscape, best practices, and implementation, and develop technical champions along the way.

The impact you will have:
- Partner with the sales team and provide technical leadership to help customers understand how Databricks can help solve their business problems.
- Work directly with the sales team to develop your book of business, define account strategies, and execute those strategies to help your customers and prospects solve their business problems with Databricks.
- Consult on big data architectures and implement proofs of concept for strategic projects, spanning data engineering, data science and machine learning, and SQL analysis workflows, as well as validating integrations with cloud services, home-grown tools, and other third-party applications.
- Collaborate with your fellow Solutions Architects, using your skills to support each other and our users.
- Become an expert in, promote, and recruit contributors for Databricks-inspired open-source projects (Spark, Delta Lake, and MLflow) across the developer community.

What we look for:
- 8+ years in a data engineering, data science, technical architecture, or similar pre-sales/consulting role.
- 8+ years of experience with big data technologies, including Apache Spark, AI, data science, data engineering, Hadoop, Cassandra, and others.
- Strong consulting/customer-facing experience, working with external clients across a variety of industry markets.
- Coding experience in Python, R, Java, Apache Spark, or Scala.
- Experience building distributed data systems.
- Experience building solutions with public cloud providers such as AWS, Azure, or GCP.
- Availability to travel to customers in your region.

Nice to have: Databricks certification.

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow. To learn more, follow Databricks on Twitter, LinkedIn, and Facebook.

Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion

Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within the Employer's discretion whether to apply for a U.S. government license for such positions, and the Employer may decline to proceed with an applicant on this basis alone.
Posted 1 week ago
5.0 - 10.0 years
25 - 37 Lacs
Pune
Work from Office
Role Overview:
Synechron is looking for an experienced Scala Spark Developer to join our advanced analytics and big data engineering team in Pune. The ideal candidate should have a strong background in data processing using Scala and Spark in distributed environments, with the ability to handle large-scale data pipelines for complex business needs.

Key Responsibilities:
- Design and develop scalable big data processing solutions using Scala and Apache Spark.
- Optimize data workflows and Spark jobs for performance and cost-efficiency.
- Work with large datasets across structured and unstructured sources.
- Collaborate with data engineers, analysts, and stakeholders to build end-to-end solutions.
- Maintain clean, modular, and well-documented code following best practices.

Preferred Qualifications:
- Experience with big data ecosystems (Hadoop, Hive, HDFS, etc.).
- Familiarity with cloud platforms (AWS, Azure, or GCP) for data engineering.
- Understanding of data lakes, streaming data, and real-time analytics is a plus.
- Strong debugging, performance tuning, and communication skills.

Educational Qualification:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Posted 1 week ago
5.0 - 10.0 years
12 - 13 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Description

Accountabilities:
- Designs, codes, tests, debugs, and documents software according to Dell's systems quality standards, policies, and procedures.
- Analyzes business needs and creates software solutions.
- Responsible for preparing design documentation.
- Prepares test data for unit, string, and parallel testing.
- Evaluates and recommends software and hardware solutions to meet user needs.
- Resolves customer issues with software solutions and responds to suggestions for improvements and enhancements.
- Works with business and development teams to clarify requirements to ensure testability.
- Drafts, revises, and maintains test plans, test cases, and automated test scripts.
- Executes test procedures according to software requirements specifications.
- Logs defects and makes recommendations to address defects.
- Retests software corrections to ensure problems are resolved.
- Documents evolution of testing procedures for future replication.
- May conduct performance and scalability testing.

Responsibilities:
- Plans, conducts, and leads assignments, generally involving moderate to high budget projects or more than one project.
- Manages user expectations regarding appropriate milestones and deadlines.
- Assists in training, work assignment, and checking of less experienced developers.
- Serves as technical consultant to leaders in the IT organization and functional user groups.
- Subject matter expert in one or more technical programming specialties; employs expertise as a generalist or a specialist.
- Performs estimation efforts on complex projects and tracks progress.
- Works on the highest level of problems where analysis of situations or data requires an in-depth evaluation of various factors.
- Documents, evaluates, and researches test results; documents evolution of testing scripts for future replication.
- Identifies, recommends, and implements changes to enhance the effectiveness of quality assurance strategies.

Additional Details:
Full-stack backend expertise needed, with a minimum of 5 years of experience. Mandatory skills: Java, Spring Boot, MongoDB, Elasticsearch, shell scripting, and any UI language. Extra skills working with Jira, Python, Scala, etc. will be preferred.
Posted 1 week ago
4.0 - 8.0 years
6 - 15 Lacs
Hyderabad
Remote
Role & Responsibilities:
- Design and develop scalable data pipelines.
- Integrate and transform financial data using flat files, JSON, and XML formats.
- Create optimized queries using Hive SQL, PostgreSQL, and other data tools.

Preferred candidate profile:
- 4+ years of experience in Databricks using Python and Scala.
- Hands-on experience using the Azure cloud platform.
Posted 1 week ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position Overview:
ShyftLabs is seeking an experienced Databricks Architect to lead the design, development, and optimization of big data solutions using the Databricks Unified Analytics Platform. This role requires deep expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to architect scalable, high-performance data platforms and drive data-driven innovation.

ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to accelerate business growth across various industries by focusing on creating value through innovation.

Job Responsibilities
- Architect, design, and optimize big data and AI/ML solutions on the Databricks platform.
- Develop and implement highly scalable ETL pipelines for processing large datasets.
- Lead the adoption of Apache Spark for distributed data processing and real-time analytics.
- Define and enforce data governance, security policies, and compliance standards.
- Optimize data lakehouse architectures for performance, scalability, and cost-efficiency (a sketch of this pattern follows this listing).
- Collaborate with data scientists, analysts, and engineers to enable AI/ML-driven insights.
- Oversee and troubleshoot Databricks clusters, jobs, and performance bottlenecks.
- Automate data workflows using CI/CD pipelines and infrastructure-as-code practices.
- Ensure data integrity, quality, and reliability across all data processes.

Basic Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 10+ years of hands-on experience in data engineering, with at least 5+ years in Databricks architecture and Apache Spark.
- Proficiency in SQL, Python, or Scala for data processing and analytics.
- Extensive experience with cloud platforms (AWS, Azure, or GCP) for data engineering.
- Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture.
- Hands-on experience with CI/CD tools and DevOps best practices.
- Familiarity with data security, compliance, and governance best practices.
- Strong problem-solving and analytical skills in a fast-paced environment.

Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer).
- Hands-on experience with MLflow, Feature Store, or Databricks SQL.
- Exposure to Kubernetes, Docker, and Terraform.
- Experience with streaming data architectures (Kafka, Kinesis, etc.).
- Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker).
- Prior experience working with retail, e-commerce, or ad-tech data platforms.

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
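As a rough illustration of the lakehouse ETL pattern this role centres on, the sketch below lands raw JSON, applies a cleaning transformation, and writes a Delta table. It assumes a Delta-enabled Spark runtime (such as Databricks); the paths and column names are hypothetical.

```python
# A minimal lakehouse ETL sketch, assuming a Delta-enabled Spark runtime.
# "/mnt/raw/events", "/mnt/silver/events", and the column names are
# illustrative placeholders, not anything from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-etl-sketch").getOrCreate()

raw = spark.read.json("/mnt/raw/events")  # hypothetical landing path
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_date").isNotNull())
)
# Delta format adds ACID writes and time travel on top of the data lake.
clean.write.format("delta").mode("overwrite").save("/mnt/silver/events")
```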
Posted 1 week ago
5.0 - 10.0 years
15 - 25 Lacs
Pune
Remote
Role & Responsibilities
- At least 5 years of experience in data engineering, with a strong background in Azure Databricks, Scala/Python, and Streamlit.
- Experience in handling unstructured data processing and transformation, with programming knowledge.
- Hands-on experience building data pipelines using Scala/Python.
- Big data technologies such as Apache Spark, Structured Streaming, SQL, and Databricks Delta Lake (see the streaming sketch after this listing).
- Strong analytical and problem-solving skills, with the ability to troubleshoot Spark applications and resolve data pipeline issues.
- Familiarity with version control systems like Git and CI/CD pipelines using Jenkins.
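For the Structured Streaming plus Delta Lake combination listed above, a minimal sketch might look like the following. It assumes a Databricks-style environment; the source path, schema, and checkpoint location are illustrative assumptions.

```python
# A minimal Structured Streaming -> Delta sketch. Paths, the schema, and
# the checkpoint location are placeholder assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

stream = (
    spark.readStream.format("json")
         .schema("id STRING, amount DOUBLE, ts TIMESTAMP")  # file streams require an explicit schema
         .load("/mnt/landing/json")
)
query = (
    stream.writeStream.format("delta")
          .option("checkpointLocation", "/mnt/checkpoints/landing")  # enables exactly-once recovery
          .outputMode("append")
          .start("/mnt/silver/landing")
)
query.awaitTermination()
```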
Posted 1 week ago
7.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
The Role
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
- Creating innovative data validation methods and data analysis tools.
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements
- Experience with big data technologies (Hadoop, Spark, NiFi, Impala).
- 7+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
- High proficiency in Scala/Java and Spark for applied large-scale data processing.
- Expertise with big data technologies, including Spark, Data Lake, and Hive.
- Solid understanding of batch and streaming data processing techniques.
- Proficient knowledge of the data lifecycle management process, including data collection, access, use, storage, transfer, and deletion.
- Expert-level ability to write complex, optimized SQL queries across extensive data volumes.
- Experience with HDFS, NiFi, and Kafka.
- Experience with Apache Ozone, Delta Tables, Databricks, Axon (Kafka), Spring Batch, and Oracle DB.
- Familiarity with Agile methodologies.
- An obsession for service observability, instrumentation, monitoring, and alerting.
- Knowledge of, or experience in, architectural best practices for building data lakes.
Posted 1 week ago
8.0 - 13.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities
Build robust, performant, highly scalable, and flexible data pipelines with a focus on time to market with quality.
- Act as an active team member to ensure high code quality (unit testing, regression tests), delivered on time and within budget.
- Document the delivered code/solution.
- Participate in the implementation of releases following the change and release management processes.
- Provide support to the operations team in case of major incidents for which engineering knowledge is required.
- Participate in effort estimations.
- Provide solutions (bug fixes) for problem management.

Additional Responsibilities:
- Good knowledge of software configuration management systems
- Strong business acumen, strategy, and cross-industry thought leadership
- Awareness of latest technologies and industry trends
- Logical thinking and problem-solving skills, along with an ability to collaborate
- Knowledge of two or three industry domains
- Understanding of the financial processes for various types of projects and the various pricing models available
- Client interfacing skills
- Knowledge of SDLC and agile methodologies
- Project and team management

Technical and Professional Requirements:
Experience with most of these technologies: PySpark, AWS, Databricks, Spark, HDFS, Python.

Preferred Skills: Technology - Analytics Packages - Python (Big Data); Technology - Big Data - Data Processing - Spark
Posted 1 week ago
5.0 - 8.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Educational Qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities
A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions, and facilitate deployment resulting in client delight. You will develop a proposal by owning parts of the proposal document and by giving inputs on solution design based on areas of expertise. You will plan the activities of configuration, configure the product as per the design, conduct conference room pilots, and assist in resolving any queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates which suit the customer's budgetary requirements and are in line with the organization's financial guidelines. You will actively lead small projects and contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Ability to develop value-creating strategies and models that enable clients to innovate, drive growth, and increase their business profitability
- Good knowledge of software configuration management systems
- Awareness of latest technologies and industry trends
- Logical thinking and problem-solving skills, along with an ability to collaborate
- Understanding of the financial processes for various types of projects and the various pricing models available
- Ability to assess current processes, identify improvement areas, and suggest technology solutions
- Knowledge of one or two industry domains
- Client interfacing skills
- Project and team management

Technical and Professional Requirements:
Primary skills: Technology - Big Data - Data Processing - Spark

Preferred Skills: Technology - Big Data - Data Processing - Spark
Posted 1 week ago
3.0 - 5.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Technical and Professional Requirements:
Primary skills: Technology - Big Data - Data Processing - MapReduce

Preferred Skills: Technology - Big Data - Data Processing - MapReduce
Posted 1 week ago
5.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering, BCA, BTech, MTech, MSc, MCA
Service Line: Strategic Technology Group

Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Knowledge of more than one technology
- Basics of architecture and design fundamentals
- Knowledge of testing tools
- Knowledge of agile methodologies
- Understanding of project life cycle activities on development and maintenance projects
- Understanding of one or more estimation methodologies; knowledge of quality processes
- Basics of the business domain to understand the business requirements
- Analytical abilities, strong technical skills, good communication skills
- Good understanding of the technology and domain
- Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modelling methods
- Awareness of latest technologies and trends
- Excellent problem-solving, analytical, and debugging skills

Technical and Professional Requirements:
Primary skills: Technology - Functional Programming - Scala (Big Data)

Preferred Skills: Technology - Functional Programming - Scala
Posted 1 week ago
2.0 - 7.0 years
5 - 9 Lacs
Pune
Work from Office
Educational Qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Knowledge of more than one technology
- Basics of architecture and design fundamentals
- Knowledge of testing tools
- Knowledge of agile methodologies
- Understanding of project life cycle activities on development and maintenance projects
- Understanding of one or more estimation methodologies; knowledge of quality processes
- Basics of the business domain to understand the business requirements
- Analytical abilities, strong technical skills, good communication skills
- Good understanding of the technology and domain
- Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modelling methods
- Awareness of latest technologies and trends
- Excellent problem-solving, analytical, and debugging skills

Technical and Professional Requirements:
Primary skills: Hadoop, Hive, HDFS

Preferred Skills: Technology - Big Data - Hadoop
Posted 1 week ago
5.0 - 9.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering, BCA, BSc, MCA, MTech, MSc
Service Line: Data & Analytics Unit

Responsibilities
1. 5-8 years of experience in Azure (hands-on experience in Azure Databricks and Azure Data Factory)
2. Good knowledge of SQL and PySpark
3. Knowledge of the medallion architecture pattern
4. Knowledge of Integration Runtime
5. Knowledge of the different ways of scheduling jobs via ADF (event/schedule, etc.)
6. Knowledge of AAS and cubes
7. Ability to create, manage, and optimize cube processing
8. Good communication skills
9. Experience in leading a team

Additional Responsibilities:
- Good knowledge of software configuration management systems
- Strong business acumen, strategy, and cross-industry thought leadership
- Awareness of latest technologies and industry trends
- Logical thinking and problem-solving skills, along with an ability to collaborate
- Knowledge of two or three industry domains
- Understanding of the financial processes for various types of projects and the various pricing models available
- Client interfacing skills
- Knowledge of SDLC and agile methodologies
- Project and team management

Preferred Skills: Technology - Big Data - Data Processing - Spark
Posted 1 week ago
5.0 - 9.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering, BSc, BCA, MSc, MCA, MTech
Service Line: Data & Analytics Unit

Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Knowledge of more than one technology
- Basics of architecture and design fundamentals
- Knowledge of testing tools
- Knowledge of agile methodologies
- Understanding of project life cycle activities on development and maintenance projects
- Understanding of one or more estimation methodologies; knowledge of quality processes
- Basics of the business domain to understand the business requirements
- Analytical abilities, strong technical skills, good communication skills
- Good understanding of the technology and domain
- Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modelling methods
- Awareness of latest technologies and trends
- Excellent problem-solving, analytical, and debugging skills

Technical and Professional Requirements:
Primary skills: Azure Databricks

Preferred Skills: Technology - Cloud Platform - Azure Development & Solution Architecting
Posted 1 week ago
8.0 - 13.0 years
0 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role & Responsibilities

JD:
1) Must have 8+ years of experience in Python/Java, Spark, Scala, Hive, and Microsoft Azure cloud services (the Databricks platform and developer tools)
2) 4 to 8 years of experience in data warehouse, ETL, Snowflake, and report testing
3) Strong in writing SQL scripts, with database knowledge of Oracle, SQL Server, and Snowflake
4) Hands-on working experience with any of the ETL tools (preferably Informatica) and report/dashboard tools
5) Ability to work independently, and availability for 12:30 pm to 9:30 pm timings

Good to have:
- Data processing: ability to build optimized and clean ETL pipelines using Databricks flows, Scala, Python, Spark, and SQL
- Testing and deployment: preparing pipelines for deployment
- Data modeling: knowledge of general data modeling concepts to model data into a lakehouse
- Building custom utilities using Python for ETL automation
- Experience working in an agile (Scrum) environment and usage of tools like JIRA

Must haves:
- Databricks: 4/5
- PySpark with Scala: 4/5
- SQL: 3/5
Posted 1 week ago
2.0 - 11.0 years
16 - 18 Lacs
Pune
Work from Office
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer. In this role, you will:
- Handle data pipeline integration and management.
- Design and implement scalable data architectures to support the bank's data needs.
- Develop and maintain ETL (Extract, Transform, Load) processes.
- Ensure the data infrastructure is reliable, scalable, and secure.
- Oversee the integration of diverse data sources into a cohesive data platform.
- Ensure data quality, data governance, and compliance with regulatory requirements.
- Develop and enforce data security policies and procedures.
- Monitor and optimize data pipeline performance.
- Troubleshoot and resolve data-related issues promptly.
- Implement monitoring and alerting systems for data processes.

Requirements
To be successful in this role, you should meet the following requirements:
- 5+ years of experience in data engineering or a related field.
- Strong experience with database technologies (SQL, NoSQL), data warehousing solutions, and big data technologies (Hadoop, Spark).
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with cloud platforms (AWS, Azure, Google Cloud) and their data services.
- Deep understanding of ETL processes and data pipeline orchestration tools (Airflow, Apache NiFi); a minimal Airflow sketch follows this listing.
- Knowledge of data modeling, data warehousing concepts, and data integration techniques.
- Strong problem-solving skills and ability to work under pressure.
- Experience in the banking or financial services industry.
- Familiarity with regulatory requirements related to data security and privacy in the banking sector.
- Certifications in cloud platforms (AWS Certified Data Analytics, Google Professional Data Engineer, etc.).
- Experience with machine learning and data science frameworks.

You'll achieve more when you join HSBC.
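As an illustration of the pipeline-orchestration skill named in the requirements (Airflow), here is a minimal sketch of a daily extract-then-transform DAG. The task bodies and DAG id are placeholder assumptions, not HSBC's actual pipelines.

```python
# A minimal Airflow (2.x) orchestration sketch: a daily extract -> transform
# dependency chain. Task bodies and the dag_id are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")  # placeholder task body

def transform():
    print("clean and load into the warehouse")  # placeholder task body

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs only after extract succeeds
```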
Posted 1 week ago