5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Genpact is a global professional services and solutions firm with over 125,000 employees in more than 30 countries. We are driven by curiosity, entrepreneurial agility, and the desire to create lasting value for our clients, including Fortune Global 500 companies. Our purpose is the relentless pursuit of a world that works better for people, and we serve leading enterprises with deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently seeking applications for the role of Senior Principal Consultant, Research Data Scientist. The ideal candidate should have experience in Text Mining, Natural Language Processing (NLP) tools, Data Science, Big Data, and algorithms. Full-cycle experience in at least one large-scale Text Mining/NLP project is desirable, including creating a business use case, a Text Analytics assessment/roadmap, technology and analytics solutioning, implementation, and change management. Experience in Hadoop, including development in the MapReduce framework, is also required.

The Text Mining Scientist (TMS) will play a crucial role in bridging enterprise database teams and business/functional resources, translating business needs into techno-analytic problems and working with database teams to deliver large-scale text analytic solutions. The right candidate should have prior experience in developing text mining and NLP solutions using open-source tools.

Responsibilities include developing transformative AI/ML solutions, managing project delivery, stakeholder/customer expectations, project documentation, and project planning, and staying updated on industrial and academic developments in AI/ML with NLP/NLU applications. The role also involves conceptualizing, designing, building, and developing solution algorithms, interacting with clients to collect requirements, and conducting applied research on text analytics and machine learning projects.

Qualifications we seek:

Minimum Qualifications/Skills:
- MS in Computer Science, Information Systems, or Computer Engineering
- Systems engineering experience with Text Mining/NLP tools, Data Science, Big Data, and algorithms

Technology:
- Proficiency in open-source Text Mining paradigms such as NLTK, OpenNLP, OpenCalais, StanfordNLP, GATE, UIMA, and Lucene, and cloud-based NLU tools such as DialogFlow and MS LUIS
- Exposure to statistical toolkits such as R, Weka, S-Plus, Matlab, and SAS Text Miner
- Strong core Java experience, Hadoop ecosystem, and Python/R programming skills

Methodology:
- Solutioning and consulting experience in verticals such as BFSI and CPG
- Solid foundation in AI methodologies such as ML, DL, NLP, and neural networks
- Understanding of NLP and statistics concepts and applications such as sentiment analysis

Preferred Qualifications/Skills:

Technology:
- Expertise in NLP, NLU, and machine learning/deep learning methods
- UI development paradigms, Linux, Windows, GPU experience, Spark, Scala
- Deep learning frameworks such as TensorFlow, Keras, Torch, and Theano

Methodology:
- Social network modeling paradigms
- Text analytics using NLP tools and text analytics implementations

This is a full-time position based in Noida, India. The candidate should have a Master's degree or equivalent education level. The job was posted on Oct 7, 2024, with no unposting date set. The primary skill required is digital, and the job category is full-time.
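For illustration only (this is not Genpact's codebase), here is a minimal sentiment-analysis pass using NLTK, one of the open-source toolkits the posting names; the sample reviews and the labeling threshold are invented for the sketch:

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()
reviews = [
    "The onboarding process was smooth and the support team was excellent.",
    "Billing errors keep recurring and nobody responds to complaints.",
]
for text in reviews:
    scores = analyzer.polarity_scores(text)  # neg/neu/pos plus a compound score in [-1, 1]
    label = "positive" if scores["compound"] >= 0 else "negative"
    print(f"{label:8s} {scores['compound']:+.2f}  {text}")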
Posted 5 days ago
0.0 - 4.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Back End Developer at Cequence Security in Pune, India, you will contribute to building products that safeguard web applications and APIs worldwide from threats including online fraud, business logic attacks, exploits, and sensitive data exposure. Our platform caters to global enterprise customers in sectors like finance, banking, retail, social media, travel, and hospitality by offering a unified solution for runtime API visibility, security risk monitoring, and behavioral fingerprint-based threat prevention. Cequence Security stands out for its ability to consistently detect and prevent evolving online attacks without the need for extensive application integration, making it a trusted choice for enterprises seeking verified security solutions. If you are passionate about global security, enjoy collaborating with a dedicated team, and are eager to contribute to the growth of a dynamic organization, we welcome your application.

Your role involves designing, developing, and maintaining critical backend components of our security products. You will play a key part in architecting, designing, and implementing new product features from inception to final execution, including working on backend services, data pipelines, and network components. Collaboration with architects, engineers, data scientists, and security experts will be essential as you tackle challenging problems and bring new features to life.

Your responsibilities will include:
- Overseeing projects from ideation to completion.
- Designing server-side architecture enhancements for existing backend services.
- Developing new services in alignment with the overall architecture.
- Creating and implementing backend RESTful services and APIs.
- Enhancing data pipelines for increased throughput and scalability.
- Improving the high-throughput data plane for our products.
- Operating within an Agile framework to coordinate tasks and engage with team members.
- Participating in a Test-Driven Development environment to produce reliable, well-documented code.

Requirements for this role:
- Bachelor's degree or equivalent experience in Computer Science or related fields.
- Proficiency in JVM languages like Java, Kotlin, and Scala.
- Familiarity with big data tools such as Elasticsearch and Apache Kafka.
- Experience with high-throughput networking components like proxies and firewalls.
- Background in network and/or application security domains is advantageous.
- Strong problem-solving abilities and attention to detail.
- Sound understanding of data structures, system design, and algorithms.
- Exposure to cloud services like AWS EC2, EMR, and EKS is a bonus.
- Knowledge of Docker and Kubernetes is desirable.
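As a hedged sketch of the kind of high-throughput event consumption this role touches (Kafka is listed above), the snippet below uses the kafka-python client; the topic name, broker address, and the abuse rule are all invented for illustration:

import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "api-traffic-events",                  # hypothetical topic name
    bootstrap_servers="localhost:9092",    # placeholder broker address
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)
for record in consumer:
    event = record.value
    # Flag clients whose request rate looks anomalous (placeholder rule).
    if event.get("requests_per_minute", 0) > 1000:
        print(f"possible abuse from {event.get('client_ip')}")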
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
As an integral part of our Data Automation & Transformation team, you will experience unique challenges every day. We are looking for someone with a positive attitude, an entrepreneurial spirit, and a willingness to dive in and get things done. This role is crucial to the team and will provide exposure to various aspects of managing a banking office.

In this role, you will focus on building curated data products and modernizing data by moving it to Snowflake. Your responsibilities will include working with cloud databases such as AWS and Snowflake, along with coding languages like SQL, Python, and PySpark. You will analyze data patterns across large multi-platform ecosystems and develop automation solutions, analytics frameworks, and data consumption architectures used by Decision Sciences, Product Strategy, Finance, Risk, and Modeling teams. Ideally, you should have a strong analytical and technical background in financial services, particularly in the small business banking or commercial banking segments.

Your key responsibilities will involve migrating Private Client Office data to the public cloud (AWS and Snowflake), collaborating closely with the Executive Director of Automation and Transformation on new projects, and partnering with various teams to support data analytics needs. You will also be responsible for developing data models, automating data assets, identifying technology gaps, and supporting data integration projects with external providers.

To qualify for this role, you should have at least 3 years of experience in analytics, business intelligence, data warehousing, or data governance. A Master's or Bachelor's degree in a related field (e.g., Data Analytics, Computer Science, Math/Statistics, or Engineering) is preferred. You must have a solid understanding of programming languages such as SQL, SAS, Python, Spark, Java, or Scala, and experience in building relational data models across different technology platforms. Excellent communication, time management, and multitasking skills are essential, along with experience in data visualization tools and compliance with regulatory standards. Knowledge of risk classification, internal controls, and commercial banking products and services is desirable.

Preferred qualifications include experience with Big Data and cloud platforms, data wrangling tools, dynamic reporting applications like Tableau, and proficiency in data architecture, data mining, and analytical methodologies. Familiarity with job scheduling workflows, code versioning software, and change management tools would be advantageous.
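For a flavor of the Snowflake-side work described above, here is a minimal sketch using the official snowflake-connector-python package; the account, credentials, and table are placeholders, not details from the posting:

import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    user="ANALYTICS_SVC", password="***", account="myorg-myaccount",
    warehouse="TRANSFORM_WH", database="BANKING", schema="CURATED",
)
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS client_balances (
        client_id STRING, balance NUMBER(18,2), as_of DATE
    )
""")
# The connector's default paramstyle is pyformat, so %s placeholders work.
cur.execute(
    "INSERT INTO client_balances VALUES (%s, %s, %s)",
    ("C-1001", 25430.17, "2024-10-07"),
)
conn.commit()
conn.close()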
Posted 5 days ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This role has been designed as 'Hybrid' with an expectation that you will work on average 2 days per week from an HPE office.

Who We Are
Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.

Job Description

What you'll do:
As a member of the Decision Support Analytics (DSA) team, you will collaborate with cross-functional teams to design, build, and manage scalable data pipelines, data warehouses, and machine learning (ML) models. Your work will involve analyzing and visualizing data, publishing dashboards or data models, and contributing to the development of web services for Engineering Technologies portals and applications. This role requires strong coding abilities, presentation skills, and expertise in big data infrastructure. The ideal candidate will have experience in end-to-end data generation processes, troubleshooting data/reporting issues, and recommending optimal data solutions. Keen attention to detail and proficiency with tools like Tableau and other data analysis platforms are essential.

- Collaborate with internal stakeholders to gather requirements and understand business workflows.
- Develop scalable data pipelines and ensure high-quality data flow and integrity.
- Use advanced coding skills in languages such as SQL, Python, Java, or Scala to address business needs.
- Leverage statistical methods to analyze data, generate actionable insights, and produce business reports.
- Design meaningful visualizations using tools like Tableau, Power BI, or similar platforms for effective communication with stakeholders.
- Implement or upgrade data analysis tools and assist in strategic decisions regarding new systems.
- Build frameworks and automation tools to streamline data consumption and understanding.
- Train end-users on new dashboards, reports, or tools.
- Provide hands-on support for internal customers across various teams.
- Ensure compliance with data governance policies and security standards.

What You Need To Bring
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven track record of working with large datasets in fast-paced environments.
- Strong problem-solving skills with the ability to adapt to evolving technologies.
- Typically 8+ years of experience.
- Data engineering tools and frameworks: ETL tools such as WhereScape, Apache Airflow, or Azure Data Factory; Big Data technologies like Hadoop, Apache Spark, or Kafka.
- Cloud platforms: proficiency in cloud services such as AWS, Azure, or Google Cloud Platform for storage, computing, and analytics.
- Databases: experience with both relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Data modeling and architecture: expertise in designing schemas for analytical use cases and optimizing storage mechanisms.
- Machine learning and automation: familiarity with ML frameworks (e.g., TensorFlow, PyTorch) for building predictive models.
- Scripting and automation: advanced scripting for automation using Python, Scala, or Java.
- APIs and web services: building RESTful APIs for seamless integration with internal and external systems.

Additional Skills
Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Security-First Mindset, Solutions Design, Testing & Automation, User Experience (UX)

What We Can Offer You

Health & Wellbeing
We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial and emotional wellbeing.

Personal & Professional Development
We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have, whether you want to become a knowledge expert in your field or apply your skills to another division.

Unconditional Inclusion
We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good.

Let's Stay Connected
Follow @HPECareers on Instagram to see the latest on people, culture and tech at HPE.

Job: Engineering
Job Level: TCP_03

HPE is an Equal Employment Opportunity/Veterans/Disabled/LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran/Individual with Disabilities.

HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
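Apache Airflow appears in the ETL tooling list above; purely as an illustrative sketch (DAG id and task logic are invented, and the `schedule` argument assumes Airflow 2.4+), a two-task DAG of the extract-and-load shape such pipelines take:

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling rows from the source system")  # placeholder extract step

def load():
    print("writing curated rows to the warehouse")  # placeholder load step

with DAG(
    dag_id="nightly_engineering_metrics",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # load runs only after extract succeeds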
Posted 5 days ago
6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description: Senior MLOps Engineer

Position: Senior MLOps Engineer
Location: Gurugram
Relevant Experience Required: 6+ years
Employment Type: Full-time

About The Role
We are seeking a Senior MLOps Engineer with deep expertise in Machine Learning Operations, Data Engineering, and cloud-native deployments. This role requires building and maintaining scalable ML pipelines, ensuring robust data integration and orchestration, and enabling real-time and batch AI systems in production. The ideal candidate will be skilled in state-of-the-art MLOps tools, data clustering, big data frameworks, and DevOps best practices, ensuring high reliability, performance, and security for enterprise AI workloads.

Key Responsibilities

MLOps & Machine Learning Deployment
- Design, implement, and maintain end-to-end ML pipelines from experimentation to production.
- Automate model training, evaluation, versioning, deployment, and monitoring using MLOps frameworks.
- Implement CI/CD pipelines for ML models (GitHub Actions, GitLab CI, Jenkins, ArgoCD).
- Monitor ML systems in production for drift, bias, performance degradation, and anomalies.
- Integrate feature stores (Feast, Tecton, Vertex AI Feature Store) for standardized model inputs.

Data Engineering & Integration
- Design and implement data ingestion pipelines for structured, semi-structured, and unstructured data.
- Handle batch and streaming pipelines with Apache Kafka, Apache Spark, Apache Flink, Airflow, or Dagster.
- Build ETL/ELT pipelines for data preprocessing, cleaning, and transformation.
- Implement data clustering, partitioning, and sharding strategies for high availability and scalability.
- Work with data warehouses (Snowflake, BigQuery, Redshift) and data lakes (Delta Lake, Lakehouse architectures).
- Ensure data lineage, governance, and compliance with modern tools (DataHub, Amundsen, Great Expectations).

Cloud & Infrastructure
- Deploy ML workloads on AWS, Azure, or GCP using Kubernetes (K8s) and serverless computing (AWS Lambda, GCP Cloud Run).
- Manage containerized ML environments with Docker, Helm, Kubeflow, MLflow, and Metaflow.
- Optimize for cost, latency, and scalability across distributed environments.
- Implement infrastructure as code (IaC) with Terraform or Pulumi.

Real-Time ML & Advanced Capabilities
- Build low-latency, real-time inference pipelines using gRPC, Triton Inference Server, or Ray Serve.
- Integrate vector databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered semantic search.
- Enable retrieval-augmented generation (RAG) pipelines for LLMs.
- Optimize ML serving with GPU/TPU acceleration and ONNX/TensorRT model optimization.

Security, Monitoring & Observability
- Implement robust access control, encryption, and compliance with SOC 2, GDPR, and ISO 27001.
- Monitor system health with Prometheus, Grafana, ELK/EFK, and OpenTelemetry.
- Ensure zero-downtime deployments with blue-green and canary release strategies.
- Manage audit trails and explainability for ML models.

Preferred Skills & Qualifications

Core Technical Skills
- Programming: Python (Pandas, PySpark, FastAPI), SQL, Bash; familiarity with Go or Scala is a plus.
- MLOps frameworks: MLflow, Kubeflow, Metaflow, TFX, BentoML, DVC.
- Data engineering tools: Apache Spark, Flink, Kafka, Airflow, Dagster, dbt.
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB.
- Vector databases: Pinecone, Weaviate, Milvus, Chroma.
- Visualization: Plotly Dash, Superset, Grafana.

Tech Stack
- Orchestration: Kubernetes, Helm, Argo Workflows, Prefect.
- Infrastructure as code: Terraform, Pulumi, Ansible.
- Cloud platforms: AWS (SageMaker, S3, EKS), GCP (Vertex AI, BigQuery, GKE), Azure (ML Studio, AKS).
- Model optimization: ONNX, TensorRT, Hugging Face Optimum.
- Streaming and real-time ML: Kafka, Flink, Ray, Redis Streams.
- Monitoring and logging: Prometheus, Grafana, ELK, OpenTelemetry.
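MLflow is named among the frameworks above; as a minimal sketch of the training-run tracking and model versioning it provides (the experiment name, model choice, and synthetic data are invented for illustration):

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)  # synthetic stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("fraud-model-demo")  # hypothetical experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("test_accuracy", model.score(X_te, y_te))
    mlflow.sklearn.log_model(model, "model")  # stores a versioned model artifact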
Posted 6 days ago
1.0 - 5.0 years
7 - 11 Lacs
Gurugram
Work from Office
You Lead the Way. We've Got Your Back.

At American Express, we know that with the right backing, people and businesses have the power to progress in incredible ways. Whether we're supporting our customers' financial confidence to move ahead, taking commerce to new heights, or encouraging people to explore the world, our colleagues are constantly redefining what's possible, and we're proud to back each other every step of the way. When you join #TeamAmex, you become part of a diverse community of over 60,000 colleagues, all with a common goal to deliver an exceptional customer experience every day. We back our colleagues with the support they need to thrive, professionally and personally. That's why we have Amex Flex, our enterprise working model that provides greater flexibility to colleagues while ensuring we preserve the important aspects of our unique in-person culture.

We are building an energetic, high-performance team with a nimble and creative mindset to drive our technology and products. American Express (AXP) is a powerful brand, a great place to work, and has unparalleled scale. Join us for an exciting opportunity in Marketing Technology within American Express Technologies.

How will you make an impact in this role?
There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing:
- Developing innovative, high-quality, and robust operational engineering capabilities as part of our team.
- Developing software in our technology stack, which is constantly evolving but currently includes Big Data, Spark, Python, Scala, GCP, and the Adobe suite (such as Customer Journey Analytics).
- Working with business partners and stakeholders to understand functional requirements, architecture dependencies, and business capability roadmaps.
- Creating technical solution designs to meet business requirements.
- Defining best practices to be followed by the team.
- Taking your place as a core member of an Agile team driving the latest development practices.
- Identifying and driving reengineering opportunities, and opportunities for adopting new technologies and methods.
- Suggesting and recommending solution architecture to resolve business problems.
- Performing peer code reviews and participating in technical discussions with the team on the best possible solutions.

As part of our diverse tech team, you can architect, code, and ship software that makes us an essential part of our customers' digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. American Express offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex.

Minimum Qualifications:
- BS or MS degree in computer science, computer engineering, or another technical discipline, or equivalent work experience.
- 5+ years of hands-on software development experience with Big Data and analytics solutions: Hadoop Hive, Spark, Scala, Python, shell scripting, and GCP Cloud (BigQuery, Bigtable, Airflow).
- Working knowledge of the Adobe suite, such as Adobe Experience Platform, Adobe Customer Journey Analytics, and CDP.
- Proficiency in SQL and database systems, with experience in designing and optimizing data models for performance and scalability.
- Design and development experience with Kafka, real-time ETL pipelines, and APIs is desirable.
- Experience in designing, developing, and optimizing data pipelines for large-scale data processing, transformation, and analysis using Big Data and GCP technologies.
- Certification in a cloud platform (GCP Professional Data Engineer) is a plus.
- Understanding of distributed (multi-tiered) systems, data structures, algorithms, and design patterns.
- Strong object-oriented programming skills and design patterns.
- Experience with CI/CD pipelines, automated test frameworks, and source code management tools (XLR, Jenkins, Git, Maven).
- Good knowledge of and experience with configuration management tools like GitHub.
- Ability to analyze complex data engineering problems, propose effective solutions, and implement them effectively.
- Looks proactively beyond the obvious for continuous improvement opportunities.
- Communicates effectively with product and cross-functional teams.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
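GCP BigQuery is part of the stack listed above; purely as a sketch (the dataset, table, and query are invented), a query job with the official Python client looks like this:

from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # uses application-default credentials
query = """
    SELECT campaign_id, COUNT(*) AS touches
    FROM `marketing.events`          -- hypothetical table
    WHERE event_date = CURRENT_DATE()
    GROUP BY campaign_id
    ORDER BY touches DESC
    LIMIT 10
"""
for row in client.query(query).result():  # blocks until the job finishes
    print(row.campaign_id, row.touches)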
Posted 6 days ago
1.0 - 3.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Atomicwork is on a mission to transform the digital workplace experience by uniting people, processes, and platforms through AI automation. Our team is building a modern service management platform that enables growing businesses to reduce operational complexity and drive business success.

We are seeking a skilled and motivated Data Pipeline Engineer to join our team. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines that support our enterprise search capabilities. Your work will ensure that data from various sources is efficiently ingested, processed, and indexed, enabling seamless and secure search experiences across the organisation.

This position is based out of our Bengaluru office. We offer competitive pay to employees and practical benefits for their whole family. If this sounds interesting to you, read on.

What We're Looking For (Qualifications)
We value hands-on skills and a proactive mindset. Formal qualifications are less important than your ability to deliver results and collaborate effectively.
- Proficiency in programming languages such as Python, Java, or Scala.
- Strong experience with data pipeline frameworks and tools (e.g., Apache Airflow, Apache NiFi).
- Experience with search platforms like Elasticsearch or OpenSearch.
- Familiarity with data ingestion, transformation, and indexing processes.
- Understanding of enterprise search concepts, including crawling, indexing, and query processing.
- Knowledge of data security and access control best practices.
- Experience with cloud platforms (AWS, GCP, or Azure) and related services.
- Familiarity with Model Context Protocol (MCP) is a strong plus.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration.

What You'll Do (Responsibilities)
- Design, develop, and maintain data pipelines for enterprise search applications.
- Implement data ingestion processes from various sources, including databases, file systems, and APIs.
- Develop data transformation and enrichment processes to prepare data for indexing.
- Integrate with search platforms to index and update data efficiently.
- Ensure data quality, consistency, and integrity throughout the pipeline.
- Monitor pipeline performance and troubleshoot issues as they arise.
- Collaborate with cross-functional teams, including data scientists, engineers, and product managers.
- Implement security measures to protect sensitive data during processing and storage.
- Document pipeline architecture, processes, and best practices.
- Stay updated with industry trends and advancements in data engineering and enterprise search.

Why we are different (culture)
As a part of Atomicwork, you can shape our company and business from idea to production. Our cultural values also set the bar high, helping us create a better workplace for everyone.
- Agency: Be self-directed. Take initiative and solve problems creatively.
- Taste: Hold a high bar. Sweat the details. Build with care and discernment.
- Ownership: We demonstrate unwavering commitment to our mission and goals, taking full responsibility for triumphs and setbacks.
- Mastery: We relentlessly pursue continuous self-improvement as individuals and teams, dedicating ourselves to constant learning and growth.
- Impatience: We recognize that our world moves swiftly and is driven by an unyielding desire to progress with every endeavor.
- Customer Obsession: We place our customers at the heart of everything we do, relentlessly seeking to understand their needs and exceed their expectations.

What we offer (compensation and benefits)
We are big on benefits that make sense to you and your family:
- Fantastic team: the #1 reason why everybody joins us.
- Convenient offices: well-located offices spread over five different cities.
- Paid time off: unlimited sick leaves and 15 days off every year.
- Health insurance: comprehensive health coverage, with up to 75% of the premium covered.
- Flexible allowances: with hassle-free reimbursements across spends.
- Annual outings: for everyone to have fun together.

What next (applying for this role)
Click on the apply button to get started with your application. Answer a few questions about yourself and your work, then wait to hear from us about the next steps. Do you have anything else to tell us? Email careers@atomicwork and let us know what's on your mind.
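Elasticsearch is one of the search platforms named in this posting; as an illustrative sketch of the indexing step in such a pipeline (the host, index name, and documents are placeholders):

from elasticsearch import Elasticsearch, helpers  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder cluster address
docs = [
    {"title": "VPN setup guide", "body": "Steps to configure the VPN client."},
    {"title": "Expense policy", "body": "How to file reimbursements."},
]
actions = ({"_index": "workplace-docs", "_source": d} for d in docs)
ok, errors = helpers.bulk(es, actions)  # bulk-index and report failures
print(f"indexed {ok} documents, {len(errors) if errors else 0} errors")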
Posted 6 days ago
4.0 - 9.0 years
13 - 23 Lacs
Gurugram, Bengaluru, Mumbai (All Areas)
Hybrid
3-8+ years of experience in Data Science, preferably with financial services clients. Python, R, Scala, PySpark. Data Science and Machine Learning concepts and algorithms such as clustering, regression, classification, and forecasting.
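As a small, purely illustrative sketch of one algorithm family this posting lists (clustering), k-means on synthetic data with scikit-learn:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)  # synthetic points
model = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X)
print("cluster sizes:", [int((model.labels_ == k).sum()) for k in range(4)])
print("inertia:", round(model.inertia_, 1))  # within-cluster sum of squared distances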
Posted 6 days ago
10.0 - 15.0 years
11 - 16 Lacs
Bengaluru
Work from Office
Role Overview:
Skyhigh Security is seeking a Principal Data Engineer to design and build scalable Big Data solutions. You'll leverage your deep expertise in Java and Big Data architecture to process massive datasets and shape our security offerings. If you have extensive experience with distributed systems and cloud platforms, and a passion for data quality, apply now to join our innovative team and make a global impact in cybersecurity!

Our Engineering team is driving the future of cloud security, developing one of the world's largest, most resilient cloud-native data platforms. At Skyhigh Security, we're enabling enterprises to protect their data with deep intelligence and dynamic enforcement across hybrid and multi-cloud environments. As we continue to grow, we're looking for a Principal Data Engineer to help us scale our platform, integrate advanced AI/ML workflows, and lead the evolution of our secure data infrastructure.

Responsibilities:
As a Principal Data Engineer, you will be responsible for:
- Leading the design and implementation of high-scale, cloud-native data pipelines for real-time and batch workloads.
- Collaborating with product managers, architects, and backend teams to translate business needs into secure and scalable data solutions.
- Integrating big data frameworks (like Spark, Kafka, Flink) with cloud-native services (AWS/GCP/Azure) to support security analytics use cases.
- Driving CI/CD best practices, infrastructure automation, and performance tuning across distributed environments.
- Evaluating and piloting the use of AI/LLM technologies in data pipelines (e.g., anomaly detection, metadata enrichment, automation).
- Evaluating and integrating LLM-based automation and AI-enhanced observability into engineering workflows.
- Ensuring data security and privacy compliance.
- Mentoring engineers, ensuring high engineering standards, and promoting technical excellence across teams.

What We're Looking For (Minimum Qualifications)
- 10+ years of experience in big data architecture and engineering, including deep proficiency with the AWS cloud platform.
- Expertise in distributed systems and frameworks such as Apache Spark, Scala, Kafka, Flink, and Elasticsearch, with experience building production-grade data pipelines.
- Strong programming skills in Java for building scalable data applications.
- Hands-on experience with ETL tools and orchestration systems.
- Solid understanding of data modeling across both relational (PostgreSQL, MySQL) and NoSQL (HBase) databases, and performance tuning.

What Will Make You Stand Out (Preferred Qualifications)
- Experience integrating AI/ML or LLM frameworks (e.g., LangChain, LlamaIndex) into data workflows.
- Experience implementing CI/CD pipelines with Kubernetes, Docker, and Terraform.
- Knowledge of modern data warehousing (e.g., BigQuery, Snowflake) and data governance principles (GDPR, HIPAA).
- Strong ability to translate business goals into technical architecture and mentor teams through delivery.
- Familiarity with visualization tools (Tableau, Power BI) to communicate data insights, even if not a primary responsibility.
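Spark and Kafka are both named above; as a hedged PySpark sketch of a streaming security-event count (the topic, broker, and windowing rule are invented, and the job assumes the spark-sql-kafka connector package is on the classpath):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("security-event-counts").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "cloud-audit-logs")           # hypothetical topic
    .load()
    .select(F.col("value").cast("string").alias("raw"), "timestamp")
)

# Count events per one-minute window as a stand-in for real anomaly scoring.
counts = events.groupBy(F.window("timestamp", "1 minute")).count()
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()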
Posted 6 days ago
10.0 - 11.0 years
25 - 27 Lacs
Pune
Work from Office
Job Description:

Job Title: Senior Engineer PD
Location: Pune, India

Role Description
Our team is part of the area Technology, Data, and Innovation (TDI) Private Bank. Within TDI, Partner Data is the central client reference data system in Germany. As a core banking system, many banking processes and applications are integrated with it and communicate via more than 2,000 interfaces. From a technical perspective, we focus on the mainframe but also build solutions on on-premise cloud, RESTful services, and an Angular frontend. Next to maintenance and the implementation of new CTB requirements, the content focus also lies on the regulatory and tax topics surrounding a partner/client. We are looking for a highly motivated candidate for the Cloud Data Engineer area.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those 35 years and above

Your key responsibilities
- You are responsible for the implementation of the new project on GCP (Spark, Dataproc, Dataflow, BigQuery, Terraform, etc.) across the whole SDLC chain.
- You support the migration of current functionalities to Google Cloud.
- You are responsible for the stability of the application landscape and support software releases.
- You also support L3 topics and application governance.
- You are responsible in the CTM area for coding as part of an agile team (Java, Scala, Spring Boot).

Your skills and experience
- You have experience with databases (BigQuery, Cloud SQL, NoSQL, Hive, etc.) and development, preferably for Big Data and GCP technologies.
- Strong understanding of the Data Mesh approach and integration patterns.
- Understanding of party data and integration with product data.
- Your architectural skills for big data solutions, especially interface architecture, allow a fast start.
- You have experience in at least: Spark, Java, Scala and Python, Maven, Artifactory, the Hadoop ecosystem, GitHub Actions, GitHub, and Terraform scripting.
- You have knowledge of customer reference data, customer opening processes, and preferably regulatory topics around know-your-customer processes.
- You work very well in teams but also independently, and you are constructive and target-oriented.
- Your English skills are good and you can communicate both professionally and informally in small talk with the team.

How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

https://www.db.com/company/company.htm
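Purely as a sketch of the GCP stack named above (Spark reading BigQuery, as might run on Dataproc): the dataset and filter are invented, and the job assumes Google's spark-bigquery connector is available on the cluster:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partner-data-extract").getOrCreate()

partners = (
    spark.read.format("bigquery")
    .option("table", "refdata.partners")  # hypothetical dataset.table
    .load()
)
active = partners.filter(partners.status == "ACTIVE")  # placeholder predicate
print("active partner records:", active.count())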
Posted 6 days ago
6.0 - 8.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Job Summary
Synechron is seeking a highly skilled and proactive Data Engineer to join our dynamic data analytics team. In this role, you will be instrumental in designing, developing, and maintaining scalable data pipelines and solutions on the Google Cloud Platform (GCP). With your expertise, you'll enable data-driven decision-making, contribute to strategic business initiatives, and ensure robust data infrastructure. This position offers an opportunity to work in a collaborative environment with a focus on innovative technologies and continuous growth.

Software Requirements
Required:
- Proficiency in data engineering tools and frameworks such as Hive, Apache Spark, and Python (version 3.x)
- Extensive experience working with Google Cloud Platform (GCP) offerings including Dataflow, BigQuery, Cloud Storage, and Pub/Sub
- Familiarity with Git, Jira, and Confluence for version control and collaboration
Preferred:
- Experience with additional GCP services like Dataproc, Data Studio, or Cloud Composer
- Exposure to other programming languages such as Java or Scala
- Knowledge of data security best practices and tools

Overall Responsibilities
- Design, develop, and optimize scalable data pipelines on GCP to support analytics and reporting needs
- Collaborate with cross-functional teams to translate business requirements into technical solutions
- Build and maintain data models, ensuring data quality, integrity, and security
- Participate actively in code reviews, adhering to best practices and standards
- Develop automated and efficient data workflows to improve system performance
- Stay updated with emerging data engineering trends and continuously improve technical skills
- Provide technical guidance and support to team members, fostering a collaborative environment
- Ensure timely delivery of deliverables aligned with project milestones

Technical Skills (By Category)
Programming Languages:
- Essential: Python (required)
- Preferred: Java, Scala
Data Management & Databases:
- Experience with Hive, BigQuery, and relational databases
- Knowledge of data warehousing concepts and SQL proficiency
Cloud Technologies:
- Extensive hands-on experience with GCP services including Dataflow, BigQuery, Cloud Storage, Pub/Sub, and Composer
- Ability to build and optimize data pipelines leveraging GCP offerings
Frameworks & Libraries:
- Spark (PySpark preferred); Hadoop ecosystem experience is advantageous
Development Tools & Methodologies:
- Agile/Scrum methodologies, version control with Git, project tracking via Jira, documentation on Confluence
Security Protocols:
- Understanding of data security, privacy, and compliance standards

Experience Requirements
- Minimum of 6-8 years in data or software engineering roles with a focus on data pipeline development
- Proven experience in designing and implementing data solutions on cloud platforms, particularly GCP
- Prior experience working in agile teams, participating in code reviews, and delivering end-to-end data projects
- Experience working with cross-disciplinary teams and understanding varied stakeholder requirements
- Exposure to industry best practices for data security, governance, and quality assurance is desired

Day-to-Day Activities
- Attend daily stand-up meetings and contribute to project planning sessions
- Collaborate with business analysts, data scientists, and other stakeholders to understand data needs
- Develop, test, and deploy scalable data pipelines, ensuring efficiency and reliability
- Perform regular code reviews, provide constructive feedback, and uphold coding standards
- Document technical solutions and maintain clear records of data workflows
- Troubleshoot and resolve technical issues in data processing environments
- Participate in continuous learning initiatives to stay abreast of technological developments
- Support team members by sharing knowledge and resolving technical challenges

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Relevant professional certifications in GCP (such as Google Cloud Professional Data Engineer) are preferred but not mandatory
- Demonstrable experience in data engineering and cloud technologies

Professional Competencies
- Strong analytical and problem-solving skills, with a focus on outcome-driven solutions
- Excellent communication and interpersonal skills to effectively collaborate within teams and with stakeholders
- Ability to work independently with minimal supervision and manage multiple priorities effectively
- Adaptability to evolving technologies and project requirements
- Demonstrated initiative in driving tasks forward and a continuous-improvement mindset
- Strong organizational skills with a focus on quality and attention to detail

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.

All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
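Pub/Sub is among the GCP services listed above; as an illustrative sketch of publishing a pipeline event (the project ID, topic, and payload are made up):

import json
from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "curated-events")  # placeholders

payload = json.dumps({"record_id": 42, "status": "validated"}).encode("utf-8")
future = publisher.publish(topic_path, payload, source="etl-stage-2")
print("published message id:", future.result())  # blocks until the broker acks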
Posted 6 days ago
0.0 - 1.0 years
10 - 13 Lacs
Bengaluru
Work from Office
Job Area: Interns Group, Interns Group > Interim Intern

Qualcomm Overview:
Qualcomm is a company of inventors that unlocked 5G, ushering in an age of rapid acceleration in connectivity and new possibilities that will transform industries, create jobs, and enrich lives. But this is just the beginning. It takes inventive minds with diverse skills, backgrounds, and cultures to transform 5G's potential into world-changing technologies and products. This is the Invention Age, and this is where you come in.

General Summary:
Only B.Tech 2026 graduates. As an IT intern, you will work with a team of IT professionals and engineers to develop, implement, and maintain various technologies for the organization. With a degree in computer science, engineering, or information technology, you will be able to contribute to some of the projects below.

Below are examples of roles and technologies that you may work on during your internship:
- Framework rollout and tool implementation
- System-level integration issues
- Design and integration of new features
- Project and program documentation
- Data analysis
- Network security
- Vendor management
- Development, testing, application, database, and infrastructure maintenance and support
- Project management
- Server/system administration

Technologies:
- OS: Android, Linux, Windows, Chrome, native platforms (RIM)
- Microsoft Office suite: SharePoint, Office 365, MSFT Office, Project, etc.
- Packaged/Cloud (SaaS): Salesforce, ServiceNow, Workday
- Enterprise service management tools
- Cloud computing services, such as AWS and Azure
- Version control and operational programs, such as Git/GitHub, Splunk, Perforce, or Syslog
- High-performance compute, virtualization, firewalls, VPN technologies, storage, monitoring tools, and proxy services
- Frameworks: Hadoop, Ruby on Rails, Grails, Angular, React
- Programming languages: Java, Python, JavaScript, Objective-C, Go, Scala, .NET
- Databases: Oracle, MySQL, PostgreSQL, MongoDB, Elasticsearch, MapR-DB
- Analytics: ETL (Informatica/Spark/Airflow), visualization (Tableau/Power BI), custom applications (JavaScript)
- DevOps: containers (K8s/Docker), Jenkins, Ansible, Chef, Azure DevOps

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail myhr.support@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities.

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications.

If you would like more information about this role, please contact Qualcomm Careers.
Posted 6 days ago
3.0 - 8.0 years
5 - 9 Lacs
Gurugram
Work from Office
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Analyze business requirements and functional specifications
- Determine the impact of changes on current functionality of the system
- Interact with diverse business partners and technical workgroups
- Be flexible to collaborate with onshore business during US business hours
- Be flexible to support project releases during US business hours
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Undergraduate degree or equivalent experience
- 3+ years of working experience in Python, PySpark, and Scala
- 3+ years of experience working with MS SQL Server and NoSQL DBs like Cassandra
- Hands-on working experience in Azure Databricks
- Solid healthcare domain knowledge
- Exposure to DevOps methodology and creating CI/CD deployment pipelines
- Exposure to Agile methodology, specifically using tools like Rally
- Ability to understand the existing application codebase, perform impact analysis, and update the code when required based on business logic or for optimization
- Proven excellent analytical and communication skills (both verbal and written)

Preferred Qualification:
- Experience with streaming applications (Kafka, Spark Streaming, etc.)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
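This posting pairs PySpark with MS SQL Server; a hedged sketch of that combination via Spark's JDBC source follows (the host, database, table, and credentials are placeholders, and the SQL Server JDBC driver must be on the cluster):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("claims-extract").getOrCreate()

claims = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://dbhost:1433;databaseName=claims")
    .option("dbtable", "dbo.claim_lines")      # hypothetical table
    .option("user", "etl_user").option("password", "***")
    .load()
)
claims.groupBy("claim_status").count().show()  # quick profile of the extract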
Posted 6 days ago
8.0 - 12.0 years
30 - 35 Lacs
Pune
Work from Office
About The Role:

Job Title: Senior Engineer PD, AVP
Location: Pune, India

Role Description
Our team is part of the area Technology, Data, and Innovation (TDI) Private Bank. Within TDI, Partner Data is the central client reference data system in Germany. As a core banking system, many banking processes and applications are integrated with it and communicate via more than 2,000 interfaces. From a technical perspective, we focus on the mainframe but also build solutions on on-premise cloud, RESTful services, and an Angular frontend. Next to maintenance and the implementation of new CTB requirements, the content focus also lies on the regulatory and tax topics surrounding a partner/client. We are looking for a highly motivated candidate for the Cloud Data Engineer area.

What we'll offer you
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- You are responsible for the implementation of the new project on GCP (Spark, Dataproc, Dataflow, BigQuery, Terraform, etc.) across the whole SDLC chain.
- You support the migration of current functionalities to Google Cloud.
- You are responsible for the stability of the application landscape and support software releases.
- You also support L3 topics and application governance.
- You are responsible in the CTM area for coding as part of an agile team (Java, Scala, Spring Boot).

Your skills and experience
- You have experience with databases (BigQuery, Cloud SQL, NoSQL, Hive, etc.) and development, preferably for Big Data and GCP technologies.
- Strong understanding of the Data Mesh approach and integration patterns.
- Understanding of party data and integration with product data.
- Your architectural skills for big data solutions, especially interface architecture, allow a fast start.
- You have experience in at least: Spark, Java, Scala and Python, Maven, Artifactory, the Hadoop ecosystem, GitHub Actions, GitHub, and Terraform scripting.
- You have knowledge of customer reference data, customer opening processes, and preferably regulatory topics around know-your-customer processes.
- You work very well in teams but also independently, and you are constructive and target-oriented.
- Your English skills are good and you can communicate both professionally and informally in small talk with the team.

How we'll support you

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair, and inclusive work environment.
Posted 6 days ago
5.0 - 10.0 years
4 - 5 Lacs
Pune
Work from Office
Senior Data Engineer
Location: Pune
Experience: 5+ years

We're a technology solutions provider dedicated to delivering innovative digital products and services. Our team of creative thinkers, tech enthusiasts, and strategic experts is transforming businesses and enhancing user experiences with cutting-edge technology. We're passionate about enabling our partners' success, and we invite you to be part of our exciting journey!

Responsibilities:
- Be part of a cross-functional Scrum team.
- Collaborate closely with other R&D functions.
- Contribute to new feature development.
- Provide input on system behaviour to Product Owners (POs) and developers.
- Support customers and internal teams.
- Analyse and solve product issues.

Must-Have Skills:
- Minimum 5+ years of experience as a Data Engineer.
- Strong hands-on experience with Scala.
- Experience with AWS/Azure cloud services related to data pipelines: EMR, S3, Redshift, DocumentDB/MongoDB, Spark Streaming, Spark, HDFS.

Vinaya Kumbhar Sr.
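S3 is part of the AWS stack listed above; as a small illustrative sketch (the bucket and prefix are invented), listing raw objects with boto3:

import boto3  # pip install boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="raw-events-bucket", Prefix="2024/10/")
for obj in resp.get("Contents", []):  # "Contents" is absent when nothing matches
    print(obj["Key"], obj["Size"])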
Posted 6 days ago
4.0 - 7.0 years
9 - 13 Lacs
Coimbatore
Work from Office
Project Role: Software Development Lead
Project Role Description: Develop and configure software systems either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes, and tools to support a client, project, or entity.
Must have skills: Microsoft Azure Databricks
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As a Software Development Lead, you will develop and configure software systems, either end-to-end or for specific stages of the product lifecycle. Your typical day will involve collaborating with various teams to ensure the successful implementation of software solutions, applying your knowledge of technologies and methodologies to support project goals and client needs. You will engage in problem-solving activities, guiding your team through challenges while ensuring that the software development process aligns with best practices and client expectations. Your role will also include mentoring team members and fostering a collaborative environment to drive innovation and efficiency in software development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Monitor project progress and ensure adherence to timelines and quality standards.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Microsoft Azure Databricks.
- Strong understanding of cloud computing principles and practices.
- Experience with data engineering and ETL processes.
- Familiarity with programming languages such as Python or Scala.
- Ability to design and implement scalable data solutions.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Microsoft Azure Databricks.
- This position is based in Coimbatore.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 6 days ago
3.0 - 8.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, Microsoft Azure Data Services
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with cross-functional teams to gather requirements, developing application features, and ensuring that the applications are optimized for performance and usability. You will also participate in testing and debugging processes to deliver high-quality solutions that meet the needs of the organization and its stakeholders.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in continuous learning to stay updated with the latest technologies and best practices.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, Microsoft Azure Data Services.
- Strong understanding of data integration techniques and ETL processes.
- Experience with cloud-based data storage solutions and data management.
- Familiarity with programming languages such as Python or Scala.
- Ability to work with data visualization tools to present insights effectively.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.

Qualification: 15 years full time education
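As a minimal, illustrative sketch of the kind of Databricks ETL such roles involve (the paths, columns, and Delta target are invented; on Databricks a SparkSession is already provided as `spark`):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` already exists

orders = spark.read.option("header", True).csv("/mnt/raw/orders.csv")  # placeholder path
daily = (
    orders.withColumn("amount", F.col("amount").cast("double"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)
# Write a curated Delta table for downstream consumers (Delta ships with Databricks).
daily.write.mode("overwrite").format("delta").save("/mnt/curated/daily_revenue")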
Posted 6 days ago
3.0 - 8.0 years
5 - 9 Lacs
Chennai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: Scala, PySpark
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to guarantee the quality of the applications you create, while continuously seeking ways to enhance functionality and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Conduct thorough testing and debugging of applications to ensure optimal performance and reliability.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good To Have Skills: Experience with PySpark, Scala.
- Strong understanding of data integration and ETL processes.
- Familiarity with cloud computing concepts and services.
- Experience in application lifecycle management and agile methodologies.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 6 days ago
3.0 - 8.0 years
5 - 9 Lacs
Coimbatore
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to specific business processes and application requirements. A typical day involves collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging to ensure the applications function as intended, while continuously seeking opportunities to improve efficiency in your work.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate actively in team discussions.
- Contribute solutions to work-related problems.
- Assist in documenting application specifications and user guides.
- Engage in code reviews to ensure quality and adherence to best practices.

Professional & Technical Skills:
- Must-have: proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud computing platforms and services.
- Familiarity with programming languages such as Python or Scala.
- Knowledge of data visualization techniques and tools.

Additional Information:
- The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Coimbatore office.
- 15 years of full-time education is required.
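This listing also asks for knowledge of data visualization. One lightweight pattern on Databricks is to aggregate with Spark and hand only the small result to pandas and matplotlib, as in the sketch below; the table and column names are placeholders, and the spark session is assumed to exist on the cluster.

    from pyspark.sql import functions as F
    import matplotlib.pyplot as plt

    # Aggregate in Spark, then collect only the small result to the driver.
    pdf = (
        spark.table("silver.daily_revenue")   # hypothetical table
             .groupBy("region")
             .agg(F.sum("revenue").alias("revenue"))
             .toPandas()
    )

    pdf.plot.bar(x="region", y="revenue", legend=False)
    plt.ylabel("Revenue")
    plt.title("Revenue by region")
    plt.show()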
Posted 6 days ago
2.0 - 5.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, Microsoft Azure Data Services
Good-to-have skills: NA
Minimum experience: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to gather requirements, developing application features, and ensuring that applications are aligned with business objectives. You will also engage in problem-solving, providing innovative solutions to enhance application performance and user experience, while maintaining a focus on quality and efficiency throughout the development process.

Roles & Responsibilities:
- Act as an SME.
- Collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of application features.

Professional & Technical Skills:
- Must-have: proficiency in Databricks Unified Data Analytics Platform, Microsoft Azure Data Services, and Microsoft Azure Databricks.
- Good-to-have: experience with data integration tools and ETL processes.
- Strong understanding of cloud computing concepts and architecture.
- Experience in application development using programming languages such as Python or Scala.
- Familiarity with Agile methodologies and project management tools.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
Posted 6 days ago
2.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: PySpark
Good-to-have skills: Python (Programming Language), Scala
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will engage in problem-solving, participate in team meetings, and contribute to the success of projects by delivering high-quality applications that align with business objectives.

Roles & Responsibilities:
- Act as an SME.
- Collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve application performance and user experience.

Professional & Technical Skills:
- Must-have: proficiency in PySpark.
- Good-to-have: experience with Python (Programming Language) and Scala.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data integration and ETL processes.
- Familiarity with cloud platforms and services related to application development.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
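Since PySpark is the must-have skill here, the sketch below illustrates the distributed-computing side of the work with a window function that ranks products by revenue within each region. The input path and column names are hypothetical; outside Databricks the session is created explicitly, as shown.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    # Create the session explicitly when running outside Databricks.
    spark = SparkSession.builder.appName("top-products").getOrCreate()

    sales = spark.read.parquet("/data/silver/sales")  # hypothetical path

    # Rank products by revenue within each region and keep the top three.
    w = Window.partitionBy("region").orderBy(F.desc("revenue"))
    top3 = (
        sales.withColumn("rank", F.row_number().over(w))
             .filter(F.col("rank") <= 3)
    )

    top3.show()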
Posted 6 days ago
3.0 - 8.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to specific business processes and application requirements. A typical day involves collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging to ensure the applications function as intended, while continuously seeking ways to enhance application efficiency and user experience.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate actively in team discussions.
- Contribute solutions to work-related problems.
- Assist in documenting application specifications and user guides.
- Engage in code reviews to ensure adherence to best practices and standards.

Professional & Technical Skills:
- Must-have: proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud-based data solutions and analytics.
- Familiarity with programming languages such as Python or Scala.
- Knowledge of data visualization techniques and tools.

Additional Information:
- The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
Posted 6 days ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
What are we looking for?
- Must have experience with at least one cloud platform (AWS, GCP, or Azure); AWS preferred.
- Must have experience with lakehouse-based systems such as Iceberg, Hudi, or Delta.
- Must have experience with at least one programming language (Python, Scala, or Java) along with SQL.
- Must have experience with Big Data technologies such as Spark, Hadoop, Hive, or other distributed systems.
- Must have experience with data orchestration tools like Airflow.
- Must have experience building reliable and scalable ETL pipelines.
- Good to have experience in data modeling.
- Good to have exposure to building AI-led data applications/services.

Qualifications and Skills:
- 2-6 years of professional experience in a Data Engineering role.
- Knowledge of distributed systems such as Hadoop, Hive, Spark, and Kafka.
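As a rough illustration of the orchestration skills listed above, the sketch below wires two placeholder tasks into a daily Airflow DAG. The DAG id and callables are hypothetical, and the schedule argument assumes a recent Airflow 2.x release (older versions use schedule_interval instead).

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder: pull raw files into the lake.
        pass

    def transform():
        # Placeholder: run the Spark job that builds curated tables.
        pass

    with DAG(
        dag_id="daily_sales_etl",        # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task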
Posted 6 days ago
7.0 - 12.0 years
5 - 9 Lacs
Navi Mumbai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Scala
Good-to-have skills: Java Enterprise Edition, Java Full Stack Development, .Net Full Stack Development
Minimum experience: 2 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements in Mumbai. You will collaborate with teams to ensure successful project delivery and contribute to key decisions.

Roles & Responsibilities:
- Act as an SME.
- Collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the development and implementation of scalable applications.
- Conduct code reviews and provide technical guidance to team members.
- Stay updated with industry trends and technologies to enhance application development.

Professional & Technical Skills:
- Must-have: proficiency in Scala.
- Good-to-have: experience with Java Enterprise Edition.
- Strong understanding of software development principles.
- Hands-on experience building and optimizing applications.
- Knowledge of database management systems.
- Familiarity with agile methodologies.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Scala.
- This position is based at our Mumbai office.
- 15 years of full-time education is required.
Posted 6 days ago